Cogn Neurodyn. 2009 September; 3(3): 205–222.
Published online 2009 June 25. doi:  10.1007/s11571-009-9086-0
PMCID: PMC2727166

Iterated function systems in the hippocampal CA1

Abstract

How does the information of spatiotemporal sequences stemming from the hippocampal CA3 area affect the postsynaptic membrane potentials of hippocampal CA1 neurons? In a recent study, we observed hierarchical clusters in the distribution of membrane potentials of CA1 neurons, arranged according to the history of input sequences (Fukushima et al Cogn Neurodyn 1(4):305–316, 2007). In the present paper, we deal with the dynamical mechanism generating such a hierarchical distribution. The recording data were investigated using return map analysis. We also deal with collective behavior at the population level, using a reconstructed multi-cell recording data set. At both the individual cell and population levels, a return map of the response sequence of CA1 pyramidal cells was well approximated by a set of contractive affine transformations, where the transformations represent self-organized rules by which the input pattern sequences are encoded. These findings provide direct evidence that the information of temporal sequences generated in CA3 can be self-similarly represented in the membrane potentials of CA1 pyramidal cells.

Keywords: Hippocampus, Patch-clamp recording, Cantor coding, Iterated function systems (IFS), History-dependent neural representation

Introduction

Clinical studies (Scoville and Milner 1957; Zola-Morgan et al. 1986) have established that the hippocampus is a necessary organ for the formation of episodic and semantic memories, particularly episodic memory. The hippocampus receives all kinds of sensory information via the entorhinal cortex. One of the main components of the hippocampus, CA3, is considered to function as a network for autoassociative memories via the framework of attractor dynamics, where memories can be stably stored as corresponding neuronal patterns and can be retrieved by partial cues (Marr 1971; McNaughton and Morris 1987; Treves and Rolls 1994). These theoretical predictions were partially verified by experimental studies (Nakazawa et al. 2002; Wills et al. 2005). Moreover, a hypothesis has been proposed that a temporary instability is a key to regeneration of episodic events (Tsuda 2001; Tsuda and Kuroda 2001, 2004). On the other hand, pyramidal cells in CA1 have fewer recurrent connections than those in CA3. What, then, is the functional difference between CA3 and CA1 (Treves 2004)?

Some studies indicate that CA1 appears to be involved more in the processing of temporal information than CA3 (see a review by Kesner et al. 2004). Although the main interests of early studies of the hippocampus of behaving rodents were spatial memory and place-dependent neuronal activities (O’Keefe and Dostrovsky 1971), there has recently been growing interest in their episodic dependencies (Wood et al. 2000; Frank et al. 2000; Ferbinteanu and Shapiro 2003; Takahashi et al. 2009). Several models have been proposed for sequence learning and sequence recall with place cells in CA3 (Wallenstein and Hasselmo 1997; Levy 1996; Yamaguchi 2003), and for the possibility of sequence-dependent firing in CA1 (Hasselmo and Eichenbaum 2005; Yoshida and Hayashi 2007). Moreover, a learning rule depending on spatial and temporal correlation of inputs in CA1 has been proposed based on in vitro studies (Tsukada et al. 1996; Tsukada and Pan 2005; Tsukada et al. 2007). An understanding of how the information of spatiotemporal sequences generated in CA3 is represented in CA1 would provide important insights into the computational role of CA1. We have also proposed a scheme for encoding the temporal sequence of events in CA1, which we refer to as “Cantor coding” (Tsuda and Kuroda 2001; Siegelmann and Sontag 1994), and have discussed its significance for the formation of episodic memory in the hippocampus-cortex system (Tsuda 2001; Tsuda and Kuroda 2004). Cantor coding enables the temporal pattern sequences generated in CA3 to be represented hierarchically in fractal subsets of the state space of CA1. Fukushima et al. (2007) conducted experiments to verify the presence of Cantor coding in rat hippocampal slices and found clusters of membrane potentials corresponding to different temporal sequences of inputs. They also verified the hierarchies of such clusters up to depth two or three of the sequences.
However, this finding does not necessarily imply the fractality of the membrane potentials, namely Cantor coding. Other data analyses are necessary to show more explicitly the presence of Cantor coding. The aim of the present paper is to obtain direct evidence of Cantor coding by performing new analyses of the experimental data. Before describing the data analyses conducted, we briefly explain iterated function systems (IFSs), which play a key role in the analyses.

Iterated function systems

Here we briefly explain the mechanism of Cantor coding by using a simple mathematical model, called iterated function systems (IFSs) (Hutchinson 1981; Barnsley 1988). IFSs provide a deterministic framework for generating self-similar fractal patterns as their attractors, and have been applied to many systems (see, for instance, Pollack 1991; Bressloff and Stark 1992; Kolen 1994; Yamamoto and Gohara 2000; Kaneko 2005).

An IFS is defined as a finite set of contractive transformations on a complete metric space, and is often called a hyperbolic IFS. Here, a transformation f : X → X on a metric space (X, d) is called contractive if there is a constant 0 ≤ s < 1 such that d(f(x), f(y)) ≤ s · d(x, y) for all x, y ∈ X, where d is a metric on the space X. Figure 1a shows an example of a two-dimensional IFS consisting of three contractive transformations on ℝ². The attractor of this IFS is a Cantor set known as the “Sierpinski triangle” (Fig. 1c). The attractor is obtained as follows. Take any closed bounded nonempty subset B of ℝ². First, by each of the three transformations Fi (i = 1, 2, 3), B is contracted and moved to a respective position. Taking the union of these three images gives the image of the first transformation step (Fig. 1a). Repeat this process: the union of the nth images of the three transformations is itself transformed, yielding the (n + 1)st images (Fig. 1b). At each step of this procedure, each component of the images is naturally associated with the sequence of transformations applied to produce it. The attractor A is produced by an infinite repetition of this procedure (Fig. 1c). The elements of the attractor are hierarchically clustered in a self-similar manner according to the similarity of the sequences of applied transformations.
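The construction above can be sketched numerically. The following snippet is a minimal illustration (the triangle vertices are assumed for illustration, not taken from the paper): it approximates the Sierpinski attractor by repeatedly applying randomly chosen contractive maps, each of contraction ratio 0.5.

```python
import random

# Three contractive affine maps F_i(p) = (p + v_i) / 2 toward assumed
# triangle vertices v_i; each map has contraction ratio s = 0.5 < 1.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def apply_map(i, p):
    """Contract point p halfway toward vertex i (one IFS transformation)."""
    vx, vy = VERTICES[i]
    return ((p[0] + vx) / 2.0, (p[1] + vy) / 2.0)

def chaos_game(n_points=10000, seed=0):
    """Approximate the attractor by iterating randomly chosen maps."""
    rng = random.Random(seed)
    p = (0.3, 0.3)
    points = []
    for _ in range(n_points):
        p = apply_map(rng.randrange(3), p)
        points.append(p)
    return points

points = chaos_game()
# All attractor points remain inside the triangle's bounding box.
```

Plotting `points` reproduces the familiar Sierpinski triangle; each point's position encodes the recent sequence of maps applied to it.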

Fig. 1
Iterated function system (IFS) and its attractor. a An example of a two-dimensional IFS {F1, F2, F3}: it consists of three contractive transformations on ℝ². Its attractor is given as A = lim n→∞ Fⁿ(B), where F(B) = F1(B) ∪ F2(B) ∪ F3(B) and B is any closed bounded subset of ℝ². This IFS satisfies a non-overlapping condition: ...

It should be noted that the attractor is composed of three images of itself under these three transformations. This self-referential structure permits the definition of a continuous mapping from the space of sequences of the three transformations onto the attractor, provided an appropriate metric is given on the sequence space. Moreover, if the transformations of the IFS satisfy the non-overlapping condition Fi(A) ∩ Fj(A) = ∅ for i ≠ j (see also Fig. 1a), the mapping becomes a homeomorphism, and the obtained attractor is a Cantor set. Now, each element of the Cantor set is associated with a sequence of applied transformations, and the distance between different sequences of applied transformations can be measured by the Euclidean distance between the two corresponding elements of the Cantor set. Thus, the Cantor set can be regarded as a spatial code table for sequences of applied transformations (Fig. 1c). Here, the history in a sequence of applied transformations is retrospectively represented by the spatial hierarchy of the Cantor set; each cluster of the Cantor set is associated with similar sequences of applied transformations which share a common recent history whose length is equivalent to the depth of the cluster in the spatial hierarchy. Figure 1d shows two examples of one-dimensional IFSs consisting of three contractive affine transformations on the variables x and y, respectively. Here, an affine transformation on ℝ is contractive when the absolute value of its slope is smaller than one. A two-dimensional IFS formed as a pair of them has the same attractor A described above. This example shows that the non-overlapping condition can be satisfied by a two-dimensional IFS formed as a pair, even though the condition cannot be satisfied by either one-dimensional IFS alone.

IFSs are naturally realized as input-driven contractive systems, an example of which is schematically shown in Fig. 2. In this formulation, a transformation, which represents a rule of state transition, is selected by each input, and thus a sequence of inputs generates a sequence of rules. Consequently, the orbit of the system is trapped on the fractal attractor of the underlying IFS. Hereafter, the input-driven contractive systems are also referred to as input-driven IFSs. As the underlying IFS, let us take the two-dimensional IFS described above (Fig. 1a). Then, after the system receives an input sequence, its states are restricted to the Sierpinski triangle (Fig. 2b; see also Fig. 1c). This Cantor set now provides a spatial code table for input sequences, so that the states of the system are hierarchically clustered in a self-similar manner according to the similarity of the input histories. For example, the code [312] representing the cluster enclosed by an open circle in Fig. 2b corresponds to the input patterns 3, 1 and 2, where 3 is the most recent pattern in the input.
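The history-dependence described above can be illustrated with a small sketch (using a three-symbol input alphabet and the same hypothetical Sierpinski maps as before, chosen for illustration): two orbits with entirely different pasts, but an identical recent input history, fall into the same cluster of the code table.

```python
import random

# Each input symbol selects one contractive map toward an assumed vertex.
VERTICES = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.5, 1.0)}

def drive(state, inputs):
    """Input-driven IFS: each input symbol applies one contractive map."""
    for i in inputs:
        vx, vy = VERTICES[i]
        state = ((state[0] + vx) / 2.0, (state[1] + vy) / 2.0)
    return state

rng = random.Random(1)
# Two orbits with different pasts but an identical recent history.
past_a = [rng.choice([1, 2, 3]) for _ in range(20)]
past_b = [rng.choice([1, 2, 3]) for _ in range(20)]
recent = [3, 1, 2]                      # shared recent history, applied last
a = drive(drive((0.2, 0.2), past_a), recent)
b = drive(drive((0.8, 0.8), past_b), recent)
gap = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
# After k = 3 shared inputs the two states lie within 0.5**3 times the
# diameter of the state space, i.e. in the same depth-3 cluster.
```

Each shared input halves the distance between the two states, so a common recent history of length k confines both orbits to the same depth-k subset of the code table regardless of their earlier histories.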

Fig. 2
Input-driven IFS and its attractor. a Schematic diagram of an input-driven IFS. Assume the number of kinds of input to be finite (i = 1, …, N), and that each input i induces an application of a contractive transformation Fi on the state space X. Then, a sequence of inputs { ...

In real circumstances, including the brain, a system is contaminated by various kinds of noise. In input-driven IFSs, however, a perturbation caused by adding noise to the state is rapidly reduced by the contractive dynamics. Thus, even if such a perturbation occurs at every instant of time, the perturbed orbit still follows the original orbit of the noiseless system within a constant distance determined by the contraction ratio. Hence, although the fine structure of the attractor would be disturbed in such a noisy environment, the disturbance remains confined to finer scales. For further discussion of noise effects, see, for instance, Tsuda and Yamaguchi (1998) and Tsuda (2001).
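The bounded effect of noise can be sketched with a one-dimensional contractive map (the slope s = 0.5 and the noise amplitude are arbitrary illustrative choices): a geometric-series argument shows the gap between the noisy and noiseless orbits never exceeds eps / (1 − s).

```python
import random

def step(x, i, s=0.5):
    """One contractive 1-D affine map: x -> s*x + i, with slope s < 1."""
    return s * x + i

rng = random.Random(0)
inputs = [rng.choice([0.0, 1.0, 2.0]) for _ in range(200)]
eps = 0.01                      # noise amplitude added at every step
x_clean, x_noisy = 0.0, 0.0
max_gap = 0.0
for i in inputs:
    x_clean = step(x_clean, i)
    x_noisy = step(x_noisy, i) + rng.uniform(-eps, eps)
    max_gap = max(max_gap, abs(x_noisy - x_clean))
# Since gap' <= s * gap + eps, the gap is bounded by eps / (1 - s) = 0.02:
# noise blurs the attractor only below that scale.
```

The same bound explains why the hierarchical clusters survive in recordings: contraction continually erases the accumulated effect of past perturbations.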

A dynamical mechanism of Cantor coding

We have previously proposed the Cantor coding hypothesis for the hippocampal CA1 area through studies of hippocampus models. First, we constructed a skeleton model of CA3 and CA1 with unidirectional excitatory couplings from CA3 to CA1 and disinhibitory couplings from the septum to both CA3 and CA1 (Tsuda and Kuroda 2001). Using this model, we demonstrated Cantor coding in the state space of the CA1 network consisting of neuronal patterns. Second, we constructed a CA1 network model consisting of two-compartment model neurons, and investigated the possibility of Cantor coding when the network receives spatiotemporal input sequences (Yamaguti et al. 2009). We also observed Cantor coding in the space of firing rates and clarified the biological conditions for Cantor coding.

Using the concepts of IFS described above, our Cantor coding hypothesis for CA1 can be stated as follows. When CA1 receives a temporal input sequence consisting of a finite set of input patterns, contractive response-rules corresponding to the input patterns are self-organized. This is an emergent property of the network. The self-organized rule generates a Cantor set in the space of membrane potentials, where the input histories are encoded.

Materials and methods

How can we verify the hypothesis of Cantor coding in CA1? In Fukushima et al. (2007), we conducted experiments to clarify how the information of a spatiotemporal sequence from the hippocampal CA3 area affects the postsynaptic membrane potentials of individual hippocampal CA1 pyramidal cells. Sequential electrical stimulations consisting of four distinct spatial patterns were randomly applied to the Schaffer collaterals at gamma frequency, and the postsynaptic membrane potentials were recorded using patch-clamp techniques. We observed that the distributions of the membrane potentials were hierarchically clustered according to the histories of input sequences up to depth two or three. However, the finding of such hierarchical clusters is still only indirect evidence for the presence of Cantor sets, because these sets are essentially infinite objects, whereas observed sets are finite. Direct evidence of Cantor sets may be obtained by showing the existence of emergent rules such as an IFS. This is what we wanted to establish in the present paper. From this point of view, we examined the experimental data in the present study and adopted a different method of data analysis. The electrophysiological method used here was similar to that used in the previous study (Fukushima et al. 2007), except that the stimulus protocol was replaced by a new one.

Procedure of the experiment

The general surgical procedures, electrode preparation and electrical stimulation method employed in this study are described in detail in our previous paper (Fukushima et al. 2007). All experimental protocols were approved by Tamagawa University Animal Care and Use Committee.

Preparations and patch-clamp recordings

Whole cell patch-clamp recordings were conducted in the soma of pyramidal cells in the rat hippocampal CA1 area (Fig. 3a). See Appendix 1 for the detailed explanations.

Fig. 3
Experimental procedure and sample traces. a Schematic diagram and IR-DIC image of hippocampal slices. Two independent extracellular electrodes were set onto the Schaffer collaterals, at sites proximal and distal to the soma, respectively. For each cell, ...

Brain slices of six Wistar rats (4–5 weeks old) were prepared according to the standard procedure reported by Tsukada et al. (2005). Each slice was placed in the recording chamber and neurons were visualized with an infrared differential interference contrast (IR-DIC) camera. The internal solution of the recording electrode was prepared following the method of Magee (2001). The calcium chelator EGTA was used at a high concentration to block long-term potentiation. Whole cell patch-clamp recordings were obtained from the soma of a pyramidal cell in CA1 using an electrical amplifier. Signals low-pass filtered at 10 kHz were digitally sampled at 25 kHz and stored on a computer hard disk.

Spatial stimuli

In order to generate spatiotemporal inputs to pyramidal cells in CA1, two stimulus electrodes were placed on the Schaffer collaterals, at sites proximal and distal to the soma, respectively (Fig. 3a). The duration of the stimulus current pulses was 200 μs, and their strength was adjusted so that the peak amplitude of the evoked EPSP was 1–13 mV below the firing threshold. The two stimulus electrodes were set so that their stimulating pathways did not overlap, which was confirmed using paired-pulse facilitation (50 ms interval). Four spatial input patterns of electrical stimulation were prepared and symbolized from 1 to 4 as follows: no electrical stimulation (“1”), electrical stimulation through the electrode placed at the distal site (“2”), at the proximal site (“3”), and at both sites (“4”).

Here, we should remark on the case of no electrical stimulation, pattern 1, which was also regarded as a spatial input pattern. In this experiment, we prepared the spatial input patterns as imitations of actual spatial input from CA3 to CA1, although these were very restricted ones due to experimental constraints. Actual inputs could include the case of no input to the observed cells. For this reason we included pattern 1 as a spatial input from CA3 to CA1.

Stimulus protocol

The following new stimulus protocol was adopted. For each cell, a recording session consisted of 122 stimulus periods with intervening rest periods of 10 s (Fig. 3a). In each stimulus period, ten successive inputs were applied at 30 ms intervals. The input interval of 30 ms was chosen in consideration of the gamma frequency band observed in this area (Csicsvari et al. 2003). Each input pattern was randomly selected from among the four spatial input patterns. The random input sequence was fixed throughout all recording sessions. Compared with the stimulus protocol adopted in the previous experiment (Fukushima et al. 2007), the protocol adopted in this study enabled us to obtain many stable responses and to apply stronger stimulation.

Data analysis procedure

In this study, we recorded from eleven cells in six slices. One cell was excluded from analysis because it exceeded a criterion for variation of resting potential during the recording session. The ten other cells were classified into two groups, sub-threshold and supra-threshold, according to whether or not the continual stimulations induced spikes. The sub-threshold group consisted of five cells in four slices (cell 1, …, cell 5) whose membrane potentials stayed below the firing threshold throughout their recording sessions. The supra-threshold group consisted of five cells in three slices (cell 6, …, cell 10) in which spikes were elicited in each stimulus period. For each stimulus period, the baseline membrane potential was determined as the mean amplitude during the 2 s before the stimulus period (−70.1 ± 3.0 mV (mean ± SEM), n = 10; the average standard deviation for each cell was 1.1 mV (n = 122)). Hereafter, we express membrane potential as the difference between the measured voltage and the baseline membrane potential of each stimulus period.

Definition of response of an individual neuron

Several choices are possible for the “response” x(n) to the nth input. Let Δt be the time interval from a stimulation to the measurement of the response. In this study, we estimated the following quantities: Vlast, VΔt and Vave. The response at Δt to the nth input was defined as the membrane potential at a fixed elapsed time Δt after the input, denoted by VΔt(n) (Fig. 3b). In particular, Vlast(n) denotes the value at Δt = 28 ms, taken as the timing just before the next input. We also considered the membrane potential VΔt(n) averaged over Δt = 4–30 ms, denoted by Vave(n). In the calculation of this quantity, spikes were removed from the time course of the membrane potential. Responses for analysis were gathered from all stimulus periods for each cell using the same procedure.
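As an illustration of these definitions, the sketch below extracts the three response measures from a sampled trace, assuming the 25 kHz sampling rate stated above; the exponential EPSP used as input is synthetic, not experimental data, and the helper names are ours.

```python
import math

FS = 25000.0                     # samples per second (stated sampling rate)

def v_at(trace, t_input, dt_ms):
    """V_dt: membrane potential at a fixed delay dt_ms after the input."""
    return trace[int(round((t_input + dt_ms / 1000.0) * FS))]

def v_ave(trace, t_input, t0_ms=4.0, t1_ms=30.0):
    """V_ave: mean potential over 4-30 ms after the input
    (spike removal is assumed to have happened upstream)."""
    a = int(round((t_input + t0_ms / 1000.0) * FS))
    b = int(round((t_input + t1_ms / 1000.0) * FS))
    window = trace[a:b]
    return sum(window) / len(window)

# Synthetic trace: a 5 mV EPSP decaying with a 10 ms time constant.
trace = [5.0 * math.exp(-t / (0.010 * FS)) for t in range(int(0.040 * FS))]
v_last = v_at(trace, 0.0, 28.0)  # V_last: value just before the next input
```

With the 30 ms input interval of the protocol, `v_at(trace, t, 28.0)` reads the potential 2 ms before the next input, matching the definition of Vlast.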

Return map analysis of time series of responses

Return map analysis was used to examine the dynamics underlying the generation of responses to a spatiotemporal input sequence. For a response sequence {x(n)}n, a return map was generated by plotting each response x(n) against the previous response, x(n − 1).
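This construction can be sketched as follows (the response sequence here is synthetic, generated by hypothetical per-pattern affine rules rather than recorded data): grouping the pairs (x(n − 1), x(n)) by the input pattern that produced x(n) yields one "branch" per pattern.

```python
import random

def return_map(responses, inputs):
    """Pair each response with its predecessor, keyed by the input
    pattern that produced it; each key collects one branch."""
    branches = {}
    for n in range(1, len(responses)):
        branches.setdefault(inputs[n], []).append(
            (responses[n - 1], responses[n]))
    return branches

rng = random.Random(2)
OFFSET = {1: 0.0, 2: 1.0, 3: 2.0, 4: 3.0}   # hypothetical per-pattern offsets
inputs = [rng.choice([1, 2, 3, 4]) for _ in range(300)]
responses = [0.0]
for n in range(1, 300):
    responses.append(0.4 * responses[n - 1] + OFFSET[inputs[n]])

branches = return_map(responses, inputs)
# Each branch lies on a line of slope 0.4 < 1: the contractive
# transition rule associated with that input pattern.
```

In the real data the points scatter around such lines rather than lying on them exactly; the line-fitting described below quantifies the slopes.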

Conditional distribution of responses for an input sequence

Responses were associated not only with (spatial) input patterns, but also with their temporal sequences. We denote an input pattern sequence of length k by i1···ik, from the most recent input i1 to the kth most recent input ik. The notation [i1···ik] is used for the conditional distribution of responses to the input pattern sequence i1···ik of length k, that is, [i1···ik] = {x(n) | S(n) = i1, S(n − 1) = i2, …, S(n − k + 1) = ik}, where S(n) denotes the nth input pattern.
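This conditioning can be sketched as follows (toy data; the helper name is ours, not from the paper):

```python
def conditional_distribution(responses, inputs, history):
    """[i1...ik]: responses whose k most recent inputs match `history`,
    where history[0] is the most recent input i1."""
    k = len(history)
    return [responses[n] for n in range(k - 1, len(responses))
            if all(inputs[n - j] == history[j] for j in range(k))]

# Toy sequence: S(n) cycles 1, 2, 3, ... with made-up responses.
inputs = [1, 2, 3, 1, 2, 3, 1, 2]
responses = [10, 20, 30, 11, 21, 31, 12, 22]
d = conditional_distribution(responses, inputs, [2, 1])  # input 2 preceded by 1
```

Here `d` collects every response to input 2 whose immediately preceding input was 1, i.e. the conditional distribution [21].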

Construction of the virtual population response

Virtual multi-cell recording data were reconstructed from the five observed single-cell recording data sets of the sub-threshold group. Because the same random input sequence was used throughout all of the recording sessions, the population response could be constructed by expressing a population state as a population vector x(n) = (x1(n), …, x5(n)), using the five single-cell recording data sets obtained from four different slices. As in the case of individual-cell recordings, we also studied their return maps, conditional distributions and the spatial clustering indices described below.

Statistical analysis

The correlation coefficient was calculated by Pearson’s method. To evaluate the significance of differences among conditional distributions of a variable, such as the response of an individual cell or the peak amplitude of EPSPs, where the distributions were conditioned by input patterns, the Kruskal-Wallis test with the Bonferroni-Dunn (BD) multiple comparison test (Kruskal and Wallis 1952; Dunn 1964) was used. To evaluate whether the conditional distributions [i1···ik] had the same order as the corresponding distributions [ik] when an individual cell responds to input pattern sequences of length k (≥2) sharing a common recent history of length (k − 1), the Jonckheere-Terpstra (JT) test with the BD test (Jonckheere 1954; Terpstra 1952) was used. See Appendix 2 for detailed explanations. The JT test with the BD test was also used to evaluate whether conditional distributions of a fitting parameter (explained in “Simulated time course of somatic membrane potential using the leaky-integrator model with synaptic inputs”) had the same order as those of the response. p values of less than 0.05 were considered significant.

Spatial clustering index

To quantify the degree of coding error for input pattern sequences, the spatial clustering index was introduced (Fukushima et al. 2007). The index was calculated for each input pattern sequence of k length by conditioning on input pattern sequences of (k − 1) length. Thus, at each length of input pattern sequence, the spatial clustering index indicates the degree of code overlap.

First, we provide a criterion for judging whether a responsive membrane potential correctly represents the sequence of inputs, using a simple classifier, as follows. Let z be a specific response of an individual cell or of the population to one input pattern sequence i1···ik of length k. The response z is correctly decoded up to depth k if the distance between z and [i1···ik] \ {z}, the distribution obtained by deleting z from the conditional distribution [i1···ik], is smaller than the distances between z and the three other conditional distributions [i1···ik−1i′k] (i′k ≠ ik) sharing a common recent history of length (k − 1). Second, for the responses to the four input pattern sequences of length k sharing a common recent history i1···ik−1 of length (k − 1), the number of decoding failures is counted; the corresponding error rate is denoted by ek(i1···ik−1) for k ≥ 2 and by e1 for k = 1.

Finally, the spatial clustering index at depth k, denoted by C(k), is defined as the average of the error rates over common recent histories of length (k − 1), compensated for chance level: C(k) = ⟨ek(i1···ik−1)⟩/0.75 for k ≥ 2, where the average is taken over the common recent histories i1···ik−1, and C(1) = e1/0.75, where the denominator 0.75 is the compensation factor for chance level. The value C(k) ranges from 0 to 4/3, where 0 denotes the perfect level and 1 denotes the chance level. In the present study, the clustering index was calculated up to depth three. For the distance between a response and a distribution, we used the Mahalanobis distance between the response z and the mean μ of the distribution, defined by D(z) = √((z − μ)ᵀ Σ⁻¹ (z − μ)), where Σ is the covariance matrix of the distribution, which reduces to the variance if the response is from an individual cell. Another choice of distance is possible, for example the Euclidean distance between the response and the mean of the distribution, but our results did not depend qualitatively on the choice made.
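A sketch of the depth-one index for scalar responses, following the leave-one-out classifier described above (the toy distributions are illustrative only, and the function names are ours):

```python
import statistics

def mahalanobis(z, sample):
    """Scalar Mahalanobis distance: |z - mean| / std of the sample."""
    return abs(z - statistics.mean(sample)) / statistics.pstdev(sample)

def clustering_index_depth1(dists):
    """C(1): leave-one-out decoding error rate divided by the chance
    factor 0.75; `dists` maps each input pattern to its responses.
    0 is the perfect level, 1 the chance level."""
    errors, total = 0, 0
    for i, sample in dists.items():
        for idx, z in enumerate(sample):
            own = sample[:idx] + sample[idx + 1:]     # delete z from [i]
            d_own = mahalanobis(z, own)
            d_others = [mahalanobis(z, other)
                        for j, other in dists.items() if j != i]
            if any(d <= d_own for d in d_others):     # decoding failure
                errors += 1
            total += 1
    return (errors / total) / 0.75

dists = {1: [0.0, 0.1, 0.2], 2: [5.0, 5.1, 5.2],
         3: [10.0, 10.1, 10.2], 4: [15.0, 15.1, 15.2]}
c1 = clustering_index_depth1(dists)    # well separated -> index 0
```

For deeper indices, the same classifier is applied within each family of sequences sharing a common recent history, and the resulting error rates are averaged.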

Results

When the cells were in the resting state, the peak amplitudes of the evoked EPSPs were largest for input pattern 4, followed by 3 and then 2, in 4/5 cells of the sub-threshold group and 5/5 cells of the supra-threshold group (p < 0.05; no significant difference between 4 and 3 in two cells of the supra-threshold group). In the remaining cell of the sub-threshold group, patterns 4 and 2 were largest, followed by 3 (p < 0.05; no significant difference between 4 and 2).

The peak amplitudes were 7.3 ± 1.6, 5.8 ± 1.2 and 2.2 ± 1.0 mV (mean ± SEM, n = 5) in the sub-threshold group, and 11.0 ± 3.5, 9.5 ± 2.5 and 3.4 ± 2.0 mV (n = 5) in the supra-threshold group, respectively. The correlation coefficient between the two kinds of responses Vlast and Vave was very large (0.98 ± 0.01 (mean ± SD), n = 10). Therefore, we restrict the analyses below to Vlast and VΔt.

Evidence of IFS at individual cell level

Figure 4a and b show the return maps of the response sequence {Vlast} of cells in the sub-threshold group. The graph of the return map for each cell consists of four “branches” corresponding to the four different input patterns: {(Vlast(n), Vlast(n + 1)) | S(n + 1) = i} (i = 1, …, 4). These branches represent the self-organized transition rules of responses according to input patterns.

Fig. 4
Return maps of the response sequence {Vlast} of each cell in the sub-threshold group. a A return map of the response sequence {Vlast} of cell 1. The return map consists of four parts, called branches, corresponding to the four (n + 1)st input ...

The order of the medians of the conditional distributions of responses to the four input patterns was [1] < [2] < [3] < [4] (4/5 cells) or [1] < [3] < [2] = [4] (1/5 cells) (p < 0.05). The lower two branches have large correlation coefficients (0.83 ± 0.07 (mean ± SD), n = 10) between successive responses Vlast(n) and Vlast(n + 1), which indicates a linear functional dependence between successive responses in these branches. Although the correlation coefficients of the upper two branches were not as high (0.55 ± 0.10 (mean ± SD), n = 10), it is likely that their “major” parts have a linear functional relationship. We will investigate this issue later in the paper from another viewpoint (“Simulated time course of somatic membrane potential using the leaky-integrator model with synaptic inputs”). Line-fitting for each branch was performed using major axis regression (Fig. 4d). All slopes of the fitting lines for the branches were smaller than 1 (0.36 ± 0.10 (mean ± SD), n = 20), indicating the presence of contractive affine transformations.
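Major axis regression minimizes perpendicular rather than vertical distances, which is appropriate here because both coordinates of a return-map point are noisy responses. A self-contained sketch of the slope computation (via the leading eigenvector of the 2 × 2 covariance matrix; the data are illustrative):

```python
import math

def major_axis_slope(xs, ys):
    """Slope of the major (principal) axis of a 2-D scatter: the line
    minimizing perpendicular distances, unlike ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Slope of the leading eigenvector of [[sxx, sxy], [sxy, syy]].
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.4 * x + 1.0 for x in xs]     # a branch with slope 0.4 < 1
slope = major_axis_slope(xs, ys)     # recovers 0.4: a contractive rule
```

A fitted slope with absolute value below one is the signature of a contractive affine transformation for that branch.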

We also investigated the supra-threshold group of cells, which emitted spikes in each stimulus period. The number of spikes in an input interval was less than three, and in most cases only a single spike was observed. The number of spikes in a stimulus period was 2.0 ± 0.7 (mean ± SEM, n = 5), where the average standard deviation for each cell was 1.6 (n = 122). Spikes appeared after inputs 3 and 4 in all cells, except for cell 10, in which spikes were also observed after input 2. Despite the presence of these spikes, similar results for the return map were obtained in this group, although the orders of the medians of the conditional distributions of responses showed less variety than in the sub-threshold group: [1] < [2] < [3] < [4] (3/5 cells) and [1] < [2] < [3] = [4] (2/5 cells) (p < 0.05). Figure 5a and b show the return maps of the response sequence {Vlast} of cells in the supra-threshold group. The correlation coefficients of the lower two branches and the upper two branches were 0.89 ± 0.06 (n = 10) and 0.56 ± 0.15 (n = 10), respectively. Again, all slopes of the fitting lines for the branches were smaller than 1 (0.35 ± 0.15 (n = 20)). This indicates the presence of contractive affine transformations in the supra-threshold group as well.

Fig. 5
Return maps of response sequence {Vlast} of each cell in the supra-threshold group. a A return map of response sequence {Vlast} of cell 6. The four branches are separately shown in the four right-hand side panels, where the points (Vlast(n), Vlast(n + 1)) ...

It should be noted that for {Vave}, the correlation coefficients of their branches were larger in both the sub- and supra-threshold groups (0.89 ± 0.04 (lower two branches (n = 20)); 0.70 ± 0.12 (upper two branches (n = 20))).

The fact that a major part of each branch is approximated by a contractive affine transformation enables us to conclude that the conditional distributions of responses to input sequences are hierarchically clustered in a self-similar manner according to the similarity of the input pattern sequences. Indeed, as shown in Fig. 6, the conditional distributions of Vlast were hierarchically clustered in a self-similar manner up to depth two or three of the input pattern sequences, although in some cases the hierarchical clustering at depth three became unclear. These results are consistent with those reported in Fukushima et al. (2007).

Fig. 6
An example of conditional distributions of response Vlast. The case of cell 1. a Histograms of conditional distribution [i] of response {Vlast} to each input pattern i. The dot on each abscissa represents the median of the distribution. b Histograms of ...

To measure the performance of the coding of input pattern sequences, the spatial clustering index was calculated. Figure 7 shows how the indices change with Δt up to depth three for the responses {VΔt}, where Δt is the elapsed time from the most recent input (4 ≤ Δt ≤ 28 ms). At depth one, the index decreased as the elapsed time Δt increased and reached a minimum value at Δt* (19.6 ± 6.4 ms (mean ± SD), n = 10). At depth two, the index monotonically increased as Δt increased, yet remained far from the chance level. At depth three, the index varied widely across input pattern sequences and reached the chance level for some of them.

Fig. 7
The coding performance of input pattern sequences at the individual cell level. At each elapsed time Δt (4 ≤ Δt ≤ 28 ms), spatial clustering indices of the response VΔt up to depth three ...

Evidence of IFS at population level

At the individual cell level, the coding of input pattern sequences succeeds, but its performance is still somewhat poor. In particular, the history-coding at depth three of input sequences was still vague. However, because many neurons are present in CA1, it is natural to consider the response at the population level, which can be described in a high-dimensional state space. In “Iterated function systems”, we described one two-dimensional IFS and two one-dimensional IFSs, the latter viewed as a two-dimensional IFS when taken as a pair (Fig. 1a and d). The two-dimensional IFS has a Cantor set as its attractor. When this type of IFS is realized as an input-driven IFS, input histories are completely encoded by the subsets of a Cantor set in a self-similar manner. In other words, its performance for sequence coding is perfect. On the other hand, two one-dimensional IFSs in general achieve only low performance for sequence coding individually, because each IFS may not satisfy the non-overlapping condition. However, a pair of them can achieve high performance as a two-dimensional IFS, because they can play mutually complementary roles in distinguishing inputs, for instance by distinguishing between inputs 1 and 3, and between input 2 and the rest. Thus, the input sequence can be completely encoded through their cooperation, even if each IFS individually produces an incomplete code.

The return maps of individual cells in Fig. 4 show such a possibility. Motivated by these considerations, we estimated the population behavior using five single-cell recording data sets in the sub-threshold group.

Figure 8 shows projections of the return maps of the population response {x(n)}, where the projections are onto the 1st and 2nd principal components (PCs) of the distribution of the population response. As at the individual cell level, each figure consists of four branches of return maps corresponding to the four different input patterns. All branches are well approximated by contractive affine transformations. The cumulative contribution ratio up to the 2nd PC was 93%, and the return map for the 3rd PC did not have a clear structure (data not shown). Thus, the transition rules of the population responses according to the input patterns are represented by the pair of return maps for the 1st and 2nd PCs.
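The projection onto principal components can be sketched with a simple power iteration for the leading PC (a toy three-cell population with made-up data stands in for the recorded five-cell data):

```python
import random

def principal_component(data, n_iter=200):
    """Leading principal component of mean-centered data, computed by
    power iteration on the (implicit) covariance matrix."""
    dim = len(data[0])
    mean = [sum(row[d] for row in data) / len(data) for d in range(dim)]
    centered = [[row[d] - mean[d] for d in range(dim)] for row in data]
    v = [1.0] * dim
    for _ in range(n_iter):
        # w = C v, where C is the (unnormalized) covariance matrix.
        proj = [sum(r[d] * v[d] for d in range(dim)) for r in centered]
        w = [sum(p * r[d] for p, r in zip(proj, centered)) for d in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical 3-cell "population": cell 0 carries most of the variance.
rng = random.Random(3)
data = []
for _ in range(200):
    t = rng.uniform(-1.0, 1.0)
    data.append([5.0 * t, 0.5 * t + 0.01 * rng.random(), 0.1 * rng.random()])
pc1 = principal_component(data)
# pc1 points (up to sign) along the direction carrying the variance.
```

Projecting each population state onto the first two such components gives the two-dimensional return maps shown in Fig. 8; a second component can be obtained by deflating the data and iterating again.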

Fig. 8
Return map of the response sequence at population level. The return map of the population response sequence in the sub-threshold group was projected onto the 1st and the 2nd principal components (PCs). Superimposed on the branches are the fitting lines using ...
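The projection onto principal components described above can be sketched as follows; the population response matrix here is synthetic stand-in data, not the recordings, and the variable names are illustrative:

```python
import numpy as np

# Hypothetical population response matrix: rows = time steps n, columns =
# the five cells' responses V_last(n).  Random correlated data stands in
# for the recordings.
rng = np.random.default_rng(1)
V = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))

# PCA via SVD of the centered data; rows of Wt are principal components.
Vc = V - V.mean(axis=0)
U, s, Wt = np.linalg.svd(Vc, full_matrices=False)
scores = Vc @ Wt.T                 # projections onto all PCs
explained = s**2 / np.sum(s**2)    # contribution ratio of each PC

# A return map on the k-th PC pairs successive projected responses,
# here for the 1st PC: points (score(n), score(n+1)).
pc1_return_map = np.column_stack([scores[:-1, 0], scores[1:, 0]])
```

With the recorded data, the cumulative contribution ratio `explained[:2].sum()` would correspond to the 93% quoted in the text.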

Figure 9 shows the conditional distributions of the population response. They exhibit hierarchical clustering in a self-similar manner, according to the similarity of the input pattern sequences. The degree of overlapping at the population level is much lower than that at the individual cell level.

Fig. 9
Conditional distribution of the population response. The population response in the sub-threshold group was projected onto the two-dimensional subspace spanned by the 1st and the 2nd principal components of the distribution. The colors of points indicate the kinds ...

In Fig. 10, we estimate the difference in the spatial clustering index between the population level and the individual cell level. Coding performance is remarkably improved at the population level. Both the dependence of the spatial clustering index on the depth of the input sequence and its dependence on the elapsed time are much improved at the population level.

Fig. 10
Comparison of the coding performance at individual cell and population levels. The upper white curves indicate the spatial clustering indices of the response VΔt averaged over the cells in the sub-threshold group. The shaded region shows the standard ...

Simulated time course of somatic membrane potential using the leaky-integrator model with synaptic inputs

The use of the return map adopted in the present paper provides a new method for data analysis. If the membrane potentials of neurons were observed continuously as responses to input pattern sequences, neither Cantor sets nor affine transformations could be observed directly. The point is that the data were sampled once for each pattern element of the input sequence; in this respect, the present data analysis is a discrete one. By contrast, in this section we interpret the results of the return map analysis in terms of a continuous time evolution of somatic membrane potentials under input of temporal sequences of spatial patterns. For this purpose, we derived a time course, discretized within each input interval, using the leaky-integrator model under sub-threshold conditions. By examining the distribution of these time courses, we characterize the fluctuations in the responses, especially the fluctuation within each branch of the return maps. The main aim of this section is to provide, from another viewpoint, further justification of the approximation of the return maps by the set of contractive affine transformations obtained in “Evidence of IFS at individual cell level”. Here, we briefly describe the leaky-integrator model with synaptic inputs; see Appendix 3 for the detailed derivation.

The time course of the somatic membrane potential V(Δt) at elapsed time Δt between subsequent inputs is given by the following equation:

V(Δt) = V(0) exp(−Δt/τm) + A(Δt; τm, τs, qr, δ)
1

where V(0) is the somatic membrane potential at Δt = 0, τm is the decay time constant with which the somatic membrane potential leaks, and the second term A, which includes four parameters, represents the effect of the inputs on the somatic membrane potential at Δt; the parameters qr, τs and δ are the amplitude, the rise/decay time constant and the transmission delay, respectively. We fitted Eq. 1 to each data set of responses {VΔt(n) ; 0 ≤ Δt ≤ 30} (S(n) = 2, 3 and 4) of the cells in the sub-threshold group. Thus, through Eq. 1, a time course from the nth input S(n) to the subsequent input S(n + 1) can be specified by a set of four fitting parameters (τm, τs, qr, δ) = (τm, τs, qr, δ)(n) and an initial membrane potential V(0) = V0(n). The shape of the time course, in particular, is specified by the values of the four parameters. The terms exp(−Δt/τm) and A(Δt; τm, τs, qr, δ) in Eq. 1 are here referred to as the intrinsic slope and the intrinsic intercept of the time course at Δt, respectively. In this formulation, {(V(0), V(30))} for the time courses can be regarded as points in a branch of the return map for the response sequence {Vlast}.
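The decomposition into an intrinsic slope and an intrinsic intercept makes the contractive, affine character of Eq. 1 explicit; a minimal sketch with illustrative (not fitted) parameter values:

```python
import math

# Eq. 1 is affine in V(0): V(30) = exp(-30/tau_m) * V(0) + A(30; ...).
# The intrinsic intercept A does not depend on V(0), so, treating it as a
# number, the map V(0) -> V(30) is a contraction for any tau_m > 0.
tau_m = 25.0          # ms, illustrative decay constant (not a fitted value)
A30 = 3.5             # mV, illustrative intrinsic intercept at dt = 30 ms

slope = math.exp(-30.0 / tau_m)   # intrinsic slope, always in (0, 1)
assert 0.0 < slope < 1.0

def v30(v0):
    """V(30) as an affine function of the initial potential V(0)."""
    return slope * v0 + A30

# Two different initial potentials are driven closer together:
assert abs(v30(-5.0) - v30(5.0)) < abs(-5.0 - 5.0)
```

This is exactly why each branch of the return map can be fitted by a contractive affine transformation: the slope of the branch is the intrinsic slope, which is strictly less than one.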

For all five observed cells in the sub-threshold group and for the three different input patterns, most of the time courses passed the fitting criterion (86.5 ± 6.8% (mean ± SD), n = 15). The resultant fitting parameters τm, τs, qr and δ were distributed over a wide range, and the distributions depended on the cells and the input patterns. First, we describe the features common to the observed cells and the different input patterns. Each parameter obeys a skewed, single-peaked distribution, as shown in Fig. 11a, although in some cases there was a second small peak. The initial membrane potential V(0) had a small correlation coefficient with each parameter (Fig. 11c), and no notable relationship between V(0) and the parameters was found in scatter diagrams (data not shown). The existence of large peaks in the distributions and the lack of dependency of the parameters on V(0) imply that, for each input pattern in each cell, there is a V(0)-independent shape of the time course determined by typical values of the four fitting parameters (τm, τs, qr, δ). Moreover, the correlation coefficients between the parameters are consistent in their signs, + or − (Fig. 11b). Hence, the distributions of the four fitting parameters can be characterized as distributions of time-course shapes around typical shapes of two kinds: (i) after some delay, the amplitude rapidly grows and gradually decays, and (ii) without a long pause, the amplitude gradually grows and rapidly decays.

Fig. 11
The distributions of the values of fitting parameters, and the dependency between fitting parameters and also between fitting parameter and initial membrane potential V(0). a The left column shows histograms of τm, τs and δ for ...

Furthermore, we discuss how the typical time course and its neighbors appear in (V(0), V(30))-space, i.e. in a branch of the return map for the response sequence {Vlast}, for each input pattern in each cell (Fig. 12). We determined a typical V(0)-independent time course using the four-tuple of medians of the fitting parameters, p*, together with the set of time courses in its neighborhood, nbd(p*) = {(V(0), τm, τs, qr, δ); (1/τm, 1/τs, qr, δ) ∈ E(p*)}, i.e. the time courses whose four fitting parameters lie in a Mahalanobis ellipsoid E(p*) centered at the median in (1/τm, 1/τs, qr, δ)-space (Fig. 12a). The typical V(0)-independent time course corresponds to a line in (V(0), V(30))-space obtained by substituting p* into Eq. 1. The line runs through the central part of the branch, and the points in nbd(p*) are distributed in a narrow band around the line, whose width is independent of V(0) (Fig. 12b). These results support the conclusion that the approximation of the branches by contractive affine transformations obtained in the previous section correctly captures the functional relationships between successive responses in the major part of each branch.

Fig. 12
Correspondence between the distribution of fitting parameters and the distribution of membrane potentials in (V(0), V(30)). Typical examples are shown. a Scatter diagrams of (1/τm, qr) and (1/τs, δ) for input pattern 4 in cell ...
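The neighborhood nbd(p*) can be computed with a standard Mahalanobis-distance criterion; the sketch below uses synthetic parameter samples, and the ellipsoid radius is an illustrative choice:

```python
import numpy as np

# Selecting fits inside a Mahalanobis ellipsoid around the median parameter
# vector p*.  Random data stands in for the fitted (1/tau_m, 1/tau_s, q_r,
# delta) tuples of one cell and one input pattern.
rng = np.random.default_rng(2)
params = rng.normal(size=(500, 4))   # rows: (1/tau_m, 1/tau_s, q_r, delta)

center = np.median(params, axis=0)   # p*: component-wise medians
cov = np.cov(params, rowvar=False)
cov_inv = np.linalg.inv(cov)

# Squared Mahalanobis distance of every sample from p*.
d = params - center
maha2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)

# E(p*): the ellipsoid of Mahalanobis radius 2 (an illustrative threshold).
inside = params[maha2 <= 2.0**2]
```

Each row of `inside` then corresponds to a time course in nbd(p*), whose image in (V(0), V(30))-space forms the narrow band around the typical line.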

The intrinsic slope and the intrinsic intercept at Δt = 30 ms, that is, exp(−30/τm) and A(30; τm, τs, qr, δ), are distributed with a large peak, similarly to the fitting parameters but in a more symmetrical way (Fig. 13a). As with the four fitting parameters, the initial membrane potential V(0) had small correlation coefficients with both the intrinsic slope and the intrinsic intercept (0.10 ± 0.25 and 0.04 ± 0.28 (mean ± SD, n = 15), respectively). The intrinsic intercept correlated strongly with qr for each cell. The medians of both qr and the intrinsic intercept for the three input patterns followed the same order as those of the response Vlast, and this order was strict (p < 0.05; an example is shown in the right-hand panel of Fig. 11a). The correlation coefficients between the intrinsic slope and the intrinsic intercept range from negative to positive values, depending on the cells, but are almost independent of the input patterns (Fig. 13b).

Fig. 13
The distribution of intrinsic slope and intrinsic intercept and their relation to one another. A typical example is shown. a Scatter diagram and histograms of intrinsic slope and intrinsic intercept for input pattern 4 in cell 5. The red filled circle ...

Statistical model

Based on these analyses of the experimental data, we propose a statistical model for the response sequence of pyramidal cells in the CA1 network to spatiotemporal inputs delivered at intervals assumed to be synchronized with typical gamma waves. The model is given as follows:

V(n) = ai(n)V(n − 1) + bi(n)
2

where (ai(n), bi(n)) is a sample drawn randomly, for each kind of input pattern i, from a two-dimensional normal distribution N(μi, Σi) such that P({0 < ai < 1}) = 1. Here ai and bi are not necessarily independent of each other. The degree of fluctuation around the mean line V′ = E[ai]V + E[bi] depends not only on the variances Var(ai) and Var(bi), but also on the covariance Cov(ai, bi). For a fixed V = V(n − 1), the variance of V′ = V(n) is V²Var(ai) + 2V Cov(ai, bi) + Var(bi). Thus, if Cov(ai, bi) is negative and V becomes larger in the positive region, then the variance of V′ becomes smaller than in the case where ai and bi are independent of each other. We see this effect in the upper branches of cell 1, which have larger amplitudes than those of the other cells (Fig. 4a).
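A minimal simulation of the statistical model of Eq. 2, assuming illustrative means and a negative covariance between ai and bi (none of these values are fitted to the data):

```python
import numpy as np

# Sketch of Eq. 2: V(n) = a_i V(n-1) + b_i, with (a_i, b_i) drawn from a
# two-dimensional normal distribution for each input pattern i.
rng = np.random.default_rng(3)

mean = {i: (0.4, 2.0 * i) for i in (1, 2, 3, 4)}   # (E[a_i], E[b_i])
cov = np.array([[0.01, -0.02],
                [-0.02, 0.25]])                    # negative a-b covariance

def step(v, i):
    """One application of Eq. 2 for input pattern i."""
    a, b = rng.multivariate_normal(mean[i], cov)
    a = min(max(a, 1e-6), 1.0 - 1e-6)   # enforce 0 < a_i < 1 (contractivity)
    return a * v + b

# Drive the model with a random input sequence.
v = 0.0
trace = []
for i in rng.integers(1, 5, size=1000):
    v = step(v, int(i))
    trace.append(v)
```

Because every sampled slope satisfies 0 < ai < 1, the simulated trajectory stays bounded, and plotting (V(n − 1), V(n)) colored by i would reproduce the four-branch structure of the return maps.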

Conclusion and discussion

The response rules for input pattern sequences in the CA1 area of rat hippocampal slices were well approximated by a set of contractive affine transformations at both the individual cell and population levels. These findings strongly suggest that CA1 dynamics receiving spatiotemporal input from CA3 has a mode characterized by an input-driven IFS consisting of a self-organized response rule for each spatial input pattern. This dynamics ensures that the distribution of responses is hierarchically clustered according to input histories, and also that a spatial and retrospective code table can be formed automatically. Hence we obtain Cantor coding.

In a previous study, we observed such hierarchical clusters in the distribution of membrane potentials (Fukushima et al. 2007). We also examined the sequences of responsive membrane potentials under the sub- and supra-threshold conditions (12 (= 6 + 6) cells). The supra-threshold condition in the previous study used weaker stimuli than the supra-threshold condition in the present study, and the membrane potentials approached the firing threshold but rarely exceeded it. Reflecting this difference in supra-threshold dynamics, the return maps in the previous case showed slightly different features from those in the present case. In the return maps of 4/6 cells, the slope of the major-axis regression line for the uppermost branches approached one (1.00 ± 0.07 (mean ± SD), n = 4; the correlation coefficients were 0.84 ± 0.08 (mean ± SD)), although the findings of the return map analysis in the present study were also confirmed in more than half of the cells in the previous study (7/12 cells). In these cells, the evolution of the membrane potential corresponding to the uppermost branch was slow, and its peak times within the input interval (30 ms) were very late (19.6 ± 5.1 ms (mean ± SD), n = 388, 4/6 cells) compared with the case of contractive maps (12.5 ± 2.8 ms (n = 171, 2/6 cells)). The mechanism of the breakdown of contractivity at the individual cell level is a subject for future study.

Although the present description of the collective behavior is preliminary, since we used a virtual reconstruction of multi-cell recordings from five cell data sets obtained from different slices, the much better coding performance suggests that Cantor coding belongs to a class of population coding. For a spatial code table with fine hierarchical structure to be formed at the population level, the responses of CA1 neurons must be heterogeneous enough throughout the population to realize the mutually complementary roles required for the coding. For a direct verification of Cantor coding at the population level, the development of intracellular multi-cell recording techniques is necessary. This project is left for future studies.

The important assumption in this study was that CA1 receives spatiotemporal sequences consisting of finite spatial patterns separated by a period of gamma activity from CA3. The linkage between gamma oscillations and theta phase precession provides a clue as to how episodic memories of place-related events involving sequences of events are represented in the hippocampus (see, for example, Lisman 2005).

Field potentials at gamma frequencies (30–100 Hz) in the hippocampus, most commonly observed nested within oscillations at theta frequencies (4–10 Hz), have been suggested to assist in the encoding of memories (Fell et al. 2001; Sederberg et al. 2003, 2007) and the retrieval of memory traces (Montgomery and Buzsáki 2007). The CA3-CA1 system is an intrahippocampal gamma generator, and gamma oscillations in CA1 seem to be entrained by the output of CA3 (Csicsvari et al. 2003). In place cells of the rat hippocampus, O’Keefe and Recce (1993) discovered a phenomenon known as theta phase precession, a gradual shift of theta spike timings as a function of location. Subsequent experimental studies have revealed that cell assemblies representing different locations are recruited in different gamma cycles within a theta oscillation (Dragoi and Buzsáki 2006), and that the cells’ firings are locked to a preferred phase within a gamma cycle (Senior et al. 2008) (for a review, see Lisman and Buzsáki 2008). On the basis of these findings, it is expected that, when theta phase precession occurs, the neuronal patterns representing memory items are successively activated in CA3 in synchrony with the gamma rhythm, and that CA1 receives such spatiotemporal sequences.

Acknowledgements

This work was supported in part by Grants-in-Aid (Nos. 18019002 and 18047001) for Scientific Research on Priority Areas from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the 21st Century COE Program of the Department of Mathematics of Hokkaido University.

Appendix 1

Brain slices from six Wistar rats (4–5 weeks old) were prepared according to the standard procedure reported by Tsukada et al. (2005). The brain was sliced at an angle of 30–45° to the long axis of the hippocampus, with a thickness of 300 μm. Before recording, slices were kept in a bath solution (142 NaCl/2 MgSO4/2.6 NaH2PO4/2 CaCl2/26 NaHCO3/10 glucose (mM), bubbled with 95% O2/5% CO2) at room temperature for at least 60 min.

The slice was then placed in the recording chamber. Neurons were visualized with an IR-DIC camera (C2741079H, Hamamatsu, Japan). Recording electrodes were pulled from borosilicate glass, and their resistance was 5–8 MΩ. Recordings were performed at 28–30°C. The internal solution of the recording electrode was prepared following the method of Magee (2001), and contained 120 KMeSO4/20 KCl/10 HEPES/10 EGTA/4 Mg2ATP/0.3 TrisGTP/14 Tris2phosphocreatine/4 NaCl (mM, pH 7.25 with KOH). Whole-cell patch-clamp recordings were obtained from the somata of pyramidal cells in CA1 using an electrical amplifier (CEZ-3100, Nihonkoden, Japan). Signals were filtered at 10 kHz, sampled at 25 kHz, and stored (micro 1401, CED, England). The starting voltage of the recorded neurons was between −57 and −44 mV, and the membrane potential was maintained between −67 and −76 mV by current injection into the soma before electrical stimulation with two theta glass electrodes (TST-150-6, WPI, Florida).

Appendix 2

When an order of two or more groups with respect to the value of a variable is specified by an alternative hypothesis, the Jonckheere-Terpstra (JT) test (Terpstra 1952; Jonckheere 1954) can be employed instead of the Kruskal-Wallis test (Sheskin 2004). In this study, we used the JT test to evaluate whether the conditional distributions of responses of an individual cell to input pattern sequences of length greater than one were ordered in a self-similar manner according to the similarity of the input pattern sequences. Specifically, for each input pattern sequence i1···ik−1 of length (k − 1), the JT test was applied to the set of four conditional distributions {Gi} (i = 1, …, 4) sharing the common recent history i1···ik−1. The alternative hypothesis is that the {Gi} have the same order as the four distributions {[i]} conditioned on the four input patterns. More precisely, if the order of the medians mi of [i] is m1 < ··· < m4 and the median of the distribution Gi is θi, the alternative hypothesis is θ1 ≤ ··· ≤ θ4 with at least one strict inequality. After the JT test, one-sided comparisons θi < θj (i < j) were conducted using the Bonferroni-Dunn (BD) multiple comparison test (Dunn 1964; Sheskin 2004). For all tests in this study, the significance level was set at p = 0.05. In the BD multiple comparison test, this corresponds to assigning statistical significance at p = 0.05/6 (4C2 = 6) to each one-sided comparison.
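The JT statistic itself reduces to a sum of Mann-Whitney counts over the ordered group pairs; a minimal sketch (ties count as 1/2, and the normal approximation used to obtain p-values is omitted):

```python
import itertools
import numpy as np

def jt_statistic(groups):
    """Jonckheere-Terpstra statistic for groups listed in hypothesized order.

    For every pair of groups (x before y in the hypothesized order), count
    the cross-pairs with x-value < y-value, counting ties as 1/2.
    """
    stat = 0.0
    for x, y in itertools.combinations(groups, 2):
        x, y = np.asarray(x, float), np.asarray(y, float)
        stat += np.sum(x[:, None] < y[None, :])
        stat += 0.5 * np.sum(x[:, None] == y[None, :])
    return stat

# Groups whose values actually increase give the maximal statistic:
increasing = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert jt_statistic(increasing) == 27.0   # 3 pairs x 9 ordered cross-pairs
```

Large values of the statistic support the ordered alternative θ1 ≤ ··· ≤ θ4; in practice the statistic is standardized and compared against the normal distribution, as in the references cited above.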

Appendix 3

To understand the mechanism behind the emergence of contractive affine transformations, we describe the continuous time evolution of somatic membrane potentials under spatiotemporal inputs. For simplicity, we treated the case in which the membrane potentials stayed below the firing threshold. We then fitted the time course during each input interval using the leaky-integrator neuron model,

τm dV(t)/dt = −V(t) + Vsyn(t)
3

where V [mV] is the somatic membrane potential expressing deviation from the resting potential, τm [ ms = kΩ μF] is the decay time constant of the somatic membrane potential due to leakage, and Vsyn [mV] is the effect of synaptic input on the somatic membrane potential. The solution Vt) of Eq. 3 in elapsed time Δt from an input time is expressed by the following formula:

V(Δt) = V(0) exp(−Δt/τm) + A(Δt)
4

where V(0) is the somatic membrane potential at the input time, and A(Δt) is the convolution of Vsyn(s) and exp(−s/τm) from s = 0 to Δt, that is, A(Δt) = τm−1 ∫0Δt Vsyn(Δt − s) exp(−s/τm) ds. The time course of the effect of the synaptic input, Vsyn(Δt), was simply given by an α-function (Gerstner and Kistler 2002):

Vsyn(s) = (qr/τs²)(s − δ) exp(−(s − δ)/τs) Θ(s − δ)
5

where qr [mV ms = kΩ μC] is the total change in somatic membrane potential due to the total effective charge injected into the neuron via its synapses, τs [ms] is the rise and decay time constant of the α-function, δ [ms] is the transmission delay, and Θ(s) is the Heaviside step function, with Θ(s) = 1 for s > 0 and Θ(s) = 0 otherwise. We assumed that the term Vsyn represents the collective input-triggered effect on the somatic membrane potential, which includes not only excitatory synaptic inputs but also feedback and feedforward inhibitory inputs.
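Equations 4 and 5 can be evaluated numerically by direct quadrature of the convolution term; a sketch with illustrative (not fitted) parameter values:

```python
import numpy as np

def v_syn(s, q_r, tau_s, delta):
    """Alpha-function synaptic drive (Eq. 5), zero before the delay."""
    u = s - delta
    return np.where(u > 0, (q_r / tau_s**2) * u * np.exp(-u / tau_s), 0.0)

def membrane(dt, v0, tau_m, tau_s, q_r, delta, n=2000):
    """V(dt) by Eq. 4: leaky decay of V(0) plus the convolution term A(dt)."""
    s = np.linspace(0.0, dt, n)
    integrand = v_syn(dt - s, q_r, tau_s, delta) * np.exp(-s / tau_m)
    # Trapezoidal quadrature of the convolution integral, divided by tau_m.
    A = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s)) / tau_m
    return v0 * np.exp(-dt / tau_m) + A

# Illustrative parameters: tau_m = 25 ms, tau_s = 3 ms, q_r = 40 mV ms,
# delta = 2 ms; input interval 30 ms.
v30 = membrane(30.0, v0=2.0, tau_m=25.0, tau_s=3.0, q_r=40.0, delta=2.0)
```

Because A(Δt) does not involve V(0), evaluating `membrane` at two initial potentials confirms that V(30) is affine in V(0) with slope exp(−30/τm), the intrinsic slope of the main text.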

For each data set of responses {VΔt(n) ; 0 ≤ Δt ≤ 30} (S(n) = 2, 3 and 4) of the cells in the sub-threshold group, we estimated the four parameters (τm, τs, qr, δ) in Eqs. 4 and 5 using nonlinear regression with the least-squares criterion. However, recording data in the early phase of each input interval, [0, t0), were omitted from the estimation to avoid contamination by electrical stimulus artifacts. The least-squares minimization was performed using the nls package with the port algorithm in version 2.5.1 of the R software (R Development Core Team 2007), which iteratively searches for a solution within a given bounded range from a given initial condition, using information from the functional derivative.

A solution for the four parameters was searched for within the following intervals: 0.1 < τm < 500, 0.1 < τs < min(30, τm), 0.1 < qr < 500 and 0.1 < δ < min(10, t0), with t0 ≥ 6. These search intervals were taken wide so that the solutions would not saturate at the boundaries. For each data set, twenty initial conditions were prepared. We accepted a set of convergent parameter values as the solution if its root mean square error was the minimum among all sets of convergent values and was less than 0.15. In almost all data sets, the convergent values of the parameters were not sensitive to the choice of the initial conditions.
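The bounded multi-start procedure can be mimicked in Python with scipy.optimize.least_squares in place of R's nls with the port algorithm; the toy exponential model below stands in for Eqs. 4–5, while the bounds, the twenty initial conditions and the RMSE criterion of 0.15 follow the text:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "recording": a noisy exponential decay standing in for a real
# sub-threshold time course (the true parameters here are arbitrary).
rng = np.random.default_rng(4)
t = np.linspace(0.0, 30.0, 60)
data = 5.0 * np.exp(-t / 12.0) + rng.normal(scale=0.05, size=t.size)

def residuals(p):
    q, tau_m = p
    return q * np.exp(-t / tau_m) - data

# Bounded fit from twenty random initial conditions; keep the best by RMSE.
lo, hi = np.array([0.1, 0.1]), np.array([500.0, 500.0])
best = None
for _ in range(20):
    x0 = rng.uniform(lo, hi)
    fit = least_squares(residuals, x0, bounds=(lo, hi))
    rmse = float(np.sqrt(np.mean(fit.fun**2)))
    if best is None or rmse < best[0]:
        best = (rmse, fit.x)

rmse, (q_hat, tau_hat) = best
accepted = rmse < 0.15          # acceptance criterion from the text
```

As in the text, a fit is accepted only if its RMSE is the minimum over the restarts and below 0.15; with well-separated bounds, most restarts converge to the same solution.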

References


  • Barnsley M (1988) Fractals everywhere. Academic Press, San Diego

  • Bressloff PC, Stark J (1992) Analysis of associative reinforcement learning in neural networks using iterated function systems. IEEE Trans Syst Man and Cybern 22(6):1348–1360

  • Csicsvari J, Jamieson B, Wise KD, Buzsáki G (2003) Mechanisms of gamma oscillations in the hippocampus of the behaving rat. Neuron 37(2):311–322 [PubMed]

  • Dragoi G, Buzsáki G (2006) Temporal encoding of place sequences by hippocampal cell assemblies. Neuron 50(1):145–157 [PubMed]

  • Dunn OJ (1964) Multiple comparisons using rank sums. Technometrics 6:241–252

  • Fell J, Klaver P, Lehnertz K, Grunwald T, Schaller C, Elger CE, Fernández G (2001) Human memory formation is accompanied by rhinal-hippocampal coupling and decoupling. Nat Neurosci 4(12):1259–1264 [PubMed]

  • Ferbinteanu J, Shapiro ML (2003) Prospective and retrospective memory coding in the hippocampus. Neuron 40(18):1227–1239 [PubMed]

  • Frank LM, Brown EM, Wilson MA (2000) Trajectory encoding in the hippocampus and entorhinal cortex. Neuron 27(1):169–178 [PubMed]

  • Fukushima Y, Tsukada M, Tsuda I, Yamaguti Y, Kuroda S (2007) Spatial clustering property and its self-similarity in membrane potentials of hippocampal CA1 pyramidal neurons for a spatio-temporal input sequence. Cogn Neurodyn 1(4):305–316 [PMC free article] [PubMed]
  • Gerstner W, Kistler WM (2002) Spiking neuron models: single neurons, populations, plasticity. Cambridge University Press

  • Hasselmo ME, Eichenbaum H (2005) Hippocampal mechanisms for the context-dependent retrieval of episodes. Neural Netw 18(9):1172–1190 [PMC free article] [PubMed]

  • Hutchinson JE (1981) Fractals and self similarity. Indiana Univ Math J 30(5):713–747

  • Jonckheere AR (1954) A distribution-free k-sample test against ordered alternatives. Biometrika 41:133–145

  • Kaneko K (2005) Inter-intra molecular dynamics as an iterated function system. J Phys Soc Jpn 74(9):2386–2390

  • Kesner RP, Lee I, Gilbert P (2004) A behavioral assessment of hippocampal function based on a subregional analysis. Rev Neurosci 15(5):333–351 [PubMed]
  • Kolen JF (1994) Exploring the computational capabilities of recurrent neural networks. Ph.D. Thesis, Ohio State University.

  • Kruskal WH, Wallis WA (1952) Use of ranks in one-criterion variance analysis. J Am Stat Assoc 47:583–621

  • Levy WB (1996) A sequence predicting CA3 is a flexible associator that learns and uses context to solve hippocampal-like tasks. Hippocampus 6(6):579–659 [PubMed]

  • Lisman J (2005) The theta/gamma discrete phase code occurring during the hippocampal phase precession may be a more general brain coding scheme. Hippocampus 15(7):913–922 [PubMed]
  • Lisman J, Buzsáki G (2008) A neural coding scheme formed by the combined function of gamma and theta oscillations. Schizophr Bull. doi:10.1093/schbul/sbn060 [PMC free article] [PubMed]

  • Magee JC (2001) Dendritic mechanisms of phase precession in hippocampal pyramidal neurons. J Neurophysiol 86(1):528–532 [PubMed]

  • Marr D (1971) Simple memory: a theory for archicortex. Philos Trans R Soc Lond B Biol Sci 262(841):23–81 [PubMed]

  • McNaughton BL, Morris RGM (1987) Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends Neurosci 10(10):408–415

  • Montgomery SM, Buzsáki G (2007) Gamma oscillations dynamically couple hippocampal CA3 and CA1 regions during memory task performance. Proc Natl Acad Sci USA 104(36):14495–14500 [PubMed]

  • Nakazawa K, Quirk MC, Chitwood RA, Watanabe M, Yeckel MF, Sun LD, Kato A, Carr CA, Johnston D, Wilson MA, Tonegawa S (2002) Requirement for hippocampal CA3 NMDA receptors in associative memory recall. Science 297(5579):211–218 [PMC free article] [PubMed]

  • O’Keefe J, Dostrovsky J (1971) The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res 34(1):171–175 [PubMed]

  • O’Keefe J, Recce ML (1993) Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3(3):317–330 [PubMed]

  • Pollack JB (1991) The induction of dynamical recognizers. Mach Learn 7:227–252
  • R Development Core Team (2007) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.Rproject.org

  • Scoville WB, Milner B (1957) Loss of recent memory after bilateral hippocampal lesions. J Neurol Neurosurg Psychiatry 20(1):11–21 [PMC free article] [PubMed]

  • Sederberg PB, Kahana MJ, Howard MW, Donner EJ, Madsen JR (2003) Theta and gamma oscillations during encoding predict subsequent recall. J Neurosci 23(34):10809–10814 [PubMed]

  • Sederberg PB, Schulze-Bonhage A, Madsen JR, Bromfield EB, McCarthy DC, Brandt A, Tully MS, Kahana MJ (2007) Hippocampal and neocortical gamma oscillations predict memory formation in humans. Cereb Cortex 17(5):1190–1196 [PubMed]

  • Senior TJ, Huxter JR, Allen K, O’Neill J, Csicsvari J (2008) Gamma oscillatory firing reveals distinct populations of pyramidal cells in the CA1 region of the hippocampus. J Neurosci 28(9):2274–2286 [PubMed]
  • Sheskin DJ (2004) Handbook of parametric and nonparametric statistical procedures. 3rd edn. CRC Press

  • Siegelmann HT, Sontag ED (1994) Analog computation via neural networks. Theor Comput Sci 131(2):331–360

  • Takahashi M, Lauwereyns J, Sakurai Y, Tsukada M (2009) Behavioral state-dependent episodic representations in rat CA1 neuronal activity during spatial alternation. Cogn Neurodyn 3(2):165–175 [PMC free article] [PubMed]

  • Terpstra TJ (1952) The asymptotic normality and consistency of Kendall’s test against trend, when ties are present in one ranking. Indaga Math 14:327–333

  • Treves A (2004) Computational constraints between retrieving the past and predicting the future, and the CA3-CA1 differentiation. Hippocampus 14(5):539–556 [PubMed]

  • Treves A, Rolls ET (1994) Computational analysis of the role of the hippocampus in memory. Hippocampus 4(3):374–391 [PubMed]

  • Tsuda I (2001) Towards an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav Brain Sci 24(5):793–847 [PubMed]

  • Tsuda I, Kuroda S (2001) Cantor coding in the hippocampus. Jpn J Indust Appl Math 18:249–281

  • Tsuda I, Kuroda S (2004) A complex systems approach to an interpretation of dynamic brain activity II: does Cantor coding provide a dynamic model for the formation of episodic memory. Lect Notes Comput Sci 3146:129–139

  • Tsuda I, Yamaguchi A (1998) Singular-continuous nowhere-differentiable attractors in neural systems. Neural Netw 11(5):927–937 [PubMed]

  • Tsukada M, Aihara T, Saito H, Kato H (1996) Hippocampal LTP depends on spatial and temporal correlation of inputs. Neural Netw 9(8):1357–1365 [PubMed]

  • Tsukada M, Aihara T, Kobayashi Y, Shimazaki H (2005) Spatial analysis of spike-timing-dependent LTP and LTD in the CA1 area of hippocampal slices using optical imaging. Hippocampus 15(1):104–109 [PubMed]

  • Tsukada M and Pan X (2005) The spatiotemporal learning rule and its efficiency in separating spatiotemporal patterns. Biol Cybern 92:139–146 [PubMed]

  • Tsukada M, Yamazaki Y, Kojima H (2007) Interaction between the spatiotemporal learning rule (STLR) and Hebb type (HEBB) in single pyramidal cells in the hippocampus CA1 area. Cogn Neurodyn 1(2):157–167 [PMC free article] [PubMed]

  • Wallenstein GV, Hasselmo ME (1997) GABAergic modulation of hippocampal activity: sequence learning, place field development, and the phase precession effect. J Neurophysiol 78(1):393–408 [PubMed]

  • Wills TJ, Lever C, Cacucci F, Burgess N, O’Keefe J (2005) Attractor dynamics in the hippocampal representation of the local environment. Science 308(6):873–876 [PMC free article] [PubMed]

  • Wood ER, Dudchenko PA, Robitsek RJ, Eichenbaum H (2000) Hippocampal neurons encode information about different types of memory episodes occurring in the same location. Neuron 27(3):623–633 [PubMed]

  • Yamaguchi Y (2003) A theory of hippocampal memory based on theta phase precession. Biol Cybern 89(1):1–9 [PubMed]
  • Yamaguti Y et al. (2009) in preparation.

  • Yamamoto Y, Gohara K (2000) Continuous hitting movements modeled from the perspective of dynamical systems with temporal input. Hum Mov Sci 19(3):341–371

  • Yoshida M, Hayashi H (2007) Emergence of sequence sensitivity in a hippocampal CA3-CA1 model. Neural Netw 20(6):653–667 [PubMed]

  • Zola-Morgan S, Squire LR, Amaral DG (1986) Human amnesia and the medial temporal region: enduring memory impairment following a bilateral lesion limited to field CA1 of the hippocampus. J Neurosci 6(10):2950–2967 [PubMed]
