


Behav Brain Res. Author manuscript; available in PMC 2010 June 8.

Published in final edited form as:

Behav Brain Res. 2009 June 8; 200(1): 220–231.

doi: 10.1016/j.bbr.2009.01.021

PMCID: PMC2768379

NIHMSID: NIHMS91628

Peter J. Siekmeier, Harvard Medical School and McLean Hospital, 115 Mill Street, Belmont, MA 02478 USA, Telephone: 617-855-3588, Fax: 617-855-3826, Email: psiekmeier@mclean.harvard.edu


The manner in which hippocampus processes neural signals is thought to be central to the memory encoding process. A theoretically-oriented literature has suggested that this is carried out via “attractors” or distinctive spatio-temporal patterns of activity. However, these ideas have not been thoroughly investigated using computational models featuring both realistic single-cell physiology and detailed cell-to-cell connectivity. Here we present a 452-cell simulation based on Traub et al.’s pyramidal cell [81] and interneuron [83] models, incorporating patterns of synaptic connectivity based on an extensive review of the neuroanatomic literature. When stimulated with a one-second, physiologically realistic input, our simulated tissue shows the ability to hold activity on-line for several seconds; furthermore, its spiking activity, as measured by frequency and interspike interval (ISI) distributions, resembles that of *in vivo* hippocampus. An interesting emergent property of the system is its tendency to transition from stable state to stable state, a behavior consistent with recent experimental findings [73]. Inspection of spike trains and simulated blockade of K_{AHP} channels suggest that this is mediated by spike frequency adaptation. This finding, in conjunction with studies showing that apamin, a K_{AHP} channel blocker, enhances the memory consolidation process in laboratory animals, suggests that the formation of stable attractor states is central to the process by which memories are encoded. Ways that this methodology could shed light on the etiology of mental illness, such as schizophrenia, are discussed.

Hippocampus is thought to be central in the process of encoding episodic memories, but the manner in which it performs this task is unknown. In light of this, and the fact that hippocampal dysfunction has been implicated in a number of neuropsychiatric disorders, including Alzheimer’s Disease, seizure disorder, and schizophrenia, a large number of neurobiological studies have focused on this area, producing vast amounts of highly detailed information. Computational modeling is a research tool that allows one to understand how a large number of variables at the neurophysiologic and neuroanatomic level combine to produce the *emergent* behaviors of a system; it is a natural method to attempt to better understand brain function.

It is not surprising that a number of computational models of hippocampus have been developed, ranging from the single-cell to the network level. Researchers have described multicompartment single-cell models of hippocampal pyramidal neurons that faithfully reproduce transmembrane voltage patterns of cells recorded *in vitro* [43,47,63,81]. Traub et al [82] constructed a model of hippocampus CA3 which reproduces the gamma-modulated theta oscillatory behavior seen in this brain area. Menschik and Finkel [61] modeled CA3 and included the effects of acetylcholine to better understand the etiology of Alzheimer’s Disease. Many [34,61,90] have used network-level modeling to attempt to more directly understand function. For example, Wallenstein et al [87] present a model that is capable of performing the *transitive inference* task, which sheds light on how this might be carried out at the level of long term potentiation (LTP).

Durstewitz et al [23] reviewed a number of neurocomputational models suggesting that attractor states—that is, stable spatial patterns of neuronal activation—of various kinds play a role in the encoding of working memory in prefrontal cortex. Researchers have noted that hippocampal subfields CA1 [e.g., 89] and CA3 [e.g., 71] have the neuroanatomic substrate to behave in this way, and there is a body of theoretical work indicating that attractor states [2,40], or the sequencing of such states [44,73], are important in mnemonic activity. The question we aim to address is: Does hippocampal tissue encode information in this way, or some related way? Inasmuch as it has been suggested that hippocampus carries out its function by in some manner transforming the information flowing through it, we examine the fluid, temporal qualities of these patterns: do distinctive stable states persist over time, as some have suggested; if so, what characterizes the transitional phenomena? The recent explosion of detailed neuroanatomic studies has allowed us to incorporate a level of detail that has not been feasible in previous models. Indeed, ours is the first hippocampal simulation that is based on an exhaustive review of the neurobiological literature—i.e., it is the first “tissue level” model—and the first to faithfully reproduce spiking patterns seen *in vivo*.

We found that our hippocampal model appears to form multiple stable states. Furthermore, the simulation suggests that the phenomenon of spike frequency adaptation (SFA) at a single-cell level underlies the transitions from state to state. Psychopharmacologic studies in which the K_{AHP} channel (which subserves SFA) is blocked produce what appears to be enhanced memory storage capacity in laboratory animals, as well as increased LTP. The analogous “virtual lesion” study in our model produces fewer, longer stable states. This simulation points to the importance of attractor-like states in the memory encoding process.

An explicit goal of our modeling approach was to include as much biological realism as possible, in particular at the level of neural connectivity, but also at the level of the individual neuron. The overall computer simulation consisted of 400 pyramidal cells and 52 interneurons. We wished to include a number of neurons that was large enough to reveal the emergent properties of the hippocampal tissue, while remaining within the constraints of our computing hardware. We used the 64 compartment pyramidal cell model described by Traub et al [81], and the 46 compartment interneuron model implemented by Menschik and Finkel, which is an adaptation of the Traub and Miles [83] model. Both models feature realistic dendritic arborizations, and incorporate Na^{+}, Ca^{++}, K^{+}_{DR}, K^{+}_{AHP}, K^{+}_{C}, and K^{+}_{A} channels distributed along the somato-dendritic axis, as well as an explicit representation of internal Ca^{++} concentration. Axon initial segments were explicitly modeled as compartments, but axons themselves were operationalized simply as delays. The number of neurons in the overall computer simulation is of the same order of magnitude as those of previously published computational models. The standard for adequate network size, however, is not an arbitrary number of cells—rather, an important criterion is that the network exhibits network level behaviors seen in actual tissue; evidence of this is given in the Results section.

This model was explicitly conceptualized as a three-dimensional section of hippocampal tissue, extending 154 microns in the septo-temporal direction, 154 microns in the transverse direction, and 634 microns in the dimension orthogonal to these, extending from the stratum lacunosum-moleculare to the alveus. Each soma has an x, y, and z location in the 3-space defined by these axes (Figure 1).

Schematic diagram of overall model. Grey spheres indicate location of cell bodies, black cubes show placement of electrodes. Boundaries of simulated tissue block are shown with black lines. Dense layer of cells is s. pyramidale, with s. radiatum and s. …

The model was constructed, and will be described, as a two-step process: first, calculating the steric relationship of the various cells of the model, and second, representing their patterns of connectivity.

Densities of interneuron subtypes by calcium binding proteins in the various hippocampal strata (stratum oriens, pyramidale, radiatum, lacunosum-moleculare) were calculated from data given in Freund and Buzsaki [29] and Jinno and Kosaka [45]. Freund and Buzsaki’s study (their Figure 23) indicates areal densities for 60 micron hippocampal slices; from this, we calculated spatial densities (see Table 1). The breakdown of model interneurons is as follows: 16 parvalbumin (PV) cells, 6 calbindin (CB) cells, 19 calretinin (CR) cells, 7 somatostatin (SOM) cells, and 4 cholecystokinin (CCK) cells. For all interneuron subtypes, we use the same dendritic morphology and dendritic ion channel distribution. While it is certain that these subtypes differ in terms of dendritic morphology and neurophysiology, we felt that this was a reasonable approximation, particularly given the large number of neurobiological unknowns involved for each of the subgroups. In the model, the calcium-binding subtypes are distinguished from one another on the basis of axonal projection characteristics, as described below.
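The areal-to-spatial conversion described here is simple arithmetic; a minimal sketch (the 60-micron slice thickness follows the slices cited above, while the density values below are hypothetical, not the model's):

```python
def areal_to_spatial(cells_per_mm2, slice_thickness_um=60.0):
    """Convert an areal cell density measured in a tissue slice (cells/mm^2)
    to a spatial density (cells/mm^3) by dividing by slice thickness."""
    return cells_per_mm2 / (slice_thickness_um / 1000.0)
```

For example, a count of 12 cells/mm^2 in a 60 micron slice corresponds to 200 cells/mm^3.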

Density of pyramidal cell bodies in stratum pyramidale of CA1 was taken from the neuroanatomic studies of Boss et al [11] and Hosseini-Sharifabad and Nyengaard [42], which provide similar numerical estimates. For each cell, x, y, and z coordinates within the relevant stratum were randomly generated.

A good deal about neuronal connectivity in hippocampus remains unknown. To take maximum advantage of what is known at a neuroanatomic level, we constructed axon-to-dendrite synaptic connections according to the following algorithm: (i) Map cell categories based on calcium-binding proteins onto categories based on morphology, to the extent possible. (ii) For each morphological class, define axonal projection patterns based on synaptic targets, in terms of stratum, target cell type (pyramidal cell vs. interneuron), and synaptic target area on cell (initial segment vs. soma vs. dendrites). (iii) Calculate spatial bouton densities for the axon projection clouds of the various subtypes, and apportion synapses according to percentages of (ii) above.
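Step (iii) of this algorithm amounts to bookkeeping: estimate the bouton count an axon cloud places in a volume, then apportion it by the target percentages of step (ii). A minimal sketch, with hypothetical density and volume values (the target fractions echo the o-lm percentages quoted later in the text; the dictionary keys are illustrative, not the model's actual data structure):

```python
def allocate_synapses(bouton_density_per_um3, overlap_volume_um3, target_fractions):
    """Step (iii) sketch: total boutons an axon cloud places in an overlap
    volume, apportioned across target classes by the fractions of step (ii).
    target_fractions maps target class -> fraction (fractions sum to ~1)."""
    n_total = round(bouton_density_per_um3 * overlap_volume_um3)
    return {target: round(n_total * frac) for target, frac in target_fractions.items()}
```

With a density of 0.001 boutons per micron^3 and an overlap volume of 100,000 micron^3, the 100 resulting synapses would be split 86/3/11 across the three target classes.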

At a summary level, interneuron connectivity was based on the information shown in Figure 2, which represents the distillation by Freund and Buzsaki [29] of a large number of neuroanatomic studies. The precise manner in which the interneuron categorization based on calcium binding proteins used here, and in much of the current literature, maps onto traditional categories based on morphology (e.g., basket cells, chandelier cells, etc) is only beginning to become clear. However, to the extent that relationships are known, we attempted to include them in the model. For example, as indicated in Figure 2, PV interneurons project to stratum pyramidale and the proximal area of stratum oriens. It is thought that interneurons that stain for PV are almost entirely basket cells or chandelier cells, though the precise percentage of each is not known. Nonetheless, we felt that it would be more realistic to assume two distinct subgroups of PV interneurons—one projecting densely to pyramidal cell somata, the other to pyramidal cell initial segments—than to assume a homogeneous category of soma/IS-projecting interneurons.

Summary diagram of axonal and dendritic arborizations of interneurons, by subgroup. The dark circles indicate the location of cell bodies for each of the interneuron subtypes; the dark lines emanating from them show the orientation and laminar distribution …

The manner in which the calcium binding categories included in our model—which are largely non-overlapping—were allocated to one or more morphological categories is described below. All of the following is summarized in Table 2a.

Connectivity parameters. a. Morphological categories of interneuron subtypes. For cells of the given calcium-binding category in the given stratum, breakdown by morphological class is given. For example, the first row indicates that 45% of PV cells located …

Many researchers have found hippocampal PV cells to be either basket cells or chandelier cells [e.g., 9,78]. One group [66] found that a small fraction of a sample of 36 PV-staining interneurons in CA1 had a bistratified axon projection pattern. It was therefore assumed that 10% of all PV cells are bistratified, with the remainder evenly divided between basket and chandelier cells.

Researchers have found that CB cells tend to innervate the proximal and distal dendritic trees of interneurons in CA1 [56]. Freund and Buzsaki [29] divide CB cells into three subtypes: (a) Cells with a bistratified axonal arborization, projecting to SR (stratum radiatum) and SO (stratum oriens), but respecting the SP (stratum pyramidale); (b) Radiatum-projecting cells. We assume these to be the “type I” cells described by Gulyas and Freund [31]. Somata of these cells are said to lie predominantly in the SR, and only occasionally in SP and SL (stratum lacunosum-moleculare). (c) Horizontal cells at the SR-SL border. The axonal arborization of these cells is poorly characterized. Therefore, of the CB cells whose soma lie in SR, 50% were assumed to be bistratified and 50% were assumed to be radiatum-projecting. CB cells elsewhere were assumed to be bistratified, so that our categories would be entirely non-overlapping. Cells that stained for both CB and SOM were classified as SOM cells. Approximately 87% of CB-staining cells in SO (and 0% of cells in other strata) fall into this category [29].

CR interneurons primarily target the dendritic processes of other interneurons [25,32]. Therefore, CR cells, regardless of location, are classified as interneuron-projecting cells (see below).

In CA1, SOM cells are said to correspond to oriens-lacunosum moleculare (o-lm) cells [48]. This was taken as a one-to-one correspondence in our model.

There is an extensive literature suggesting that CCK-immunoreactive cells are generally basket cells [29,55]. One labeling study [66] found that a minority of 27 CCK-immunoreactive cells were bistratified cells, Schaffer collateral-associated cells, or of other non-basket cell morphologies. Given the preponderance of data, we assumed that all CCK cells in the model are basket cells.

We define interneurons’ morphological categories in terms of axon projection pattern. (As mentioned above, we use a single canonical model for interneuron physiology and dendritic morphology.) These data are generally taken from studies in which an axon of a particular interneuron type is labeled with an anterograde tracer, allowing visualization of the entire axon with all of its ramifications. For example, Katona et al [48] found that o-lm cells project only to SL and that of the total number of synaptic connections they form there, 86% are on dendrites of pyramidal cells, 3% are on pyramidal cell somata, and 11% are on interneuron somata; quantitatively similar results were obtained by Matyas et al [56]. Analogous data were derived from comparable experiments on the other interneuron classes: basket cells [33], bistratified cells [31], oriens-lacunosum-moleculare cells [48], and pyramidal cells [79].

In terms of computational implementation, we chose a data structure that was consistent with the level of detail available in the neuroanatomic literature. Thus, as shown in Table 2b, for each neuron type we are able to specify the stratum to which it projects, whether the target cell is a PC or interneuron (but not which *subtype* of interneuron), and the region on the target cell on which synapses fall (IS, cell body, or dendrites). In rare cases, however, data at a higher degree of resolution are available. For example, Gulyas et al [32] found that CR interneurons synapse with dendrites of other interneurons, with the following caveats: (a) CR cells avoid PV interneurons; and (b) when CR interneurons contact other CR cells, they do so at both dendrites AND somata. Given the data structure of the model, it was not possible to include these facts in the current simulation.

Quantitative estimates of spatial synaptic density for each of the interneuron classes were derived from the work of Sik et al [76]. These researchers anterogradely stained axonal projections of each of a number of different classes of interneurons. Making several tissue sections, they calculated a total (linear) axonal distance per volume, as well as a bouton frequency, in boutons per length of axon. From these data, a spatial bouton (synapse) density can be calculated. In our model, we made the following assumptions: (i) for a given cell, bouton density depends on the particular type of interneuron, as defined by the type of calcium binding protein expressed by the cell. Breakdown of synaptic targets—or, cast another way, probability that a bouton of a given cell in a given stratum will contact a particular target—is given by the figures of Table 2b, as described above. (ii) Spatial bouton density does not vary with septo-temporal or transverse distance from the parent cell. While Sik et al show that densities do decrease as distance from the soma increases to, for example, a mm or more, given the small scale of our model, this is likely a safe assumption. (iii) For a given interneuron type, bouton density does not vary from stratum to stratum. The values used in the model, in units of synapses per micron^{3}, are as follows: PV 2.12 × 10^{-5}, CB 1.13 × 10^{-4}, CR 6.36 × 10^{-3}, SOM 4.21 × 10^{-5}, CCK 2.12 × 10^{-5}.
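The Sik et al-style density estimate is the product of the two measured quantities just described; a minimal sketch with hypothetical input values:

```python
def spatial_bouton_density(axon_um_per_um3, boutons_per_um):
    """Spatial bouton density = (linear axon length per tissue volume)
    times (bouton frequency per unit axon length), in boutons/micron^3."""
    return axon_um_per_um3 * boutons_per_um
```

For example, 5 × 10^{-4} micron of axon per micron^3 of tissue, at 0.2 boutons per micron of axon, yields 10^{-4} boutons per micron^3.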

Neuroanatomic literature [14,49] suggests that CA1 axons course through stratum oriens toward alveus, giving off occasional collaterals. In our model, we assume that pyramidal to pyramidal connections occur in stratum oriens (that is, on cells’ basal dendrites). This should be contrasted with area CA3, where it is well known that recurrent excitatory connectivity is considerably denser.

It should be noted that postsynaptic targets, as percentages, are not known with a high degree of certainty for pyramidal cells [77]. Somogyi et al [79] state that CA1 pyramidal cells synapse on other pyramidal cells as well as basket cells and SOM cells; synapses onto axo-axonic, bistratified, and other interneuron types are thought to be possible, but not yet proven. We therefore assume projection percentages (PCs vs. interneurons) in rough proportion to their presence in SO.

Using a methodology similar to that described for interneurons, spatial density for varicosities in pyramidal axons was calculated from data given in Perez et al [67] and Knowles and Schwartzkroin [49]. This was calculated to be 9.09 × 10^{-4} synapses per micron^{3}. It is well known that pyramidal to pyramidal connectivity is considerably lower in CA1 than in CA3, though few quantitative estimates of this have been made.

While we aimed to include a high level of biological realism in our model, particularly in the area of connectivity, certain simplifying assumptions were made. All computational models necessarily make abstractions. Which details to include and not include must be based on the purpose of the computational model in question; it is best to explicitly acknowledge the assumptions made.

We used a pyramidal cell model designed as a CA3 cell, and there are some differences between the electrophysiological response properties of CA1 and CA3 PCs. For example, in contrast to CA1 cells, CA3 cells show action potentials that decrease in amplitude as repetitive spiking occurs. Also, it is thought that CA1 cells show greater SFA than CA3 cells. Thus, while this may introduce a degree of error in the model, given the *direction* of the error, it is tolerable. That is, our model may *understate* the degree of SFA at play in CA1, and it is possible that the SFA-mediated effect we describe could in actuality be greater.

Also, our model does not include gap junctions. Traub et al [84] created a network model incorporating gap junctions between dendrites. They show that these junctions underlie oscillatory activity, particularly in the gamma range. However, the primary focus of our study was the elucidation of system wide stable states and their intermittent transitions over several seconds. Gamma range (40 cycles per second) activity operates at a markedly different time scale, and we felt safe not including this, given our behavior of interest.

Synaptic conductances were assumed to obey a dual exponential function, as follows:

$$g_{\mathit{syn}}(t)=\frac{A\,g_{max}}{\tau_{1}-\tau_{2}}\left(e^{-t/\tau_{1}}-e^{-t/\tau_{2}}\right),\quad\text{for}\ \tau_{1}>\tau_{2},$$

(1)

where A is a normalization constant chosen such that g_{syn} reaches a maximum of g_{max} [12].
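Equation 1 can be evaluated directly once A is fixed by the peak condition; a sketch (the closed-form peak time is standard for a difference of exponentials; the parameter values used below are hypothetical, not those of Table 3):

```python
import math

def dual_exp_conductance(t, g_max, tau1, tau2):
    """Dual-exponential synaptic conductance of Equation 1, with the
    normalization constant A chosen so the peak conductance equals g_max.
    Requires tau1 > tau2 > 0; t, tau1, tau2 share the same time unit."""
    # Peak time of e^(-t/tau1) - e^(-t/tau2)
    t_peak = (tau1 * tau2 / (tau1 - tau2)) * math.log(tau1 / tau2)
    peak = (math.exp(-t_peak / tau1) - math.exp(-t_peak / tau2)) / (tau1 - tau2)
    A = 1.0 / peak  # normalization: g(t_peak) = g_max
    return (A * g_max / (tau1 - tau2)) * (math.exp(-t / tau1) - math.exp(-t / tau2))
```

The conductance rises from zero, peaks exactly at g_max, and decays back toward zero with time constant tau1.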

We assumed all interneurons form GABAergic synapses on their target cells. We assumed that pyramidal cells form excitatory synapses on their target cells, either AMPA or NMDA, with a 0.5 probability for each. Channel characteristics that were used in our model, in terms of values given in Equation 1, are presented in Table 3.

Synaptic channel parameters. Column headings: E equilibrium potential; τ_{1}, τ_{2}, and g_{max} as given in Equation 1. Subscript p indicates postsynaptic cell is pyramidal, subscript i signifies interneuron.

Because we were interested in understanding behaviors that were properties of the tissue itself, our model is intentionally self-contained: all synaptic stimulation to cells (aside from the initial stimulation) arises from the other cells of the model. A by-product of the fact that our model represents a very small piece of tissue is that the amount of innervation received by a given cell is considerably lower than that received by hippocampal cells *in vivo*: for example, actual CA1 PCs receive about 30,000 excitatory and 1,700 inhibitory inputs [59]; the number of synaptic inputs received by PCs in our model was orders of magnitude lower than this. To compensate, we multiplied g_{max} for all synaptic conductances by a constant scaling or “weight” factor. We arrived at this multiplier by gradually increasing it and re-running the model until pyramidal cell and interneuron transmembrane voltages showed realistic-appearing spike waveforms; the factor we used was 300. We were reassured in our choice of weight factor by subsequent statistical analysis (see Results) that indicated reasonable agreement between basic model behaviors and actual tissue.

In contrast to some modeling efforts, our intention was not to impress upon the network patterned stimuli implicitly containing information, or the output of a single neuroanatomic track. Rather, we wished to present the network with an input stimulus that was biologically realistic yet “neutral”, with the emphasis on understanding how this tissue processed and transformed the information. Thus, we placed a “dummy” AMPA channel in each cell soma of the model (D. Beeman, personal communication, May 2002), and activated each randomly, with a mean activation rate of 15 Hz; this stimulation was applied for one second, and the model was allowed to run for an additional five seconds of brain time. Transmembrane potential from the soma of each neuron was recorded.

The question of initial conditions is a difficult one, as any set of assumptions about starting values for transmembrane potentials, values of Hodgkin-Huxley channel gates, etc., introduces bias into the functioning of the model; ideally, one would select a starting state that is stochastic, but biologically plausible. Therefore, we allowed the system to start in a state in which all variables were set to 0; we then applied the random stimulation as described above. This briefly produced unnatural behaviors, which damped out within about 0.4 second, after which spike waveforms became normal-appearing. We took this 0.4 second mark as the starting point of the simulation, given that it would make for stochastic, but still neurophysiologically realistic, initial conditions. Data from the preceding “transient” period were excluded from our analysis.

We also placed “virtual electrodes” in 16 locations throughout the simulated tissue block to record local field potential (Figure 1). We calculated LFPs using the following equation, given by Nunez [64] and adapted for use in neurocomputational modeling [12]:

$$\Phi=\frac{1}{4\pi s}\sum_{i=1}^{n}\frac{I_{m,i}}{R_{i}}$$

(2)

where Φ is the field potential in volts, s is the conductivity of the medium surrounding the neurons, in 1/Ωm, I_{m,i} is the transmembrane current in amperes across the ith neural compartment, and R_{i} is the distance from the ith neural compartment to the recording electrode. The sum is taken over every computational compartment of every neuron in the simulation.
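Equation 2 translates directly into code; a minimal sketch (units as in the text; the example conductivity default of 0.3 S/m is an illustrative value, not necessarily the one used in the simulation):

```python
import math

def field_potential(currents, distances, sigma=0.3):
    """Equation 2: volume-conductor sum. currents[i] is the transmembrane
    current (A) of compartment i; distances[i] its distance (m) to the
    electrode; sigma is extracellular conductivity (1/ohm-m)."""
    return sum(i_m / r for i_m, r in zip(currents, distances)) / (4.0 * math.pi * sigma)
```

Each compartment contributes as a point current source, falling off as 1/R; equal and opposite currents at equal distances cancel exactly.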

The simulation was implemented using the GENESIS neural modeling language [12] and run under LINUX on a dual-processor PC. The individual pyramidal cell and interneuron models were ported to GENESIS by Sampat and Huerta (http://www.genesis-sim.org/BABEL/babeldirs/cells) and Menschik and Finkel [61], respectively; both are available at the aforementioned URL. Programs to specify cell placement and connectivity were written in the C programming language by the author. These were designed to be maximally flexible: parameter files to the programs, in the form of Tables 1 and 2, can be updated as additional neuroanatomic information becomes available.

We used the Crank-Nicolson numerical integration method [19], with an integration time step of 0.125 msec. This was among the fastest numerical integration methods available using the GENESIS software and, with the time step used, did not produce significant error [12]. A simulation of 2.5 seconds of “neuronal time” required about 24 hours of computer time. For questions on programming details, readers are invited to email the author at psiekmeier@mclean.harvard.edu.
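For illustration, the Crank-Nicolson scheme applied to a single passive compartment (this is a sketch of the integration scheme only, not the paper's multi-compartment GENESIS solver; the membrane parameters below are hypothetical, though the 0.125 msec step matches the text):

```python
def crank_nicolson_relax(v0, e_rest, tau_m, dt, steps):
    """Crank-Nicolson integration of a passive membrane dV/dt = -(V - E)/tau.
    The update averages the old and new derivative, giving a second-order,
    unconditionally stable semi-implicit step:
        V'(1 + dt/2tau) = V(1 - dt/2tau) + dt*E/tau."""
    a = dt / (2.0 * tau_m)
    v = v0
    for _ in range(steps):
        v = ((1.0 - a) * v + dt * e_rest / tau_m) / (1.0 + a)
    return v
```

Starting from 0 V with a 20 msec membrane time constant and the 0.125 msec step, the voltage relaxes to the resting potential, and a trajectory started at rest stays there.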

We wished to quantitatively compare the behavior of our model to that of hippocampi *in vivo*—to the extent there was agreement, we would feel reassured about the biological realism of the simulation. We did this by comparing model outputs to actual physiological recordings using three measures: average spike rates for pyramidal cells and interneurons, distribution of neurons’ spike rates, and distribution of interspike intervals (ISIs).

We found the average spike rate for all pyramidal cells in the model to be 2.45 Hz, and for all interneurons to be 25.6 Hz. We took 0 V to be spike threshold. Model outputs compared well with published *in vivo* values, as summarized in Figure 3a. Perhaps more important is the *distribution* of spike rates seen. Figure 3b shows a comparison of a frequency distribution for mean firing rates of neurons in rat hippocampus experimentally vs. outputs from our model for both pyramidal cells and interneurons. A similar comparison for ISIs is presented in Figure 4. Agreement is good in both cases.
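Spike rates were obtained by thresholding the recorded somatic voltage traces; a minimal sketch of that counting (the synthetic trace values are hypothetical; the 0 V threshold follows the text):

```python
def spike_rate(v_trace, dt, threshold=0.0):
    """Mean firing rate (Hz) counted as upward crossings of a spike
    threshold in a somatic voltage trace sampled every dt seconds."""
    n_spikes = sum(1 for a, b in zip(v_trace, v_trace[1:]) if a < threshold <= b)
    return n_spikes / (dt * (len(v_trace) - 1))
```

A one-second trace containing two threshold crossings yields 2 Hz.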

Interspike interval distributions of model neurons vs. *in vivo* cells. a. Model pyramidal cells, b. CA1 PE (“putative excitatory”) cells, c. model interneurons, d. CA1 FS (“fast spiking”) cells. In all panels, x axis is …

At distinct moments in the 6-second run, system-wide “shifts” in the network’s overall pattern of firing appeared; these changes were reflected in the local field potential. The grey vertical bars of Figure 5b indicate these transitions, which were arrived at using the following three-step algorithm:

Overall model behavior. a. LFP recorded from s. pyramidale. Shifts in LFP traces were seen to occur, to a greater or lesser degree, in all electrodes of the model at the same instances, indicating systemic events (data not shown). b. Spike trains of model …

- Inspection of the spike train of a single model neuron in many cases reveals periods of fast spiking followed by periods of markedly slower firing, or no activity at all. Such transition points have been examined in the neurophysiologic literature, though there appears to be no consensus on how such a point should be defined or identified. A number of heuristics have been suggested. For example, Liu et al [53] define a *neural activity change* (NAC) as the point at which there occurs an ISI greater than the mean ISI of the spike train by three standard deviations. Other methods have been suggested [65], and in a review of these methods, Churchward et al [15] found that simple visual inspection of spike train data is as reliable as any of the methods proposed to detect changes in neuronal discharge patterns. We defined *transition points* as those instances at which the sequential ISI ratio [ISI(t)/ISI(t+1)] was either much larger than average, indicating the beginning of a fast-spiking episode, or much smaller than average, indicating the end of such an episode. We termed this parameter *r* and assumed a cutoff value of 26. Thus, if ISI(t)/ISI(t+1) was 26 or greater, we marked this as the beginning of a fast-spiking episode; if it was less than or equal to 1/26, we marked this as the end of such an episode. Importantly, we performed a sensitivity analysis and found the simulation results to be insensitive to small changes in this parameter value: for 24 < r < 26, the model produced 7 shifts; for 16 < r < 28, the model produced 6, 7, or 8 shifts.

- For a system-wide shift to occur, we required a certain number of these transition points, across all cells, to occur within a narrow time window; we termed this parameter *w*. Again, there appears to be no clear consensus in the neurophysiologic literature as to what time window width defines simultaneity; values in the literature have ranged from 10 msec in studies examining simultaneity of cell spiking [30] to 50 [74] or 100 msec in studies examining simultaneity of firing rate transitions. We chose a value of 45 msec, near the center of this range. Sensitivity analyses revealed that model results were robust to small changes in this parameter value: specifically, for 44 < w < 47, the model produced 7 shifts; for 42 < w < 58, the model produced 6, 7, or 8 shifts.

- When a sufficient number of these transition points (which we term the *threshold number*, n) occur within a time window, w, a *state transition* is said to occur. We wished to select n such that the probability of identifying a transition point by chance was very low (< 0.01). By the binomial theorem, the probability of n occurrences (i.e., transition points) across the 452 cells of the model in a given time window is

$$p(n)=\binom{452}{n}{\theta}^{n}{(1-\theta)}^{452-n}$$

(3)

where θ is the probability of a transition point occurring within time window w for a given cell. As there were 512 total transition points across the 452 cells in the course of the 6 second run, the probability of the occurrence of a transition point for a particular cell in a window of width 45 msec was θ = 0.02673. Thus, the probability of 11 or more simultaneous transitions occurring by chance was p(n≥11) = 0.000219.
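The three-step detection algorithm above can be sketched as follows (a simplified reimplementation, not the original analysis code; the greedy left-to-right window scan is one of several reasonable ways to realize the second step):

```python
def transition_points(spikes, r=26.0):
    """Step 1: flag spike times where the sequential ISI ratio
    ISI(t)/ISI(t+1) is >= r (onset of a fast-spiking episode) or
    <= 1/r (end of one)."""
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    points = []
    for k in range(len(isis) - 1):
        if isis[k + 1] == 0.0:
            continue
        ratio = isis[k] / isis[k + 1]
        if ratio >= r or ratio <= 1.0 / r:
            points.append(spikes[k + 1])
    return points


def state_transitions(spike_trains, r=26.0, w=0.045, n_thresh=11):
    """Steps 2-3: pool transition points over all cells and declare a
    system-wide state transition wherever >= n_thresh of them fall
    inside one window of width w (seconds)."""
    pts = sorted(t for train in spike_trains for t in transition_points(train, r))
    shifts = []
    i = 0
    while i < len(pts):
        j = i
        while j + 1 < len(pts) and pts[j + 1] - pts[i] <= w:
            j += 1
        if j - i + 1 >= n_thresh:
            shifts.append(pts[i])
            i = j + 1
        else:
            i += 1
    return shifts
```

A train that jumps from 0.5 s ISIs to 10 msec ISIs produces a single onset point; eleven such cells aligned in time yield one system-wide shift, while ten do not.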

When using parameter values for sequential ISI ratio, time window, and threshold of 26, 45, and 11, respectively, the system exhibits 7 activity shifts defining 8 *activity epochs*. A representative LFP trace from a virtual electrode in stratum pyramidale is shown in Figure 5a; correspondence between LFP shifts and the transitions indicated in Figure 5b is apparent.
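The binomial tail used to set the threshold number can be computed directly from Equation 3; a sketch (assuming, as the equation does, that transition points occur independently across cells; the theta values in the example are illustrative):

```python
from math import comb

def p_at_least(n, theta, n_cells=452):
    """Upper tail of the binomial in Equation 3: probability that n or
    more of the n_cells cells show a transition point in a given window,
    if points occur independently with per-cell probability theta."""
    return sum(comb(n_cells, k) * theta ** k * (1.0 - theta) ** (n_cells - k)
               for k in range(n, n_cells + 1))
```

The tail is 1 at n = 0 and shrinks monotonically as the required count n grows, which is what allows a threshold with chance probability below 0.01 to be chosen.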

To make sure that the simulation run we analyzed approximated a steady state condition, as opposed to a continuation of the initial transient, we allowed the system to run for an extended period of time (68 simulated seconds). During this period it produced 74 epochs, or 1.09 epochs/second on average, similar to the 1.33 epochs/second of the initial (analyzed) run. Significantly, the rate at which epochs occurred was roughly constant throughout the extended run (data not shown).

To ensure that the transitions observed in our analyses did not occur strictly by chance, we created and evaluated surrogate data sets using three different methodologies, as described below:

- We randomly shuffled the interspike intervals for each of the simulated cells of the model. We repeated this 100 times, to create 100 surrogate data sets. For each of these, we tested for the presence of transitions, as defined above. For the 100 runs, the average number of epochs per run was 2.3, with a standard deviation of 0.88. Thus, the number of epochs in the original data set (8) differed by > 6 SDs from the mean of the surrogate data values, suggesting that the observed value did not occur strictly by chance.
- We re-ran our simulation 15 times, each time seeding with a different random number. We created a surrogate data set by recombining 452 spike trains randomly selected from these 15 runs. This ensured that the spike trains of a given run were physiologically realistic but were independent of one another, as they were from different simulations. We repeated this 15 times, to create 15 surrogate data sets. The average number of epochs was 1.60, with a standard deviation of 1.12, significantly different from the 8 epochs of the original run.
- We set all synaptic weights of the model to zero, thus decoupling all neurons from one another. We stimulated as we did in the original, intact model. No epochs were seen.
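The first surrogate method, ISI shuffling, preserves each cell's interspike-interval statistics while destroying its temporal structure. A minimal sketch (function name and seeding are illustrative):

```python
import random

def shuffle_isis(spike_times, seed=None):
    """Surrogate spike train: keep the same multiset of interspike
    intervals but randomize their order (surrogate method 1)."""
    rng = random.Random(seed)
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    rng.shuffle(isis)
    out = [spike_times[0]]
    for isi in isis:
        out.append(out[-1] + isi)
    return out
```

Applying this independently to each of the 452 cells yields one surrogate data set; repeating it 100 times with different seeds yields the ensemble described above.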

Finally, we wished to understand what, at a cellular level, underlay the shifts identified above. Inspection of the data indicated that many state transitions are preceded by progressively slower firing of a number of cells, or what appears to be spike frequency adaptation (SFA). SFA is mediated by slow Ca^{++}-activated K^{+} channels, also known as SK or K_{AHP} channels, which carry the I_{AHP} (afterhyperpolarization) current [37]. To examine the degree to which SFA played a role in the network’s state transitions, we removed the K_{AHP} channels from the constituent neurons of our simulation and re-ran it, using the same one-second stimulation described above. Under this regimen, the number of state transitions decreased to one, as shown in Figure 5c.

The activity epochs seen above are suggestive of the attractor states described in the neural network literature. Briefly, according to this body of theory—articulated, for example, by Hopfield [40]—attractors are stable fixed points of the system, which correspond to minima of the network’s energy function, over all values of its state-space. It has been theorized that these states, occurring in biological neural tissue, represent memories; thus, the matrix of neural connections is the storage medium for these memories. As the epochs we have identified are transient states, in the strict sense of the term they are not “attractors”, which are defined as stable states of a dynamical system. Nonetheless, the attractor energy formulation can usefully be brought to bear.

The “classical” attractor network energy function is

$$E=-1/2\sum _{i,j}{\mathbf{J}}_{i,j}{S}_{i}{S}_{j},\phantom{\rule{0.38889em}{0ex}}\text{where}\phantom{\rule{0.16667em}{0ex}}S=\pm 1.$$

(4)

**J** is the weight matrix, where J_{ij} is the strength of the connection from neuron i to neuron j, and S_{i} represents the activation state of neuron i [40]. While such energy functions were originally discussed in terms of abstract network formulations with symmetric weights and discontinuous S_{i}s, they have also been used in the context of networks with asymmetric weights and continuously valued S_{i}s [41]. The term *energy function* comes from a physical analogy to magnetic materials [36] and has been popularized for use in neural network theory.

Intuitively speaking, Equation 4 measures the degree of correspondence between a pattern of neural activation and the “hardwired” connectivity of the network. That is, it rewards states—by assigning a lower (more negative) energy value—in which two neurons connected by an excitatory connection are both activated.

To use this energy function to determine if the stable states of our simulation had attractor characteristics, we define the following terms:

- The connectivity matrix, **W**. We created a 452 × 452 element array **W**, whose i,jth element is the strength of synaptic stimulation (excitatory or inhibitory) that neuron i exerts on neuron j. If i is a pyramidal cell and sends x synaptic contacts to j, W_{i,j} is x; if i is an interneuron and makes the same number of contacts, W_{i,j} is −x.
- The input vector, ${V}_{i}^{n}$. This 452-element vector represents the pattern of activation of each cell, denoted by subscript i, during a particular epoch, n. ${V}_{i}^{n}$ is arrived at by calculating the spike rate of cell i during epoch n, then normalizing so that all elements of ${V}_{i}^{n}$ lie on [0, 1]. This is consistent with the assumption of rate-coded information.

S_{i} above is analogous to our V_{i}; since V_{i} lies on [0, 1] rather than [−1, +1], to apply Equation 4 to our system, we modify it as shown in Equation 5. This allows us to calculate the energy level for a single epoch:

$$E=-1/2\sum _{i=1}^{452}\sum _{j=1}^{452}{\mathbf{W}}_{i,j}(2{V}_{i}-1)(2{V}_{j}-1)$$

(5)

Calculating the energy function for each of the 8 epochs in Figure 5b yielded values ranging from −55.3 × 10^{3} to −58.1 × 10^{3}, with an average of −57.1 × 10^{3}; for the two epochs of Figure 5c, values ranged from −57.6 × 10^{3} to −57.8 × 10^{3} (see Figure 6). The units, of course, are arbitrary, and there is no absolute cutoff that would “define” an attractor.
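The epoch-energy calculation of Equation 5 reduces to a single quadratic form. A minimal numpy sketch (names are illustrative):

```python
import numpy as np

def epoch_energy(W, V):
    """Energy of one activity epoch per Eq. 5: W is the signed
    connectivity matrix, V the normalized per-cell rates on [0, 1]."""
    S = 2.0 * np.asarray(V, dtype=float) - 1.0  # map rates from [0,1] to [-1,+1]
    return -0.5 * S @ np.asarray(W, dtype=float) @ S
```

As Equation 4 suggests, a pair of co-active cells joined by an excitatory (positive) weight lowers the energy, while co-activation across an inhibitory (negative) weight raises it.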

Figure 6. “Energy level” of the network over time. Graph shows energy level (in arbitrary units), calculated using the methodology described in the Results section. Solid line represents the baseline (with K_{AHP}) case; dashed line represents the without-K_{AHP} case.

To interpret these numbers, we need a measure of the system’s baseline—that is, the energy level when the system had not “selected” any particular state. Theoretically speaking, the most accurate measure of a neural system’s baseline is not entirely clear; therefore, we calculated this using three different methods:

- As described above, we created 15 surrogate data sets by re-running the model 15 times and randomly recombining spike trains from these simulations. For the 15 runs, the average energy level was −49.9 × 10^{3}, with a standard deviation of 2.3 × 10^{3}. Thus the attractor-like states showed lower energy by more than three standard deviations.
- We calculated the energy level of the disconnected version of the model described above; this eliminated the cell-to-cell influence necessary for attractor-like states. The calculated energy was −42,125.
- For all active cells of the simulation, we randomized each neuron of the system to a value between 0 and 1. (The system shows fairly sparse activity: in the course of the simulation, only 122 and 91 of the 452 neurons show activity in the with-I_{AHP} and without-I_{AHP} scenarios, respectively.) For the with-I_{AHP} case we therefore created 5 stochastic input vectors by selecting 122 cells at random and setting each to a random value between 0 and 1. The energy calculation yielded values from −32.2 × 10^{3} to −35.1 × 10^{3}, with an average of −33.4 × 10^{3}, a higher average energy level than seen in our simulation by a factor of slightly less than two. For the without-I_{AHP} case we used the same methodology, but with 91 cells; this produced values in a similar range, from −36.4 × 10^{3} to −38.6 × 10^{3}, with an average of −37.1 × 10^{3}. This is, in a sense, a very “pure” measure of the baseline state, given that initial stimulation was delivered randomly to all active neurons.
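The third baseline method can be sketched as follows, using the cell counts given in the text (function name and parameter defaults are illustrative):

```python
import numpy as np

def random_baseline_vector(n_cells=452, n_active=122, seed=None):
    """Stochastic baseline input vector (baseline method 3): choose
    n_active of n_cells cells at random and assign each a uniform
    random rate on [0, 1); all other cells stay silent (0)."""
    rng = np.random.default_rng(seed)
    V = np.zeros(n_cells)
    idx = rng.choice(n_cells, size=n_active, replace=False)
    V[idx] = rng.uniform(0.0, 1.0, size=n_active)
    return V
```

Feeding five such vectors through the epoch-energy calculation of Equation 5 gives the stochastic baseline described above.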

The above results are shown graphically in Figure 6. As can be seen, our original system’s energy was consistently lower than baseline by each of the methods above, suggesting attractor characteristics.

The power of computational modeling lies in its ability to elucidate the relationship between cellular-level phenomena and system-level behaviors. A simulation does not need to contain all aspects of the physical thing being modeled in order to be useful. Indeed, as Mel [60] points out, *in vitro* brain slices are in fact crude models of intact brain (and the source of much of the data underlying our current understanding of brain function): a puff of glutamate is taken to model stimulation in vivo, and LTP is used as a model of learning. The particular biological details that can be abstracted away depend on the purpose(s) of the model.

Protopapas et al [68] describe a process of model development and “tuning” to ensure the simulation is an accurate representation of the tissue being modeled. The model output used for confirmation purposes should be related to, but not identical with, the ultimate model behavior of interest. We compared spike rates and ISI distributions generated by the model to those seen in laboratory experiments. While it is difficult to compare our model’s functioning to a particular activity state of a live animal (e.g., sleeping vs. running a maze vs. other behaviors), it showed generally good agreement with the experimental data: average spike rates for pyramidal cells were within the range of values seen in the literature, as were average spike rates for interneurons (Figure 3a). As important, the distributions of average spike rates for the cell populations in both cases showed good agreement (Figure 3b). The same was true for ISI distributions (Figure 4). Thus, our model seemed to capture the essential qualities of the neural tissue and its spiking properties.

The tendency of the model to transition from stable state to stable state in a relatively discrete manner is analogous to a behavior of neural tissue that is increasingly being appreciated, based on *in vivo* [16,46,54,57], *in vitro* [17,22,24], and tissue culture [5] studies. For example, studies of locust antennal lobe neurons [4,51] indicate that specific odors are encoded by the sequential activation and deactivation of particular subsets of neurons. *In vivo* multi-unit recording studies of monkey prefrontal cortex [1,74] similarly showed the existence of a number of stable configurations of firing activity; the relatively abrupt changes between these states were marked by simultaneous changes in firing rates of several neurons. Of note, these states “flipped” one to three times per second [1], similar to the rates seen in our computational model. Studies in gustatory cortex of behaving rats [46] and slice studies from mouse visual cortex [17] have shown similar results—that percepts may be encoded as particular *sequences* of neural ensembles, not just one particular pattern of activation.

As the above experimental studies indicate, a good deal of this work has focused on sensory cortex. However, more recent integrative, theoretically oriented work [e.g., 69] suggests that such neural mechanisms may underlie cognitive as well as perceptual processes. Indeed, according to this work, understanding the *trajectory* of a neural system through state space—that is, how ensembles of neural activity evolve over time—may be more useful than simple measurements of steady state or periodic processes. The hippocampal model we have described is consistent with this line of analysis.

Our model did not exhibit theta frequency (4–8 Hz) activity, which is prominently seen in neurophysiologic studies of CA1. The precise mechanism of the generation of this rhythm is not known with certainty [13]. A long line of research has suggested that it depends on hippocampal connections with the medial septum-diagonal band of Broca (MS-DBB) [e.g., 20,88]. Other subcortical structures have also been suggested, either as pacemakers to CA1 or as areas crucial in feedback loops with hippocampus [e.g., 10,86]. More local interactions (e.g., between CA3 and entorhinal cortex) [72] have been postulated as well. As inter-areal communications appear important for the generation of CA1 theta, and our model does not include such connections, it is not surprising that such oscillations are absent.

An interesting emergent behavior of our model is its tendency to undergo “shifts”, even in the absence of external stimulation. To see if the periods between shifts showed attractor characteristics, an energy function was calculated for these states, as described in Results (Equations 4–5). The between-shift stable states consistently showed lower energy levels than baseline, by a number of different measures, suggesting that they are attractor-like states. In the theoretical neural network modeling paradigm, researchers generally “train” a network with particular patterns (“memories”), then demonstrate that it forms attractors at these configurations. An interesting finding of our study is that a network that is not trained *a priori*, but is constructed based solely on actual neuroanatomic data, can show attractor dynamics. Of note, our computational results are consistent with recent experimental work, particularly that of Sasaki et al [73]. They studied hippocampal slice cultures using multineuron calcium imaging; their analysis of cell spiking activity using an energy function framework revealed that these states had attractor characteristics.

This work suggests that the activity of a particular ion channel may underlie this behavior. The calcium-dependent potassium current, often abbreviated I_{K(Ca)} or I_{K,Ca}, can be divided into three subtypes based on voltage dependence, Ca^{++} sensitivity, pharmacology, and conductance: the BK (“big K(Ca)”), or I_{C}, channel; the IK channel; and the SK, or slow AHP channel, abbreviated K_{AHP}, which is responsible for spike frequency adaptation (SFA) [37]. The marked decrease in the number of state shifts in the absence of simulated K_{AHP} conductance suggests that spike frequency adaptation may, at least in part, be responsible for this behavior. One interpretation of the network’s shifts is that, as a group of mutually excitatory neurons maintains activation, a number of them begin to “fatigue out” via decreased spiking; when a sufficient number do so, that pattern loses prominence, and the network transitions to another stable state.

What do our findings suggest about the manner in which hippocampus carries out its cognitive functions? Previous modeling studies incorporating spike frequency adaptation [75] have pointed to the importance of this cellular level phenomenon in understanding cognitive processing. This is supplemented by pharmacologic studies on laboratory animals that have been carried out using apamin, an agent that specifically blocks the SK channel [52]. It was shown that apamin enhances LTP *in vitro* in area CA1 of rat hippocampus [6]. Consistent with this are numerous studies showing that apamin improves performance on memory tasks in laboratory animals [21,26,62,85]. While the experimental protocols in these studies varied, in general, apamin seemed to improve the learning, or memory encoding, aspect of the task, as opposed to the recall, or retrieval subtask. Further supporting evidence comes from Hasselmo et al [35]. These researchers, utilizing both experimental and computational approaches, showed that in CA3, increased levels of acetylcholine—whose cellular level effects include the suppression of SFA, among other things—set the conditions for learning in hippocampus, and decreased levels set the conditions for recall.

The precise manner in which decreased SFA could lead to increased memory performance is not entirely clear. We saw that computational blockade of the K_{AHP} channel led to system-level behavior featuring two approximately three-second stable states, as opposed to eight in the control case. An attractor state is characterized by the firing of a particular subset of relatively highly interconnected cells. LTP requires the near-simultaneous firing of a pre- and post-synaptic neuron—it has been estimated that the postsynaptic cell must show depolarization within approximately 100 ms of presynaptic activation for LTP to occur [50]. One could speculate that longer attractor states are associated with greater rates of LTP. While the precise mechanism of such an effect is unclear, our model does provide support for the idea that attractor formation is important in the memory encoding process. Furthermore, it is expressed at a degree of biological realism such that it can produce testable hypotheses; these can then be verified or falsified, leading to further model refinements. It could usefully be used in conjunction with the growing body of *in vitro* and *in vivo* data indicating that “metastability” [73] or sequencing [44] of attractor states is crucial to understanding brain function. More generally, our study suggests that central to understanding how hippocampus encodes memories is an appreciation of *spatiotemporal* dynamics—the manner in which the system transitions from state to state over time.

State transitions of this kind have been discussed in the theoretical neural network literature. A mathematical framework for understanding their dynamics is offered by Sompolinsky and Kanter [80], in a paper discussing the ability of asymmetrically connected neural networks to store and recall sequences of spatial patterns of activation. Their formulation is reviewed briefly below:

The state of the network at time t can be described by S_{i}(t), i = 1, …, N, where N is the number of neurons in the system. Conceptually, synaptic connections can be divided into those that are symmetric, and will thus tend to maintain the system in its current state, termed ${J}_{ij}^{(1)}$, and those that are asymmetric, and will tend to push the network into a new state, termed ${J}_{ij}^{(2)}$. In both cases, *J_{ij}* denotes the synaptic strength of the connection from j to i. Each neuron i is then subject to the local field

$${h}_{i}^{(1)}(t)=\sum _{j=1}^{N}{J}_{ij}^{(1)}{S}_{j}(t)$$

(6)

as well as

$${h}_{i}^{(2)}(t)=\sum _{j=1}^{N}{J}_{ij}^{(2)}\overline{{S}_{j}}(t).$$

(7)

(While the biologically realistic model that we have described does not have neurons with threshold functions in a strict sense, we can think of them as units that will fire to produce an output when a sufficiently high level of dendritic stimulation is achieved.)
$\overline{{S}_{j}}$ above is a function embodying a dynamic memory characterized by a time decay constant τ, meaning that h_{i}^{(2)}(t) averages the activity state over a time ~ τ. Thus, h^{(2)} will induce a transition to state μ(t+1) only after the system has stayed in state μ(t) for a period of time of order τ. Sompolinsky and Kanter [80] state that networks containing asymmetric synaptic connections with markedly longer response times could, in the manner outlined above, give rise to temporal pattern generation.
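The scheme of Equations 6–7 can be sketched in discrete time. In this sketch the sign threshold and the leaky running average used for $\overline{{S}_{j}}$ are simplifying assumptions, not part of the original formulation:

```python
import numpy as np

def step(S, S_bar, J1, J2, tau, dt=1.0):
    """One update of a Sompolinsky-Kanter-style network (cf. Eqs. 6-7):
    symmetric weights J1 act on the instantaneous state S, while
    asymmetric weights J2 act on the running average S_bar (time
    constant tau), which eventually pushes the system to a new state."""
    h = J1 @ S + J2 @ S_bar                  # total local field, h^(1) + h^(2)
    S_new = np.where(h >= 0, 1.0, -1.0)      # threshold units, S = +/-1
    S_bar_new = S_bar + (dt / tau) * (S_new - S_bar)  # leaky average ~ tau
    return S_new, S_bar_new
```

With J2 = 0 the network behaves as a standard symmetric attractor network; nonzero J2 destabilizes a state only after it has persisted for a time of order τ.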

Our model suggests a measurable, neurobiological characteristic that could underlie $\overline{{S}_{j}}$: internal neuronal Ca^{++} concentration. As [Ca^{++}] increases over time, the prominence of the K_{AHP} current increases, because this conductance is a function only of [Ca^{++}]; the increased K_{AHP} conductance causes spiking to slow, and finally terminate.
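As a toy illustration of this proposed mechanism, a K_{AHP} conductance that depends only on [Ca^{++}] might be sketched as a saturating function; the Hill-type form and the constants are assumptions of this sketch, not the model's actual channel kinetics:

```python
def kahp_conductance(ca, g_max=1.0, k_d=0.5):
    """Sketch: K_AHP conductance as a saturating, monotonically
    increasing function of internal [Ca++] alone (illustrative form).
    As [Ca++] accumulates during sustained firing, the conductance
    grows, spiking slows, and the active pattern eventually fades."""
    return g_max * ca / (ca + k_d)
```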

Models like ours can be used to better understand the cellular-level etiologies of neuropsychiatric illnesses. For example, modeling studies of schizophrenia [38,39,75] have indicated the way in which decreased neural connectivity can give rise to some of its clinical manifestations. While the precise hippocampal lesion is unknown, a number of abnormalities at a neuroanatomic level have been identified. For example, in postmortem studies, Benes and others [8] have found deficiencies in interneurons in schizophrenic hippocampus in several subfields [7]. Others have quantified this by subtype, finding, for example, an approximate 50% reduction in parvalbumin-staining interneurons in CA1 [91]. Furthermore, as reviewed by Coyle [18] and Meador-Woodruff and Healy [58], there are also abnormalities in a number of glutamate receptor subtypes in schizophrenic hippocampus. “Virtual experiments” based on these findings could help elucidate the underlying cause of this and other illnesses.

I would like to thank Steven Matthysse for mentoring work on this project. This research was supported by funding from the NIMH (5K08MH072771 and Conte Center Grant 5P50MH060450), and a NARSAD Young Investigator Award.


1. Abeles M, Bergman H, Gat I, Meilijson I, Seidemann E, Tishby N, Vaadia E. Cortical flips among quasi-stationary states. Proceedings of the National Academy of Sciences of the USA. 1995;92:8616–8620. [PubMed]

2. Amit DJ. Modeling Brain Function: The World of Attractor Neural Networks. Cambridge University Press; New York: 1989. p. 504.

3. Barnes CA, McNaughton BL, Mizumori SJY, Leonard BW. Comparison of spatial and temporal characteristics of neuronal activity in sequential stages of hippocampal processing. Progress in Brain Research. 1990;83:287–300. [PubMed]

4. Bazhenov M, Stopfer M, Rabinovich M, Abarbanel HDI, Sejnowski T, Laurent G. Model of cellular and network mechanisms for odor-evoked temporal patterning in the locust antennal lobe. Neuron. 2001;30 [PMC free article] [PubMed]

5. Beggs JM, Plenz D. Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. The Journal of Neuroscience. 2004;24:5216–5229. [PubMed]

6. Behnisch T, Reymann KG. Inhibition of apamin-sensitive calcium dependent potassium channels facilitate the induction of long-term potentiation in the CA1 region of the rat hippocampus in vitro. Neuroscience Letters. 1998;253:91–94. [PubMed]

7. Benes FM, Berretta S. GABAergic interneurons: implications for understanding schizophrenia and bipolar disorder. Neuropsychopharmacology. 2001;25:1–27. [PubMed]

8. Benes FM, Khan Y, Vincent SL, Wickramasinghe R. Differences in the subregional and cellular distribution of GABA_{A} receptor binding in the hippocampal formation of schizophrenic brain. Synapse. 1996;22:338–349. [PubMed]

9. Berghius P, Dobszay MB, Ibanez RM, Ernfors P, Harkany T. Turning the heterogeneous into homogeneous: studies on selectively isolated GABAergic interneuron subsets. International Journal of Developmental Neuroscience. 2004;22:533–543. [PubMed]

10. Bland BH. Physiology and pharmacology of hippocampal formation theta rhythms. Progress in Neurobiology. 1986;26:1–54. [PubMed]

11. Boss BD, Turlejski K, Stanfield BB, Cowan WM. On the numbers of neurons in fields CA1 and CA3 of the hippocampus of Sprague-Dawley and Wistar rats. Brain Res. 1987;406:280–7. [PubMed]

12. Bower JM, Beeman D. The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System. Springer-VerlagTelos; Santa Clara, CA: 1998.

13. Buzsaki G. Theta oscillations in the hippocampus. Neuron. 2002;33:325–340. [PubMed]

14. Cenquizca LA, Swanson LW. Spatial organization of direct hippocampal field CA1 axonal projections to the rest of cerebral cortex. Brain Research Reviews. 2007;56:1–26. [PMC free article] [PubMed]

15. Churchward PR, Butler EG, Finkelstein DI, Aumann TD, Sudbury A, Horne MK. A comparison of methods used to detect changes in neuronal discharge patterns. Journal of Neuroscience Methods. 1997;76:203–210. [PubMed]

16. Compte A, Constantinidis C, Tegner J, Raghavachari S, Chafee MV, Goldman-Rakic PS, Wang XJ. Temporally Irregular Mnemonic Persistent Activity in Prefrontal Neurons of Monkeys During a Delayed Response Task. Journal of Neurophysiology. 2003;90:3441–3454. [PubMed]

17. Cossart R, Aronov D, Yuste R. Attractor dynamics of network UP states in the neocortex. Nature. 2003;423:283–288. [PubMed]

18. Coyle JT. The GABA-glutamate connection in schizophrenia: which is the proximal cause? Biochemical Pharmacology. 2004;68:1507–1514. [PubMed]

19. Crank J, Nicolson P. A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. Proceedings of the Cambridge Philosophical Society. 1947;43:50–67.

20. Denham MJ, Borisyuk RM. A model of theta rhythm production in the septal-hippocampal system and its modulation by ascending brain stem pathways. Hippocampus. 2000;10:698–716. [PubMed]

21. Deschaux O, Bizot JC, Goyffon M. Apamin improves learning in an object recognition task in rats. Neuroscience Letters. 1997;222:159–162. [PubMed]

22. Durstewitz D, Gabriel T. Dynamical basis of irregular spiking in NMDA-driven prefrontal cortex neurons. Cerebral Cortex. 2007;17:894–908. [PubMed]

23. Durstewitz D, Seamans JK, Sejnowski TJ. Neurocomputational models of working memory. Nature Neuroscience. 2000;3:1184–1191. [PubMed]

24. Fellous J, Tiesinga PHE, Thomas PJ, Sejnowski TJ. Discovering spike patterns in neuronal responses. The Journal of Neuroscience. 2004;24:2898–3001. [PMC free article] [PubMed]

25. Ferraguti F, Cobden P, Pollard M, Cope D, Shigemoto R, Watanabe M, Somogyi P. Immunolocalization of metabotropic glutamate receptor 1α (mGluR1α) in distinct classes of interneurons in the CA1 region of the rat hippocampus. Hippocampus. 2004;14:193–215. [PubMed]

26. Fournier C, Kourrish S, Soumireu-Mourat B, Mourre C. Apamin improves reference memory but not procedural memory in rats by blocking small conductance Ca^{2+} activated K^{+} channels in an olfactory discrimination task. Behavioural Brain Research. 2001;121:81–93. [PubMed]

27. Fox SE, Ranck JB. Electrophysiological characteristics of hippocampal complex-spike cells and theta cells. Experimental Brain Research. 1981;41:399–410. [PubMed]

28. Frank LM, Brown EN, Wilson MA. A comparison of the firing properties of putative excitatory and inhibitory neurons from CA1 and the entorhinal cortex. Journal of Neurophysiology. 2001;86:2029–2040. [PubMed]

29. Freund TF, Buzsaki G. Interneurons of the hippocampus. Hippocampus. 1996;6:347–470. [PubMed]

30. Grun S, Diesmann M, Aertsen A. Unitary events in multiple single-neuron spiking activity: detection and significance. Neural Computation. 2001;14:43–80. [PubMed]

31. Gulyas AI, Freund TF. Pyramidal cell dendrites are the primary targets of calbindin D28k-immunoreactive interneurons in the hippocampus. Hippocampus. 1996;6:525–34. [PubMed]

32. Gulyas AI, Hajos N, Freund TF. Interneurons containing calretinin are specialized to control other interneurons in the rat hippocampus. J Neurosci. 1996;16:3397–411. [PubMed]

33. Halasy K, Buhl EH, Lorinczi Z, Tamas G, Somogyi P. Synaptic Target Selectivity and Input of GABAergic Basket and Bistratified Interneurons in the CA1 Area of the Rat Hippocampus. Hippocampus. 1996;6:306–329. [PubMed]

34. Hasselmo ME, Eichenbaum H. Hippocampal mechanisms for the context-dependent retrieval of episodes. Neural Networks. 2005;18:1172–1190. [PMC free article] [PubMed]

35. Hasselmo ME, Schnell E, Barkai E. Dynamics of learning and recall at excitatory recurrent synapses and cholinergic modulation in rat hippocampal region CA3. Journal of Neuroscience. 1995;15:5249–5262. [PubMed]

36. Hertz J, Krogh A, Palmer RG. Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Co; Reading, MA: 1991. p. 327.

37. Hille B. Ion Channels of Excitable Membranes. 3. Sinauer Associates, Inc; Sunderland, MA: 2001.

38. Hoffman RE. Computer simulations of neural information processing and the schizophrenia-mania dichotomy. Archives of General Psychiatry. 1987;44:178–88. [PubMed]

39. Hoffman RE, Dobscha SK. Cortical pruning and the development of schizophrenia: a computer model. Schizophrenia Bulletin. 1989;15:477–490. [PubMed]

40. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America. 1982;79:2554–2558. [PubMed]

41. Hopfield JJ. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences of the United States of America. 1984;81:3088–3092. [PubMed]

42. Hosseini-Sharifabad M, Nyengaard JR. Design-based estimation of neuronal number and individual neuronal volume in the rat hippocampus. Journal of Neuroscience Methods. 2007;162:206–214. [PubMed]

43. Ibarz JM, Makarova I, Herreras O. Relation of apical dendritic spikes to output decision in CA1 pyramidal cells during synchronous activation: a computational study. European Journal of Neuroscience. 2006;23:1219–1233. [PubMed]

44. Ikegaya Y, Aaron G, Cossart R, Aronov D, Lampl I, Ferster D, Yuste R. Synfire chains and cortical songs: temporal modules of cortical activity. Science. 2004;304:559–564. [PubMed]

45. Jinno S, Kosaka T. Cellular architecture of the mouse hippocampus: A quantitative aspect of chemically defined GABAergic neurons with stereology. Neuroscience Research. 2006;56:229–245. [PubMed]

46. Jones LM, Fontanini A, Sadacca BF, Miller P, Katz DB. Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles. Proceedings of the National Academy of Sciences of the USA. 2007;104:18772–18777. [PubMed]

47. Kager H, Wadman WJ, Somjen GG. Seizure-like afterdischarges simulated in a model neuron. Journal of Computational Neuroscience. 2007;22:105–128. [PubMed]

48. Katona I, Acsady L, Freund TF. Postsynaptic targets of somatostatin-immunoreactive interneurons in the rat hippocampus. Neuroscience. 1999;88:37–55. [PubMed]

49. Knowles WD, Schwartzkroin PA. Axonal ramifications of hippocampal CA1 pyramidal cells. Journal of Neuroscience. 1981;1:1236–41. [PubMed]

50. Larson J, Lynch G. Theta pattern stimulation and the induction of LTP: the sequence in which synapses are stimulated determines the degree to which they potentiate. Brain Research. 1989;489:49–58. [PubMed]

51. Laurent G, Stopfer M, Freidrich RW, Rabinovich M, Volkovskii A, Abarbanel HDI. Odor encoding as an active, dynamical process: experiments, computation, and theory. Annual Review of Neuroscience. 2001;24:263–297. [PubMed]

52. Liegeois J-F, Mercier F, Graulich A, Graulich-Lorge F, Scuvee-Moreau J, Seutin V. Modulation of small conductance calcium-activated potassium (SK) channels: a new challenge in medicinal chemistry. Current Medicinal Chemistry. 2003;10:625–647. [PubMed]

53. Liu Y, Denton JM, Frykberg BP, Nelson RJ. Detecting neuronal activity changes using an interspike interval algorithm compared with using visual inspection. Journal of Neuroscience Methods. 2006;155:49–55. [PubMed]

54. Luczak A, Bartho P, Marguet SL, Buzsaki G, Harris KD. Sequential structure of neocortical spontaneous activity *in vivo*. Proceedings of the National Academy of Sciences of the USA. 2007;104:347–352. [PubMed]

55. Matyas F, Freund TF, Gulyas AI. Convergence of excitatory and inhibitory inputs onto CCK-containing basket cells in the CA1 area of the rat hippocampus. European Journal of Neuroscience. 2004;19:1243–1256. [PubMed]

56. Matyas F, Freund TF, Gulyas AI. Immunocytochemically defined interneuron populations in the hippocampus of mouse strains used in transgenic technology. Hippocampus. 2004;14:460–481. [PubMed]

57. Mazor O, Laurent G. Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron. 2005;48:661–673. [PubMed]

58. Meador-Woodruff JH, Healy DJ. Glutamate receptor expression in schizophrenic brain. Brain Research - Brain Research Reviews. 2000;31:288–294. [PubMed]

59. Megias M, Emri ZS, Freund TF, Gulyas AI. Total number and distribution of inhibitory and excitatory synapses on hippocampal CA1 pyramidal cells. Neuroscience. 2001;102:527–540. [PubMed]

60. Mel BW. In the brain, the model is the goal. Nature Neuroscience. 2000;3:1183.

61. Menschik ED, Finkel LH. Neuromodulatory control of hippocampal function: towards a model of Alzheimer’s disease. Artificial Intelligence in Medicine. 1998;13:99–121. [PubMed]

62. Messier C, Mourre C, Bontempi B, Sif J, Lazdunski M, Destrade C. Effect of apamin, a toxin that inhibits Ca2+ dependent K+ channels, on learning and memory processes. Brain Research. 1991;551:322–326. [PubMed]

63. Migliore M, Cook EP, Jaffe DB, Turner DA, Johnston D. Computer simulations of morphologically reconstructed CA3 hippocampal neurons. Journal of Neurophysiology. 1995;73:1157–1168. [PubMed]

64. Nunez PL. Electric fields of the brain: The neurophysics of EEG. Oxford University Press; New York: 1981. p. 484.

65. Pauluis Q, Baker SN. An accurate measure of the instantaneous discharge probability, with application to unitary joint-event analysis. Neural Computation. 2000;12:647–669. [PubMed]

66. Pawelzik H, Hughes DI, Thomson AM. Physiological and morphological diversity of immunocytochemically defined parvalbumin- and cholecystokinin-positive interneurones in CA1 of the adult rat hippocampus. Journal of Comparative Neurology. 2002;443:346–367. [PubMed]

67. Perez Y, Morin F, Beaulieu C, Lacaille JC. Axonal sprouting of CA1 pyramidal cells in hyperexcitable hippocampal slices of kainate-treated rats. European Journal of Neuroscience. 1996;8:736–748. [PubMed]

68. Protopapas AD, Vanier M, Bower JM. Simulating Large Networks of Neurons. In: Segev I, editor. Methods in Neuronal Modeling: From Ions to Networks. The MIT Press; Cambridge, MA: 1999. pp. 461–498.

69. Rabinovich M, Huerta R, Laurent G. Transient dynamics for neural processing. Science. 2008;321:48–50. [PubMed]

70. Ranck JB. Studies on single neurons in dorsal hippocampal formation and septum of unrestrained rats. I. Behavioral correlates and firing repertoires. Experimental Neurology. 1973;41:461–555. [PubMed]

71. Rolls ET. An attractor network in the hippocampus: Theory and neurophysiology. Learning and Memory. 2007;14:714–731. [PubMed]

72. Sabolek HR, Penley SC, Hinman JR, Bunce JG, Markus EJ, Escabi MA, Chrobak JJ. Theta and gamma coherence along the septotemporal axis of the hippocampus. Journal of Neurophysiology. 2008 [PubMed]

73. Sasaki T, Matsuki N, Ikegaya Y. Metastability of active CA3 networks. The Journal of Neuroscience. 2007;27:517–528. [PubMed]

74. Seidemann E, Meilijson I, Abeles M, Bergman H, Vaadia E. Simultaneously recorded single units in the frontal cortex go through sequences of discrete and stable states in monkeys performing a delayed localization task. The Journal of Neuroscience. 1996;16:752–768. [PubMed]

75. Siekmeier PJ, Hoffman RE. Enhanced semantic priming in schizophrenia: a computer model based on excessive pruning of local connections in association cortex. British Journal of Psychiatry. 2002;180:345–350. [PubMed]

76. Sik A, Penttonen M, Ylinen A, Buzsaki G. Hippocampal CA1 interneurons: an in vivo intracellular labeling study. Journal of Neuroscience. 1995;15:6651–6665. [PubMed]

77. Soltesz I. Diversity in the Interneuronal Machine: Order and Variability in Interneuronal Microcircuits. Oxford University Press; New York: 2006. p. 238.

78. Somogyi P, Klausberger T. Defined types of cortical interneurone structure space and spike timing in the hippocampus. Journal of Physiology. 2005;562:9–26. [PubMed]

79. Somogyi P, Tamas G, Lujan R, Buhl EH. Salient features of synaptic organization in the cerebral cortex. Brain Research Reviews. 1998;26:113–135. [PubMed]

80. Sompolinsky H, Kanter I. Temporal association in asymmetric neural networks. Physical Review Letters. 1986;57:2861–2864. [PubMed]

81. Traub RD, Jefferys JG, Miles R, Whittington MA, Toth K. A branching dendritic model of a rodent CA3 pyramidal neurone. Journal of Physiology (London) 1994;481:79–95. [PubMed]

82. Traub RD, Jefferys JG, Whittington MA. Fast Oscillations in Cortical Circuits. The MIT Press; Cambridge, MA: 1999.

83. Traub RD, Miles R. Pyramidal cell-to-inhibitory cell spike transduction explicable by active dendritic conductances in inhibitory cell. Journal of Computational Neuroscience. 1995;2:291–298. [PubMed]

84. Traub RD, Pais I, Bibbig A, LeBeau FEN, Buhl EH, Hormuzdi SG, Monyer H, Whittington MA. Contrasting roles of axonal (pyramidal cell) and dendritic (interneuron) electrical coupling in the generation of neuronal network oscillations. Proceedings of the National Academy of Sciences of the United States of America. 2003;100:1370–1374. [PubMed]

85. van der Staay FJ, Fanelli RJ, Blokland A, Schmidt BH. Behavioral effects of apamin, a selective inhibitor of the SK_{Ca}-channel, in mice and rats. Neuroscience and Biobehavioral Reviews. 1999;23:1087–1110. [PubMed]

86. Vertes RP, Kocsis B. Brainstem-diencephalo-septohippocampal systems controlling the theta rhythm of the hippocampus. Neuroscience. 1997;81:893–926. [PubMed]

87. Wallenstein GV, Eichenbaum H, Hasselmo ME. The hippocampus as an associator of discontiguous events. Trends in Neuroscience. 1998;21:317–323. [PubMed]

88. Wang X-J. Pacemaker neurons for the theta rhythm and their synchronization in the septo-hippocampal reciprocal loop. Journal of Neurophysiology. 2002;87:889–900. [PubMed]

89. Wills TJ, Lever C, Cacucci F, Burgess N, O’Keefe J. Attractor dynamics in the hippocampal representation of the local environment. Science. 2005;308:873–876. [PMC free article] [PubMed]

90. Yoshida M, Hayashi H. Emergence of sequence sensitivity in a hippocampal CA3-CA1 model. Neural Networks. 2007;20:653–667. [PubMed]

91. Zhang ZJ, Reynolds GP. A selective decrease in the relative density of parvalbumin-immunoreactive neurons in the hippocampus in schizophrenia. Schizophrenia Research. 2002;55:1–10. [PubMed]