The manner in which the hippocampus processes neural signals is thought to be central to the memory encoding process. A theoretically oriented literature has suggested that this is carried out via “attractors”, or distinctive spatio-temporal patterns of activity. However, these ideas have not been thoroughly investigated using computational models featuring both realistic single-cell physiology and detailed cell-to-cell connectivity. Here we present a 452-cell simulation based on Traub et al.’s pyramidal cell and interneuron models, incorporating patterns of synaptic connectivity based on an extensive review of the neuroanatomic literature. When stimulated with a one-second, physiologically realistic input, our simulated tissue shows the ability to hold activity on-line for several seconds; furthermore, its spiking activity, as measured by frequency and interspike interval (ISI) distributions, resembles that of in vivo hippocampus. An interesting emergent property of the system is its tendency to transition from stable state to stable state, a behavior consistent with recent experimental findings. Inspection of spike trains and simulated blockade of KAHP channels suggest that this is mediated by spike frequency adaptation. This finding, in conjunction with studies showing that apamin, a KAHP channel blocker, enhances memory consolidation in laboratory animals, suggests that the formation of stable attractor states is central to the process by which memories are encoded. Ways in which this methodology could shed light on the etiology of mental illness, such as schizophrenia, are discussed.
The hippocampus is thought to be central in the process of encoding episodic memories, but the manner in which it performs this task is unknown. In light of this, and the fact that hippocampal dysfunction has been implicated in a number of neuropsychiatric disorders, including Alzheimer’s Disease, seizure disorder, and schizophrenia, a large number of neurobiological studies have focused on this area, producing vast amounts of highly detailed information. Computational modeling is a research tool that allows one to understand how a large number of variables at the neurophysiologic and neuroanatomic level combine to produce the emergent behaviors of a system; it is thus a natural method for attempting to better understand brain function.
It is not surprising that a number of computational models of the hippocampus have been made, ranging from the single-cell to the network level. Researchers have described multicompartment single-cell models of hippocampal pyramidal neurons that faithfully reproduce transmembrane voltage patterns of cells recorded in vitro [43,47,63,81]. Traub et al constructed a model of hippocampal area CA3 that reproduces the gamma-modulated theta oscillatory behavior seen in this brain area. Menschik and Finkel modeled CA3 and included the effects of acetylcholine to better understand the etiology of Alzheimer’s Disease. Many [34,61,90] have used network-level modeling to attempt to understand function more directly. For example, Wallenstein et al present a model that is capable of performing the transitive inference task, shedding light on how this might be carried out at the level of long-term potentiation (LTP).
Durstewitz et al reviewed a number of neurocomputational models suggesting that attractor states—that is, stable spatial patterns of neuronal activation—of various kinds play a role in the encoding of working memory in prefrontal cortex. Researchers have noted that hippocampal subfields CA1 [e.g., 89] and CA3 [e.g., 71] have the neuroanatomic substrate to behave in this way, and there is a body of theoretical work indicating that attractor states [2,40], or the sequencing of such states [44,73], are important in mnemonic activity. The question we aim to address is: Does hippocampal tissue encode information in this way, or some related way? Inasmuch as it has been suggested that the hippocampus carries out its function by in some manner transforming the information flowing through it, we examine the fluid, temporal qualities of these patterns: do distinctive stable states persist over time, as some have suggested, and if so, what characterizes the transitional phenomena? The recent explosion of detailed neuroanatomic studies has allowed us to incorporate a level of detail that has not been feasible in previous models. Indeed, ours is the first hippocampal simulation based on an exhaustive review of the neurobiological literature—i.e., the first “tissue level” model—and the first to faithfully reproduce spiking patterns seen in vivo.
We found that our hippocampal model appears to form multiple stable states. Furthermore, the simulation suggests that the phenomenon of spike frequency adaptation (SFA) at the single-cell level underlies the transitions from state to state. Psychopharmacologic studies in which the KAHP channel (which subserves SFA) is blocked produce what appears to be enhanced memory storage capacity in laboratory animals, as well as increased LTP. The analogous “virtual lesion” study in our model produces fewer, longer stable states. This simulation points to the importance of attractor-like states in the memory encoding process.
An explicit goal of our modeling approach was to include as much biological realism as possible, particularly at the level of neural connectivity, but also at the level of the individual neuron. The overall computer simulation consisted of 400 pyramidal cells and 52 interneurons. We wished to include a number of neurons large enough to reveal the emergent properties of the hippocampal tissue, but within the constraints of our computing hardware. We used the 64-compartment pyramidal cell model described by Traub et al, and the 46-compartment interneuron model implemented by Menschik and Finkel, which is an adaptation of the Traub and Miles model. Both models feature realistic dendritic arborizations, and incorporate Na+, Ca++, K+DR, K+AHP, K+C, and K+A channels distributed along the somato-dendritic axis, as well as an explicit representation of internal Ca++ concentration. Axon initial segments were explicitly modeled as compartments, but axons themselves were operationalized simply as delays. The number of neurons in the overall computer simulation is of an order of magnitude similar to those of previously published computational models. The standard for adequate network size, however, is not an arbitrary number of cells—rather, an important criterion is that the network exhibit behaviors seen in actual tissue; evidence of this is given in the Results section.
This model was explicitly conceptualized as a three-dimensional section of hippocampal tissue, extending 154 microns in the septo-temporal direction, 154 microns in the transverse direction, and 634 microns in the dimension orthogonal to these, extending from the stratum lacunosum-moleculare to the alveus. Each soma has an x, y, and z location in the 3-space defined by these axes (Figure 1).
The model was constructed, and will be described, in two steps: first, specifying the spatial positions of the various cells of the model, and second, representing their patterns of connectivity.
Densities of interneuron subtypes, by calcium binding protein, in the various hippocampal strata (stratum oriens, pyramidale, radiatum, lacunosum-moleculare) were calculated from data given in Freund and Buzsaki and Jinno and Kosaka. Freund and Buzsaki’s study (their Figure 23) indicates areal densities for 60-micron hippocampal slices; from this, we calculated spatial densities (see Table 1). The breakdown of model interneurons is as follows: 16 parvalbumin (PV) cells, 6 calbindin (CB) cells, 19 calretinin (CR) cells, 7 somatostatin (SOM) cells, and 4 cholecystokinin (CCK) cells. For all interneuron subtypes, we use the same dendritic morphology and dendritic ion channel distribution. While it is certain that these subtypes differ in dendritic morphology and neurophysiology, we felt this was a reasonable approximation, particularly given the large number of neurobiological unknowns involved for each subgroup. In the model, the calcium-binding subtypes are distinguished from one another on the basis of axonal projection characteristics, as described below.
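The conversion from the published areal densities to the spatial densities used in the model can be sketched as follows. The numerical inputs here are illustrative placeholders, not the counts from Freund and Buzsaki’s Figure 23 or our Table 1; only the 60-micron slice thickness and the 154 × 154-micron footprint come from the text.

```python
# Sketch of the areal-to-spatial density conversion described above.
# An areal density counted through the face of a slice of known thickness
# becomes a spatial density when divided by that thickness.

SLICE_THICKNESS_UM = 60.0  # thickness of the hippocampal slices (microns)

def spatial_density(areal_density_per_um2: float) -> float:
    """Convert cells per um^2 of slice face to cells per um^3."""
    return areal_density_per_um2 / SLICE_THICKNESS_UM

def expected_cell_count(areal_density_per_um2: float,
                        volume_um3: float) -> float:
    """Expected number of cells of a subtype in a block of tissue."""
    return spatial_density(areal_density_per_um2) * volume_um3

# e.g., a stratum slab with the model's 154 x 154 um footprint and an
# illustrative 60 um height:
layer_volume = 154.0 * 154.0 * 60.0
```

Multiplying each subtype’s spatial density by the volume of the relevant stratum in the model block yields the per-subtype cell counts.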
Density of pyramidal cell bodies in the stratum pyramidale of CA1 was taken from the neuroanatomic studies of Boss et al and Hosseini-Sharifabad and Nyengaard, which provide similar numerical estimates. For each cell, x, y, and z coordinates within the relevant stratum were randomly generated.
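The random placement of somata within strata can be sketched as below. The stratum z-boundaries are illustrative stand-ins (only the overall 154 × 154 × 634-micron block dimensions come from the text), and the function name is ours.

```python
import random

# Minimal sketch of soma placement: each cell receives uniformly random
# x, y, z coordinates within its stratum. The z ranges below are
# illustrative, not the paper's exact laminar boundaries.
STRATA_Z = {
    "oriens": (0.0, 150.0),
    "pyramidale": (150.0, 210.0),
    "radiatum": (210.0, 500.0),
    "lacunosum-moleculare": (500.0, 634.0),
}
X_EXTENT = Y_EXTENT = 154.0  # septo-temporal and transverse extents (um)

def place_soma(stratum: str, rng: random.Random) -> tuple:
    z_lo, z_hi = STRATA_Z[stratum]
    return (rng.uniform(0.0, X_EXTENT),
            rng.uniform(0.0, Y_EXTENT),
            rng.uniform(z_lo, z_hi))

rng = random.Random(42)
pyramidal_somata = [place_soma("pyramidale", rng) for _ in range(400)]
```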
A good deal about neuronal connectivity in hippocampus remains unknown. To take maximum advantage of what is known at a neuroanatomic level, we constructed axon-to-dendrite synaptic connections according to the following algorithm: (i) Map cell categories based on calcium-binding proteins onto categories based on morphology, to the extent possible. (ii) For each morphological class, define axonal projection patterns based on synaptic targets, in terms of stratum, target cell type (pyramidal cell vs. interneuron), and synaptic target area on cell (initial segment vs. soma vs. dendrites). (iii) Calculate spatial bouton densities for the axon projection clouds of the various subtypes, and apportion synapses according to percentages of (ii) above.
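The three-step algorithm above can be sketched in code. The table fragments here are illustrative: the o-lm projection percentages are the ones quoted later in the text (Katona et al), while the PV proportions reflect the 10%/45%/45% split described below; the full mappings live in Tables 2a and 2b.

```python
# Schematic of the three-step connectivity algorithm.

# (i) calcium-binding category -> morphological class, with proportions
CA_TO_MORPH = {
    "PV": [("basket", 0.45), ("chandelier", 0.45), ("bistratified", 0.10)],
    "CCK": [("basket", 1.0)],
    "SOM": [("o-lm", 1.0)],
}

# (ii) morphological class -> projection pattern: (stratum, target cell
# type, target region) -> fraction of that class's synapses
PROJECTION = {
    "o-lm": {("SL", "pyramidal", "dendrite"): 0.86,
             ("SL", "pyramidal", "soma"): 0.03,
             ("SL", "interneuron", "soma"): 0.11},
}

def apportion(morph_class: str, total_boutons: int) -> dict:
    """(iii) Split a cell's bouton budget across its synaptic targets."""
    return {target: round(frac * total_boutons)
            for target, frac in PROJECTION[morph_class].items()}
```

Step (iii)’s bouton budget itself comes from the spatial bouton densities described further below.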
At a summary level, interneuron connectivity was based on the information shown in Figure 2, which represents the distillation by Freund and Buzsaki of a large number of neuroanatomic studies. The precise manner in which the interneuron categorization based on calcium binding proteins, used here and in much of the current literature, maps onto traditional categories based on morphology (e.g., basket cells, chandelier cells, etc.) is only beginning to become clear. However, to the extent that relationships are known, we attempted to include them in the model. For example, as indicated in Figure 2, PV interneurons project to stratum pyramidale and the proximal area of stratum oriens. It is thought that interneurons that stain for PV are almost entirely basket cells or chandelier cells, though the precise percentage of each is not known. Nonetheless, we felt that it would be more realistic to assume two distinct subgroups of PV interneurons—one projecting densely to pyramidal cell somata, the other to pyramidal cell initial segments—than to assume a homogeneous category of soma/IS-projecting interneurons.
The manner in which the calcium binding categories included in our model—which are largely non-overlapping—were allocated to one or more morphological categories is described below. All of the following is summarized in Table 2a.
Many researchers have found hippocampal PV cells to be either basket cells or chandelier cells [e.g., 9,78]. One group found that a small number of a sample of 36 PV-staining interneurons in CA1 had a bistratified axon projection pattern. It was therefore assumed that 10% of all PV cells are bistratified, with the remainder evenly divided between basket and chandelier cells.
Researchers have found that CB cells tend to innervate the proximal and distal dendritic trees of interneurons in CA1. Freund and Buzsaki divide CB cells into three subtypes: (a) cells with a bistratified axonal arborization, projecting to SR (stratum radiatum) and SO (stratum oriens) but respecting SP (stratum pyramidale); (b) radiatum-projecting cells, which we assume to be the “type I” cells described by Gulyas and Freund, and whose somata are said to lie predominantly in SR and only occasionally in SP and SL (stratum lacunosum-moleculare); and (c) horizontal cells at the SR–SL border, whose axonal arborization is poorly characterized. Therefore, of the CB cells whose somata lie in SR, 50% were assumed to be bistratified and 50% radiatum-projecting. CB cells elsewhere were assumed to be bistratified, so that our categories would be entirely non-overlapping. Cells that stained for both CB and SOM were classified as SOM cells; approximately 87% of CB-staining cells in SO (and 0% of cells in other strata) fall into this category.
In CA1, SOM cells are said to correspond to oriens-lacunosum moleculare (o-lm) cells. This was taken as a one-to-one correspondence in our model.
There is an extensive literature suggesting that CCK-immunoreactive cells are generally basket cells [29,55]. One labeling study found a minority of a sample of 27 CCK-immunoreactive cells to be bistratified cells, Schaffer collateral-associated cells, or of other non-basket morphologies. Given the preponderance of data, we assumed that all CCK cells in the model are basket cells.
We define interneurons’ morphological categories in terms of axon projection pattern. (As mentioned above, we use a single canonical model for interneuron physiology and dendritic morphology.) These data are generally taken from studies in which an axon of a particular interneuron type is labeled with an anterograde tracer, allowing visualization of the entire axon with all of its ramifications. For example, Katona et al found that o-lm cells project only to SL and that, of the total number of synaptic connections they form there, 86% are on dendrites of pyramidal cells, 3% are on pyramidal cell somata, and 11% are on interneuron somata; quantitatively similar results were obtained by Matyas et al. Analogous data were derived from comparable experiments on the other interneuron classes (basket cells, bistratified cells, and oriens-lacunosum-moleculare cells), as well as on pyramidal cells.
In terms of computational implementation, we chose a data structure consistent with the level of detail available in the neuroanatomic literature. Thus, as shown in Table 2b, for each neuron type we are able to specify the stratum to which it projects, whether the target cell is a PC or an interneuron (but not which subtype of interneuron), and the region of the target cell on which synapses fall (IS, cell body, or dendrites). In rare cases, however, data at a higher degree of resolution are available. For example, Gulyas et al found that CR interneurons synapse on dendrites of other interneurons, with the following caveats: (a) CR cells avoid PV interneurons; and (b) when CR interneurons contact other CR cells, they do so at both dendrites and somata. Given the data structure of the model, it was not possible to include these facts in the current simulation.
Quantitative estimates of spatial synaptic density for each of the interneuron classes were derived from the work of Sik et al. These researchers anterogradely stained the axonal projections of a number of different classes of interneurons. Making several tissue sections, they calculated a total (linear) axonal distance per volume, as well as a bouton frequency, in boutons per length of axon. From these data, a spatial bouton (synapse) density can be calculated. In our model, we made the following assumptions: (i) For a given cell, bouton density depends on the particular type of interneuron, as defined by the calcium binding protein expressed by the cell; the breakdown of synaptic targets—or, cast another way, the probability that a bouton of a given cell in a given stratum will contact a particular target—is given by the figures of Table 2b, as described above. (ii) Spatial bouton density does not vary with septo-temporal or transverse distance from the parent cell. While Sik et al show that densities do decrease as distance from the soma increases to, for example, a mm or more, given the small scale of our model this is likely a safe assumption. (iii) For a given interneuron type, bouton density does not vary from stratum to stratum. The values used in the model, in synapses per µm³, are as follows: PV 2.12 × 10⁻⁵, CB 1.13 × 10⁻⁴, CR 6.36 × 10⁻³, SOM 4.21 × 10⁻⁵, CCK 2.12 × 10⁻⁵.
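The bouton-density calculation described above reduces to a product of the two measured quantities. The numerical inputs below are illustrative placeholders, not Sik et al.’s measurements.

```python
# Sketch of the spatial bouton-density calculation: (axon length per
# unit tissue volume) x (boutons per unit axon length) gives boutons
# per unit volume. Example values are illustrative only.

def bouton_density(axon_len_per_vol_um_per_um3: float,
                   boutons_per_um_axon: float) -> float:
    """Spatial bouton density, in boutons per um^3."""
    return axon_len_per_vol_um_per_um3 * boutons_per_um_axon

# e.g., 1e-4 um of axon per um^3 of tissue, one bouton every 5 um of axon:
density = bouton_density(1e-4, 0.2)
```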
The neuroanatomic literature [14,49] suggests that CA1 axons course through stratum oriens toward the alveus, giving off occasional collaterals. In our model, we assume that pyramidal-to-pyramidal connections occur in stratum oriens (that is, on cells’ basal dendrites). This should be contrasted with area CA3, where recurrent excitatory connectivity is well known to be considerably denser.
It should be noted that postsynaptic targets, as percentages, are not known with a high degree of certainty for pyramidal cells. Somogyi et al state that CA1 pyramidal cells synapse on other pyramidal cells as well as on basket cells and SOM cells; synapses onto axo-axonic, bistratified, and other interneuron types are thought to be possible, but not yet proven. We therefore assume projection percentages (PCs vs. interneurons) in rough proportion to their presence in SO.
Using a methodology similar to that described for interneurons, the spatial density of varicosities in pyramidal cell axons was calculated from data given in Perez et al and Knowles and Schwartzkroin; this was calculated to be 9.09 × 10⁻⁴ synapses per µm³. It is well known that pyramidal-to-pyramidal connectivity is considerably lower in CA1 than in CA3, though few quantitative estimates of this have been made.
While we aimed to include a high level of biological realism in our model, particularly in the area of connectivity, certain simplifying assumptions were made. All computational models necessarily make abstractions. Which details to include and not include must be based on the purpose of the computational model in question; it is best to explicitly acknowledge the assumptions made.
We used a pyramidal cell model designed as a CA3 cell, and there are some differences between the electrophysiological response properties of CA1 and CA3 PCs. For example, in contrast to CA1 cells, CA3 cells show action potentials that decrease in amplitude as repetitive spiking occurs. Also, it is thought that CA1 cells show greater SFA than CA3 cells. Thus, while this may introduce a degree of error in the model, given the direction of the error, it is tolerable. That is, our model may understate the degree of SFA at play in CA1, and it is possible that the SFA-mediated effect we describe could in actuality be greater.
Our model also does not include gap junctions. Traub et al created a network model incorporating gap junctions between dendrites, and showed that these junctions underlie oscillatory activity, particularly in the gamma range. However, the primary focus of our study was the elucidation of system-wide stable states and their intermittent transitions over several seconds. Gamma-range (40 cycles per second) activity operates at a markedly different time scale, and we felt safe omitting it, given our behavior of interest.
Synaptic conductances were assumed to obey a dual exponential function, as follows:

gsyn(t) = A · gmax (e^(−t/τ1) − e^(−t/τ2))     (Equation 1)

where A is a normalization constant chosen such that gsyn reaches a maximum of gmax.
We assumed all interneurons form GABAergic synapses on their target cells. We assumed that pyramidal cells form excitatory synapses on their target cells, either AMPA or NMDA, with a 0.5 probability for each. Channel characteristics that were used in our model, in terms of values given in Equation 1, are presented in Table 3.
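The dual exponential conductance of Equation 1 can be sketched as below. The time constants and peak conductance are placeholder values, not the parameters of Table 3; the normalization constant A is computed so that the peak equals gmax, as the text specifies.

```python
import math

# Minimal sketch of the dual-exponential synaptic conductance
# (Equation 1). tau1 > tau2; parameter values are illustrative.

def dual_exp_conductance(t: float, gmax: float,
                         tau1: float, tau2: float) -> float:
    """g_syn(t) = A * gmax * (exp(-t/tau1) - exp(-t/tau2)), with the
    normalization A chosen so the peak conductance equals gmax."""
    if t < 0.0:
        return 0.0
    # analytic time of the peak of the difference of exponentials
    t_peak = (tau1 * tau2 / (tau1 - tau2)) * math.log(tau1 / tau2)
    peak = math.exp(-t_peak / tau1) - math.exp(-t_peak / tau2)
    a = 1.0 / peak  # normalization constant A
    return a * gmax * (math.exp(-t / tau1) - math.exp(-t / tau2))
```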
Because we were interested in understanding behaviors that are properties of the tissue itself, our model is intentionally self-contained: all synaptic stimulation to cells (aside from the initial stimulation) arises from the other cells of the model. A by-product of the fact that our model represents a very small piece of tissue is that the amount of innervation received by a given cell is considerably lower than that received by a hippocampal cell in vivo: for example, actual CA1 PCs receive about 30,000 excitatory and 1,700 inhibitory inputs; the number of synaptic inputs received by PCs in our model was orders of magnitude lower than this. To compensate, we multiplied gmax for all synaptic conductances by a constant scaling or “weight” factor. We arrived at this multiplier by gradually increasing it and re-running the model until pyramidal cell and interneuron transmembrane voltages showed realistic-appearing spike waveforms; the factor we used was 300. We were reassured in our choice of weight factor by subsequent statistical analysis (see Results) indicating reasonable agreement between basic model behaviors and actual tissue.
In contrast to some modeling efforts, our intention was not to impress upon the network patterned stimuli implicitly containing information, or the output of a single neuroanatomic track. Rather, we wished to present the network with an input stimulus that was biologically realistic yet “neutral”, with the emphasis on understanding how this tissue processed and transformed the information. Thus, we placed a “dummy” AMPA channel in each cell soma of the model (D. Beeman, personal communication, May 2002), and activated each randomly, with a mean activation rate of 15 Hz; this stimulation was applied for one second, and the model was allowed to run for an additional five seconds of brain time. Transmembrane potential from the soma of each neuron was recorded.
The question of initial conditions is a difficult one, as any set of assumptions about starting values for transmembrane potentials, values of Hodgkin-Huxley channel gates, etc., introduces bias into the functioning of the model; ideally, one would select a starting state that is stochastic, but biologically plausible. Therefore, we allowed the system to start in a state in which all variables were set to 0; we then applied the random stimulation as described above. This produced unnatural behaviors briefly which damped out within about 0.4 second, and spike waveforms became normal-appearing. We took this 0.4 second mark as the starting point of the simulation, given that it would make for stochastic, but still neurophysiologically realistic, initial conditions. Data from the preceding “transient” period was excluded from our analysis.
We also placed “virtual electrodes” in 16 locations throughout the simulated tissue block to record local field potential (Figure 1). We calculated LFPs using the following equation, given by Nunez and adapted for use in neurocomputational modeling:

Φ = (1 / 4πs) Σi (Imi / Ri)

where Φ is the field potential in volts, s is the conductivity of the medium surrounding the neurons, in 1/Ωm, Imi is the transmembrane current in amperes across the ith neural compartment, and Ri is the distance from the ith neural compartment to the recording electrode. The sum is taken over every computational compartment of every neuron in the simulation.
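The point-source LFP sum can be sketched directly. Compartment positions, currents, and the conductivity value below are placeholders, not the model’s actual parameters.

```python
import math

# Sketch of the point-source LFP estimate: each compartment's
# transmembrane current contributes I / (4 * pi * sigma * R) at the
# electrode, summed over all compartments of all neurons.

def lfp(electrode_xyz, compartments, sigma=0.3):
    """compartments: iterable of ((x, y, z), I_m) pairs, I_m in amperes;
    sigma in 1/(ohm*m); coordinates in meters. Returns volts."""
    total = 0.0
    for (x, y, z), i_m in compartments:
        r = math.dist(electrode_xyz, (x, y, z))
        total += i_m / (4.0 * math.pi * sigma * r)
    return total
```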
The simulation was implemented using the GENESIS neural modeling language and run under Linux on a dual-processor PC. The individual pyramidal cell and interneuron models were ported to GENESIS by Sampat and Huerta (http://www.genesis-sim.org/BABEL/babeldirs/cells) and by Menschik and Finkel, respectively; both are available at the aforementioned URL. Programs to specify cell placement and connectivity were written in the C programming language by the author. These were designed to be maximally flexible: the parameter files to the programs, in the form of Tables 1 and 2, can be updated as additional neuroanatomic information becomes available.
We used the Crank-Nicolson numerical integration method, with an integration time step of 0.125 msec. This was among the fastest numerical integration methods available in the GENESIS software and, with the time step used, did not produce significant error. A simulation of 2.5 seconds of “neuronal time” required about 24 hours of computer time. For questions on programming details, readers are invited to email the author at psiekmeier/at/mclean.harvard.edu.
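To illustrate the scheme (not GENESIS’s internal implementation), a Crank-Nicolson update for a passive membrane equation dV/dt = −(V − Erest)/τ averages the derivative between the old and new time points, which is what gives the method its stability at time steps like the 0.125 msec used here. All numerical values are illustrative.

```python
# Illustrative Crank-Nicolson step for dV/dt = -(V - E_rest)/tau.
# Averaging the derivative over the old and new states and solving for
# the new V gives the closed-form update below.

def crank_nicolson_step(v: float, dt: float, tau: float,
                        e_rest: float) -> float:
    a = dt / (2.0 * tau)
    return ((1.0 - a) * v + 2.0 * a * e_rest) / (1.0 + a)

# relax from -55 mV toward -70 mV with dt = 0.125 ms, tau = 10 ms
v = -55.0
for _ in range(1000):
    v = crank_nicolson_step(v, 0.125, 10.0, -70.0)
```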
We wished to quantitatively compare the behavior of our model to that of hippocampi in vivo—to the extent there was agreement, we would feel reassured about the biological realism of the simulation. We did this by comparing model outputs to actual physiological recordings using three measures: average spike rates for pyramidal cells and interneurons, distribution of neurons’ spike rates, and distribution of interspike intervals (ISIs).
We found the average spike rate for all pyramidal cells in the model to be 2.45 Hz, and for all interneurons to be 25.6 Hz. We took 0 V to be spike threshold. Model outputs compared well with published in vivo values, as summarized in Figure 3a. Perhaps more important is the distribution of spike rates seen. Figure 3b shows a comparison of a frequency distribution for mean firing rates of neurons in rat hippocampus experimentally vs. outputs from our model for both pyramidal cells and interneurons. A similar comparison for ISIs is presented in Figure 4. Agreement is good in both cases.
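The validation statistics above can be sketched as follows: spikes are detected as upward crossings of the 0 V threshold mentioned in the text, and mean rates and ISIs follow directly. Function names are ours; the traces below are toy data.

```python
# Sketch of the spike-train statistics used for validation.

def spike_times(vm: list, dt: float, threshold: float = 0.0) -> list:
    """Times of upward threshold crossings in a sampled voltage trace."""
    return [i * dt for i in range(1, len(vm))
            if vm[i - 1] < threshold <= vm[i]]

def mean_rate(spikes: list, duration_s: float) -> float:
    """Mean firing rate in Hz."""
    return len(spikes) / duration_s

def isis(spikes: list) -> list:
    """Interspike intervals between successive spike times."""
    return [b - a for a, b in zip(spikes, spikes[1:])]
```

Pooling the per-cell rates and ISIs across the population gives the distributions compared in Figures 3b and 4.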
At distinct moments in the 6-second run, system-wide “shifts” in the network’s overall pattern of firing appeared; these changes were reflected in the local field potential. The grey vertical bars of Figure 5b indicate these transitions, which were arrived at using the following three-step algorithm:
When using parameter values for sequential ISI ratio, time window, and threshold of 26, 45, and 11, respectively, the system exhibits 7 activity shifts defining 8 activity epochs. A representative LFP trace from a virtual electrode in stratum pyramidale is shown in Figure 5a; correspondence between LFP shifts and the transitions indicated in Figure 5b is apparent.
To make sure that the simulation run we analyzed approximated a steady state condition, as opposed to a continuation of the initial transient, we allowed the system to run for an extended period of time (68 simulated seconds). During this period it produced 74 epochs, or 1.09 epochs/second on average, similar to the 1.33 epochs/second of the initial (analyzed) run. Significantly, the rate at which epochs occurred was roughly constant throughout the extended run (data not shown).
To ensure that the transitions observed in our analyses did not occur strictly by chance, we created and evaluated surrogate data sets using three different methodologies, as described below:
Finally, we wished to understand what, at a cellular level, underlay the shifts identified above. Inspection of the data indicated that many state transitions are preceded by the progressively slower firing of a number of cells, or what appears to be spike frequency adaptation (SFA). SFA is mediated by slow Ca2+-activated K+ channels, also known as SK or KAHP channels, which carry the IAHP (afterhyperpolarization) current. To examine the degree to which SFA played a role in the network’s state transitions, we removed the IAHP channels from the constituent neurons of our simulation and re-ran it, using the same one-second stimulation described above. Under this regimen, the number of state transitions decreased to one, as shown in Figure 5c.
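The “progressively slower firing” signature of SFA can be read off a spike train as a run of growing interspike intervals. This is an illustrative detector, not the paper’s detection algorithm; the ratio threshold and example trains are placeholders.

```python
# Sketch of how spike frequency adaptation shows up in a spike train:
# under SFA, successive ISIs grow, so the ratio of each ISI to its
# predecessor stays above 1.

def isi_ratios(spike_times: list) -> list:
    """Ratios of successive interspike intervals."""
    intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return [b / a for a, b in zip(intervals, intervals[1:])]

def shows_adaptation(spike_times: list,
                     ratio_threshold: float = 1.1) -> bool:
    """True if every successive ISI grows by more than the threshold."""
    ratios = isi_ratios(spike_times)
    return bool(ratios) and all(r > ratio_threshold for r in ratios)

# an adapting train (ISIs of 10, 15, 22, 33 ms) vs. a regular one
adapting = [0.0, 10.0, 25.0, 47.0, 80.0]
regular = [0.0, 10.0, 20.0, 30.0]
```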
The activity epochs seen above are suggestive of the attractor states described in the neural network literature. Briefly, according to this body of theory—articulated, for example, by Hopfield—attractors are stable fixed points of the system, which correspond to minima of the network’s energy function over all values of its state-space. It has been theorized that these states, occurring in biological neural tissue, represent memories; thus, the matrix of neural connections is the storage medium for these memories. As the epochs we have identified are transient states, they are not “attractors” in the strict sense of the term, which is reserved for stable states of a dynamical system. Nonetheless, the attractor energy formulation can usefully be brought to bear.
The “classical” attractor network energy function is

E = −(1/2) Σi Σj Jij Si Sj     (Equation 4)

where J is the weight matrix, Jij is the strength of the connection from neuron i to neuron j, and Si represents the activation state of neuron i. While such energy functions were originally discussed in terms of abstract network formulations with symmetric weights and discontinuous Si’s, they have also been used in the context of networks with asymmetric weights and continuously valued Si’s. The term “energy function” comes from a physical analogy to magnetic materials and has been popularized for use in neural network theory.
Intuitively speaking, Equation 4 measures the degree of correspondence between a pattern of neural activation and the “hardwired” connectivity of the network. That is, it rewards states—by assigning a lower (more negative) energy value—in which two neurons connected by an excitatory connection are both activated.
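This intuition can be checked on a toy network. The two-neuron weight matrix below is illustrative; the computation is a direct transcription of Equation 4.

```python
# Sketch of the Equation 4 energy computation for a small network:
# E = -(1/2) * sum_ij J_ij * S_i * S_j. States that agree with the
# excitatory wiring receive lower (more negative) energy.

def network_energy(j: list, s: list) -> float:
    n = len(s)
    return -0.5 * sum(j[a][b] * s[a] * s[b]
                      for a in range(n) for b in range(n))

# two mutually excitatory neurons: the co-active state has lower energy
# than the state in which only one is active
J = [[0.0, 1.0],
     [1.0, 0.0]]
e_both_on = network_energy(J, [1, 1])
e_mixed = network_energy(J, [1, -1])
```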
To use this energy function to determine if the stable states of our simulation had attractor characteristics, we define the following terms:
Si above is analogous to our Vi; since Vi lies on [0, 1] rather than [−1, +1], to apply Equation 4 to our system we modify it as shown in Equation 5, which allows us to calculate the energy level for a single epoch:

E = −(1/2) Σi Σj Jij Vi Vj     (Equation 5)
Calculating the energy function for each of the 8 epochs in Figure 5b yielded values ranging from −55.3 × 10³ to −58.1 × 10³, with an average of −57.1 × 10³; for the two epochs of Figure 5c, values ranged from −57.6 × 10³ to −57.8 × 10³ (see Figure 6). The units, of course, are arbitrary, and there is no absolute cutoff that would “define” an attractor.
To interpret these numbers, we need a measure of the system’s baseline—that is, the energy level when the system had not “selected” any particular state. Theoretically speaking, the most accurate measure of a neural system’s baseline is not entirely clear; therefore, we calculated this using three different methods:
The above results are shown graphically in Figure 6. As can be seen, our original system’s energy was consistently lower than baseline by each of the methods above, suggesting attractor characteristics.
The power of computational modeling lies in its ability to elucidate the relationship between cellular-level phenomena and system-level behaviors. A simulation does not need to contain all aspects of the physical thing being modeled in order to be useful. Indeed, as Mel points out, in vitro brain slices are in fact crude models of intact brain (and the source of much of the data underlying our current understanding of brain function): a puff of glutamate is taken to model stimulation in vivo, and LTP is used as a model of learning. The particular biological details that can be abstracted away depend on the purpose(s) of the model.
Protopapas et al describe a process of model development and “tuning” to ensure that a simulation is an accurate representation of the tissue being modeled. The model output used for confirmation purposes should be related to, but not identical with, the ultimate model behavior of interest. We compared spike rates and ISIs generated by the model to those seen in laboratory experiments. While it is difficult to compare our model’s functioning to a particular activity state of a live animal (e.g., sleeping vs. running a maze vs. other behaviors), it showed generally good agreement with the experimental data: average spike rates for pyramidal cells were within the range of values seen in the literature, as were average spike rates for interneurons (Figure 3a). As important, the distributions of average spike rates for the cell populations in both cases showed good agreement (Figure 3b). The same was true for ISI distributions (Figure 4). Thus, our model seemed to capture the essential qualities of the neural tissue and its spiking properties.
The tendency of the model to transition from stable state to stable state in a relatively discrete manner is analogous to a behavior of neural tissue that is increasingly being appreciated, based on in vivo [16,46,54,57], in vitro [17,22,24], and tissue culture studies. For example, studies of locust antennal lobe neurons [4,51] indicate that specific odors are encoded by the sequential activation and deactivation of particular subsets of neurons. In vivo multi-unit recording studies of monkey prefrontal cortex [1,74] similarly showed the existence of a number of stable configurations of firing activity; the relatively abrupt changes between these states were marked by simultaneous changes in the firing rates of several neurons. Of note, these states “flipped” one to three times per second, similar to the rates seen in our computational model. Studies in the gustatory cortex of behaving rats and slice studies from mouse visual cortex have shown similar results—that percepts may be encoded as particular sequences of neural ensembles, not just one particular pattern of activation.
As the above experimental studies indicate, a good deal of this work has focused on sensory cortex. However, more recent integrative, theoretically oriented work [e.g., 69] suggests that such neural mechanisms may underlie cognitive as well as perceptual processes. Indeed, according to this work, understanding the trajectory of a neural system through state space—that is, how ensembles of neural activity evolve over time—may be more useful than simple measurements of steady state or periodic processes. The hippocampal model we have described is consistent with this line of analysis.
Our model did not exhibit theta frequency (4–8 Hz) activity, which is prominently seen in neurophysiologic studies of CA1. The precise mechanism of the generation of this rhythm is not known with certainty. A long line of research has suggested that it depends on hippocampal connections with the medial septum-diagonal band of Broca (MS-DBB) [e.g., 20,88]. Other subcortical structures have also been suggested, either as pacemakers to CA1 or as areas crucial in feedback loops with hippocampus [e.g., 10,86]. More local interactions (e.g., between CA3 and entorhinal cortex) have also been postulated. As inter-areal communications appear important for the generation of CA1 theta, and our model does not include such connections, it is not surprising that such oscillations are absent.
An interesting emergent behavior of our model is its tendency to undergo “shifts”, even in the absence of external stimulation. To see if the periods between shifts showed attractor characteristics, an energy function was calculated for these states, as described in Results (Equations 4–5). The between-shift stable states consistently showed lower energy levels than baseline, by a number of different measures, suggesting that they are attractor-like states. In the theoretical neural network modeling paradigm, researchers generally “train” a network with particular patterns (“memories”), then demonstrate that it forms attractors at these configurations. An interesting finding of our study is that a network that is not trained a priori, but is constructed solely from actual neuroanatomic data, can show attractor dynamics. Of note, our computational results are consistent with recent experimental work, particularly that of Sasaki et al., who studied hippocampal slice cultures using multineuron calcium imaging; their analysis of cell spiking activity within an energy function framework revealed that spontaneously recurring network states had attractor characteristics.
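For readers unfamiliar with the energy-function approach, a generic Hopfield-style sketch can illustrate why a recurring population state registers as a low-energy configuration. This is not Equations 4–5 themselves: the couplings here are a simple Hebbian estimate from binarized activity, and all data are synthetic.

```python
import numpy as np

def energy(W, s):
    """Hopfield-style energy of a binary (+1/-1) population state s."""
    return -0.5 * s @ W @ s

def hebbian_couplings(S):
    """Couplings estimated as mean pairwise co-activation over time.
    S: (frames, cells) array of +1/-1 binarized spiking activity."""
    W = S.T @ S / len(S)
    np.fill_diagonal(W, 0.0)   # no self-coupling
    return W

rng = np.random.default_rng(0)
n_cells = 60
pattern = rng.choice([-1.0, 1.0], n_cells)        # a recurring population state
flips = rng.random((400, n_cells)) < 0.1          # 10% of cells flip per frame
S = np.where(flips, -pattern, pattern)            # noisy revisits of the pattern

W = hebbian_couplings(S)
e_pattern = energy(W, pattern)                    # energy of the recurring state
e_random = np.mean([energy(W, rng.choice([-1.0, 1.0], n_cells))
                    for _ in range(100)])         # baseline: arbitrary states
```

Because the couplings were shaped by the recurring state, `e_pattern` comes out well below the baseline `e_random`, which is the signature used to call a state attractor-like.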
This work suggests that the activity of a particular ion channel may underlie this behavior. The calcium-dependent potassium current, often abbreviated IK(Ca) or IK,Ca, can be divided into three subtypes based on voltage dependence, Ca++ sensitivity, pharmacology, and conductance: the BK (“big K(Ca)”), or IC, channel; the IK channel; and the SK, or slow AHP, channel, abbreviated KAHP, which is responsible for spike frequency adaptation (SFA). The marked decrease in the number of state shifts in the absence of simulated KAHP conductance suggests that spike frequency adaptation may, at least in part, be responsible for this behavior. One interpretation of the network’s shifts is that as a group of mutually excitatory neurons maintains activation, a number of them begin to “fatigue out” via decreased spiking; when a sufficient number do so, that pattern loses prominence, and the network transitions to another stable state.
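The fatigue mechanism can be shown in a deliberately reduced single-cell sketch: an adaptive integrate-and-fire unit, not the Traub model used in this paper, with all parameter values chosen purely for illustration.

```python
def adapting_neuron(g_ahp_max, t_max=500.0, dt=0.1):
    """Leaky integrate-and-fire cell with a Ca++-gated K+ (AHP) conductance.
    Each spike increments intracellular [Ca]; g_AHP tracks [Ca] alone, so
    accumulated Ca progressively slows firing (spike frequency adaptation)."""
    v_rest, v_thresh, v_reset, e_k = -65.0, -50.0, -65.0, -90.0
    tau_m, tau_ca, drive = 10.0, 100.0, 20.0   # ms, ms, constant input (mV)
    v, ca, spikes = v_rest, 0.0, []
    for step in range(int(t_max / dt)):
        g_ahp = g_ahp_max * ca                         # depends only on [Ca]
        dv = (-(v - v_rest) - g_ahp * (v - e_k) + drive) / tau_m
        v += dv * dt
        ca -= ca / tau_ca * dt                         # [Ca] decays slowly
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
            ca += 1.0                                  # Ca influx per spike
    return spikes

control = adapting_neuron(g_ahp_max=0.05)
blocked = adapting_neuron(g_ahp_max=0.0)   # simulated KAHP blockade
```

With the AHP conductance present, successive ISIs lengthen as [Ca] accumulates; with it "blocked", the cell fires at a constant, higher rate—the single-cell analogue of the network-level difference between the control and KAHP-blockade simulations.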
What do our findings suggest about the manner in which hippocampus carries out its cognitive functions? Previous modeling studies incorporating spike frequency adaptation have pointed to the importance of this cellular-level phenomenon in understanding cognitive processing. This is supplemented by pharmacologic studies carried out on laboratory animals using apamin, an agent that specifically blocks the SK channel. It was shown that apamin enhances LTP in vitro in area CA1 of rat hippocampus. Consistent with this are numerous studies showing that apamin improves performance on memory tasks in laboratory animals [21,26,62,85]. While the experimental protocols in these studies varied, in general apamin seemed to improve the learning, or memory encoding, aspect of the task, as opposed to the recall, or retrieval, subtask. Further supporting evidence comes from Hasselmo et al. These researchers, utilizing both experimental and computational approaches, showed that in CA3, increased levels of acetylcholine—whose cellular-level effects include the suppression of SFA, among other things—set the conditions for learning in hippocampus, while decreased levels set the conditions for recall.
The precise manner in which decreased SFA could lead to increased memory performance is not entirely clear. We saw that computational blockade of the KAHP channel led to system-level behavior featuring two approximately three-second stable states, as opposed to eight in the control case. An attractor state is characterized by the firing of a particular subset of relatively highly interconnected cells. LTP requires the near-simultaneous firing of a pre- and post-synaptic neuron: it has been estimated that the postsynaptic cell must show depolarization within approximately 100 ms of presynaptic activation for LTP to occur. One could speculate that longer attractor states are therefore associated with greater rates of LTP. While the precise mechanism of such an effect is unclear, our model does provide support for the idea that attractor formation is important in the memory encoding process. Furthermore, it is expressed at a degree of biological realism such that it can produce testable hypotheses; these can then be verified or falsified, leading to further model refinements. It could be used in conjunction with the growing body of in vitro and in vivo data indicating that “metastability” or sequencing of attractor states is crucial in understanding brain function. More generally, our study suggests that central to understanding how hippocampus encodes memories is an appreciation of spatiotemporal dynamics—the manner in which the system transitions from state to state over time.
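The speculation can be made semi-quantitative with a toy calculation: count pre→post spike pairs falling within the LTP coincidence window for two co-active cells during stable states of different durations. The firing rate, window, and durations below are illustrative assumptions, not values fitted to the model.

```python
import numpy as np

rng = np.random.default_rng(1)
RATE = 0.01     # spikes/ms (10 Hz); assumed firing rate within an attractor
WINDOW = 100.0  # ms; approximate coincidence window for LTP induction

def poisson_train(duration_ms):
    """Homogeneous Poisson spike train spanning one stable state."""
    n = rng.poisson(RATE * duration_ms)
    return np.sort(rng.uniform(0.0, duration_ms, n))

def ltp_pairs(pre, post, window=WINDOW):
    """Count pre->post spike pairs with post following pre within the window."""
    return sum(1 for tp in pre for tq in post if 0.0 < tq - tp <= window)

def mean_pairs(duration_ms, trials=200):
    return np.mean([ltp_pairs(poisson_train(duration_ms),
                              poisson_train(duration_ms))
                    for _ in range(trials)])

short_mean = mean_pairs(500.0)    # brief stable state
long_mean = mean_pairs(3000.0)    # extended stable state, as under KAHP blockade
```

For a window much shorter than the state, the expected pair count grows roughly linearly with state duration, so a three-second attractor offers several times more LTP-eligible coincidences than a half-second one.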
State transitions of this kind have been discussed in the theoretical neural network literature. A mathematical framework for understanding their dynamics is offered by Sompolinsky and Kanter, in a paper discussing the ability of asymmetrically connected neural networks to store and recall sequences of spatial patterns of activation. Their formulation is reviewed briefly below:
The state of the network at time t can be described by Si(t), i = 1, …, N, where N is the number of neurons in the system. Conceptually, synaptic connections can be divided into those that are symmetrical, and will thus tend to maintain the system in its current state, which are termed Jij(1), and those that are asymmetric, and will tend to push the network into a new state, which are termed Jij(2). In both cases, Jij denotes the synaptic strength of the connection from j to i. Si will fire when a threshold level of input is achieved; that is, Si(t + 1) = sgn[hi(t)], where hi(t) = hi(1)(t) + hi(2)(t) and

hi(1)(t) = Σj Jij(1) Sj(t)

as well as

hi(2)(t) = Σj Jij(2) S̄j(t), where S̄j(t) = (1/τ) Σt′≤t exp[−(t − t′)/τ] Sj(t′).

(While the biologically realistic model that we have described does not have neurons with threshold functions in a strict sense, we can think of them as units that will fire to produce an output when a sufficiently high level of dendritic stimulation is achieved.) S̄j(t) above is a function embodying a dynamic memory characterized by a time decay constant τ, meaning that hi(2)(t) averages the activity state over time ~ τ. Thus, h(2) will induce a transition to state μ(t+1) only after the system has stayed in state μ(t) for a period of time of order τ. Sompolinsky and Kanter state that networks containing asymmetric synaptic connections with markedly longer response times could, in the manner outlined above, give rise to temporal pattern generation.
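A minimal simulation of the Sompolinsky–Kanter scheme shows the slow field S̄ driving sequential transitions between stored patterns. The network size, pattern count, gain on the asymmetric couplings, and τ below are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, TAU, GAIN = 200, 8.0, 2.5   # cells; slow-field time constant; asymmetric gain
patterns = rng.choice([-1.0, 1.0], (3, N))   # stored sequence xi^0 -> xi^1 -> xi^2

J1 = patterns.T @ patterns / N                   # symmetric, state-stabilizing
J2 = GAIN * patterns[1:].T @ patterns[:-1] / N   # asymmetric, sequence-inducing

S = patterns[0].copy()    # start the network in the first stored pattern
sbar = np.zeros(N)        # slow field: running average of S over ~TAU steps
overlaps = []
for t in range(60):
    sbar += (S - sbar) / TAU            # h(2) sees this time-averaged state
    S = np.sign(J1 @ S + J2 @ sbar)     # S_i(t+1) = sgn[h_i(t)]
    overlaps.append(patterns @ S / N)   # similarity to each stored pattern
overlaps = np.array(overlaps)           # overlaps[t, mu]
```

The network dwells in each pattern for a few multiples of τ before the accumulated slow field pushes it to the next, halting in the final pattern, which has no stored successor.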
Our model suggests a measurable, neurobiological characteristic that could underlie such a slowly accumulating variable: internal neuronal Ca++ concentration. As [Ca++] increases over time, the prominence of the KAHP current increases, because this conductance is a function only of [Ca++]. Increased KAHP causes spiking to slow, and finally terminate.
Models like ours can be used to better understand the cellular-level etiologies of neuropsychiatric illnesses. For example, modeling studies of schizophrenia [38,39,75] have indicated the way in which decreased neural connectivity can give rise to some of its clinical manifestations. While the precise hippocampal lesion in schizophrenia is unknown, a number of abnormalities at the neuroanatomic level have been identified. For example, in postmortem studies, Benes and others have found deficiencies in interneurons in schizophrenic hippocampus in several subfields. Others have quantified this by subtype, finding, for example, an approximately 50% reduction in parvalbumin-staining interneurons in CA1. Furthermore, as reviewed by Coyle and by Meador-Woodruff and Healy, there are also abnormalities in a number of glutamate receptor subtypes in schizophrenic hippocampus. “Virtual experiments” based on these findings could help elucidate the underlying causes of this and other illnesses.
I would like to thank Steven Matthysse for mentoring work on this project. This research was supported by funding from the NIMH (5K08MH072771 and Conte Center Grant 5P50MH060450), and a NARSAD Young Investigator Award.