PLoS Comput Biol. 2010 April; 6(4): e1000759.
Published online 2010 April 29. doi: 10.1371/journal.pcbi.1000759
PMCID: PMC2861628

How Informative Are Spatial CA3 Representations Established by the Dentate Gyrus?

Lyle J. Graham, Editor

Abstract

In the mammalian hippocampus, the dentate gyrus (DG) is characterized by sparse and powerful unidirectional projections to CA3 pyramidal cells, the so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of the information they convey, what CA3 cells already receive from entorhinal cortex layer II cells, which project both to the dentate gyrus and to CA3. Computational models of episodic memory have hypothesized that the function of the mossy fibers is to enforce a new, well separated pattern of activity onto CA3 cells, to represent a new memory, prevailing over the interference produced by the traces of older memories already stored on CA3 recurrent collateral connections. Can this hypothesis apply also to spatial representations, as described by recent neurophysiological recordings in rats? To address this issue quantitatively, we estimate the amount of information DG can impart on a new CA3 pattern of spatial activity, using both mathematical analysis and computer simulations of a simplified model. We confirm that, in the spatial case as well, the observed sparse connectivity and level of activity are most appropriate for driving memory storage, and not for initiating retrieval. Surprisingly, the model also indicates that even when DG codes just for space, much of the information it passes on to CA3 acquires a non-spatial and episodic character, akin to that of a random number generator. It is suggested that further hippocampal processing is required to make full spatial use of DG inputs.

Author Summary

The CA3 region at the core of the hippocampus, a structure crucial to memory formation, presents one striking anatomical feature. Its neurons receive many thousands of weak inputs from other sources, but only a few tens of very strong inputs from the neurons in the directly preceding region, the dentate gyrus. It had been proposed that such sparse connectivity helps the dentate gyrus to drive CA3 activity during the storage of new memories, but why it needs to be so sparse had remained unclear. Recent recordings of neuronal activity in the dentate gyrus (Leutgeb et al., 2007) show the firing maps of granule cells of rodents engaged in exploration: the few cells active in a given environment, about 3% of the total, present multiple firing fields. Following these findings, we could now construct a network model that addresses the question quantitatively. Both mathematical analysis and computer simulations of the model show that, while the memory system would also function otherwise, connections as sparse as those observed make it function optimally, in terms of the bits of information new memories contain. Much of this information, however, we show to be encoded in a format that is difficult to read out, suggesting that other regions of the hippocampus, so far without a clear functional role, may contribute to decoding it.

Introduction

The hippocampus presents the same organization across mammals, and distinct ones in reptiles and in birds. A most prominent and intriguing feature of the mammalian hippocampus is the dentate gyrus (DG). As reviewed in [1], the dentate gyrus is positioned as a sort of intermediate station in the information flow between the entorhinal cortex and the CA3 region of the hippocampus proper. Since CA3 also receives direct, perforant path connections from entorhinal cortex, the DG inputs to CA3, called mossy fibers, appear to essentially duplicate the information that CA3 can already receive directly from the source. What may be the function of such a duplication?

Within the view that the recurrent CA3 network operates as an autoassociative memory [2], [3], it has been suggested that the mossy fiber (MF) inputs are those that drive the storage of new representations, whereas the perforant path (PP) inputs relay the cue that initiates the retrieval of a previously stored representation, through attractor dynamics, due largely to recurrent connections (RC). Such a proposal is supported by a mathematical model which allows a rough estimate of the amount of information, in bits, that different inputs may impart to a new CA3 representation [4]. That model, however, is formulated in the Marr [5] framework of discrete memory states, each of which is represented by a single activity configuration or firing pattern.

The prediction that MF inputs may be important for storage and not for retrieval has, in turn, received tentative experimental support from studies with spatial tasks, either the Morris water maze [6] or a dry maze [7]. Two-dimensional spatial representations, to be compatible with the attractor dynamics scenario, require a multiplicity of memory states, which approximate a 2D continuous manifold, isomorphic to the spatial environment to be represented. Moreover, there has of course to be a multiplicity of manifolds, to represent distinct environments with complete remapping from one to the other [8]. Attractor dynamics then occurs along the dimensions locally orthogonal to each manifold, as in the simplified "multi-chart" model [9], [10], whereas tangentially one expects marginal stability, allowing small signals related to the movement of the animal, reflecting changing sensory cues as well as path integration, to displace a "bump" of activity on the manifold, as appropriate [9], [11].

Although the notion of a truly continuous attractor manifold appears as a limit case, which can only be approximated by a network of finite size [12], [13], [14], [15], even the limit case raises the issue of how a 2D attractor manifold can be established. In the rodent hippocampus, the above theoretical suggestion and experimental evidence point to a dominant role of the dentate gyrus, but it has remained unclear how the dentate gyrus, with its MF projections to CA3, can drive the establishment not just of a discrete pattern of activity, as envisaged by [4], but of an entire spatial representation, in its full 2D glory. This paper reports the analysis of a simplified mathematical model aimed at addressing this issue in a quantitative, information theoretical fashion.

Such an analysis would have been difficult even a few years ago, before the experimental discoveries that largely clarified, in the rodent, the nature of the spatial representations in the regions that feed into CA3. First, roughly half of the entorhinal PP inputs, those coming from layer II of the medial portion of entorhinal cortex, were found to often be in the form of grid cells, i.e. units that are activated when the animal is in one of multiple regions, arranged on a regular triangular grid [16]. Second, the sparse activity earlier described in DG granule cells [17] was found to be concentrated on cells that also have multiple fields, but irregularly arranged in the environment [18]. These discoveries can now inform a simplified mathematical model, which would earlier have been based on ill-defined assumptions. Third, over the last decade neurogenesis in the adult dentate gyrus has been established as a quantitatively constrained but still significant phenomenon, stimulating novel ideas about its functional role [19]. The first and third of these phenomena will be considered in extended versions of our model, to be analysed elsewhere; here, we focus on the role of the multiple DG place fields in establishing novel CA3 representations.

A simplified mathematical model

The complete model considers the firing rate of a CA3 pyramidal cell, $\eta_i(\vec{x})$, to be determined by the firing rates $\eta_{i'}(\vec{x})$ of other cells in CA3, which influence it through RC connections; by the firing rates $\beta_j(\vec{x})$ of DG granule cells, which feed into it through MF connections; by the firing rates $\alpha_k(\vec{x})$ of layer II pyramidal cells in entorhinal cortex (medial and lateral), which project to CA3 through PP axons; and by various feedforward and feedback inhibitory units. A most important simplification is that the fine temporal dynamics, e.g. on theta and gamma time scales, is neglected altogether, so that by "firing rate" we mean an average over a time of the order of the theta period, a hundred msec or so. Very recent evidence indicates, in fact, that only one of two competing spatial representations tends to be active in CA3 within each theta period [Jezek et al., SfN abstract, 2009]. Information coding over shorter time scales would in any case require a more complex analysis, which is left to future refinements of the model.

For the different systems of connections, we assume that the existence of an anatomical synapse between any two cells is represented by fixed binary matrices $c^{MF}_{ij}$, $c^{RC}_{ii'}$, $c^{PP}_{ik}$ taking 0 or 1 values, whereas the efficacy of those synapses is described by matrices $J^{MF}_{ij}$, $J^{RC}_{ii'}$, $J^{PP}_{ik}$. Since they have been argued to have a minor influence on coding properties and storage capacity [20], consistent with the diffuse spatial firing of inhibitory interneurons [21], the effects of inhibition and of the current threshold for activating a cell are summarized into a subtractive term, of which we denote by $T_0$ the mean value across CA3 cells, and by $\delta T_i$ the deviation from the mean for a particular cell $i$.

Assuming finally a simple threshold-linear activation function [22] for the relation between the activating current and the output firing rate, we write

$$\eta_i(\vec{x}) = g\left[\sum_{j} c^{MF}_{ij} J^{MF}_{ij}\,\beta_j(\vec{x}) + \sum_{i'} c^{RC}_{ii'} J^{RC}_{ii'}\,\eta_{i'}(\vec{x}) + \sum_{k} c^{PP}_{ik} J^{PP}_{ik}\,\alpha_k(\vec{x}) - T_0 - \delta T_i\right]^{+}$$
(1)

where $[\cdot]^{+}$ indicates taking the sum inside the brackets if positive in value, and zero if negative, and $g$ is a gain factor. The firing rates of the various populations are all assumed to depend on the position $\vec{x}$ of the animal, and the notation is chosen to minimize differences with our previous analyses of other components of the hippocampal system (e.g. [22], [23]).

The storage of a new representation

When the animal is exposed to a new environment, we make the drastic modelling assumption that the new CA3 representation is driven solely by MF inputs, while PP and RC inputs provide interfering information, reflecting the storage of previous representations on those synaptic systems, i.e., noise. Such "noise" can in fact act as an undesired signal and bring about the retrieval of a previous, "wrong" representation, an interesting process which is not, however, analysed here. We reabsorb the mean of such noise into the mean of the "threshold+inhibition" term $T_0$, and similarly for the deviation from the mean, $\delta T_i$. We use the same symbols for the new variables incorporating RC and PP interference, but drop in both cases the sign that distinguished them, thus writing

$$\eta_i(\vec{x}) = \left[\sum_{j} c^{MF}_{ij} J^{MF}_{ij}\,\beta_j(\vec{x}) - T_0 - \delta T_i\right]^{+}$$
(2)

where the gain has been set to $g=1$, without loss of generality, by an appropriate choice of the units in which to measure the efficacies $J^{MF}_{ij}$ (pure numbers) and the firing rates $\eta$, $\beta$ (e.g. in Hz).

As for the MF inputs, we consider a couple of simplified models that capture the essential finding by [18] of irregularly arranged multiple fields, as well as the observed low activity level of DG granule cells [24], while retaining the mathematical simplicity that favours an analytical treatment. We thus assume that only a randomly selected fraction $p_{DG}$ of the granule cells is active in a new environment, of size $A$, and that those units are active in a variable number $q$ of locations, with $q$ drawn from a distribution with mean $\bar{q}$. In model A, which we take as our reference, the distribution is taken to be Poisson (the data reported by Leutgeb et al. [18] are fit very well by a Poisson distribution with $\bar{q}\simeq 1.7$, but their sampling is limited). In model B, which we use as a variant, the distribution is taken to be exponential (this better describes the results of the simulations in [25], though that simple model may well be inappropriate). Therefore, in either model, the firing rate $\beta_j(\vec{x})$ of DG unit $j$ is a combination of $q_j$ gaussian "bumps", or fields, of equal effective size $\sigma_f$ and equal height $\beta_0$, centered at random points $\vec{x}_{jk}$ in the new environment

$$\beta_j(\vec{x}) = \beta_0 \sum_{k=1}^{q_j} \exp\!\left[-\frac{(\vec{x}-\vec{x}_{jk})^2}{2\sigma_f^2}\right]$$
(3)
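As an illustration of Eq. 3, the following minimal sketch (our own, not taken from the original study; function and parameter names such as dg_rate_maps, p_dg, q_mean, sigma_f are ours) generates multi-field DG rate maps for model A on a discretized square environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def dg_rate_maps(n_dg, p_dg, q_mean, sigma_f, beta0, L=1.0, n_bins=20):
    """Model A: each active DG unit gets a Poisson number of Gaussian fields (cf. Eq. 3)."""
    xs = (np.arange(n_bins) + 0.5) * L / n_bins
    X, Y = np.meshgrid(xs, xs, indexing="ij")          # spatial bin centres
    maps = np.zeros((n_dg, n_bins, n_bins))
    active = rng.random(n_dg) < p_dg                   # a fraction p_DG of units is active
    for j in np.where(active)[0]:
        q_j = rng.poisson(q_mean)                      # number of fields of unit j
        for _ in range(q_j):
            cx, cy = rng.random(2) * L                 # random field centre
            maps[j] += beta0 * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma_f ** 2))
    return maps

# example usage with illustrative parameter values
maps = dg_rate_maps(n_dg=1000, p_dg=0.03, q_mean=1.7, sigma_f=0.1, beta0=1.0)
print("units with at least one field:", int((maps.reshape(1000, -1).max(axis=1) > 0).sum()))
```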

The informative inputs driving the firing of a CA3 pyramidal cell, during storage of a new representation, result therefore from a combination of three distributions, in the model. The first, Poisson but close to normal, determines the MF connectivity, that is, how each CA3 unit comes to receive only a few tens of connections out of the roughly $N_{DG}\sim 10^6$ granule cells (in the rat), whereby $c^{MF}_{ij}=1$ with probability $C_{MF}/N_{DG}$. The second, Poisson, determines which of the DG units presynaptic to a CA3 unit is active in the new environment, with probability $p_{DG}$. The third, either Poisson or exponential (and see model C below), determines how many fields an active DG unit has in the new environment. Note that in the rat $C_{MF}\simeq 46$ [26] whereas $p_{DG}\simeq 0.03$, even when considering presumed newborn neurons [24]. As a result, the total number of active DG units presynaptic to a given CA3 unit, $C_{MF}\,p_{DG}$, is of order one, $C_{MF}\,p_{DG}\simeq 1.4$, so that the second Poisson distribution effectively dominates over the first, and the number of active MF impinging on a CA3 unit can approximately be taken to be itself a Poisson variable with mean $C_{MF}\,p_{DG}$. As a qualification to such an approximation, one has to consider that different CA3 pyramidal cells, among the $N_{CA3}\sim 3\times 10^5$ present in the rat (on each side), occasionally receive inputs from the same active DG granule cells, but rarely, since $C_{MF}\ll p_{DG}\,N_{DG}$; hence the pool of active units, $p_{DG}\,N_{DG}\sim 3\times 10^4$, is only one order of magnitude smaller than the population of receiving units, $N_{CA3}$.
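A quick numerical check of this composition of distributions can be sketched as follows (our own sketch; the numbers are the indicative order-of-magnitude figures quoted above, not fitted values): the number of active DG units presynaptic to a CA3 unit is indeed well approximated by a Poisson variable of mean $C_{MF}\,p_{DG}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N_DG, N_CA3 = 1_000_000, 300_000      # order-of-magnitude counts for the rat
C_MF, p_DG = 46, 0.03                 # MF contacts per CA3 cell; active DG fraction

# anatomical in-degree: binomial(N_DG, C_MF/N_DG), i.e. close to Poisson(C_MF)
in_degree = rng.binomial(N_DG, C_MF / N_DG, size=N_CA3)
# of those, each presynaptic DG unit is active with probability p_DG
active_inputs = rng.binomial(in_degree, p_DG)

print("mean active MF inputs per CA3 unit:", active_inputs.mean())   # ~ C_MF * p_DG ~ 1.4
print("Poisson check - mean vs variance:", C_MF * p_DG, active_inputs.var())
```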

In a further simplification, we consider the MF synaptic weights to be uniform in value, $J^{MF}_{ij} = J$. This assumption, like those of equal height and width of the DG firing fields, is convenient for the analytical treatment but not necessary for the simulations. It will be relaxed later, in the computer simulations addressing the effect of MF synaptic plasticity.

The new representation is therefore taken to be established by an informative signal coming from the dentate gyrus

$$h_i(\vec{x}) = J \sum_{j} c^{MF}_{ij}\, \beta_j(\vec{x})$$
(4)

modulated, independently for each CA3 unit, by a noise term $\epsilon_i$, reflecting recurrent and perforant path inputs as well as other sources of variability, and which we take to be normally distributed with zero mean and standard deviation $\sigma_\epsilon$.
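To make Eqs. 2 and 4 concrete, here is a minimal sketch (our own; the values of $J$, $\sigma_\epsilon$ and $T_0$ are placeholders to be set as in Table 1) that turns DG rate maps, generated as in the previous sketch, into thresholded CA3 rate maps.

```python
import numpy as np

rng = np.random.default_rng(2)

def ca3_rate_maps(dg_maps, n_ca3, C_MF, J, T0, sigma_eps):
    """Eq. 4: h_i(x) = J * sum_j c_ij beta_j(x); Eq. 2: eta_i = [h_i + eps_i - T0]^+."""
    n_dg = dg_maps.shape[0]
    # sparse binary MF connectivity, C_MF connections per CA3 unit on average
    c = (rng.random((n_ca3, n_dg)) < C_MF / n_dg).astype(float)
    h = J * np.tensordot(c, dg_maps, axes=1)                # MF signal, Eq. 4
    eps = rng.normal(0.0, sigma_eps, size=h.shape)          # interference treated as noise
    return np.maximum(h + eps - T0, 0.0)                    # threshold-linear output, Eq. 2

# usage with the dg_rate_maps() sketch above (illustrative parameter values):
# eta = ca3_rate_maps(maps, n_ca3=300, C_MF=46, J=1.0, T0=2.0, sigma_eps=0.5)
```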

The position $\vec{x}$ of the animal determines the firing rates $\beta_j(\vec{x})$ of DG units, which in turn determine the probability distribution for the firing rate of any given CA3 pyramidal unit

$$P(\eta_i|\vec{x}) = \Phi\big(-s_i(\vec{x})\big)\,\delta(\eta_i) + \frac{\Theta(\eta_i)}{\sqrt{2\pi}\,\sigma_\epsilon}\, \exp\!\left[-\frac{\big(\eta_i - \sigma_\epsilon\, s_i(\vec{x})\big)^2}{2\sigma_\epsilon^2}\right]$$

where

$$\Phi(s) = \int_{-\infty}^{s} \frac{dz}{\sqrt{2\pi}}\; e^{-z^2/2}$$

is the integral of the gaussian noise up to a given signal-to-noise ratio

$$s_i(\vec{x}) = \frac{h_i(\vec{x}) - T_0 - \delta T_i}{\sigma_\epsilon}$$

and $\Theta(\cdot)$ is Heaviside's function, vanishing for negative values of its argument. The first term, multiplying Dirac's $\delta(\eta_i)$, expresses the fact that negative activation values result in zero firing rates, rather than negative rates.

Note that the resulting sparsity, i.e. how many of the CA3 units end up firing significantly at each position, which is a main factor affecting memory storage [21], is determined by the threshold $T_0$, once the other parameters have been set. The approach taken here is to assume that the system requires the new representation to be sparse and regulates the threshold accordingly. We therefore fix the sparsity parameter $a$ of the CA3 representation, in broad agreement with experimental data [14] (see Table 1), and adjust $T_0$ accordingly (as shown, for the mathematical analysis, in the third section of the Methods).
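One simple way to implement, in a simulation, the assumption that the threshold is regulated to yield a prescribed sparsity is to place $T_0$ at the corresponding quantile of the distribution of activations. The sketch below is our own, and uses a crude definition of sparsity (the fraction of unit-position pairs with nonzero rate), which is not necessarily the measure used in the original analysis.

```python
import numpy as np

def threshold_for_sparsity(activations, a):
    """Return T0 such that a fraction `a` of the activation values exceeds the threshold."""
    return np.quantile(activations, 1.0 - a)

# usage: `activations` are the pre-threshold values h + eps, pooled over units and positions
# T0 = threshold_for_sparsity(activations, a=0.1)   # a = 0.1 is an illustrative target value
# eta = np.maximum(activations - T0, 0.0)
```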

The distribution of fields per DG unit is given in model A by the Poisson form

$$P_A(q) = \frac{\bar{q}^{\,q}}{q!}\; e^{-\bar{q}}$$

in model B by the exponential form

$$P_B(q) \propto e^{-q/q_0}, \qquad \text{with } q_0 \text{ set so that the mean number of fields is } \bar{q}$$

and we also consider, as another variant, model C, where each DG unit has one and only one field

$$P_C(q) = \delta_{q,1}$$
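For completeness, the three field-count distributions can be sampled as follows (our own sketch; for model B we use a geometric form, i.e. a discretized exponential with support starting at one field and matched mean, since the exact normalization of the exponential used in the original model is not specified here).

```python
import numpy as np

def sample_fields(model, q_mean, size, rng):
    """Number of fields per active DG unit, under models A, B and C."""
    if model == "A":                              # Poisson with mean q_mean (zeros possible)
        return rng.poisson(q_mean, size)
    if model == "B":                              # discrete exponential (geometric), mean q_mean
        return rng.geometric(1.0 / q_mean, size)
    if model == "C":                              # one and only one field per unit
        return np.full(size, 1)
    raise ValueError(f"unknown model {model!r}")

# example: sample_fields("A", 1.7, 10, np.random.default_rng(3))
```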

Assessing spatial information content

In the model, spatial position $\vec{x}$ is represented by CA3 units, whose activity is informed about position by the activity of DG units. The activity of each DG unit is determined independently of others by its place fields

$$\beta_j(\vec{x}) = \sum_{k=1}^{q_j} \beta_j^{(k)}(\vec{x})$$

with

$$q_j \sim P(q), \qquad P = P_A,\ P_B\ \text{or}\ P_C,$$

where each contributing field is a gaussian bump

$$\beta_j^{(k)}(\vec{x}) = \beta_0\, \exp\!\left[-\frac{(\vec{x}-\vec{x}_{jk})^2}{2\sigma_f^2}\right]$$

The Mutual Information $I\big(\vec{x},\{\eta_i\}\big)$ quantifies the efficiency with which CA3 activity codes for position, on average, as

$$I\big(\vec{x},\{\eta_i\}\big) = \Big\langle\, H\big(\{\eta_i\}\big) - H\big(\{\eta_i\}\,\big|\,\vec{x}\big) \Big\rangle$$
(5)

where the outer brackets $\langle\,\cdot\,\rangle$ indicate that the average is not just over the noise $\epsilon_i$, as usual in the estimation of mutual information, but also, in our case, over the quenched, i.e. constant but unknown, values of the microscopic quantities $c^{MF}_{ij}$, the connectivity matrix, $q_j$, the number of fields per active unit, and $\vec{x}_{jk}$, their centers. For given values of the quenched variables, the total entropy $H\big(\{\eta_i\}\big)$ and the (average) equivocation $H\big(\{\eta_i\}\,|\,\vec{x}\big)$ are defined as

$$H\big(\{\eta_i\}\big) = -\int \prod_i d\eta_i\; \left[\int\frac{d\vec{x}}{A}\, P\big(\{\eta_i\}\,|\,\vec{x}\big)\right] \log_2 \left[\int\frac{d\vec{x}'}{A}\, P\big(\{\eta_i\}\,|\,\vec{x}'\big)\right]$$
(6)

$$H\big(\{\eta_i\}\,\big|\,\vec{x}\big) = -\int \frac{d\vec{x}}{A} \int \prod_i d\eta_i\; P\big(\{\eta_i\}\,|\,\vec{x}\big)\, \log_2 P\big(\{\eta_i\}\,|\,\vec{x}\big)$$
(7)

where $A$ is the area of the given environment; the $\log$s are intended in base 2, to yield information values in bits.

The estimation of the mutual information can be approached analytically directly from these formulas, using the replica trick (see [27]), as shown by [28] and [29], and briefly described in the first section of the Methods. As in those two studies, however, here too we are only able to complete the derivation in the limit of low signal-to-noise, or more precisely of limited variation, across space, of the signal-to-noise around its mean, that is, of small $s_i(\vec{x}) - \langle s_i\rangle_{\vec{x}}$. In this case we obtain, to first order in this deviation, an expression that can be shown to be equivalent to

[first-order expression for the mutual information]
(8)

where the notation follows that of [29] (cp. their Eqs. 17 and 45).

Being limited to the first order in the deviation of the signal-to-noise from its spatial mean, the expression above can be obtained in a straightforward manner by directly expanding the logarithms, in the large noise limit $\sigma_\epsilon\to\infty$, in the simpler formula quantifying the information conveyed by a single CA3 unit

$$I_1 = \int \frac{d\vec{x}}{A}\int d\eta_i\; P(\eta_i|\vec{x})\, \log_2 \frac{P(\eta_i|\vec{x})}{\int\frac{d\vec{x}'}{A}\, P(\eta_i|\vec{x}')}$$
(9)

This single-unit formula cannot quantify the higher-order contributions in the signal-to-noise, which decrease the information conveyed by a population in which some of the units inevitably convey some of the same information. The replica derivation, instead, would in principle allow one to take into proper account such correlated selectivity, which ultimately results in the information conveyed by large CA3 populations not scaling up linearly with the number $N$ of units sampled, and saturating instead once enough CA3 units have been sampled, as shown in related models by [28], [29]. In our case, however, the calculation of e.g. the second order terms is further complicated by the fact that different CA3 units receive inputs coming from partially overlapping subsets of DG units. This may cause saturation at a lower level, once all DG units have been effectively sampled. The interested reader can follow the derivation sketched in the Methods.

Having to take, in any case, the large noise limit implies that the resulting formula is not really applicable to neuronally plausible values of the parameters, but only to the uninteresting case in which DG units impart very little information onto CA3 units. Therefore we use only the single-unit formula, and resort to computer simulations to assess the effects of correlated DG inputs. The second and third sections of the Methods indicate how to obtain numerical results by evaluating the expression in Eq. 9.
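As a concrete illustration of one way to evaluate Eq. 9 numerically, the sketch below (our own, and not the procedure detailed in the Methods) discretizes the firing rate of one unit into bins, reserving a separate bin for silence, and forms a plug-in estimate of the single-unit information under uniform occupancy; no correction for the limited sampling bias is applied here.

```python
import numpy as np

def single_unit_info(activation_map, sigma_eps, rate_bins=20, n_noise=500, seed=0):
    """Plug-in estimate of Eq. 9 for one CA3 unit, in bits.

    activation_map : noiseless activation h(x) - T0 on the spatial grid (one unit);
    the rate is eta = [activation + noise]^+, then binned, with bin 0 reserved for silence.
    """
    rng = np.random.default_rng(seed)
    act = activation_map.ravel()                                  # one value per spatial bin
    eta = np.maximum(act[None, :] + rng.normal(0, sigma_eps, (n_noise, act.size)), 0.0)
    edges = np.linspace(1e-9, eta.max() + 1e-9, rate_bins)        # eta = 0 falls below edges[0]
    r = np.digitize(eta, edges)                                   # discretized responses
    R = rate_bins + 1
    p_r_given_x = np.array([np.bincount(r[:, k], minlength=R)[:R] / n_noise
                            for k in range(act.size)])            # shape (n_x, R)
    p_r = p_r_given_x.mean(axis=0)                                # uniform occupancy assumed
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_r_given_x * np.log2(p_r_given_x / p_r)
    return float(np.nansum(terms)) / act.size
```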

Computer simulations can be used to estimate the information present in samples of CA3 units of arbitrary size, and at arbitrary levels of noise, but at the price of an indirect decoding procedure. A decoding step is required because the dimensionality of the space spanned by the CA3 activity $\{\eta_i\}$ is too high. It increases in fact exponentially with the number $N$ of neurons sampled, as $R^N$, where $R$ is the number of possible responses of each neuron. The decoding method we use, described in the fourth section of the Methods, leads to two different types of information estimates, based on either the full or the reduced localization matrix. The difference between the two, and between them and the analytical estimate, is illustrated under Results and further discussed at the end of the paper.
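The decoding method itself is specified in the Methods; as a stand-in, the following sketch (our own) uses a simple template-matching decoder, assigning each time step to the spatial bin whose mean population vector is closest, and accumulates the full localization matrix from which a mutual information value can be read out. This is not necessarily the decoder adopted in the original study.

```python
import numpy as np

def decode_and_localize(eta, positions, templates):
    """Template-matching decoder and full localization matrix.

    eta       : (n_steps, N) noisy CA3 responses along the trajectory
    positions : (n_steps,) actual spatial bin index at each step
    templates : (n_x, N) mean response of each sampled unit in each spatial bin
    Returns Q, an (n_x, n_x) matrix of counts Q[decoded, actual].
    """
    d2 = ((eta[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)   # (n_steps, n_x)
    decoded = d2.argmin(axis=1)                                         # nearest template
    n_x = templates.shape[0]
    Q = np.zeros((n_x, n_x))
    np.add.at(Q, (decoded, positions), 1.0)
    return Q

def mutual_information(Q):
    """Mutual information (bits) between decoded and actual position, from a count matrix."""
    P = Q / Q.sum()
    px = P.sum(axis=0, keepdims=True)      # actual-position marginal
    py = P.sum(axis=1, keepdims=True)      # decoded-position marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = P * np.log2(P / (px * py))
    return float(np.nansum(terms))
```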

Results

The essential mechanism described by the model is very simple, as illustrated in Fig. 1. CA3 units which happen to receive a few overlapping DG fields combine them into a resulting field of their own, which can survive thresholding. The devil is in the quantitative details: what proportion of CA3 cells express place fields, how large the fields are, and how strong they are compared with the noise, are all factors that determine the information contained in the spatial representation. Note that a given CA3 unit can express multiple fields.

Figure 1
Network scheme.

It is convenient to discuss such quantitative details with reference to a standard set of parameters. Our model of reference is a network of DG units with fields represented by Gaussian-like functions of space, with the number of fields for each DG unit given by a Poisson distribution with mean value $\bar{q}=1.7$, and parameters as specified in Table 1.

Table 1
Parameters: Values used in the standard version of the model.

In general, the stronger the mean DG input, the more it dominates over the noise, and also the higher the threshold has to be set in CA3 to make the pattern of activity as sparse as required, by fixing $a$. To control for the trivial advantage of a higher signal-to-noise, we perform comparisons in which it is kept fixed, by adjusting e.g. the MF synaptic strength $J$.

Multiple input cells vs. multiple fields per cell

The first parameter we considered is $\bar{q}$, the average number of fields for each DG unit, in light of the recent finding that DG units active in a restricted environment are more likely to have multiple fields than CA3 units, and much more often than expected, given their weak probability of being active [18]. We wondered whether receiving multiple fields from the same input units would be advantageous for CA3, and if so whether there is an optimal $\bar{q}$ value. We therefore estimated the mutual information when $\bar{q}$ varies and the total mean number of DG fields that each CA3 cell receives as input, $C_{MF}\,p_{DG}\,\bar{q}$, is kept fixed, by varying $C_{MF}$ correspondingly. As shown in Fig. 2, varying $\bar{q}$ in this manner makes very little difference to the bits conveyed by each CA3 cell. This figure reports the results of computer simulations, which also illustrate the dependence of the mutual information on $N$, the number of cells sampled. The dependence is sub-linear, but rather smooth, with significant fluctuations from sample to sample which are largely averaged out in the graph. The different lines correspond to different distributions of the input DG fields among active DG cells projecting to CA3, that is, different combinations of values for $\bar{q}$ and $C_{MF}$, with their product kept constant; these different distributions do not affect much the information in the representation.

Figure 2
The exact multiplicity of fields in DG units is irrelevant.

The analytical estimate of the information per CA3 unit confirms that there is no dependence on $\bar{q}$ (Fig. 2, inset). This is not a trivial result, as it would be if only the product $C_{MF}\,p_{DG}\,\bar{q}$ entered the analytical expression. Instead, the second section of the Methods shows that the parameters of the multi-field decomposition depend separately on $\bar{q}$ and $C_{MF}$, so the fact that the two separate dependencies almost cancel out into a single dependence on their product is remarkable. Moreover, the analytical estimate of the information conveyed by one unit does not match the first datapoints, for $N=1$, extracted from the computer simulation; it is not higher, as might have been expected considering that the simulation requires an additional, information-losing decoding step, but lower, by over a factor of 2. The finding that the analytical estimate differs from, and is in fact much lower than, the slope parameter extracted from the simulations, after the decoding step, is further discussed below. Despite their incongruity in absolute values, neither the estimate derived from the simulations nor the analytical estimate shows separate dependencies on $\bar{q}$ and $C_{MF}$, as shown in Fig. 2.

More MF connections, but weaker

Motivated by the striking sparsity of MF connections, compared to the thousands of RC and PP synaptic connections impinging on CA3 cells in the rat, we have then tested the effect of changing $C_{MF}$ on its own, without changing $\bar{q}$. In order to vary the mean number of DG units that project to a single CA3 unit, while keeping constant the total mean input strength, assumed to be an independent, biophysically constrained parameter, we varied the synaptic strength parameter $J$ inversely to $C_{MF}$. As shown in Fig. 3, the information presents a maximum at an intermediate value of $C_{MF}$, which is observed both in simulations and in the analytical estimate, despite the fact that again they differ by more than a factor of two.

Figure 3
A sparse MF connectivity is optimal, but not too sparse.

Again we find that the analytical estimate differs from, and is in fact much lower than, the slope parameter extracted from the simulations, after the decoding step. Both measures, however, show that the standard model is not indifferent to how sparse the MF connections are. If they are very sparse, most CA3 units receive no inputs from active DG units, and the competition induced by the sparsity constraint tends to be won, at any point in space, by those few CA3 units that are receiving input from just one active DG unit. The resulting mapping is effectively one-to-one, unit-to-unit, and this is not optimal information-wise, because too few CA3 units are active – many of them in fact have multiple fields (Fig. 4, right), reflecting the multiple fields of their "parent" units in DG. As $C_{MF}$ increases (with a corresponding decrease in MF synaptic weight), the units that win the competition tend to be those that summate inputs from two or more concurrently active DG units. The mapping ceases to be one-to-one, and this increases the amount of information, up to a point. When $C_{MF}$ is large enough that CA3 units begin to sample DG activity more effectively, those that win the competition tend to be the "happy few" that happen to summate several active DG inputs, and this tends to occur at only one place in the environment. As a result, an ever smaller fraction of CA3 units have place fields, and those tend to have just one, often very irregular, as shown in Fig. 4, right. From that point on, the information in the representation decreases monotonically. The optimal MF connectivity is then in the range which maximizes the fraction of CA3 units that have a field in the newly learned environment, at a value, roughly one third, broadly consistent with experimental data (see e.g. [30]).

Figure 4
Information vs. connectivity.

It is important to emphasize that what we are reporting is a quantitative effect: the underlying mechanism is always the same, the random summation of inputs from active DG units. DG in the model effectively operates as a sort of random number generator, whatever the values of the various parameters. How informative are the CA3 representations established by that random number generator, however, depends on the values of the parameters.

Other DG field distribution models

We repeated the simulations using other models for the distribution of DG fields, the exponential (model B) and the single-field one (model C), and the results are similar to those obtained for model A: the information has a maximum when varying $C_{MF}$ on its own, and is instead roughly constant if the product $C_{MF}\,\bar{q}$ is held constant (by varying $C_{MF}$ inversely to $\bar{q}$). Fig. 5 reports the comparison, as $C_{MF}$ varies, between models A and B, with $\bar{q}=1.7$, and model C, where $q\equiv 1$, so that in this latter case the inputs are 1/1.7 times weaker (we did not compensate by multiplying $J$ by 1.7). Information measures are obtained by decoding several samples of 10 units, averaging and dividing by 10, and not by extracting the fit parameters. As one can see, the lower mean input for model C leads to lower information values, but the trend with $C_{MF}$ is the same in all three models. This further indicates that the multiplicity of fields in DG units, as well as its exact distribution, is of no major consequence, if comparisons are made keeping constant the mean number of fields in the input to a CA3 unit.

Figure 5
Information vs. connectivity.

Sparsity of DG activity

We also study how the level of DG activity affects the information flow. We choose different values for the probability $p_{DG}$ that a single DG unit fires in the given environment, and again we adjust the synaptic weight $J$ to keep the mean DG input per CA3 cell constant across the comparisons.

Results are similar to those obtained varying the sparsity of the MF connections (Fig. 6). Indeed, the analytical estimate in the two conditions would be exactly the same, within the approximation with which we compute it, because the two parameters $p_{DG}$ and $C_{MF}$ enter the calculation in equivalent form, as a product. The actual difference between the two parameters stems from the fact that, with increasing $C_{MF}$, CA3 units end up sampling more and more the same limited population of active DG units, while with increasing $p_{DG}$ this population itself increases in size. This difference can only be appreciated from the simulations, which however show that the main effect remains the same: an information maximum for rather sparse DG activity (and sparse MF connections). The subtle difference between varying the two parameters can be seen better in the saturation information value: with reference to the standard case, in the center of the graph in the inset, to the right increasing $p_{DG}$ leads to more information than increasing $C_{MF}$, while to the left the opposite is the case, as expected.

Figure 6
Sparse DG activity is effective at driving CA3.

Full and simplified decoding procedures

As noted above, we find that the analytical estimate of the information per unit is always considerably lower than the slope parameter of the fit to the measures extracted from the simulations, contrary to expectations, since the latter require an additional decoding step, which implies some loss of information. We also find, however, that the measures of mutual information that we extract from the simulations are strongly dependent on the method used, in the decoding step, to construct the "localization matrix", i.e. the matrix which compiles the frequency with which the virtual rat was decoded as being in position $\vec{x}'$ when it was actually in position $\vec{x}$. All measures reported so far, from simulations, are obtained constructing what we call the full localization matrix $Q(\vec{x}'|\vec{x})$ which, if the square environment is discretized into $20\times 20=400$ spatial bins, is a large $400\times 400$ matrix, which requires of order 160,000 decoding events to be effectively sampled. We ran simulations with trajectories of 400,000 steps, and additionally corrected the information measures to avoid the limited sampling bias [31].

An alternative, which allows extracting unbiased measures from much shorter simulations, is to construct a simplified matrix $\tilde{Q}(\vec{x}'-\vec{x})$, which averages over decoding events with the same vector displacement between actual and decoded positions. $\tilde{Q}$ is easily constructed on the torus we used in all simulations, and being a much smaller $20\times 20$ matrix it is effectively sampled in just a few thousand steps.
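On a square torus of $20\times 20$ bins, the simplified matrix can be accumulated by re-indexing each decoding event by its toroidal displacement; a sketch of our own follows, reusing the conventions of the decoder sketched above and assuming uniform occupancy of the bins.

```python
import numpy as np

def displacement_matrix(decoded, actual, n=20):
    """Simplified localization matrix on an n x n torus: counts indexed by displacement."""
    dec_xy = np.stack(np.unravel_index(decoded, (n, n)), axis=1)
    act_xy = np.stack(np.unravel_index(actual, (n, n)), axis=1)
    disp = (dec_xy - act_xy) % n                      # toroidal displacement, in bins
    Q_tilde = np.zeros((n, n))
    np.add.at(Q_tilde, (disp[:, 0], disp[:, 1]), 1.0)
    return Q_tilde

def info_from_displacements(Q_tilde):
    """Mutual information (bits) under translation invariance and uniform occupancy:
    flat total entropy log2(n*n) minus the 'displacement' equivocation."""
    p = (Q_tilde / Q_tilde.sum()).ravel()
    with np.errstate(divide="ignore", invalid="ignore"):
        H_disp = float(np.nansum(-p * np.log2(p)))
    return np.log2(p.size) - H_disp
```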

The two decoding procedures, given that the simplified matrix is the shifted average of the rows of the full matrix, might be expected to yield similar measures, but they do not, as shown in Fig. 7. The simplified matrix, by assuming translation invariance of the errors in decoding, is unable to quantify the information implicitly present in the full distribution of errors around each actual position. Such errors are of an "episodic" nature: the local view from position $\vec{x}_1$ might happen to be similar to that from position $\vec{x}_2$, hence neural activity reflecting in part local views might lead to confusing the two positions, but this does not imply that another position $\vec{x}_3$ has anything in common with the position displaced from it by the same vector. Our little network model captures this discrepancy, by showing, in Fig. 7, that for any actual position there are a few selected positions that are likely to be erroneously decoded from the activity of a given sample of units; when constructing instead the translationally invariant simplified matrix, all average errors are distributed smoothly around the correct position (zero error), in a roughly Gaussian bell. The upper right panel in Fig. 7 shows that such episodic information always prevails, whatever the connectivity, i.e. in all three parameter regimes illustrated in Fig. 4. The lower right panel in Fig. 7 compares, instead, the entropies of the decoded positions with the two matrices, conditioned on the actual position – that is, the equivocation values. Unlike the mutual information, such equivocation is much higher for the simplified matrix; for this matrix, it is simply a measure of how widely displaced the decoded positions are with respect to the actual positions, represented at the center of the square; and for small samples of units, which are not very informative, the "displacement" entropy approaches that of a flat distribution of decoded positions, i.e. $\log_2 400 \simeq 8.64$ bits. For larger samples, which enable better localization, the simplified localization matrix begins to be clustered in a Gaussian bell around zero displacement, so that the equivocation gradually decreases (the list of displacements, with their frequencies, is computed for each sample, and it is the equivocation, not the list itself, which is averaged across samples). In contrast, the entropy of each row of the full localization matrix, i.e. the entropy of decoded positions conditioned on any actual position, is lower, and also decreases more steeply with sample size; it differs from the full entropy, in fact, by the mutual information between decoded and actual positions, which increases with sample size. The two equivocation measures therefore both add up with the two mutual information measures to yield the same full entropy of about 8.64 bits (a bit less in the case of the full matrix, where the sampling is more limited), and thus serve as controls that the difference in mutual information is not due, for example, to inaccuracy. As a third crucial control, we calculated also the average conditional entropy of the full localization matrix, when the matrix is averaged across samples of a given size: the resulting entropy is virtually identical to the displacement entropy (which implies instead an average of the full matrix across rows, i.e. across actual positions). This indicates that different samples of units express distinct episodic content at each location, such that averaging across samples is equivalent to averaging across locations.

Figure 7
Localization matrices.

Apparently, the analytical estimate, too, is unable to capture the spatial information implicit in such "episodic" errors, as its values are well below those obtained with the full matrix, and somewhat above those obtained with the simplified matrix (consistent with some loss with decoding). One may wonder how the information from the full localization matrix (which also requires a decoding step) can be higher than the decoding-free analytical estimate, without violating the basic information processing theorem. The solution to the riddle, as we understand it, is subtle: when decoding, one takes essentially a maximum likelihood estimate, assigning a unique decoded position per trial, or time step. This leads to a "quantized" localization matrix, which in general tends to have substantially higher information content than the "smoothed" matrix based on probabilities [32]. In the analytical derivation there is no concept of trial, time step or maximum likelihood, and the matrix expresses smoothly varying probabilities. The more technical implications are discussed further at the end of the Methods. These differences do not alter the other results of our study, since they affect the height of the curves, not their shape; they have, however, important implications. The simplified matrix has the advantage of requiring much less data, i.e. less simulation time, but also less real data if applied to neurophysiological recordings, than the full matrix, and in most situations it might be the only feasible measure of spatial information (the analytical estimate is of course not available for real data). So in most cases it is only practical to measure spatial information with methods that, our model suggests, miss out on much of the information present in neuronal activity, what we may refer to as "dark information", not easily revealed. One might conjecture that the prevalence of dark information is linked to the random nature of the spatial code established by DG inputs. It might be that additional stages of hippocampal processing, either with the refinement of recurrent CA3 connections or in CA1, are instrumental in making dark information more transparent.

Effect of learning on the mossy fibers

While the results reported thus far assume that MF weights are fixed, $J^{MF}_{ij}=J$, we have also conducted a preliminary analysis of how the amount of spatial information in CA3 might change as a consequence of plasticity on the mossy fibers. In an extension of the standard model, we allow the weights of the connections between DG and CA3 to change with a model "Hebbian" rule. This is not an attempt to capture the nature of MF plasticity, which is not NMDA-dependent and might not be associative [33], but only the adoption of a simple plasticity model that we use in other simulations. At each time step (corresponding to a different place in space) the weights are taken to change as follows:

$$J^{MF}_{ij} \rightarrow J^{MF}_{ij} + \gamma\; \eta_i(\vec{x})\, \beta_j(\vec{x})$$
(10)

where $\gamma$ is a plasticity factor that regulates the amount of learning. Modifying the MF weights in this way has the general effect of increasing information values, so that they approach saturation levels for lower numbers of CA3 cells; in particular this is true for the information extracted from both the full and the simplified matrices. In Fig. 8, the effect of such "learning" is shown for different values of the parameter $\gamma$, as a function of connectivity.
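A minimal sketch of the weight update of Eq. 10, assuming the plain product ("Hebbian") form given above; any normalization, weight bound or decay possibly used in the original simulations is omitted here, and the value of gamma in the usage comment is purely illustrative.

```python
import numpy as np

def hebbian_step(J, c, eta, beta, gamma):
    """One plasticity step on the MF weights (cf. Eq. 10): J_ij += gamma * eta_i * beta_j,
    restricted to anatomically existing connections (c_ij = 1)."""
    J += gamma * c * np.outer(eta, beta)
    return J

# usage along a trajectory: at each visited position x,
#   eta = np.maximum(h_of(x) + noise - T0, 0.0)       # CA3 rates (cf. Eq. 2)
#   J = hebbian_step(J, c, eta, beta_of(x), gamma=1e-3)  # gamma value illustrative
```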

Figure 8
Information vs. connectivity for different levels of learning.

We see that allowing for this type of plasticity on the mossy fibers shifts the maximum of the information as a function of the connectivity level. The structuring of the weights effectively results in the selection of favorite input connections, for each CA3 unit, among a pool of available ones; the remaining strong connections are a subset of those "anatomically" present originally. It is logical, then, that starting with a larger pool of connections, among which to pick the "right" ones, leads to more information than starting with few connections, which further decrease in effective number with plasticity. We expect better models of the details of MF plasticity to preserve this main effect.

A further effect of learning, along with the disappearance of some CA3 fields and the strengthening of others, is the refinement of their shape, as illustrated in Fig. 9. It is likely that this effect, too, will be observed when using more biologically accurate models of MF plasticity.

Figure 9
MF plasticity can suppress, enlarge and in general refine CA3 place fields.

Retrieval abilities

Finally, all simulations reported so far involved a full complement of DG inputs at each time step in the simulation. We have also tested the ability of the MF network to retrieve a spatial representation when fed with a degraded input signal, with and without MF plasticity. The input is degraded, in our simulation, simply by turning on only a given fraction, randomly selected, of the DG units that would normally be active in the environment. The information extracted after decoding by a sample of units (in Fig. 10, 10 units) is then contrasted with the size of the cue itself. In the absence of MF plasticity, there is obviously no real retrieval process to talk about, and the DG-CA3 network simply relays partial information. When Hebbian plasticity is turned on, the expectation from similar network models (see e.g. [34], Fig. 9) is that there would be some pattern completion, i.e. some tendency for the network to express nearly complete output information when the input is partial, resulting in a more sigmoidal input-output curve (the exact shape of the curve depends of course also on the particular measure used).
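The degraded-input test can be sketched as follows (our own minimal version, reusing the dg_rate_maps() and ca3_rate_maps() sketches above): only a randomly selected fraction of the normally active DG units is switched on, and the information recoverable from CA3 is then measured, as before, as a function of the cue fraction.

```python
import numpy as np

def degrade_dg(dg_maps, cue_fraction, rng):
    """Silence a random complement of the active DG units, keeping a fraction `cue_fraction`."""
    active = np.where(dg_maps.reshape(dg_maps.shape[0], -1).max(axis=1) > 0)[0]
    keep = rng.choice(active, size=int(round(cue_fraction * active.size)), replace=False)
    cued = np.zeros_like(dg_maps)
    cued[keep] = dg_maps[keep]
    return cued

# e.g. for cue_fraction in [0.2, 0.4, 0.6, 0.8, 1.0]:
#     cued_maps = degrade_dg(maps, cue_fraction, np.random.default_rng(4))
#     ... feed cued_maps through ca3_rate_maps() and the decoding step,
#     with and without the Hebbian plasticity sketched above
```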

Figure 10
Information reconstructed from a degraded input signal.

It is apparent from Fig. 10 that while, in the absence of plasticity, both parameters characterizing the information that can be extracted from CA3 grow roughly linearly with the size of the cue, with plasticity the growth is supralinear. This amounts to the statement that the beneficial effects of plasticity require a full cue to be felt – the conceptual opposite to pattern completion, the process of integrating a partial cue using information stored on modified synaptic weights. This result suggests that the sparse MF connectivity is sub-optimal for the associative storage that leads to pattern completion, a role that current perspectives ascribe instead to perforant path and recurrent connections to CA3. The role of the mossy fibers, even if plastic, may be limited to the establishment of new spatial representations.

Discussion

Ours is a minimal model, which by design overlooks several of the elements likely to play an important role in the functions of the dentate gyrus - perhaps foremost, neurogenesis [35]. Nevertheless, by virtue of its simplicity, the model helps clarify a number of quantitative issues that are important in refining a theoretical perspective of how the dentate gyrus may work.

First, the model indicates that the recently discovered multiplicity of place fields expressed by active dentate granule cells [18] might be just a "fact of life", with no major computational implications for dentate information processing. Still, requiring that active granule cells express multiple fields seems to lead, in another simple network model (of how dentate activity may result from entorhinal cortex input [25]), to the necessity of inputs coming from lateral EC, as well as from medial EC. The lateral EC inputs need not carry any spatial information but help to select the DG cells active in one environment. Thus the multiplicity of DG fields refines the computational constraints on the operation of hippocampal circuits.

Second, the model shows that, assuming a fixed total MF input strength on CA3 units, it is beneficial in information terms for the MF connectivity to be very sparse, but not vanishingly sparse. The optimal number of anatomical MF connections on CA3 units, designated as $C_{MF}$ in the model, depends somewhat on the various parameters (the noise in the system, how sparse the activity is in DG and CA3, etc.) and it may increase slightly when taking MF plasticity into account, but it appears within the range of the number, 46, reported for the rat by [26]. It will be interesting to see whether future measures of MF connectivity in other species correspond to those "predicted" by our model, once the appropriate values of the other parameters are also experimentally measured and inserted into the model. A similar set of considerations applies to the fraction of granule cells active in a given environment, $p_{DG}$, which in the model plays a similar, though not completely identical, role to $C_{MF}$ in determining information content.

Third, the model confirms that the sparse MF connections, even when endowed with associative plasticity, are not appropriate as devices to store associations between input and output patterns of activity – they are just too sparse. This reinforces the earlier theoretical view [2], [4], which was not, however, based on an analysis of spatial representations, that the role of the dentate gyrus is in establishing new CA3 representations and not in associating them to representations expressed elsewhere in the system. Availing itself of more precise experimental parameters, and based on the spatial analysis, the current model can refine the earlier theoretical view and correct, for example, the notion that "detonator" synapses, firing CA3 cells on a one-to-one basis, would be optimal for the mossy fiber system. The optimal situation turns out to be the one in which CA3 units are fired by the combination of a couple of DG input units, although this is only a statistical statement. Whatever the exact distribution of the number of coincident inputs to CA3, DG can be seen as a sort of random pattern generator, that sets up a CA3 pattern of activity without any structure that can be related to its anatomical lay-out [36], or to the identity of the entorhinal cortex units that have activated the dentate gyrus. As with random number generators in digital computers, once the product has been spit out, the exact process that led to it can be forgotten. This is consistent with experimental evidence that inactivating MF transmission or lesioning the DG does not lead to hippocampal memory impairments once the information has already been stored, but leads to impairments in the storage of new information [6], [7]. The inability of MF connections to subserve pattern completion is also consistent with suggestive evidence from imaging studies with human subjects [37].

Fourth, and more novel, our findings imply that a substantial fraction of the information content of a spatial CA3 representation, over half when sampling limited subsets of CA3 units, can neither be extracted through the simplified method that assumes translation invariance, nor assessed through the analytical method (which in any case requires an underlying model of neuronal firing, and is hence only indirectly applicable to real neuronal data). This large fraction of the information content is only extracted through the time-consuming construction of the full localization matrix. To avoid the limited sampling bias [38], this would require, in our hands, the equivalent of a ten hour session of recording from a running rat (!), with the square box sampled over the full grid of spatial bins. We have hence labeled this large fraction dark information, as it requires a special effort to reveal. Although we know little of how the real system decodes its own activity, e.g. in downstream neuronal populations, we may hypothesize that the difficulty of extracting dark information affects the real system as well, and that successive stages of hippocampal processing have evolved to address this issue. If so, the representation established in CA3 could qualitatively be characterized as episodic, i.e. based on an effectively random process that is functionally forgotten once completed, while later processing, e.g. in CA1, may be thought to gradually endow the representations with their appropriate continuous spatial character. Another network model, intended to elucidate how CA1 could operate in this respect, is the object of our on-going analysis.

The model analysed here does not include neurogenesis, a most striking dentate phenomenon, and thus it cannot comment on several intriguing models that have been put forward about the role of neurogenesis in the adult mammalian hippocampus [39], [40], [41]. Nevertheless, presenting a simple and readily expandable model of dentate operation can facilitate the development of further models that address neurogenesis, and help interpret puzzling experimental observations. For example, the idea that, once matured, newborn cells may temporally “label” memories of episodes occurring over a few weeks [42], [43], [44], [45] has been weakened by the observation that apparently even young adult-born cells, which are not that many [45], [46], [47], are very sparsely active, perhaps only a factor of two or so more active than older granule cells [24]. Perhaps such skepticism should be reconsidered, and the issue reanalysed using a quantitative model like ours. One could then investigate the notion that the new cells link together, rather than separate, patterns of activity with common elements (such as the temporal label). Doing so clearly requires extending the model to include a description not only of neurogenesis, but also of plasticity within DG itself [48] and of its role in the establishment of successive representations, one after the other.

Methods

Replica calculation

Estimation of the equivocation

Calculating the equivocation from its definition in Eq. 7 is straightforward, thanks to the simplifying assumption of independent noise in CA3 units. We get

[Equation 11; displayed as an image in the original]

where

[auxiliary definition; displayed as an image in the original]

although the spatial integral remains to be carried out.

Estimation of the entropy

For the entropy, Eq. 6, the calculation is more complicated. Starting from

[equation displayed as an image in the original]

we remove the logarithm using the replica trick (see [27])

[Equation 12; displayed as an image in the original]

which can be rewritten (Nadal and Parga [49] have shown how to use the replica trick in the appropriate replica limit, a suggestion used in [50] to analyse information transfer in the CA3-CA1 system)

[Equation 13; displayed as an image in the original]

using the spatial averages, defined for an arbitrary real-valued number of replicas,

[Equation 14; displayed as an image in the original]

where we have defined a quantity dependent both on the number of replicas and on the position in space, later to be integrated over, of each replica:

[equation displayed as an image in the original]

We therefore need to carry out integrals over the firing rate of each CA3 unit in order to estimate the quantity just defined, while keeping in mind that in the end we want to take the replica limit. Carrying out the integrals yields a below-threshold and an above-threshold term

[Equation 15; displayed as an image in the original]

where we have defined the quantities

[Equation 16; displayed as an image in the original]

together with two further shorthand quantities, whose definitions are likewise displayed as images in the original.

One might think that, in the product over cells that defines the entropy, the only terms to survive in the replica limit would be the summed single-unit contributions obtained from the first derivatives with respect to the number of replicas. This is not true, however: taking the replica limit produces the counterintuitive effect that replica-tensor products of terms, which individually disappear in the limit, only vanish to first order in the number of replicas, as shown by [29]. The replica method is therefore able, in principle, to quantify the effect of correlations among units, expressed in entropy terms stemming from products across units.

Briefly, one has

[Equation 17; displayed as an image in the original]

where the first two rows come from the term below threshold, and the last two from the one above threshold. Then, following [29],

[Equation 18; displayed as an image in the original]

where

[Equation 19; displayed as an image in the original]

and where we have taken into account which quantities, in the replica limit, appear in all the terms of finite weight.

The products between the matrices attached to each CA3 unit generate the higher order terms of the expansion. Calculating them in our case, in which different CA3 units can receive partially overlapping inputs from DG units, is extremely complex (see [51], where information transmission across a network is also considered), and we do not pursue the analysis of such higher order terms here. One can retrieve the result of the TG model in Ref. [29] by taking a further limit, which fixes the values of two of the quantities defined above. A further subtlety is that, in taking the replica limit, there is a single replica which is counted once in the limit, but there are also several other replicas whose weights vanish, yet which remain to determine, e.g., certain terms emerging from the derivatives. Thus, in the very last term of Eq. 17, one has to differentiate with respect to the appropriate replica variable to produce the corresponding term of Eq. 19, a term absent in [29] because it vanishes in the limit taken there. In the off-diagonal terms of the replica matrix, some entries depend on the surviving replica paired with one of the vanishing-weight replicas, while others depend on pairs of vanishing-weight replicas.

Focusing now solely on terms of leading order, note that one of the terms above is effectively a spatial signal. In the replica limit it can be rewritten, in terms of the single surviving replica, as

[equation displayed as an image in the original]

This allows us to derive, to this order, our result for the spatial information content, Eq. 8.

Note that when the threshold of each unit is lowered indefinitely, so that activation is never cut off, our units behave as threshold-less linear units with Gaussian noise, and the information they convey tends to

[Equation 20; displayed as an image in the original]

which is simply expressed in terms of a spatial signal-to-noise ratio, and coincides with the results in Refs. [28], [29].

Field-number decomposition

Eqs. 8 and 9 simply sum equivalent average contributions from each CA3 unit. Each such contribution can then be calculated as a series in the number of DG fields feeding into the CA3 unit. One can in fact write, for example,

[expansion displayed as an image in the original]

where each term involves a given number of active DG units presynaptic to the CA3 unit, each contributing its own number of fields (possibly none). A similar expansion can be written for the other terms. One then realizes that the spatial component reduces to integrals that depend solely on the total number of fields, no matter how many active DG units they come from, and the expansion can be rearranged into an expansion in the total number of fields

[Equation 21; displayed as an image in the original]

where one of the components in each term is, for example,

[Equation 22; displayed as an image in the original]

with the first factor the mean signal-to-noise ratio, at a given position, produced by the corresponding number of fields, from no matter how many DG units. The numerical coefficient of each term, instead, stems from combining the distribution of the number of fields per presynaptic DG unit active in the environment, which differs between models A, B and C, with the Poisson distribution for the number of such units

[equation displayed as an image in the original]

The sum extends in principle to infinity, but in practice it can be truncated after checking that successive terms give vanishing contributions (a numerical sketch of this truncation is given at the end of this subsection). The appropriate truncation point obviously depends on the mean number of fields, as well as on the model distribution of fields per unit. Note that the first few terms (i.e. those with few fields) may give negative, but not necessarily negligible, contributions if the effective threshold is high.

For model A,

[field-number distribution displayed as an image in the original]

and combining the two Poisson series one finds

[Equation 23; displayed as an image in the original]

where the coefficients are the polynomials

[equation displayed as an image in the original]

given by the modified Khayyam-Tartaglia (Pascal's triangle) recursion relation

[equation displayed as an image in the original]

together with a final shorthand definition, shown as an image in the original.

For model B,

[field-number distribution displayed as an image in the original]

and combining the Poisson with the exponential series one finds

[Equation 24; displayed as an image in the original]

where the coefficients are now distinct polynomials

[equation displayed as an image in the original]

given by a further modified Khayyam-Tartaglia recursion relation

[equation displayed as an image in the original]

together with a final shorthand definition, shown as an image in the original.

For model C,

[field-number distribution displayed as an image in the original]

there is no free parameter governing the number of fields per unit, and one simply finds

[Equation 25; displayed as an image in the original]

Note that in the limit in which the parameter governing the number of fields per unit tends to zero, while the mean input per CA3 unit remains finite, both models A and B yield

[expression displayed as an image in the original]

which is equivalent to Eq. 25, in line with the fact that both models A and B reduce, in this limit, to single-field distributions; but since even units with single fields become vanishingly rare, formally one has to scale up the mean number of active presynaptic units to keep the mean input finite and establish the correct comparison with model C.
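To make the truncation step above concrete, the weights multiplying each term of the series can also be estimated numerically. The following Python sketch does so by Monte Carlo, under purely illustrative assumptions: a Poisson number of active presynaptic DG units and a made-up distribution of fields per active unit standing in for models A, B and C; none of the names or parameter values are taken from the analytical expressions above.

import numpy as np

# Monte Carlo estimate of the weight of each total field number K, combining a
# Poisson-distributed number of active presynaptic DG units with an assumed
# distribution of fields per active unit (illustrative choices throughout).
rng = np.random.default_rng(0)

MEAN_UNITS = 1.4        # assumed mean number of active presynaptic DG units
N_SAMPLES = 100_000

def fields_per_unit():
    """Illustrative multi-field distribution: one field plus a Poisson extra."""
    return 1 + rng.poisson(1.7)

n_units = rng.poisson(MEAN_UNITS, N_SAMPLES)
totals = np.array([sum(fields_per_unit() for _ in range(n)) for n in n_units])

c_K = np.bincount(totals) / N_SAMPLES                      # weight of each K
K_max = int(np.searchsorted(np.cumsum(c_K), 1.0 - 1e-4))   # truncation point
print("series can be truncated at K =", K_max)

Each such weight would multiply the corresponding spatial term of the series, and terms beyond the truncation point contribute negligibly.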

Sparsity and threshold

The analytical relation between the threshold of CA3 units and the sparsity of the layer is obtained starting from the formula defining the sparsity (see below), which can be rewritten as

[Equation 26; displayed as an image in the original]

Since in the analytical calculation the threshold enters as a parameter, this equation can be taken as a relation giving the sparsity as a function of the threshold, which has to be inverted to allow a comparison with the simulations; the simulations are run by controlling the sparsity at a predefined level and adjusting the threshold parameter accordingly. The inversion requires using the field-number decomposition and numerical integration. A graphical example of the numerical relation is given in Fig. 11.

Figure 11
Sparsity-threshold relation.
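As an illustration of this inversion step, the sparsity-versus-threshold relation can be inverted numerically with a standard root finder. In the Python sketch below, sparsity_of_threshold stands in for the analytical expression obtained from the field-number decomposition plus numerical integration; the function, bracketing interval and target value are placeholders, not the actual expressions used.

from scipy.optimize import brentq

def threshold_for_sparsity(sparsity_of_threshold, target, t_lo=-5.0, t_hi=5.0):
    """Invert a monotonically decreasing sparsity-vs-threshold relation.
    `sparsity_of_threshold` is a callable returning the sparsity for a given
    threshold; `target` is the sparsity level imposed in the simulations."""
    return brentq(lambda t: sparsity_of_threshold(t) - target, t_lo, t_hi)

# Example with a toy, monotonically decreasing relation (a logistic curve):
# threshold_for_sparsity(lambda t: 1.0 / (1.0 + 2.718 ** t), target=0.1)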

Simulations

The mathematical model described above was simulated with a network of 15,000 DG cells and 500 CA3 cells. A virtual rat explores a continuous two-dimensional space, intended to represent a square environment but realized as a torus, with periodic boundary conditions. For the numerical estimation of mutual information, the environment is discretized into a grid of locations, whereas trajectories are in continuous space, albeit in discretized time steps. In each time step (intended to correspond roughly to half a theta cycle), the virtual rat moves half a grid unit in a direction similar to that of the previous time step, with a small amount of noise. To allow construction of a full localization matrix with good statistics, simulations are typically run for 400,000 time steps (whereas for the simplified, translationally invariant matrix 5,000 steps would be sufficient). The periodic boundary conditions avoid border effects; the longest possible distance between any two locations is hence equal to 14.1 grid units.
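A minimal Python sketch of such a trajectory is given below; the grid size, the directional noise and the initial conditions are illustrative assumptions, and only the general scheme (a smooth random walk on a torus, later discretized into spatial bins) follows the description above.

import numpy as np

rng = np.random.default_rng(0)

GRID = 20            # bins per side of the square arena (assumed value)
STEP = 0.5           # half a grid unit per time step, as in the text
N_STEPS = 400_000    # length of a full-localization-matrix run
TURN_SD = 0.2        # directional noise per step (assumed)

pos = np.array([GRID / 2.0, GRID / 2.0])      # continuous position, grid units
heading = rng.uniform(0.0, 2.0 * np.pi)
positions = np.empty((N_STEPS, 2))

for t in range(N_STEPS):
    heading += rng.normal(0.0, TURN_SD)       # direction similar to previous step
    pos = (pos + STEP * np.array([np.cos(heading), np.sin(heading)])) % GRID
    positions[t] = pos                        # periodic (toroidal) boundaries

bins = np.floor(positions).astype(int)        # discretized positions, 0..GRID-1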

DG place fields

After assigning a number of firing fields to each DG unit, according to the distributions of models A, B and C, we assign to each field a randomly chosen center. The shape of the field is then given by a Gaussian bell with that center. The tails of the Gaussian are truncated to zero where the distance from the center exceeds a fixed radius, determined by the ratio between the area of the field and the area of the environment. In the standard model, only about 3 percent of the DG units, on average, are active in a given environment, in agreement with experimental findings [24]. The firing of DG units is affected neither by noise nor by any further threshold. Peak firing, at the center of the field, is conventionally set to a fixed value, but DG units can fire at higher levels if they are assigned two or more overlapping fields.
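The construction of the DG firing maps can be sketched in Python as follows. The 3 percent activity level follows the figure quoted above, while the grid size, the field-number distribution, the field area and the width of the Gaussian are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

N_DG, GRID = 15000, 20         # DG units; arena bins per side (assumed)
P_ACTIVE = 0.03                # ~3% of granule cells active in an environment
FIELD_AREA_FRAC = 0.05         # assumed ratio of field area to arena area
RADIUS = np.sqrt(FIELD_AREA_FRAC * GRID**2 / np.pi)   # truncation radius
SIGMA = RADIUS / 2.0           # width of the Gaussian bell (assumed)

active = rng.random(N_DG) < P_ACTIVE
n_fields = np.where(active, 1 + rng.poisson(1.0, N_DG), 0)   # illustrative
centers = [rng.uniform(0.0, GRID, size=(k, 2)) for k in n_fields]

def dg_rate(unit, x, y):
    """Firing rate of one DG unit: a sum of radius-truncated Gaussian bells."""
    rate = 0.0
    for cx, cy in centers[unit]:
        dx = (x - cx + GRID / 2.0) % GRID - GRID / 2.0   # shortest torus offset
        dy = (y - cy + GRID / 2.0) % GRID - GRID / 2.0
        d2 = dx * dx + dy * dy
        if d2 <= RADIUS**2:                              # tails truncated to zero
            rate += np.exp(-d2 / (2.0 * SIGMA**2))       # peak value 1 per field
    return rate                   # overlapping fields add, allowing higher peaks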

CA3 activation

CA3 units fire according to Eq. 2: the firing of a CA3 unit is a linear function of the total incoming DG input, distorted by a noise term. The noise is drawn from a Gaussian distribution centered on zero, with fixed variance, and it changes for each unit and each time step. A threshold is imposed in the simulations to model the action of inhibition, under the hypothesis that inhibition serves to adjust the sparsity of CA3 activity to its required value. The sparsity is defined as

[equation displayed as an image in the original]

and it is fixed to a predefined value. This implies that the activity of the CA3 cell population is under tight inhibitory control.
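In Python, the CA3 activation step and the sparsity constraint might look like the sketch below. The figure of 46 mossy fiber contacts per CA3 unit is the anatomical number cited in the Discussion; the synaptic weights, noise level and threshold search are illustrative assumptions, and the sparsity measure used is the standard one from the attractor-network literature.

import numpy as np

rng = np.random.default_rng(2)

N_DG, N_CA3, C_MF = 15000, 500, 46      # 46 MF contacts per CA3 unit (anatomy)
NOISE_SD = 0.1                          # assumed noise standard deviation

# fixed random mossy fiber connectivity; uniform weights for simplicity
mf_inputs = np.array([rng.choice(N_DG, size=C_MF, replace=False)
                      for _ in range(N_CA3)])

def ca3_activity(dg_rates, threshold):
    """Threshold-linear CA3 response to one vector of DG firing rates."""
    drive = dg_rates[mf_inputs].sum(axis=1)          # summed MF input per unit
    h = drive + rng.normal(0.0, NOISE_SD, N_CA3)     # fresh noise at every step
    return np.maximum(h - threshold, 0.0)

def sparsity(eta):
    """Standard sparsity measure: squared mean activity over mean square activity."""
    m2 = float(np.mean(eta ** 2))
    return float(np.mean(eta)) ** 2 / m2 if m2 > 0.0 else 0.0

# The threshold standing in for inhibition is then adjusted (e.g. with the
# root-finding step sketched in the previous subsection) until the sparsity,
# averaged over many time steps, matches the required value.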

The decoding procedure and information extraction

At each time step, the firing vector of a set of CA3 units is compared to the average vectors recorded at each position of the grid, for the same sample, in a test trial (these are called template vectors). The comparison is made by calculating the Euclidean distance between the current vector and each template, and the position of the closest template is taken to be the decoded position at that time step, for that sample. This procedure has been termed maximum likelihood Euclidean distance decoding [32]. The frequencies of each pair of decoded and real positions are compiled into a so-called “confusion matrix”, or localization matrix, which reflects the ensemble of conditional probabilities of decoding a position given the actual one, for that set of units. Should decoding “work” perfectly, in the sense of always detecting the correct position in space of the virtual rat, the confusion matrix would be the identity matrix. From the confusion matrix obtained at the end of the simulation, the amount of information is extracted and plotted versus the number of CA3 units in the set. We averaged extensively over CA3 samples, as there are large fluctuations from sample to sample; i.e., for each given number of CA3 units we randomly picked several different groups of CA3 units and then averaged the mutual information values obtained. In all the results reported we also averaged over 3–4 simulation runs with different random seeds, i.e. over different trajectories. The same procedure leading to the information curve was repeated for different values of the parameters. In all the information measures reported, we also corrected for the limited sampling bias, as discussed by [31]. In our case of spatial information, the bias is essentially determined by the spatial binning we used and by the decoding method [52].
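A minimal Python sketch of the decoding and information-extraction steps is given below; the template construction and the bias correction are only indicated in comments, and the function names are ours.

import numpy as np

def decode_position(templates, firing_vector):
    """Maximum likelihood Euclidean distance decoding: index of the closest template."""
    d2 = ((templates - firing_vector) ** 2).sum(axis=1)
    return int(np.argmin(d2))

def information_from_confusion(counts):
    """Mutual information (bits) between actual and decoded position,
    from a matrix of joint counts; no limited-sampling correction applied here."""
    p = counts / counts.sum()
    p_actual = p.sum(axis=1, keepdims=True)
    p_decoded = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (p_actual @ p_decoded)[nz])).sum())

# During a simulated session one accumulates, at every time step,
#     confusion[actual_bin, decoded_bin] += 1
# separately for each sample of CA3 units, then subtracts the limited-sampling
# bias (as in ref. [31]) before averaging the information over samples and runs.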

The nature of the maximum likelihood decoding procedure should be kept in mind to better understand the discrepancy between the information estimated from simulations (with the procedure based on the full matrix) and that calculated analytically. The analytical calculation distinguishes in a clear-cut manner so-called annealed variables, which are interpreted as “fast” noise and are averaged in computing the relation between position and neuronal activity, from so-called quenched variables, which are interpreted as frozen disorder and are averaged over only later, in computing the average entropy, free energy or mutual information [27]. When maximum likelihood decoding is used, instead, the localization matrix relating actual and decoded position effectively averages only trial-to-trial variability, i.e. the noise that occurs on intermediate time scales. The variability on genuinely fast time scales is suppressed, in fact, by the maximum likelihood operation, which acts as a sort of temporal low-pass filter with a cut-off time equal to one time step. This suppression of part of the annealed noise leads to larger information values extracted from the simulations, and hence to the notion of “dark” information. In the real system, the spiking nature of neuronal activity may induce a similar cut-off, although its quantitative relation to the one-time-step cut-off in the simulations (here intended to be half a theta cycle) remains to be firmly established.

Fitting

We fit the information curves obtained in the simulations to exponentially saturating functions of the number of CA3 units sampled, in order to obtain the values of the two most relevant parameters that describe their shape: the initial slope (i.e. the average information conveyed by the activity of individual units) and the total amount of information (i.e. the asymptotic saturation value). The function used for the fit is

[Equation 27; displayed as an image in the original]

In most cases the fit was in excellent agreement with individual data points, as expected on the basis of previous analyses [28].
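In Python, the fit can be sketched as follows. Since Eq. 27 is not reproduced above, the functional form used here is the commonly adopted saturating exponential parametrized by its initial slope and its asymptote, which we assume matches the form used for the fits.

import numpy as np
from scipy.optimize import curve_fit

def info_curve(n_units, slope, total):
    """Exponentially saturating curve: initial slope `slope`, asymptote `total`."""
    return total * (1.0 - np.exp(-slope * n_units / total))

# Given arrays n_sampled (numbers of CA3 units per sample) and info_bits
# (the corresponding information values), the two shape parameters follow from:
#     (slope_fit, total_fit), _ = curve_fit(info_curve, n_sampled, info_bits,
#                                           p0=(0.1, 5.0))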

Acknowledgments

We had valuable discussions with Jill Leutgeb, Bailu Si and Federico Stella.

Footnotes

The authors have declared that no competing interests exist.

This work was partially supported by the EU Spacebrain grant. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Treves A, Tashiro A, Witter MP, Moser EI. What is the mammalian dentate gyrus good for? Neuroscience. 2008;154:1155–1172. [PubMed]
2. McNaughton BL, Morris RGM. Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends Neurosci. 1987;10:408–415.
3. Rolls ET. Functions of neuronal networks in the hippocampus and neocortex in memory. In: Byrne JH, Berry WO, editors. Neural Models of Plasticity: Experimental and Theoretical Approaches. San Diego: Academic Press; 1989. pp. 240–265.
4. Treves A, Rolls ET. Computational constraints suggest the need for two distinct input systems to the hippocampal CA3 network. Hippocampus. 1992;2:189–199. [PubMed]
5. Marr D. Simple memory: A theory for archicortex. Philos Trans R Soc Lond B Biol Sci. 1971;262:23–81. [PubMed]
6. Lassalle JM, Bataille T, Halley H. Reversible inactivation of the hippocampal mossy fiber synapses in mice impairs spatial learning, but neither consolidation nor memory retrieval, in the Morris navigation task. Neurobiol Learn Mem. 2000;73:243–257. [PubMed]
7. Lee I, Kesner RP. Encoding versus retrieval of spatial memory: Double dissociation between the dentate gyrus and the perforant path inputs into CA3 in the dorsal hippocampus. Hippocampus. 2004;14:66–76. [PubMed]
8. Leutgeb S, Leutgeb JK, Barnes CA, Moser EI, McNaughton BL, et al. Independent codes for spatial and episodic memory in hippocampal neuronal ensembles. Science. 2005;309:619–623. [PubMed]
9. Samsonovich A, McNaughton BL. Path integration and cognitive mapping in a continuous attractor neural network model. J Neurosci. 1997;17:5900–5920. [PubMed]
10. Battaglia FP, Treves A. Attractor neural networks storing multiple space representations: A model for hippocampal place fields. Phys Rev E. 1998;58:7738–7753.
11. Stringer MS, Rolls ET. Invariant object recognition in the visual system with novel views of 3D objects. Neural Comput. 2002;14:2585–2596. [PubMed]
12. Tsodyks M, Sejnowski T. Associative memory and hippocampal place cells. Int J Neural Syst. 1995;6:81–86.
13. Hamaguchi K, Hatchett JPL. Analytic solution of neural network with disordered lateral inhibition. Phys Rev E Stat Nonlin Soft Matter Phys. 2006;73:art. 051104. [PubMed]
14. Papp G, Witter MP, Treves A. The CA3 network as a memory store for spatial representations. Learn Mem. 2007;14:732–744. [PubMed]
15. Roudi Y, Treves A. Representing where along with what information in a model of a cortical patch. PLoS Comput Biol. 2008;4:e1000012. doi: 10.1371/journal.pcbi.1000012. [PMC free article] [PubMed]
16. Hafting T, Fyhn M, Molden S, Moser MB, Moser EI. Microstructure of a spatial map in the entorhinal cortex. Science. 2005;436:801–806. [PubMed]
17. Jung MW, Wiener SI, McNaughton BL. Comparison of spatial firing characteristics of units in dorsal and ventral hippocampus of the rat. J Neurosci. 1994;14:7347–7356. [PubMed]
18. Leutgeb JK, Leutgeb S, Moser MB, Moser EI. Pattern separation in the dentate gyrus and CA3 of the hippocampus. Science. 2007;315:961–966. [PubMed]
19. Aimone JB, Wiskott L. Computational modeling of neurogenesis. Adult Neurogenesis. 2008;52:463–481.
20. Treves A, Rolls ET. What determines the capacity of autoassociative memories in the brain? Network: Computation in Neural Systems. 1991;2:371–397. doi: 10.1088/0954-898X/2/4/004.
21. Wilson MA, McNaughton BL. Dynamics of the hippocampal ensemble code for space. Science. 1993;261:1055–1058. [PubMed]
22. Treves A. Graded-response neurons and information encodings in autoassociative memories. Phys Rev A Gen Phys. 1990;42:2418–2430. [PubMed]
23. Kropff E, Treves A. The emergence of grid cells: Intelligent design or just adaptation? Hippocampus. 2008;18:1256–1269. [PubMed]
24. Chawla MK, Guzowski JF, Ramirez-Amaya V, Lipa P, Hoffman KL, et al. Sparse, environmentally selective expression of arc rna in the upper blade of the rodent fascia dentata by brief spatial experience. Hippocampus. 2005;15:579–586. [PubMed]
25. Si B, Treves A. The role of competitive learning in the generation of DG fields from EC inputs. Cogn Neurodyn. 2009;3:119–187. [PMC free article] [PubMed]
26. Amaral DG, Ishizuka N, Claiborne B. Neurons, numbers and the hippocampal network. Prog Brain Res. 1990;83:1–11. [PubMed]
27. Mezard M, Parisi G, Virasoro MA. Spin glasses and beyond. World Scientific; 1986.
28. Samengo I, Treves A. Representational capacity of a set of independent neurons. Phys Rev E Stat Nonlin Soft Matter Phys. 2000;63:art. 011910. [PubMed]
29. DelPrete V, Treves A. Theoretical model of neuronal population coding of stimuli with both continuous and discrete dimensions. Phys Rev E Stat Nonlin Soft Matter Phys. 2001;64:art. 021912. [PubMed]
30. Leutgeb S, Leutgeb JK, Treves A, Moser MB, Moser EI. Distinct ensemble codes in hippocampal areas CA3 and CA1. Science. 2004;305:1295–1298. [PubMed]
31. Treves A, Panzeri S. The upward bias in measures of information derived from limited data samples. Neural Comput. 1995;2:399–407.
32. Rolls ET, Treves A, Tovee MJ. The representational capacity of the distributed encoding of information provided by populations of neurons in primate temporal visual cortex. Exp Brain Res. 1997;114:149–162. [PubMed]
33. Nicoll RA, Schmitz D. Synaptic plasticity at hippocampal mossy fibre synapses. Nat Rev Neurosci. 2005;6:863–876. [PubMed]
34. Treves A. Computational constraints between retrieving the past and predicting the future, and the CA3-CA1 differentiation. Hippocampus. 2004;14:539–556. [PubMed]
35. Kuhn HG, Dickinson-Anson H, Gage FH. Neurogenesis in the dentate gyrus of the adult rat: age-related decrease of neuronal progenitor proliferation. J Neurosci. 1996;16:2027–2033. [PubMed]
36. Redish AD, Battaglia FP, Chawla MK, Ekstrom AD, Gerrard JL, et al. Independence of firing correlates of anatomically proximate hippocampal pyramidal cells. J Neurosci. 2001;21: RC134:1–6. [PubMed]
37. Bakker A, Kirwan CB, Miller M, Stark CEL. Pattern separation in the human hippocampal CA3 and dentate gyrus. Science. 2008;319:1640–1642. [PMC free article] [PubMed]
38. Panzeri S, Treves A. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems. 1996;7:87–107.
39. Aimone JB, Wiles J, Gage FH. Potential role for adult neurogenesis in the encoding of time in new memories. Nat Neurosci. 2006;9:723–727. [PubMed]
40. Becker S. A computational principle for hippocampal learning and neurogenesis. Hippocampus. 2005;15:722–738. [PubMed]
41. Wiskott L, Rasch MJ, Kempermann G. A functional hypothesis for adult hippocampal neurogenesis: Avoidance of catastrophic interference in the dentate gyrus. Hippocampus. 2006;16:329–343. [PubMed]
42. Kee N, Teixeira CM, Wang AH, Frankland PW. Preferential incorporation of adult-generated granule cells into spatial memory networks in the dentate gyrus. Nat Neurosci. 2007;10:355–362. [PubMed]
43. Ge S, Yang C, Hsu K, Ming G, Song H. A critical period for enhanced synaptic plasticity in newly generated neurons of the adult brain. Neuron. 2007;54:559–566. [PMC free article] [PubMed]
44. Buzzetti RA, Marrone DF, Schaner MJ, Chawla MK, Bohanick JD, et al. Do dentate gyrus granule cells tag time-specific experiences? Soc Neurosci Abstr. 2007;744:16.
45. Tashiro A, Makino H, Gage FH. Experience-specific functional modification of the dentate gyrus through adult neurogenesis: A critical period during an immature stage. J Neurosci. 2007;27:3252–3259. [PubMed]
46. Cameron HA, Mckay RDG. Adult neurogenesis produces a large pool of new granule cells in the dentate gyrus. J Comp Neurol. 2001;435:406–417. [PubMed]
47. McDonald HY, Wojtowicz JM. Dynamics of neurogenesis in the dentate gyrus of adult rats. Neurosci Lett. 2005;385:70–75. [PubMed]
48. McHugh TJ, Jones MW, Quinn JJ, Balthasar N, Coppari R, et al. Dentate gyrus nmda receptors mediate rapid pattern separation in the hippocampal network. Science. 2007;317:94–99. [PubMed]
49. Nadal J, Parga N. Information processing by a perceptron in an unsupervised learning task. Network: Computation in Neural Systems. 1993;4:295–312. doi: 10.1088/0954-898X/4/3/004.
50. Treves A. Quantitative estimate of the information relayed by the schaffer collaterals. J Comput Neurosci. 1995;2:259–272. [PubMed]
51. DelPrete V, Treves A. Replica symmetric evaluation of the information transfer in a two-layer network in the presence of continuous and discrete stimuli. Phys Rev E Stat Nonlin Soft Matter Phys. 2002;65:art. 041918. [PubMed]
52. Panzeri S, Treves A, Schultz S, Rolls ET. On decoding the responses of a population of neurons from short time windows. Neural Comput. 1999;11:1553–1577. [PubMed]
