Front Comput Neurosci. 2010; 4: 9.
Published online 2010 April 19. Prepublished online 2009 December 21. doi: 10.3389/fncom.2010.00009
PMCID: PMC2870944

Pooling and Correlated Neural Activity


Correlations between spike trains can strongly modulate neuronal activity and affect the ability of neurons to encode information. Neurons integrate inputs from thousands of afferents. Similarly, a number of experimental techniques are designed to record pooled cell activity. We review and generalize a number of previous results that show how correlations between cells in a population can be amplified and distorted in signals that reflect their collective activity. The structure of the underlying neuronal response can significantly impact correlations between such pooled signals. Therefore care needs to be taken when interpreting pooled recordings, or modeling networks of cells that receive inputs from large presynaptic populations. We also show that the frequently observed runaway synchrony in feedforward chains is primarily due to the pooling of correlated inputs.

Keywords: correlation, pooling, synchrony, feedforward networks, synfire chains


Cortical neurons integrate inputs from thousands of afferents. Similarly, a variety of experimental techniques record the pooled activity of large populations of cells. It is therefore important to understand how the structured response of a neuronal network is reflected in the pooled activity of cell groups.

It is known that weak dependencies between the responses of cell pairs in a population can have a significant impact on the variability and signal-to-noise ratio of the pooled signal (Shadlen and Newsome, 1998; Salinas and Sejnowski, 2000; Moreno-Bote et al., 2008). It has also been observed that weak correlations between cells in two populations can cause much stronger correlations between the pooled activity of the populations (Bedenbaugh and Gerstein, 1997; Chen et al., 2006; Gutnisky and Josić, 2010; Renart et al., 2010). We give a simple example of this effect in Figure 1C: Weak correlations were introduced between the spiking activity of cells in two non-overlapping presynaptic pools, each providing input to a postsynaptic cell (see diagram in Figure 1B). The activity of pairs of excitatory cells, and of pairs of inhibitory cells, was correlated, but excitatory–inhibitory pairs were uncorrelated. Even without shared inputs and with background noise, pooling resulted in strong correlations between the postsynaptic membrane voltages. The connectivity in the presynaptic network was irrelevant – it only mattered that the inputs to the downstream neurons reflected the pooled activity of the afferent populations. A similar effect can cause large correlations between recordings of multiunit activity (MUA) or recordings with voltage sensitive dyes (VSDs), even when correlations between cells in the recorded populations are small (Bedenbaugh and Gerstein, 1997; Chen et al., 2006; Stark et al., 2008). The effect is the same, but in this case pooling occurs at the level of a recording device rather than a downstream neuron (compare Figures 1A,B).

Figure 1
Models of pooled recordings and the effects of pooling on correlations. (A) Pooling in experimental recordings. Cells from different populations are correlated with average correlation coefficient ρ¯12 and cells from the same population ...

We present a systematic overview, as well as extensions and applications, of a number of previous observations related to this phenomenon. Using a linear model, we start by examining the potential effects of pooling on recordings from large populations obtained using VSD or MUA recording techniques. These techniques are believed to reflect the pooled postsynaptic activity of groups of cells. We extend earlier models introduced to examine the impact of pooling on correlations (Bedenbaugh and Gerstein, 1997; Chen et al., 2006; Nunez and Srinivasan, 2006), and show that heterogeneities in the presynaptic pools can have subtle effects on correlations between pooled signals.

Since neurons respond to input from large presynaptic populations, pooling also impacts the activity of single cells and cell pairs. As observed in Figure 1C, pooling can inflate weak correlations between afferents. However, excitatory–inhibitory correlations (Okun and Lampl, 2008) can counteract this amplification, as shown in Figure 1D (Hertz, 2010; Renart et al., 2010). We examine these effects analytically by modeling the subthreshold activity of postsynaptic cells as a filtered version of the inputs they receive (Tetzlaff et al., 2008). The impact of correlated subthreshold activity on the output spiking statistics is a nontrivial question which we address only briefly (Moreno-Bote and Parga, 2006; de la Rocha et al., 2007; Ostojić et al., 2009).

The effects of pooling provide a simple explanation for certain aspects of the dynamics of feedforward chains. Simulations and in vitro experiments show that layered feedforward architectures give rise to a robust increase in synchronous spiking from layer to layer (Diesmann et al., 1999; Litvak et al., 2003; Reyes, 2003; Doiron et al., 2006; Kumar et al., 2008). We describe how output correlations in one layer impact correlations between the pooled inputs to the next layer. This approach is used to derive a mapping that describes how correlations develop across layers (Tetzlaff et al., 2003; Renart et al., 2010), and to illustrate that the pooling of correlated inputs is the primary mechanism responsible for the development of synchrony in feedforward chains. Examining how correlations are mapped between layers also helps explain why asynchronous states are rarely observed in feedforward networks in the absence of strong background noise (van Rossum et al., 2002; Vogels and Abbott, 2005). This is in contrast to recurrent networks which can display stable asynchronous states (Hertz, 2010; Renart et al., 2010) similar to those observed in vivo (Ecker et al., 2010).

Materials and Methods

Correlations between stochastic processes

The cross-covariance of a pair of stationary stochastic processes, x(t) and y(t), is C_xy(t) = cov(x(s), y(s + t)). The auto-covariance function, C_xx(t), is the cross-covariance between a process and itself. The cross- and auto-covariance functions measure second order dependencies at time lag t between two processes, or a process and itself. We quantify the total magnitude of interactions over all time using the asymptotic statistics

$$\sigma_x^2=\int_{-\infty}^{\infty}C_{xx}(t)\,dt,\qquad \gamma_{xy}=\int_{-\infty}^{\infty}C_{xy}(t)\,dt,\qquad(1)$$

and the asymptotic correlation

$$\rho_{xy}=\frac{\gamma_{xy}}{\sigma_x\sigma_y}.\qquad(2)$$

While the asymptotic correlation, ρ_xy, measures correlations between x(t) and y(t) over large timescales, the auto- and cross-covariance functions determine the timescale of these dependencies.

Correlations between sums of random variables

Given two collections of correlated random variables {x_i}_{i=1}^{n_x} and {y_j}_{j=1}^{n_y}, define the pooled variables X = Σ_i x_i and Y = Σ_j y_j. Since covariance is bilinear (cov(Σ_i x_i, Σ_j y_j) = Σ_{i,j} cov(x_i, y_j)), the variance and covariance of the pooled variables are

$$\sigma_X^2=\sum_{i=1}^{n_x}\sum_{\substack{j=1\\ j\neq i}}^{n_x}\sigma_{x_i}\sigma_{x_j}\rho_{x_ix_j}+\sum_{i=1}^{n_x}\sigma_{x_i}^2,\qquad\text{and}\qquad \gamma_{XY}=\sum_{i=1}^{n_x}\sum_{j=1}^{n_y}\sigma_{x_i}\sigma_{y_j}\rho_{x_iy_j},$$

and similarly for σ_Y².

Using these expressions along with some algebraic manipulation, the correlation coefficient, ρ_XY = γ_XY/(σ_X σ_Y), between the pooled variables can be written as

$$\rho_{XY}=\frac{\bar\rho_{xy}}{\sqrt{\left[\left(1-\frac{1}{n_x}\right)w_x\bar\rho_{xx}+\frac{\bar v_x}{n_x\,\overline{\sigma_x\sigma_y}}\right]\left[\left(1-\frac{1}{n_y}\right)w_y\bar\rho_{yy}+\frac{\bar v_y}{n_y\,\overline{\sigma_x\sigma_y}}\right]}},\qquad(3)$$

where

$$w_x=\frac{\overline{\sigma_x\sigma_x}}{\overline{\sigma_x\sigma_y}},\qquad \bar v_x=\frac{1}{n_x}\sum_{i=1}^{n_x}\sigma_{x_i}^2,\qquad \overline{\sigma_x\sigma_y}=\frac{1}{n_xn_y}\sum_{i=1}^{n_x}\sum_{j=1}^{n_y}\sigma_{x_i}\sigma_{y_j},$$

$$\overline{\sigma_x\sigma_x}=\frac{1}{n_x(n_x-1)}\sum_{i=1}^{n_x}\sum_{\substack{j=1\\ j\neq i}}^{n_x}\sigma_{x_i}\sigma_{x_j},\qquad \bar\rho_{xy}=\frac{1}{n_xn_y\,\overline{\sigma_x\sigma_y}}\sum_{i=1}^{n_x}\sum_{j=1}^{n_y}\sigma_{x_i}\sigma_{y_j}\rho_{x_iy_j},$$

$$\bar\rho_{xx}=\frac{1}{n_x(n_x-1)\,\overline{\sigma_x\sigma_x}}\sum_{i=1}^{n_x}\sum_{\substack{j=1\\ j\neq i}}^{n_x}\sigma_{x_i}\sigma_{x_j}\rho_{x_ix_j},$$

and similarly for w_y, v̄_y, the average \overline{σ_yσ_y}, and ρ̄_yy. In deriving Eq. (3) we assumed that all pairwise statistics are uniformly bounded away from zero in the asymptotic limit.

Each overlined term above is a population average. Notably, ρ̄_xy represents the average correlation between x_i and y_j pairs, weighted by the product of their standard deviations, and similarly for ρ̄_xx and ρ̄_yy. Correlations between weighted sums can be obtained by substituting x_i → w_{x_i} x_i and y_j → w_{y_j} y_j for weights w_{x_i} and w_{y_j}, and making the appropriate changes to the terms in the equation above (e.g., σ_{x_i} → |w_{x_i}| σ_{x_i} and ρ_{x_iy_j} → sign(w_{x_i} w_{y_j}) ρ_{x_iy_j}). Overlap between the two populations can be modeled by taking ρ_{x_iy_j} = 1 for some pairs.
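
These sums are straightforward to verify numerically. The sketch below (all numbers are made up for illustration) computes the pooled correlation both by direct double sums over all pairs and from the population-averaged statistics, and checks that the two routes agree.

```python
import math

# Illustrative check of the pooled-correlation bookkeeping: compute
# rho_XY by direct double sums over all pairs, and again from the
# population-averaged statistics. All values below are made up.
sx = [1.0, 1.2, 0.8, 1.1]              # standard deviations sigma_{x_i}
sy = [0.9, 1.3, 1.0]                   # standard deviations sigma_{y_j}
nx, ny = len(sx), len(sy)
r_xx, r_yy, r_xy = 0.10, 0.12, 0.05    # uniform pairwise correlations

# Route 1: direct sums for var(X), var(Y), and cov(X, Y).
var_x = sum(s * s for s in sx) + r_xx * sum(
    sx[i] * sx[j] for i in range(nx) for j in range(nx) if i != j)
var_y = sum(s * s for s in sy) + r_yy * sum(
    sy[i] * sy[j] for i in range(ny) for j in range(ny) if i != j)
cov_xy = r_xy * sum(si * sj for si in sx for sj in sy)
rho_direct = cov_xy / math.sqrt(var_x * var_y)

# Route 2: the population-averaged form. With uniform pairwise
# correlations, the weighted averages reduce to the uniform values.
vbar_x = sum(s * s for s in sx) / nx
vbar_y = sum(s * s for s in sy) / ny
ss_xx = sum(sx[i] * sx[j] for i in range(nx)
            for j in range(nx) if i != j) / (nx * (nx - 1))
ss_yy = sum(sy[i] * sy[j] for i in range(ny)
            for j in range(ny) if i != j) / (ny * (ny - 1))
ss_xy = sum(si * sj for si in sx for sj in sy) / (nx * ny)
rho_avg = (nx * ny * ss_xy * r_xy) / math.sqrt(
    (nx * vbar_x + nx * (nx - 1) * ss_xx * r_xx)
    * (ny * vbar_y + ny * (ny - 1) * ss_yy * r_yy))

print(rho_direct, rho_avg)             # the two routes agree
```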

Assuming that variances are homogeneous within each population, that is σ_{x_i} = σ_x and σ_{y_j} = σ_y for i = 1,…,n_x and j = 1,…,n_y, simplifies these expressions. In particular, v̄_x = \overline{σ_xσ_x} = σ_x², \overline{σ_xσ_y} = σ_x σ_y, and

$$\rho_{XY}=\frac{\bar\rho_{xy}}{\sqrt{\left[\bar\rho_{xx}+\frac{1-\bar\rho_{xx}}{n_x}\right]\left[\bar\rho_{yy}+\frac{1-\bar\rho_{yy}}{n_y}\right]}}.\qquad(4)$$

Assuming further that the populations are symmetric, σ_x = σ_y = σ, n_x = n_y = n, and ρ̄_xx = ρ̄_yy, the expression above simplifies to

$$\rho_{XY}=\frac{\rho_b}{\rho_w+\frac{1-\rho_w}{n}},\qquad(5)$$
where ρ_b = ρ̄_xy is the average pairwise correlation between the two populations and ρ_w = ρ̄_xx = ρ̄_yy is the average pairwise correlation within each population. Eq. (5) was derived in Bedenbaugh and Gerstein (1997) in an examination of correlations between multiunit recordings. In Chen et al. (2006), a version of Eq. (5) with ρ_w = ρ_b is derived in the context of correlations between two VSD signals. The asymptotic ρ_xy → 0 limit when ρ_w = ρ_b is discussed in Renart et al. (2010).
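
The amplification implicit in this symmetric form is easy to make concrete. A minimal sketch, assuming the homogeneous expression ρ_XY = ρ_b/(ρ_w + (1 − ρ_w)/n) discussed above (parameter values are illustrative):

```python
def pooled_corr(rho_b, rho_w, n):
    """Correlation between two pooled signals of n cells each, for
    symmetric populations with homogeneous variances (the symmetric
    form discussed in the text)."""
    return rho_b / (rho_w + (1.0 - rho_w) / n)

# Equal within- and between-population correlations of only 0.05:
# the pooled correlation approaches 1 as the populations grow.
for n in (1, 10, 100, 10000):
    print(n, pooled_corr(0.05, 0.05, n))

# Larger "within" than "between" correlations cap the pooled
# correlation near the ratio rho_b / rho_w (here 0.5), however
# large the populations.
print(pooled_corr(0.05, 0.10, 10000))
```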

Note that the results above hold for correlations computed over arbitrary time windows. We concentrate on infinite windows, and discuss extensions in the Appendix.

Neuron model

In the second part of the presentation we consider two excitatory and two inhibitory input populations projecting to two postsynaptic cells. The jth excitatory input to cell k is labeled e_{j,k}(t) (k = 1 or 2). Similarly, i_{j,k}(t) denotes the jth inhibitory input to cell k. Each cell receives n_e excitatory and n_i inhibitory inputs with individual rates ν_e and ν_i, respectively.

Each of the excitatory and inhibitory inputs to cell k is a stationary spike train modeled as a point process, e_{j,k}(t) = Σ_i δ(t − t_{j,k}^i) and i_{j,k}(t) = Σ_i δ(t − s_{j,k}^i), where {t_{j,k}^i} and {s_{j,k}^i} are input spike times. We assume that the spike trains are stationary in a multivariate sense (Stratonovich, 1963). The pooled excitatory and inhibitory inputs to neuron k are E_k(t) = Σ_{j=1}^{n_e} e_{j,k}(t) and I_k(t) = Σ_{j=1}^{n_i} i_{j,k}(t).

To generate correlated inputs to cells, we used the multiple interaction process (MIP) method (Kuhn et al., 2003), then jittered each spike time independently by a random value drawn from an exponential distribution with mean 5 ms. The resulting processes are Poisson with cross-covariance functions proportional to a double exponential, C_xy(t) ∝ e^{−|t|/5 ms}. Note that since each input is Poisson, σ_e² = ν_e and σ_i² = ν_i.
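
The input-generation step can be sketched as follows. This is an illustrative reimplementation of the MIP-plus-jitter procedure, not the article's simulation code; the function name `mip_trains` and the rates and duration are invented.

```python
import random

def mip_trains(n_trains, rate, corr, t_max, jitter_mean, rng):
    """Sketch of MIP generation: a mother Poisson process of rate
    rate/corr is thinned independently with probability corr for
    each child train, and every surviving spike is jittered by an
    exponential amount with mean jitter_mean (all illustrative)."""
    mother, t = [], 0.0
    while True:
        t += rng.expovariate(rate / corr)   # mother rate = rate/corr
        if t > t_max:
            break
        mother.append(t)
    trains = []
    for _ in range(n_trains):
        train = sorted(s + rng.expovariate(1.0 / jitter_mean)
                       for s in mother if rng.random() < corr)
        trains.append(train)
    return trains

rng = random.Random(1)
# Two 20 Hz trains over 200 s, pairwise count correlation ~0.05,
# jitter with mean 0.005 s (5 ms).
trains = mip_trains(2, rate=20.0, corr=0.05, t_max=200.0,
                    jitter_mean=0.005, rng=rng)
for tr in trains:
    print(len(tr) / 200.0)              # empirical rate, near 20 Hz
```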

While the dynamics of the afferent population were not modeled explicitly, the response of the two downstream neurons was obtained using a conductance-based IF model. The membrane potentials of the neurons were described by

$$C_m\frac{dV_k}{dt}=-g_L(V_k-V_L)-g_{E_k}(t)(V_k-V_E)-g_{I_k}(t)(V_k-V_I),\qquad(6)$$

with excitatory and inhibitory conductances determined by g_{E_k}(t) = ℰ(E_k * α_e)(t) and g_{I_k}(t) = ℐ(I_k * α_i)(t), where * denotes convolution. We used unit-area synaptic responses of the form α_e(t) = τ_e^{−2} t e^{−t/τ_e} Θ(t) and α_i(t) = τ_i^{−2} t e^{−t/τ_i} Θ(t), where Θ(t) is the Heaviside function. The area of a single excitatory or inhibitory postsynaptic conductance (EPSC or IPSC) is therefore equal to the synaptic weight, ℰ or ℐ, with units nS·ms. This analysis can easily be extended to situations where each input, e_{j,k} or i_{j,k}, has a distinct synaptic weight.

When examining spiking activity, we assume that when Vk crosses a threshold voltage, Vth, an output spike is produced and Vk is reset to VL. When examining sub-threshold dynamics, we considered the free membrane potential without threshold.

As a measure of balance between excitation and inhibition we used (Troyer and Miller, 1997; Salinas and Sejnowski, 2000)

$$\beta=\frac{n_e\nu_e\,\mathcal{E}\,|V_E-V_L|}{n_i\nu_i\,\mathcal{I}\,|V_I-V_L|}.$$

When β = 1, the net excitation and inhibition are balanced and the mean free membrane potential equals V_L. In simulations, we set V_L = −60 mV, V_E = 0 mV, V_I = −90 mV, τ_e = 10 ms, τ_i = 20 ms, C_m = 114 pF, and g_L = 4.086 nS, giving a membrane time constant τ_m = C_m/g_L = 27.9 ms. In all simulations except those in Figure 7, the cells are balanced (β = 1).

Figure 7
Development of synchrony in feedforward networks. (A) A feedforward network with no overlap and independent, Poisson input. For excitatory cells, we set ℰ = 1.55 nS·ms and ℐ = 4.67 nS·ms. ...

The conductance-based IF neuron behaves as a nonlinear filter in the sense that the membrane potential cannot be written as a linear transformation of the inputs. However, following Kuhn et al. (2004) and Coombes et al. (2007), we derive a linear approximation to the conductance-based model. Let U = V_k − V_L, so that Eq. (6) becomes

$$C_m\frac{dU}{dt}=-\left(g_L+g_{E_k}(t)+g_{I_k}(t)\right)U+g_{E_k}(t)(V_E-V_L)+g_{I_k}(t)(V_I-V_L).$$

Define the effective membrane time constant, τ_eff ≡ C_m/E[g_L + g_{E_k}(t) + g_{I_k}(t)] = C_m/(g_L + n_e ν_e ℰ + n_i ν_i ℐ). Substituting this average value in the previous equation yields the linear approximation to the conductance-based model,

$$\frac{dU}{dt}=-\frac{U}{\tau_{\mathrm{eff}}}+J_k(t),\qquad(7)$$

where J_k(t) = (g_{E_k}(t)(V_E − V_L) + g_{I_k}(t)(V_I − V_L))/C_m is the total input current to cell k. Solving and reverting to the original variables gives the linear approximation V_k(t) = (J_k * K)(t) + V_L, where K(t) = Θ(t) e^{−t/τ_eff} is the kernel of the linear filter induced by Eq. (7).
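
The quality of this linearization can be probed numerically. The sketch below drives the full conductance equation and its linear approximation with the same deterministic, weakly modulated conductances and tracks the worst-case discrepancy in the free membrane potential. Membrane parameters follow the text; the mean conductances gE0, gI0 and the sinusoidal drive are invented for illustration.

```python
import math

# Compare the full conductance-based free membrane potential with its
# linearization under identical, deterministic conductance drive.
VL, VE, VI = -60.0, 0.0, -90.0      # mV
Cm, gL = 114.0, 4.086               # pF, nS (so time is in ms)
gE0, gI0 = 2.0, 1.5                 # mean synaptic conductances, nS (made up)

def gE(t):  # weakly modulated excitatory conductance
    return gE0 * (1.0 + 0.1 * math.sin(2.0 * math.pi * t / 100.0))

def gI(t):  # weakly modulated inhibitory conductance
    return gI0 * (1.0 + 0.1 * math.sin(2.0 * math.pi * t / 130.0))

tau_eff = Cm / (gL + gE0 + gI0)     # effective time constant, ms

dt, T = 0.05, 1000.0
V, U = VL, 0.0                      # full model; linearized deviation
max_err, t = 0.0, 0.0
while t < T:
    dV = (-gL * (V - VL) - gE(t) * (V - VE) - gI(t) * (V - VI)) / Cm
    V += dt * dV                    # Euler step, full conductance model
    J = (gE(t) * (VE - VL) + gI(t) * (VI - VL)) / Cm
    U += dt * (-U / tau_eff + J)    # Euler step, dU/dt = -U/tau_eff + J
    max_err = max(max_err, abs(V - (U + VL)))
    t += dt

print(tau_eff, max_err)             # linearization error stays small
```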


The pooling of signals from groups of neurons can impact both recordings of population activity and the structure of inputs to postsynaptic cells. We start by discussing correlations in pooled recordings using a simple linear model. A similar model is then used to examine the impact of pooling on the statistics of inputs to cells. For simplicity we assume that all spike trains are stationary. However, non-stationary results can be obtained using similar methods, as outlined in the Section “Discussion.” Though all parameters are defined in the Materials and Methods, Tables 1 and 2 in the Appendix contain brief descriptions of parameters for quick reference. Also, Tables 3 and 4 summarize the values of parameters used for simulations throughout the article.

Table 1.
Definitions of variables pertaining to recordings.
Table 2.
Definitions of variables pertaining to downstream cells. Subscripts e and E (i and I) denote excitation (inhibition).
Table 3.
Parameter values for simulations of two downstream cells. For fields with “var,” various values of the indicated parameters were used and are described in the captions. For all simulations, VL = −60 mV, ...
Table 4.
Parameter values for simulations of feedforward networks. The parameter ν_0 is the input rate to the first layer, (ℰ, ℐ)_e indicates synaptic weights for excitatory cells, and (ℰ, ℐ)_i for inhibitory cells. For all ...

Correlations between pooled recordings

Pooling can impact correlations between recordings of population activity obtained from voltage sensitive dyes (VSDs), multi-unit recordings, and other techniques. Such signals might each represent the summed activity of hundreds or thousands of neurons. Let two recorded signals, X_1(t) and X_2(t), represent the weighted activity of cells in two populations (see diagram in Figure 1A). If we assume homogeneity in the input variances and equal sizes of the recorded populations, Eq. (4) gives the correlation between the recorded signals

$$\rho_{X_1X_2}=\frac{\bar\rho_{12}}{\sqrt{\left[\bar\rho_{11}+\frac{1-\bar\rho_{11}}{n}\right]\left[\bar\rho_{22}+\frac{1-\bar\rho_{22}}{n}\right]}}.\qquad(8)$$

Here n represents the number of neurons recorded, ρ̄_kk, k = 1,2, represents the average correlation between cells contributing to signal X_k(t), and ρ̄_12 represents the average correlation between cells contributing to different signals. The averages are weighted so that cells that contribute more strongly to the recording, such as those closer to the recording site, contribute more to the average correlations (see Materials and Methods). Cells common to both recorded populations can be modeled by setting the corresponding correlation coefficients to unity. A form of Eq. (8) with ρ̄_11 = ρ̄_22 was derived by Bedenbaugh and Gerstein (1997).

When the two recording sites are nearby, so that ρ̄_12 ≈ ρ̄_11 ≈ ρ̄_22, even small correlations between individual cells are amplified by pooling, so that the correlations between the recorded signals can be close to 1. This effect was observed in experiments and explained in similar terms in Stark et al. (2008).
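
To put numbers on this, a minimal sketch using the homogeneous symmetric form of the pooled-recording correlation (population size and pairwise correlation values are illustrative):

```python
import math

def recorded_corr(r12, r11, r22, n):
    """Correlation between two pooled recordings of n cells each,
    assuming homogeneous variances and equal population sizes (the
    symmetric form discussed in the text)."""
    return r12 / math.sqrt((r11 + (1.0 - r11) / n)
                           * (r22 + (1.0 - r22) / n))

# ~1.25e4 cells per recording, all pairwise correlations equal to
# 0.03: the recorded signals are almost perfectly correlated.
print(recorded_corr(0.03, 0.03, 0.03, 12500))

# A single cell per "recording" recovers the pairwise value.
print(recorded_corr(0.03, 0.03, 0.03, 1))
```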

A significant stimulus-dependent change in correlations between individual cells might be reflected only weakly in the correlation between the pooled signals. This can occur, for instance, in recordings of large populations when ρ̄_12, ρ̄_11, and ρ̄_22 are increased by the same factor when a stimulus is presented. Similarly, an increase in correlations between cells can actually lead to a decrease in correlations between recorded signals when ρ̄_11 and ρ̄_22 increase by a larger factor than ρ̄_12.

To illustrate these effects, we construct a simple model of stimulus-dependent correlations motivated by the experiments in Chen et al. (2006), in which VSDs were used to record the population response in visual area V1 during an attention task. In their experiments, the imaged area is divided into 64 pixels, each 0.25 mm × 0.25 mm in size. The signal recorded from each pixel represents the pooled activity of n ≈ 1.25 × 10^4 neurons.

We model correlations between the signals, X_1(t) and X_2(t), recorded from two pixels in the presence or absence of a stimulus (see Figure 2B), using a simplified model of stimulus-dependent rates and correlations. The firing rate of a cell located at distance d from the center of the retinotopic image of a stimulus is

Figure 2
The effect of pooling on recordings of stimulus dependent correlations. (A) The response amplitude of a model neuron as a function of its distance from the retinotopic image of a stimulus [Eq. (9)] with B = 0.05 and λ = 10. ...
$$r(d)=\begin{cases}B+(1-B)\left(\dfrac{1+\cos(\pi d)}{2}\right)^{\lambda} & \text{stimulus present}\\[1ex] B & \text{stimulus absent.}\end{cases}\qquad(9)$$

Here, B ∈ [0,1] represents baseline activity and λ ≥ 1 controls the rate at which activity decays with d. Both d and r were scaled so that their maximum value is 1 (see Figure 2A).

We assume that the correlation between the responses of two neurons is proportional to the geometric mean of their firing rates (de la Rocha et al., 2007; Shea-Brown et al., 2008), and that correlations decay exponentially with the distance between cells (Smith and Kohn, 2008; see however Poort and Roelfsema, 2009; Ecker et al., 2010). We therefore model the correlation between two cells as ρ_jk = S √(r(d_j) r(d_k)) e^{−αD_{j,k}}, where d_j and d_k are the distances from each cell to the center of the retinotopic image of the stimulus, D_{j,k} is the distance between cells j and k, α is the rate at which correlations decay with distance, and S ≤ 1 is a constant of proportionality.

If pixels are small compared to the scale at which correlations decay, then the average correlations between cells within the same pixel are ρ̄_11 = S r(d_1) and ρ̄_22 = S r(d_2). The average correlation between cells in different pixels is ρ̄_12 = S √(r(d_1) r(d_2)) e^{−αD_{1,2}}.

In this case, whether a stimulus is present or not, the correlation between the pooled signals is of the form ρ_{X1X2} = e^{−αD_{1,2}} + O(1/n). Thus, even significant stimulus-dependent changes in correlations would be invisible in the recorded signals. This overall trend is consistent with the results in Chen et al. (2006) (compare Figure 2C to their Figure 2f). In such settings, it is difficult to conclude from the pooled data whether pairwise correlations are stimulus dependent or not.
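
A sketch of this computation follows. The values of B and λ match those quoted for Figure 2A; the proportionality constant S, decay rate α, pixel positions d1, d2, and pixel separation D are invented for illustration.

```python
import math

# Pixel model of stimulus-dependent rates and correlations.
B, lam, S, alpha = 0.05, 10.0, 0.2, 1.0

def r(d, stim):
    """Scaled firing rate at scaled distance d from the stimulus center."""
    if stim:
        return B + (1.0 - B) * ((1.0 + math.cos(math.pi * d)) / 2.0) ** lam
    return B

def pooled_corr(d1, d2, D, n, stim):
    r11 = S * r(d1, stim)                    # average within pixel 1
    r22 = S * r(d2, stim)                    # average within pixel 2
    r12 = S * math.sqrt(r(d1, stim) * r(d2, stim)) * math.exp(-alpha * D)
    return r12 / math.sqrt((r11 + (1.0 - r11) / n)
                           * (r22 + (1.0 - r22) / n))

n, d1, d2, D = 12500, 0.1, 0.3, 0.2
on = pooled_corr(d1, d2, D, n, True)
off = pooled_corr(d1, d2, D, n, False)
# Pairwise correlations change several-fold with the stimulus, yet
# both pooled correlations sit near exp(-alpha * D).
print(on, off, math.exp(-alpha * D))
```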

However, in Supplementary Figure 3 of Chen et al. (2006), the presence of a stimulus apparently results in a slight decrease in correlations between more distant pixels. In Figure 2D this effect was reproduced using the model described above, with the additional assumption that baseline activity, B, decreases in the presence of a stimulus (Mitchell et al., 2009). The effect can also be reproduced by assuming that the rate of spatial correlation decay, α, increases when a stimulus is present.

As this example shows, care needs to be taken when inferring underlying correlation structures from pooled activity. The statistical structure of the recordings can depend on pairwise correlations between individual cells in a subtle way, and different underlying correlation structures may be difficult to distinguish from the pooled signals. However, downstream neurons may also be insensitive to the precise structure of pairwise correlations, as they are driven by the pooled input from many afferents.

Correlations between the pooled inputs to cells

We next examine the effects of pooling by relating the correlations between the activity of downstream cells to the pairwise correlations between cells in the input populations (see Figure 1B). The idea that pooling amplifies correlations carries over from the previous section. However, the presence of inhibition and non-instantaneous synaptic responses introduces new issues.

A homogeneous population with overlapping and independent inputs

For simplicity, we first consider a homogeneous population model (see Figure 3A). Each cell receives n_e inputs from a homogeneous pool with pairwise correlation coefficient ρ_ee, and an additional q_e n_e inputs from an outside pool of independent inputs. The two cells share p_e n_e of the inputs drawn from the correlated pool. Processes in the independent pool are uncorrelated with all other processes. All excitatory inputs have variance σ_e².

Figure 3
Two population models considered in the text. (A) Homogeneous population with overlap and independent inputs: A homogeneous pool of correlated inputs (large black circle) with correlation coefficient between any pair of processes equal to ρee ...

The correlation between the pooled excitatory inputs is given by (see Appendix)

$$\rho_{E_1E_2}=\frac{p_e+(n_e-p_e)\rho_{ee}}{1+q_e+(n_e-1)\rho_{ee}}.\qquad(10)$$

A form of this equation, with p_e = 0 and q_e = 0, is derived in Chen et al. (2006). In the absence of correlations between processes in the input pools, ρ_ee = 0, the correlation between the pooled signals is just the proportion of shared inputs, ρ_{E1E2} = p_e/(1 + q_e). When ρ_ee > 0 and n_e is large, the pooled excitatory inputs are highly correlated, even when pairwise correlations in the presynaptic pool, ρ_ee, are small and the neurons share no inputs (p_e = 0). Even when most inputs to the downstream cells are independent (q_e > 1), correlations between the pooled signals will be nearly 1 for sufficiently large input pools (see Figure 4A).
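
These limits can be checked directly. The sketch below uses a closed form implied by the homogeneous model above (a reconstruction consistent with the limiting cases quoted in the text, not necessarily the article's exact expression):

```python
def pooled_input_corr(rho_ee, ne, pe, qe):
    """Correlation between the two pooled excitatory input trains in
    the homogeneous model: ne correlated inputs per cell, a fraction
    pe of them shared, plus qe*ne independent inputs. A sketch of the
    closed form implied by the model assumptions in the text."""
    return (pe + (ne - pe) * rho_ee) / (1.0 + qe + (ne - 1) * rho_ee)

# No shared inputs (pe = 0) and weak pairwise correlations: the
# pooled inputs still become almost perfectly correlated as ne grows.
for ne in (10, 100, 1000):
    print(ne, pooled_input_corr(0.05, ne, 0.0, 0.0))

# With uncorrelated pools (rho_ee = 0) and no independent inputs,
# the pooled correlation is just the shared fraction pe.
print(pooled_input_corr(0.0, 100, 0.3, 0.0))
```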

Figure 4
The effect of pooling on correlations between summed input spike trains. (A) The correlation coefficient between the pooled excitatory spike trains (ρEE) is shown as a function of the size of the correlated excitatory input pool (ne) for various ...

Under analogous homogeneity assumptions for the inhibitory pools, the correlation, ρ_{I1I2}, between the pooled inhibitory inputs is given by an equation identical to Eq. (10), and the correlation between the pooled excitatory and inhibitory inputs is given by

$$\rho_{E_1I_2}=\frac{\sqrt{n_en_i}\,\rho_{ei}}{\sqrt{\left[1+q_e+(n_e-1)\rho_{ee}\right]\left[1+q_i+(n_i-1)\rho_{ii}\right]}}.$$

Interestingly, since |ρ_{E1I2}| ≤ 1, pairwise excitatory–inhibitory correlations obey the bound |ρ_ei| ≤ √(ρ_ee ρ_ii) + O(1/√(n_e n_i)). Combining this inequality with Eq. (10) and the analogous equation for ρ_{I1I2}, it follows that |ρ_{E1I2}| ≤ √(ρ_{E1E2} ρ_{I1I2}) + O(1/√(n_e n_i)) for homogeneous populations. These bounds are a consequence of the non-negative definiteness of covariance matrices.

Heterogeneity and the effects of spatially dependent correlations

We next discuss how heterogeneity can dampen the amplification of correlations due to pooling. In the absence of any homogeneity assumptions on the excitatory input populations (see the population model in the Materials and Methods), Eq. (3) gives the correlation between the pooled excitatory signals, ρ_{E1E2} = ρ̄_{e1e2}/√(ρ̄_{e1e1} ρ̄_{e2e2}) + O(1/√(n_{e1} n_{e2})). The term ρ̄_{e1e2} is a weighted average of the correlation coefficients between the two excitatory populations, and ρ̄_{e1e1} and ρ̄_{e2e2} are weighted averages of the correlations within each excitatory input population.

To illuminate this result, we assume symmetry between the populations: let n_{e_k} = n_e and σ_{e_{j,k}} = σ_e for k = 1,2 and j = 1,2,…,n_e, and assume ρ̄_{e1e1} = ρ̄_{e2e2}. The average “within” and “between” correlations are ρ_ee^w = ρ̄_{e1e1} = ρ̄_{e2e2} and ρ_ee^b = ρ̄_{e1e2}, respectively (see Figure 3B). Under these assumptions, Eq. (5) can be applied to obtain (see also Bedenbaugh and Gerstein, 1997)

$$\rho_{E_1E_2}=\frac{\rho_{ee}^{b}}{\rho_{ee}^{w}+\frac{1-\rho_{ee}^{w}}{n_e}},\qquad(12)$$

which is plotted in Figure 4A (green line) and Figure 4B. For large n_e, the correlation between the pooled signals approaches the ratio of “between” and “within” correlations, ρ_ee^b/ρ_ee^w.

This observation has implications for a situation ubiquitous in the cortex. A neuron is likely to receive afferents from cells that are physically close. The activity of nearby cells may be more strongly correlated than the activity of more distant cells (Chen et al., 2006; Smith and Kohn, 2008). We therefore expect that pairwise correlations within each input pool are on average larger than correlations between two input pools, that is, ρ_ee^w > ρ_ee^b. This reduces the correlation between the inputs, regardless of the input population size.

An increase in correlations in the presynaptic pools can also decorrelate the pooled signals. If correlations within each input pool increase by a greater amount than correlations between the two pools, then the variance of the input to each cell increases by a larger amount than the covariance between the inputs. As a consequence, the correlation between the pooled inputs is reduced. Modulations in correlation have been observed as a consequence of attention in V4 (Cohen and Maunsell, 2009; Mitchell et al., 2009; but apparently not in V1, Roelfsema et al., 2004). Such changes may be, in part, a consequence of small changes in “within” correlations between neurons in V1.
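
This decorrelation-by-increased-correlation effect can be illustrated with the symmetric within/between form (all values below are invented):

```python
def pooled_corr(rho_b, rho_w, n):
    """Pooled correlation for symmetric populations with homogeneous
    variances, in the within/between form discussed in the text."""
    return rho_b / (rho_w + (1.0 - rho_w) / n)

n = 1000
base = pooled_corr(0.04, 0.05, n)
# A "stimulus" scales between-pool correlations by 1.5 but
# within-pool correlations by 2: every pairwise correlation
# increases, yet the pooled inputs decorrelate.
modulated = pooled_corr(0.04 * 1.5, 0.05 * 2.0, n)
print(base, modulated)
```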

Equation 12 implies that correlations between large populations cannot be significantly larger than the correlations within each population. Since |ρ_{E1E2}| ≤ 1, it follows that |ρ_ee^b| ≤ |ρ_ee^w| + O(1/n_e).

The correlation, ρ_{I1I2}, between the pooled inhibitory inputs is given by an equation identical to Eq. (12), and the correlation between the pooled excitatory and inhibitory inputs is given by

$$\rho_{E_1I_2}=\frac{\bar\rho_{e_1i_2}}{\sqrt{\left[\rho_{ee}^{w}+\frac{1-\rho_{ee}^{w}}{n_e}\right]\left[\rho_{ii}^{w}+\frac{1-\rho_{ii}^{w}}{n_i}\right]}}.$$
Correlations between the free membrane potentials

We now look at the correlation between the free membrane potentials of two downstream neurons. The free membrane potentials are obtained by assuming an absence of threshold or spiking activity. For simplicity we assume symmetry in the statistics of the inputs to the postsynaptic cells: σ_{E_k} = σ_E, σ_{I_k} = σ_I, ρ_{E1I2} = ρ_{E2I1}, ρ_{E1I1} = ρ_{E2I2}, ρ_{E1E1} = ρ_{E2E2}, and ρ_{I1I1} = ρ_{I2I2}. The analysis is similar in the asymmetric case.

In the Section “Materials and Methods”, we derive a linear approximation of the free membrane potentials,

$$V_k(t)=(J_k*K)(t)+V_L,$$

where J_k(t) = (g_{E_k}(t)(V_E − V_L) + g_{I_k}(t)(V_I − V_L))/C_m are the total input currents and K(t) = Θ(t) e^{−t/τ_eff}, for k = 1,2. Under this approximation, the correlation, ρ_{V1V2}, between the membrane potentials is equal to the correlation, ρ_in = ρ_{J1J2}, between the total input currents, and can be written as a weighted average of the pooled excitatory and inhibitory spike train correlations (see Appendix),

$$\rho_{V_1V_2}=\frac{W_E^2\,\rho_{E_1E_2}+W_I^2\,\rho_{I_1I_2}-2W_EW_I\,\rho_{E_1I_2}}{W_E^2+W_I^2-2W_EW_I\,\rho_{E_1I_1}},$$

where ρ_{E1E2}, ρ_{E1I2}, ρ_{I1I2}, and ρ_{E1I1} are derived above, and W_E = ℰ|V_E − V_L|σ_E and W_I = ℐ|V_I − V_L|σ_I are weights for the excitatory and inhibitory contributions to the correlation. In Figure 5, we compare this approximation with simulations.

Figure 5
The effects of pooling on correlations between postsynaptic membrane potentials. Results of the linear approximation (solid, dotted, and dashed lines) match simulations (points). For the solid blue line, ρee = ρii = 0.05, ...

The correlation between the membrane potentials has positive contributions from the correlation between the excitatory inputs (ρ_{E1E2}) and between the inhibitory inputs (ρ_{I1I2}). Contributions coming from excitatory–inhibitory correlations (ρ_{E1I2} and ρ_{E2I1}) are negative, and can thus decorrelate the activity of downstream cells. This “cancellation” of correlations is observed in Figures 1D and 5, and can lead to asynchrony in recurrent networks (Hertz, 2010; Renart et al., 2010).
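
The cancellation can be made concrete with the weighted-average form described above. In the sketch below, the functional form is a reconstruction consistent with the text, and the weights and correlation values are invented for illustration:

```python
def membrane_corr(rEE, rII, rEI_between, rEI_within, WE, WI):
    """Sketch of the weighted-average form of the membrane-potential
    correlation under the linear approximation, as described in the
    text. WE and WI weight the excitatory and inhibitory inputs."""
    num = WE * WE * rEE + WI * WI * rII - 2.0 * WE * WI * rEI_between
    den = WE * WE + WI * WI - 2.0 * WE * WI * rEI_within
    return num / den

WE, WI = 1.0, 1.0    # equal weights, purely illustrative
# Pooling has amplified the pooled E-E and I-I correlations to 0.9.
# Without excitatory-inhibitory correlations the membrane potentials
# inherit that value; with strong E-I correlations much of it is
# cancelled.
print(membrane_corr(0.9, 0.9, 0.0, 0.0, WE, WI))
print(membrane_corr(0.9, 0.9, 0.85, 0.85, WE, WI))
```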

Implications for synchronization in feedforward chains

Feedforward chains, like that depicted in Figure 6A, have been studied extensively (Diesmann et al., 1999; van Rossum et al., 2002; Litvak et al., 2003; Reyes, 2003; Tetzlaff et al., 2003; Câteau and Reyes, 2006; Doiron et al., 2006; Kumar et al., 2008). In such networks, cells in a layer necessarily share some of their inputs, leading to correlations in their spiking activity (Shadlen and Newsome, 1998). Frequently, spiking in deeper layers is highly synchronous (Reyes, 2003; Tetzlaff et al., 2003). However, in the presence of background noise, correlations can remain negligible (van Rossum et al., 2002; Vogels and Abbott, 2005).

Figure 6
The development of synchrony in a feedforward chain can be understood using a model dynamical system (Tetzlaff et al., 2003). (A) Schematic diagram of the network. Each layer consists of Ne excitatory and Ni inhibitory cells. Each cell in layer k receives ...

Feedforward chains amplify correlations as follows: When inputs to the network are independent, small correlations are introduced in the second layer by overlapping inputs. The inputs to each subsequent layer are pooled from the previous layer. The amplification of correlations by pooling is the primary mechanism for the development of synchrony (compare solid and dotted blue lines in Figure 4A). Overlapping inputs serve primarily to “seed” synchrony in early layers. The internal dynamics of the neurons and background noise can decorrelate the output of a layer, and compete with the correlation amplification due to pooling.

We develop this explanation by considering a feedforward network with each layer containing N_e excitatory and N_i inhibitory cells. Each cell in layer k + 1 receives n_e excitatory and n_i inhibitory inputs selected randomly from layer k. For simplicity we assume that all excitatory and inhibitory cells are dynamically identical and that ℰ|V_E − V_L| = ℐ|V_I − V_L|. Spike trains driving the first layer are statistically homogeneous with pairwise correlations ρ_0.

To explain the development of correlations, we consider a simplified model of correlation propagation (See also Renart et al., 2010 for a recurrent version). In the model, any two cells in a layer share the expected proportion pe = ne/Ne of their excitatory inputs and pi = ni/Ni of their inhibitory inputs (the expected proportions are taken with respect to random connectivity). We also assume that inputs are statistically identical across a layer.

For a pair of cells in layer k ≥ 1, let ρ_k^in and ρ_k^out represent the correlation coefficients between their total input currents and between their output spike trains, respectively. The outputs from layer k are pooled (with overlap) to obtain the inputs to layer k + 1. Using the results developed above, ρ_1^in = P(ρ_0) and ρ_{k+1}^in = P(ρ_k^out) for k ≥ 1, where (see Appendix and Tetzlaff et al., 2003 for a similar derivation)

$$P(\rho)=\frac{(p_en_e+p_in_i)(1-\rho)+n_i^2(\beta-1)^2\rho}{(n_e+n_i)(1-\rho)+n_i^2(\beta-1)^2\rho}.\qquad(14)$$

Here β measures the balance between excitation and inhibition (see Materials and Methods). From our assumptions, β = n_e/n_i. With imbalance (β ≠ 1) and a large number of cells in a layer, pooling amplifies small correlations, P(ρ) > ρ, as discussed earlier.

To complete the description of correlation transfer from layer to layer, we relate the correlations between the inputs to a pair of cells, ρ_k^in, to the correlations between their output spike trains, ρ_k^out. We assume that there is a transfer function, S, so that ρ_k^out = S(ρ_k^in) at each layer k. We additionally assume that S(0) = 0 and S(1) = 1, that is, uncorrelated (perfectly correlated) inputs result in uncorrelated (perfectly correlated) outputs. We also assume that the cells are decorrelating, |ρ| > |S(ρ)| > 0 for ρ ≠ 0, 1 (Shea-Brown et al., 2008). This is an idealized model of correlation transfer, as output correlations depend on cell dynamics and higher order statistics of the inputs (Moreno-Bote and Parga, 2006; de la Rocha et al., 2007; Barreiro et al., 2009; Ostojić et al., 2009).

Correlations between the spiking activity of cells in layer k + 1 are related to correlations in layer k by the layer-to-layer transfer function, T = S ∘ P. The development of correlations across layers is modeled by the dynamical system ρ_{k+1}^out = T(ρ_k^out), with ρ_1^out = S(ρ_1^in).
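
The layer-to-layer iteration is easy to simulate. In the sketch below, P is derived from the simplified model's assumptions (shared fractions p_e and p_i, equal effective synaptic weights); it is a reconstruction consistent with the mapping described in the text, not necessarily the article's exact expression, and S is a toy decorrelating transfer function satisfying the conditions above.

```python
def P(rho, ne, ni, pe, pi):
    """Pooling/overlap map from output correlations in one layer to
    input-current correlations in the next (a sketch derived from the
    model assumptions in the text, with equal effective weights)."""
    shared = pe * ne + pi * ni
    imbalance = (ne - ni) ** 2
    return ((shared * (1.0 - rho) + imbalance * rho)
            / ((ne + ni) * (1.0 - rho) + imbalance * rho))

def S(rho):
    """Toy decorrelating transfer: S(0) = 0, S(1) = 1, |S(rho)| < |rho|."""
    return rho ** 2

# Unbalanced layers (ne != ni): correlations grow from layer to layer
# even though each layer of cells decorrelates its inputs.
ne, ni, pe, pi = 1000, 250, 0.05, 0.05
rho, history = 0.0, []
for layer in range(8):
    rho = S(P(rho, ne, ni, pe, pi))    # layer-to-layer map T = S o P
    history.append(rho)
print([round(r, 3) for r in history])
```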

When the network is not balanced (β  1), pooling amplifies correlations at each layer and the activity between cells in deeper layers can become highly correlated (see Figure Figure6C).6C). The output of the first layer is uncorrelated if the individual inputs are independent (ρ0 = 0). In this case all of the correlations between the total inputs to the second layer come from shared inputs,

ρ_2^in = P(0) = (n_e p_e + n_i p_i)/(n_e + n_i).
These correlations are then reduced by the second layer of cells, ρ_2^out = S(ρ_2^in) = T(0) > 0, and subsequently amplified by pooling and input sharing before being received by layer 3, ρ_3^in = P(ρ_2^out). This process continues in subsequent layers. If the correlating effects of pooling and input sharing dominate the decorrelating effects of internal cell dynamics, correlations will increase from layer to layer (see Figure 6C).

When ρ_0 = 0, overlapping inputs increase the input correlation to layer 2, but have a negligible effect on the mapping once correlations have developed, since the effects of pooling dominate [see Eq. (14) and the dashed blue line in Figure 4A, which shows that the effects of input overlap are small when n_e is large, ρ > 0 and β ≠ 1]. Therefore, shared inputs seed correlated activity at the first layer, and pooling drives the development of larger correlations. When ρ_0 = 0, we cannot expect large correlations before layer 3, but when ρ_0 > 0 large correlations can develop by layer 2.

To verify this conclusion, we constructed a two-layer feedforward network with no overlap between inputs (p_e = p_i = 0). In Figure 7A, the inputs to layer 1 were independent (ρ_0 = 0), and the firing of cells in layer 2 was uncorrelated. In Figure 7B, we introduced small correlations (ρ_0 = 0.05) between the inputs to layer 1. These correlations were amplified by pooling, so that strong synchrony was observed between cells in layer 2. We compared these results with a standard feedforward network with overlapping inputs (Figure 7C, where p_e = p_i = 0.05). Inputs to layer 1 were independent (ρ_0 = 0), and hence the outputs from layer 1 were uncorrelated. Dependencies between the inputs to layer 2 were weak and due to overlap alone, ρ_2^in = P(0) = 0.05. Cells in layer 3 received pooled inputs from layer 2, and their output was highly correlated.

These results predict that correlations between spike trains develop in deeper layers, but they do not directly address the timescale of the correlated behavior. In simulations, spiking becomes tightly synchronized in deeper layers (see, for instance, Litvak et al., 2003; Reyes, 2003; and Figure 7). This can be understood using results in Maršálek et al. (1997) and Diesmann et al. (1999), where it is shown that the response of cells to a volley of spikes is tighter than the volley itself. The firing of individual cells in the network becomes bursty in deeper layers, and large correlations are manifested in tightly synchronized spiking events. Alternatively, one can predict the emergence of synchrony by observing that pooling increases correlations over finite time windows (see next section and Appendix), so the analysis developed above can be adapted to correlations over small windows.

Balanced feedforward networks

In the simplified feedforward model above, when excitation balances inhibition, that is, when β = 1, correlations between the pooled inputs to a layer are due to overlap alone, ρ_k^in = P(ρ_{k−1}^out) ≈ (p_e + p_i)/2 for all k. The correlating effects of this map are weak, which would seem to imply that cells in balanced feedforward chains remain asynchronous. Indeed, our model of correlation propagation displays a stable fixed point at low values of ρ when β = 1 (see Figure 6D). However, in practice, synchrony is difficult to avoid without careful fine-tuning (Tetzlaff et al., 2003), and almost always develops in feedforward chains (Litvak et al., 2003). We provide some reasons for this discrepancy.
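This collapse of the map in the balanced case can be checked directly: with n_e = n_i, the mean-overlap pooling map P(ρ) from the Appendix reduces to the constant (p_e + p_i)/2, so the iteration is pinned at a small fixed point. The snippet below uses the hypothetical decorrelating transfer S(ρ) = ρ² (an illustrative choice, not a fitted cell model); parameter values are arbitrary.

```python
# Balanced case (ne = ni, beta = 1): the mean-overlap pooling map is constant in rho,
# P(rho) = (pe + pi)/2, so T = S o P has a single small fixed point at S((pe + pi)/2).
# S(rho) = rho**2 is a hypothetical decorrelating transfer.

def P(rho, ne=100, ni=100, pe=0.05, pi=0.05):
    se, si = ne * pe, ni * pi
    num = se + (ne**2 - se) * rho + si + (ni**2 - si) * rho - 2 * ne * ni * rho
    den = ne + (ne**2 - ne) * rho + ni + (ni**2 - ni) * rho - 2 * ne * ni * rho
    return num / den

def S(rho):
    return rho ** 2

rho = 0.0
history = []
for k in range(20):
    rho = S(P(rho))
    history.append(rho)
# rho jumps to S(0.05) = 0.0025 after one layer and stays there instead of growing toward 1.
```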

Our focus so far has been on correlations over infinitely large time windows (see Materials and Methods, where we define ρ_xy). Even when the membrane potentials are nearly uncorrelated over large time windows, differences between the excitatory and inhibitory synaptic time constants can cause larger correlations over smaller time windows (Renart et al., 2010). This can, in turn, lead to significant correlations between the output spike trains. We discuss this effect further in the Appendix and give an example in Figure 8. In this example, the correlations between the membrane potentials over long windows are nearly zero due to cancellation (see Figure 8A, where ρ_VV = 0.0174 ± 0.0024 s.e. with threshold present), but positive over shorter timescales. The cross-covariance function between the output spike trains is primarily positive, yielding significant spike train correlations (ρ_spikes = 0.1570 ± 0.0033 s.e.). Therefore, the assumption that pairs of cells decorrelate their inputs may not be valid in the balanced case.

Figure 8
Cross-covariance functions between membrane potentials and output spike trains. (A) The cross-covariance function between membrane potentials, scaled so that its maximum is 1. The linear approximation in Eq. (16) (blue, shaded) agrees with simulations ...

Another source of discrepancy between the idealized model and simulations of feedforward networks is inhomogeneity, which becomes important when balance is exact. Note that Eq. (14) is an approximation obtained by ignoring fluctuations in connectivity from layer to layer. In a random network, inhomogeneities will be introduced by variability in input population overlaps. To fully describe the development of correlations in a feedforward network, it is necessary to include such fluctuations in a model of correlation propagation. The asynchronous fixed point that appears in the balanced case has a small basin of attraction, and fluctuations induced by input inhomogeneities can destroy its stability (see Figure 6D). Other sources of heterogeneity can further destabilize the asynchronous state (see Appendix).

It has been shown that asynchronous states can be stabilized through the decorrelating effects of background noise (van Rossum et al., 2002; Vogels and Abbott, 2005). To emulate these effects, a third transfer function, N, can be added to our model. The correlation transfer map then becomes T(ρ) = (S ∘ N ∘ P)(ρ). Sufficiently strong background noise increases decorrelation from the input to the output of a layer and stabilizes the asynchronous fixed point.
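As a sketch, a linear noise map N(ρ) = cρ with c < 1 (an assumed form standing in for the decorrelating effect of background noise) can be composed with hypothetical maps of the kind used in our idealized model; S(ρ) = ρ² and all parameter values below are illustrative only.

```python
# Compare the layer map with and without a decorrelating noise stage: T = S o N o P
# versus T = S o P. N(rho) = c*rho (c < 1) is an assumed linear noise map; S(rho) = rho**2
# is a hypothetical decorrelating transfer; P is the unbalanced mean-overlap pooling map.

def P(rho, ne=100, ni=50, pe=0.05, pi=0.05):
    se, si = ne * pe, ni * pi
    num = se + (ne**2 - se) * rho + si + (ni**2 - si) * rho - 2 * ne * ni * rho
    den = ne + (ne**2 - ne) * rho + ni + (ni**2 - ni) * rho - 2 * ne * ni * rho
    return num / den

def S(rho):
    return rho ** 2

def N(rho, c=0.5):
    return c * rho

def iterate(layers, with_noise):
    rho = 0.0
    for _ in range(layers):
        rho = S(N(P(rho))) if with_noise else S(P(rho))
    return rho

rho_noisy = iterate(20, with_noise=True)    # stays near a small fixed point
rho_clean = iterate(20, with_noise=False)   # runs away toward 1
```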


We have illustrated how pooling and shared inputs can impact correlations between the inputs and free membrane voltages of postsynaptic cells in a feedforward setting. The increase in correlation due to pooling was discussed in a simpler setting in (Bedenbaugh and Gerstein, 1997; Super and Roelfsema, 2005; Chen et al., 2006; Stark et al., 2008), and similar ideas were also developed for the variance alone in (Salinas and Sejnowski, 2000; Moreno-Bote et al., 2008). The saturation of the signal-to-noise ratio with increasing population size observed in (Zohary et al., 1994) has a similar origin. Our aim was to present a unified discussion of these results, with several generalizations.

Other mechanisms, such as recurrent connectivity between cells receiving the inputs, can modulate correlated activity (Schneider et al., 2006; Ostojić et al., 2009). Importantly, the cancellation of correlations may be a dynamic phenomenon in recurrent networks, as observed in (Hertz, 2010; Renart et al., 2010). On the other hand, neurons may become entrained to network oscillations, resulting in more synchronous firing (Womelsdorf et al., 2007). A full understanding of the statistics of population activity in neuronal networks will require an understanding of how these mechanisms interact to shape the spatiotemporal properties of the neural response.

The results we presented relied on the assumption of linearity at the different levels of input integration. These assumptions can be expected to hold at least approximately. For instance, there is evidence that membrane conductances are tuned to produce a linear response in the subthreshold regime (Morel and Levy, 2009). The assumptions we make are likely to break down at the level of single dendrites where nonlinear effects may be much stronger (Johnston and Narayanan, 2008). The effects of correlated inputs to a single dendritic branch deserve further theoretical study (Gasparini and Magee, 2006; Li and Ascoli, 2006).

We demonstrated that the structure of correlations in a population may be difficult to infer from pooled activity. For instance, a change in pairwise correlations between individual cells in two populations causes a much smaller change in the correlation between the pooled signals. With a large number of inputs, the change in correlations between the pooled signals might not be detectable even when the change in the pairwise correlations is significant.

While we discussed the growth of second order correlations only, higher order correlations also saturate with increasing population size. For example, in a three-variable generalization of the homogeneous model from Figure 3A, it can be shown that ρ_{E1E2E3} = 1 − O(1/n_e), where n_e is the size of each population and ρ_{E1E2E3} is the triple correlation coefficient (Stratonovich, 1963) between the pooled signals E1, E2, and E3. The reason that higher order correlations also saturate follows from a generalization of the following observation at second order: pooling amplifies correlations because the variance and covariance grow asymptotically at the same rate in n_e. In particular, σ_E² and γ_{E1E2} both behave asymptotically like n_e² ρ_ee σ_e² + O(n_e), and their ratio, ρ_{E1E2} = γ_{E1E2}/σ_E², approaches unity (Bedenbaugh and Gerstein, 1997; Salinas and Sejnowski, 2000; Moreno-Bote et al., 2008).
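For a concrete second-order illustration, the variance and covariance expressions from the Appendix (with no shared or independent inputs, p_e = q_e = 0) give ρ_{E1E2} = n_e ρ_ee / (1 + (n_e − 1)ρ_ee), which approaches 1 as n_e grows; the parameter values below are arbitrary.

```python
# Saturation of the pooled correlation with population size: with pe = qe = 0,
# gamma_{E1E2} = ne**2 * rho_ee * sigma_e**2 and sigma_{E1}**2 = (ne + ne*(ne-1)*rho_ee) * sigma_e**2,
# so their ratio approaches 1 as ne grows, for any fixed rho_ee > 0.

def pooled_correlation(ne, rho_ee):
    cov = ne**2 * rho_ee               # gamma_{E1E2} / sigma_e**2
    var = ne + ne * (ne - 1) * rho_ee  # sigma_{E1}**2 / sigma_e**2
    return cov / var

rho_ee = 0.05  # weak pairwise correlation
values = {ne: pooled_correlation(ne, rho_ee) for ne in (10, 100, 1000)}
# values[1000] is already close to 1 even though rho_ee = 0.05.
```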

We concentrated on correlations over infinitely long time windows (see Materials and Methods where we define ρxy). However, pooling amplifies correlations over finite time windows in exactly the same way as correlations over large time windows. Due to the filtering properties of the cells, the timescale of correlations between downstream membrane potentials may not reflect that of the inputs. We discuss this further in the Appendix where the auto- and cross-covariance functions between the membrane potentials are derived.

To simplify the presentation, we have so far assumed stationarity. However, since Eq. (2) applies to the Pearson correlation between any pooled data, all of the results on pooling extend directly to the non-stationary case. In the non-stationary setting, the cross-covariance function has the form R_xy(s, t) = cov(x(s), y(s + t)), but there is no natural generalization of the asymptotic statistics defined in Eq. (1).

Correlated neural activity has been observed in a variety of neural populations (Gawne and Richmond, 1993; Zohary et al., 1994; Vaadia et al., 1995), and has been implicated in the propagation and processing of information (Oram et al., 1998; Maynard et al., 1999; Romo et al., 2003; Tiesinga et al., 2004; Womelsdorf et al., 2007; Stark et al., 2008), and attention (Steinmetz et al., 2000; Mitchell et al., 2009). However, correlations can also introduce redundancy and decrease the efficiency with which networks of neurons represent information (Zohary et al., 1994; Gutnisky and Dragoi, 2008; Goard and Dan, 2009). Since the joint response of cells and recorded signals can reflect the activity of large neuronal populations, it will be important to understand the effects of pooling to understand the neural code (Chen et al., 2006).

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


We thank Jaime de la Rocha, Brent Doiron and Eric Shea-Brown for helpful discussions. We also thank the reviewers and the handling editor for numerous useful suggestions. This work was supported by NSF Grants DMS-0604429 and DMS-0817649 and a Texas ARP/ATP award.


Derivation of EQ. (10)

Equation (10) can be derived from Eq. (2). However, we find it is more easily derived directly. We will calculate the variance, σ_{E1}² = σ_{E2}², and the covariance, γ_{E1E2}, of the pooled signals.

The covariance is given by the sum of all pairwise covariances between the populations, γ_{E1E2} = Σ_{e1∈E1, e2∈E2} σ_{e1} σ_{e2} ρ_{e1e2}. Each cell receives n_e + q_e n_e inputs, so there are (n_e + q_e n_e)² terms in this sum. However, the q_e n_e "independent" inputs to each pool are uncorrelated with all other inputs and therefore do not contribute. Of the remaining n_e² pairs, n_e p_e are shared and therefore have correlation ρ_{e1e2} = 1. These shared processes collectively contribute n_e p_e σ_e² to γ_{E1E2}. The remaining n_e² − n_e p_e pairs are correlated with coefficient ρ_ee and collectively contribute (n_e² − n_e p_e) ρ_ee σ_e². The pooled covariance is thus

γ_{E1E2} = n_e p_e σ_e² + (n_e² − n_e p_e) ρ_ee σ_e².
The variance is given by the sum of all pairwise covariances within a population, σ_{E1}² = Σ_{e1,e2∈E1} σ_{e1} σ_{e2} ρ_{e1e2}. As above, there are n_e + q_e n_e neurons in the population, so the sum has (n_e + q_e n_e)² terms. Of these, n_e + q_e n_e are "diagonal" terms (e1 = e2), each contributing σ_e², for a total contribution of (n_e + q_e n_e) σ_e² to σ_{E1}². The processes from the independent pool contribute no additional terms. This leaves n_e(n_e − 1) correlated pairs, each contributing σ_e² ρ_ee, for a collective contribution of n_e(n_e − 1) σ_e² ρ_ee, giving

σ_{E1}² = (n_e + q_e n_e) σ_e² + n_e(n_e − 1) ρ_ee σ_e².
Now, ρ_{E1E2} = γ_{E1E2}/σ_{E1}² can be simplified to give Eq. (10). The equations for ρ_{I1I2} and ρ_{E1I2} = ρ_{I1E2} can be derived identically.
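The counting argument can be checked with a small Monte Carlo sketch. Gaussian surrogates stand in for the input processes (an illustrative choice, not our spiking model); with a single global source, every pair of inputs is correlated at ρ_ee and there are no shared or independent inputs (p_e = q_e = 0), so Eq. (10) predicts ρ_{E1E2} = n_e ρ_ee / (1 + (n_e − 1)ρ_ee).

```python
import numpy as np

# Monte Carlo check of the pooled correlation. Each input is a Gaussian surrogate
# x_j = sqrt(rho_ee)*z + sqrt(1 - rho_ee)*eta_j with a single global source z, so every
# pair of inputs (within and across pools) has correlation rho_ee; pe = qe = 0.
rng = np.random.default_rng(0)
ne, rho_ee, n_samples = 50, 0.1, 50_000

z = rng.standard_normal(n_samples)              # global shared source
eta = rng.standard_normal((2 * ne, n_samples))  # private noise, one row per input
x = np.sqrt(rho_ee) * z + np.sqrt(1 - rho_ee) * eta

E1 = x[:ne].sum(axis=0)   # pooled signal of population 1
E2 = x[ne:].sum(axis=0)   # pooled signal of population 2
measured = np.corrcoef(E1, E2)[0, 1]
predicted = ne * rho_ee / (1 + (ne - 1) * rho_ee)   # Eq. (10) with pe = qe = 0
```

With n_e = 50 and ρ_ee = 0.1 the predicted pooled correlation is already ≈ 0.85, even though individual pairs are only weakly correlated.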

Finite-time correlations and cross-covariances

Throughout the text, we concentrated on correlations over large time windows. However, the effects of pooling described by Eq. (2) apply to the correlation, ρ_xy(t), between spike counts over any time window of size t, defined by ρ_xy(t) = cov(N_x(t), N_y(t)) / √(var(N_x(t)) var(N_y(t))), where N_x(t) = ∫_0^t x(s) ds is the spike count over [0, t] for the spike train x(t). The equation also applies to the instantaneous correlation at time t, defined by R_xy(t) = C_xy(t) / √(C_xx(t) C_yy(t)). Thus pooling increases correlations over all timescales equally.

However, the cell filters the pooled inputs to obtain the membrane potentials and, as a result, the correlation between the membrane potentials is "spread out" in time (Tetzlaff et al., 2008). To quantify this effect, we derive an approximation to the auto- and cross-covariance functions of the membrane potentials.

The pooled input spike trains are obtained from a weighted sum of the individual excitatory and inhibitory spike trains (see Materials and Methods). As a result, the cross-covariance functions between the pooled spike trains are simply sums of the individual cross-covariance functions, C_XY(t) = Σ_{x∈X, y∈Y} C_xy(t), for X, Y = E1, E2, I1, I2 and x, y = e, i accordingly. Thus only the magnitude of the cross-covariance functions is affected by pooling. The change in magnitude is quadratic in n_e or n_i. This is consistent with the observation that pooling amplifies correlations equally over all timescales.

The conductances are obtained by convolving the total inputs with the synaptic filter kernels,

g_{Ek}(t) = (E_k ∗ α_e)(t)  and  g_{Ik}(t) = (I_k ∗ α_i)(t),  k = 1, 2.

The cross-covariance between the conductances can therefore be written as a convolution of the cross-covariance function between the input signals and the deterministic cross-covariance between the synaptic kernels (Tetzlaff et al., 2008). In particular,

C_{g_X g_Y}(t) = (C_{XY} ∗ (α_x ⋆ α_y))(t),
for X, Y = E1, E2, I1, I2 and x, y = e, i accordingly, where (α_x ⋆ α_y)(t) = ∫ α_x(s) α_y(t + s) ds is the deterministic cross-covariance between the synaptic filters α_x and α_y. Note that total correlations remain unchanged by the convolution of the input spike trains with the synaptic filters, since the integral of a convolution equals the product of the integrals (Tetzlaff et al., 2008).
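This identity, that the integral of a convolution equals the product of the integrals, is easy to verify numerically; the exponential shapes below are arbitrary stand-ins for a cross-covariance function and a synaptic kernel.

```python
import numpy as np

# Verify that the integral of a convolution equals the product of the integrals,
# which is why synaptic filtering rescales covariance functions without changing
# the total (integrated) correlation.
dt = 0.01
t = np.arange(0, 50, dt)
c = np.exp(-t / 5.0)      # stand-in for a cross-covariance function C_xy(t)
alpha = np.exp(-t / 2.0)  # stand-in for a synaptic kernel alpha(t)

conv = np.convolve(c, alpha) * dt          # discrete approximation of (c * alpha)(t)
integral_conv = conv.sum() * dt            # integral of the convolution
product = (c.sum() * dt) * (alpha.sum() * dt)  # product of the integrals
```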

The total input currents, J_k(t) = (g_{Ek}(t)(V_E − V_L) + g_{Ik}(t)(V_I − V_L))/C_m, obtained from the linearization of the conductance-based model described in the Section "Materials and Methods", are simply linear combinations of the individual conductances. The cross-covariance function between the input currents is therefore a linear combination of those between the conductances,

C_{JhJk}(t) = |V_E − V_L|² C_{g_{Eh} g_{Ek}}(t) + |V_I − V_L|² C_{g_{Ih} g_{Ik}}(t) − 2|V_E − V_L||V_I − V_L| C_{g_{Ih} g_{Ek}}(t).

Combining this result with Eq. (1) yields the correlation, ρ^in = ρ_{J1J2}, between the total input currents given in Eq. (13).

Using the solution of the linearized equations described in the Section “Materials and Methods”, we obtain a linear approximation to the cross-covariance functions,

C_{VhVk}(t) ≈ (C_{JhJk} ∗ (K ⋆ K))(t),    (16)
for h, k = 1, 2, where (K ⋆ K)(t) = (τ_eff/2) e^{−|t|/τ_eff} is the cross-covariance of the linear kernel, K, with itself. The convolution with (K ⋆ K)(t) scales the area of both the auto- and cross-covariance functions by a factor of τ_eff², and therefore leaves the ratio of the areas, ρ_{V1V2}, unchanged. Thus, the linear approximation predicts that ρ_{V1V2} ≈ ρ^in.

When the total inputs are strong, τ_eff is small and we can simplify Eq. (16) by approximating (K ⋆ K)(t) with a delta function of mass τ_eff², so that C_{V1V2}(t) ≈ τ_eff² C_{J1J2}(t), and similarly for C_{V1V1}(t). This approximation is valid when the synaptic time constants are significantly larger than τ_eff, which is likely to hold in high conductance states. We compare this approximation to cross-covariance functions obtained from simulations in Figure 8.

In all examples considered, the cross-covariance functions have exponentially decaying tails. We define the correlation time constant, τ_xy = lim_{t→∞} −t/ln(C_xy(t)), as a measure of the decay rate of the exponential tail. If t ≫ τ_xy, then x(s) and y(s + t) can be regarded as approximately uncorrelated, and γ_xy ≈ ∫_{−t}^{t} (1 − |s|/t) C_xy(s) ds (Stratonovich, 1963).

The time constant of a convolution of two exponentially decaying functions is the larger of the two time constants. Thus, from the results above, the correlation time constant between the membrane potentials is the maximum of the correlation time constants of the inputs, the synaptic time constants, and the effective membrane time constant: τ_{V1V2} = max{τ_{E1E2}, τ_{I1I2}, τ_{E1I2}, τ_{E2I1}, τ_e, τ_i, τ_eff}, where τ_{E1E2}, τ_{I1I2}, τ_{E1I2}, and τ_{E2I1} are the time constants of the input spike trains and τ_e and τ_i are the synaptic time constants. Thus the cross-covariance functions between the membrane potentials are generally broader than those between the input spike trains.
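A toy numerical check of this statement: the tail of the convolution of two arbitrary exponentials, with time constants 2 and 5, decays with the slower time constant.

```python
import numpy as np

# The convolution of exp(-t/2) and exp(-t/5) has an exponential tail governed by the
# slower time constant, tau = 5; estimate it from the log-slope of the tail.
dt = 0.01
t = np.arange(0, 100, dt)
fast = np.exp(-t / 2.0)
slow = np.exp(-t / 5.0)

conv = np.convolve(fast, slow)[: t.size] * dt   # (fast * slow)(t) on the original grid

i1, i2 = int(40 / dt), int(60 / dt)             # sample well past the fast transient
tau_est = (t[i2] - t[i1]) / np.log(conv[i1] / conv[i2])
```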

Derivation of EQ. (14)

Consider a feedforward network in which each layer consists of N_e excitatory and N_i inhibitory cells; each cell in layer k receives n_e excitatory and n_i inhibitory inputs from layer k − 1, and these connections are chosen randomly and independently across neurons in layer k. The degree of overlap between the excitatory and inhibitory inputs to a pair of cells in layer k is then a random variable. Following the derivation of Eq. (10) in the Appendix,

γ_{E1E2} = s_e σ_e² + (n_e² − s_e) ρ_ee σ_e²,
where s_e denotes the number of common excitatory inputs to the two cells. To understand the origin of s_e, suppose the n_e excitatory inputs to cell 1 have been selected. The selection of the n_e excitatory inputs to cell 2 then involves choosing, without replacement, from two pools: the first, of size n_e, projects to cell 1, while the second, of size N_e − n_e, does not. Therefore, s_e follows a hypergeometric distribution with parameters (N_e, n_e, n_e) and has mean n_e²/N_e = n_e p_e. Moreover, this random variable is drawn independently for each pair of cells in layer k. Using the mean value of s_e, we recover Eq. (10).
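This sampling step is simple to reproduce (parameter values arbitrary):

```python
import numpy as np

# Sample the input overlap s_e ~ Hypergeometric(Ne, ne, ne): cell 2 draws ne inputs
# from a layer of Ne cells, ne of which already project to cell 1.
rng = np.random.default_rng(1)
Ne, ne = 1000, 100

samples = rng.hypergeometric(ngood=ne, nbad=Ne - ne, nsample=ne, size=100_000)
mean_overlap = samples.mean()
predicted = ne**2 / Ne   # = ne * pe, the mean used to recover Eq. (10)
```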

For simplicity, we assume that ℰ|V_E − V_L| = ℐ|V_I − V_L|, so that β = n_e/n_i. If we assume that the statistics in the (k − 1)st layer are uniform across all cells and cell types (i.e., ρ = ρ_ee = ρ_ii = ρ_ei = ρ_{k−1}^out and σ_e = σ_i), then by substituting Eq. (10) and the equivalent expressions for ρ_II and ρ_EI into Eq. (13), we may write the input correlation to the kth layer as

ρ^in = (γ_{E1E2} + γ_{I1I2} − 2γ_{E1I2}) / (σ_{E1}² + σ_{I1}² − 2γ_{E1I1}).
Substituting the values of the covariances and variances, and dividing the numerator and denominator by (ℰ|V_E − V_L|σ_e)², we get

ρ^in = [s_e + (n_e² − s_e)ρ + s_i + (n_i² − s_i)ρ − 2 n_e n_i ρ] / [n_e + (n_e² − n_e)ρ + n_i + (n_i² − n_i)ρ − 2 n_e n_i ρ].

Rearranging terms and dividing the numerator and denominator by n_i², with β = n_e/n_i, we have

ρ^in = [(s_e + s_i)(1 − ρ)/n_i² + (β − 1)² ρ] / [(n_e + n_i)(1 − ρ)/n_i² + (β − 1)² ρ].    (17)
This equation accounts for variations in overlap due to finite-size effects, since s_e and s_i are random variables. Eq. (14) in the text is the expected value P(ρ) = ⟨ρ^in⟩, obtained by replacing s_e and s_i in Eq. (17) with their respective means, ⟨s_e⟩ = n_e p_e and ⟨s_i⟩ = n_i p_i. The expectation is taken over realizations of the random connectivity of the feedforward network.

To calculate the standard deviation for the inset in Figure 6D, we ran Monte Carlo simulations, drawing s_e and s_i from hypergeometric distributions and calculating the resulting transfer, S(ρ^in) = (ρ^in)², using Eq. (17). Note, however, that Eq. (17) and the inset in Figure 6D do not account for all of the effects of randomness that may destabilize the balanced network. In deriving Eq. (17), we assumed that the statistics in the second layer were uniform. However, variations in the degree of overlap in one layer will cause inhomogeneities in the variances and rates of the next layer. In a feedforward setting, these inhomogeneities compound at each layer and destabilize the asynchronous fixed point.
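In the balanced case with n_e = n_i, the expression above reduces to ρ^in = (s_e + s_i)/(n_e + n_i), independent of ρ, so the Monte Carlo step amounts to sampling the two hypergeometric overlaps. A minimal sketch (parameter values arbitrary):

```python
import numpy as np

# Balanced case (ne = ni): rho_in = (s_e + s_i)/(ne + ni) for each pair, so the
# hypergeometric spread of the overlaps produces pair-to-pair inhomogeneity in the
# input correlation even though its mean, (pe + pi)/2, is fixed.
rng = np.random.default_rng(2)
Ne, Ni, ne, ni, pairs = 1000, 1000, 100, 100, 100_000

se = rng.hypergeometric(ne, Ne - ne, ne, size=pairs)
si = rng.hypergeometric(ni, Ni - ni, ni, size=pairs)
rho_in = (se + si) / (ne + ni)

mean_rho, std_rho = rho_in.mean(), rho_in.std()
# mean_rho is close to (pe + pi)/2 = 0.1, but std_rho is nonzero: these fluctuations
# perturb the asynchronous fixed point from pair to pair.
```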

Definitions and values of variables used in the text


  • Barreiro A., Shea-Brown E., Thilo E. (2009). Timescales of spike-train correlation for neural oscillators with common drive. Phys. Rev. E 81 (arXiv preprint arXiv:0907.3924).
  • Bedenbaugh P., Gerstein G. (1997). Multiunit normalized cross correlation differs from the average single-unit normalized correlation. Neural Comput. 9, 1265–1275. doi: 10.1162/neco.1997.9.6.1265
  • Câteau H., Reyes A. (2006). Relation between single neuron and population spiking statistics and effects on network activity. Phys. Rev. Lett. 96, 58101. doi: 10.1103/PhysRevLett.96.058101
  • Chen Y., Geisler W. S., Seidemann E. (2006). Optimal decoding of correlated neural population responses in the primate visual cortex. Nat. Neurosci. 9, 1412–1420. doi: 10.1038/nn1792
  • Cohen M., Maunsell J. (2009). Attention improves performance primarily by reducing interneuronal correlations. Nat. Neurosci. 12, 1594–1600. doi: 10.1038/nn.2439
  • Coombes S., Timofeeva Y., Svensson C., Lord G., Josić K., Cox S., Colbert C. (2007). Branching dendrites with resonant membrane: a "sum-over-trips" approach. Biol. Cybern. 97, 137–149. doi: 10.1007/s00422-007-0161-5
  • de la Rocha J., Doiron B., Shea-Brown E., Josić K., Reyes A. (2007). Correlation between neural spike trains increases with firing rate. Nature 448, 802–806. doi: 10.1038/nature06028
  • Diesmann M., Gewaltig M., Aertsen A. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529–533. doi: 10.1038/990101
  • Doiron B., Rinzel J., Reyes A. (2006). Stochastic synchronization in finite size spiking networks. Phys. Rev. E 74, 30903. doi: 10.1103/PhysRevE.74.030903
  • Ecker A., Berens P., Keliris G., Bethge M., Logothetis N., Tolias A. (2010). Decorrelated neuronal firing in cortical microcircuits. Science 327, 584–587. doi: 10.1126/science.1179867
  • Gasparini S., Magee J. (2006). State-dependent dendritic computation in hippocampal CA1 pyramidal neurons. J. Neurosci. 26, 2088. doi: 10.1523/JNEUROSCI.4428-05.2006
  • Gawne T., Richmond B. (1993). How independent are the messages carried by adjacent inferior temporal cortical neurons? J. Neurosci. 13, 2758.
  • Goard M., Dan Y. (2009). Basal forebrain activation enhances cortical coding of natural scenes. Nat. Neurosci. 12, 1444–1449. doi: 10.1038/nn.2402
  • Gutnisky D., Dragoi V. (2008). Adaptive coding of visual information in neural populations. Nature 452, 220–224. doi: 10.1038/nature06563
  • Gutnisky D. A., Josić K. (2010). Generation of spatio-temporally correlated spike-trains and local-field potentials using a multivariate autoregressive process. J. Neurophysiol. doi: 10.1152/jn.00518.2009
  • Hertz J. (2010). Cross-correlations in high-conductance states of a model cortical network. Neural Comput. 22, 427–447. doi: 10.1162/neco.2009.06-08-806
  • Johnston D., Narayanan R. (2008). Active dendrites: colorful wings of the mysterious butterflies. Trends Neurosci. 31, 309–316. doi: 10.1016/j.tins.2008.03.004
  • Kuhn A., Aertsen A., Rotter S. (2003). Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput. 15, 67–101. doi: 10.1162/089976603321043702
  • Kuhn A., Aertsen A., Rotter S. (2004). Neuronal integration of synaptic input in the fluctuation-driven regime. J. Neurosci. 24, 2345. doi: 10.1523/JNEUROSCI.3349-03.2004
  • Kumar A., Rotter S., Aertsen A. (2008). Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. J. Neurosci. 28, 5268. doi: 10.1523/JNEUROSCI.2542-07.2008
  • Li X., Ascoli G. A. (2006). Computational simulation of the input–output relationship in hippocampal pyramidal cells. J. Comput. Neurosci. 21, 191–209. doi: 10.1007/s10827-006-8797-z
  • Litvak V., Sompolinsky H., Segev I., Abeles M. (2003). On the transmission of rate code in long feedforward networks with excitatory–inhibitory balance. J. Neurosci. 23, 3006.
  • Maršálek P., Koch C., Maunsell J. (1997). On the relationship between synaptic input and spike output jitter in individual neurons. Proc. Natl. Acad. Sci. U.S.A. 94, 735. doi: 10.1073/pnas.94.2.735
  • Maynard E. M., Hatsopoulos N. G., Ojakangas C. L., Acuna B. D., Sanes J. N., Normann R. A., Donoghue J. P. (1999). Neuronal interactions improve cortical population coding of movement direction. J. Neurosci. 19, 8083–8093.
  • Mitchell J. F., Sundberg K. A., Reynolds J. H. (2009). Spatial attention decorrelates intrinsic activity fluctuations in macaque area V4. Neuron 63, 879–888. doi: 10.1016/j.neuron.2009.09.013
  • Morel D., Levy W. (2009). The cost of linearization. J. Comput. Neurosci. 27, 259–275. doi: 10.1007/s10827-009-0141-y
  • Moreno-Bote R., Parga N. (2006). Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Phys. Rev. Lett. 96, 28101. doi: 10.1103/PhysRevLett.96.028101
  • Moreno-Bote R., Renart A., Parga N. (2008). Theory of input spike auto- and cross-correlations and their effect on the response of spiking neurons. Neural Comput. 20, 1651–1705. doi: 10.1162/neco.2008.03-07-497
  • Nunez P., Srinivasan R. (2006). Electric Fields of the Brain: The Neurophysics of EEG. New York, NY: Oxford University Press. doi: 10.1093/acprof:oso/9780195050387.001.0001
  • Okun M., Lampl I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat. Neurosci. 11, 535–537. doi: 10.1038/nn.2105
  • Oram M. W., Földiák P., Perrett D. I., Sengpiel F. (1998). The 'ideal homunculus': decoding neural population signals. Trends Neurosci. 21, 259–265. doi: 10.1016/S0166-2236(97)01216-2
  • Ostojić S., Brunel N., Hakim V. (2009). How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J. Neurosci. 29, 10234–10253. doi: 10.1523/JNEUROSCI.1275-09.2009
  • Poort J., Roelfsema P. (2009). Noise correlations have little influence on the coding of selective attention in area V1. Cereb. Cortex 19, 543. doi: 10.1093/cercor/bhn103
  • Renart A., de la Rocha J., Bartho P., Hollender L., Parga N., Reyes A., Harris K. (2010). The asynchronous state in cortical circuits. Science 327, 587–590. doi: 10.1126/science.1179850
  • Reyes A. (2003). Synchrony-dependent propagation of firing rate in iteratively constructed networks in vitro. Nat. Neurosci. 6, 593–599. doi: 10.1038/nn1056
  • Roelfsema P., Lamme V., Spekreijse H. (2004). Synchrony and covariation of firing rates in the primary visual cortex during contour grouping. Nat. Neurosci. 7, 982–991. doi: 10.1038/nn1304
  • Romo R., Hernández A., Zainos A., Salinas E. (2003). Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron 38, 649–657. doi: 10.1016/S0896-6273(03)00287-3
  • Salinas E., Sejnowski T. J. (2000). Impact of correlated synaptic input on output firing rate and variability in simple neuronal models. J. Neurosci. 20, 6193–6209.
  • Schneider A., Lewis T., Rinzel J. (2006). Effects of correlated input and electrical coupling on synchrony in fast-spiking cell networks. Neurocomputing 69, 1125–1129. doi: 10.1016/j.neucom.2005.12.058
  • Shadlen M. N., Newsome W. T. (1998). The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J. Neurosci. 18, 3870–3896.
  • Shea-Brown E., Josić K., de la Rocha J., Doiron B. (2008). Correlation and synchrony transfer in integrate-and-fire neurons: basic properties and consequences for coding. Phys. Rev. Lett. 100, 108102. doi: 10.1103/PhysRevLett.100.108102
  • Smith M. A., Kohn A. (2008). Spatial and temporal scales of neuronal correlation in primary visual cortex. J. Neurosci. 28, 12591–12603. doi: 10.1523/JNEUROSCI.2929-08.2008
  • Stark E., Globerson A., Asher I., Abeles M. (2008). Correlations between groups of premotor neurons carry information about prehension. J. Neurosci. 28, 10618–10630. doi: 10.1523/JNEUROSCI.3418-08.2008
  • Steinmetz P., Roy A., Fitzgerald P., Hsiao S., Johnson K., Niebur E. (2000). Attention modulates synchronized neuronal firing in primate somatosensory cortex. Nature 404, 187–190. doi: 10.1038/35004588
  • Stratonovich R. (1963). Topics in the Theory of Random Noise: General Theory of Random Processes. Nonlinear Transformations of Signals and Noise. New York, NY: Gordon and Breach.
  • Super H., Roelfsema P. (2005). Chronic multiunit recordings in behaving animals: advantages and limitations. In Development, Dynamics and Pathology of Neuronal Networks: From Molecules to Functional Circuits. Proceedings of the 23rd International Summer School of Brain Research, Royal Netherlands Academy of Arts and Sciences, Amsterdam, 25–29 August 2003, 263.
  • Tetzlaff T., Buschermöhle M., Geisel T., Diesmann M. (2003). The spread of rate and correlation in stationary cortical networks. Neurocomputing 52, 949–954.
  • Tetzlaff T., Rotter S., Stark E., Abeles M., Aertsen A., Diesmann M. (2008). Dependence of neuronal correlations on filter characteristics and marginal spike train statistics. Neural Comput. 20, 2133–2184. doi: 10.1162/neco.2008.05-07-525
  • Tiesinga P. H., Fellous J.-M., Salinas E., José J. V., Sejnowski T. J. (2004). Inhibitory synchrony as a mechanism for attentional gain modulation. J. Physiol. (Paris) 98, 296–314. doi: 10.1016/j.jphysparis.2005.09.002
  • Troyer T., Miller K. (1997). Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Comput. 9, 971–983. doi: 10.1162/neco.1997.9.5.971
  • Vaadia E., Haalman I., Abeles M., Bergman H., Prut Y., Slovin H., Aertsen A. (1995). Dynamics of neuronal interactions in monkey cortex in relation to behavioural events. Nature 373, 515–518. doi: 10.1038/373515a0
  • van Rossum M., Turrigiano G., Nelson S. (2002). Fast propagation of firing rates through layered networks of noisy neurons. J. Neurosci. 22, 1956–1966.
  • Vogels T. P., Abbott L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. J. Neurosci. 25, 10786–10795. doi: 10.1523/JNEUROSCI.3508-05.2005
  • Womelsdorf T., Schoffelen J.-M., Oostenveld R., Singer W., Desimone R., Engel A. K., Fries P. (2007). Modulation of neuronal interactions through neuronal synchronization. Science 316, 1609–1612. doi: 10.1126/science.1139597
  • Zohary E., Shadlen M., Newsome W. (1994). Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370, 140–143. doi: 10.1038/370140a0

Articles from Frontiers in Computational Neuroscience are provided here courtesy of Frontiers Media SA