
**Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains**

Michael Krumin and Shy Shoham

Comput Intell Neurosci. 2010; 2010: 752428. Published online 2010 April 29. doi: 10.1155/2010/752428. PMCID: PMC2862319

Faculty of Biomedical Engineering, Technion—Israel Institute of Technology, 32000 Haifa, Israel

*Shy Shoham: Email: sshoham@bm.technion.ac.il

Academic Editor: Andrzej Cichocki

Received 2009 April 3; Revised 2009 August 13; Accepted 2010 January 11.

Copyright © 2010 M. Krumin and S. Shoham.

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


**Abstract**

Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting “hidden” Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method.

**1. Introduction**

The analysis of multivariate neurophysiological signals at the cellular (spike trains) and population scales (EEG/MEG, LFP, and ECOG) has developed almost independently, largely due to the mathematical differences between continuous and point-process signals. The analysis of multiple neural spike train data [1] has gained tremendous relevance recently with the advent and widespread application of arrays of microelectrodes in both basic and applied neuroscience. Furthermore, emerging optical methods for network activity imaging [2] and control [3] are likely to further accelerate this growth.

Currently, the analysis of multichannel spike trains is still largely limited to single-channel analyses, to bivariate cross-correlation and metric-space analyses [4], and to spike train filtering (“decoding”). In contrast, much of EEG/MEG time series analysis has revolved around linear and nonlinear models and analyses that are essentially multivariate, most prominently the multivariate autoregressive (MVAR) model. The MVAR framework is associated with a powerful set of time- and frequency-domain statistical tools for inferring directional and causal information flow based on Granger's framework [5], including linear and nonlinear Granger causality, directed transfer function, directed coherence, and partial directed coherence (see [6–8] for reviews). Scattered attempts at applying this general framework to neural spike trains have relied on smoothing the spike trains to obtain a continuous process that can be fit with an MVAR model [9–12]. This approach has the clear disadvantage of being highly kernel-dependent and of introducing unwanted distortions. The inability to estimate multivariate autoregressive models for spike trains has recently motivated Nedungadi et al. [13] to develop an alternative nonparametric procedure for computing Granger causality based on spectral matrix factorization (without fitting the data with an autoregressive model).

The purpose of this paper is to bridge this divide in neurophysiological data analysis by introducing a correlation-distortion-based framework for applying multivariate autoregressive models to multichannel spike trains. The primary aim of making this connection is to enable direct identification of causal information flow among populations of neurons using the powerful Granger causality analyses, which have been tried and tested in numerous studies of continuous neural signals. The framework is based on our recent analytical results [14, 15] on correlation distortions in (multiple) Linear-Nonlinear-Poisson (LNP) models when the inputs are white Gaussian noise processes and the nonlinearities are exponential, square, or absolute values. The essential idea in this approach is that the nonlinearity (which produces the firing rates) systematically distorts the correlation structure of the correlated Gaussian outputs of the linear stage, and that the spike trains carry essentially the same expected correlation structure. By deriving formulas for these distortions, we were able to generate synthetic spike trains with a fully-controllable correlation structure by choosing FIR linear kernels that “predistort” the Gaussian processes to cancel out the subsequent distortion. Such spike trains can be applied, for example, in pattern photo-stimulation of synthetic input activity onto a neuron, or for controlling neuron populations in artificial neuroprosthetic interfaces [3, 16]. Although we noted in [14] that the linear stage could generally have a recursive MVAR structure, the required estimation steps were not presented or tested.

The remainder of the paper is organized as follows. Section 2 introduces the methods used for generating the spike trains used in this paper and for evaluating statistical significance. In Section 3 we present and evaluate the procedure for estimating the MVAR-nonlinear-Poisson model. In Section 4 we provide an overview of linear Granger causality analyses and apply them to estimating information flow in bi- and trivariate spike trains. In Section 5 we conclude by discussing the new framework's relation to previous work, and its prospects and limitations.

**2. Methods**

Spike trains were generated in two different ways in order to mimic two basic scenarios encountered in neural data recordings: distributed population activity with relatively wide correlation functions, and local networks of directly interconnected neurons.

Population activity was simulated using a Linear-Nonlinear-Poisson (LNP) generative neural model with a multichannel linear stage modeled by a Multivariate Autoregressive model (see Section 3). Causal connectivity structures were generated by choosing appropriate coefficients for the MVAR model (details provided for each example in Section 4). The desired mean firing rates and firing rate variability were obtained by adjusting the parameters of the nonlinearity [14].

Local networks of directly interconnected neurons were simulated using integrate-and-fire (IF) neuron models proposed by Izhikevich [17] with various interconnection schemes. Routines based on freely available code from the ModelDB [18, 19] database (accession number 115968) were used to generate this network activity. This approach provided networks of realistic spiking neurons with no assumptions of Poisson firing or LNP-based activity. For the three examples used in this work, three different networks were simulated with the connectivity structures depicted in Figure 1.

To statistically test the results of the proposed method within the Granger causality analyses, surrogate datasets were generated by nullifying one causal connection (coefficient) at a time in the estimated underlying MVAR model of the linear stage. This new MVAR model (with one artificially “broken” connection) was used to generate spike trains with no direct causal relation between the two tested channels. Each of these surrogate spike trains was then analyzed with the proposed Granger causality method, and the resulting causality values ((6) or (8)) provided a “null” distribution against which the corresponding values calculated from the real data were compared.
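The surrogate construction can be sketched in two pieces: nulling one connection in the estimated coefficient stack, and comparing an observed causality value against the resulting null distribution. This is a minimal sketch, not the authors' implementation; both function names and the (m, p, p) coefficient-stack layout are our own assumptions.

```python
import numpy as np

def break_connection(A, src, dst):
    """Return a copy of the (m, p, p) MVAR coefficient stack with the
    dst <- src coupling nulled at every lag (one "broken" connection)."""
    A_null = A.copy()
    A_null[:, dst, src] = 0.0
    return A_null

def is_significant(f_observed, f_null, alpha=0.05):
    """Compare an observed causality value (eq. (6) or (8)) against the
    surrogate null distribution at the (1 - alpha) quantile."""
    return f_observed > np.quantile(f_null, 1.0 - alpha)
```

The broken model would then be used to regenerate surrogate spike trains, and the causality value of interest recomputed on each surrogate to build `f_null`.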

**3. Identifying an MVAR-Nonlinear-Poisson Model**

We consider the problem of identifying a *p*-dimensional multivariate (vector) autoregressive model of order *m*:

$$\mathbf{x}(n) = -\sum_{k=1}^{m} A(k)\cdot\mathbf{x}(n-k) + \mathbf{w}(n)$$

(1)

from observations of Poisson spike trains whose rate depends nonlinearly on **x**(*n*):

$$\lambda_{i}(n) = f\left(\mu_{i} + \sigma_{i}\cdot x_{i}(n)\right).$$

(2)

The matrices *A*(*k*) are *p* × *p* coefficient matrices, each corresponding to a specific lag, and **w** is a zero-mean Gaussian noise process with covariance matrix Σ.
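As a concrete illustration, the generative model of (1)-(2) can be simulated directly. The sketch below assumes an exponential nonlinearity and per-bin Poisson counts; the function name, the (m, p, p) coefficient layout, and the parameter choices are ours, not the authors'.

```python
import numpy as np

def simulate_mvar_lnp(A, Sigma, mu, sigma, n_bins, dt=0.001, rng=None):
    """Simulate the hidden MVAR process of (1) and per-bin Poisson spike
    counts whose rate follows (2) with an exponential nonlinearity f.
    A: (m, p, p) stack used as x(n) = -sum_k A[k-1] x(n-k) + w(n)."""
    rng = np.random.default_rng(rng)
    m, p, _ = A.shape
    x = np.zeros((n_bins, p))
    w = rng.multivariate_normal(np.zeros(p), Sigma, size=n_bins)
    for n in range(m, n_bins):
        acc = w[n].copy()
        for k in range(1, m + 1):
            acc -= A[k - 1] @ x[n - k]   # note the sign convention of (1)
        x[n] = acc
    rate = np.exp(mu + sigma * x)        # lambda_i(n) of (2), in Hz
    counts = rng.poisson(rate * dt)      # doubly-stochastic Poisson counts
    return x, rate, counts
```

With `mu = log(20)` the baseline rate is roughly the 20 Hz used in the paper's examples.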

The parameters of an MVAR model (*A*(*k*)) can be estimated directly from the autocorrelation function of its output *R*_{x}(*k*) = *E*[**x**(*n*) · **x**^{T}(*n* + *k*)] by solving the multivariate Yule-Walker equations [8]:

$$\sum_{k=1}^{m} A(k)\cdot R_{\mathbf{x}}(i-k) = -R_{\mathbf{x}}(i), \quad 1 \le i \le m.$$

(3)
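The block-structured system (3) can be solved directly by stacking the equations for all lags into one linear solve. This is a generic sketch under our own conventions: `yule_walker_mvar` is an illustrative name, and `R` is assumed to be a callable returning the *p* × *p* correlation matrix at an integer lag, with R(-k) = R(k)ᵀ for a stationary process.

```python
import numpy as np

def yule_walker_mvar(R, m):
    """Solve the multivariate Yule-Walker equations (3) for A(1..m).
    R: callable, R(k) -> (p, p) correlation matrix at lag k.
    Returns the coefficients as an (m, p, p) stack."""
    p = R(0).shape[0]
    G = np.zeros((m * p, m * p))
    rhs = np.zeros((p, m * p))
    # Block (k, i) of G is R(i - k); [A(1) ... A(m)] . G = -[R(1) ... R(m)]
    for k in range(1, m + 1):
        for i in range(1, m + 1):
            G[(k - 1) * p:k * p, (i - 1) * p:i * p] = R(i - k)
    for i in range(1, m + 1):
        rhs[:, (i - 1) * p:i * p] = R(i)
    B = np.linalg.solve(G.T, -rhs.T).T          # B = [A(1) ... A(m)]
    return B.reshape(p, m, p).transpose(1, 0, 2)
```

For a scalar AR(1) process x(n) = -a·x(n-1) + w(n), R(k) = R(0)·(-a)^|k|, and the solver recovers A(1) = a.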

How can this framework be adapted to our case? Although the correlation matrices *R*_{x}(*k*) are not directly measurable, they can be indirectly estimated from the correlation matrices and expectations of the firing rates *λ* for certain choices of the nonlinearity *f*. These “predistortion” relationships were derived in [14] for exponential, square and absolute-value transformations by considering the effect of these nonlinearities on pairs of correlated, normally distributed random variables. For the case of doubly-stochastic Poisson processes, the spike-train correlation structure *R*_{ΔN}(*τ*) is identical to that of the firing rates, except at zero lag [14, 20]: *R*_{ΔN}(*τ*) = *R*_{λ}(*τ*) + *δ*(*τ*)*E*(*λ*).
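For the exponential nonlinearity, the inverse distortion has a closed form via the lognormal moment identity E[λ₁λ₂] = λ̄₁λ̄₂·exp(σ₁σ₂r) for unit-variance jointly Gaussian inputs with correlation r. A sketch of the inversion follows (the function name is ours; note that at zero lag the Poisson term δ(τ)E(λ) must first be subtracted from R_ΔN to obtain R_λ):

```python
import numpy as np

def gaussian_corr_from_rates(R_lambda, mean_rates, sigma):
    """Invert the exponential-nonlinearity distortion: recover the
    correlation matrix of the hidden unit-variance Gaussian processes
    from the firing-rate correlation matrix at a given lag.
    Uses E[l1*l2] = l1bar * l2bar * exp(s1*s2*r) for lognormal rates."""
    lbar = np.asarray(mean_rates, dtype=float)
    s = np.asarray(sigma, dtype=float)
    return np.log(R_lambda / np.outer(lbar, lbar)) / np.outer(s, s)
```

Applying this at every lag yields the R_x(k) matrices needed for the Yule-Walker equations (3).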

The parameter estimation algorithm is summarized in Algorithm 1 for the exponential and square nonlinearities (the detailed derivation and formulas of absolute-value pre-distortions appear in [14]).

The main algorithm assumes that the model order *m* is known. Several different criteria for automatic determination of an “optimal” model order *m* have been developed. In our implementation we determined the model order by minimizing the Bayesian Information Criterion (BIC):

$$\text{BIC}(m) = 2\log\left[\det\left(\tilde{\Sigma}\right)\right] + \frac{2p^{2}m\cdot\log N_{\text{total}}}{N_{\text{total}}},$$

(4)

where *N*_{total} is the total number of data points and the prediction error covariance matrix $\tilde{\Sigma}$ is given by

$$\tilde{\Sigma} = R(0) + \sum_{k=1}^{m} A(k)\cdot R_{\mathbf{x}}(k).$$

(5)
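Order selection via (4)-(5) can then be sketched as a scan over candidate orders. `bic_order` is our illustrative name; the Yule-Walker solver is injected as a parameter rather than assumed, and `R` is again a callable returning the correlation matrix at a lag.

```python
import numpy as np

def bic_order(R, m_max, n_total, solver):
    """Scan model orders 1..m_max and return the order minimizing the
    BIC of (4), with the prediction-error covariance computed from (5).
    solver(R, m) must return the (m, p, p) coefficient stack A."""
    best_m, best_bic = None, np.inf
    for m in range(1, m_max + 1):
        A = solver(R, m)
        p = A.shape[1]
        Sigma = R(0) + sum(A[k - 1] @ R(k) for k in range(1, m + 1))  # eq. (5)
        bic = (2.0 * np.log(np.linalg.det(Sigma))
               + 2.0 * p ** 2 * m * np.log(n_total) / n_total)        # eq. (4)
        if bic < best_bic:
            best_m, best_bic = m, bic
    return best_m, best_bic
```

Since the penalty term grows linearly in *m*, larger orders are selected only when they materially shrink det(Σ̃).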

Figure 2 presents an illustrative example of an MVAR-Nonlinear-Poisson model (Figure 2(a)) of order 3 estimated from three correlated spike trains. The correlations between the three processes, which can be qualitatively appreciated from the firing rates (Figure 2(b)), are accurately captured by the estimated model (Figure 2(c)).

**4. Granger Causality Analysis**

Granger causality is based on the general concept due to Norbert Wiener that a causal influence should manifest in improving the predictability of the driven process when the driving process is observed. A measurable reduction in the unexplained variance of the driven process (say *x*_{2}(*n*)) as a result of inclusion of the causal (driving) process (say *x*_{1}(*n*)) in linear autoregressive modeling marks the existence of a causal influence from *x*_{1}(*n*) to *x*_{2}(*n*) in the time domain [5]. Pairwise linear Granger causality in the time domain is defined as

$$F_{x_1 \to x_2} = \ln\frac{\Sigma_1}{\Sigma_2},$$

(6)

where Σ_{1} is the unexplained variance (prediction error covariance) of *x*_{2}(*n*) in its own autoregressive model, whereas Σ_{2} is its unexplained variance when a joint MVAR model for both *x*_{2}(*n*) and *x*_{1}(*n*) is constructed. It is expected that *F*_{x1→x2} > 0 when *x*_{1}(*n*) influences *x*_{2}(*n*), and *F*_{x1→x2} = 0 when it does not. In practice, *F*_{x1→x2} is compared to a threshold value, which can be determined using a variety of methods (typically using surrogate data or by shuffling channels).
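The time-domain index (6) can be estimated by comparing least-squares autoregressive fits with and without the candidate driver. This is a generic regression-based sketch under our own naming, not the authors' implementation.

```python
import numpy as np

def _residual_var(y, lags):
    """Least-squares regression of y on the lagged regressors; returns
    the residual (unexplained) variance."""
    coef, *_ = np.linalg.lstsq(lags, y, rcond=None)
    return (y - lags @ coef).var()

def granger_pairwise(x1, x2, m):
    """Pairwise time-domain Granger causality F_{x1 -> x2} of (6):
    log ratio of x2's own-AR residual variance (Sigma_1) to its residual
    variance in a joint model that also includes lags of x1 (Sigma_2)."""
    n = len(x2)
    y = x2[m:]
    own = np.column_stack([x2[m - k:n - k] for k in range(1, m + 1)])
    joint = np.column_stack([own] + [x1[m - k:n - k] for k in range(1, m + 1)])
    return np.log(_residual_var(y, own) / _residual_var(y, joint))
```

On finite data the index is slightly positive even for unconnected channels, which is why the surrogate-based threshold of Section 2 is needed.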

To evaluate the Granger analysis framework, we first tested its ability to estimate the causality structure in a system of two coupled synthetic spike trains with unidirectional influence (Figure 3). The structure of the MVAR model used to generate the data (Figure 3(a)) was

$$\underline{x}(n) = \begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} \end{bmatrix} \underline{x}(n-1) + \begin{bmatrix} 0 & 0 \\ -\tfrac{1}{2} & 0 \end{bmatrix} \underline{x}(n-2) + \underline{w}(n).$$

(7)

We simulated a 10-minute-long dataset in which the mean firing rate of the spike trains was 20 Hz (set by adjusting the parameters of the exponential nonlinearity). The strength of each connection was calculated using (6), and its statistical significance was tested by applying the same calculation scheme to surrogate data in which the tested connection was removed (for details on generating surrogate data see Section 2).

Figure 3. Granger causality analysis in a 2D case. (a) Schematic of the original model and its observed correlation structure. (b) Granger causality analysis of the spike trains revealed a significant causal connection from cell 1 to cell 2. The numbers represent …

Pairwise causal analyses are important, but cannot distinguish, for example, between direct and indirect influences in more elaborate connectivity schemes, such as trivariate networks [8]. To address inference in this more complex scenario, we can perform a series of “leave-one-out” analyses, using the multivariate extension of the linear Granger causality index [7, 8]. For example, to assess the *direct* influence exerted by *x*_{m} on *x*_{n}, we use

$$F_{x_m \to x_n} = \ln\frac{\Sigma_{x_n \mid x_1 \cdots x_{m-1}, x_{m+1} \cdots x_p}}{\Sigma_{x_n \mid x_1 \cdots x_p}}.$$

(8)
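The leave-one-out index (8) can be sketched by fitting the full *p*-variate model and a reduced model with the source channel omitted, and comparing the target channel's residual variances. Function name and column layout are ours.

```python
import numpy as np

def granger_conditional(X, src, dst, m):
    """Conditional Granger causality of (8): log ratio of the target
    channel's residual variance in a model omitting the source channel
    to its residual variance in the full p-variate model.
    X: (n_samples, p) array of channels."""
    n, p = X.shape
    y = X[m:, dst]

    def resid_var(channels):
        lags = np.column_stack([X[m - k:n - k, c]
                                for k in range(1, m + 1)
                                for c in channels])
        coef, *_ = np.linalg.lstsq(lags, y, rcond=None)
        return (y - lags @ coef).var()

    reduced = [c for c in range(p) if c != src]
    return np.log(resid_var(reduced) / resid_var(list(range(p))))
```

In a chain 1 → 2 → 3, the conditional index from unit 1 to unit 3 stays near zero because unit 2's past already explains the influence, which is exactly the direct-versus-indirect distinction discussed above.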

To test this approach (Figure 4), we applied it to 10-minute-long datasets (20 Hz average rate) synthetically generated from two different MVAR models representing two basic triple-unit activity examples. The first (Figures 4(a)–4(c)) represents a sequential connection and was modeled by

$$\underline{x}(n) = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} & 0 \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix} \underline{x}(n-1) + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -\tfrac{1}{2} & 0 \end{bmatrix} \underline{x}(n-2) + \begin{bmatrix} -\tfrac{1}{2} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \underline{x}(n-3) + \underline{w}(n).$$

(9)

The second case (Figures 4(d)–4(f)) is a “common input” example, where unit #1 drives the activity of units #2 and #3 with different time delays. This example was modeled by

$$\underline{x}(n) = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} & 0 \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix} \underline{x}(n-1) + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \underline{x}(n-2) + \begin{bmatrix} -\tfrac{1}{2} & 0 & 0 \\ 0 & 0 & 0 \\ -\tfrac{1}{2} & 0 & 0 \end{bmatrix} \underline{x}(n-3) + \underline{w}(n).$$

(10)

To test the robustness of the MVAR-N-P model estimation under a nonlinearity mismatch, we simulated Model I from Figure 4 using a square and an absolute value nonlinearity and estimated the model assuming an exponential nonlinearity as before. As can be seen in Figure 5, the estimated MVAR models are extremely similar to the model estimated without the mismatch: *ρ* = 0.995 ± 0.004, *ρ* = 0.993 ± 0.006, respectively, for the nonzero kernels (each model is illustrated by its respective impulse responses).

Figure 5. Nonlinearity mismatch has a minor effect on the estimation of the MVAR model. The MVAR-N model was estimated from spike trains generated using different nonlinearities (absolute value (dotted line), square (dashed line), or exponential (solid line)). Estimation …

Finally, we tested the method on data from simulations of *realistic* local network activity. The spike trains were simulated using interconnected networks of integrate-and-fire neurons [17]. Examples similar to those presented in Figures 3 and 4 were analyzed using the proposed approach based on the MVAR-Nonlinear-Poisson model. The exponential nonlinearity was used, as it is more flexible in capturing correlation structures than the other two nonlinearities [14]. The method successfully determined the connectivity pattern in all three examples, even though the spike trains are far from Poissonian and therefore could not have been generated by an LNP model (Figure 6).

We note that, as a result of the absolute and relative refractory periods of non-Poisson spiking, the correlation structure has strong negative peaks that cannot be directly captured by the MVAR models used in our framework. To fit a stable and representative MVAR model to these spike trains, we applied a basic regularization procedure to their correlation structure, consisting of truncating the negative peaks in the auto- and cross-covariance functions and adding a diagonal matrix to the correlation matrix (to obtain a positive semidefinite correlation matrix).
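The regularization step can be sketched as follows. Clipping all negative covariance entries to zero and adding diagonal loading proportional to the trace are our own simplifications of the truncation procedure described above, not the authors' exact recipe.

```python
import numpy as np

def regularize_correlations(R, eps=1e-3):
    """Basic regularization before fitting the MVAR model to non-Poisson
    spike trains: truncate negative peaks in the auto-/cross-covariances
    and add a small diagonal ridge at lag zero.
    R: (n_lags, p, p) stack of correlation matrices, lag 0 first."""
    R = np.clip(R, 0.0, None)                       # truncate negative peaks
    p = R.shape[1]
    R[0] = R[0] + eps * np.trace(R[0]) / p * np.eye(p)  # diagonal loading
    return R
```

The ridge size `eps` trades off fidelity to the measured correlations against the stability of the subsequent Yule-Walker solve.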

**5. Discussion**

In this paper, we introduced a new method for modeling multineuron spike train data, and its application to the identification of information flow structure among interacting neuronal populations. The identification of neural systems from multineuron spike train data can be used for experimental inference of underlying network connections [22–27], or more generally of “effective” connectivity. It is also indirectly related to nonparametric methods for identifying high-order synchronous interactions [28–31], and metrics of (dis)similarity between spike trains [4, 32–36]. In our approach, the data is fit to a model based on an underlying MVAR process with Gaussian statistics which is nonlinearly transformed to firing rates that modulate Poisson spike trains. Our approach thus departs from the classical model-based identification of multivariate spike train data which assumes a specific, biophysically-motivated, linear or nonlinear interaction scheme between neurons [22–27]. In our approach, there is no explicit modeling of the interaction exerted through the spike trains, but rather the modulating processes interact through the multivariate recursive structure of the MVAR. In practice, the strict assumptions of neural-interaction models are challenged by the dramatic under-sampling of a large population in real neural recordings, as well as by oscillations and modulations exerted by unmodeled elements. Recently, this “classical” approach has been generalized into generalized linear models (GLMs) that include modulation by a dynamic stimulus or behavior [37–39], which in our framework can be done by adding an additional external input or set of inputs (and thus moving from an AR to an ARX model). In addition, the new framework can easily be extended to analyze mixed datasets containing both spiking and continuous neurophysiological (and behavioral) data, as well as to time-varying (nonstationary) scenarios using the MVAR framework and its standard adaptive extensions.

Rather than explicitly modeling neuron-to-neuron interactions, our approach benefits from the MVAR-based Granger causality framework for inferring directed information flow using the concept of increased predictability of one time series when another is observed. While we have only illustrated the use of linear Granger causality, this framework now includes a number of different statistical measures that can be used for inference, including directed transfer function, directed coherence, and partial directed coherence. A thorough overview of these methods [6–8] is clearly outside the scope of the current paper; however, we note that their ability to infer cortical network connectivity patterns has been systematically tested and compared in a number of studies (e.g., see [40]). It is important to note that a major difference between our approach and “classical” MVAR models is that the MVAR model here is hidden and only indirectly observed through the noisy spike trains. As a result, it was crucial that both the MVAR parameter estimates and the prediction errors can be estimated from the auto- and cross-correlations (equations (3) and (5)).

Finally, we note that the proposed framework has limitations that could benefit from additional work. First, the LNP nonlinearity is required in order to transform the Gaussian processes into nonnegative stochastic intensities (rates) that modulate the Poisson processes; having to select a specific nonlinearity is clearly a shortcoming of this approach compared with the complete generality (and uniqueness) of the MVAR framework in the context of continuous processes. Second, when the MVAR-N-P is used as a generative model for spike trains, the resulting doubly-stochastic Poisson processes have a larger-than-Poisson dispersion (variance), which may not always be desirable. This limitation can be addressed by considering alternative models for the transformation from a Gaussian process to spike trains: for example, the single-stochastic processes analyzed by Macke et al. [41] produce spikes deterministically whenever a Gaussian process exceeds a threshold value, while Tchumatchenko et al. [42] analyzed spike trains produced during threshold crossings of Gaussian processes (both solutions can only be numerically inverted).

**Acknowledgments**

This work was supported by Israeli Science Foundation Grant no. 1248/06 and European Research Council Starting Grant no. 211055.

**References**

1. Brown EN, Kass RE, Mitra PP. Multiple neural spike train data analysis: state-of-the-art and future challenges. *Nature Neuroscience*. 2004;7(5):456–461. [PubMed]

2. Göbel W, Helmchen F. In vivo calcium imaging of neural network function. *Physiology*. 2007;22(6):358–365. [PubMed]

3. Shoham S, O’Connor DH, Sarkisov DV, Wang SS-H. Rapid neurotransmitter uncaging in spatially defined patterns. *Nature Methods*. 2005;2(11):837–843. [PubMed]

4. Victor JD. Spike train metrics. *Current Opinion in Neurobiology*. 2005;15(5):585–592. [PMC free article] [PubMed]

5. Granger CWJ. Investigating causal relations by econometric models and cross-spectral methods. *Econometrica*. 1969;37(3):424–438.

6. Pereda E, Quiroga RQ, Bhattacharya J. Nonlinear multivariate analysis of neurophysiological signals. *Progress in Neurobiology*. 2005;77(1-2):1–37. [PubMed]

7. Gourévitch B, Bouquin-Jeannès RL, Faucon G. Linear and nonlinear causality between signals: methods, examples and neurophysiological applications. *Biological Cybernetics*. 2006;95(4):349–369. [PubMed]

8. Ding M, Chen Y, Bressler SL. Granger causality: basic theory and application to neuroscience. In: Schelter B, Winterhalder M, Timmer J, editors. *Handbook of Time Series Analysis*. Weinheim, Germany: Wiley; 2007. pp. 437–460.

9. Sameshima K, Baccalá LA. Using partial directed coherence to describe neuronal ensemble interactions. *Journal of Neuroscience Methods*. 1999;94(1):93–103. [PubMed]

10. Fanselow EE, Sameshima K, Baccala LA, Nicolelis MAL. Thalamic bursting in rats during different awake behavioral states. *Proceedings of the National Academy of Sciences of the United States of America*. 2001;98(26):15330–15335. [PubMed]

11. Kamiński M, Ding M, Truccolo WA, Bressler SL. Evaluating causal relations in neural systems: granger causality, directed transfer function and statistical assessment of significance. *Biological Cybernetics*. 2001;85(2):145–157. [PubMed]

12. Zhu L, Lai Y-C, Hoppensteadt FC, He J. Probing changes in neural interaction during adaptation. *Neural Computation*. 2003;15(10):2359–2377. [PubMed]

13. Nedungadi AG, Rangarajan G, Jain N, Ding M. Analyzing multiple spike trains with nonparametric granger causality. *Journal of Computational Neuroscience*. 2009;27(1):55–64. [PubMed]

14. Krumin M, Shoham S. Generation of spike trains with controlled auto- and cross-correlation functions. *Neural Computation*. 2009;21(6):1642–1664. [PubMed]

15. Krumin M, Shimron A, Shoham S. Correlation-distortion based identification of linear-nonlinear-poisson models. *Journal of Computational Neuroscience*. In press. [PubMed]

16. Farah N, Reutsky I, Shoham S. Patterned optical activation of retinal ganglion cells. In: Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC ’07); August 2007; Lyon, France. Cité Internationale; pp. 6368–6370. [PubMed]

17. Izhikevich EM. Simple model of spiking neurons. *IEEE Transactions on Neural Networks*. 2003;14(6):1569–1572. [PubMed]

18. Hines ML, Morse T, Migliore M, Carnevale NT, Shepherd GM. ModelDB: a database to support computational neuroscience. *Journal of Computational Neuroscience*. 2004;17(1):7–11. [PMC free article] [PubMed]

19. Izhikevich EM. Polychronization: computation with spikes. *Neural Computation*. 2006;18(2):245–282. [PubMed]

20. Knox CK. Signal transmission in random spike trains with applications to the statocyst neurons of the lobster. *Biological Cybernetics*. 1970;7(5):167–174. [PubMed]

21. Whittle P. On the fitting of multivariate autoregressions, and the approximate canonical factorization of a spectral density matrix. *Biometrika*. 1963;50(1-2):129–134.

22. Borisyuk GN, Borisyuk RM, Kirillov AB, Kovalenko EI, Kryukov VI. A new statistical method for identifying interconnections between neuronal network elements. *Biological Cybernetics*. 1985;52(5):301–306. [PubMed]

23. van den Boogaard H. Maximum likelihood estimations in a nonlinear self-exciting point process model. *Biological Cybernetics*. 1986;55(4):219–225. [PubMed]

24. Chornoboy ES, Schramm LP, Karr AF. Maximum likelihood identification of neural point process systems. *Biological Cybernetics*. 1988;59(4-5):265–275. [PubMed]

25. Yang X, Shamma SA. Identification of connectivity in neural networks. *Biophysical Journal*. 1990;57(5):987–999. [PubMed]

26. Brillinger DR. Nerve cell spike train data analysis: a progression of technique. *Journal of the American Statistical Association*. 1992;87(418):260–271.

27. Okatan M, Wilson MA, Brown EN. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. *Neural Computation*. 2005;17(9):1927–1961. [PubMed]

28. Martignon L, Deco G, Laskey K, Diamond M, Freiwald W, Vaadia E. Neural coding: higher-order temporal patterns in the neurostatistics of cell assemblies. *Neural Computation*. 2000;12(11):2621–2653. [PubMed]

29. Nakahara H, Amari S-I. Information-geometric measure for neural spikes. *Neural Computation*. 2002;14(10):2269–2316. [PubMed]

30. Gütig R, Aertsen A, Rotter S. Analysis of higher-order neuronal interactions based on conditional inference. *Biological Cybernetics*. 2003;88(5):352–359. [PubMed]

31. Schrader S, Grün S, Diesmann M, Gerstein GL. Detecting synfire chain activity using massively parallel spike train recording. *Journal of Neurophysiology*. 2008;100(4):2165–2176. [PubMed]

32. Victor JD, Purpura KP. Metric-space analysis of spike trains: theory, algorithms and application. *Network: Computation in Neural Systems*. 1997;8(2):127–164.

33. van Rossum MCW. A novel spike distance. *Neural Computation*. 2001;13(4):751–763. [PubMed]

34. Schreiber S, Fellous JM, Whitmer D, Tiesinga P, Sejnowski TJ. A new correlation-based measure of spike timing reliability. *Neurocomputing*. 2003;52–54:925–931. [PMC free article] [PubMed]

35. Hunter JD, Milton JG. Amplitude and frequency dependence of spike timing: implications for dynamic regulation. *Journal of Neurophysiology*. 2003;90(1):387–394. [PubMed]

36. Dauwels J, Vialatte F, Weber T, Cichocki A. Quantifying statistical interdependence by message passing on graphs—part I: one-dimensional point processes. *Neural Computation*. 2009;21(8):2152–2202. [PubMed]

37. Paninski L, Shoham S, Fellows MR, Hatsopoulos NG, Donoghue JP. Superlinear population encoding of dynamic hand trajectory in primary motor cortex. *Journal of Neuroscience*. 2004;24(39):8551–8561. [PubMed]

38. Truccolo W, Eden UT, Fellows MR, Donoghue JP, Brown EN. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. *Journal of Neurophysiology*. 2005;93(2):1074–1089. [PubMed]

39. Pillow JW, Shlens J, Paninski L, et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. *Nature*. 2008;454(7207):995–999. [PMC free article] [PubMed]

40. Astolfi L, Cincotti F, Mattia D, et al. Comparison of different cortical connectivity estimators for high-resolution EEG recordings. *Human Brain Mapping*. 2007;28(2):143–157. [PubMed]

41. Macke JH, Berens P, Ecker AS, Tolias AS, Bethge M. Generating spike trains with specified correlation coefficients. *Neural Computation*. 2009;21(2):397–423. [PubMed]

42. Tchumatchenko T, Malyshev A, Geisel T, Volgushev M, Wolf F. Correlations and synchrony in threshold neuron models. *Physical Review Letters*. 2010;104(5):4 pages. Article ID 058102. [PubMed]

Articles from Computational Intelligence and Neuroscience are provided here courtesy of **Hindawi Publishing Corporation**
