
J Am Stat Assoc. Author manuscript; available in PMC 2017 August 18.

Published in final edited form as: J Am Stat Assoc. Published online 2016 August 18. doi: 10.1080/01621459.2015.1116988

PMCID: PMC5155514

NIHMSID: NIHMS770707

Department of Statistics, University of California, Irvine, CA

Babak Shahbaba: babaks@uci.edu

The publisher's final edited version of this article is available at J Am Stat Assoc

The goal of this paper is to develop a novel statistical model for studying cross-neuronal spike train interactions during decision making. For an individual to successfully complete the task of decision-making, a number of temporally-organized events must occur: stimuli must be detected, potential outcomes must be evaluated, behaviors must be executed or inhibited, and outcomes (such as reward or no-reward) must be experienced. Given the complexity of this process, decision-making is likely encoded by the temporally-precise interactions between large populations of neurons. Most existing statistical models, however, are inadequate for analyzing such a phenomenon because they provide only an aggregated measure of interactions over time. To address this considerable limitation, we propose a dynamic Bayesian model which captures the time-varying nature of neuronal activity (such as the time-varying strength of the interactions between neurons). The proposed method yields results that reveal new insight into the dynamic nature of population coding in the prefrontal cortex during decision making. First, we note that while some neurons in the prefrontal cortex do not synchronize their firing activity until the presence of a reward, a different set of neurons synchronizes shortly after stimulus onset. These differentially synchronizing sub-populations of neurons suggest a continuum of population representation of the reward-seeking task. Second, our analyses suggest that the degree of synchronization differs between the rewarded and non-rewarded conditions. Moreover, the proposed model is scalable to handle data on many simultaneously-recorded neurons and is applicable to analyzing other types of multivariate time series data with latent structure. Supplementary materials (including computer code) for our paper are available online.

This paper is motivated by neurophysiological studies on cognitive behaviors such as decision-making. Decision-making and related behaviors are complex because they involve organized patterns of cognitive and behavioral subcomponents. For an individual to successfully complete a decision-making task, a number of temporally-organized events must occur: stimuli must be detected, potential outcomes must be evaluated, behaviors must be executed or inhibited, and outcomes (such as reward or no-reward) must be experienced. Here, we develop a novel statistical model for studying spike-train interactions between neurons recorded from a rat in a basic decision-making experiment. The potential clinical impact of this study is broad: disrupted decision-making is recognized as a common feature across many psychiatric disorders, including schizophrenia and ADHD. Understanding how populations of neurons function in decision-making and related behaviors will guide the targeting of specific neural systems for the development of treatments.

In recent years, there have been a number of studies investigating how the activity of single neurons (see Figure 1) is related to decision-making (Gold and Shadlen 2007; Shadlen and Kiani 2013). However, it is very likely that decision-making is encoded by the temporally-precise interactions between populations of neurons (Buzsáki 2004, 2010). With this in mind, we propose a statistical model that can capture the dynamics of cross-neuronal interactions and can be used to test hypotheses about differences in neuronal responses between experimental conditions. More importantly, our goal is to develop a model that is flexible and powerful and thus can be used by neuroscientists for testing existing theories as well as developing new ways of thinking about how populations of neurons relate to behavior with high temporal specificity. Our proposed model characterizes the activity of many neurons in a population by connecting the joint distribution of multiple spike trains to their marginals via a parametric copula process model. In general, models that couple the joint distribution of two (or more) variables to their individual marginal distributions are called copula models. Using such models, we are able to separate the inference on cross-neuronal interactions from the individual marginal firing activity of neurons.

The peri-stimulus time histogram (PSTH) of one neuron under two conditions. The upper plot is the continuous representation (point process) of the spike train data. Each line represents a trial, and points on the line represent the firing times of the neuron.

To estimate the underlying time-varying firing rates, we use a flexible Gaussian process model. In addition to allowing the firing rates to change over time, our proposed model can also capture time-varying interactions among neurons. Inference in our method involves a computationally efficient sampling scheme, which can be easily applied to a moderate (tens) to large (hundreds) number of simultaneously recorded neurons.

Early analysis of behavioral neurophysiological data involving multiple simultaneously-recorded neurons focused on correlation of activity across pairs of neurons (Narayanan and Laubach 2009), using a joint peristimulus time histogram (JPSTH), which shows spike intensity over time (Gerstein and Perkel 1969), to identify synchrony between a pair of neurons. However, to capture the complex behavior of neural networks, many new statistical methods have recently been proposed for analyzing several simultaneously recorded neurons in order to extract the information contained in their temporal interactions under different experimental conditions. For more discussion on the analysis of spike trains, refer to Harrison et al. (2013); Brillinger (1988); Brown et al. (2004); Kass et al. (2005); West (2007); Reich et al. (1998); Barbieri et al. (2001); Kass and Ventura (2001); Grün et al. (2002); Rigat et al. (2006); Patnaik et al. (2008); Pillow et al. (2008); Jacobs et al. (2009); Diekman et al. (2009); Sastry and Unnikrishnan (2010); Kottas et al. (2012); Kelly and Kass (2012); Shahbaba et al. (2014).

In contrast to many of these existing methods, our approach is based on a dynamic copula process model. Several stationary copula models have previously been proposed for neuroscience problems. Berkes et al. (2009) propose a variety of copula models for capturing neural dependencies and develop an efficient maximum likelihood procedure for inference. Swihart et al. (2010) propose a unified approach to modeling multivariate binary data using copulas on partitions, establishing a framework that produces likelihood-based inference about the effects of covariates on marginal success probabilities while accounting for the correlation among multivariate outcomes.

Although they did not refer to their model as a copula, Kass et al. (2011) and Kelly and Kass (2012) recently proposed a new method for detecting synchrony by modeling the joint firing probability of two neurons, denoted ${\lambda}^{AB}(t\mid {H}_{t}^{AB})$, as a factorization of the marginals

$${\lambda}^{AB}(t\mid {H}_{t}^{AB})=\zeta (t){\lambda}^{A}(t\mid {H}_{t}^{A}){\lambda}^{B}(t\mid {H}_{t}^{B})$$

where $H_t$ denotes the history of firing up to time $t$, and $\zeta(t)$ captures the excess joint firing relative to independence ($\zeta(t) = 1$ corresponds to independent firing).

As we will illustrate in this paper, these static (stationary) models, which aggregate cross-neuronal spike-train interactions over time, might provide misleading results. Indeed, to address this issue, there already exist many dynamic methods for modeling brain functional and effective connectivity (Friston et al. 1997; Cribben et al. 2013; Ombao et al. 2005; Ombao and Van Bellegem 2008; Motta and Ombao 2012; Park et al. 2014; Lindquist et al. 2014). However, these approaches are primarily designed for continuous-valued signals such as functional magnetic resonance imaging (fMRI) and electroencephalogram (EEG) data. Also, some of these methods either assume that the change points are known (Friston et al. 1997) or that there is a finite number of unknown change points between which the model remains approximately static. In contrast, the model proposed in this paper allows for smoothly-varying dynamic spike train interactions among neurons by introducing a set of time-dependent indicators that can adapt to changes in the strength of interactions. Thus, our model not only identifies a subset of functionally connected neurons, but also provides temporally precise connectivity among those neurons. Furthermore, our analysis of simultaneously recorded neurons gives an in-depth cellular-level understanding of the neuronal circuitry that fMRI and EEG studies cannot provide.

We propose a semi-parametric Bayesian dynamic model to study the joint activity of neurons in a population. The proposed model captures cross-dependence among neurons while taking into account the marginal time-varying firing probabilities of each neuron. In that sense, our method can be considered a copula process (Wilson and Ghahramani 2012) and a generalization of the copula method of Swihart et al. (2010). Further, our approach extends the state-of-the-art methods in modeling spike trains by allowing the interaction among neurons to evolve over time; that is, our method does not assume a static (stationary) connectivity among neurons.

In this paper, we evaluate the performance of our proposed model via simulation studies and apply it to neuronal spike train recordings collected during a basic decision-making experiment. The goal of the experiment was to investigate the role of the prefrontal cortex in rats with respect to reward-seeking behaviors and the inhibition of reward-seeking in the absence of a rewarded outcome. The activity of 6 prefrontal neurons was recorded simultaneously. During the experiment, rats chose to either press or withhold presses to the presented levers; a sucrose reward was delivered when rats pressed one designated lever and no reward when they pressed the other. See Figure 1 for the spike train data of a neuron recorded under various conditions.

The remainder of the paper is organized as follows. In Section 2, we present the proposed dynamic Bayesian model. To set the stage, we begin by describing a stationary version of our model for multiple spike trains and evaluate its performance using simulation studies and then generalize our method to detect time-varying synchronies. In Section 3, we analyze the neuronal spike train data described above and discuss our results and their potential impact on further studies on decision making and related cognitive functions. We conclude the paper with future directions in Section 4.

As discussed above, our goal is to develop a model that captures the time-varying cross-neuronal spike train interactions and to identify differences in the nature of these interactions during the rewarded vs. non-rewarded conditions. To this end, we propose a non-stationary Bayesian copula process model. The details of our model are presented in Section 2.2. Before we describe our dynamic model, however, we first describe a simpler stationary version of our model.

To model the cross-dependence among multiple spike trains, we first discretize time into small bins. Here, we set the intervals to 100 ms. It is possible to use smaller intervals (e.g., 5 ms) for the stationary model discussed in this section; however, our proposed non-stationary model requires longer intervals to be effective and computationally efficient. The spike trains, represented by point processes, can then be transformed into binary time series consisting of 1s and 0s, where 1 indicates the presence and 0 the absence of spikes in each time bin. Let $\mathbf{Y}_i = (Y_{i1}, \dots, Y_{iT})$ denote the resulting binary time series for neuron $i$, where $T$ is the number of time bins.
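As a concrete illustration, the binning step can be sketched as follows; this is a minimal sketch, and `bin_spike_train` and its argument names are our own, not from the paper's supplementary code:

```python
import numpy as np

def bin_spike_train(spike_times, t_max, bin_width=0.1):
    """Map a point-process spike train to a binary series: 1 if the
    100 ms bin (by default) contains at least one spike, else 0."""
    n_bins = int(np.ceil(t_max / bin_width))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_max))
    return (counts > 0).astype(int)

# Spikes at 0.05 s, 0.07 s, and 0.31 s over a 0.5 s trial:
y = bin_spike_train([0.05, 0.07, 0.31], t_max=0.5)
# y = [1, 0, 0, 1, 0] -- two spikes fall in the first bin, one in the fourth
```

Note that binning to 0/1 discards within-bin spike counts, which is one reason the choice of bin width matters.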

For a given time bin $t$, we model the joint firing probability of multiple spike trains at time $t$, $\mathrm{P}_r(Y_{1t} = y_{1t}, \dots, Y_{nt} = y_{nt})$, which must satisfy the constraint

$$\sum _{({y}_{1t},\dots ,{y}_{nt})\in {\{0,1\}}^{n}}{\mathrm{P}}_{r}({Y}_{1t}={y}_{1t},\dots ,{Y}_{nt}={y}_{nt})=1,\quad \text{for each}\ t=1,2,\dots ,T.$$

To preserve the above constraint, we use a continuous latent variable $u_{it}$ and a threshold $\tau_{it}$ for each neuron $i$ at time $t$, and set

$$Y_{it} = \mathbb{1}_{(-\infty, \tau_{it}]}(u_{it}) = \begin{cases} 1, & \text{if } u_{it} \le \tau_{it} \\ 0, & \text{otherwise.} \end{cases}$$

Here 1/0 indicates the presence/absence of spikes within that time bin.
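A small simulation illustrates how thresholding correlated latent Gaussian variables induces correlated binary spike trains; the value of the latent correlation and the 30% marginal firing probability below are illustrative assumptions, not figures from the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

rho = 0.6                                   # assumed latent correlation
Sigma = np.array([[1.0, rho], [rho, 1.0]])
tau = norm.ppf(0.3) * np.ones(2)            # thresholds giving Pr(spike) = 0.3

u = rng.multivariate_normal(np.zeros(2), Sigma, size=10_000)
Y = (u <= tau).astype(int)                  # Y_it = 1 iff u_it <= tau_it

marginals = Y.mean(axis=0)                  # each close to 0.3
emp_corr = np.corrcoef(Y.T)[0, 1]           # positive, inherited from rho
```

The empirical spike-train correlation is weaker than the latent `rho`, which is exactly the attenuation the derivation below quantifies.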

It is worth mentioning that in our model $\tau_{it}$ represents an aggregate of those external conditions that lead to neuronal spiking. As such, $\tau_{it}$ is allowed to vary over time as these conditions change.

For any time $t$, we assume that $u_{\{i\}t}$ follows a multivariate Gaussian distribution with mean zero and covariance matrix $\Sigma$,

$$u_{\{i\}t} \sim N(0, \Sigma).$$

We use the notation $u_{\{i\}t}$ to indicate that, unlike the scalar $u_{it}$, it denotes the vector of latent variables $(u_{1t}, \dots, u_{nt})$ for all $n$ neurons at time bin $t$.

For the moment, we assume that $\Sigma$ does not vary over time, so $u_{\{i\}t}$ is a stationary copula process; $\Sigma$ will be allowed to evolve with time in the dynamic setting. Moreover, the support of the joint distribution of the latent variables is the Cartesian product of real lines, which can be partitioned into $2^n$ regions, one for each possible joint firing pattern $(y_{1t}, \dots, y_{nt})$.

In our method, we use the correlation parameter $\rho_{ij}$ to capture the relationship between neurons $i$ and $j$. The induced correlation between the spike trains is

$$\begin{array}{lll}\phantom{\rule{0.16667em}{0ex}}\hfill & \phantom{\rule{0.16667em}{0ex}}\hfill & \text{corr}[{Y}_{it},{Y}_{jt}]=\frac{\mathrm{E}[{Y}_{it}{Y}_{jt}]-\mathrm{E}[{Y}_{it}]\mathrm{E}[{Y}_{jt}]}{\sqrt{\text{Var}[{Y}_{it}]\text{Var}[{Y}_{jt}]}}\hfill \\ \hfill \mathrm{E}[{Y}_{it}]& =\hfill & {\mathrm{P}}_{r}({u}_{it}\le {\tau}_{it})={\mathrm{P}}_{r}(\frac{{u}_{it}}{\sqrt{{\sigma}_{ii}}}\le \frac{{\tau}_{it}}{\sqrt{{\sigma}_{ii}}})=\mathrm{\Phi}(\frac{{\tau}_{it}}{\sqrt{{\sigma}_{ii}}})\hfill \\ \hfill \text{Var}[{Y}_{it}]& =\hfill & \mathrm{\Phi}(\frac{{\tau}_{it}}{\sqrt{{\sigma}_{ii}}})(1-\mathrm{\Phi}(\frac{{\tau}_{it}}{\sqrt{{\sigma}_{ii}}}))\hfill \\ \hfill \mathrm{E}[{Y}_{it}{Y}_{jt}]& =\hfill & {\mathrm{P}}_{r}({u}_{it}\le {\tau}_{it},{u}_{jt}\le {\tau}_{jt})\hfill \\ \phantom{\rule{0.16667em}{0ex}}\hfill & =\hfill & {\int}_{-\infty}^{{\tau}_{it}}{\int}_{-\infty}^{{\tau}_{jt}}\frac{{e}^{-{\scriptstyle \frac{1}{2(1-{\rho}_{ij}^{2})}}({\scriptstyle \frac{{u}_{it}^{2}}{{\sigma}_{ii}}}-2{\scriptstyle \frac{{\rho}_{ij}{u}_{it}{u}_{jt}}{\sqrt{{\sigma}_{ii}{\sigma}_{jj}}}}+{\scriptstyle \frac{{u}_{jt}^{2}}{{\sigma}_{jj}}})}}{2\pi \sqrt{(1-{\rho}_{ij}^{2}){\sigma}_{ii}{\sigma}_{jj}}}{du}_{it}{du}_{jt}\hfill \\ \phantom{\rule{0.16667em}{0ex}}\hfill & \equiv \hfill & F(\frac{{\tau}_{it}}{\sqrt{{\sigma}_{ii}}},\frac{{\tau}_{jt}}{\sqrt{{\sigma}_{jj}}},{\rho}_{ij}).\hfill \end{array}$$

Here Φ is the cumulative distribution function of the standard normal distribution. Nelsen (2006) showed that, given ${\tau}_{it}/\sqrt{{\sigma}_{ii}}$ and ${\tau}_{jt}/\sqrt{{\sigma}_{jj}}$, the cumulative distribution function $F({\tau}_{it}/\sqrt{{\sigma}_{ii}}, {\tau}_{jt}/\sqrt{{\sigma}_{jj}}, {\rho}_{ij})$ of the bivariate normal distribution provides a concordance ordering with respect to $\rho_{ij}$,

$$F(.,.,-1)\le F(.,.,\rho )\le F(.,.,{\rho}^{\prime})\le F(.,.,1)$$

where $-1 \le \rho \le \rho' \le 1$. Hence, $\text{corr}[Y_{it}, Y_{jt}]$ is an increasing function of $\rho_{ij}$; in particular, it is zero when $\rho_{ij} = 0$.

The above derivation posits that the spike trains preserve the dependence structure of the latent variables. In other words, if the latent variables are positively correlated (or negatively correlated, or independent), then so are the spike trains. Further, stronger (or weaker) correlation between the latent variables corresponds to stronger (or weaker) correlation between the spike trains. This allows us to make inference about the cross-dependence among spike trains in terms of the correlation parameters $\rho_{ij}$ of the latent variables. Finally, it is worth emphasizing that the perceived unidentifiability of the threshold parameter does not affect our inference, since the standardized thresholds, ${\tau}_{it}/\sqrt{{\sigma}_{ii}}$, and the correlation parameters (which are used for inference) remain identifiable.
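The mapping from latent correlation to spike-train correlation can be evaluated numerically. The sketch below assumes unit variances ($\sigma_{ii} = \sigma_{jj} = 1$) and computes $F$ with SciPy's bivariate normal CDF; `spike_corr` is a hypothetical helper, not from the paper:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def spike_corr(tau_i, tau_j, rho):
    """corr[Y_it, Y_jt] implied by latent correlation rho and
    standardized thresholds tau_i, tau_j."""
    p_i, p_j = norm.cdf(tau_i), norm.cdf(tau_j)          # E[Y_it], E[Y_jt]
    joint = multivariate_normal.cdf([tau_i, tau_j],      # F(tau_i, tau_j, rho)
                                    mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]])
    return (joint - p_i * p_j) / np.sqrt(p_i*(1-p_i)*p_j*(1-p_j))

# Consistent with the concordance ordering: increasing in rho, zero at rho = 0
print(spike_corr(0.0, 0.0, 0.0))                              # ~0
print(spike_corr(0.0, 0.0, 0.5) > spike_corr(0.0, 0.0, 0.2))  # True
```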

Note that the joint probabilities of the spike trains depend on both the latent variables, $u_{\{i\}t}$, and the thresholds, $\tau_{it}$.

A Gaussian process (GP) on the real line is a random real-valued function $x(t)$, with moments determined by its mean function $\bar{x}(s) = \mathrm{E}[x(s)]$ and kernel $\kappa(s,t) = \mathrm{Cov}(x(s), x(t))$. More precisely, all finite-dimensional distributions $(x(t_1), \dots, x(t_m))$ are multivariate Gaussian with mean $(\bar{x}(t_1), \dots, \bar{x}(t_m))$ and covariance matrix with entries $\kappa(t_i, t_j)$. Here, we use the squared-exponential (SE) kernel,

$$\begin{array}{l}C{\mid}_{s,t}=\text{Cov}[x(s),x(t)]\\ ={\lambda}^{2}+{\theta}^{2}exp[-{\gamma}^{2}{(t-s)}^{2}]+{\delta}_{st}{\nu}^{2}\end{array}$$

(1)

Here, *λ*, *θ*, *γ* and *ν* are hyperparameters with their own hyperpriors. In general, the choice of kernel encodes our *qualitative* beliefs about the underlying signal. For instance, samples from a GP with the Ornstein-Uhlenbeck (OU) kernel are almost surely non-differentiable functions *x*(*t*), whereas the SE kernel generates only infinitely differentiable functions.
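A draw from a GP prior with the SE kernel of Eq. (1) can be sketched as follows; the hyperparameter values and the helper name `se_cov` are illustrative:

```python
import numpy as np

def se_cov(t, lam=0.0, theta=1.0, gamma=3.0, nu=1e-3):
    """Covariance matrix from the SE kernel of Eq. (1):
    lam^2 + theta^2 * exp(-gamma^2 (t-s)^2), plus nu^2 on the diagonal."""
    d = t[:, None] - t[None, :]
    return lam**2 + theta**2 * np.exp(-(gamma * d)**2) + nu**2 * np.eye(len(t))

t = np.linspace(0.0, 1.0, 50)
K = se_cov(t)
rng = np.random.default_rng(1)
x = rng.multivariate_normal(np.zeros(len(t)), K)  # one smooth sample path
```

The small diagonal term `nu**2` plays the role of the $\delta_{st}\nu^2$ jitter in Eq. (1) and keeps the matrix numerically positive definite.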

Applications of GP models have been explored extensively in spatial modeling (Sampson and Guttorp 1992; Schmidt and O’Hagan 2003; Cressie 1993; Stein 1999; Gelfand et al. 2005; Banerjee et al. 2008b; Duan et al. 2007; Quick et al. 2013). Here, we use GP to develop a flexible model for thresholds as a nonlinear function of time. To this end, for each neuron *i*, we assume the following Gaussian process prior:

$$\tau_i(t) \sim GP(0, C_i),$$

where $C_i$ has the SE kernel form (1). Note that a finite-dimensional realization $(\tau_i(t_1), \dots, \tau_i(t_T))$ follows a multivariate normal distribution whose covariance matrix is obtained by evaluating $C_i$ at the corresponding time points.

In general, as the number of time bins, $T$, increases, computing the likelihood of Gaussian process models becomes time-consuming, as it involves the inversion of large covariance matrices with $\mathcal{O}(T^3)$ complexity. To mitigate this issue, one could use the Wiener process (a.k.a. Brownian motion) prior, whose covariance function has the simpler form

$${C}_{i}{\mid}_{s,t}\phantom{\rule{0.16667em}{0ex}}=\text{Cov}({\tau}_{i}(s),{\tau}_{i}(t))={\theta}_{i}min(s,t).$$

This way, we can reduce the computational complexity of the GP model from $\mathcal{O}(T^3)$ for the SE kernel (1) to $\mathcal{O}(T)$. The resulting process, however, is not stationary. Alternatively, one could use the Ornstein-Uhlenbeck (OU) process prior. The OU process, $\tau_i(t)$, is defined as the solution of the stochastic differential equation

$$d{\tau}_{i}(t)=-{\theta}_{i}({\tau}_{i}(t)-{\mu}_{i})dt+{\gamma}_{i}{dW}_{i}(t),$$

where $W_i(t)$ is a standard Wiener process, $\mu_i$ is the long-run mean of the process, and $\theta_i > 0$ and $\gamma_i > 0$ control the speed of mean reversion and the noise scale, respectively.
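The $\mathcal{O}(T)$ cost under the Wiener prior follows from its independent Gaussian increments. The sketch below (with our own function name and illustrative settings) checks the increment-based log-density against the dense $\mathcal{O}(T^3)$ evaluation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def wiener_logpdf(tau, t, theta=1.0):
    """O(T) log-density of a Wiener path tau at times t > 0, using
    independent increments: tau(t_k) - tau(t_{k-1}) ~ N(0, theta*(t_k - t_{k-1}))."""
    d_tau = np.diff(np.concatenate([[0.0], tau]))   # tau(0) = 0
    d_t = np.diff(np.concatenate([[0.0], t]))
    var = theta * d_t
    return np.sum(-0.5 * np.log(2 * np.pi * var) - 0.5 * d_tau**2 / var)

t = np.linspace(0.1, 1.0, 10)
C = np.minimum.outer(t, t)                                  # Cov = theta * min(s, t)
tau = multivariate_normal(np.zeros(10), C).rvs(random_state=0)
dense = multivariate_normal(np.zeros(10), C).logpdf(tau)    # O(T^3) evaluation
fast = wiener_logpdf(tau, t)                                # O(T), same value
```

The two evaluations agree because differencing a Wiener path is a unit-Jacobian change of variables into independent Gaussians.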

The left panel of Figure 2 shows a schematic representation of our proposed stationary model, which uses a set of latent variables and thresholds to model the joint probability of multiple spike trains. (See Section S.1.1 in Supplementary Materials for more details.) The latent variables are modeled by a stationary copula process, and the thresholds are modeled using a Gaussian process model with the SE kernel (alternatively, the OU kernel could be used for faster computation). The copula process model connects the marginal time-varying firing probabilities to the joint probabilities of multiple spike trains. The thresholds control the time-varying marginal firing probabilities, while the latent variables determine the cross-dependence among neurons. Overall, our model involves four types of parameters: 1) thresholds, $\tau_{it}$; 2) latent variables, $u_{\{i\}t}$; 3) the covariance matrix, $\Sigma$; and 4) the hyperparameters of the Gaussian process priors.

A schematic representation of our *stationary* model (a) and *dynamic* model (b). Here, *r* = 1,...,*R* is the index for trials; *t* = 1,...,*T* is the index for time; and *i* = 1,...,*n* is the index for neurons. Note that *τ*_{it} is a neuron-specific parameter.

In our numerical experiments, we used weakly informative priors for all hyperparameters and used Markov Chain Monte Carlo (MCMC) to simulate samples from posterior distributions. More details are provided in Section S.1.1 in Supplementary Materials.

To illustrate the stationary model, we use a simple numerical experiment. First, we generate the spike trains of four neurons, where the first pair of neurons are positively correlated, the second pair are negatively correlated, and the two pairs are independent of each other. To this end, we generate the time-varying marginal firing probabilities for each neuron and then calculate the joint probabilities of each pair by multiplying the product of the marginal firing probabilities by an extra term, *ζ*, so that *ζ* = 1 indicates independence, while *ζ* < 1 and *ζ* > 1 indicate negative and positive correlation between the two neurons, respectively. Note that we are using a data-generating mechanism that is different from our model in order to test the robustness of our approach.
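This data-generating mechanism can be sketched for a single pair of neurons; the marginal probabilities, sample size, and helper name below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_pair(p1, p2, zeta, size):
    """Draw binary spikes for two neurons with marginals p1, p2 and
    joint firing probability P(1,1) = zeta * p1 * p2."""
    p11 = zeta * p1 * p2
    # Cell probabilities for (y1, y2) = (0,0), (0,1), (1,0), (1,1)
    probs = [1 - p1 - p2 + p11, p2 - p11, p1 - p11, p11]
    assert min(probs) >= 0, "zeta incompatible with the marginals"
    draws = rng.choice(4, size=size, p=probs)
    return draws // 2, draws % 2

y1, y2 = sample_pair(0.3, 0.3, zeta=1.3, size=100_000)
# marginals stay near 0.3; zeta > 1 yields positively correlated spikes
```

Note the constraint in the sketch: for fixed marginals, only a limited range of *ζ* values yields a valid joint distribution.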

In this simulation study, we set *ζ* = 1.3 for the first pair of neurons and *ζ* = 0.7 for the second pair of neurons. Table 1 shows the posterior estimates of the correlation parameters *ρ _{ij}*’s with their corresponding 95% posterior intervals. Figure 3 shows the peri-stimulus time histogram (PSTH) of these neurons with their estimated number of spikes according to our model. The results presented in Table 1 show that our model correctly detects the dependence among the two pairs of neurons.

The Peri-Stimulus Time Histogram (PSTH) of 4 neurons with their estimated number of spikes. The red solid line is the estimated number of spikes, calculated by multiplying the estimated marginal firing probabilities by the number of trials.

As discussed above, our main objective is to develop a flexible non-stationary method that allows for interactions among neuronal spike trains to vary over time. To this end, we use a time-varying covariance matrix for the latent variables. The time-dependent covariance matrix implies a dynamic dependence structure among spike trains. Without imposing a structure, however, the covariance matrix would become too complex, especially when *n* is large. We impose such a structure by introducing a series of time-dependent binary indicators in the stationary copula process model discussed in the previous section,

$$u_{\{i\}t} \sim N(0, \Sigma_t^{\ast}), \quad \text{where } \Sigma_t^{\ast} = \begin{bmatrix} \sigma_{11} & \sigma_{12}I_1^t I_2^t & \cdots & \sigma_{1n}I_1^t I_n^t \\ \sigma_{12}I_2^t I_1^t & \sigma_{22} & \cdots & \sigma_{2n}I_2^t I_n^t \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{1n}I_n^t I_1^t & \sigma_{2n}I_n^t I_2^t & \cdots & \sigma_{nn} \end{bmatrix}$$

Here, ${I}_{i}^{t}=0$ indicates that the $i$-th neuron is not correlated with any other neuron at time $t$. These time-dependent indicators identify neurons that are involved in the network and impose a structure on the covariance matrix. We assume ${I}_{i}^{t} \sim \text{Bernoulli}({p}_{i}(t))$. To ensure $0 < p_i(t) < 1$, we model $p_i(t)$ through a transformed latent process $q_i(t)$, with

$$q_i(t) \sim GP(\eta, K_i).$$

Note that over a finite set of time points, $(q_i(t_1), \dots, q_i(t_T))$ follows a multivariate normal distribution.

By introducing time-dependent binary indicators into the copula process, our model is capable of identifying time-varying interactions among multiple neurons. Note that in this setting, the correlation between two neurons becomes ${\rho}_{ij}^{t}={\rho}_{ij}{I}_{i}^{t}{I}_{j}^{t}$: the two neurons are correlated at time $t$ only if both ${I}_{i}^{t}$ and ${I}_{j}^{t}$ are non-zero, and $\rho_{ij}$ determines the overall strength of the correlation.

The time-varying covariance matrix $\Sigma_t^{\ast}$ is positive definite as long as $\Sigma$ is positive definite.
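This property is easy to verify numerically; the 3×3 $\Sigma$, indicator pattern, and helper name below are illustrative:

```python
import numpy as np

def masked_cov(Sigma, I):
    """Sigma*_t: off-diagonal (i,j) entries survive only when both
    indicators I_i^t and I_j^t equal 1; the diagonal is always kept."""
    mask = np.outer(I, I).astype(float)
    np.fill_diagonal(mask, 1.0)
    return Sigma * mask

Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
S_t = masked_cov(Sigma, np.array([1, 0, 1]))  # neuron 2 leaves the network at t
# S_t decouples neuron 2 from the others yet remains positive definite
assert np.linalg.eigvalsh(S_t).min() > 0
```

Zeroing a neuron's off-diagonal entries makes $\Sigma_t^{\ast}$ block diagonal between that neuron and the rest, which is why positive definiteness is preserved.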

Many neurons might be involved in a complex process such as decision-making over time. The scientific community has begun to regard the involvement of neurons in the underlying network of complex processes as a dynamic phenomenon, changing with time (Buzsáki 2010). Cortical neurons, particularly neurons in the prefrontal cortex, have been characterized as exhibiting mixed selectivity, encoding multiple, high-dimensional representations, i.e., exhibiting changes in firing rate that scale with multiple parameters in experimental tasks (Rigotti et al. 2013). The most likely way for mixed-selectivity neurons to encode task variables at the level of the ensemble is to dynamically participate in different ensembles depending on the context and specific behaviors being performed, with single neurons potentially participating in multiple ensembles depending on the objectives of the experimental task (Barak et al. 2013; Durstewitz et al. 2010; Hyman et al. 2012; Lapish et al. 2008; Sakurai et al. 2013). In our model, $\Sigma$ captures the overall network: a collection of neurons involved in a dynamic process. The indicator variables, on the other hand, capture the dynamic membership of these neurons in the network.

As discussed by Bien and Tibshirani (2011), covariance estimation is one of the most fundamental problems in statistics. In general, this is a difficult problem since the number of parameters grows quadratically as the number of variables (here, neurons) increases. To address this, many researchers have followed Dempster (1972) and developed methods that reduce the effective number of parameters by imposing sparsity on the *inverse* covariance matrix (see, for example, Meinshausen and Bühlmann 2006; Yuan and Lin 2007; Banerjee et al. 2008a; Friedman et al. 2008). Setting an element of the inverse covariance matrix to zero creates *conditional* independence between the corresponding variables. The resulting model, known as a *Markov network*, can be represented as a graph with variables as nodes; an edge between two nodes represents conditional dependence between the corresponding variables.

Notably, many authors have proposed methods based on sparsity in the covariance matrix itself. In this case, zeros in the covariance matrix correspond to *marginal* independence between variables, and the resulting model is called a *covariance graph* (Bien and Tibshirani 2011; Drton and Richardson 2008; Rothman et al. 2009). While a Markov network is a suitable model for inferring graphical *structure*, a covariance graph is typically used for *model selection* (Bien and Tibshirani 2011). The latter is the more appropriate modeling choice in our case, since we intend to identify neurons that are involved in the decision-making process; that is, we intend to identify how neurons cluster together (a cluster consists of neurons that are directly or indirectly dependent) during a decision-making process. More specifically, because the imposed sparsity can change over time, our method can be regarded as a *dynamic model selection* approach.

As mentioned in the introduction, our approach is related to a number of existing methods, such as the GLM-based method of Kass et al. (2011) and Kelly and Kass (2012), the semi-parametric model of Shahbaba et al. (2014), and the copula-based models of Wilson and Ghahramani (2012) and Swihart et al. (2010). Our method is also related to the model proposed by Cunningham et al. (2007), who assume that the underlying non-negative firing rate of a spike train is a draw from a Gaussian process. However, unlike the method proposed in this paper, they assume that the observed spike train is a conditionally inhomogeneous gamma-interval process given the underlying firing rate.

Since Cox (1981), several state-space models for binary time series data have been developed that are closely related to our proposed model; see, for example, Fahrmeir (1992); Carlin and Polson (1992); Song (2000); Czado and Song (2008). A binary state-space model is characterized as follows. Let $\{Y_t\}$ be a binary time series with a corresponding sequence of probabilities $\{\mu_t\}$, where $\mu_t = \mathrm{P}_r(Y_t = 1)$, modeled as

$${\mu}_{t}=h({\mathbf{M}}_{t}^{\prime}{X}_{t})\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{where}\phantom{\rule{0.16667em}{0ex}}{X}_{t}={\mathbf{Q}}_{t}{X}_{t-1}+{\epsilon}_{t}$$

where $h^{-1}$ is a link function as specified for generalized linear models (GLM; see McCullagh and Nelder (1989)); $\mathbf{M}_t$ is a vector that contains time-varying covariates; $\mathbf{Q}_t$ is a transition matrix; and $\epsilon_t$ is a random disturbance term.

The state-space model above contains both deterministic and random components and has been developed for modeling longitudinal count data in Zeger (1988); Chan and Ledolter (1995); Fahrmeir and Lang (2001a,b). Our proposed model currently does not include a deterministic component in the probability sequence, but its framework is general and can readily accommodate one. The key distinction between our proposed model and general state-space models is that our latent equation does not follow the AR(1) structure of a state equation; instead, our state equation is defined broadly as a Gaussian process (GP) on the thresholds, which is what our current implementation uses.
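For concreteness, the state-space formulation above can be simulated directly; here we take $h$ to be the logistic function and use a scalar AR(1) state, with illustrative settings throughout:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_binary_ssm(T=200, q=0.95, sigma=0.3, m=1.0):
    """X_t = q*X_{t-1} + eps_t; mu_t = logistic(m*X_t); Y_t ~ Bernoulli(mu_t)."""
    X = np.zeros(T)
    for t in range(1, T):
        X[t] = q * X[t - 1] + rng.normal(0.0, sigma)
    mu = 1.0 / (1.0 + np.exp(-m * X))
    Y = (rng.random(T) < mu).astype(int)
    return X, mu, Y

X, mu, Y = simulate_binary_ssm()
```

In this AR(1) sketch, the persistence parameter `q` plays the role of $\mathbf{Q}_t$; replacing the AR(1) recursion with a GP prior on the latent path yields the kind of state equation our model uses.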

Finally, we also point out the connection between our model and the so-called threshold model of Albert and Chib (1993), where $Y_t = 1$ whenever a latent Gaussian variable exceeds a threshold, and $Y_t = 0$ otherwise.

In this section, we use an example to illustrate our non-stationary copula process method. Here, we generalize $\zeta_t$ from the previous example so that the dependencies can change over time. As before, we use weakly informative priors for all hyperparameters and use Markov Chain Monte Carlo (MCMC) to simulate samples from the posterior distributions. More details are provided in Section S.1.1 in Supplementary Materials.

Our example is designed to highlight the problems with stationary models (and thus emphasize the need for non-stationary approaches such as the one being proposed). More specifically, we demonstrate that our model can identify time-varying dependencies, while methods based on static models, such as the methods of Kass et al. (2011) and Shahbaba et al. (2014), provide an incomplete picture of neuronal activity and, moreover, could give misleading results. To compare our proposed approach to those of Kass et al. (2011) and Shahbaba et al. (2014), we first generate spike trains for a pair of neurons for which $\zeta_t$ follows the pattern shown in the left panel of Figure 4. Because it assumes that the correlation structure is stationary, the method of Kass et al. (2011) reports a p-value of 0.555 for the test of independence, failing to detect the dependence that is present over only part of the time interval.

To further illustrate the advantage of our method, we conducted two simulation studies comparing our dynamic model with the method of Kass et al. (2011), the state-of-the-art approach for studying cross-neuronal interactions. For the first simulation study, we compared the two methods in terms of their ability to correctly detect the correlation between two neurons. To this end, we simulated data according to the approach discussed above: the two neurons are independent for the first 80% of the time interval ($\zeta_t = 1$) and correlated during the remaining 20%, with the value of $\zeta_t$ over that final interval varied across five scenarios.

Evaluating our method’s performance based on ROC curves for detecting dependence between a pair of neurons with time-varying *ζ*_{t}. Our proposed dynamic model is shown as solid red lines; the method of Kass et al. (2011) is shown as dashed lines.

For the second simulation study, we compare the predictive power of the models. We simulate the datasets according to the five scenarios discussed above. This time, however, we randomly select two-thirds of the data to train both models (i.e., estimate parameters), and then test the models on the remaining one-third of the data by using the firing status of one neuron to estimate the firing probability of the second neuron. We derive the estimated firing probability, $\widehat{p}_{2k}$, of the second neuron at each time bin $k$ in the test set, and evaluate the models using the log predictive probability (LPP),

$$\text{LPP}=\sum _{k\in \text{test set}}{y}_{2k}\log({\widehat{p}}_{2k})+(1-{y}_{2k})\log(1-{\widehat{p}}_{2k}).$$
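The LPP above can be computed directly from the test-set firing indicators and estimated probabilities. The hedged Python sketch below (the authors' code is MATLAB) also shows a percentile-based 95% interval across replicates of the kind used to summarize repeated simulations; the synthetic replicate loop and the helper name `log_predictive_probability` are illustrative assumptions, not the paper's data or code.

```python
import numpy as np

def log_predictive_probability(y, p_hat, eps=1e-12):
    """LPP = sum_k y_k log(p_hat_k) + (1 - y_k) log(1 - p_hat_k)."""
    p = np.clip(p_hat, eps, 1.0 - eps)   # guard against log(0)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Synthetic stand-in for 100 simulation replicates of one scenario
rng = np.random.default_rng(1)
lpps = []
for _ in range(100):
    y = rng.integers(0, 2, size=50)      # observed firing status of neuron 2
    p_hat = np.clip(0.15 + 0.7 * y + rng.normal(0.0, 0.05, 50), 0.0, 1.0)
    lpps.append(log_predictive_probability(y, p_hat))
lo, hi = np.percentile(lpps, [2.5, 97.5])  # 95% interval across replicates
```

Since each term is a log-probability of a binary outcome, LPP is always non-positive, and values closer to zero indicate better out-of-sample prediction.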

The model with a larger LPP value is considered to have greater predictive power. As before, we repeat each scenario 100 times. Figure 6 shows the 95% intervals of LPP for each scenario using our proposed model (shown as red bars) and the method of Kass et al. (2011) (shown as green bars). Again, both models display increasing predictive power as the strength of correlation between the two neurons increases and, consistent with the previous results, our proposed dynamic model generally outperforms the method of Kass et al. (2011).

95% intervals of the log predictive probability (LPP) on the test sets using our dynamic model (in red) and the method of Kass et al. (2011) (in green) for different values of *ζ*_{t} over the last 20% of the time interval. Results are based on 100 simulated datasets per scenario.

The ultimate goal of this research is application to neurophysiological data collected during the performance of cognitive behaviors such as decision-making. Disrupted decision-making is a common feature across many psychiatric disorders including, but not limited to, ADHD, bipolar disorder, compulsive behavior disorders, mood and anxiety disorders, and schizophrenia (Sharp et al. 2012). Decision-making, therefore, is a potent behavioral endophenotype that can be used to study disruptions in the neural framework underlying mental disorders (Insel and Cuthbert 2009). To study decision-making in animal models, we conducted the experiment described below. We then give an overview of statistical approaches for analyzing neuronal spike trains.

In this experiment, we recorded the activity of multiple neurons in the prefrontal cortex of rats while they were presented with stimuli that either predicted or did not predict the availability of a reward. During the recording/test sessions, two different stimuli were presented individually and in pseudorandom order: tone 1 (10 kHz) and tone 2 (5 kHz). At the same time, one of two levers was presented: an active lever paired with tone 1 (rewarded stimulus, RS) or an inactive lever paired with tone 2 (non-rewarded stimulus, NS). Pressing the active lever resulted in the offset of tone 1, retraction of the lever, and illumination of the reward receptacle. If the rat then went to the reward receptacle, 0.1 ml of 15% sucrose solution was delivered as a reward. Pressing the inactive lever produced no effect. Tones were played for a maximum of 3 sec, and levers remained extended for a maximum of 10 sec. See Moorman and Aston-Jones (2015) for more details.

We recorded the activity of multiple single neurons simultaneously by sampling from multiple channels of an implanted 16-wire electrode array. We recorded both action potentials (i.e., spikes, the firing activity of single neurons) and local field potentials (averaged population activity). In this report, we focus on the action potential data, and we restrict our analysis to a recording session from one rat. We identified six neurons that exhibited consistent firing across the recording session (other neurons recorded in this session exhibited low or inconsistent firing). More details about the experiment are provided in Section S.3.1 (Supplementary Materials).

Throughout this section, we discretize time using 100 ms intervals. As a preliminary step, we applied the static model to the spike-train data of two neurons. Under the rewarded stimulus, the 95% credible interval for the correlation parameter was [0.035, 0.147], compared to [−0.039, 0.049] under the non-rewarded stimulus. This suggests some aggregated synchrony between the two neurons under the rewarded stimulus but no evidence of aggregated synchrony under the non-rewarded stimulus. Even this preliminary analysis points to an interesting result: the relationship between the activity of the two neurons differed depending on whether the stimulus predicted a rewarded or a non-rewarded outcome.
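The 100 ms discretization amounts to histogram binning of spike times. As a hedged Python sketch (the authors' code is MATLAB, and `bin_spike_train` is a hypothetical helper name; converting counts to a binary firing indicator is our assumption about the preprocessing):

```python
import numpy as np

def bin_spike_train(spike_times_s, trial_length_s=10.0, bin_width_s=0.1):
    """Discretize spike times (in seconds) into a binary vector with one
    entry per 100 ms bin: 1 if at least one spike falls in the bin."""
    n_bins = int(round(trial_length_s / bin_width_s))
    counts, _ = np.histogram(spike_times_s, bins=n_bins,
                             range=(0.0, trial_length_s))
    return (counts > 0).astype(int)

# Spikes at 30 ms, 50 ms, 470 ms, and 9.99 s fall in bins 0, 0, 4, and 99
y = bin_spike_train([0.03, 0.05, 0.47, 9.99])
```

For a 10-second trial this yields a length-100 binary vector per neuron, the form of data the static and dynamic models operate on.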

As noted, the static analysis above is limited because it does not portray a complete picture of the time-varying cross-dependence between neurons. As a next step, we applied the dynamic model to the same pair of neurons. Figure 7 shows the estimated correlation parameters based on our proposed dynamic model. The results suggest that, under the rewarded stimulus, the two neurons started to co-fire after 4 seconds. In contrast, under the unrewarded stimulus, the two neurons remained uncorrelated throughout the experiment. Thus, the strong cross-dependence between these two neurons was seen during the reward-receipt epoch of rewarded trials. This comparison allows us to say with some confidence that the onset of correlation in rewarded trials was not simply a by-product of stimulus presentation (as stimuli are presented in both contexts) but is instead related to the outcomes or the associated behaviors. These two neurons (Figure 7) are therefore likely members of a population encoding reward or a behavior related to reward-seeking.
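Posterior summaries of the kind plotted in Figure 7, a pointwise mean with a 95% credible band, take only a few lines once MCMC draws are available. In this hedged Python illustration the draws are synthetic stand-ins for real MCMC output, `summarize_posterior` is a hypothetical helper rather than the authors' function, and computing the band pointwise from percentiles is our assumption about how such figures are produced.

```python
import numpy as np

def summarize_posterior(draws):
    """draws: array of shape (n_mcmc_samples, n_timebins) holding MCMC
    samples of the time-varying correlation parameter. Returns the
    posterior mean and a pointwise 95% credible band."""
    mean = draws.mean(axis=0)
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    return mean, lo, hi

# Synthetic draws standing in for real MCMC output: the correlation
# "switches on" at bin 40, mimicking a late onset of co-firing
rng = np.random.default_rng(2)
truth = np.where(np.arange(100) < 40, 0.0, 0.5)
draws = truth + rng.normal(0.0, 0.05, size=(500, 100))
mean, lo, hi = summarize_posterior(draws)
```

An onset of co-firing then shows up as the credible band lifting away from zero partway through the trial, as in the rewarded condition of Figure 7.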

The estimated correlation parameters between the two neurons. The solid line is the posterior estimate and the gray area is the corresponding 95% credible interval.

There are a number of reasons why neuronal populations may exhibit synchrony. In the periphery, sensory stimuli can simultaneously activate neurons, producing sensory-evoked synchrony. In some cases, synchrony arises from correlated input, for example from neurons or neural populations that innervate multiple target populations and drive them at the same time. In the prefrontal cortex, where the neurons studied here were recorded, synchrony is thought to result from strong, correlated, patterned input from the hippocampus and possibly other brain areas (Benchenane et al. 2010). In other cases, notably in cortex, interactions between inhibitory GABA interneurons and pyramidal neurons are responsible for generating synchrony across populations (Fujisawa et al. 2008). Correlated activity can also be seen among populations of neurons when activity increases and is maintained, as has been observed, for example, in synchronous activity in the prefrontal cortex underlying working memory (Sakurai et al. 2013). Results for model checking and diagnostics are provided in Section S.3.2 (Supplementary Materials).

Next, we examined the cross-dependence among all six neurons in the data. Section S.3.3 (Supplementary Materials) provides plots of the time-varying correlation parameters among these neurons under the different scenarios. For simplicity, in Figures 8 and 9, we divided the time series into two intervals: Period 1 covers the first 5 seconds following stimulus onset (i.e., [0,5] seconds), while Period 2 covers the last 5 seconds after stimulus onset (i.e., (5,10] seconds). Within Period 1 of the rewarded scenario, neurons 2, 5, and 6 seemed to be weakly correlated, while the others were not implicated in the network at all. During Period 2, neurons 1, 2, 3, and 4 became strongly correlated (Figure 8). Under the non-rewarded stimulus, neurons 1, 3, and 4 were correlated throughout both epochs. Towards the end of Period 2 (approximately 7–10 seconds), neurons 5 and 6 joined the network with moderate correlations (Figure 9). The results suggest that neurons 1, 3, and 4 are involved in the network under both scenarios, whereas neuron 2 plays a differential role: it is always involved in the correlation network under the rewarded stimulus but remains isolated from the network under the non-rewarded stimulus. In addition, the roles of neurons 5 and 6 vary with the stimulus: they join the network early following presentation of the rewarded stimulus but only toward the end of the session following presentation of the non-rewarded stimulus.
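The period-level networks of Figures 8 and 9 can be viewed as averages of the time-varying correlations over each period, with weak edges suppressed. The Python sketch below illustrates that reduction on a toy input; the threshold value, the `period_network` helper name, and the use of a simple absolute-value cutoff are our illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def period_network(corr_t, period_slice, threshold=0.1):
    """Average a time-varying correlation array corr_t, of shape
    (T, n_neurons, n_neurons), over the bins in period_slice, then keep
    the edges whose absolute average correlation exceeds threshold
    (self-loops are excluded)."""
    avg = corr_t[period_slice].mean(axis=0)
    off_diag = ~np.eye(avg.shape[0], dtype=bool)
    adj = (np.abs(avg) > threshold) & off_diag
    return avg, adj

# Toy input: 6 neurons, 100 bins; neurons 0 and 1 co-fire in Period 2 only
T, n = 100, 6
corr_t = np.zeros((T, n, n))
corr_t[:, range(n), range(n)] = 1.0                 # unit self-correlation
corr_t[50:, 0, 1] = corr_t[50:, 1, 0] = 0.4
avg2, adj2 = period_network(corr_t, slice(50, 100))
```

The averaged matrix can then be drawn as a graph whose edge shading reflects the averaged correlation strength, as in the figures.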

Graphical representation of correlation network under rewarded stimulus. The nodes are the neurons and the edges indicate the connection between neurons. The grayscale indicates the strength of connection.

Graphical representation of correlation network under non-rewarded stimulus. The nodes are the neurons and the edges indicate the connection between neurons. The grayscale indicates the strength of connection.

The results described here reveal new information about network encoding of different aspects of the behavioral task, providing two powerful and intersecting insights into population coding of behavior. First, our model reliably measures correlation among groups of neurons over prolonged periods of time. This is an important advance in understanding how events and behaviors can influence synchronous activity in neural populations. Intriguingly, neurons 2 and 5 exhibit synchronous activity in what appears to be the time between the lever-press and reward acquisition. Thus, by differentially synchronizing different populations of neurons over time, we see a continuum of population representation of the reward-seeking task. Importantly, the degree of synchronization differs in the non-rewarded condition. In this case, neurons 1, 3, and 4 exhibit synchronous activity rapidly after the onset of the non-reward-predicting tone. This synchronization may serve as a response-inhibition signal that allows the animal to withhold responding to the non-rewarded lever.

Brain regions such as the ventral medial prefrontal cortex have frequently been characterized as playing a role in response inhibition: inactivation of these areas increases behavior, and stimulation suppresses it (Peters et al. 2009). The neural signaling underlying this type of executive control has remained elusive, although some neurons have been shown to be activated during behavioral suppression (Milad and Quirk 2002). However, it is unlikely that the small number of neurons showing increased activation in these circumstances represents the mechanism by which populations of neurons encode behavior. Rather, population signaling, reflected in correlated firing, is likely to underlie neural coding of these relatively sophisticated behaviors.

We have developed a Bayesian dynamic model for analyzing cross-dependence between neurons in a population. The proposed model has a number of advantages: (1) it captures the time-varying dependencies between neurons and thus is applicable to spike-train data recorded while a subject performs a complex cognitive task; (2) it is useful for testing differences in cross-neuronal correlations between different experimental conditions; (3) it can be applied to a moderate to large number of neurons, with computational complexity increasing only with the number of interacting neurons; (4) it can be applied to a broad spectrum of data, including continuous-valued time series with latent structure; and (5) our analysis using the proposed model yielded results that reveal new insight into the dynamic nature of population coding in the prefrontal cortex during decision making.

Our model can, of course, be generalized in several ways. Although it is already a step toward dynamic modeling of neural networks, our current model assumes that if two neurons are related, the sign of the relationship (positive or negative) remains constant over time while its strength may vary. The model can, however, be extended to situations where the direction of the relationship changes over time, for example by allowing the indicator variables to take values between −1 and 1. This was not necessary here because we observed no evidence of such behavior in our data.

The model produced results that led us to pose new hypotheses about the prefrontal control of executive function (response execution/inhibition). The overarching hypothesis arising from these data is that complex behaviors such as decision-making come about through ensemble coding of discrete behavioral components (stimulus representation, response execution, reward consumption, etc.). These behaviors are encoded by temporally discrete synchronization of specific populations of neurons, although these populations share members. A number of key experiments must be performed to verify this hypothesis and to refine our understanding of encoding along these lines. It is critical to isolate specific subcomponents of behavior that may be encoded. For example, studying neural encoding specifically during response execution or specifically during response inhibition (as opposed to in more complex tasks) will allow a precise association between synchronous populations and behavior. In addition, we need to test whether these ensembles reliably form, with the same member neurons, each time a behavior is performed or whether they exhibit subtle differences on a trial-to-trial basis.

The conclusions from this analysis are preliminary and will need to be examined in further experiments on decision-making in our laboratory and in different contexts. Nevertheless, the results of the analysis of even this limited data set are highly intriguing and already begin to reveal new insight into the dynamic nature of population coding in the prefrontal cortex during basic cognitive tasks. In particular, using these models to characterize how populations of neurons synchronize at precise times across behavior will let investigators focus precisely on the relationship between population coding and discrete components of behavior. For example, it is overly general (i.e., lacking in temporal precision) to state that neural populations exhibit synchronous activity during reward-seeking. Instead, we can specify that particular neural populations are correlated during specific subcomponents of these behaviors (stimulus presentation, lever approach, reward-well entry, reward consumption, etc.). As demonstrated in our analysis, by narrowing the window of focus with respect to population encoding, we can achieve a previously unconsidered combination of spatial breadth and temporal precision. Typically, the focus on neural coding of discrete behavioral intervals is limited to analysis of single-neuron activity.

Our proposed method can be used in current behavioral neuroscience studies, which typically involve tens of simultaneously recorded neurons. (See the review by Buzsáki et al. (2015) on the current state of simultaneous ensemble recording.) For such studies, investigators need adequate analytical tools for understanding the temporal relationships among a relatively large number of neurons, and our method provides a pragmatic approach to that need. However, driven by advances in technology, the number of neurons that can be recorded simultaneously is increasing, and with it the computational demands of neuroscience studies. To address these challenges and keep our method practical, we must continue working on its scalability and computational efficiency, as truly large-scale recordings (e.g., thousands of neurons) will soon become commonplace in neuroscience.


We would like to thank the Associate Editor and anonymous referees for their insightful suggestions and constructive comments that helped us to improve our paper. Babak Shahbaba was supported by the NIH grant R01 AI107034. David Moorman was supported by the NIH grant R21-DA032005. Hernando Ombao would like to acknowledge support from NSF Division of Mathematical Sciences and NSF Division of Social and Economic Sciences.

Supplementary materials are provided in four separate files contained in a single archive:

- **Computational details:** This file includes the details of our computational methods along with the Markov chain Monte Carlo (MCMC) algorithms we used for Bayesian inference.
- **Negative correlation:** This file includes an additional illustrative example showing that our method can easily detect negatively correlated neurons.
- **Experimental details:** This file provides the details of the experimental data along with our results for model checking and diagnostics.
- **Computer codes:** This zipped file includes all the MATLAB codes needed to implement our method and reproduce our results. It also contains the experimental data discussed in our paper.

- Albert JH, Chib S. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association. 1993;88:669–679.
- Banerjee O, Ghaoui LE, d’Aspremont A. Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. Journal of Machine Learning Research. 2008a;9:485–516.
- Banerjee S, Gelfand AE, Finley AO, Sang H. Gaussian predictive process models for large spatial data sets. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2008b;70:825–848.
- Barak O, Rigotti M, Fusi S. The Sparseness of Mixed Selectivity Neurons Controls the Generalization Discrimination Trade-Off. The Journal of Neuroscience. 2013;33:3844–3856.
- Barbieri R, Quirk MC, Frank LM, Wilson MA, Brown EN. Construction and analysis of non-Poisson stimulus-response models of neural spiking activity. Journal of Neuroscience Methods. 2001;105:25–37.
- Benchenane K, Peyrache A, Khamassi M, Tierney PL, Gioanni Y, Battaglia FP, Wiener SI. Coherent Theta Oscillations and Reorganization of Spike Timing in the Hippocampal-Prefrontal Network upon Learning. Neuron. 2010;66:921–936.
- Berkes P, Wood F, Pillow J. Characterizing neural dependencies with copula models. In: Koller D, Schuurmans D, Bengio Y, Bottou L, editors. Advances in Neural Information Processing Systems. Vol. 21. 2009. pp. 129–136.
- Bien J, Tibshirani RJ. Sparse estimation of a covariance matrix. Biometrika. 2011;98:807–820.
- Brillinger DR. Maximum likelihood analysis of spike trains of interacting nerve cells. Biological Cybernetics. 1988;59:189–200.
- Brown EN, Kass RE, Mitra PP. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nature Neuroscience. 2004;7:456–461.
- Buzsáki G. Large-scale recording of neuronal ensembles. Nature Neuroscience. 2004;7:446–451.
- Buzsáki G. Neural Syntax: Cell Assemblies, Synapsembles, and Readers. Neuron. 2010;68:362–385.
- Buzsáki G, Stark E, Berényi A, Khodagholy D, Kipke DR, Yoon E, Wise KD. Tools for Probing Local Circuits: High-Density Silicon Probes Combined with Optogenetics. Neuron. 2015;86:92–105.
- Carlin B, Polson NG. Monte Carlo Bayesian Methods for Discrete Regression Models and Categorical Time Series. Bayesian Statistics. 1992;4:577–586.
- Chan KS, Ledolter J. Monte Carlo EM Estimation for Time Series Models Involving Counts. Journal of the American Statistical Association. 1995;90:242–252.
- Cox DR. Statistical analysis of time series, recent developments. Scandinavian Journal of Statistics. 1981;8:93–115.
- Cressie N. Statistics for Spatial Data. New York: Wiley; 1993.
- Cribben I, Wager T, Lindquist M. Detecting functional connectivity change points for single-subject fMRI data. Frontiers in Computational Neuroscience. 2013;7.
- Cunningham JP, Yu BM, Shenoy KV, Sahani M. Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes. In: Platt JC, Koller D, Singer Y, Roweis ST, editors. Advances in Neural Information Processing Systems; 2007.
- Czado C, Song P. State space mixed models for longitudinal observations with binary and binomial responses. Statistical Papers. 2008;49:691–714.
- Dempster AP. Covariance selection. Biometrics. 1972;28:157–175.
- Diekman CO, Sastry PS, Unnikrishnan KP. Statistical significance of sequential firing patterns in multi-neuronal spike trains. Journal of Neuroscience Methods. 2009;182:279–284.
- Drton M, Richardson TS. Graphical Methods for Efficient Likelihood Inference in Gaussian Covariance Models. Journal of Machine Learning Research. 2008;9:893–914.
- Duan JA, Guindani M, Gelfand AE. Generalized Spatial Dirichlet Process Models. Biometrika. 2007:809–825.
- Durstewitz D, Vittoz NM, Floresco SB, Seamans JK. Abrupt Transitions between Prefrontal Neural Ensemble States Accompany Behavioral Transitions during Rule Learning. Neuron. 2010;66:438–448.
- Fahrmeir L. Posterior mode estimation by extended Kalman filtering for multivariate dynamic generalized linear models. Journal of the American Statistical Association. 1992:501–509.
- Fahrmeir L, Lang S. Bayesian Inference for Generalized Additive Mixed Models Based on Markov Random Field Priors. Applied Statistics. 2001a;50:201–220.
- Fahrmeir L, Lang S. Bayesian Semiparametric Regression Analysis of Multicategorical Time-Space Data. Annals of the Institute of Statistical Mathematics. 2001b;53:11–30.
- Friedman J, Hastie T, Tibshirani R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics. 2008;9:432–441.
- Friston KJ, Buechel C, Fink GR, Morris J, Rolls E, Dolan RJ. Psychophysiological and Modulatory Interactions in Neuroimaging. NeuroImage. 1997;6:218–229.
- Fujisawa S, Amarasingham A, Harrison MTT, Buzsáki G. Behavior-dependent short-term assembly dynamics in the medial prefrontal cortex. Nature Neuroscience. 2008;11:823–833.
- Gelfand AE, Kottas A, MacEachern SN. Bayesian Nonparametric Spatial Modeling with Dirichlet Process Mixing. Journal of the American Statistical Association. 2005;100:1021–1035.
- Gerstein GL, Perkel DH. Simultaneously Recorded Trains of Action Potentials: Analysis and Functional Interpretation. Science. 1969;164:828–830.
- Gold JI, Shadlen MN. The neural basis of decision making. Annual Review of Neuroscience. 2007;30.
- Grün S, Diesmann M, Aertsen A. Unitary events in multiple single-neuron spiking activity: I. Detection and significance. Neural Computation. 2002;14:43–80.
- Harrison MT, Amarasingham A, Kass RE. Statistical identification of synchronous spiking. In: Lorenzo PD, Victor J, editors. Spike Timing: Mechanisms and Function. Taylor and Francis; 2013.
- Hyman JM, Ma L, Balaguer-Ballester E, Durstewitz D, Seamans JK. Contextual encoding by ensembles of medial prefrontal cortex neurons. Proceedings of the National Academy of Sciences. 2012;109:5086–5091.
- Insel T, Cuthbert B. Endophenotypes: bridging genomic complexity and disorder heterogeneity. Biological Psychiatry. 2009;66:988–989.
- Jacobs AL, Fridman G, Douglas RM, Alam NM, Latham PE, Prusky GT, Nirenberg S. Ruling out and ruling in neural codes. Proceedings of the National Academy of Sciences of the United States of America. 2009;106:5936–5941.
- Kass RE, Kelly RC, Loh W-L. Assessment of synchrony in multiple neural spike trains using loglinear point process models. Annals of Applied Statistics. 2011;5.
- Kass RE, Ventura V. A spike-train probability model. Neural Computation. 2001;13:1713–1720.
- Kass RE, Ventura V, Brown EN. Statistical issues in the analysis of neuronal data. Journal of Neurophysiology. 2005;94:8–25.
- Kelly RC, Kass RE. A Framework for Evaluating Pairwise and Multiway Synchrony Among Stimulus-Driven Neurons. Neural Computation. 2012:1–26.
- Kottas A, Behseta S, Moorman DE, Poynor V, Olson CR. Bayesian nonparametric analysis of neuronal intensity rates. Journal of Neuroscience Methods. 2012;203.
- Lapish CC, Durstewitz D, Chandler JL, Seamans JK. Successful choice behavior is associated with distinct and coherent network states in anterior cingulate cortex. Proceedings of the National Academy of Sciences. 2008;105:11963–11968.
- Lindquist M, Xu Y, Nebel M, Caffo B. Evaluating Dynamic Bivariate Correlations in Resting-state fMRI: A comparison study and a new approach. NeuroImage. 2014, in press.
- McCullagh P, Nelder JA. Generalized Linear Models. 2nd ed. London: Chapman and Hall; 1989.
- Meinshausen N, Bühlmann P. High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics. 2006;34:1436–1462.
- Milad MR, Quirk GJ. Neurons in medial prefrontal cortex signal memory for fear extinction. Nature. 2002;420:70–74.
- Moorman DE, Aston-Jones G. Prefrontal neurons encode context-based response execution and inhibition in reward seeking and extinction. Proceedings of the National Academy of Sciences. 2015;112:9472–9477.
- Motta G, Ombao H. Evolutionary Factor Analysis of Replicated Time Series. Biometrics. 2012;68:825–836.
- Narayanan NS, Laubach M. Methods for studying functional interactions among neuronal populations. Methods in Molecular Biology. 2009;489:135–165.
- Neal RM. Regression and classification using Gaussian process priors. Bayesian Statistics. 1998;6:471–501.
- Ombao H, Van Bellegem S. Coherence Analysis: A Linear Filtering Point Of View. IEEE Transactions on Signal Processing. 2008;56:2259–2266.
- Ombao H, von Sachs R, Guo W. SLEX Analysis of Multivariate Non-Stationary Time Series. Journal of the American Statistical Association. 2005;100:519–531.
- Park T, Eckley I, Ombao H. Estimating the time-evolving partial coherence between signals via multivariate locally stationary wavelet processes. IEEE Transactions on Signal Processing. 2014, accepted.
- Patnaik D, Sastry P, Unnikrishnan K. Inferring Neuronal Network Connectivity from Spike Data: A Temporal Data Mining Approach. Scientific Programming. 2008;16:49–77.
- Peters J, Kalivas PW, Quirk GJ. Extinction circuits for fear and addiction overlap in prefrontal cortex. Learning & Memory. 2009;16:279–288.
- Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, Simoncelli EP. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008;454:995–999.
- Quick H, Banerjee S, Carlin BP. Modeling temporal gradients in regionally aggregated California asthma hospitalization data. Annals of Applied Statistics. 2013;7:154–176.
- Rasmussen CE, Williams CKI. Gaussian Processes for Machine Learning. MIT Press; 2006.
- Reich DS, Victor JD, Knight BW. The Power Ratio and the Interval Map: Spiking Models and Extracellular Recordings. Journal of Neuroscience. 1998;18:10090–10104.
- Rigat F, de Gunst M, van Pelt J. Bayesian Modeling and Analysis of Spatio-Temporal Neuronal Networks. Bayesian Analysis. 2006:733–764.
- Rigotti M, Barak O, Warden MR, Wang X-J, Daw ND, Miller EK, Fusi S. The importance of mixed selectivity in complex cognitive tasks. Nature. 2013:585–590, advance online publication.
- Rothman AJ, Levina E, Zhu J. Generalized Thresholding of Large Covariance Matrices. Journal of the American Statistical Association. 2009;104:177–186.
- Sakurai Y, Nakazono T, Ishino S, Terada S, Yamaguchi K, Takahashi S. Diverse synchrony of firing reflects diverse cell-assembly coding in the prefrontal cortex. Journal of Physiology-Paris. 2013;107:459–470.
- Sampson PD, Guttorp P. Nonparametric Estimation of Nonstationary Spatial Covariance Structure. Journal of the American Statistical Association. 1992;87:108–119.
- Sastry PS, Unnikrishnan KP. Conditional probability-based significance tests for sequential patterns in multineuronal spike trains. Neural Computation. 2010;22:1025–1059.
- Schmidt AM, O’Hagan A. Bayesian inference for non-stationary spatial covariance structure via spatial deformations. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2003;65:743–758.
- Shadlen MN, Kiani R. Decision making as a window on cognition. Neuron. 2013;80:791–806.
- Shahbaba B, Zhou B, Ombao H, Moorman D, Behseta S. A Semi-parametric Bayesian Model for Detecting Synchrony Among Multiple Neurons. Neural Computation. 2014;26:2025–2051.
- Sharp C, Monterosso J, Montague PR. Neuroeconomics: a bridge for translational research. Biological Psychiatry. 2012;72:87–92.
- Song P. Monte Carlo Kalman filter and smoothing for multivariate discrete state space models. 2000.
- Stein ML. Statistical Interpolation of Spatial Data: Some Theory for Kriging. Springer; 1999.
- Swihart B, Caffo B, Crainiceanu C. A unified approach to modeling multivariate binary data using copulas over partitions. Johns Hopkins University, Dept. of Biostatistics Working Papers; 2010.
- West M. Hierarchical mixture models in neurological transmission analysis. Journal of the American Statistical Association. 1997;92:587–606.
- Wilson AG, Ghahramani Z. Modelling Input Dependent Correlations between Multiple Responses. In: Flach P, De Bie T, Cristianini N, editors. Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD); Bristol, UK: Springer; 2012.
- Yuan M, Lin Y. Model selection and estimation in the Gaussian graphical model. Biometrika. 2007;94:19–35.
- Zeger SL. A regression model for time series of counts. Biometrika. 1988;75:621–629.
