IEEE Trans Inf Theory. Author manuscript; available in PMC 2010 April 5.
Published in final edited form as:
IEEE Trans Inf Theory. 2010 February 1; 56(2): 875–899.
doi:  10.1109/TIT.2009.2037057
PMCID: PMC2849156

Synergistic Coding by Cortical Neural Ensembles


Abstract—An essential step towards understanding how the brain orchestrates information processing at the cellular and population levels is to simultaneously observe the spiking activity of cortical neurons that mediate perception, learning, and motor processing. In this paper, we formulate an information theoretic approach to determine whether cooperation among neurons may constitute a governing mechanism of information processing when encoding external covariates. Specifically, we show that conditional independence between neuronal outputs may not provide an optimal encoding strategy when the firing probability of a neuron depends on the history of firing of other neurons connected to it. Rather, cooperation among neurons can provide a “message-passing” mechanism that preserves most of the information in the covariates under specific constraints governing their connectivity structure. Using a biologically plausible statistical learning model, we demonstrate the performance of the proposed approach in synergistically encoding a motor task using a subset of neurons drawn randomly from a large population. We demonstrate its superiority in approximating the joint density of the population from limited data compared to a statistically independent model and a maximum entropy (MaxEnt) model.

Index Terms: Cortical networks, graph theory, maximum entropy (MaxEnt), minimum entropy distance (MinED), neural coding, spike trains

I. Introduction

Cortical neurons have long been hypothesized—potentially debated—to encode information about external stimuli using one of two encoding modalities: temporal coding or rate coding [1]. In temporal coding, a neuron fires an action potential—or a spike—that is almost always time locked to the external stimulus. Rate coding, on the other hand, presumes that a neuron responds to a stimulus by modulating the number of spikes it fires above or below a baseline level during a fixed time interval. In sensory systems such as visual and auditory cortices, temporal codes are more pronounced [2]–[5], while in motor systems, rate codes seem to be predominantly present [6]–[8].

The mapping relationship between an external stimulus s and a neural response r (whether it expresses a temporal code or a rate code) is typically described by the joint density P(r, s). This density completely expresses the “receptive field” of a sensory neuron, or the “tuning function” of a motor neuron [9], [10]. It has been the standard metric for measuring stimulus-driven neural responses in neurophysiological experiments for many decades. Recently, correlation among neurons has emerged as another cortical response property that may play a role in encoding stimuli [11]–[14]. This property has naturally emerged from the ability to simultaneously record the activity of an ensemble of neurons in selected brain targets while subjects carry out specific functions [15]–[18]. Not surprisingly, many important aspects of behavior–neurophysiology relationships stem from the collective behavior of many neuronal elements [19]. The major gain over sequentially recording single-neuron activity is access to the joint nth-order statistics of the observed neurons, P(r1, r2, …, rn, s), thereby providing a closer examination of the response space of the underlying biological system that these neurons represent. Clearly, calculation of the true P(r1, r2, …, rn, s) requires knowledge of all the possible network states, which is practically impossible.

In many cases, the variables r1, r2, …, rn exhibit variable degrees of statistical dependency [21], [22]. Generally speaking, this statistical dependency may result from a significant overlap in the receptive fields of these neurons, often referred to as response to a common input, from some type of anatomical connectivity between the neurons, or from both [23]–[25]. To date, it is unclear whether this dependency is part of an efficient neural encoding mechanism that attempts to synergistically and cooperatively represent the stimulus in a network-wide interaction between the neuronal elements, or whether it hinders this encoding mechanism by redundantly expressing independent neuronal responses to that stimulus.

Numerous information theoretic measures have been proposed to address this synergy/redundancy question [13], [26]–[31]. It has been suggested that a maximum entropy (MaxEnt) model relying on weak second-order interaction (pairwise correlation) between neurons may result in a strong cooperative network behavior [32]. This principle was argued to yield a data distribution with the “least structure” among all possible distributions that could explain the observed data. More recently, a minimum mutual information (MinMI) principle was argued to yield a lower bound on the mutual information between the stimulus and the response of any system that explains the data [27]. However, aside from having been applied only to sensory neurons in vitro, neither approach has directly addressed the following fundamental questions: how spatio–temporal spiking patterns of cortical neurons contribute to actual information transfer through the system, and how the joint distribution can be optimally expressed using knowledge of any intrinsic structure in the underlying network connectivity.

In this paper, we propose a measure of information transfer that helps assess whether a connection-induced dependency in neural firing is part of a synergistic population code. Specifically, we propose to optimally express the conditional neural response in terms of a consistent hypergraph. We argue that the stimulus-specific network model associated with each hypergraph represents some type of “information trajectory” as it attempts to discover how stimulus information governs the way neurons signal one another. We demonstrate that this hypergraph representation yields a minimum entropy distance (MinED) between the conditional response entropy of a single experimental trial and the “true” response entropy estimated from a concatenation of many trials. We also show that this approach falls under the class of MaxEnt models that seek to find the probability distribution that maximizes the entropy with respect to the known constraints.

The paper is organized as follows. Section II discusses the theory of our framework. Section III describes the methods we used to generate the neural data. Section IV describes the results obtained. Section V discusses how the proposed approach relates to previous work, and Section VI summarizes our conclusions and suggests directions for future work.

II. Theory

Let us assume we have a population of n neurons encoding stimulus S that can take one out of P possible values {s1, s2, …, sP}. These can be a set of discrete values of a continuous, time-varying deterministic function, or discrete-valued parameters of a latent stochastic process. For notational convenience, we will denote herein by Ps(·) quantities conditioned on the presence of S and make the distinction clear wherever needed. The encoding dictionary is characterized by the conditional probability of the neural response given S, or the likelihood Ps(r), where r is a matrix of population responses r = [r1, r2, …, rn]. Each neuron’s response is expressed in terms of the spike count in a fixed length time bin that is small enough to contain a binary “1” (spike) or a binary “0” (no spike). In the presence of sp, we assume that the variables r1, r2, …, rn exhibit some form of stimulus-specific statistical dependency that can be characterized by the kth-order marginal Ps([r1, …, rk]), where k is the number of neurons involved in encoding S = sp and can vary with the stimulus index p. We seek to express the encoding dictionary as a product of all the possible kth-order conditional marginals Ps(k)(r). Given that a population of n neurons has 2n states, expressing the encoding dictionary by searching for the optimum product of marginals is a computationally prohibitive task. Therefore, it is highly desirable to reduce the search space by finding an optimum set of interactions between the variables r1, r2, …, rn for each value of the stimulus S.

Shannon’s mutual information can be used to compute how much information about S is encoded in r. However, it depends on obtaining a good estimate of the noise entropy [33], denoted Hs(r), which is known to be highly variable in our case [1]. When correlation between the variables is present, multi-information [34], also known as excess entropy [35], can be used. It is the relative entropy of the joint density of the variables with respect to the product of their individual densities
$$I_s(\mathbf{r}) = D\left(P_s(\mathbf{r}) \,\middle\|\, \prod_{i=1}^{n} P_s(r_i)\right) = \sum_{i=1}^{n} H_s(r_i) - H_s(\mathbf{r}) \tag{1}$$
where Ps(ri) is the marginal distribution of neuron i given s, and Hs(ri) represents the conditional entropy of neuron i. Since the joint entropy cannot exceed the sum of marginal entropies, i.e., Σi Hs(ri) ≥ Hs(r), the multi-information is always nonnegative, and equality holds only when the responses are statistically independent.
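Concretely, when the joint density Ps(r) is available as an array over the 2^n binary states, the multi-information in (1) can be computed directly. The sketch below is illustrative code, not from the paper:

```python
import numpy as np

def multi_information(p_joint):
    """Multi-information I_s(r) = sum_i H_s(r_i) - H_s(r) in bits, for a
    joint distribution over n binary neurons given as an array of shape (2,)*n."""
    p = np.asarray(p_joint, dtype=float)
    p = p / p.sum()
    n = p.ndim

    def H(q):
        q = q.ravel()
        q = q[q > 0]
        return -np.sum(q * np.log2(q))

    # marginal of neuron i: sum the joint over every other axis
    h_marginals = sum(H(p.sum(axis=tuple(a for a in range(n) if a != i)))
                      for i in range(n))
    return h_marginals - H(p)
```

As a sanity check, a product distribution gives zero multi-information, while two perfectly correlated binary neurons give exactly 1 bit.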

A. Factoring Multi-Information

Although multi-information accounts for the presence of correlation, it does so in a global context. It is highly desirable to quantify how much of that correlation is intrinsic to independent, pairwise, triplet, or higher order interactions among neurons given a particular stimulus. To do this, the kth-order connected information measure Is(k)(r) [36], also known as co-occurrences [37], computes the distance between the entropy of the kth-order and the (k − 1)th-order conditional densities
$$I_s^{(k)}(\mathbf{r}) = H\left[P_s^{(k-1)}(\mathbf{r})\right] - H\left[P_s^{(k)}(\mathbf{r})\right] \tag{2}$$
where Ps(k)(r) is the kth-order conditional density and is estimated as a normalized product of marginals up to the kth-order. It can be easily shown that multi-information is the sum of all kth-order connected information terms
$$I_s(\mathbf{r}) = \sum_{k=2}^{n} I_s^{(k)}(\mathbf{r}) \tag{3}$$
By computing the ratio of each kth-order connected information to the multi-information Is(k)/Is, the contribution of each kth-order correlation in the encoding dictionary can be obtained. The order of the encoding model is estimated as the maximum k with significant connected information, as will be shown in Section IV.
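One standard way to obtain the kth-order density Ps(k)(r) is iterative proportional fitting (IPF), which yields the MaxEnt distribution matching all marginals up to order k; the paper's exact estimator may differ. A sketch of the second-order connected information I(2) = H[P(1)] − H[P(2)] under that assumption:

```python
import itertools
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def maxent_pairwise(p_true, iters=2000):
    """Iterative proportional fitting: MaxEnt distribution over n binary
    neurons matching all pairwise marginals of p_true (shape (2,)*n)."""
    n = p_true.ndim
    q = np.full_like(p_true, 1.0 / p_true.size)
    for _ in range(iters):
        for i, j in itertools.combinations(range(n), 2):
            rest = tuple(a for a in range(n) if a not in (i, j))
            target = p_true.sum(axis=rest)       # true pairwise marginal
            current = q.sum(axis=rest)
            ratio = np.where(current > 0,
                             target / np.where(current > 0, current, 1.0), 0.0)
            shape = [1] * n
            shape[i], shape[j] = 2, 2            # broadcast over other axes
            q = q * ratio.reshape(shape)
    return q

def connected_info_2(p_true):
    """I^(2) = H[P^(1)] - H[P^(2)]: entropy drop from the independent model
    to the pairwise MaxEnt model."""
    n = p_true.ndim
    p1 = np.ones_like(p_true)
    for i in range(n):
        m = p_true.sum(axis=tuple(a for a in range(n) if a != i))
        shape = [1] * n
        shape[i] = 2
        p1 = p1 * m.reshape(shape)               # product of marginals
    return entropy(p1) - entropy(maxent_pairwise(p_true))
```

For two perfectly correlated neurons all of the multi-information is second-order, while for a uniform (independent) population I(2) vanishes.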

We demonstrate the above theoretical analysis with a simple example. In Fig. 1, we graphically represent the encoding of S using a population of three neurons i, j, and k. Here, the three neurons may encode P possible stimuli if H(S) is partitioned (not shown), or different variations of parameters associated with a single time-varying stimulus. The stimulus is encoded both independently (e.g., neuron k) and jointly (neurons i and j). Since no third-order interaction exists between the variables, the multi-information in this case equals the second-order connected information, Is(r) = Is(2)(r), and the encoding model order for this population is k = 2, as it fully characterizes the response properties using interactions limited to pairwise correlations. The latter can also be measured by the synaptic efficacy [38]. The mutual information between the response and the stimulus is shown by the intersection between the entropy of the responses and that of the stimulus (light gray region) in Fig. 1, while the noise entropy Hs(r) is shown by the dark gray region.

Fig. 1
Stimulus entropy representation by a population of three neurons. H (S) expresses the stimulus dictionary. The light gray region is the mutual information between the response of the three neurons and the stimulus, while the dark gray region represents ...

B. Graphical Representation

A stimulus-induced dependency between the neurons can be graphically represented using Markov random fields (MRF) [39]. Correlation can be represented in this graph, denoted G, as an undirected link (factor) between the neurons. In such case, the encoding dictionary is expressed as
$$P_s(\mathbf{r}) = \frac{1}{Z} \prod_{i=1}^{n} P_s\left(r_i \mid r_{\pi_i}\right) \tag{4}$$
where πi denotes the set of neurons connected to neuron i, and Z is a normalization factor (also known as the partition function).

The edges in a Markov graph structure, or cliques [40], express the role of second-order interactions between neurons in encoding S. A Markov graph incorporating only first-order cliques assumes neurons to be conditionally independent. This probabilistic model is noncooperative because it does not consider any stimulus-induced correlation among neurons. The encoding dictionary in such case is expressed as
$$P_s(\mathbf{r}) = \prod_{i=1}^{n} P_s(r_i) \tag{5}$$
On the other hand, when second-order cliques are present, we get the Ising model from statistical physics [32], [41]. This model becomes a MaxEnt model when the density is expressed as an exponential function of the weighted responses and pairwise responses
$$P_s(\mathbf{r}) = \frac{1}{Z} \exp\left(\sum_{i=1}^{n} \delta_i r_i + \sum_{i<j} \gamma_{ij} r_i r_j\right) \tag{6}$$
The unknown parameters δi and γij are estimated using an iterative scaling algorithm so that the expected values of the individual responses ⟨ri⟩ and pairwise responses ⟨rirj⟩ match the measurable information in the experiment [42]. This information is the mean spike count within a fixed window (i.e., the firing rates) of individual neurons and their observed pairwise correlations.
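As a concrete sketch of this moment-matching step, the fit below uses plain gradient ascent on the log-likelihood in place of iterative scaling (the fixed points coincide), with exact enumeration of the 2^n states, so it only scales to small n. All names and constants are illustrative:

```python
import itertools
import numpy as np

def fit_ising(mean_r, mean_rr, n, lr=0.2, iters=20000):
    """Fit P(r) ∝ exp(Σ δ_i r_i + Σ_{i<j} γ_ij r_i r_j) over binary r so that
    the model moments <r_i> and <r_i r_j> match the given targets."""
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    pairs = list(itertools.combinations(range(n), 2))
    pair_feat = np.array([[s[i] * s[j] for i, j in pairs] for s in states])
    delta = np.zeros(n)
    gamma = np.zeros(len(pairs))
    for _ in range(iters):
        logits = states @ delta + pair_feat @ gamma
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of the log-likelihood = data moments - model moments
        delta += lr * (mean_r - p @ states)
        gamma += lr * (mean_rr - p @ pair_feat)
    return delta, gamma, p, states
```

Because the log-likelihood of an exponential family is concave in (δ, γ), this converges to the unique moment-matching (MaxEnt) solution whenever the target moments are realizable.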

C. Cooperative Model

Herein, we generalize the Markov structure to edges that can connect any number of vertices to model higher order interactions at very fine time scales. Time scales below 3 ms have been shown to describe excess synchrony typically observed between cortical neurons in vivo [8], [50], [51]. This yields a hypergraph G in which every hyperedge expresses a marginal distribution, and represents a cooperative model of the encoding dictionary as [52]
$$P_s(\mathbf{r}) = \frac{1}{Z_G} \prod_{\pi_i \in G} \prod_{\pi_j \subseteq \pi_i} P_s\left(r_{\pi_j}\right)^{q_j} \tag{7}$$
where ZG is a normalization factor, and Ps(rπj) are factors defined over the sets πj, each of which is a subset of some πi. The sets πi are the edges in the hypergraph G. The power of the marginals in (7), qj = ±1, is defined based on the Möbius inversion property as [53]
$$q_j = (-1)^{\left|\pi_i\right| - \left|\pi_j\right|} \tag{8}$$
where |·| is the cardinality of a set. The factorization in (7) indicates that all the marginals associated with the subsets of πi are represented in addition to the marginals of set πi. Using (7), the entropy of the cooperative model can be derived from the entropy of all of its factors as
$$H_G = \sum_{j} q_j H_s\left(r_{\pi_j}\right) + H_{Z_G} \tag{9}$$
where the entropy HZG represents a constant bias term resulting from the partition function. An example is illustrated in Fig. 2, which shows a hypergraph with four factors representing the network structure of a population of six neurons. In this hypergraph, circles represent neurons and squares represent factors. Using (7), the encoding dictionary of the cooperative model represented by this hypergraph is factorized as

Fig. 2
Hypergraph including four factors, representing a Markov network model of a population of six neurons. Neurons are illustrated as circles and factors are illustrated as squares. The terms H123, H24, H25, and H36 are the entropies of the marginal factors ...
$$P_s(\mathbf{r}) = \frac{1}{Z_G} \left[P_{123}\right] \left[P_{24}\, P_{25}\, P_{36}\right] \left[P_2^{-1}\, P_2^{-1}\, P_3^{-1}\right]$$
where Pijk is short for Ps(ri, rj, rk), and each bracket represents factor sets of a different order. The cooperative model in this example can be further simplified to
$$P_s(\mathbf{r}) = P_s(r_1, r_2, r_3)\, P_s(r_4 \mid r_2)\, P_s(r_5 \mid r_2)\, P_s(r_6 \mid r_3)$$
The kth-order interactions among neurons illustrated by the hypergraph are shown as ovals, representing the entropy space of each factor. Using (9), the entropy estimate of the cooperative model in Fig. 2 is expressed as
$$H_G = H_{123} + H_{24} + H_{25} + H_{36} - 2H_2 - H_3 + H_{Z_G}$$
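For a tree-structured hypergraph such as the Fig. 2 example, the qj = −1 terms are exactly the entropies of the shared ("separator") neurons, and the resulting HG equals the true joint entropy whenever the distribution actually factorizes over the tree. A short numerical sketch (0-indexed neurons; illustrative code, not from the paper):

```python
import numpy as np

def marginal_entropy(p, subset):
    """Entropy (bits) of the marginal of p (shape (2,)*n) on the given axes."""
    n = p.ndim
    m = p.sum(axis=tuple(a for a in range(n) if a not in subset)).ravel()
    m = m[m > 0]
    return -np.sum(m * np.log2(m))

def cooperative_entropy(p, factors, separators):
    """Entropy of a tree-structured cooperative model:
    sum of factor entropies minus sum of separator entropies."""
    return (sum(marginal_entropy(p, f) for f in factors)
            - sum(marginal_entropy(p, s) for s in separators))
```

For the Fig. 2 hypergraph the factors are {1,2,3}, {2,4}, {2,5}, {3,6} and the separators {2}, {2}, {3}, reproducing HG = H123 + H24 + H25 + H36 − 2H2 − H3.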
D. Minimum Entropy Distance

In a typical experiment, the stimulus is presented over multiple repeated trials to probe all the possible system states expressed by the output spike trains. The stimulus itself can be an actual information-bearing signal or the output of another neural population, as is typical in the layered structure of the cortex [54]. We therefore assume that the “true” entropy of the population given S is computed by concatenating a large number of repeated trials for that stimulus. Given the distribution of these data, we seek to find the optimal factorization that results in a model entropy with minimum distance to the true entropy. We refer to this as the MinED algorithm. This algorithm yields a hypergraph G* obtained by minimizing the following objective function:
$$G^{*} = \arg\min_{G} \left| H_{\mathrm{true}} - H_G \right| \tag{10}$$
where Htrue is the true population entropy and HG is the entropy estimate of the population from single-trial data.

To initialize the search, we assume the least informative prior, expressed by the independent model. The algorithm searches for the best kth-order marginal, out of all N!/(k!(N − k)!) kth-order marginals, that minimizes the entropy distance. In each step, only kth-order marginal factors are accumulated in the hypergraph. Changes to the graph structure are allowed only when the entropy distance decreases. This operation is repeated until convergence occurs. The hypergraph is saved at the end of each iteration, and the search then continues to higher orders. A summary of the algorithm is given below.

Initialize ⟨Graph⟩ ← Independent Model
 For ⟨Order k⟩ from 2 to n
  For all nonexisting kth-order edges → ⟨Cset⟩
   Do estimate the ⟨Entropy Distance⟩ for adding each edge from ⟨Cset⟩
   If min{⟨Entropy Distance⟩ over ⟨Cset⟩} < ⟨Entropy Distance⟩ for ⟨Graph⟩
    Then add that edge to ⟨Graph⟩
  End search when no edge decreases the ⟨Entropy Distance⟩
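A minimal sketch of this greedy search, restricted for clarity to second-order factors arranged as a forest (the full algorithm also visits higher orders): for such a model HG = Σi H(ri) − Σedges I(ri; rj), so adding an edge lowers the model entropy by exactly that edge's mutual information. All names here are illustrative:

```python
import itertools
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def marg(p, subset):
    n = p.ndim
    return p.sum(axis=tuple(a for a in range(n) if a not in subset))

def mined_pairwise_tree(p_true):
    """Greedy MinED restricted to second-order, forest-structured factors:
    repeatedly add the edge that most reduces |H_true - H_G|; stop when
    no edge helps."""
    n = p_true.ndim
    h_true = H(p_true)
    h_i = [H(marg(p_true, {i})) for i in range(n)]
    mi = {(i, j): h_i[i] + h_i[j] - H(marg(p_true, {i, j}))
          for i, j in itertools.combinations(range(n), 2)}
    edges, comp = [], list(range(n))     # comp: connected-component labels
    h_g = sum(h_i)                       # independent-model entropy
    while True:
        best = None
        for (i, j), m in mi.items():
            if comp[i] == comp[j]:       # keep the graph a forest
                continue
            cand = h_g - m               # entropy after adding edge (i, j)
            if abs(h_true - cand) < abs(h_true - h_g) and \
               (best is None or abs(h_true - cand) < best[0]):
                best = (abs(h_true - cand), (i, j), cand)
        if best is None:
            return edges, h_g
        _, (i, j), h_g = best
        edges.append((i, j))
        old, new = comp[j], comp[i]
        comp = [new if c == old else c for c in comp]
```

On a toy population where neuron 1 copies neuron 0 and neuron 2 is independent, the search adds exactly the edge (0, 1) and reaches the true entropy.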

The MinED algorithm yields a model that can also be categorized as a MaxEnt model. One notable difference is that the testable information is the rate of occurrence of network states Ptrue and not the estimates of the individual and pairwise neurons’ firing rates ⟨ri⟩ and ⟨rirj⟩, as in [32]. Although we do not have access to the true rate, we have a repertoire of network states collected over multiple repeated trials. Based on this testable information, (10) can be rewritten as
$$G^{*} = \arg\min_{G} \left( H\left(P_{\mathrm{true}}\right) - H_G \right) = \arg\max_{G} H_G \tag{11}$$
Based on the MaxEnt principle, HG is always less than or equal to Htrue, and the maximum HG is reached when the MinED objective function is minimized.

III. Methods

A. Probabilistic Spiking Neurons

Pairwise MaxEnt models as in (6) have, to our knowledge, not been applied to stimulus-driven responses of large networks of cortical neurons observed in vivo. There are multiple reports in the literature suggesting that these models are only suitable for neural systems that are small in size (~10–20 cells) and operate below a crossover point of the model’s predictive power [55]. To gain some insight into whether this is the case, we sought here to simulate an encoding model with the following features.

  1. A relatively large size spiking network model to test the generalization capability of the MinED algorithm to larger systems and contrast it to MaxEnt models that perform well in small size systems.
  2. Testable information limited to a small subpopulation of the entire simulated network model so that the true probability distribution can be faithfully approximated across many experiments and to mimic realistic neural yields in typical neural recording experiments (second edition of [18]).
  3. Spike trains as input to the network (i.e., discrete binary representation) rather than continuous inputs (e.g., current injection typically used in in vitro experiments with cell cultures).
  4. Biologically plausible statistical learning rule that optimizes the system’s experience with the stimulus by varying the network structure over multiple repeated trials.
  5. Response statistics of the population similar to those observed experimentally.

We simulated a motor task in which a population of 166 neurons encoded a 2-D arm trajectory of a subject reaching from a center position to one of eight peripheral targets [7]. We used an inhomogeneous Poisson generalized linear model (GLM) to simulate the spike train data for two interconnected layers of neurons. This model has been shown to fit the firing pattern of motor cortical neurons under a variety of motor tasks [56], [57]. In this model, the firing probability of each neuron at time t is expressed by the conditional mean intensity function λi(t | gi(t), hi(t)), where gi(t) describes the tuning to movement parameters (i.e., the stimulus), and hi(t) denotes the spiking history of neuron i (to account for refractory effects) and that of other neurons connected to it up to time t [58], [59]
$$\lambda_i\left(t \mid g_i(t), h_i(t)\right) = \exp\left(\beta_i + g_i(t) + h_i(t)\right) \tag{12}$$
where βi is the background rate of neuron i that represents membrane voltage fluctuations and
$$h_i(t) = \sum_{j \in \pi_i} \sum_{m=1}^{M_{ij}} \alpha_{ij}(m\Delta)\, S_j(t - m\Delta) \tag{13}$$
where πi is the set of (input) neurons affecting the firing of (output) neuron i, Mij is the number of history time bins relating the firing probability of neuron i to the activity of parent neuron j, Sj(t − mΔ) is the state of neuron j in bin m (0 or 1), and Δ is the bin width. The parameter αij models the coupling between neuron i and neuron j as [60]
$$\alpha_{ij}(m\Delta) = \begin{cases} A_{ij}(t), & m = l_{ij} \\ 0, & \text{otherwise} \end{cases} \tag{14}$$
where Aij(t) is the strength of the coupling and can be positive or negative depending on whether the connection is excitatory or inhibitory, and lij is the synaptic latency (in bins) associated with that connection.
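A minimal discrete-time simulation in the spirit of (12)–(14) follows. It is illustrative only: the paper's generator and parameters are richer; here each bin spikes with probability min(1, λΔ), the coupling is a pure impulse at lag lij, and all constants are made up:

```python
import numpy as np

def simulate_glm(n, T, beta, A, lag, delta=0.001, rng=None):
    """Simulate n coupled Bernoulli-GLM spike trains over T bins of width
    delta (s): neuron i spikes in bin t with probability
    min(1, exp(beta_i + h_i(t)) * delta), where h_i(t) adds A[i, j] whenever
    parent j spiked lag[i, j] bins earlier."""
    rng = rng if rng is not None else np.random.default_rng(0)
    S = np.zeros((n, T), dtype=int)
    for t in range(T):
        for i in range(n):
            h = 0.0
            for j in range(n):
                m = lag[i, j]
                if A[i, j] != 0 and t - m >= 0:
                    h += A[i, j] * S[j, t - m]
            p = min(1.0, np.exp(beta[i] + h) * delta)
            S[i, t] = rng.random() < p
    return S
```

With beta_i = log(rate in Hz), an uncoupled neuron fires at roughly that rate, and a strong positive A[i, j] makes neuron i fire preferentially one lag after its parent.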

Our model consisted of a two-layer network as depicted in Fig. 3(a). The input layer (66 neurons) was noncooperative, meaning that the term hi (t) was absent from the model (12) for these 66 neurons. For this layer, neurons were tuned to two movement parameters: direction and endpoint through gi (t) as

Fig. 3
(a) Simplified version of the network structure. The noncooperative (input) layer consists of 40 direction-tuned neurons and 36 endpoint tuned neurons with ten neurons tuned to both movement parameters. Each cooperative layer neuron receives four excitatory ...
$$g_i(t) = \eta_i\, V(t + \varsigma)\, \exp\left(\omega_i \cos\left(\theta(t + \varsigma) - \theta_i\right)\right) + \kappa_i \exp\left(-\frac{\left(\phi_x - x_i\right)^2 + \left(\phi_y - y_i\right)^2}{2\sigma_i^2}\right) \tag{15}$$
where ηi and κi are the weights assigned to the cosine and endpoint tuning terms, respectively, V(t+ς) is the movement speed (kept constant) to be reached after ς seconds, θ(t+ς) is the movement direction, θi is the neuron’s preferred direction [7], ϕx and ϕy are the x- and y-components of the intended target endpoint, respectively, while xi and yi are the x- and y-components of the preferred endpoint of neuron i, respectively. The parameters ωi and σi control the direction and endpoint tuning widths, respectively. Based on these settings, neural responses would appear to be correlated due to the overlap between the tuning characteristics, and not due to actual interaction among themselves. In our model, neurons 1–40 were each cosine-tuned to a uniformly distributed preferred direction between −180° and 180° as illustrated in Fig. 3(b). Neurons 31–66 were each tuned to a preferred endpoint in a 2-D space divided into a 6×6 grid. The center of each cell in the grid was considered as the endpoint as shown in Fig. 3(c). Accordingly, ten neurons (neurons indexed 31–40) were tuned to both direction and endpoint. This mimicked the heterogeneous tuning characteristics frequently observed in motor cortex recording experiments [61], [62].
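As a sketch, the two tuning terms might be combined as below. The von Mises-style direction term and the Gaussian endpoint term are assumptions consistent with the stated roles of ωi and σi, not necessarily the paper's exact form of (15):

```python
import numpy as np

def tuning_drive(theta, v, endpoint, pref_dir, pref_end,
                 eta, kappa, omega, sigma):
    """Direction + endpoint tuning drive g_i(t) (illustrative form):
    a von Mises-style direction term with width omega, scaled by speed v,
    plus a Gaussian endpoint term with width sigma."""
    direction = eta * v * np.exp(omega * (np.cos(theta - pref_dir) - 1.0))
    dx = endpoint[0] - pref_end[0]
    dy = endpoint[1] - pref_end[1]
    end = kappa * np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
    return direction + end
```

Both terms peak when the movement matches the neuron's preferences and fall off with angular or Euclidean distance, which is all the simulation requires of them.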

The cooperative layer (100 neurons), on the other hand, followed the model (12) with the term gi(t) = 0. Initially, these neurons did not have any tuning preferences and were fully interconnected among themselves with fixed weights, with 80 excitatory neurons and 20 inhibitory neurons similar to the 4 : 1 ratio typically observed in cortex [63]. For this layer, hi(t) was expressed as
$$h_i(t) = \sum_{j \in \pi_i^{nc}} \sum_{m=1}^{M_{ij}} \alpha_{ij}^{nc}(m\Delta)\, S_j(t - m\Delta) + \sum_{k \in \pi_i^{c}} \sum_{m=1}^{M_{ik}} \alpha_{ik}^{c}(m\Delta)\, S_k(t - m\Delta) \tag{16}$$
where πinc and πic are the parent sets of neuron i in the noncooperative and cooperative layers, respectively, while αijnc and αikc model the connections from noncooperative layer and cooperative layer neurons, respectively, as expressed in (14). Thus, the encoding of movement parameters by these neurons resulted from the random connections to subpopulations in the noncooperative layer. Specifically, each cooperative layer neuron received excitatory connections from four randomly selected noncooperative layer neurons. The synaptic latency lij of the connections from the noncooperative layer neurons was randomly selected from a uniform distribution between 1 and 3 bins, while for the cooperative layer connections, lik was set to 1 bin. Table I lists the values of the parameters used in (14)–(16).

TABLE I
Values of the Parameters Used for the Model of Equations (13)–(16). These Parameters Were Fixed for all Neurons in the Noncooperative and Cooperative Layers. Aikc+(t) and Aikc−(t) Model the Weight of the Excitatory and Inhibitory Connections, ...

B. Task Learning

We used a biologically plausible statistical learning rule to optimize the system’s experience with the task. This rule was governed by a spike-timing-dependent plasticity (STDP) mechanism, known to underlie synaptic plasticity in vitro [66] and widely believed to underlie cortical plasticity in vivo [65]–[67]. Specifically, under this mechanism, the connection strength between any given pair of interconnected neurons in the cooperative layer was allowed to change naturally if the pair was “functionally correlated.” Two neurons in the cooperative layer were considered functionally correlated if they received connections from the same noncooperative layer neuron with different synaptic latencies, or from more than one noncooperative layer neuron with overlapped tuning. This is illustrated in Fig. 4(a) by cooperative layer neurons C and D. More precisely, only the connection from neuron C to neuron D is expected to be strengthened as a result of learning, while all other connections are expected to be weakened if no overlap exists between the tuning of neurons A and B. More details about this learning rule are reported elsewhere [66], [68].
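The exact rule is given in the cited references; a generic pair-based STDP update illustrating the causal strengthening described above might look like the following (all constants and names illustrative):

```python
import numpy as np

def stdp_update(w, pre_times, post_times, a_plus=0.005, a_minus=0.00525,
                tau=20.0, w_min=0.0, w_max=0.05):
    """Generic pair-based STDP sketch: for every pre/post spike pairing,
    potentiate when pre precedes post (dt > 0) and depress when post
    precedes pre (dt < 0), with exponential dependence on |dt| (ms)."""
    for tp in pre_times:
        for tq in post_times:
            dt = tq - tp                       # > 0: causal pairing
            if dt > 0:
                w += a_plus * np.exp(-dt / tau)
            elif dt < 0:
                w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, w_min, w_max))
```

Under such a rule, a connection whose presynaptic neuron reliably fires a few milliseconds before the postsynaptic one (as C does before D) is strengthened, while reversed or uncorrelated timing weakens it, matching the qualitative behavior described above.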

Fig. 4
(a) Schematic of the statistical learning rule. Neurons A and B are noncooperative layer neurons that excite cooperative layer neurons C, D, and E. The parameter lij is the synaptic latency associated with each connection. Assuming lCA, to be greater ...

The model was trained with 1000 trials (trajectories) per target, where the duration of each trial was 2 s: 1 s to reach the target and 1 s to return to the center position. Trajectory data were collected from the 2-D arm movement of a human operator performing the task. Excitatory connections between cooperative layer neurons were the only connections allowed to change as a result of learning. Fig. 4(b) shows sample trajectories used during task learning. Every 2 s, one of the 1000 trajectories was randomly selected, from which the movement direction, movement speed, and endpoint were extracted. These movement features were then used to modulate the firing of noncooperative layer neurons according to (15). Fig. 4(c) shows sample spike raster plots obtained in one trial for two adjacent targets 1 and 8 after the training phase. It can be seen that noncooperative layer neurons exhibit clear task-dependent response modulation reminiscent of their tuning characteristics. In contrast, cooperative layer neurons do not exhibit noticeable changes in their firing rates, but significant synchrony seems to be present.

Fig. 5(a) shows the average change in connection strength for both functionally correlated and uncorrelated neurons during the learning phase. As can be seen, the strength of the connections between functionally correlated neurons increases above the initial strength (0.01), while those of uncorrelated neurons decrease substantially over multiple training sessions. Fig. 5(b) shows the connectivity matrices of cooperative layer neurons before and after learning. Sparse connectivity is apparent, indicating that the learning phase consolidated the task information in a relatively small number of connections.

Fig. 5
(a) Mean connectivity strength versus trial number for functionally correlated and uncorrelated cooperative layer neurons. (b) Connectivity matrix for the cooperative layer neurons before training (left) and after training (right). The columns represent ...

An example of the statistics of input and output spike trains for sample neurons during the reach portion of the trial is shown in Fig. 6. Cooperative layer neuron “103” receives input from neurons “8,” “13,” “24” (direction tuned), and “54” (endpoint tuned). As can be seen in Fig. 6(f), neuron “103” modulated its firing significantly when the movement was to targets “5” or “8” as depicted by the 10-ms bin poststimulus time histograms (PSTHs) averaged across 1000 trials/target. On the other hand, for these two targets, two out of the four input neurons modulated their firing significantly (neurons “8” and “13” for target “5” and neurons “24” and “54” for target “8”) as shown in Fig. 6(b)–(e). Because target “5” was independently encoded by neurons “8” and “13” and excited neuron “103,” the response modulation of neuron “103” to target “5” was much stronger than its response modulation to targets “4” and “6” combined. The latter targets were independently encoded (with similar strength) by the same two neurons “8” and “13,” respectively. A similar observation can be made about the response modulation of neuron “103” to target “8” and that of its response to targets “1” and “7” relative to neurons “24” and “54,” respectively.

Fig. 6
(a) Example of the connectivity between one cooperative layer neuron (neuron 103) and four noncooperative layer neurons (8, 13, 24, and 54). Neurons 8, 13, and 24 are direction tuned while neuron 54 is endpoint tuned. (b)–(f) 10-ms bin PSTHs of ...

To assess whether the correlation observed between cooperative layer neurons is an actual indicator of stimulus-driven interaction, we created a version of the data in which each spike was randomly jittered around its position over a uniform range of [−9, 9] ms. This kept the firing rate of each neuron the same (i.e., a similar individual response to the common input) but destroyed the time-dependent correlation in the patterns, thereby making neurons appear to fire independently. For a representative population of ten cooperative layer neurons (neurons “99” to “108”), Fig. 7(a) shows the probability of the number of neurons firing within a 10-ms window for both the actual data and the jittered data. As can be seen, the probability of two or more neurons firing within a 10-ms window is smaller for the jittered data than for the actual data, indicating that temporal coordination among the neurons maintains the spatio–temporal patterns with higher order interactions, and that jittering destroys this coordination. Fig. 7(b) shows the number of times each specific pattern of firing occurred in the jittered data versus the actual data. Each point in the figure represents one out of the 1024 possible patterns. The graph indicates that jittering the spike trains decreased the rate of occurrence of most of the patterns compared to the actual data. This confirms the observation in Fig. 7(a) that the higher order correlation in the data is more than what can be obtained from neurons acting independently.
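The jitter control described above can be sketched in a few lines (helper names illustrative; ±9 ms uniform jitter as in the text):

```python
import numpy as np

def jitter_spikes(spike_trains, jitter_ms=9.0, rng=None):
    """Jitter every spike independently by U(-jitter_ms, +jitter_ms) ms:
    preserves each neuron's spike count and rate but destroys
    fine-timescale coordination across neurons."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return [np.sort(t + rng.uniform(-jitter_ms, jitter_ms, size=len(t)))
            for t in spike_trains]

def sync_count(t_a, t_b, window_ms=10.0):
    """Number of spikes in t_a with at least one spike of t_b within
    +/- window_ms/2 (a crude coincidence counter)."""
    return int(sum(np.any(np.abs(t_b - t) <= window_ms / 2.0) for t in t_a))
```

Applying this to two identical regular trains, the raw trains are perfectly coincident, while the jittered copies lose most of their fine-timescale coincidences even though each still has the same number of spikes.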

Fig. 7
(a) Probability of number of neurons firing within a 10-ms window for actual data generated from the model and a jittered version of the data for cooperative layer neurons 99–108. (b) Rate of occurrence of specific firing patterns within a 3-ms ...

IV. Results

A. Order of the Encoding Model

After the motor task was learned, no more changes in the connection strengths were permitted and the obtained network topology was kept fixed for the rest of the analysis. One thousand trajectories per target were subsequently presented to the network and the corresponding spike trains were obtained. Fig. 8(a) shows sample trial data from target 1, obtained from the cooperative layer neurons. Target-specific trials were then concatenated into a single response matrix r. To estimate the true entropy, the joint density was calculated as the rate of occurrence of distinct patterns. Fig. 8(b) shows a sample pattern from a subpopulation of ten neurons indicated by the box in Fig. 8(a).
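Estimating the density as the "rate of occurrence of distinct patterns" amounts to a plug-in entropy over binary words, which can be sketched as follows (illustrative code, not from the paper):

```python
import numpy as np

def pattern_entropy(S):
    """Plug-in entropy (bits) of network states: each column of the binary
    matrix S (neurons x time bins) is one state; probabilities are the
    empirical rates of occurrence of the distinct patterns."""
    n, T = S.shape
    # encode each n-bit column as an integer pattern id
    codes = (S.T @ (1 << np.arange(n))).astype(np.int64)
    _, counts = np.unique(codes, return_counts=True)
    p = counts / T
    return float(-np.sum(p * np.log2(p)))
```

Note that the plug-in estimator is biased downward for undersampled populations, which is why it is applied here to small (e.g., ten-neuron) subpopulations over many concatenated trials.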

Fig. 8
(a) Sample data from 60 cooperative layer neurons generated by the model during one trial for one of the targets after task learning was completed. (b) A binary representation of the network states shown by the box in (a). Every state describes the activity ...
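The density estimation step described above, counting the rate of occurrence of distinct network states, can be sketched as follows. The response matrix layout (one row per neuron, one column per time bin) and the function name are assumptions for illustration.

```python
from collections import Counter

def empirical_state_density(r):
    """Joint density of network states, estimated as the relative rate
    of occurrence of each distinct binary pattern.  r is a response
    matrix: one row per neuron, one column per time bin."""
    n_bins = len(r[0])
    counts = Counter(tuple(row[t] for row in r) for t in range(n_bins))
    return {state: c / n_bins for state, c in counts.items()}
```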

The order of the encoding model was estimated by determining how much information is intrinsic to each order of interactions. This was done by estimating the kth-order connected information in (2) and comparing it to the multi-information in (1). For a subpopulation size of ten neurons, the second-, third-, and fourth-order connected information is illustrated in Fig. 9. The kth-order connected information for k > 3 is negligible, indicating that the information in the model was carried by independent, pairwise, and triplet interactions among neurons, with no significant contribution from interactions of order higher than three.

Fig. 9
Connected information for different orders of the cooperative model for a fixed subpopulation size (n = 10). The kth-order connected information for k > 3 is negligible compared to the second and third orders. The sum of all connected information ...
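For reference, the multi-information in (1) can be computed directly from the empirical joint density. The sketch below (with hypothetical names) computes I(X) = sum_i H(X_i) - H(X_1,...,X_n) in bits; computing the kth-order connected information in (2) additionally requires fitting a kth-order MaxEnt model, which is omitted here.

```python
import math
from collections import Counter

def entropy_bits(p):
    """Shannon entropy, in bits, of a density given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def multi_information(joint):
    """I(X) = sum_i H(X_i) - H(X_1,...,X_n): the total statistical
    dependence among the neurons.  joint maps binary state tuples
    (one entry per neuron) to probabilities."""
    n = len(next(iter(joint)))
    total = 0.0
    for i in range(n):
        marginal = Counter()
        for state, p in joint.items():
            marginal[state[i]] += p
        total += entropy_bits(marginal)
    return total - entropy_bits(joint)
```

For two independent fair binary neurons the multi-information is zero; for two perfectly synchronized neurons it is one bit.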

B. Model Consistency

The above analysis was repeated and the underlying network model was inferred for each of the eight targets using the MinED algorithm. Fig. 10(a) and (b) illustrates two hypergraphs inferred for sample adjacent targets 1 and 8, for which spike train data from a sample trial was shown in Fig. 4(c). A considerable number of second-order interactions exist, confirming results reported in the literature using in vitro data [32]. A considerable portion of third-order interactions is also present. Together, these account for more than 99.9% of the multi-information in the population activity.

Fig. 10
(a) and (b) Inferred factor graphs for sample targets “1” and “8,” respectively. Every ellipse represents a neuron and the squares represent second- and third-order factors. It can be seen that a considerable number of ...

To verify that the observed factors reflect stimulus-dependent higher order interactions, we computed cross correlograms between sample pairs of neurons that were found to have factors in the hypergraphs. Features in the correlograms can be used to determine whether serial excitatory or inhibitory synaptic interactions exist between biological neurons [23], [25]. Fig. 10(c) and (d) shows sample cross correlograms for the neuron pair (“9,” “10”) with a bin size of 9 ms, computed from target “1” and target “8” trials, respectively. A single-bin central peak at zero lag occurs in the cross correlogram for target “1,” indicating a serial excitatory interaction. This peak is not observed between the two neurons for target “8” data, in agreement with the absence of a factor between the two neurons in the hypergraphs in Fig. 10(a) and (b). It is also noteworthy that the MinED algorithm extracted many additional pairwise factors that were not detected by the cross-correlogram method.
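A plain all-pairs cross correlogram of the kind used above can be sketched as follows; the function name is hypothetical, spike times are in milliseconds, and the 9-ms bin width follows Fig. 10.

```python
def cross_correlogram(train_a, train_b, bin_ms=9.0, max_lag_ms=90.0):
    """Histogram of spike-time lags (t_b - t_a) between two trains.
    A single-bin peak at zero lag is consistent with a serial
    excitatory interaction; a central trough suggests inhibition."""
    n_bins = int(2 * max_lag_ms / bin_ms)
    counts = [0] * n_bins
    for ta in train_a:
        for tb in train_b:
            lag = tb - ta
            if -max_lag_ms <= lag < max_lag_ms:
                counts[int((lag + max_lag_ms) // bin_ms)] += 1
    return counts
```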

Fig. 11 illustrates the model goodness of fit for the independent, MaxEnt, and MinED cooperative models. Greater departure from the reference line (solid) indicates that the true and estimated densities are increasingly distinct. The MinED model is the closest to the reference line, demonstrating its superiority in capturing the intrinsic higher order interactions between the neurons. This is further demonstrated by the Kullback–Leibler (KL) distance between the true and estimated densities, displayed in Fig. 12 for each of the eight targets.

Fig. 11
Model fit for a sample subpopulation of ten neurons for: 1) independent model, 2) MaxEnt model, and 3) MinED cooperative model. The true density is the measured rate of occurrence for distinct network states while the estimated density is the rate predicted ...
Fig. 12
KL distance between the true and the single trial distribution of states for each model. It can be seen that on average the MinED cooperative model results in the smallest distance.
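The KL distance reported in Fig. 12 can be computed directly from the two state densities. In the sketch below (hypothetical names), model states with zero estimated mass are floored at a small constant to keep the divergence finite on single-trial samples; this flooring is an assumed convention, not necessarily the authors' choice.

```python
import math

def kl_distance_bits(p_true, p_model, eps=1e-12):
    """D(p_true || p_model) in bits over binary network states.
    Model states with zero mass are floored at eps (an assumed
    convention) so the divergence stays finite on finite samples."""
    return sum(p * math.log2(p / max(p_model.get(s, 0.0), eps))
               for s, p in p_true.items() if p > 0)
```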

To demonstrate the consistency of the MinED cooperative model as a function of the population size, Fig. 13 shows the average normalized KL distance for the different models as a function of the observed population size. The error for the cooperative model remains bounded as the number of neurons increases, in contrast to the larger error of the independent and MaxEnt models. One possible explanation of this result is that the probability of observing neurons with significant overlap in their tuning characteristics increases with population size. The output correlation then becomes dominated by the input correlation rather than by local interaction between cooperative layer neurons, and in such a case the MaxEnt and independent models tend to exhibit some redundancy in their encoding characteristics.

Fig. 13
KL distance for the independent, MaxEnt, and cooperative models as a function of the size of the observed population. The distance is normalized by the number of neurons. The error bars represent the standard deviation.

V. Discussion

In the above analysis, we have mainly focused on the development of the algorithm and therefore restricted our analysis to spike train data from statistical point process models of cortical neurons. These models generate neural responses that share a number of features with experimental data. From a stimulus coding perspective, it has been shown that motor cortical neurons can simultaneously encode multiple parameters (e.g., direction and endpoint [61]), reflecting the complex anatomical and functional organization of the cortical neural circuitry. From a “natural learning” perspective, our model made use of the widely accepted notion that synaptic modification plays an important role in learning and memory formation [9]. For these reasons, we believe that our encoding model mimics to some extent a biological “information transform” that may be taking place in an actual spiking neural system. Our model, however, did not include many important elements that are known to guide motor behavior, such as attention and visual feedback. We have chosen to simulate a task (the center-out reach task) that minimally relies on this feedback mechanism to guide reach movements (as opposed to, for example, encoding a task with a freely moving target, which would require continuous adaptation of motor commands).

We also found a strong agreement between the statistics of our encoding model data and related models that have been applied to in vitro cell cultures. Specifically, we found that second-order interactions seem to be predominant, a result observed by a multitude of studies (e.g., [11], [13], [32], [42], and [43]).

MaxEnt models similar to (6) have been shown to faithfully explain the collective stimulus-driven network behavior of ~50–100 retinal ganglion cells (RGCs) in isolated slices in vitro [32], [41]–[44]. These neurons have well-characterized receptive fields and typically exhibit low firing rates, and the MaxEnt model was shown to capture ~98% of the multi-information in the presence of a stimulus [41] while accounting for only ~88% of the multi-information in spontaneous network activity [43]. We also found that our biological learning rule results in network connectivity that is relatively sparse, in agreement with recent experimental results during direct activation of biological cortical networks [70].

The functional and anatomical organization of cortical neural networks observed in vivo, however, is significantly different from that of RGC cell cultures. Cortical neurons exhibit very heterogeneous firing patterns [45] and are believed to encode stimuli in a more complex fashion than sensory neurons observed in vitro [46], [47]. They receive inputs from many cortical and subcortical areas and generate outputs through numerous feedback loops with error-correcting capabilities that are known to play a very important role in mediating attention and in optimizing the subject’s experience with the stimulus over multiple repeated trials. Their underlying complex structure therefore results in interaction patterns that may have much longer spatial and temporal dependence, yielding activity sequences that are highly nonstationary.

As our results demonstrated, the MinED approach is superior to the MaxEnt model in inferring the underlying model structure, particularly for relatively large systems. The MaxEnt model finds the optimal network structure that maximizes entropy with respect to the testable information (i.e., known constraints such as observed firing rate and pairwise correlation). This limits its application to small networks in which the true distribution can be measured with a high degree of accuracy. Our MinED approach, on the other hand, was developed to optimally explain single-trial data, with knowledge of the network structure obtained from a concatenation of a large number of trials. This has the advantage of capturing information about transitions between specific sequences of network states, a feature not accounted for by the MaxEnt model. We therefore expect that the MinED approach will yield superior performance in quantifying spatio–temporal patterns of activity that are frequently observed in cortical systems in vivo. Full analysis of real experimental neural data with our proposed approach is currently underway.

VI. Conclusion

As tools for large-scale recording of neural activity in the brain continue to advance, challenging questions about the nature of the brain’s complex information processing naturally arise. Information theory has become increasingly popular for addressing many of these questions. In this work, we mainly addressed the problem of actual information transfer through a system of spiking cortical neurons, which may provide a near-optimal mechanism for encoding external stimuli in single-trial experiments.

Generally speaking, limited data constitute a significant challenge, particularly for large populations, because the dimension of the response space grows exponentially with the number of neurons. In such a case, an impractically large amount of data would be needed to completely characterize the joint distribution, and density estimates can be severely biased when only limited data are available. Our approach therefore circumvents this problem by finding the graphical model that explains single-trial data and can simultaneously be extrapolated to account for arbitrarily large data sizes, if available. Using factor graphs, we have demonstrated that a network model best fits single-trial data when connectivity is constrained by information available from the concatenation of multiple repeated trials.

Using a model of motor encoding, we demonstrated that task information can be transformed from a noncooperative network output with independent firing to a cooperative network interaction using an STDP learning rule. This biologically plausible mechanism accounts for the ordered set of event occurrences and therefore captures temporal correlations between neuronal states that are caused by delayed synaptic interactions between neurons. Our analysis has revealed that a substantial amount of information in the model is represented in second-order interactions and is efficiently represented by cliques in the graph structures, confirming the results obtained by other groups. However, our results also demonstrate that third-order interactions are present and contribute significantly to the multi-information. In addition, spatial patterns between neurons have emerged as a result of task learning. These results were not surprising, given the model’s dependence on the variable-length history of interaction that contributed to the emergence of specific temporal patterns of network states.

We believe this framework can be extremely useful in improving our understanding of brain circuits and in developing efficient decoders for brain machine interface applications [71]. Despite our focus on spiking neural data analysis, the framework is applicable to many other fields, for example, gene regulatory networks, where the collective role of multiple genes needs to be assessed to characterize an observed phenotype. It can also be useful in developing sensor network coding strategies in which sensors cooperate to encode signals from their surroundings while optimizing their limited resources. In some sense, one can argue that natural brain encoding mechanisms can be extremely insightful in the design of efficient network codes.


This work was supported by the National Institute of Health (NIH) National Institute of Neurological Disorders and Stroke (NINDS) under Grant NS054148.



Mehdi Aghagolzadeh received the B.Sc. degree in electrical engineering with honors from University of Tabriz, Tabriz, Iran, in 2003 and the M.Sc. degree in electrical engineering (with specialty in signal processing and communication systems) from University of Tehran, Tehran, Iran in 2006. Currently, he is working towards the Ph.D. degree at the Department of Electrical and Computer Engineering, Michigan State University, East Lansing.

He has held various academic scholarships throughout his academic career. His research interest includes statistical signal processing, neural engineering, information theory, and system identification.


Seif Eldawlatly received the B.Sc. and M.Sc. degrees in electrical engineering from the Computer and Systems Engineering Department, Ain Shams University, Cairo, Egypt, in 2003 and 2006, respectively. Currently, he is working towards the Ph.D. degree at the Department of Electrical and Computer Engineering, Michigan State University, East Lansing.

His research focuses on utilizing signal processing and machine learning techniques to understand brain connectivity at the neuronal level.


Karim Oweiss (S’95–M’02) received the B.S. and M.S. degrees with honors in electrical engineering from the University of Alexandria, Alexandria, Egypt, in 1993 and 1996, respectively, and the Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, in 2002.

He completed a postdoctoral training with the Biomedical Engineering Department, University of Michigan, in summer 2002.

In August 2002, he joined the Department of Electrical and Computer Engineering and the Neuroscience program at Michigan State University, East Lansing, where he is currently an Associate Professor and Director of the Neural Systems Engineering Laboratory. He is the editor of a forthcoming book on Statistical Signal Processing for Neuroscience (New York: Academic Press, 2010). His research interests span diverse areas that include statistical signal processing, information theory, machine learning, with direct applications to neural integration and coordination in sensorimotor systems, computational neuroscience, and brain machine interfaces.

Prof. Oweiss is a member of the Society for Neuroscience. He serves as a member of the board of directors of the IEEE Signal Processing Society on Brain Machine Interfaces, the technical committees of the IEEE Biomedical Circuits and Systems, the IEEE Life Sciences, and the IEEE Engineering in Medicine and Biology Society. He is an Associate Editor of the IEEE Signal Processing Letters, the Journal of Computational Intelligence and Neuroscience, and the EURASIP Journal on Advances in Signal Processing. He was awarded the excellence in Neural Engineering award from the National Science Foundation in 2001.

Contributor Information

Mehdi Aghagolzadeh, Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824 USA.

Seif Eldawlatly, Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824 USA.

Karim Oweiss, Department of Electrical and Computer Engineering and Neuroscience Program, Michigan State University, East Lansing, MI 48824 USA.


1. Shadlen MN, Newsome WT. The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. J Neurosci. 1998;18:3870–3896. [PubMed]
2. Gilbert CD. Microcircuitry of the visual cortex. Annu Rev Neurosci. 1983;6:217–247. [PubMed]
3. Reich DS, Victor JD, Knight BW, Ozaki T, Kaplan E. Response variability and timing precision of neuronal spike trains in vivo. J Neurophysiol. 1997;77:2836–2841. [PubMed]
4. Eggermont JJ. Function aspects of synchrony and correlation in the auditory nervous system. Concepts Neurosci. 1993;4:105–129.
5. Dzakpasu R, Zochowski M. Discriminating different types of synchrony in neural systems. Physica D. 2005;208:115–122.
6. Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci. 1982;2:1527–1537. [PubMed]
7. Georgopoulos AP, Schwartz AB, Kettner RE. Neuronal population coding of movement direction. Science. 1986;233:1416. [PubMed]
8. Riehle A, Grun S, Diesmann M, Aertsen A. Spike synchronization and rate modulation differentially involved in motor cortical function. Science. 1997;278:1950. [PubMed]
9. Kandel ER, Schwartz JH, Jessell TM. Principles of Neuro-science. New York: McGraw-Hill; 2000.
10. Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W. Spikes: Exploring the Neural Code. Cambridge, MA: MIT Press; 1997.
11. Averbeck BB, Latham P, Pouget A. Neural correlations, population coding and computation. Nature Rev Neurosci. 2006;7:358–366. [PubMed]
12. Montani F, Kohn A, Smith MA, Schultz SR. The role of correlations in direction and contrast coding in the primary visual cortex. J Neurosci. 2007;27:2338. [PubMed]
13. Nirenberg S, Latham PE. Decoding neuronal spike trains: How important are correlations? Proc Nat Acad Sci. 2003;100:7348–7353. [PubMed]
14. Jermakowicz WJ, Casagrande VA. Neural networks a century after Cajal. Brain Res Rev. 2007 Oct;55:264–284. [PMC free article] [PubMed]
15. Wise K, Anderson D, Hetke J, Kipke D, Najafi K. Wireless implantable microsystems: High-density electronic interfaces to the nervous system. Proc IEEE. 2004 Jan;92(1):76–97.
16. Normann R, Maynard EM, Rousche PJ, Warren DJ. A neural interface for a cortical vision prosthesis. Vis Res. 1999;39:2577–2587. [PubMed]
17. Buzsaki G. Large-scale recording of neuronal ensembles. Nature Neurosci. 2004;7:446–451. [PubMed]
18. Nicolelis MAL. Methods for Neural Ensemble Recordings. Boca Raton, FL: CRC Press; 1999.
19. Hebb DO. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley; 1949.
20. Gourevitch B, Eggermont JJ. A nonparametric approach for detection of bursts in spike trains. J Neurosci Methods. 2007;160:349–358. [PubMed]
21. Zohary E, Shadlen MN, Newsome WT. Correlated neuronal discharge rate and its implications for psychophysical performance. Nature. 1994;370:140–143. [PubMed]
22. Salinas E, Sejnowski TJ. Correlated neuronal activity and the flow of neural information. Nat Rev Neurosci. 2001;2:539–550. [PMC free article] [PubMed]
23. Perkel DH, Gerstein G, Moore GP. Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. J Biophys. 1967;7:419–40. [PubMed]
24. Gerstein GL, Perkel DH, Dayhoff J. Cooperative firing activity in simultaneously recorded populations of neurons: Detection and measurement. J Neurosci. 1985;5:881. [PubMed]
25. Smith WS, Fetz EE. Synaptic interactions between forelimb-related motor cortex neurons in behaving primates. J Neurophysiol. 2009 Aug;102:1026–1039. [PubMed]
26. Schneidman E, Bialek W, Berry MJ. Synergy, redundancy, and independence in population codes. J Neurosci. 2003;23:11539–11553. [PubMed]
27. Globerson A, Stark E, Vaadia E, Tishby N. The minimum information principle and its application to neural code analysis. Proc Nat Acad Sci. 2009;106:3490–3495. [PubMed]
28. Brenner N, Strong SP, Koberle R, Bialek W, de Ruyter van Steveninck RR. Synergy in a neural code. Neural Comput. 2000 Jul 1;12:1531–1552. [PubMed]
29. Martignon LG, Deco G, Laskey K, Diamond M, Freiwald W, Vaadia E. Neural coding: Higher-order temporal patterns in the neurostatistics of cell assemblies. Neural Comput. 2000;12:2621–2653. [PubMed]
30. Borst A, Theunissen FE. Information theory and neural coding. Nature Neurosci. 1999;2:947–958. [PubMed]
31. Nelken I, Chechik G. Information theory in auditory research. Hearing Res. 2007;229:94–105. [PubMed]
32. Schneidman E, Berry MJ, II, Segev R, Bialek W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature. 2006;440:1007. [PMC free article] [PubMed]
33. Cover TM, Thomas JA. Elements of Information Theory. New York: Wiley-Interscience; 2006.
34. Studeny M, Vejnarova J. The multiinformation function as a tool for measuring stochastic dependence. Learn Graph Models. 1998:261–297.
35. Van Emden MH. An Analysis of Complexity. Amsterdam, The Netherlands: Mathematisch Centrum; 1971.
36. Schneidman E, Still S, Berry MJ, II, Bialek W. Network information and connected correlations. Phys Rev Lett. 2003;91:238701. [PubMed]
37. Silva J, Willett R. Hypergraph-based anomaly detection of high-dimensional co-occurrences. IEEE Trans Pattern Anal Mach Intell. 2009 Oct;31(10):563–569. [PubMed]
38. London M, Schreibman A, Häusser M, Larkum ME, Segev I. The information efficacy of a synapse. Nature Neurosci. 2002;5:332–340. [PubMed]
39. Kindermann R, Snell JL. Markov Random Fields and Their Applications. Providence, RI: AMS; 1980.
40. Golumbic MC. Algorithmic Graph Theory and Perfect Graphs. New York: Academic; 1980.
41. Shlens J, Field GD, Gauthier JL. The structure of multi-neuron firing patterns in primate retina. J Neurosci. 2006;26:8254. [PubMed]
42. Tang A, Jackson D, Hobbs J, Chen W, Smith JL, Patel H, Prieto A, Petrusca D, Grivich MI, Sher A, Hottowy P, Dabrowski W, Litke AM, Beggs JM. A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J Neurosci. 2008;28:505. [PubMed]
43. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, Simoncelli EP. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008;454:995–999. [PMC free article] [PubMed]
44. Shlens J, Field GD, Gauthier JL, Greschner M, Sher A, Litke AM, Chichilnisky EJ. The structure of large-scale synchronized firing in primate retina. J Neurosci. 2009 Apr 15;29:5022–5031. [PMC free article] [PubMed]
45. Churchland MM, Shenoy KV. Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. J Neurophysiol. 2007;97:4235. [PubMed]
46. Callaway EM. Local circuits in primary visual cortex of the macaque monkey. Annu Rev Neurosci. 1998;21:47–74. [PubMed]
47. Yoshimura Y, Callaway EM. Fine-scale specificity of cortical networks depends on inhibitory cell type and connectivity. Nature Neurosci. 2005 Nov;8:1552–1559. [PubMed]
48. Sanes JN, Donoghue JP. Plasticity and primary motor cortex. Annu Rev Neurosci. 2000;23:393–415. [PubMed]
49. Cohen MR, Newsome WT. Context-dependent changes in functional circuitry in visual area MT. Neuron. 2008 Oct 9;60:162–173. [PMC free article] [PubMed]
50. Azouz R, Gray CM. Dynamic spike threshold reveals a mechanism for synaptic coincidence detection in cortical neurons in vivo. Proc Nat Acad Sci USA. 2000;97:8110–8115. [PubMed]
51. Grün S, Diesmann M, Aertsen A. Unitary events in multiple single-neuron spiking activity: I. Detection and significance. Neural Comput. 2002;14:43–80. [PubMed]
52. Bondy JA, Murty USR. Graph Theory. New York: Springer-Verlag; 2007.
53. Bell AJ. The co-information lattice. In: Amari S, Cichocki A, Makino S, Murata N, editors. Proc. 5th Int. Workshop Ind. Component Anal. Blind Signal Separat. 2003.
54. Crochet S, Petersen CC. Cortical dynamics by layers. Neuron. 2009 Nov 12;64:298–300. [PubMed]
55. Roudi Y, Nirenberg S, Latham P. Pairwise maximum entropy models for studying large biological systems: When they can work and when they can’t. PLoS Comput Biol. 2009;5. [PMC free article] [PubMed]
56. Truccolo W, Eden UT, Fellows MR, Donoghue JP, Brown EN. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J Neurophysiol. 2005 Feb 1;93:1074–1089. [PubMed]
57. Brockwell AE, Kass RE, Schwartz AB. Statistical signal processing and the motor cortex. Proc IEEE. 2007 May;95(5):881–898. [PMC free article] [PubMed]
58. Brillinger D. The identification of point process systems. Ann Probab. 1975;3:909–924.
59. Brown EN, editor. Theory of Point Processes for Neural Systems, ser. Methods and Models in Neurophysics. Paris, France: Elsevier; 2005.
60. Sprekeler H, Michaelis C, Wiskott L. Slowness: An objective for spike-timing-dependent plasticity? PLoS Comput Biol. 2007;3:e112. [PMC free article] [PubMed]
61. Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MAL. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol. 2003;1:e2. [PMC free article] [PubMed]
62. Jackson A, Mavoori J, Fetz EE. Long-term motor cortex plasticity induced by an electronic neural implant. Nature. 2006;444:56–60. [PubMed]
63. Hendry SH, Schwark HD, Jones EG, Yan J. Numbers and proportions of GABA-immunoreactive neurons in different areas of monkey cerebral cortex. J Neurosci. 1987;7:1503–1519. [PubMed]
64. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci. 1998;18:10464–72. [PubMed]
65. Jacob V, Brasier DJ, Erchova I, Feldman D, Shulz DE. Spike timing-dependent synaptic depression in the in vivo barrel cortex of the rat. J Neurosci. 2007;27:1271–1284. [PMC free article] [PubMed]
66. Froemke RC, Dan Y. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature. 2002;416:433–438. [PubMed]
67. Dan Y, Poo MM. Spike timing-dependent plasticity: From synapse to perception. Physiol Rev. 2006;86:1033. [PubMed]
68. Dan Y, Poo MM. Spike timing-dependent plasticity of neural circuits. Neuron. 2004;44:23–30. [PubMed]
69. Palm G, Aertsen AMHJ, Gerstein GL. On the significance of correlations among neuronal spike trains. Biol Cybern. 1988;59:1–11. [PubMed]
70. Histed M, Bonin V, Reid R. Direct activation of sparse, distributed populations of cortical neurons by electrical microstimulation. Neuron. 2009;63(4):508–522. [PMC free article] [PubMed]
71. Aghagolzadeh M, Oweiss K. Compressed and distributed sensing of neuronal activity for real time spike train decoding. IEEE Trans Neural Syst Rehabil Eng. 2009 Apr;17(2):116–127. [PMC free article] [PubMed]