Humans are remarkably proficient at categorizing visually similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity, and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning, we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning, with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex, with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time, these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning.
doi:10.3389/fpsyg.2013.00684
PMCID: PMC3797963
PMID: 24146656
visual category learning; categorization; ventral visual pathway; prefrontal cortex; cortical time course; decoding; MEG; human neuroscience
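The "predictability" analyses above amount to time-resolved decoding of category from sensor data. A toy numerical sketch of that idea, assuming a hypothetical nearest-centroid classifier on a simulated trials × sensors × time array (all data, window choices, and the injected "M200" signal here are illustrative, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: trials x sensors x time samples, two categories.
# A weak category signal is injected around a simulated "M200" window.
n_trials, n_sensors, n_times = 120, 16, 100
labels = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_sensors, n_times))
X[labels == 1, :4, 30:50] += 0.8            # category-specific pattern

def decode_accuracy(X, y, t0, t1, n_folds=5):
    """Cross-validated nearest-centroid decoding within a time window."""
    feats = X[:, :, t0:t1].mean(axis=2)     # average over the window
    order = np.argsort(rng.random(len(y)))  # shuffled indices for folds
    accs = []
    for k in range(n_folds):
        test = order[k::n_folds]
        train = np.setdiff1d(order, test)
        c0 = feats[train][y[train] == 0].mean(axis=0)
        c1 = feats[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(feats[test] - c0, axis=1)
        d1 = np.linalg.norm(feats[test] - c1, axis=1)
        accs.append(np.mean((d1 < d0) == (y[test] == 1)))
    return float(np.mean(accs))

acc_m200 = decode_accuracy(X, labels, 30, 50)  # window containing signal
acc_base = decode_accuracy(X, labels, 0, 20)   # pre-signal baseline
```

On this simulation, accuracy is well above chance only in the window carrying the category signal, mirroring how above-chance predictability localizes category information in time.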
Several authors have previously discussed the use of loglinear models, often called maximum entropy models, for analyzing spike train data to detect synchrony. The usual loglinear modeling techniques, however, do not allow for time-varying firing rates that typically appear in stimulus-driven (or action-driven) neurons, nor do they incorporate non-Poisson history effects or covariate effects. We generalize the usual approach, combining point process regression models of individual-neuron activity with loglinear models of multiway synchronous interaction. The methods are illustrated with results found in spike trains recorded simultaneously from primary visual cortex. We then go on to assess the amount of data needed to reliably detect multiway spiking.
doi:10.1162/NECO_a_00307
PMCID: PMC3374919
PMID: 22509967
functional connectivity; loglinear models; multiple spike train analysis
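The simplest instance of the loglinear approach above is a two-neuron independence test on binned spikes. A minimal sketch with simulated, stationary spike trains (bin width, rates, and the shared-drive construction are all hypothetical; the paper's models are far richer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binned spike trains (1 ms bins) for two neurons sharing a
# common drive, which creates excess synchrony beyond chance coincidences.
n_bins = 20000
common = rng.random(n_bins) < 0.005
x = ((rng.random(n_bins) < 0.02) | common).astype(int)
y = ((rng.random(n_bins) < 0.02) | common).astype(int)

# 2x2 table of joint spike/no-spike outcomes per bin.
table = np.array([[np.sum((x == i) & (y == j)) for j in (0, 1)]
                  for i in (0, 1)], dtype=float)

# Loglinear independence model: expected counts from the margins.
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
G2 = float(2.0 * np.sum(table * np.log(table / expected)))  # LR statistic
synchrony_detected = G2 > 3.84   # chi-square(1) critical value at 5%
```

Rejecting independence here flags excess synchrony; the paper's contribution is extending this logic to time-varying rates, history effects, and multiway interactions.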
Multineuronal recordings have revealed that neurons in primary visual cortex (V1) exhibit coordinated fluctuations of spiking activity in the absence and in the presence of visual stimulation. From the perspective of understanding a single cell’s spiking activity relative to a behavior or stimulus, these network fluctuations are typically considered to be noise. We show that these events are highly correlated with another commonly recorded signal, the local field potential (LFP), and are also likely related to global network state phenomena which have been observed in a number of neural systems. Moreover, we show that attributing a component of cell firing to these network fluctuations via explicit modeling of the LFP improves the recovery of cell properties. This suggests that the impact of network fluctuations may be estimated using the LFP, and that a portion of this network activity is unrelated to the stimulus and instead reflects ongoing cortical activity. Thus, the LFP acts as an easily accessible bridge between the network state and the spiking activity.
doi:10.1007/s10827-009-0208-9
PMCID: PMC3604740
PMID: 20094906
Local field potential; correlation; network state; spontaneous activity; multielectrode array; decoding; population coding
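The core claim above can be sketched numerically: if a latent network state drives both the LFP and spiking, then adding the LFP as a covariate explains spiking variability beyond the stimulus alone. A toy simulation (all signals and coupling strengths are hypothetical illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical simulation: a slow "network state" drives both the LFP and
# a unit's spiking, on top of a stimulus-locked firing rate (10 ms bins).
n_bins = 5000
state = np.convolve(rng.normal(size=n_bins), np.ones(50) / 50, mode="same")
state = (state - state.mean()) / state.std()
stimulus_rate = 5.0 + 3.0 * np.sin(np.arange(n_bins) * 2 * np.pi / 500)
spikes = rng.poisson(stimulus_rate * np.exp(0.5 * state) * 0.01)
lfp = -state + 0.3 * rng.normal(size=n_bins)   # LFP reflects network state

def resid_var(X, y):
    """Residual variance after least-squares regression of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.var(y - X @ beta))

ones = np.ones(n_bins)
v_stim = resid_var(np.column_stack([ones, stimulus_rate]), spikes)
v_both = resid_var(np.column_stack([ones, stimulus_rate, lfp]), spikes)
lfp_spike_corr = float(np.corrcoef(lfp, spikes)[0, 1])
```

In this simulation the LFP correlates with spiking and reduces the unexplained variance, which is the sense in which it serves as a bridge to the network state.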
Identifying brain regions with high differential response under multiple experimental conditions is a fundamental goal of functional imaging. In many studies, regions of interest (ROIs) are not determined a priori but are instead discovered from the data, a process that requires care because of the great potential for false discovery. An additional challenge is that magnetoencephalography/electroencephalography sensor signals are very noisy, and brain source images are usually produced by averaging sensor signals across trials. As a consequence, for a given subject, there is only one source data vector for each condition, making it impossible to apply testing methods such as analysis of variance. We solve these problems in several steps. (1) To obtain within-condition uncertainty, we apply the bootstrap across trials, producing many bootstrap source images. (2) To discover ‘hot spots’ in space and time that could become ROIs, we find source locations where likelihood ratio statistics take unusually large values. (3) Because we are not interested in isolated brain locations where a test statistic might happen to be large, we apply a clustering algorithm to identify sources that are contiguous in space and time where the test statistic takes an ‘excursion’ above some threshold. (4) Having identified possible spatiotemporal ROIs, we evaluate the global statistical significance of each ROI by using a permutation test. After these steps, we check performance via simulation and then illustrate the approach in a magnetoencephalography study of four-direction center-out wrist movement, showing that it identifies statistically significant spatiotemporal ROIs in the motor and visual cortices of individual subjects.
doi:10.1002/sim.4309
PMCID: PMC3298091
PMID: 21786281
ROI; global statistical significance; spatiotemporal clustering; bootstrap; MEG/EEG; source localization
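Steps (3) and (4) above — excursion clustering plus a permutation test for global significance — can be sketched on a one-dimensional toy "source" time series (trial counts, thresholds, and the cluster-mass statistic are hypothetical simplifications of the paper's spatiotemporal procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical toy data: trials x time, two conditions, with a genuine
# difference confined to samples 40-60 of condition B.
n_trials, n_times = 40, 100
A = rng.normal(size=(n_trials, n_times))
B = rng.normal(size=(n_trials, n_times))
B[:, 40:60] += 1.0

def max_cluster_mass(a, b, threshold=2.0):
    """Largest contiguous excursion of the two-sample t statistic."""
    diff = a.mean(0) - b.mean(0)
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    t = np.abs(diff / se)
    best, run = 0.0, 0.0
    for v in t:
        run = run + v if v > threshold else 0.0
        best = max(best, run)
    return best

observed = max_cluster_mass(A, B)

# Permutation test: shuffle condition labels across trials to build the
# null distribution of the maximum cluster mass.
pooled = np.vstack([A, B])
null = []
for _ in range(500):
    perm = rng.permutation(2 * n_trials)
    null.append(max_cluster_mass(pooled[perm[:n_trials]],
                                 pooled[perm[n_trials:]]))
p_value = float((np.sum(np.array(null) >= observed) + 1) / (len(null) + 1))
```

Using the maximum cluster statistic over permutations controls for the multiplicity of locations and times, which is why isolated large test statistics do not produce false ROIs.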
Neural spike trains, which are sequences of very brief jumps in voltage across the cell membrane, were one of the motivating applications for the development of point process methodology. Early work required the assumption of stationarity, but contemporary experiments often use time-varying stimuli and produce time-varying neural responses. More recently, many statistical methods have been developed for nonstationary neural point process data. There has also been much interest in identifying synchrony, meaning events across two or more neurons that are nearly simultaneous at the time scale of the recordings. A natural statistical approach is to discretize time, using short time bins, and to introduce loglinear models for dependency among neurons, but previous use of loglinear modeling technology has assumed stationarity. We introduce a succinct yet powerful class of time-varying loglinear models by (a) allowing individual-neuron effects (main effects) to involve time-varying intensities; (b) also allowing the individual-neuron effects to involve autocovariation effects (history effects) due to past spiking; (c) assuming excess synchrony effects (interaction effects) do not depend on history; and (d) assuming all effects vary smoothly across time. Using data from primary visual cortex of an anesthetized monkey we give two examples in which the rate of synchronous spiking cannot be explained by stimulus-related changes in individual-neuron effects. In one example, the excess synchrony disappears when slow-wave “up” states are taken into account as history effects, while in the second example it does not. Standard point process theory explicitly rules out synchronous events. To justify our use of continuous-time methodology we introduce a framework that incorporates synchronous events and provides continuous-time loglinear point process approximations to discrete-time loglinear models.
doi:10.1214/10-AOAS429
PMCID: PMC3152213
PMID: 21837263
Discrete-time approximation; loglinear model; marked process; nonstationary point process; simultaneous events; spike train; synchrony detection
Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I suggest that a philosophy compatible with statistical practice, labelled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mischaracterize the process of statistical inference, and I propose an alternative “big picture” depiction.
doi:10.1214/10-STS337
PMCID: PMC3153074
PMID: 21841892
Bayesian; confidence; frequentist; statistical education; statistical pragmatism; statistical significance
The activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on the global context of the stimulus, as well as the neural context. This suggests that the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1-regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode “Utah” array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron’s response, in addition to the neuron’s receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions.
PMCID: PMC3235005
PMID: 22162918
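An L1-regularized point process (Poisson) regression of the kind described above can be sketched with proximal gradient descent; the data, penalty weight, and step size below are hypothetical illustrations, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: predict one unit's binned spike counts from 20
# covariates (e.g. other units' spikes); only 3 of them truly matter.
n, p = 2000, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [0.5, -0.4, 0.3]
y = rng.poisson(np.exp(-1.0 + X @ beta_true))

def l1_poisson(X, y, lam=30.0, lr=2e-4, iters=8000):
    """Proximal gradient (ISTA) for L1-penalized Poisson regression."""
    b0, beta = 0.0, np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(b0 + X @ beta)                # model firing rate
        b0 -= lr * np.sum(mu - y)                 # unpenalized intercept
        beta -= lr * (X.T @ (mu - y))             # gradient step
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return b0, beta

b0_hat, beta_hat = l1_poisson(X, y)
```

The soft-thresholding step drives irrelevant coefficients toward zero, which is what makes the L1 penalty useful for sorting out which of many simultaneously recorded units actually predict a target neuron's firing.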
In the neocortex, neurons participate in epochs of elevated activity, or Up states, during periods of quiescent wakefulness, slow-wave sleep, and general anesthesia. The regulation of firing during and between Up states is of great interest because it can reflect the underlying connectivity and excitability of neurons within the network. Automated analysis of the onset and characteristics of Up state firing across different experiments and conditions requires a robust and accurate method for Up state detection. Using measurements of membrane potential mean and variance calculated from whole-cell recordings of neurons from control and postseizure tissue, the authors have developed such a method. This quantitative and automated method is independent of cell- or condition-dependent variability in underlying noise or tonic firing activity. Using this approach, the authors show that Up state frequency and firing rates are significantly increased in layer 2/3 neocortical neurons 24 hours after chemo-convulsant-induced seizure. Down states in postseizure tissue show greater membrane-potential variance characterized by increased synaptic activity. Previously, the authors have found that postseizure increase in excitability is linked to a gain-of-function in BK channels, and blocking BK channels in vitro and in vivo can decrease excitability and eliminate seizures. Thus, the authors also assessed the effect of BK-channel antagonists on Up state properties in control and postseizure neurons. These data establish a robust and broadly applicable algorithm for Up state detection and analysis, provide a quantitative description of how prior seizures increase spontaneous firing activity in cortical networks, and show how BK-channel antagonists reduce this abnormal activity.
doi:10.1097/WNP.0b013e3181fdf8bd
PMCID: PMC3150741
PMID: 21127407
epilepsy; seizure; Up state; BK channels; classification
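The detection idea above — flagging epochs where both the local mean and variance of the membrane potential exceed thresholds — can be sketched on a simulated trace (voltages, window length, and thresholds are hypothetical; the paper's method sets these quantitatively from the recordings):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical membrane potential (mV, 1 kHz): Down state near -70 mV,
# with two depolarized "Up" epochs near -55 mV carrying extra synaptic noise.
n = 10000
vm = -70.0 + 0.5 * rng.normal(size=n)
for start, stop in [(2000, 3500), (6000, 8000)]:
    vm[start:stop] = -55.0 + 2.0 * rng.normal(size=stop - start)

def detect_up_states(vm, win=200, mean_thresh=-62.0, var_thresh=1.0):
    """Flag samples whose local mean and variance both exceed thresholds."""
    kernel = np.ones(win) / win
    local_mean = np.convolve(vm, kernel, mode="same")
    local_var = np.convolve(vm**2, kernel, mode="same") - local_mean**2
    return (local_mean > mean_thresh) & (local_var > var_thresh)

up = detect_up_states(vm)
frac_up = float(up.mean())
```

Requiring both statistics to cross threshold makes the detector robust to tonic depolarization alone or to noisy-but-hyperpolarized segments, in the spirit of the automated method described above.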
State-space models provide an important body of techniques for analyzing time series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace’s method, an asymptotic series expansion, to approximate the state’s conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error that is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
doi:10.1198/jasa.2009.tm08326
PMCID: PMC3132892
PMID: 21753862
Laplace’s method; recursive Bayesian estimation; neural decoding
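A one-dimensional sketch of the Laplace-Gaussian idea: at each time step, find the posterior mode by Newton's method and use the curvature there as the Gaussian variance. The random-walk state model and Poisson observation model below are hypothetical stand-ins for the neural decoding setting (the paper also develops higher-order expansions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 1D state-space model: random-walk state, Poisson counts
# whose log rate is linear in the state.
T, q, c, b = 200, 0.01, 1.0, 1.0
x = np.cumsum(np.sqrt(q) * rng.normal(size=T))   # latent state path
y = rng.poisson(np.exp(b + c * x))               # observed spike counts

def laplace_gaussian_filter(y, q, c, b, m0=0.0, v0=1.0):
    """1D LGF: recursive Gaussian posterior via Laplace's method."""
    means, m, v = [], m0, v0
    for yt in y:
        v_pred = v + q                           # predict step
        m_hat = m                                # start Newton at prior mean
        for _ in range(20):                      # climb to the posterior mode
            lam = np.exp(b + c * m_hat)
            grad = -(m_hat - m) / v_pred + c * (yt - lam)
            hess = -1.0 / v_pred - c**2 * lam
            m_hat -= grad / hess
        means.append(m_hat)
        # Variance = inverse of the negative log-posterior curvature.
        m, v = m_hat, 1.0 / (1.0 / v_pred + c**2 * np.exp(b + c * m_hat))
    return np.array(means)

m_est = laplace_gaussian_filter(y, q, c, b)
rmse = float(np.sqrt(np.mean((m_est - x) ** 2)))
```

Because every step is a small deterministic optimization rather than a particle population, the filter is fast and reproducible, which is the trade-off the abstract highlights against sequential Monte Carlo.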
Indirect evidence is crucial for successful statistical practice. Sometimes, however, it is better used informally. Future efforts should be directed toward understanding better the connection between statistical methods and scientific problems.
doi:10.1214/10-STS308C
PMCID: PMC3123034
PMID: 21709739
Bayesian; decision theory; prior information; statistical pragmatism; statistical science
Typically, tuning curves in motor cortex are constructed by fitting the firing rate of a neuron as a function of some observed action, like arm direction or movement speed. These tuning curves are then often interpreted causally, as representing the firing rate as a function of the desired movement, or intent. This interpretation implicitly assumes that the motor command and the motor act are equivalent. However, any kind of perturbation, be it external, like a visuomotor rotation, or internal, like muscle fatigue, can create a difference between the motor intent and the action. How do we estimate the tuning curve under these conditions? Furthermore, it is well known that during learning or adaptation, the relationship between neural firing and the observed movement can change. Does this change indicate a change in the inputs to the population, or a change in the way those inputs are processed?
In this work, we present a method to infer the latent, unobserved inputs into the population of recorded neurons. Using data from non-human primates performing brain-computer interface experiments, we show that tuning curves based on these latent directions fit better than tuning curves based on actual movements. Finally, using data from a brain-computer interface learning experiment in which half of the units were decoded incorrectly, we demonstrate how this method might differentiate various aspects of motor adaptation.
doi:10.1523/JNEUROSCI.2325-10.2010
PMCID: PMC2970932
PMID: 20943928
motor control; motor command; brain machine interface; neural prosthetics
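The tuning curves discussed above are typically cosine-shaped, and fitting one is a linear regression in disguise since b0 + m·cos(θ − pd) = b0 + a·cos θ + c·sin θ. A minimal sketch on simulated data (the unit's parameters and noise level are hypothetical; the paper's contribution is fitting such curves to latent rather than observed directions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical cosine-tuned unit: rate = b0 + m * cos(theta - pd).
n = 500
theta = rng.uniform(0, 2 * np.pi, n)     # movement (or intended) direction
pd_true, b0_true, m_true = 1.2, 20.0, 8.0
rate = b0_true + m_true * np.cos(theta - pd_true) + 2.0 * rng.normal(size=n)

# Linear in (cos theta, sin theta), so fit by ordinary least squares.
X = np.column_stack([np.ones(n), np.cos(theta), np.sin(theta)])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
b0_hat = float(beta[0])
m_hat = float(np.hypot(beta[1], beta[2]))      # modulation depth
pd_hat = float(np.arctan2(beta[2], beta[1]))   # preferred direction
```

Swapping `theta` from the observed movement to an inferred latent input direction, as the abstract describes, leaves this fitting machinery unchanged; only the regressor changes.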
The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron’s preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. In contrast, this performance difference may not be present in closed-loop, on-line control.
The obvious difference between open- and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) subjects are able to compensate for certain types of bias in decoders, and (3) care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.
doi:10.1016/j.neunet.2009.05.005
PMCID: PMC2783655
PMID: 19502004
Neural prosthetics; decoding algorithms; brain-machine interface
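The open-loop PVA-versus-OLE comparison above can be sketched in simulation: when preferred directions are not uniformly distributed, the naive population vector is biased while a least-squares (OLE-style) readout corrects for it. All population parameters below are hypothetical, including the assumption that the PVA knows the true baseline rate:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical cosine-tuned population with preferred directions bunched
# in a quarter-circle, which systematically biases the population vector.
n_units, n_trials = 30, 400
pds = rng.uniform(0, np.pi / 2, n_units)
moves = rng.uniform(0, 2 * np.pi, n_trials)

def rates(dirs):
    return 10.0 + 5.0 * np.cos(dirs[:, None] - pds[None, :])

F = rates(moves) + rng.normal(size=(n_trials, n_units))

def pva(F):
    """Each unit votes along its PD, weighted by baseline-subtracted rate."""
    w = F - 10.0                       # assumes the true baseline is known
    return np.arctan2(w @ np.sin(pds), w @ np.cos(pds))

# OLE-style decoder: least-squares map from rates to (cos, sin) of direction.
train_dirs = np.linspace(0, 2 * np.pi, 100, endpoint=False)
Ftr = rates(train_dirs) + rng.normal(size=(100, n_units))
target = np.column_stack([np.cos(train_dirs), np.sin(train_dirs)])
W, *_ = np.linalg.lstsq(np.column_stack([np.ones(100), Ftr]), target,
                        rcond=None)

def ole(F):
    pred = np.column_stack([np.ones(len(F)), F]) @ W
    return np.arctan2(pred[:, 1], pred[:, 0])

def angular_err(est, true):
    return float(np.mean(np.abs(np.angle(np.exp(1j * (est - true))))))

err_pva = angular_err(pva(F), moves)
err_ole = angular_err(ole(F), moves)
```

This reproduces the off-line gap the abstract describes; the paper's point is that this open-loop gap need not survive once the subject adapts to the decoder in closed-loop control.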
Information estimates such as the “direct method” of Strong et al. (1998) sidestep the difficult problem of estimating the joint distribution of response and stimulus by instead estimating the difference between the marginal and conditional entropies of the response. While this is an effective estimation strategy, it tempts the practitioner to ignore the role of the stimulus and the meaning of mutual information. We show here that, as the number of trials increases indefinitely, the direct (or “plug-in”) estimate of marginal entropy converges (with probability 1) to the entropy of the time-averaged conditional distribution of the response, and the direct estimate of the conditional entropy converges to the time-averaged entropy of the conditional distribution of the response. Under joint stationarity and ergodicity of the response and stimulus, the difference of these quantities converges to the mutual information. When the stimulus is deterministic or non-stationary the direct estimate of information no longer estimates mutual information, which is no longer meaningful, but it remains a measure of variability of the response distribution across time.
doi:10.1162/neco.2008.01-08-700
PMCID: PMC2703442
PMID: 18928371
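The plug-in ("direct method") estimate described above is simple to state in code: the marginal entropy of the time-averaged response distribution minus the time-averaged entropy of the per-time conditional distributions. A toy discrete example (symbol alphabet, trial counts, and the randomly drawn conditional distributions are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical discrete responses: at each of 20 time points of a repeated
# (frozen) stimulus, the response follows a time-specific distribution.
n_trials, n_times, n_symbols = 2000, 20, 4
probs = rng.dirichlet(np.ones(n_symbols), size=n_times)
resp = np.array([rng.choice(n_symbols, size=n_trials, p=p_t)
                 for p_t in probs])            # shape: (n_times, n_trials)

def plug_in_entropy(counts):
    """Plug-in entropy (bits) of an empirical count vector."""
    q = counts / counts.sum()
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

# Direct method: marginal entropy of the pooled (time-averaged) responses,
# minus the average per-time (conditional) entropy.
H_marginal = plug_in_entropy(np.bincount(resp.ravel(), minlength=n_symbols))
H_cond = float(np.mean([plug_in_entropy(np.bincount(resp[t],
                                                    minlength=n_symbols))
                        for t in range(n_times)]))
info = H_marginal - H_cond                    # bits per symbol
```

With many trials this difference converges as the abstract states; whether it estimates mutual information depends on the stimulus being jointly stationary and ergodic with the response.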
Summary
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable, with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure.
We illustrate our approach on a sleep EEG study that requires estimation of a 24 × 24 covariance matrix and for which inferences on mean parameters critically depend on the covariance estimator chosen. We recommend making inference using a particular shrinkage estimator that provides a reasonable compromise between structured and unstructured estimators.
PMCID: PMC2748251
PMID: 11764258
Empirical Bayes; General linear model; Givens angles; Hierarchical prior; Longitudinal data
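The second shrinkage approach above — pulling an unstructured sample covariance toward a structured (here, compound-symmetry) target — can be sketched as follows. The dimensions echo the 24 × 24 EEG example, but the data, and in particular the fixed shrinkage weight, are hypothetical; the paper chooses the amount of shrinkage from the data:

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical small-sample problem: p = 24 channels, only n = 30 samples,
# true covariance with compound symmetry (equal variances, equal correlation).
p, n, rho = 24, 30, 0.3
Sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
L = np.linalg.cholesky(Sigma)
X = rng.normal(size=(n, p)) @ L.T
S = np.cov(X, rowvar=False)                  # unstructured estimate

# Structured target: compound symmetry fit from the sample.
var_hat = float(np.mean(np.diag(S)))
rho_hat = float(np.mean(S[~np.eye(p, dtype=bool)]) / var_hat)
target = var_hat * (rho_hat * np.ones((p, p)) + (1 - rho_hat) * np.eye(p))

alpha = 0.5                                  # illustrative shrinkage weight
S_shrunk = alpha * target + (1 - alpha) * S

def frob_err(A):
    return float(np.linalg.norm(A - Sigma))

err_raw, err_shrunk = frob_err(S), frob_err(S_shrunk)
```

When the structure is approximately right, the blended estimator sits closer to the truth than the unstable unstructured estimate, which is the compromise the abstract recommends.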
BARS (DiMatteo, Genovese, and Kass 2001) uses the powerful reversible-jump MCMC engine to perform spline-based generalized nonparametric regression. It has been shown to work well in terms of having small mean-squared error in many examples (smaller than known competitors), as well as producing visually-appealing fits that are smooth (filtering out high-frequency noise) while adapting to sudden changes (retaining high-frequency signal). However, BARS is computationally intensive. The original implementation in S was too slow to be practical in certain situations, and was found to handle some data sets incorrectly. We have implemented BARS in C for the normal and Poisson cases, the latter being important in neurophysiological and other point-process applications. The C implementation includes all needed subroutines for fitting Poisson regression, manipulating B-splines (using code created by Bates and Venables), and finding starting values for Poisson regression (using code for density estimation created by Kooperberg). The code utilizes only freely-available external libraries (LAPACK and BLAS) and is otherwise self-contained. We have also provided wrappers so that BARS can be used easily within S or R.
PMCID: PMC2748880
PMID: 19777145
curve-fitting; free-knot splines; nonparametric regression; peri-stimulus time histogram; Poisson process
Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not very easy to apply to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons.
With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will accurately describe the LIF spike train variability. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit.
doi:10.1162/neco.2008.06-07-540
PMCID: PMC2715549
PMID: 18336078
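Generating the kind of stimulus-driven LIF spike trains analyzed above is straightforward; a minimal Euler-integration sketch (membrane parameters, noise level, and the sinusoidal drive are hypothetical choices, not the paper's simulation settings):

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical leaky integrate-and-fire neuron with a time-varying,
# suprathreshold sinusoidal drive (1 ms Euler steps).
T, dt = 5000, 1.0                    # duration and step (ms)
tau, v_rest, v_thresh, v_reset = 20.0, 0.0, 1.0, 0.0
stim = 1.1 + 0.3 * np.sin(2 * np.pi * np.arange(T) / 1000.0)

v, spikes = v_rest, []
for t in range(T):
    # Leak toward rest, push from the stimulus, plus membrane noise.
    v += (dt / tau) * (v_rest - v + stim[t]) + 0.1 * rng.normal()
    if v >= v_thresh:                # threshold crossing -> spike and reset
        spikes.append(t)
        v = v_reset

isis = np.diff(spikes)               # interspike intervals (ms)
```

Because the drive varies in time, the resulting interspike intervals depend on both clock time and time since the last spike, which is exactly the regime in which the m-IMI and TRRP models differ.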