The spatial variation of the extracellular action potentials (EAP) of a single neuron contains information about the size and location of the dominant current source of its action potential generator, which is typically in the vicinity of the soma. Using this dependence in reverse in a three-component realistic probe + brain + source model, we solved the inverse problem of characterizing the equivalent current source of an isolated neuron from the EAP data sampled by an extracellular probe at multiple independent recording locations. We used a dipole for the model source because there is extensive evidence that it accurately captures the spatial roll-off of the EAP amplitude, and because, as we show, dipole localization, beyond a minimum cell-probe distance, is a more accurate alternative to approaches based on monopole source models. Dipole characterization is separable into a linear optimization of the dipole moment with the dipole location fixed, and a second, nonlinear, global optimization of the source location. We solved the linear optimization on a discrete grid via the lead fields of the probe, which can be calculated for any realistic probe + brain model by the finite element method. The global source location was optimized by means of Tikhonov regularization that jointly minimizes model error and dipole size. The particular strategy chosen reflects the fact that the dipole model is used in the near field, in contrast to typical prior applications of dipole models to EKG and EEG source analysis. We applied dipole localization to data collected with stepped tetrodes whose detailed geometry was measured via scanning electron microscopy. The optimal dipole could account for 96% of the power in the spatial variation of the EAP amplitude. Among the various model error contributions to the residual, we focus in particular on the error in probe geometry and the extent to which it biases estimates of dipole parameters.
This dipole characterization method can be applied to any recording technique that has the capabilities of taking multiple independent measurements of the same single units.
Multisite recording; Inverse problem; Passive conductor model; Lead field theory; Finite element method (FEM)
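The separable structure described above lends itself to a compact sketch. The following is a generic illustration of the linear step, a Tikhonov-regularized least-squares fit of the dipole moment at one fixed candidate location, assuming a known lead-field matrix; the function name and the synthetic data are hypothetical, not the authors' implementation:

```python
import numpy as np

def fit_dipole_moment(lead_field, eap_amplitudes, lam=1e-2):
    """Tikhonov-regularized least squares for the dipole moment m at a
    fixed candidate location: minimize ||L m - v||^2 + lam * ||m||^2,
    solved in closed form as m = (L^T L + lam I)^(-1) L^T v."""
    L = np.asarray(lead_field, float)      # (n_sites, 3) lead-field matrix
    v = np.asarray(eap_amplitudes, float)  # (n_sites,) measured amplitudes
    A = L.T @ L + lam * np.eye(L.shape[1])
    m = np.linalg.solve(A, L.T @ v)
    return m, v - L @ m                    # moment estimate and residual

# Hypothetical example: 4 recording sites, noiseless synthetic data.
rng = np.random.default_rng(0)
L = rng.normal(size=(4, 3))
m_true = np.array([1.0, -0.5, 0.2])
m_hat, res = fit_dipole_moment(L, L @ m_true, lam=1e-8)
```

The nonlinear step would then repeat this fit over a grid of candidate locations and select the location that minimizes the jointly penalized residual and moment norm.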
A methodology for nonlinear modeling of multi-input multi-output (MIMO) neuronal systems is presented that utilizes the concept of Principal Dynamic Modes (PDM). The efficacy of this new methodology is demonstrated in the study of the dynamic interactions between neuronal ensembles in the Pre-Frontal Cortex (PFC) of a behaving non-human primate (NHP) performing a Delayed Match-to-Sample task. Recorded spike trains from Layer-2 and Layer-5 neurons were viewed as the “inputs” and “outputs”, respectively, of a putative MIMO system/model that quantifies the dynamic transformation of multi-unit neuronal activity between Layer-2 and Layer-5 of the PFC. Model prediction performance was evaluated by means of computed Receiver Operating Characteristic (ROC) curves. The PDM-based approach seeks to reduce the complexity of MIMO models of neuronal ensembles in order to enable the practicable modeling of large-scale neural systems incorporating hundreds or thousands of neurons, which is emerging as a preeminent issue in the study of neural function. The “scaling-up” issue has attained critical importance as multi-electrode recordings are increasingly used to probe neural systems and advance our understanding of integrated neural function. The initial results indicate that the PDM-based modeling methodology may greatly reduce the complexity of the MIMO model without significant degradation of performance. Furthermore, the PDM-based approach offers the prospect of improved biological/physiological interpretation of the obtained MIMO models.
Multi-input multi-output neuronal systems; Pre-frontal cortex; Dynamic modeling; Nonlinear modeling; Principal Dynamic Modes; Volterra modeling
Although associational/commissural (A/C) and perforant path (PP) inputs to CA3b pyramidal cells play a central role in hippocampal mnemonic functions, the active and passive processes that shape A/C and PP AMPA and NMDA receptor-mediated unitary EPSPs/EPSCs (AMPA and NMDA uEPSP/uEPSC) have not yet been fully characterized. Here we find no differences in somatic amplitude between A/C and PP for either AMPA or NMDA uEPSPs. However, larger AMPA uEPSCs were evoked from proximal than from distal A/C or PP. Given the space-clamp constraints in CA3 pyramidal cells, these voltage clamp data suggest that the location-independence of A/C and PP AMPA uEPSP amplitudes is achieved in part through the activation of voltage-dependent conductances at or near the soma. Moreover, the similarity in uEPSC amplitudes for distal A/C and PP points to the additional participation of unclamped active conductances. Indeed, pharmacological blockade of voltage-dependent conductances eliminates the location-independence of these inputs. In contrast, the location-independence of A/C and PP NMDA uEPSP/uEPSC amplitudes is maintained across all conditions, indicating that propagation is not affected by active membrane processes. The location-independence of A/C uEPSP amplitudes may be relevant in the recruitment of CA3 pyramidal cells by other CA3 pyramidal cells. These data also suggest that PP excitation represents a significant input to CA3 pyramidal cells. The implications of the passive data for local synaptic properties are further investigated in the companion paper with a detailed computational model.
AMPA receptor; NMDA receptor; Hippocampus
Most neurons in the primary visual cortex initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. The functional consequences of adaptation are unclear. Typically, a reduction of firing rate would reduce single-neuron accuracy because fewer spikes are available for decoding, but it has been suggested that at the population level, adaptation increases coding accuracy. This question requires careful analysis, as adaptation not only changes the firing rates of neurons but also the neural variability and the correlations between neurons, which affect coding accuracy as well. We calculate the coding accuracy using a computational model that implements two forms of adaptation: spike-frequency adaptation and synaptic adaptation in the form of short-term synaptic plasticity. We find that the net effect of adaptation is subtle and heterogeneous. Depending on the adaptation mechanism and test stimulus, adaptation can either increase or decrease coding accuracy. We discuss the neurophysiological and psychophysical implications of the findings and relate them to published experimental data.
Visual adaptation; Primary visual cortex; Population coding; Fisher Information; Cortical circuit; Computational model; Short-term synaptic depression; Spike-frequency adaptation
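The population coding accuracy referred to above is commonly quantified by Fisher information, which for an independent-Poisson population takes the simple form I(s) = Σᵢ fᵢ′(s)²/fᵢ(s). A minimal sketch follows; the Gaussian-shaped tuning curves and the pure gain change standing in for adaptation are illustrative assumptions, not the model of the paper:

```python
import numpy as np

def fisher_information(rates_at_s, deriv_at_s):
    """Fisher information of an independent-Poisson population:
    I(s) = sum_i f_i'(s)^2 / f_i(s)."""
    f = np.asarray(rates_at_s, float)
    df = np.asarray(deriv_at_s, float)
    return float(np.sum(df ** 2 / f))

# Illustrative population: Gaussian-like tuning on a ring of preferences.
prefs = np.linspace(-np.pi, np.pi, 32, endpoint=False)

def rates(s, gain=1.0):
    return gain * np.exp(np.cos(s - prefs) - 1.0) + 1e-9  # small floor

def d_rates(s, gain=1.0, h=1e-5):
    return (rates(s + h, gain) - rates(s - h, gain)) / (2 * h)

s = 0.3
I_full = fisher_information(rates(s), d_rates(s))
I_adapted = fisher_information(rates(s, gain=0.5), d_rates(s, gain=0.5))
```

Under this idealization, halving the gain simply halves the Fisher information; the point of the study above is that real adaptation also alters variability and correlations, so the net effect need not follow such a clean scaling.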
The mechanism of axonal conduction block induced by ultra-high frequency (≥20 kHz) biphasic electrical current was investigated using a lumped circuit model of the amphibian myelinated axon based on Frankenhaeuser-Huxley (FH) equations. The ultra-high frequency stimulation produces constant activation of both sodium and potassium channels at the axonal node under the block electrode causing the axonal conduction block. This blocking mechanism is different from the mechanism when the stimulation frequency is between 4 kHz and 10 kHz, where only the potassium channel is constantly activated. The minimal stimulation intensity required to induce a conduction block increases as the stimulation frequency increases. The results from this simulation study are useful to guide future animal experiments to reveal the different mechanisms underlying nerve conduction block induced by high-frequency biphasic electrical current.
Axon; electrical stimulation; high frequency; model; nerve block
Using two-cell and 50-cell networks of square-wave bursters, we studied how excitatory coupling of individual neurons affects the bursting output of the network. Our results show that the effects of synaptic excitation vs. electrical coupling are distinct. Increasing excitatory synaptic coupling generally increases burst duration. Electrical coupling also increases burst duration at low to moderate values, but at sufficiently strong values promotes a switch to highly synchronous bursts, where further increases in electrical or synaptic coupling have a minimal effect on burst duration. These effects are largely mediated by spike synchrony, which is determined by the stability of the in-phase spiking solution during the burst. Even when both coupling mechanisms are strong, one form of spike synchrony (in-phase or anti-phase) will determine the burst dynamics, resulting in a sharp boundary in the space of the coupling parameters. This boundary exists in both two-cell and network simulations. We use these results to interpret the effects of gap-junction blockers on the neuronal circuitry that underlies respiration.
Pacemaker neuron; Square-wave bursting; Synchronization; Bifurcation analysis
Gamma oscillations can synchronize with near zero phase lag over multiple cortical regions and between hemispheres, and between two distal sites in hippocampal slices. How synchronization can take place over long distances in a stable manner is considered an open question. The phase resetting curve (PRC) keeps track of how much an input advances or delays the next spike, depending upon where in the cycle it is received. We use PRCs under the assumption of pulsatile coupling to derive existence and stability criteria for 1:1 phase-locking that arises via bidirectional pulse coupling of two limit cycle oscillators with a conduction delay of any duration for any 1:1 firing pattern. The coupling can be strong as long as the effect of one input dissipates before the next input is received. We show the form that the generic synchronous and anti-phase solutions take in a system of two identical, identically pulse-coupled oscillators with identical delays. The stability criterion has a simple form that depends only on the slopes of the PRCs at the phases at which inputs are received and on the number of cycles required to complete the delayed feedback loop. The number of cycles required to complete the delayed feedback loop depends upon both the value of the delay and the firing pattern. We successfully tested the predictions of our methods on networks of model neurons. The criteria can easily be extended to include the effect of an input on the cycle after the one in which it is received.
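For the simplest case treated by such methods, two pulse-coupled oscillators whose feedback loop closes within one cycle, the stability criterion reduces to a product of PRC slopes at the phases where the inputs arrive. A sketch of that check follows; this is an illustration only, since the full criterion above also accounts for the number of cycles spanned by the delayed loop:

```python
def locking_stability(prc_slope_1, prc_slope_2):
    """Linearized return-map eigenvalue for a 1:1 locked solution of two
    pulse-coupled oscillators: lam = (1 - f1'(phi1)) * (1 - f2'(phi2)),
    with the PRC slopes evaluated at the phases at which each oscillator
    receives its input; |lam| < 1 implies local stability.
    (Assumes each input's effect dissipates before the next arrives.)"""
    lam = (1.0 - prc_slope_1) * (1.0 - prc_slope_2)
    return lam, abs(lam) < 1.0

# Shallow PRC slopes at the locked phases -> stable locking.
lam, stable = locking_stability(0.3, 0.4)
# Steep slopes (> 2) at both input phases destabilize the locked mode.
lam2, stable2 = locking_stability(2.5, 2.5)
```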
Central pattern generators (CPGs) frequently include bursting neurons that serve as pacemakers for rhythm generation. Phase resetting curves (PRCs) can provide insight into mechanisms underlying phase locking in such circuits. PRCs were constructed for a pacemaker bursting complex in the pyloric circuit in the stomatogastric ganglion of the lobster and crab. This complex comprises the Anterior Burster (AB) neuron and two Pyloric Dilator (PD) neurons, all electrically coupled. Artificial excitatory synaptic conductance pulses of different strengths and durations were injected into one of the AB or PD somata using the Dynamic Clamp. Previously, we characterized the inhibitory PRCs by assuming a single slow process that enabled synaptic inputs to trigger switches between an up state, in which spiking occurs, and a down state, in which it does not. Excitation produced five different PRC shapes, which could not be explained with such a simple model. A separate dendritic compartment was required to separate the mechanism that generates the up and down phases of the bursting envelope (1) from synaptic inputs applied at the soma, (2) from axonal spike generation, and (3) from a process with a slower time scale than burst generation. This study reveals that, due to the nonlinear properties and compartmentalization of ionic channels, the response to excitation is more complex than the response to inhibition.
Because of its highly branched dendrite, the Purkinje neuron requires significant computational resources if coupled electrical and biochemical activity are to be simulated. To address this challenge, we developed a scheme for reducing the geometric complexity while preserving the essential features of activity in both the soma and a remote dendritic spine. We merged our previously published biochemical model of calcium dynamics and lipid signaling in the Purkinje neuron, developed in the Virtual Cell modeling and simulation environment, with an electrophysiological model based on a Purkinje neuron model available in NEURON. A novel reduction method was applied to the Purkinje neuron geometry to obtain a model with fewer compartments that is tractable in Virtual Cell. Most of the dendritic tree was subject to reduction, but we retained the neuron’s explicit electrical and geometric features along a specified path from spine to soma. Further, unlike previous simplification methods, the dendrites that branch off along the preserved explicit path are retained as reduced branches. We conserved axial resistivity and adjusted passive properties and active channel conductances for the reduction in surface area, and cytosolic calcium for the reduction in volume. Rallpacks were used to validate the reduction algorithm and show that it can be generalized to other complex neuronal geometries. For the Purkinje cell, we found that current injections at the soma produced similar trains of action potentials and membrane potential propagation in the full and reduced models in NEURON; the reduced model produces identical spiking patterns in NEURON and Virtual Cell. Importantly, our reduced model can simulate communication between the soma and a distal spine; an alpha function applied at the spine to represent synaptic stimulation gave similar results in the full and reduced models for potential changes associated with both the spine and the soma.
Finally, we combined phosphoinositol signaling and electrophysiology in the reduced model in Virtual Cell. Thus, a strategy has been developed to combine electrophysiology and biochemistry as a step toward merging neuronal and systems biology modeling.
Virtual Cell; NEURON; Model reduction; Compartmental modeling; Biochemical simulation; Electrophysiology modeling
The space of sensory stimuli is complex and high-dimensional. Yet, single neurons in sensory systems are typically affected by only a small subset of the vast space of all possible stimuli. A proper understanding of the input–output transformation represented by a given cell therefore requires the identification of the subset of stimuli that are relevant in shaping the neuronal response. As an extension to the commonly-used spike-triggered average, the analysis of the spike-triggered covariance matrix provides a systematic methodology to detect relevant stimuli. As originally designed, the consistency of this method is guaranteed only if stimuli are drawn from a Gaussian distribution. Here we present a geometric proof of consistency, which provides insight into the foundations of the method, in particular, into the crucial role played by the geometry of stimulus space and symmetries in the stimulus–response relation. This approach leads to a natural extension of the applicability of the spike-triggered covariance technique to arbitrary spherical or elliptic stimulus distributions. The extension only requires a subtle modification of the original prescription. Furthermore, we present a new resampling method for assessing statistical significance of identified relevant stimuli, applicable to spherical and elliptic stimulus distributions. Finally, we exemplify the modified method and compare it to other prescriptions given in the literature.
Covariance analysis; Spike-triggered average; Receptive field; Linear-nonlinear model
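The covariance analysis described above can be sketched for the classical Gaussian-stimulus case: relevant stimulus directions appear as eigenvectors of the spike-triggered covariance matrix whose eigenvalues deviate from the prior stimulus variance. A minimal illustration with a hypothetical energy-model neuron (not an example from the paper):

```python
import numpy as np

def stc_analysis(stimuli, spikes):
    """Spike-triggered average (STA) and covariance (STC) for a Gaussian
    white stimulus ensemble. Relevant stimulus directions appear as
    eigenvectors of the STC whose eigenvalues deviate from the raw
    stimulus variance (here 1)."""
    X = np.asarray(stimuli, float)          # (T, d) stimuli
    w = np.asarray(spikes, float)           # (T,) spike counts
    n = w.sum()
    sta = (w @ X) / n
    centered = X - sta
    stc = (centered * w[:, None]).T @ centered / (n - 1.0)
    eigvals, eigvecs = np.linalg.eigh(stc)  # ascending eigenvalues
    return sta, eigvals, eigvecs

# Hypothetical energy-model neuron: spikes whenever (k . x)^2 is large,
# so the STA is ~0 but the variance along k is enhanced.
rng = np.random.default_rng(1)
X = rng.normal(size=(20000, 10))
k = rng.normal(size=10)
k /= np.linalg.norm(k)
spikes = ((X @ k) ** 2 > 2.0).astype(float)
sta, eigvals, eigvecs = stc_analysis(X, spikes)
# The top STC eigenvector recovers the filter k (up to sign).
```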
A computational study into the motion perception dynamics of a multistable psychophysical stimulus is presented. A diagonally drifting grating viewed through a square aperture is perceived as moving in the actual grating direction or in line with the aperture edges (horizontally or vertically). The different percepts are the product of interplay between ambiguous contour cues and specific terminator cues. We present a dynamical model of motion integration that performs direction selection for such a stimulus and link the different percepts to coexisting steady states of the underlying equations. We apply the powerful tools of bifurcation analysis and numerical continuation to study changes to the model’s solution structure under the variation of parameters. Indeed, we apply these tools in a systematic way, taking into account biological and mathematical constraints, in order to fix model parameters. A region of parameter space is identified for which the model reproduces the qualitative behaviour observed in experiments. The temporal dynamics of motion integration are studied within this region; specifically, the effect of varying the stimulus gain is studied, which allows qualitative predictions to be made.
Motion; Perception; Multistability; Visual cortex; Barber pole; Bifurcation
The preBötzinger complex (preBötC) is a heterogeneous neuronal network within the mammalian brainstem that has been experimentally found to generate robust, synchronous bursts that drive the inspiratory phase of the respiratory rhythm. The persistent sodium (NaP) current is observed in every preBötC neuron, and significant modeling effort has characterized its contribution to square-wave bursting in the preBötC. Recent experimental work demonstrated that neurons within the preBötC are endowed with a calcium-activated nonspecific cationic (CAN) current that is activated by a signaling cascade initiated by glutamate. In a preBötC model, the CAN current was shown to promote robust bursts that experience depolarization block (DB bursts). We consider a self-coupled model neuron, which we represent as a single compartment based on our experimental finding of electrotonic compactness, under variation of gNaP, the conductance of the NaP current, and gCAN, the conductance of the CAN current. Varying these two conductances yields a spectrum of activity patterns, including quiescence, tonic activity, square-wave bursting, DB bursting, and a novel mixture of square-wave and DB bursts, which match well with activity that we observe in experimental preparations. We elucidate the mechanisms underlying these dynamics, as well as the transitions between these regimes and the occurrence of bistability, by applying the mathematical tools of bifurcation analysis and slow-fast decomposition. Based on the prevalence of NaP and CAN currents, we expect that the generalizable framework for modeling their interactions that we present may be relevant to the rhythmicity of other brain areas beyond the preBötC as well.
Respiration; preBötzinger complex; Central pattern generator; Bifurcation analysis; Bursting; Slow-fast decomposition
Transcranial magnetic stimulation (TMS) noninvasively interferes with human cortical function, and is widely used as an effective technique for probing causal links between neural activity and cognitive function. However, the physiological mechanisms underlying TMS-induced effects on neural activity remain unclear. We examined the mechanism by which TMS disrupts neural activity in a local circuit in early visual cortex using a computational model consisting of conductance-based spiking neurons with excitatory and inhibitory synaptic connections. We found that single-pulse TMS suppressed spiking activity in a local circuit model, disrupting the population response. Spike suppression was observed when TMS was applied to the local circuit within a limited time window after the local circuit received sensory afferent input, as observed in experiments investigating suppression of visual perception with TMS targeting early visual cortex. Quantitative analyses revealed that the magnitude of suppression was significantly larger for synaptically-connected neurons than for isolated individual neurons, suggesting that intracortical inhibitory synaptic coupling also plays an important role in TMS-induced suppression. A conventional local circuit model of early visual cortex explained only the early period of visual suppression observed in experiments. However, models either involving strong recurrent excitatory synaptic connections or sustained excitatory input were able to reproduce the late period of visual suppression. These results suggest that TMS targeting early visual cortex disrupts functionally distinct neural signals, possibly corresponding to feedforward and recurrent information processing, by imposing inhibitory effects through intracortical inhibitory synaptic connections.
Transcranial magnetic stimulation; Visual cortex; Suppression; Spiking neuron; Computational model
Information from the vestibular, sensorimotor, or visual systems can affect the firing of grid cells recorded in the entorhinal cortex of rats. Optic flow provides information about the rat’s linear and rotational velocity and, thus, could influence the firing pattern of grid cells. To investigate this possible link, we model parts of the rat’s visual system and analyze their capability to estimate linear and rotational velocity. In our model, a rat is simulated to move along trajectories recorded from rats foraging on a circular ground platform. Thus, we preserve the intrinsic statistics of real rats’ movements. Visual image motion is analytically computed for a spherical camera model and superimposed with noise in order to model the optic flow that would be available to the rat. This optic flow is fed into a template model to estimate the rat’s linear and rotational velocities, which in turn are fed into an oscillatory interference model of grid cell firing. Grid scores are reported while altering the flow noise, the tilt angle of the optical axis with respect to the ground, the number of flow templates, and the frequency used in the oscillatory interference model. Activity patterns are compatible with those of grid cells, suggesting that optic flow can contribute to their firing.
Optic flow; Grid cell firing; Entorhinal cortex; Spherical camera; Visual image motion; Gaussian noise model; Self-motion
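The oscillatory interference model mentioned above produces grid-like firing from accumulated phase differences between a baseline oscillator and velocity-controlled oscillators with preferred directions 60° apart. When velocity is integrated to position, the envelope reduces to a closed-form function of position; a sketch under that simplifying assumption (the parameter values are hypothetical, not those of the paper):

```python
import numpy as np

def grid_rate(x, beta=0.3):
    """Closed-form interference envelope at 2-D position x: three
    velocity-controlled oscillators with preferred directions 60 degrees
    apart each contribute (1 + cos(2*pi*beta * (x . d_i))) / 2, where
    beta sets the grid spacing (~1/beta)."""
    angles = np.deg2rad([0.0, 60.0, 120.0])
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2)
    phases = 2.0 * np.pi * beta * (dirs @ np.asarray(x, float))
    return float(np.prod((1.0 + np.cos(phases)) / 2.0))

# The envelope peaks at the origin and repeats on a triangular lattice;
# u below is one lattice vector for beta = 0.3.
peak = grid_rate([0.0, 0.0])
u = np.array([1.0 / 0.3, -1.0 / (np.sqrt(3.0) * 0.3)])
```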
Kinetic occlusion produces discontinuities in the optic flow field, whose perception requires the detection of an unexpected onset or offset of otherwise predictably moving or stationary contrast patches. Many cells in primate visual cortex are directionally selective for moving contrasts, and recent reports suggest that this selectivity arises through the inhibition of contrast signals moving in the cells’ null direction, as in the rabbit retina. This nulling inhibition circuit (Barlow-Levick) is here extended to also detect motion onsets and offsets. The selectivity of extended circuit units, measured as a peak evidence accumulation response to motion onset/offset compared to the peak response to constant motion, is analyzed as a function of stimulus speed. Model onset cells are quiet during constant motion, but model offset cells activate during constant motion at slow speeds. Consequently, model offset cell speed tuning is biased towards higher speeds than onset cell tuning, similarly to the speed tuning of cells in the middle temporal area when exposed to speed ramps. Given a population of neurons with different preferred speeds, this asymmetry addresses a behavioral paradox—why human subjects in a simple reaction time task respond more slowly to motion offsets than onsets for low speeds, even though monkey neuron firing rates react more quickly to the offset of a preferred stimulus than to its onset.
Acceleration; Accretion and deletion; Occlusion; Visual cortex; Visual motion
A phase resetting curve (PRC) keeps track of the extent to which a perturbation at a given phase advances or delays the next spike, and can be used to predict phase locking in networks of oscillators. The PRC can be estimated by convolving the waveform of the perturbation with the infinitesimal PRC (iPRC) under the assumption of weak coupling. The iPRC is often defined with respect to an infinitesimal current as zi(ϕ), where ϕ is phase, but can also be defined with respect to an infinitesimal conductance change as zg(ϕ). In this paper, we first show that the two approaches are equivalent. Coupling waveforms corresponding to synapses with different time courses sample zg(ϕ) in predictably different ways. We show that for oscillators with Type I excitability, an anomalous region in zg(ϕ) with opposite sign to that seen otherwise is often observed during an action potential. If the duration of the synaptic perturbation is such that it effectively samples this region, PRCs with both advances and delays can be observed despite Type I excitability. We also show that changing the duration of a perturbation so that it preferentially samples regions of stable or unstable slopes in zg(ϕ) can stabilize or destabilize synchrony in a network with the corresponding dynamics.
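The weak-coupling construction described above, estimating the PRC by convolving the perturbation waveform with the iPRC, can be sketched as a circular convolution (a generic implementation with hypothetical inputs, not the authors' code):

```python
import numpy as np

def prc_from_iprc(iprc, waveform, dt):
    """Weak-coupling PRC estimate: circularly integrate the
    conductance-form iPRC z_g(phi) against the synaptic conductance
    waveform. An input arriving at phase i samples z_g at the phases
    traversed while the conductance is active."""
    z = np.asarray(iprc, float)
    g = np.asarray(waveform, float)
    n = len(z)
    prc = np.empty(n)
    for i in range(n):
        idx = (i + np.arange(len(g))) % n
        prc[i] = np.sum(z[idx] * g) * dt
    return prc

# Hypothetical iPRC sampled at 64 phase points.
phi = 2.0 * np.pi * np.arange(64) / 64
z_g = np.sin(phi)
prc_brief = prc_from_iprc(z_g, np.array([1.0]), 1.0)  # delta-like input
```

A brief pulse reproduces z_g itself, so any anomalous sign-flipped region is sampled directly; a long, slow conductance waveform averages z_g over a wide phase window and can smooth such a region away.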
A significant degree of heterogeneity in synaptic conductance is present in neuron-to-neuron connections. We study the dynamics of weakly coupled pairs of neurons with heterogeneities in synaptic conductance, using Wang-Buzsaki and Hodgkin-Huxley model neurons, which have Type I and Type II excitability, respectively. This type of heterogeneity breaks a symmetry in the bifurcation diagrams of equilibrium phase difference versus the synaptic rate constant when compared to the identical case. For weakly coupled neurons with identical values of synaptic conductance, a phase-locked solution exists for all values of the synaptic rate constant, α. In particular, in-phase and anti-phase solutions are guaranteed to exist for all α. Heterogeneity in synaptic conductance results in regions where no phase-locked solution exists and in the general loss of the ubiquitous in-phase and anti-phase solutions of the identically coupled case. We explain these results through examination of interaction functions using the weak coupling approximation and an in-depth analysis of the underlying multiple cusp bifurcation structure of the systems of coupled neurons.
Synchrony; Weak coupling; Heterogeneity
Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.
Phase response curves (PRCs) have been widely used to study synchronization in neural circuits composed of pacemaking neurons. They describe how the timing of the next spike in a given spontaneously firing neuron is affected by the phase at which an input from another neuron is received. Here we study two reciprocally coupled clusters of pulse-coupled oscillatory neurons. The neurons within each cluster are presumed to be identical and identically pulse coupled, but not necessarily identical to those in the other cluster. We investigate a two-cluster solution in which all oscillators are synchronized within each cluster, but in which the two clusters are phase locked at nonzero phase with each other. Intuitively, one might expect this solution to be stable only when synchrony within each isolated cluster is stable, but this is not the case. We prove rigorously the stability of the two-cluster solution and show how reciprocal coupling can stabilize synchrony within clusters that cannot synchronize in isolation. These stability results for the two-cluster solution suggest a mechanism by which reciprocal coupling between brain regions can induce local synchronization via the network feedback loop.
neuronal networks; synchronization; clustering; phase response curves; pulse coupled oscillators
Understanding the direction and quantity of information flowing in neuronal networks is a fundamental problem in neuroscience. Brains and neuronal networks must at the same time store information about the world and react to information in the world. We sought to measure how the activity of the network alters information flow from inputs to output patterns. Using neocortical column neuronal network simulations, we demonstrated that networks with greater internal connectivity reduced input/output correlations from excitatory synapses and decreased negative correlations from inhibitory synapses, measured by Kendall's τ correlation. Both of these changes were associated with a reduction in information flow, measured by normalized transfer entropy (nTE). Information handling by the network reflected the degree of internal connectivity. With no internal connectivity, the feedforward network transformed inputs through nonlinear summation and thresholding. With greater connectivity strength, the recurrent network transformed activity and information through the contribution of intrinsic network dynamics. This dynamic contribution amounts to added information drawn from that stored in the network. At still higher internal synaptic strength, the network corrupted the external information, producing a state in which little external information came through. The association of increased information retrieved from the network with increased gamma power supports the notion that gamma oscillations play a role in information processing.
Information transfer; Neuronal networks; Simulation; Modeling
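The information-flow measure named above, transfer entropy, can be sketched for binary spike trains with history length 1. This is a generic plug-in estimator; the normalized nTE used in the paper additionally divides by the conditional entropy of the target given its own past:

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for binary series,
    history length 1:
    sum over states of p(y1, y0, x0) * log2[ p(y1|y0, x0) / p(y1|y0) ]."""
    x = np.asarray(x, int)
    y = np.asarray(y, int)
    y1, y0, x0 = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((y1 == a) & (y0 == b) & (x0 == c))
                if p_abc == 0.0:
                    continue
                p_a_bc = p_abc / np.mean((y0 == b) & (x0 == c))
                p_a_b = np.mean((y1 == a) & (y0 == b)) / np.mean(y0 == b)
                te += p_abc * np.log2(p_a_bc / p_a_b)
    return te

# Hypothetical check: y copies x with lag 1 -> TE(X -> Y) ~ 1 bit,
# while an independent target gives TE ~ 0.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, 5000)
y = np.zeros_like(x)
y[1:] = x[:-1]
te_driven = transfer_entropy(x, y)
te_null = transfer_entropy(x, rng.integers(0, 2, 5000))
```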
Adaptive stimulus design methods can potentially improve the efficiency of sensory neurophysiology experiments significantly; however, designing optimal stimulus sequences in real time remains a serious technical challenge. Here we describe two approximate methods for generating informative stimulus sequences: the first approach provides a fast method for scoring the informativeness of a batch of specific potential stimulus sequences, while the second method attempts to compute an optimal stimulus distribution from which the experimenter may easily sample. We apply these methods to single-neuron spike train data recorded from the auditory midbrain of zebra finches, and demonstrate that the resulting stimulus sequences do in fact provide more information about neuronal tuning in a shorter amount of time than do more standard experimental designs.
Information theory; Generalized linear model; Birdsong; Active learning; Optimal experimental design
An important tool to study rhythmic neuronal synchronization is provided by relating spiking activity to the Local Field Potential (LFP). Two types of interdependent spike-LFP measures exist. The first approach is to directly quantify the consistency of single spike-LFP phases across spikes, referred to here as point-field phase synchronization measures. We show that conventional point-field phase synchronization measures are sensitive not only to the consistency of spike-LFP phases but also to statistical dependencies between spike-LFP phases, caused by, e.g., non-Poissonian history effects within spike trains such as bursting and refractoriness. To solve this problem, we develop a new pairwise measure that is not biased by the number of spikes and not affected by statistical dependencies between spike-LFP phases. The second approach is to quantify, similar to EEG-EEG coherence, the consistency of the relative phase between spike train and LFP signals across trials instead of across spikes, referred to here as spike train to field phase synchronization measures. We demonstrate an analytical relationship between point-field and spike train to field phase synchronization measures. Based on this relationship, we prove that the spike train to field pairwise phase consistency (PPC), a quantity closely related to the squared spike-field coherence, is a monotonically increasing function of the number of spikes per trial. This derived relationship is exact and analytic, and takes a linear form for weak phase-coupling. To solve this problem, we introduce a corrected version of the spike train to field PPC that is independent of the number of spikes per trial. Finally, we address the problem that dependencies between spike-LFP phase and the number of spikes per trial can cause spike-LFP phase synchronization measures to be biased by the number of trials.
We show how to modify the developed point-field and spike train to field phase synchronization measures in order to make them unbiased by the number of trials.
Spike-triggered average; Spike-field locking; Spike-LFP; Phase locking; Spike-field coherence; Phase-synchronization
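The pairwise measure described above is, in its standard point-field form, the average cosine of all pairwise differences between spike-LFP phases, which makes it unbiased by the number of spikes. A minimal sketch of that computation (the function name is illustrative, not from the paper):

```python
import cmath

def ppc(phases):
    """Point-field pairwise phase consistency: the average cosine of all
    pairwise spike-phase differences, computed in O(N) from the resultant
    vector. Unbiased by the number of spikes N (requires N >= 2)."""
    n = len(phases)
    s = sum(cmath.exp(1j * p) for p in phases)
    # |s|^2 sums cos(p_j - p_k) over all ordered pairs, including the N
    # self-pairs (each contributing 1), so subtract N and divide by the
    # N*(N-1) ordered pairs with j != k.
    return (abs(s) ** 2 - n) / (n * (n - 1))
```

Perfectly locked phases give a PPC of 1, and uniformly scattered phases give a value near 0; the result matches the brute-force average over all spike pairs.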
Local field potentials (LFPs) measure aggregate neural activity resulting from the coordinated firing of neurons within a local network. We hypothesized that state parameters associated with the underlying brain dynamics may be encoded in LFPs but may not be directly measurable in the signal temporal and spectral contents. Using the Kalman filter we estimated latent state changes in LFPs recorded in monkey motor cortical areas during the execution of a visually instructed reaching task, under different applied force conditions. Prior to the estimation, matched filtering was performed to decouple behavior-relevant signals (Stamoulis and Richardson, J Comput Neurosci, 2009) from unrelated background oscillations. State changes associated with baseline oscillations appeared insignificant. In contrast, state changes estimated from LFP components associated with the execution of movement were significant. Approximately direction-invariant state vectors were consistently observed. Their patterns also appeared invariant to force-field conditions, peaking in the first 200 ms of the movement interval and then decaying exponentially to the zero state by approximately 200 ms after movement onset, the time at which movement velocity also reached its peak. Thus, state appeared to be modulated by the dynamics of movement but neither by movement direction nor by the mechanical environment. Finally, we compared state vectors estimated using the Kalman filter to the basis functions obtained through Principal Component Analysis. The pattern of the estimated state vector was very similar to that of the first PCA component, further suggesting that LFPs may directly encode brain state fluctuations associated with the dynamics of behavior.
Local field potentials; Motor system; State-space estimation
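The latent-state estimation step above follows the standard Kalman predict-update recursion. A scalar sketch of that recursion, for orientation only (the paper's actual state-space model, dimensions, and parameters are not specified here):

```python
def kalman_filter(y, A, C, Q, R, x0, P0):
    """Scalar Kalman filter for the linear-Gaussian state-space model
    x_t = A x_{t-1} + w_t (process noise variance Q) observed through
    y_t = C x_t + v_t (measurement noise variance R).
    Returns the filtered state estimates."""
    x, P, estimates = x0, P0, []
    for yt in y:
        # Predict: propagate state mean and variance through the dynamics.
        x_pred = A * x
        P_pred = A * P * A + Q
        # Update: weight the measurement by the Kalman gain.
        K = P_pred * C / (C * P_pred * C + R)
        x = x_pred + K * (yt - C * x_pred)
        P = (1.0 - K * C) * P_pred
        estimates.append(x)
    return estimates
```

Run on a constant noisy observation sequence, the filtered estimate converges toward the true underlying state; the vector-valued case replaces the scalar products with matrix operations.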
The rapidly increasing use of the local field potential (LFP) has motivated research to better understand its relation to the gold standard of neural activity, single unit (SU) spiking. We addressed this in an in vivo, awake, restrained mouse auditory cortical electrophysiology preparation by asking whether the LFP could actually be used to predict stimulus-evoked SU spiking. Implementing a Bayesian algorithm to predict the likelihood of spiking on a trial by trial basis from different representations of the despiked LFP signal, we were able to predict, with high quality and fine temporal resolution (2 ms), the time course of a SU's excitatory or inhibitory firing rate response to natural species-specific vocalizations. Our best predictions were achieved by representing the LFP by its wide-band Hilbert phase signal, and approximating the statistical structure of this signal at different time points as independent. Our results show that each SU's action potential has a unique relationship with the LFP that can be reliably used to predict the occurrence of spikes. This “signature” interaction can reflect both pre- and post-spike neural activity that is intrinsic to the local circuit rather than just dictated by the stimulus. Finally, the time course of this “signature” may be most faithful when the full bandwidth of the LFP, rather than specific narrow-band components, is used for representation.
LFP; Spike prediction; Auditory cortex; Gamma band; Theta band; Beta band; Oscillation; Bayesian algorithm; A1; Evoked potentials; Electroencephalography; EEG; Hilbert transform; Single cortical cells; Phase; Despiking
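The prediction scheme above treats the Hilbert-phase values at different time points as independent, which reduces the Bayesian likelihood to a naive-Bayes product over binned phase features. A toy sketch of that log-odds computation (bin structure and names are hypothetical, not the paper's implementation):

```python
import math

def spike_log_odds(phase_bins, lik_spike, lik_nospike, prior_spike=0.5):
    """Naive-Bayes log-odds of a spike given a sequence of binned LFP
    Hilbert-phase features, treating the phases at different time points
    as independent. lik_spike[b] and lik_nospike[b] are the empirical
    probabilities of phase bin b in spike and no-spike trials."""
    logodds = math.log(prior_spike / (1.0 - prior_spike))
    for b in phase_bins:
        logodds += math.log(lik_spike[b] / lik_nospike[b])
    return logodds
```

Phase sequences that match the spike-conditioned phase distribution yield positive log-odds (predict a spike); sequences matching the no-spike distribution yield negative log-odds.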
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons that receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
Point process; Population activity; Spike train variability; Serial interval correlations; Spike train simulation; Network simulation
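A single PPD realization is simple to generate: each inter-spike interval is the dead-time plus an exponential waiting time, with the exponential rate chosen so the train has the requested output rate; a superposition is the sorted merge of several such trains. This is a minimal sketch of the model, not the efficient generators presented in the paper (function names are illustrative):

```python
import random

def ppd_train(rate, dead_time, duration, rng=None):
    """Poisson process with dead-time (PPD): every inter-spike interval is
    dead_time plus an Exp(lam) waiting time, with lam set so the mean ISI
    equals 1/rate. Requires rate * dead_time < 1."""
    rng = rng or random.Random()
    lam = 1.0 / (1.0 / rate - dead_time)
    t, spikes = 0.0, []
    while True:
        t += dead_time + rng.expovariate(lam)
        if t > duration:
            break
        spikes.append(t)
    return spikes

def superposition(trains):
    """Merge several spike trains into one population spike train."""
    return sorted(t for train in trains for t in train)
```

Within a single PPD train no ISI can fall below the dead-time, whereas the merged superposition train can contain arbitrarily short intervals, which is exactly why the superposition is not itself a PPD.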