4.  How adaptation shapes spike rate oscillations in recurrent neuronal networks 
Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales, as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation, and the oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances at low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the distinct key roles of neuronal adaptation dynamics in rhythmogenesis and selective signal propagation in recurrent networks.
doi:10.3389/fncom.2013.00009
PMCID: PMC3583173  PMID: 23450654
spike frequency adaptation; adaptation; oscillations; rate models; network dynamics; Fokker–Planck; mean-field; recurrent network
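The slow-oscillation mechanism summarized in this abstract can be illustrated with a deliberately minimal sketch: a single excitatory rate unit with a slow, rate-driven adaptation variable, integrated by Euler's method. This is not the paper's mean-field model, and all parameter values are assumptions chosen so that strong recurrent excitation plus slow adaptation yields a slow oscillation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(w_ee=2.0, b=2.0, I_ext=1.0, theta=1.0, k=0.1,
             tau_r=0.01, tau_a=0.5, dt=1e-4, t_max=5.0):
    """Euler integration of one rate unit with a slow adaptation variable."""
    n = int(t_max / dt)
    r = np.zeros(n)   # population rate (arbitrary units)
    a = np.zeros(n)   # adaptation variable, driven by the rate
    for t in range(n - 1):
        f = sigmoid((w_ee * r[t] - a[t] + I_ext - theta) / k)
        r[t + 1] = r[t] + dt / tau_r * (-r[t] + f)
        a[t + 1] = a[t] + dt / tau_a * (-a[t] + b * r[t])
    return r, a

rates, adapt = simulate()
# With strong recurrent excitation (w_ee) and slow adaptation (tau_a) the fixed
# point is unstable and the rate settles into a slow oscillation whose period
# is set mainly by tau_a.
```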
5.  Patterned Brain Stimulation, What a Framework with Rhythmic and Noisy Components Might Tell Us about Recovery Maximization 
Brain stimulation is having a remarkable impact on clinical neurology. It can modulate neuronal activity in functionally segregated, circumscribed regions of the human brain, and polarity-, frequency-, and noise-specific stimulation can induce specific manipulations of neural activity. Deep-brain stimulation has become a tool that can dramatically improve the impact clinicians have on movement disorders. Neocortical brain stimulation, in contrast, is proving to be remarkably susceptible to intrinsic brain states. Although evidence is accumulating that brain stimulation can facilitate recovery processes in patients with cerebral stroke, the high variability of results impedes successful clinical implementation. Interestingly, recent data in healthy subjects suggest that brain-state-dependent patterned stimulation might help resolve some of the intrinsic variability found in previous studies. In parallel, other studies suggest that noisy “stochastic resonance” (SR)-like processes are a non-negligible component of non-invasive brain stimulation studies. The hypothesis developed in this manuscript is that stimulation patterning with noisy and oscillatory components will help patients recover from stroke-related deficits more reliably. To address this hypothesis we focus on two factors common to both neural computation (intrinsic variables) and brain stimulation (extrinsic variables): noise and oscillation. We review diverse theoretical and experimental evidence demonstrating that subject- and function-specific brain states are associated with specific oscillatory activity patterns. These states are transient and can be maintained by noisy processes. The resulting control procedures can resemble homeostatic or SR processes. In this context we aim to raise awareness of inter-individual differences and of the use of individualized stimulation to maximize recovery in stroke patients.
doi:10.3389/fnhum.2013.00325
PMCID: PMC3695464  PMID: 23825456
transcranial brain stimulation; adaptive stimulus control; synchronization; stochastic facilitation; metaplasticity; neuroplasticity; stroke rehabilitation; motor cortex
6.  Spyke Viewer: a flexible and extensible platform for electrophysiological data analysis 
Spyke Viewer is an open source application designed to help researchers analyze data from electrophysiological recordings or neural simulations. It provides a graphical data browser and supports finding and selecting relevant subsets of the data. Users can interact with the selected data using an integrated Python console or plugins. Spyke Viewer includes plugins for several common visualizations and allows users to easily extend the program by writing their own plugins. New plugins are automatically integrated with the graphical interface. Additional plugins can be downloaded and shared on a dedicated website.
doi:10.3389/fninf.2013.00026
PMCID: PMC3822898  PMID: 24273510
electrophysiology; python; data analysis; visualization; spike sorting; software; data sharing
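As an illustration of the plugin idea described in the abstract, the sketch below shows a generic registry-and-decorator pattern in Python. It is purely hypothetical and is not Spyke Viewer's actual plugin API; the names register_plugin and PLUGINS are invented for this example.

```python
from typing import Callable, Dict

# Purely hypothetical plugin registry (NOT Spyke Viewer's real API): user code
# registers analyses by name, and a host application could list and run them.
PLUGINS: Dict[str, Callable] = {}

def register_plugin(name: str):
    """Decorator that makes a user-written analysis discoverable by name."""
    def wrap(func: Callable) -> Callable:
        PLUGINS[name] = func
        return func
    return wrap

@register_plugin("mean_firing_rate")
def mean_firing_rate(spike_times, duration):
    """Example analysis: average firing rate of a single spike train."""
    return len(spike_times) / duration

# A host GUI would iterate over PLUGINS to build menu entries and dispatch the
# selected analysis to the currently selected data.
print(PLUGINS["mean_firing_rate"]([0.1, 0.5, 0.9], duration=1.0))
```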
7.  A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts 
PLoS Computational Biology  2012;8(6):e1002539.
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required to estimate higher-order correlations and the resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows us to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to the rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly.
Author Summary
Populations of neurons signal information by their joint activity. Dependencies between the activity of multiple neurons are typically described by the linear correlation coefficient. However, this description of the dependencies is not complete. Dependencies beyond the linear correlation coefficient, so-called higher-order correlations, are often neglected because too many experimental samples are required in order to estimate them reliably. Evaluating the importance of higher-order correlations for the neural representation has therefore been notoriously hard. We devise a statistical test that can quantify evidence for higher-order correlations without estimating higher-order correlations directly. The test yields reliable results even when the number of experimental samples is small. The power of the method is demonstrated on data which were recorded from a population of neurons in the primary visual cortex of cat during an adaptation experiment. We show that higher-order correlations can have a substantial impact on the encoded stimulus information which, moreover, is modulated by stimulus adaptation.
doi:10.1371/journal.pcbi.1002539
PMCID: PMC3369943  PMID: 22685392
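The overall structure of such a Monte Carlo goodness-of-fit test can be sketched independently of the specific reference family. In the sketch below, a Gaussian-copula surrogate with empirical marginals and approximately matched linear correlation stands in for the paper's maximum entropy reference distributions (which it is not), and a plug-in histogram estimate of mutual information stands in for the divergence measure; function names and parameters are assumptions.

```python
import numpy as np
from scipy.stats import norm

def plugin_mutual_information(x, y, bins=8):
    """Plug-in mutual information estimate (nats) from a 2-D count histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def empirical_icdf(data, u):
    """Inverse empirical CDF: map uniform variates back onto observed values."""
    srt = np.sort(data)
    idx = np.minimum((u * len(srt)).astype(int), len(srt) - 1)
    return srt[idx]

def sample_reference(x, y, rng):
    """Surrogate pair with empirical marginals and roughly matched linear
    correlation (a Gaussian-copula stand-in, not the max-entropy family)."""
    rho = np.corrcoef(x, y)[0, 1]
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=len(x))
    u = norm.cdf(z)
    return empirical_icdf(x, u[:, 0]), empirical_icdf(y, u[:, 1])

def monte_carlo_test(x, y, n_surrogates=1000, bins=8, seed=0):
    """p-value: fraction of surrogate statistics at least as large as observed."""
    rng = np.random.default_rng(seed)
    observed = plugin_mutual_information(x, y, bins)
    null = np.array([plugin_mutual_information(*sample_reference(x, y, rng), bins)
                     for _ in range(n_surrogates)])
    return observed, (1 + np.sum(null >= observed)) / (1 + n_surrogates)
```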
8.  Impact of Adaptation Currents on Synchronization of Coupled Exponential Integrate-and-Fire Neurons 
PLoS Computational Biology  2012;8(4):e1002478.
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons, as quantified by the phase response curve (PRC), and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in-/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism synchronizing low frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Author Summary
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons, on the other hand, synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
doi:10.1371/journal.pcbi.1002478
PMCID: PMC3325187  PMID: 22511861
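A minimal sketch of the ingredients named in the abstract, using the standard aEIF equations with assumed (not the paper's) parameter values: the model is integrated with Euler's method, and the PRC is estimated by injecting a brief current pulse at different phases of the adapted interspike interval and measuring the resulting spike-time shift.

```python
import numpy as np

# Illustrative aEIF parameters (assumed values, not taken from the paper);
# units: mV, ms, nS, pF, pA.
P = dict(C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DT=2.0,
         tauw=200.0, a=4.0, b=40.0, Vr=-58.0, Vpeak=0.0)

def run_aeif(I, t_max, dt=0.01, pulse_time=None, pulse_amp=100.0, pulse_dur=0.5):
    """Euler integration of the aEIF model with an optional brief current pulse;
    returns the spike times in ms."""
    V, w, spikes = P["EL"], 0.0, []
    for step in range(int(t_max / dt)):
        t = step * dt
        Iext = I
        if pulse_time is not None and pulse_time <= t < pulse_time + pulse_dur:
            Iext += pulse_amp
        dV = (-P["gL"] * (V - P["EL"])
              + P["gL"] * P["DT"] * np.exp((V - P["VT"]) / P["DT"])
              - w + Iext) / P["C"]
        dw = (P["a"] * (V - P["EL"]) - w) / P["tauw"]
        V, w = V + dt * dV, w + dt * dw
        if V >= P["Vpeak"]:             # spike: reset and spike-triggered adaptation
            V, w = P["Vr"], w + P["b"]
            spikes.append(t)
    return np.array(spikes)

def prc(I=500.0, phases=np.linspace(0.05, 0.95, 10)):
    """Estimate the PRC: perturb at a given phase of the adapted interspike
    interval and measure the advance (>0) or delay (<0) of the next spike."""
    ref = run_aeif(I, t_max=2000.0)
    T = ref[-1] - ref[-2]               # unperturbed period after adaptation
    t0 = ref[-2]
    shifts = []
    for p in phases:
        pert = run_aeif(I, t_max=t0 + 3 * T, pulse_time=t0 + p * T)
        nxt = pert[pert > t0][0]        # first spike following the perturbation
        shifts.append((T - (nxt - t0)) / T)
    return phases, np.array(shifts)
```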
10.  The Operating Regime of Local Computations in Primary Visual Cortex 
Cerebral Cortex (New York, NY)  2009;19(9):2166-2180.
In V1, local circuitry depends on the position in the orientation map: close to pinwheel centers, recurrent inputs show variable orientation preferences; within iso-orientation domains, inputs are relatively uniformly tuned. Physiological properties such as cells' membrane potentials, spike outputs, and temporal characteristics change systematically with map location. We investigate in a firing rate and a Hodgkin–Huxley network model what constraints these tuning characteristics of V1 neurons impose on the cortical operating regime. Systematically varying the strength of both recurrent excitation and inhibition, we test a wide range of model classes and identify the models most likely to account for the experimental observations. We show that recent intracellular and extracellular recordings from cat V1 provide the strongest evidence for a regime where excitatory and inhibitory recurrent inputs are balanced and dominate the feed-forward input. Our results are robust against changes in model assumptions such as the spatial extent and strength of lateral inhibition. Intriguingly, the most likely recurrent regime lies in a region of parameter space where small changes have large effects on the network dynamics, and it is close to a regime of “runaway excitation,” where the network shows strong self-sustained activity. This could make the cortical response particularly sensitive to modulation.
doi:10.1093/cercor/bhn240
PMCID: PMC2722429  PMID: 19221143
Bayesian data analysis; computational model; network dynamics; orientation tuning; reverse correlation
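The idea of scanning recurrent excitation and inhibition strengths and classifying the resulting operating regime, including runaway excitation, can be illustrated with a two-population threshold-linear rate model. This is a stand-in for the paper's firing rate and Hodgkin–Huxley network models, and all weights and inputs below are assumptions.

```python
import numpy as np

def steady_state(w_ee, w_ei, w_ie, w_ii, I_e=1.0, I_i=0.8,
                 tau=0.01, dt=1e-3, t_max=2.0, r_max=100.0):
    """Two-population (E/I) threshold-linear rate model: returns the final
    rates, or None if the excitatory rate runs away."""
    rE = rI = 0.0
    for _ in range(int(t_max / dt)):
        drive_E = w_ee * rE - w_ei * rI + I_e
        drive_I = w_ie * rE - w_ii * rI + I_i
        rE += dt / tau * (-rE + max(drive_E, 0.0))
        rI += dt / tau * (-rI + max(drive_I, 0.0))
        if rE > r_max:                  # runaway excitation
            return None
    return rE, rI

# Scan the strengths of recurrent excitation and inhibition (grid values are
# assumptions) and label the resulting operating regime.
for w_ee in (0.5, 1.5, 3.0):
    for w_ei in (0.5, 1.5, 3.0):
        out = steady_state(w_ee, w_ei, w_ie=2.0, w_ii=1.0)
        label = "runaway" if out is None else f"rE={out[0]:.2f}, rI={out[1]:.2f}"
        print(f"w_ee={w_ee}, w_ei={w_ei}: {label}")
```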
11.  Analyzing Short-Term Noise Dependencies of Spike-Counts in Macaque Prefrontal Cortex Using Copulas and the Flashlight Transformation 
PLoS Computational Biology  2009;5(11):e1000577.
Simultaneous spike-counts of neural populations are typically modeled by a Gaussian distribution. On short time scales, however, this distribution is too restrictive to describe and analyze multivariate distributions of discrete spike-counts. We present an alternative that is based on copulas and can account for arbitrary marginal distributions, including Poisson and negative binomial distributions, as well as second- and higher-order interactions. We describe maximum likelihood-based procedures for fitting copula-based models to spike-count data, and we derive a so-called flashlight transformation which makes it possible to move the tail dependence of an arbitrary copula into an arbitrary orthant of the multivariate probability distribution. Mixtures of copulas that combine different dependence structures, and thereby model different driving processes simultaneously, are also introduced. First, we apply copula-based models to populations of integrate-and-fire neurons receiving partially correlated input and show that the best-fitting copulas provide information about the functional connectivity of coupled neurons, which can be extracted using the flashlight transformation. We then apply the new method to data which were recorded from macaque prefrontal cortex using a multi-tetrode array. We find that copula-based distributions with negative binomial marginals provide a more appropriate stochastic model for the multivariate spike-count distributions than the multivariate Poisson latent-variable distribution or the often-used multivariate normal distribution. The dependence structure of these distributions provides evidence for common inhibitory input to all recorded stimulus-encoding neurons. Finally, we show that copula-based models can be successfully used to evaluate neural codes, e.g., to characterize stimulus-dependent spike-count distributions with information measures. This demonstrates that copula-based models are not only a versatile class of models for multivariate distributions of spike-counts, but that those models can be exploited to understand functional dependencies.
Author Summary
The brain has an enormous number of neurons that do not work alone but in an ensemble. Yet, in the past mostly individual neurons were measured, and models were therefore restricted to independent neurons. With the advent of new multi-electrode techniques, however, it has become possible to measure a great number of neurons simultaneously. As a result, models of how populations of neurons co-vary are becoming increasingly important. Here, we describe such a framework based on so-called copulas. Copulas allow us to separate the neural variation structure of the population from the variability of the individual neurons. Contrary to standard models, versatile dependence structures can be described using this approach. We explore what additional information is provided by the detailed dependence structure. For simulated neurons, we show that the variation structure of the population allows inference of the underlying connectivity structure of the neurons. The power of the approach is demonstrated on a memory experiment in the macaque monkey. We show that our framework describes the measurements better than the standard models and identify possible network connections of the measured neurons.
doi:10.1371/journal.pcbi.1000577
PMCID: PMC2776173  PMID: 19956759
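A small sketch of one model family discussed in the abstract: sampling correlated spike counts from a Gaussian copula with negative binomial marginals. The flashlight transformation and the mixture models are not implemented here, and the parameter values are assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import norm, nbinom

def sample_copula_counts(n_samples, rho, nb_params, seed=0):
    """Correlated spike-count pairs from a Gaussian copula with negative
    binomial marginals (flashlight transformation not implemented)."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n_samples)
    u = norm.cdf(z)                      # uniform marginals, Gaussian dependence
    counts = np.column_stack([nbinom.ppf(u[:, i], *nb_params[i])
                              for i in range(2)]).astype(int)
    return counts

# Two neurons with over-dispersed counts and positive dependence
# (parameter values are assumptions chosen for illustration).
counts = sample_copula_counts(5000, rho=0.4, nb_params=[(5, 0.4), (8, 0.5)])
print("mean counts:", counts.mean(axis=0))
print("count correlation:", round(float(np.corrcoef(counts.T)[0, 1]), 3))
```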
12.  An online spike detection and spike classification algorithm capable of instantaneous resolution of overlapping spikes 
For the analysis of neuronal cooperativity, simultaneously recorded extracellular signals from neighboring neurons need to be sorted reliably by a spike sorting method. Many algorithms have been developed to this end; to date, however, none of them fulfills a set of demanding requirements. In particular, it is desirable to have an algorithm that operates online, detects and classifies overlapping spikes in real time, and adapts to non-stationary data. Here, we present a combined spike detection and classification algorithm which explicitly addresses these issues. Our approach makes use of linear filters to find a new representation of the data and to optimally enhance the signal-to-noise ratio. We introduce a method called “Deconfusion” which de-correlates the filter outputs and provides source separation. Finally, a set of well-defined thresholds is applied and leads to simultaneous spike detection and spike classification. By incorporating a direct feedback, the algorithm adapts to non-stationary data and is therefore well suited for acute recordings. We evaluate our method on simulated and experimental data, including simultaneous intra-/extracellular recordings made in slices of rat cortex and recordings from the prefrontal cortex of awake behaving macaques. We compare the results to existing spike detection and spike sorting methods. We conclude that our algorithm meets all of the mentioned requirements and outperforms other methods under realistic signal-to-noise ratios and in the presence of overlapping spikes.
doi:10.1007/s10827-009-0163-5
PMCID: PMC2950077  PMID: 19499318
Realtime spike sorting; Extracellular multi electrode recordings; Tetrode recordings; FIR filters; Deconfusion
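The filter-then-threshold idea can be sketched with a generic template-correlation detector. This simplified stand-in omits the paper's optimal filter construction and the “Deconfusion” decorrelation step; the threshold, refractory period, and normalization below are assumptions.

```python
import numpy as np

def detect_spikes(signal, templates, threshold=4.0, refractory=30):
    """Template-correlation spike detection and labeling (simplified stand-in:
    noise whitening and the 'Deconfusion' step are omitted).
    Returns a list of (sample_index, template_id) pairs."""
    # One filter output per template: correlate the signal with each waveform.
    outputs = np.vstack([np.correlate(signal, tpl, mode="same") / np.linalg.norm(tpl)
                         for tpl in templates])
    # Express each output in units of its robust (MAD-based) standard deviation.
    med = np.median(outputs, axis=1, keepdims=True)
    mad = np.median(np.abs(outputs - med), axis=1, keepdims=True)
    outputs = (outputs - med) / (1.4826 * mad)
    detections, last = [], -refractory
    for t in range(outputs.shape[1]):
        best = int(np.argmax(outputs[:, t]))
        if outputs[best, t] > threshold and t - last >= refractory:
            detections.append((t, best))     # simultaneous detection + label
            last = t
    return detections
```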
13.  Adaptation and Selective Information Transmission in the Cricket Auditory Neuron AN2 
PLoS Computational Biology  2008;4(9):e1000182.
Sensory systems adapt their neural code to changes in the sensory environment, often on multiple time scales. Here, we report a new form of adaptation in a first-order auditory interneuron (AN2) of crickets. We characterize the response of the AN2 neuron to amplitude-modulated sound stimuli and find that adaptation shifts the stimulus–response curves toward higher stimulus intensities, with a time constant of 1.5 s for adaptation and recovery. The spike responses were thus reduced for low-intensity sounds. We then address the question of whether adaptation leads to an improvement of the signal's representation and compare the experimental results with the predictions of two competing hypotheses: infomax, which predicts that information conveyed about the entire signal range should be maximized, and selective coding, which predicts that “foreground” signals should be enhanced while “background” signals should be selectively suppressed. We test how adaptation changes the input–response curve when presenting signals with two or three peaks in their amplitude distributions, for which selective coding and infomax predict conflicting changes. By means of Bayesian data analysis, we quantify the shifts of the measured response curves and also find a slight reduction of their slopes. These decreases in slope are smaller, and the absolute response thresholds are higher, than those predicted by infomax. Most remarkably, and in contrast to the infomax principle, adaptation actually reduces the amount of encoded information when considering the whole range of input signals. The response curve changes are also not consistent with the selective coding hypothesis, because the amount of information conveyed about the loudest part of the signal does not increase as predicted but remains nearly constant. Less information is transmitted about signals with lower intensity.
Author Summary
Sensory systems have the ability to adapt to changes in the environment. In a quiet room, the nervous system is very responsive, so that even a whisper can be easily understood. In contrast, the perceived loudness on a crowded street will be reduced to prevent an overload of the nervous system. Two different hypotheses have been proposed to explain how the nervous system achieves this adaptation. According to one idea, all present sensory signals are equally enhanced, so that the whole range of input signals is reliably represented. On the other hand, the aim of the nervous system may be to extract the most important parts of the acoustic signal, for example, an approaching car, and to discard the irrelevant rest. To address which of these two principles is implemented in the auditory system of the cricket, we investigated the responses of a single auditory neuron, called interneuron AN2, to different sound signals. We found that adaptation actually reduces the amount of encoded information when considering the whole range of input signals. However, the changes were also not in agreement with the idea that only the most important signal is transmitted, because the amount of information conveyed about the loudest part of the signal does not increase. Thus, we report here the unusual case of a reduction of information transfer by adaptation, whereas in most other systems reported so far, adaptation actually enhances the coding of sensory information.
doi:10.1371/journal.pcbi.1000182
PMCID: PMC2527132  PMID: 18818723
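The reported shifts and slope changes of the stimulus–response curves can be quantified by fitting a descriptive sigmoid to rate–intensity data before and after adaptation. The sigmoid parameterization below and the synthetic example values are assumptions, not the paper's Bayesian analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def rate_intensity(I, r_max, I_half, slope):
    """Descriptive sigmoidal rate-intensity function (assumed parameterization)."""
    return r_max / (1.0 + np.exp(-(I - I_half) / slope))

def fit_curve(intensities, rates):
    """Fit the sigmoid; returns (r_max, I_half, slope)."""
    p0 = [rates.max(), float(np.median(intensities)), 2.0]
    params, _ = curve_fit(rate_intensity, intensities, rates, p0=p0, maxfev=5000)
    return params

# Synthetic example (assumed values): adaptation shifts the half-maximum point
# to higher intensities and slightly flattens the slope.
I = np.linspace(40, 90, 11)                                   # dB SPL
rng = np.random.default_rng(0)
r_before = rate_intensity(I, 150, 60, 3) + rng.normal(0, 5, I.size)
r_after = rate_intensity(I, 150, 72, 4) + rng.normal(0, 5, I.size)
print("before adaptation:", fit_curve(I, r_before))
print("after adaptation: ", fit_curve(I, r_after))
```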
14.  Dynamics of Orientation Tuning in Cat V1 Neurons Depend on Location Within Layers and Orientation Maps 
Frontiers in Neuroscience  2007;1(1):145-159.
Analysis of the timecourse of the orientation tuning of responses in primary visual cortex (V1) can provide insight into the circuitry underlying tuning. Several studies have examined the temporal evolution of orientation selectivity in V1 neurons, but there is no consensus regarding the stability of orientation tuning properties over the timecourse of the response. We have used reverse-correlation analysis of the responses to dynamic grating stimuli to re-examine this issue in cat V1 neurons. We find that the preferred orientation and tuning curve shape are stable in the majority of neurons; however, more than forty percent of cells show a significant change in either preferred orientation or tuning width between early and late portions of the response. To examine the influence of the local cortical circuit connectivity, we analyzed the timecourse of responses as a function of receptive field type, laminar position, and orientation map position. Simple cells are more selective, and reach peak selectivity earlier, than complex cells. There are pronounced laminar differences in the timing of responses: middle layer cells respond faster, deep layer cells have prolonged response decay, and superficial cells are intermediate in timing. The average timing of neurons near and far from pinwheel centers is similar, but there is more variability in the timecourse of responses near pinwheel centers. This result was reproduced in an established network model of V1 operating in a regime of balanced excitatory and inhibitory recurrent connections, confirming previous results. Thus, response dynamics of cortical neurons reflect circuitry based on both vertical and horizontal location within cortical networks.
doi:10.3389/neuro.01.1.1.011.2007
PMCID: PMC2570087  PMID: 18982125
area 17; dynamics; tuning curve; visual cortex; orientation selectivity; reverse-correlation; large-scale network; network model
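Reverse-correlation analysis of orientation tuning over time can be sketched as a spike-triggered orientation histogram computed at a range of delays. This generic sketch assumes equiprobable grating orientations and placeholder variable names; it is not the paper's exact analysis pipeline.

```python
import numpy as np

def orientation_reverse_correlation(spike_times, frame_times, frame_orients,
                                    delays, n_bins=18):
    """Spike-triggered orientation histograms as a function of delay: for each
    delay tau, count how often each orientation was on screen tau before a spike.
    Assumes frame_times are sorted onset times and orientations span 0-180 deg."""
    edges = np.linspace(0.0, 180.0, n_bins + 1)
    tuning = np.zeros((len(delays), n_bins))
    for i, tau in enumerate(delays):
        # Index of the frame that was being shown tau seconds before each spike.
        idx = np.searchsorted(frame_times, spike_times - tau, side="right") - 1
        valid = idx >= 0
        tuning[i], _ = np.histogram(frame_orients[idx[valid]], bins=edges)
    return tuning  # rows: delays, columns: orientation bins

# Hypothetical usage (array names are placeholders):
# tuning = orientation_reverse_correlation(spikes_s, frame_onsets_s, frame_oris_deg,
#                                          delays=np.arange(0.0, 0.20, 0.01))
```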
