Results 1-13 (13)
1.  Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo 
PLoS Computational Biology  2014;10(4):e1003560.
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.
Author Summary
Neurons spike when their membrane potential exceeds a threshold value, but this value has been shown to be variable in the same neuron recorded in vivo. This variability could reflect noise, or deterministic processes that make the threshold vary with the membrane potential. The second alternative would have important functional consequences. Here, we show that threshold variability is a genuine feature of neurons, which reflects adaptation to the membrane potential at a short timescale, with little contribution from noise. This demonstrates that a deterministic model can predict spikes based only on the membrane potential.
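The fast threshold adaptation described here can be sketched with a leaky integrate-and-fire model whose threshold tracks the membrane potential on a short timescale. All parameter values below are illustrative, not those fitted in the paper:

```python
def simulate(inputs, dt=0.1, tau_v=10.0, tau_th=5.0,
             v_rest=-70.0, th0=-55.0, alpha=0.5):
    """Leaky integrate-and-fire neuron whose spike threshold adapts,
    tracking depolarization on a short timescale (all constants are
    made up for illustration). Returns spike times in ms."""
    v, th, spikes = v_rest, th0, []
    for i, I in enumerate(inputs):
        v += dt * (v_rest - v + I) / tau_v                      # membrane integration
        th += dt * (th0 + alpha * (v - v_rest) - th) / tau_th   # threshold tracks v
        if v >= th:
            spikes.append(i * dt)
            v = v_rest                                          # reset after the spike
    return spikes

# A slow depolarizing ramp is filtered out by threshold adaptation,
# whereas a brief, strong volley (inputs arriving together within a
# few milliseconds) still triggers a spike.
slow_ramp = [0.02 * i for i in range(1000)]              # 0 -> ~20 over 100 ms
brief_volley = [0.0] * 400 + [100.0] * 30 + [0.0] * 570  # 3 ms pulse
```

With these made-up parameters the ramp produces no spikes while the 3 ms pulse does, mirroring the conclusion that only inputs coincident at the millisecond timescale are transmitted.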
doi:10.1371/journal.pcbi.1003560
PMCID: PMC3983065  PMID: 24722397
2.  Equation-oriented specification of neural models for simulations 
Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator.
doi:10.3389/fninf.2014.00006
PMCID: PMC3912318  PMID: 24550820
python; neuroscience; computational neuroscience; simulation; software
3.  Sharpness of Spike Initiation in Neurons Explained by Compartmentalization 
PLoS Computational Biology  2013;9(12):e1003338.
In cortical neurons, spikes are initiated in the axon initial segment. Seen at the soma, they appear surprisingly sharp. A standard explanation is that the current coming from the axon becomes sharp as the spike is actively backpropagated to the soma. However, sharp initiation of spikes is also seen in the input–output properties of neurons, and not only in the somatic shape of spikes; for example, cortical neurons can transmit high frequency signals. An alternative hypothesis is that Na channels cooperate, but it is not currently supported by direct experimental evidence. I propose a simple explanation based on the compartmentalization of spike initiation. When Na channels are placed in the axon, the soma acts as a current sink for the Na current. I show that there is a critical distance to the soma above which an instability occurs, so that Na channels open abruptly rather than gradually as a function of somatic voltage.
Author Summary
Spike initiation determines how the combined inputs to a neuron are converted to an output. Since the pioneering work of Hodgkin and Huxley, it is known that spikes are generated by the opening of sodium channels with depolarization. According to this standard theory, these channels should open gradually when the membrane potential increases, but spikes measured at the soma appear to suddenly rise from rest. This apparent contradiction has triggered a controversy about the origin of spike “sharpness.” This study shows with biophysical modelling that if sodium channels are placed in the axon rather than in the soma, they open all at once when the somatic membrane potential exceeds a critical value. This work explains the sharpness of spike initiation and provides another demonstration that morphology plays a critical role in neural function.
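The instability argument can be illustrated numerically with a two-compartment caricature: the soma is voltage-clamped and acts as a current sink, while the axonal compartment carries Boltzmann-activated Na channels. All parameters, including the axial resistance values standing in for distance, are illustrative:

```python
import math

def m_inf(v, vh=-40.0, k=5.0):
    """Boltzmann activation curve of the axonal Na channels."""
    return 1.0 / (1.0 + math.exp(-(v - vh) / k))

def activation_curve(vs_values, ra, g=1.0, ena=60.0):
    """Fraction of open Na channels in the axonal compartment as the
    clamped somatic voltage is swept upward. The soma drains current
    through the axial resistance ra; the axonal steady state is found
    by relaxation, tracked from one somatic voltage to the next."""
    m_curve, va = [], vs_values[0]
    for vs in vs_values:
        for _ in range(20000):
            # net current into the axon: axial coupling + local Na current
            f = (vs - va) / ra + g * m_inf(va) * (ena - va)
            if abs(f) < 1e-9:
                break
            va += 0.05 * f
        m_curve.append(m_inf(va))
    return m_curve

vs_sweep = [-80.0 + i for i in range(51)]     # somatic voltage, mV
graded = activation_curve(vs_sweep, ra=0.1)   # axon electrically close to the soma
abrupt = activation_curve(vs_sweep, ra=1.0)   # beyond the critical distance
```

Below the critical coupling the activation seen from the soma is graded; beyond it a fold appears, and the channels open essentially all at once when the somatic voltage crosses a critical value.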
doi:10.1371/journal.pcbi.1003338
PMCID: PMC3854010  PMID: 24339755
4.  Decoding neural responses to temporal cues for sound localization 
eLife  2013;2:e01312.
The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies.
DOI: http://dx.doi.org/10.7554/eLife.01312.001
eLife digest
Having two ears allows animals to localize the source of a sound. For example, barn owls can snatch their prey in complete darkness by relying on sound alone. It has been known for a long time that this ability depends on tiny differences in the sounds that arrive at each ear, including differences in the time of arrival: in humans, for example, sound will arrive at the ear closer to the source up to half a millisecond earlier than it arrives at the other ear. These differences are called interaural time differences. However, the way that the brain processes this information to figure out where the sound came from has been the source of much debate.
Several theories have been proposed for how the brain calculates position from interaural time differences. According to the hemispheric theory, the activities of particular binaurally sensitive neurons on each side of the brain are added together: adding signals in this way has been shown to maximize sensitivity to time differences under simple, controlled circumstances. The peak decoding theory proposes that the brain can work out the location of a sound on the basis of which neurons responded most strongly to the sound.
Both theories have their potential advantages, and there is evidence in support of each. Now, Goodman et al. have used computational simulations to compare the models under ecologically relevant circumstances. The simulations show that the results predicted by both models are inconsistent with those observed in real animals, and they propose that the brain must use the full pattern of neural responses to calculate the location of a sound.
One of the parts of the brain that is responsible for locating sounds is the inferior colliculus. Studies in cats and humans have shown that damage to the inferior colliculus on one side of the brain prevents accurate localization of sounds on the opposite side of the body, but the animals are still able to locate sounds on the same side. This finding is difficult to explain using the hemispheric model, but Goodman et al. show that it can be explained with pattern-based models.
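The decoder comparison can be caricatured in a few lines: a population with heterogeneous best delays is read out either from the summed-activity difference between hemispheres or by matching the full response pattern. The tuning curves, delays, and the gain perturbation standing in for a level/spectrum change are all made up for illustration:

```python
import math

# Heterogeneous population: 60 cells with distinct best delays (µs)
best_delays = [10 * i - 295 for i in range(60)]   # -295 ... 295

def rates(itd, gain=1.0, sigma=120.0):
    """Population firing rates for a source at interaural time
    difference `itd`; `gain` stands in for a level/spectrum change.
    Gaussian tuning curves, made up for illustration."""
    return [gain * math.exp(-((itd - bd) / sigma) ** 2) for bd in best_delays]

# Calibrate both decoders at the reference gain
itds = list(range(-250, 251, 10))
templates = {itd: rates(itd) for itd in itds}

def summed_difference(r):
    # total activity of cells preferring the right minus the left
    return sum(x for x, bd in zip(r, best_delays) if bd > 0) - \
           sum(x for x, bd in zip(r, best_delays) if bd < 0)

calibration = {itd: summed_difference(rates(itd)) for itd in itds}

def decode_hemispheric(r):
    # invert the calibrated summed-activity difference
    d = summed_difference(r)
    return min(itds, key=lambda itd: abs(calibration[itd] - d))

def decode_pattern(r):
    # nearest template under normalized correlation: this uses the
    # heterogeneous tuning of individual cells
    def corr(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / (na * nb)
    return max(itds, key=lambda itd: corr(r, templates[itd]))
```

Both decoders are exact at the calibrated gain, but scaling all rates (a louder sound) leaves only the pattern decoder correct: the normalized correlation is gain-invariant, while the hemispheric difference is driven far off its calibration.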
DOI: http://dx.doi.org/10.7554/eLife.01312.002
doi:10.7554/eLife.01312
PMCID: PMC3844708  PMID: 24302571
sound localization; neural coding; audition
6.  An ecological approach to neural computation 
BMC Neuroscience  2013;14(Suppl 1):P40.
doi:10.1186/1471-2202-14-S1-P40
PMCID: PMC3704892
8.  Computing with Neural Synchrony 
PLoS Computational Biology  2012;8(6):e1002561.
Neurons communicate primarily with spikes, but most theories of neural computation are based on firing rates. Yet, many experimental observations suggest that the temporal coordination of spikes plays a role in sensory processing. Among potential spike-based codes, synchrony appears as a good candidate because neural firing and plasticity are sensitive to fine input correlations. However, it is unclear what role synchrony may play in neural computation, and what functional advantage it may provide. With a theoretical approach, I show that the computational interest of neural synchrony appears when neurons have heterogeneous properties. In this context, the relationship between stimuli and neural synchrony is captured by the concept of synchrony receptive field, the set of stimuli which induce synchronous responses in a group of neurons. In a heterogeneous neural population, it appears that synchrony patterns represent structure or sensory invariants in stimuli, which can then be detected by postsynaptic neurons. The required neural circuitry can spontaneously emerge with spike-timing-dependent plasticity. Using examples in different sensory modalities, I show that this allows simple neural circuits to extract relevant information from realistic sensory stimuli, for example to identify a fluctuating odor in the presence of distractors. This theory of synchrony-based computation shows that relative spike timing may indeed have computational relevance, and suggests new types of neural network models for sensory processing with appealing computational properties.
Author Summary
How does the brain compute? Traditional theories of neural computation describe the operating function of neurons in terms of average firing rates, with the timing of spikes bearing little information. However, numerous studies have shown that spike timing can convey information and that neurons are highly sensitive to synchrony in their inputs. Here I propose a simple spike-based computational framework, based on the idea that stimulus-induced synchrony can be used to extract sensory invariants (for example, the location of a sound source), which is a difficult task for classical neural networks. It relies on the simple remark that a series of repeated coincidences is in itself an invariant. Many aspects of perception rely on extracting invariant features, such as the spatial location of a time-varying sound, the identity of an odor with fluctuating intensity, or the pitch of a musical note. I demonstrate that simple synchrony-based neuron models can extract these useful features, by using spiking models in several sensory modalities.
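The notion of a synchrony receptive field can be sketched with threshold-crossing encoders and a delay-tuned pair of neurons (a binaural toy example; the waveform, delays, and thresholds are arbitrary):

```python
import math

DT = 0.1  # ms per sample

def spike_times(signal, threshold=0.8):
    """Upward threshold crossings: a caricature of an encoding neuron."""
    return [i * DT for i in range(1, len(signal))
            if signal[i - 1] < threshold <= signal[i]]

def delayed(signal, delay_ms):
    """Delayed copy of a waveform (zero-padded at the start)."""
    shift = round(delay_ms / DT)
    return [0.0] * shift + signal[:len(signal) - shift]

def synchrony(t1, t2, window=0.5):
    """Fraction of spikes of neuron 1 with a coincident spike of
    neuron 2: what a downstream coincidence detector responds to."""
    if not t1:
        return 0.0
    return sum(any(abs(t - u) <= window for u in t2) for t in t1) / len(t1)

# A slowly fluctuating waveform reaches the two ears separated by an
# interaural time difference (itd); internal delays d1, d2 belong to
# the neuron pair. The pair fires in synchrony exactly when
# itd + d1 == d2: that set of stimuli is its synchrony receptive field.
wave = [math.sin(0.7 * i * DT) + 0.4 * math.sin(0.23 * i * DT)
        for i in range(4000)]
d1, d2 = 0.0, 3.0  # this pair is tuned to itd = 3 ms

def pair_synchrony(itd):
    left = spike_times(delayed(delayed(wave, itd), d1))
    right = spike_times(delayed(wave, d2))
    return synchrony(left, right)
```

A coincidence-detecting neuron reading this pair would respond selectively to sounds at itd = 3 ms, regardless of the waveform itself, which is the invariance argument of the paper.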
doi:10.1371/journal.pcbi.1002561
PMCID: PMC3375225  PMID: 22719243
9.  Spiking Models for Level-Invariant Encoding 
Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals.
doi:10.3389/fncom.2011.00063
PMCID: PMC3254166  PMID: 22291634
spiking models; sound localization; spike timing; gain control; interaural time difference
10.  The Brian Simulator 
Frontiers in Neuroscience  2009;3(2):192-197.
“Brian” is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience.
doi:10.3389/neuro.01.026.2009
PMCID: PMC2751620  PMID: 20011141
Python; spiking neural networks; simulation; teaching; systems neuroscience
11.  The Cauchy problem for one-dimensional spiking neuron models 
Cognitive Neurodynamics  2007;2(1):21-27.
I consider spiking neuron models defined by a one-dimensional differential equation and a reset—i.e., neuron models of the integrate-and-fire type. I address the question of the existence and uniqueness of a solution on ℝ for a given initial condition. It turns out that the reset introduces a countable and ordered set of backward solutions for a given initial condition. I discuss the implications of these mathematical results in terms of neural coding and spike timing precision.
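The non-uniqueness of backward solutions can be seen numerically: two different pasts of an integrate-and-fire model can merge at a reset and become indistinguishable afterwards. The constants below are chosen so that the merge happens at t ≈ 0.2, and are purely illustrative:

```python
import math

def lif(v0, T=0.6, dt=1e-4, I=2.0, threshold=1.0, reset=0.0):
    """One-dimensional integrate-and-fire model dv/dt = I - v with a
    reset; returns the final potential and the spike count."""
    v, spikes = v0, 0
    for _ in range(round(T / dt)):
        v += dt * (I - v)
        if v >= threshold:
            v = reset       # the reset makes the flow non-invertible
            spikes += 1
    return v, spikes

# Two different pasts, same present: solution A stays subthreshold
# throughout, while solution B starts higher, spikes once near t = 0.2
# and is reset, after which both follow the same orbit. The initial
# values are chosen analytically so the trajectories coincide after
# B's reset.
vA, nA = lif(2 - 2 * math.exp(0.2))   # never spikes
vB, nB = lif(2 - math.exp(0.2))       # spikes once
```

Running the flow backward from the shared final state is therefore ambiguous: the present does not determine whether, or when, the neuron spiked in the past, which is the source of the countable set of backward solutions.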
doi:10.1007/s11571-007-9032-y
PMCID: PMC2289251  PMID: 19003470
Integrate-and-fire; Cauchy problem; Spike timing precision; Reliability; Neuron models
12.  Simulation of networks of spiking neurons: A review of tools and strategies 
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley type, integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models are implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks.
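The clock-driven/event-driven distinction reviewed here can be illustrated with a leaky integrate-and-fire neuron receiving delta-synapse inputs. Between input spikes the membrane only decays, so the event-driven update is exact; the parameters are toy values, not a benchmark from the review:

```python
import math

tau, w, theta = 10.0, 0.35, 1.0                        # ms, jump size, threshold
inputs = [1.0, 2.0, 2.5, 2.9, 15.0, 15.3, 15.6, 40.0]  # input spike times (ms)

def event_driven(times):
    """Event-driven integration: the state is advanced analytically
    from one input spike to the next, so threshold crossings (which
    can only occur at input times here) are exact."""
    v, t_prev, out = 0.0, 0.0, []
    for t in times:
        v *= math.exp(-(t - t_prev) / tau)   # exact decay between events
        v += w                               # synaptic jump
        if v >= theta:
            out.append(t)
            v = 0.0
        t_prev = t
    return out

def clock_driven(times, dt=0.01):
    """Clock-driven integration: forward Euler on a fixed grid, with
    each input applied at its nearest grid step."""
    grid = {round(t / dt) for t in times}
    v, out = 0.0, []
    for i in range(round(50.0 / dt)):
        v += dt * (-v / tau)
        if i in grid:
            v += w
            if v >= theta:   # pure decay cannot cross, so check at inputs
                out.append(i * dt)
                v = 0.0
    return out
```

With a sufficiently small time step the two strategies agree; the trade-off discussed in the review is that the clock-driven scheme pays a cost per step and quantizes spike times to the grid, while the event-driven scheme pays a cost per spike and keeps spike timing exact for models with analytical solutions.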
doi:10.1007/s10827-007-0038-6
PMCID: PMC2638500  PMID: 17629781
Spiking neural networks; Simulation tools; Integration strategies; Clock-driven; Event-driven
13.  Brian: A Simulator for Spiking Neural Networks in Python 
“Brian” is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
doi:10.3389/neuro.11.005.2008
PMCID: PMC2605403  PMID: 19115011
Python; spiking neurons; simulation; integrate and fire; teaching; neural networks; computational neuroscience; software
