1.  Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo 
PLoS Computational Biology  2014;10(4):e1003560.
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in the responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.
Author Summary
Neurons spike when their membrane potential exceeds a threshold value, but this value has been shown to be variable in the same neuron recorded in vivo. This variability could reflect noise, or deterministic processes that make the threshold vary with the membrane potential. The second alternative would have important functional consequences. Here, we show that threshold variability is a genuine feature of neurons, which reflects adaptation to the membrane potential at a short timescale, with little contribution from noise. This demonstrates that a deterministic model can predict spikes based only on the membrane potential.
doi:10.1371/journal.pcbi.1003560
PMCID: PMC3983065  PMID: 24722397
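The adaptive-threshold mechanism described in entry 1 lends itself to a compact numerical illustration. Below is a minimal sketch, assuming a first-order threshold that relaxes toward a value tracking the membrane potential; the surrogate voltage trace, the 5 ms time constant and the coupling strength are illustrative assumptions of mine, not the paper's fitted model.

```python
import numpy as np

# Minimal sketch of a spike threshold adapting to the membrane potential.
# The first-order dynamics and all parameter values are illustrative.
dt = 0.1e-3            # time step (s)
tau_theta = 5e-3       # threshold adaptation time constant (s): short timescale
theta0 = -50e-3        # baseline threshold (V)
alpha = 0.5            # coupling of threshold to membrane potential

rng = np.random.default_rng(0)
T = int(1.0 / dt)
v = -65e-3 + np.cumsum(rng.normal(0, 2e-4, T))   # surrogate Vm fluctuations

theta = np.empty(T)
theta[0] = theta0
for t in range(1, T):
    # the threshold relaxes toward a value that tracks the current Vm
    target = theta0 + alpha * (v[t] + 65e-3)
    theta[t] = theta[t-1] + dt * (target - theta[t-1]) / tau_theta

# Because theta tracks slow Vm fluctuations, only depolarizations faster
# than tau_theta can cross it: slow drift is filtered out.
crossings = np.flatnonzero((v[1:] > theta[1:]) & (v[:-1] <= theta[:-1]))
print(len(crossings), "threshold crossings")
```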
2.  Equation-oriented specification of neural models for simulations 
Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator.
doi:10.3389/fninf.2014.00006
PMCID: PMC3912318  PMID: 24550820
python; neuroscience; computational neuroscience; simulation; software
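As an illustration of the equation-oriented style described in entry 2, here is a minimal Brian 2 script in which the model is a textual equation string rather than a pre-defined library component; the membrane equation and all parameter values are arbitrary choices for the example.

```python
from brian2 import *

# Minimal sketch of equation-oriented model specification in Brian 2.
# The equations and parameter values are arbitrary illustrations.
tau = 10*ms
v_rest = -65*mV
eqs = '''
dv/dt = (v_rest - v + v_in) / tau : volt
v_in : volt   # per-neuron external drive
'''
group = NeuronGroup(100, eqs,
                    threshold='v > -50*mV',
                    reset='v = -60*mV',
                    method='exact')
group.v = v_rest
group.v_in = 'i * 0.3*mV'      # heterogeneous drive across the group
spikes = SpikeMonitor(group)
run(500*ms)
print(spikes.num_spikes, "spikes")
```

Because the description is just mathematics, the same string could equally be parsed into C code or a formatted model description, which is the portability argument made in the abstract.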
3.  Sharpness of Spike Initiation in Neurons Explained by Compartmentalization 
PLoS Computational Biology  2013;9(12):e1003338.
In cortical neurons, spikes are initiated in the axon initial segment. Seen at the soma, they appear surprisingly sharp. A standard explanation is that the current coming from the axon becomes sharp as the spike is actively backpropagated to the soma. However, sharp initiation of spikes is also seen in the input–output properties of neurons, and not only in the somatic shape of spikes; for example, cortical neurons can transmit high frequency signals. An alternative hypothesis is that Na channels cooperate, but it is not currently supported by direct experimental evidence. I propose a simple explanation based on the compartmentalization of spike initiation. When Na channels are placed in the axon, the soma acts as a current sink for the Na current. I show that there is a critical distance to the soma above which an instability occurs, so that Na channels open abruptly rather than gradually as a function of somatic voltage.
Author Summary
Spike initiation determines how the combined inputs to a neuron are converted to an output. Since the pioneering work of Hodgkin and Huxley, it is known that spikes are generated by the opening of sodium channels with depolarization. According to this standard theory, these channels should open gradually when the membrane potential increases, but spikes measured at the soma appear to suddenly rise from rest. This apparent contradiction has triggered a controversy about the origin of spike “sharpness.” This study shows with biophysical modelling that if sodium channels are placed in the axon rather than in the soma, they open all at once when the somatic membrane potential exceeds a critical value. This work explains the sharpness of spike initiation and provides another demonstration that morphology plays a critical role in neural function.
doi:10.1371/journal.pcbi.1003338
PMCID: PMC3854010  PMID: 24339755
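The instability argument of entry 3 can be made concrete with a steady-state calculation: the axonal Na current must balance the axial current flowing to the somatic sink, and beyond a critical axial resistance the balance equation acquires multiple solutions, so the channels open all-or-none. The sketch below uses illustrative channel parameters of my own choosing, not the paper's.

```python
import numpy as np

# Steady states of an axonal site with Na channels, coupled to a somatic
# voltage source through an axial resistance Ra (a proxy for distance).
# All parameter values are illustrative assumptions.
g_na = 50e-9       # axonal Na conductance (S)
E_na = 60e-3       # Na reversal potential (V)
ka = 6e-3          # activation slope factor (V)
v_half = -40e-3    # half-activation voltage (V)

def m_inf(v):
    return 1.0 / (1.0 + np.exp(-(v - v_half) / ka))

def n_steady_states(vs, ra):
    """Count axonal voltages where the axial and Na currents balance."""
    va = np.linspace(-80e-3, 55e-3, 20000)
    f = (vs - va) / ra + g_na * m_inf(va) * (E_na - va)
    return int(np.sum(np.sign(f[1:]) != np.sign(f[:-1])))

for ra in (1e6, 20e6):   # proximal vs distal initiation site (ohm)
    n = max(n_steady_states(vs, ra)
            for vs in np.linspace(-75e-3, -45e-3, 61))
    print(f"Ra = {ra:.0e} ohm -> up to {n} steady state(s):",
          "abrupt (all-or-none) opening" if n > 1 else "graded opening")
```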
4.  Decoding neural responses to temporal cues for sound localization 
eLife  2013;2:e01312.
The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies.
DOI: http://dx.doi.org/10.7554/eLife.01312.001
eLife digest
Having two ears allows animals to localize the source of a sound. For example, barn owls can snatch their prey in complete darkness by relying on sound alone. It has been known for a long time that this ability depends on tiny differences in the sounds that arrive at each ear, including differences in the time of arrival: in humans, for example, sound will arrive at the ear closer to the source up to half a millisecond earlier than it arrives at the other ear. These differences are called interaural time differences. However, the way that the brain processes this information to figure out where the sound came from has been the source of much debate.
Several theories have been proposed for how the brain calculates position from interaural time differences. According to the hemispheric theory, the activities of particular binaurally sensitive neurons on each side of the brain are added together: adding signals in this way has been shown to maximize sensitivity to time differences under simple, controlled circumstances. The peak decoding theory proposes that the brain can work out the location of a sound on the basis of which neurons responded most strongly to the sound.
Both theories have their potential advantages, and there is evidence in support of each. Now, Goodman et al. have used computational simulations to compare the models under ecologically relevant circumstances. The simulations show that the results predicted by both models are inconsistent with those observed in real animals, and they propose that the brain must use the full pattern of neural responses to calculate the location of a sound.
One of the parts of the brain that is responsible for locating sounds is the inferior colliculus. Studies in cats and humans have shown that damage to the inferior colliculus on one side of the brain prevents accurate localization of sounds on the opposite side of the body, but the animals are still able to locate sounds on the same side. This finding is difficult to explain using the hemispheric model, but Goodman et al. show that it can be explained with pattern-based models.
DOI: http://dx.doi.org/10.7554/eLife.01312.002
doi:10.7554/eLife.01312
PMCID: PMC3844708  PMID: 24302571
sound localization; neural coding; audition
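The decoder comparison in entry 4 can be sketched on synthetic tuning curves: a hemispheric decoder (difference of summed left/right activity, inverted through its average response curve) versus a pattern decoder (nearest stored population pattern). The tuning shapes and noise model below are illustrative assumptions, far simpler than the paper's acoustical simulations.

```python
import numpy as np

# Hemispheric vs pattern decoding of ITD from a heterogeneous population.
# Gaussian tuning and additive noise are illustrative assumptions.
rng = np.random.default_rng(1)
n_cells = 40
itds = np.linspace(-0.5, 0.5, 201)                 # candidate ITDs (ms)
best = rng.uniform(-0.5, 0.5, n_cells)             # heterogeneous best ITDs
sigma = 0.15

def rates(itd):
    return np.exp(-0.5 * ((itd - best) / sigma) ** 2)

templates = np.stack([rates(x) for x in itds])     # stored mean patterns

true_itd = 0.2
r = rates(true_itd) + rng.normal(0, 0.1, n_cells)  # noisy population response

# Hemispheric decoder: map the summed activity difference back through
# the average difference curve.
diff_curve = templates[:, best > 0].sum(1) - templates[:, best < 0].sum(1)
obs_diff = r[best > 0].sum() - r[best < 0].sum()
itd_hemi = itds[np.argmin(np.abs(diff_curve - obs_diff))]

# Pattern decoder: nearest template over the full heterogeneous population.
itd_patt = itds[np.argmin(((templates - r) ** 2).sum(1))]

print(f"true {true_itd:.2f}  hemispheric {itd_hemi:.2f}  pattern {itd_patt:.2f}")
```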
6.  An ecological approach to neural computation 
BMC Neuroscience  2013;14(Suppl 1):P40.
doi:10.1186/1471-2202-14-S1-P40
PMCID: PMC3704892
8.  Computing with Neural Synchrony 
PLoS Computational Biology  2012;8(6):e1002561.
Neurons communicate primarily with spikes, but most theories of neural computation are based on firing rates. Yet, many experimental observations suggest that the temporal coordination of spikes plays a role in sensory processing. Among potential spike-based codes, synchrony appears as a good candidate because neural firing and plasticity are sensitive to fine input correlations. However, it is unclear what role synchrony may play in neural computation, and what functional advantage it may provide. With a theoretical approach, I show that the computational interest of neural synchrony appears when neurons have heterogeneous properties. In this context, the relationship between stimuli and neural synchrony is captured by the concept of synchrony receptive field, the set of stimuli which induce synchronous responses in a group of neurons. In a heterogeneous neural population, it appears that synchrony patterns represent structure or sensory invariants in stimuli, which can then be detected by postsynaptic neurons. The required neural circuitry can spontaneously emerge with spike-timing-dependent plasticity. Using examples in different sensory modalities, I show that this allows simple neural circuits to extract relevant information from realistic sensory stimuli, for example to identify a fluctuating odor in the presence of distractors. This theory of synchrony-based computation shows that relative spike timing may indeed have computational relevance, and suggests new types of neural network models for sensory processing with appealing computational properties.
Author Summary
How does the brain compute? Traditional theories of neural computation describe the operating function of neurons in terms of average firing rates, with the timing of spikes bearing little information. However, numerous studies have shown that spike timing can convey information and that neurons are highly sensitive to synchrony in their inputs. Here I propose a simple spike-based computational framework, based on the idea that stimulus-induced synchrony can be used to extract sensory invariants (for example, the location of a sound source), which is a difficult task for classical neural networks. It relies on the simple remark that a series of repeated coincidences is in itself an invariant. Many aspects of perception rely on extracting invariant features, such as the spatial location of a time-varying sound, the identity of an odor with fluctuating intensity, or the pitch of a musical note. I demonstrate that simple synchrony-based neuron models can extract these useful features, by using spiking models in several sensory modalities.
doi:10.1371/journal.pcbi.1002561
PMCID: PMC3375225  PMID: 22719243
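A minimal sketch of the synchrony idea in entry 8: two neurons with heterogeneous thresholds cross them at nearly the same times only when they share the same fast input, so coincidence counts signal shared stimulus structure. The signal statistics, thresholds and coincidence window below are illustrative assumptions.

```python
import numpy as np

# Coincidence counting as a synchrony detector for shared input structure.
dt, T = 1e-4, 20000

def smooth_noise(seed, tau=2e-3):
    g = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):       # Ornstein-Uhlenbeck-like fluctuating signal
        x[t] = x[t-1] - dt * x[t-1] / tau + g.normal(0, 0.05)
    return x

def crossing_times(signal, threshold):
    up = (signal[1:] > threshold) & (signal[:-1] <= threshold)
    return np.flatnonzero(up) * dt

def coincidences(t1, t2, window=1e-3):
    """Spikes in t1 that have a spike in t2 within +/- window."""
    if len(t1) == 0 or len(t2) == 0:
        return 0
    i = np.searchsorted(t2, t1)
    lo = np.clip(i - 1, 0, len(t2) - 1)
    hi = np.clip(i, 0, len(t2) - 1)
    nearest = np.minimum(np.abs(t1 - t2[lo]), np.abs(t1 - t2[hi]))
    return int(np.sum(nearest <= window))

shared = smooth_noise(0)
same = coincidences(crossing_times(shared, 0.20),
                    crossing_times(shared, 0.25))
indep = coincidences(crossing_times(shared, 0.20),
                     crossing_times(smooth_noise(1), 0.25))
print("coincidences, shared input:", same, "| independent inputs:", indep)
```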
10.  A functional spiking model of the ITD processing pathway of the barn owl 
BMC Neuroscience  2011;12(Suppl 1):P20.
doi:10.1186/1471-2202-12-S1-P20
PMCID: PMC3240301
11.  Encoding the pitch of sounds using synchrony receptive fields 
BMC Neuroscience  2011;12(Suppl 1):P21.
doi:10.1186/1471-2202-12-S1-P21
PMCID: PMC3240312
12.  Impact of Fast Sodium Channel Inactivation on Spike Threshold Dynamics and Synaptic Integration 
PLoS Computational Biology  2011;7(5):e1001129.
Neurons spike when their membrane potential exceeds a threshold value. In central neurons, the spike threshold is not constant but depends on the stimulation. Thus, input-output properties of neurons depend both on the effect of presynaptic spikes on the membrane potential and on the dynamics of the spike threshold. Among the possible mechanisms that may modulate the threshold, one strong candidate is Na channel inactivation, because it specifically impacts spike initiation without affecting the membrane potential. We collected voltage-clamp data from the literature and we found, based on a theoretical criterion, that the properties of Na inactivation could indeed cause substantial threshold variability on their own. By analyzing simple neuron models with fast Na inactivation (one channel subtype), we found that the spike threshold is correlated with the mean membrane potential and negatively correlated with the preceding depolarization slope, consistent with experiments. We then analyzed the impact of threshold dynamics on synaptic integration. The difference between the postsynaptic potential (PSP) and the dynamic threshold in response to a presynaptic spike defines an effective PSP. When the neuron is sufficiently depolarized, this effective PSP is briefer than the PSP. This mechanism regulates the temporal window of synaptic integration in an adaptive way. Finally, we discuss the role of other potential mechanisms. Distal spike initiation, channel noise and Na activation dynamics cannot account for the observed negative slope-threshold relationship, while adaptive conductances (e.g. K+) and Na inactivation can. We conclude that Na inactivation is a metabolically efficient mechanism to control the temporal resolution of synaptic integration.
Author Summary
Neurons spike when their combined inputs exceed a threshold value, but recent experimental findings have shown that this value also depends on the inputs. Thus, to understand how neurons respond to input spikes, it is important to know how inputs modify the spike threshold. Spikes are generated by sodium channels, which inactivate when the neuron is depolarized, raising the threshold for spike initiation. We found that inactivation properties of sodium channels could indeed cause substantial threshold variability in central neurons. We then analyzed in models the implications of this form of threshold modulation on neuronal function. We found that this mechanism makes neurons more sensitive to coincident spikes and provides them with an energetically efficient form of gain control.
doi:10.1371/journal.pcbi.1001129
PMCID: PMC3088652  PMID: 21573200
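The inactivation mechanism of entry 12 can be sketched with a logarithmic threshold-inactivation relation, theta = V_T - k*ln(h), a simplified form of the threshold theory; the gating parameters and surrogate voltage trace below are illustrative assumptions.

```python
import numpy as np

# Fast Na inactivation h raises the spike threshold with depolarization.
# Parameters and the Vm trace are illustrative, not fitted values.
dt = 0.1e-3
V_T, k = -55e-3, 5e-3                      # baseline threshold, slope (V)
h_half, k_h, tau_h = -65e-3, 6e-3, 5e-3    # inactivation curve and kinetics

def h_inf(v):
    return 1.0 / (1.0 + np.exp((v - h_half) / k_h))

rng = np.random.default_rng(3)
T = 20000
v = -70e-3 + np.cumsum(rng.normal(0, 1.5e-4, T))   # surrogate Vm trace
v = np.clip(v, -80e-3, -50e-3)

h = np.empty(T)
h[0] = h_inf(v[0])
for t in range(1, T):
    h[t] = h[t-1] + dt * (h_inf(v[t]) - h[t-1]) / tau_h

theta = V_T - k * np.log(h)

# Depolarization inactivates Na channels (h falls), which raises theta:
print("corr(Vm, theta) =", round(np.corrcoef(v, theta)[0, 1], 3))
```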
13.  Fitting Neuron Models to Spike Trains 
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
doi:10.3389/fnins.2011.00009
PMCID: PMC3051271  PMID: 21415925
python; spiking models; simulation; optimization; parallel computing
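The vectorized fitting strategy of entry 13 can be sketched as follows: all candidate parameter sets are simulated simultaneously, one vector operation per time step, then scored against the recorded spike train. The model, coincidence score and "recorded" target below are illustrative stand-ins for the paper's method.

```python
import numpy as np

# Vectorized grid search for LIF parameters against a target spike train.
dt, T = 0.1e-3, 50000
rng = np.random.default_rng(4)
I = 1.2 + 0.5 * rng.standard_normal(T).cumsum() * np.sqrt(dt)  # input (a.u.)

def simulate(tau, thresh):
    """Vectorized LIF: tau and thresh are arrays of candidates."""
    n = len(tau)
    v = np.zeros(n)
    spikes = [[] for _ in range(n)]
    for t in range(T):
        v += dt * (I[t] - v) / tau          # one vector op per step
        fired = v > thresh
        v[fired] = 0.0
        for idx in np.flatnonzero(fired):
            spikes[idx].append(t * dt)
    return [np.array(s) for s in spikes]

# "Recorded" spikes: generated from hidden true parameters for the demo.
target = simulate(np.array([20e-3]), np.array([1.0]))[0]

taus = np.repeat(np.linspace(5e-3, 40e-3, 8), 8)
threshs = np.tile(np.linspace(0.6, 1.4, 8), 8)
candidates = simulate(taus, threshs)

def score(s, t, w=2e-3):   # crude coincidence-based score
    if len(s) == 0:
        return 0.0
    hits = sum(np.min(np.abs(t - x)) <= w for x in s) if len(t) else 0
    return 2.0 * hits / (len(s) + len(t))

best = int(np.argmax([score(c, target) for c in candidates]))
print(f"best: tau={taus[best]*1e3:.0f} ms, threshold={threshs[best]:.2f}")
```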
14.  Brian Hears: Online Auditory Processing Using Vectorization Over Channels 
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
doi:10.3389/fninf.2011.00009
PMCID: PMC3143729  PMID: 21811453
auditory filter; vectorization; Python; Brian; GPU
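The channel-vectorization idea of entry 14 is simple to sketch: the per-sample update of a whole filterbank is a single vector operation across channels, so the interpreter loop runs over time only. A first-order lowpass stands in below for the actual gammatone filters; all values are illustrative.

```python
import numpy as np

# Vectorizing a 3000-channel filterbank over channels, not over time.
fs = 44100.0
n_channels = 3000
cf = np.geomspace(20.0, 20000.0, n_channels)       # center frequencies (Hz)
alpha = np.exp(-2 * np.pi * cf / fs)               # per-channel coefficient

rng = np.random.default_rng(5)
sound = rng.standard_normal(4410)                  # 100 ms of noise

state = np.zeros(n_channels)
output = np.empty((len(sound), n_channels))
for t, x in enumerate(sound):                      # loop over samples...
    state = alpha * state + (1 - alpha) * x        # ...vector op over channels
    output[t] = state

print(output.shape)   # (samples, channels), ready for later stages
```

With 3000 channels per vector operation, the interpreted loop overhead is amortized, which is the point made in the abstract.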
15.  Spike-Timing Dependent Plasticity and Feed-Forward Input Oscillations Produce Precise and Invariant Spike Phase-Locking 
In the hippocampus and the neocortex, the coupling between local field potential (LFP) oscillations and the spiking of single neurons can be highly precise, across neuronal populations and cell types. Spike phase (i.e., the spike time with respect to a reference oscillation) is known to carry reliable information, both with phase-locking behavior and with more complex phase relationships, such as phase precession. How this precision is achieved by neuronal populations, whose membrane properties and total input may be quite heterogeneous, is nevertheless unknown. In this note, we investigate a simple mechanism for learning precise LFP-to-spike coupling in feed-forward networks – the reliable, periodic modulation of presynaptic firing rates during oscillations, coupled with spike-timing dependent plasticity. When oscillations are within the biological range (2–150 Hz), firing rates of the inputs change on a timescale highly relevant to spike-timing dependent plasticity (STDP). Through analytic and computational methods, we find points of stable phase-locking for a neuron with plastic input synapses. These points correspond to precise phase-locking behavior in the feed-forward network. The location of these points depends on the oscillation frequency of the inputs, the STDP time constants, and the balance of potentiation and de-potentiation in the STDP rule. For a given input oscillation, the balance of potentiation and de-potentiation in the STDP rule is the critical parameter that determines the phase at which an output neuron will learn to spike. These findings are robust to changes in intrinsic post-synaptic properties. Finally, we discuss implications of this mechanism for stable learning of spike-timing in the hippocampus.
doi:10.3389/fncom.2011.00045
PMCID: PMC3216007  PMID: 22110429
spike-timing dependent plasticity; oscillations; phase-locking; stable learning; stability of neuronal plasticity; place fields
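The stable phase-locking points in entry 15 can be found numerically: compute the expected weight drift for an output spike at a given phase, with sinusoidally modulated presynaptic rates and an exponential STDP window, and look for downward zero crossings. The STDP constants and oscillation frequency below are illustrative assumptions.

```python
import numpy as np

# Expected STDP drift as a function of output spike phase.
f = 8.0                      # input oscillation (Hz, theta range)
r0 = 10.0                    # mean presynaptic rate (Hz)
A_plus, A_minus = 1.0, 1.05  # potentiation / depression amplitudes
tau_plus, tau_minus = 17e-3, 34e-3

dt = 1e-4
lag = np.arange(dt, 0.5, dt)          # pre-post lags to integrate over

def drift(phi):
    t_post = phi / (2 * np.pi * f)
    rate = lambda t: r0 * (1 + np.cos(2 * np.pi * f * t))
    ltp = np.sum(A_plus * np.exp(-lag / tau_plus) * rate(t_post - lag)) * dt
    ltd = np.sum(A_minus * np.exp(-lag / tau_minus) * rate(t_post + lag)) * dt
    return ltp - ltd

phis = np.linspace(0, 2 * np.pi, 200)
d = np.array([drift(p) for p in phis])
# Stable phases: drift crosses zero from positive to negative.
stable = phis[:-1][(d[:-1] > 0) & (d[1:] <= 0)]
print("stable phase(s), radians:", np.round(stable, 2))
```

Shifting the potentiation/depression balance (A_minus) moves the zero crossing, which mirrors the abstract's claim that this balance sets the learned phase.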
16.  Spiking Models for Level-Invariant Encoding 
Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals.
doi:10.3389/fncom.2011.00063
PMCID: PMC3254166  PMID: 22291634
spiking models; sound localization; spike timing; gain control; interaural time difference
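The invariance condition in entry 16 can be sketched directly: if the membrane potential and the threshold are both linear functionals of the input, and the reset is to zero, then scaling the input scales both identically and spike times are unchanged. The model form and input statistics below are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

# Integrate-and-fire model with a threshold that tracks input level.
dt = 0.1e-3
tau_v, tau_th, k = 5e-3, 20e-3, 1.0

def spike_times(s):
    v, th, out = 0.0, 0.0, []
    for t in range(len(s)):
        v += dt * (s[t] - v) / tau_v
        th += dt * (k * s[t] - th) / tau_th   # threshold tracks input level
        if th > 0 and v > th:
            out.append(t)
            v = 0.0              # reset to 0 preserves the scaling property
    return out

rng = np.random.default_rng(6)
s = np.abs(np.convolve(rng.standard_normal(30000),
                       np.ones(50) / 50, mode="same")) + 0.05

# A power-of-two gain keeps the floating-point check exact.
quiet, loud = spike_times(s), spike_times(16 * s)
print(len(quiet), "spikes; identical under 16x gain:", quiet == loud)
```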
17.  Spike-Timing-Based Computation in Sound Localization 
PLoS Computational Biology  2010;6(11):e1000993.
Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
Author Summary
There is growing evidence that the temporal coordination of spikes is important for neural computation, especially in auditory perception. Yet it is unclear what computational advantage it might provide, if any. We investigated this issue in the context of a difficult auditory task which must be performed quickly by an animal to escape a predator: locating the source of a sound independently of the source signal. Using models, we found that when neurons encode auditory stimuli in spike trains, the location-specific structure of binaural signals is transformed into location-specific synchrony patterns. These patterns are then mapped to the activation of specific neural assemblies. We designed a simple neural network model based on this principle which was able to estimate both the azimuth and elevation of unknown sounds in a realistic virtual acoustic environment. The relationship between binaural cues and source location could be learned through a supervised Hebbian procedure. The model demonstrates the computational relevance of relative spike timing in a difficult task where spatial information must be extracted independent of other dimensions of the stimuli.
doi:10.1371/journal.pcbi.1000993
PMCID: PMC2978676  PMID: 21085681
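The core computational problem in entry 17, recovering a location-specific delay independently of the unknown source signal, can be sketched with plain cross-correlation in one frequency band; the acoustics are reduced here to a pure delay (no HRTFs, no elevation), so this is only a caricature of the paper's model.

```python
import numpy as np

# Source-independent delay estimation from two "ears" in one band.
fs = 44100
rng = np.random.default_rng(7)

def bandpass(x, f0=600.0, half_ms=8):
    n = half_ms * fs // 1000
    t = np.arange(-n, n) / fs
    kernel = np.cos(2 * np.pi * f0 * t) * np.hanning(2 * n)
    return np.convolve(x, kernel, mode="same")

itd_samples = 20                      # true delay: ~0.45 ms
for trial in range(3):
    source = rng.standard_normal(fs // 10)        # a new unknown source
    left = bandpass(source)
    right = bandpass(np.roll(source, itd_samples))
    lags = np.arange(-40, 41)
    xc = [np.dot(left[40:-40], right[40 + lag:len(right) - 40 + lag])
          for lag in lags]
    print("estimated ITD (samples):", lags[np.argmax(xc)])
```

The estimate is stable across trials even though the source changes every time, which is the invariance the synchrony patterns exploit.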
18.  A Threshold Equation for Action Potential Initiation 
PLoS Computational Biology  2010;6(7):e1000850.
In central neurons, the threshold for spike initiation can depend on the stimulus and varies between cells and between recording sites in a given cell, but it is unclear what mechanisms underlie this variability. Properties of ionic channels are likely to play a role in threshold modulation. We examined in models the influence of Na channel activation, inactivation, slow voltage-gated channels and synaptic conductances on spike threshold. We propose a threshold equation which quantifies the contribution of all these mechanisms. It provides an instantaneous time-varying value of the threshold, which applies to neurons with fluctuating inputs. We deduce a differential equation for the threshold, similar to the equations of gating variables in the Hodgkin-Huxley formalism, which describes how the spike threshold varies with the membrane potential, depending on channel properties. We find that spike threshold depends logarithmically on Na channel density, and that Na channel inactivation and K channels can dynamically modulate it in an adaptive way: the threshold increases with membrane potential and after every action potential. Our equation was validated with simulations of a previously published multicompartmental model of spike initiation. Finally, we observed that threshold variability in models depends crucially on the shape of the Na activation function near spike initiation (about −55 mV), while its parameters are adjusted near half-activation voltage (about −30 mV), which might explain why many models exhibit little threshold variability, contrary to experimental observations. We conclude that ionic channels can account for large variations in spike threshold.
Author Summary
Neurons communicate primarily with stereotypical electrical impulses, action potentials, which are fired when a threshold level of excitation is reached. This threshold varies between cells and over time as a function of previous stimulations, which has major functional implications on the integrative properties of neurons. Ionic channels are thought to play a central role in this modulation but the precise relationship between their properties and the threshold is unclear. We examined this relationship in biophysical models and derived a formula which quantifies the contribution of various mechanisms. The originality of our approach is that it provides an instantaneous time-varying value for the threshold, which applies to the highly fluctuating regimes characterizing neurons in vivo. In particular, two known ionic mechanisms were found to make the threshold adapt to the membrane potential, thus providing the cell with a form of gain control.
doi:10.1371/journal.pcbi.1000850
PMCID: PMC2900290  PMID: 20628619
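In display form, the two headline results of entry 18 can be sketched as follows; the notation and the exact functional form are my simplified reconstruction, not the paper's verbatim equations:

```latex
% Threshold dynamics analogous to a Hodgkin-Huxley gating variable,
% with a logarithmic dependence on Na channel density g_Na and
% inactivation h (simplified reconstruction, notation mine):
\tau_\theta \frac{d\theta}{dt} = \theta_\infty(V) - \theta,
\qquad
\theta_\infty(V) \approx V_T - k_a \ln g_{\mathrm{Na}} - k_a \ln h_\infty(V)
```

Since h_inf decreases with V, the second logarithm makes the equilibrium threshold rise with depolarization, matching the adaptive behavior described in the abstract.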
19.  Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings 
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
doi:10.3389/neuro.11.002.2010
PMCID: PMC2835507  PMID: 20224819
model fitting; electrophysiology; spiking models; simulation; GPU; distributed computing; adaptive threshold; optimization
20.  The Brian Simulator 
Frontiers in Neuroscience  2009;3(2):192-197.
“Brian” is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience.
doi:10.3389/neuro.01.026.2009
PMCID: PMC2751620  PMID: 20011141
Python; spiking neural networks; simulation; teaching; systems neuroscience
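For flavor, here is a short script in the Brian 1 syntax of the era of entry 20, along the lines of the standard current-based random network example distributed with Brian; the values are quoted from memory, so treat the details as approximate.

```python
from brian import *

# Classic CUBA-style random network in Brian 1 syntax (values approximate).
eqs = '''
dv/dt  = (ge + gi - (v + 49*mV)) / (20*ms) : volt
dge/dt = -ge / (5*ms)                      : volt
dgi/dt = -gi / (10*ms)                     : volt
'''
P = NeuronGroup(4000, eqs, threshold=-50*mV, reset=-60*mV)
P.v = -60*mV
Pe, Pi = P.subgroup(3200), P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62*mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=-9*mV, sparseness=0.02)
M = SpikeMonitor(P)
run(1 * second)
print(M.nspikes, 'spikes')
```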
21.  The Cauchy problem for one-dimensional spiking neuron models 
Cognitive Neurodynamics  2007;2(1):21-27.
I consider spiking neuron models defined by a one-dimensional differential equation and a reset, i.e., neuron models of the integrate-and-fire type. I address the question of the existence and uniqueness of a solution on $\mathbb{R}$ for a given initial condition. It turns out that the reset introduces a countable and ordered set of backward solutions for a given initial condition. I discuss the implications of these mathematical results in terms of neural coding and spike timing precision.
doi:10.1007/s11571-007-9032-y
PMCID: PMC2289251  PMID: 19003470
Integrate-and-fire; Cauchy problem; Spike timing precision; Reliability; Neuron models
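For concreteness, the class of models considered in entry 21 can be written in standard integrate-and-fire notation (my transcription of the abstract's description):

```latex
% One-dimensional spiking model with reset: drift f, threshold theta,
% reset potential V_r.
\frac{dV}{dt} = f(V, t) \quad \text{for } V(t) < \theta,
\qquad
\lim_{s \to t^-} V(s) = \theta \;\Longrightarrow\; V(t) = V_r
```

The non-uniqueness of backward solutions arises because the reset map is not invertible: many pre-reset trajectories end at the same post-reset state.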
22.  Simulation of networks of spiking neurons: A review of tools and strategies 
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We then give an overview of the different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley type, integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks.
doi:10.1007/s10827-007-0038-6
PMCID: PMC2638500  PMID: 17629781
Spiking neural networks; Simulation tools; Integration strategies; Clock-driven; Event-driven
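The two integration strategies reviewed in entry 22 can be contrasted on a leaky integrate-and-fire neuron with delta synapses; the parameters and input train below are illustrative.

```python
import numpy as np

# Clock-driven vs event-driven integration of a LIF with delta synapses.
tau, thresh, w = 20e-3, 1.0, 0.4
inputs = np.array([5e-3, 12e-3, 14e-3, 15.5e-3, 40e-3])  # input spike times

# Clock-driven: update every time step (Euler here); spike times are
# aligned to the dt grid.
def clock_driven(dt=1e-4):
    v, out = 0.0, []
    incoming = set(np.round(inputs / dt).astype(int))
    for step in range(int(0.05 / dt)):
        v += dt * (-v / tau)
        if step in incoming:
            v += w
        if v >= thresh:
            out.append(step * dt)
            v = 0.0
    return out

# Event-driven: the state is updated only at incoming events, using the
# exact solution of the dynamics between events.
def event_driven():
    v, last, out = 0.0, 0.0, []
    for t in inputs:
        v *= np.exp(-(t - last) / tau)   # exact decay between events
        v += w
        last = t
        if v >= thresh:
            out.append(t)                # exact (grid-free) spike time
            v = 0.0
    return out

print("clock-driven :", np.round(clock_driven(), 5))
print("event-driven :", np.round(event_driven(), 5))
```

For this model the event-driven result is exact, while the clock-driven result converges to it as dt shrinks, which is the precision trade-off discussed in the review.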
23.  Brian: A Simulator for Spiking Neural Networks in Python 
“Brian” is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
doi:10.3389/neuro.11.005.2008
PMCID: PMC2605403  PMID: 19115011
Python; spiking neurons; simulation; integrate and fire; teaching; neural networks; computational neuroscience; software
