Results 1-25 (38)

1.  Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models 
PLoS Computational Biology  2015;11(6):e1004275.
Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emergent technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons.
Author Summary
Large-scale, high-throughput data acquisition is revolutionizing the field of neuroscience. Single-neuron electrophysiology is moving from the situation where a highly skilled experimentalist can patch a few cells per day, to a situation where robots will collect large amounts of data. To take advantage of this quantity of data, this technological advance requires a paradigm shift in the experimental design and analysis. Presently, most single-neuron experimental studies rely on old protocols—such as injections of steps and ramps of current—that rarely inform theoreticians and modelers interested in emergent properties of the brain. Here, we describe an efficient protocol for high-throughput in vitro electrophysiology as well as a set of mathematical tools that neuroscientists can use to directly translate experimental data into realistic spiking neuron models. The efficiency of the proposed method makes it suitable for high-throughput data analysis, allowing for the generation of a standardized database of realistic single-neuron models.
PMCID: PMC4470831  PMID: 26083597
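The Generalized Integrate-and-Fire family referenced above can be illustrated with a minimal sketch: a leaky integrator plus a spike-triggered adaptation current. All parameter values below are hypothetical and chosen only for illustration; the paper's actual model and convex fitting procedure are more elaborate.

```python
def simulate_gif(current, dt=0.1, v_rest=-70.0, v_thresh=-50.0, v_reset=-65.0,
                 tau_m=20.0, adapt_jump=2.0, tau_adapt=100.0):
    """Euler integration of a leaky integrate-and-fire neuron with a
    spike-triggered adaptation current; returns spike times in ms."""
    v, w = v_rest, 0.0
    spikes = []
    for i, i_ext in enumerate(current):
        v += dt * (-(v - v_rest) - w + i_ext) / tau_m
        w += dt * (-w / tau_adapt)
        if v >= v_thresh:           # threshold crossing: emit a spike
            spikes.append(i * dt)
            v = v_reset             # reset the membrane potential
            w += adapt_jump         # increment the adaptation current
    return spikes

# A 500 ms current step; adaptation makes the inter-spike intervals lengthen.
spikes = simulate_gif([25.0] * 5000)
```

Fitting such a model to data amounts to choosing these parameters (and, in the full GIF model, the shape of the adaptation and threshold kernels) so that predicted spike times match the recording.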
2.  Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks 
Nature Communications  2015;6:6922.
Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.
The brain exhibits a diversity of plasticity mechanisms across different timescales that constitute the putative basis for learning and memory. Here, the authors demonstrate how these different plasticity mechanisms are orchestrated to support the formation of robust and stable neural cell assemblies.
PMCID: PMC4411307  PMID: 25897632
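The interplay of mechanisms described above can be caricatured on a single synapse: fast Hebbian growth, a heterosynaptic restoring force, and slow consolidation. The functional forms and constants below are illustrative assumptions, not the paper's equations.

```python
def plasticity_step(w, w_cons, pre, post, dt=1.0,
                    eta_hebb=0.01, eta_het=0.001, tau_cons=1000.0):
    """One Euler step on a fast weight w and a slow consolidated weight w_cons."""
    dw = eta_hebb * pre * post              # Hebbian: grows with co-activity
    dw -= eta_het * post * (w - w_cons)     # heterosynaptic restoring force
    dw += (w_cons - w) / tau_cons           # slow attraction to consolidated value
    dw_cons = (w - w_cons) / tau_cons       # consolidation tracks the fast weight
    return w + dt * dw, w_cons + dt * dw_cons

w, w_cons = 0.5, 0.5
for _ in range(200):        # repeated paired pre/post activity ("stimulation")
    w, w_cons = plasticity_step(w, w_cons, pre=1.0, post=1.0)
for _ in range(5000):       # quiescence: the weight settles near the consolidated value
    w, w_cons = plasticity_step(w, w_cons, pre=0.0, post=0.0)
```

The point of the sketch is qualitative: because consolidation has tracked the potentiated weight, the memory survives a long quiescent period instead of decaying back to baseline.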
4.  The role of interconnected hub neurons in cortical dynamics 
BMC Neuroscience  2014;15(Suppl 1):P158.
PMCID: PMC4125080
6.  Connection-type-specific biases make uniform random network models consistent with cortical recordings 
Journal of Neurophysiology  2014;112(8):1801-1814.
Uniform random sparse network architectures are ubiquitous in computational neuroscience, but the implicit hypothesis that they are a good representation of real neuronal networks has been met with skepticism. Here we used two experimental data sets, a study of triplet connectivity statistics and a data set measuring neuronal responses to channelrhodopsin stimuli, to evaluate the fidelity of thousands of model networks. Network architectures comprised three neuron types (excitatory, fast spiking, and nonfast spiking inhibitory) and were created from a set of rules that govern the statistics of the resulting connection types. In a high-dimensional parameter scan, we varied the degree distributions (i.e., how many cells each neuron connects with) and the synaptic weight correlations of synapses from or onto the same neuron. These variations converted initially uniform random and homogeneously connected networks, in which every neuron sent and received equal numbers of synapses with equal synaptic strength distributions, to highly heterogeneous networks in which both the number of synapses per neuron and the average strength of synapses from or onto a neuron were variable. By evaluating the impact of each variable on the network structure and dynamics, and their similarity to the experimental data, we could falsify the uniform random sparse connectivity hypothesis for 7 of 36 connectivity parameters, but we also confirmed the hypothesis in 8 cases. Twenty-one parameters had no substantial impact on the results of the test protocols we used.
PMCID: PMC4200009  PMID: 24944218
neuronal network models; random connectivity; layer 2/3 sensory cortex
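The contrast being tested above can be sketched directly: a uniform random (Erdős–Rényi-style) network versus one with a broadened out-degree distribution at the same mean connectivity. The network size, connection probability, and spread are illustrative numbers, not values from the study.

```python
import random

random.seed(1)
n, p = 200, 0.1

def uniform_random(n, p):
    """Uniform hypothesis: every ordered pair connects independently with p."""
    return [[j for j in range(n) if j != i and random.random() < p]
            for i in range(n)]

def heterogeneous(n, p, spread=0.8):
    """Same mean connectivity, but the per-neuron connection probability varies."""
    probs = [p * (1 - spread + 2 * spread * random.random()) for _ in range(n)]
    return [[j for j in range(n) if j != i and random.random() < probs[i]]
            for i in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

deg_u = [len(row) for row in uniform_random(n, p)]
deg_h = [len(row) for row in heterogeneous(n, p)]
```

Comparing the resulting out-degree variances shows the kind of structural difference the parameter scan manipulates while holding the mean number of connections fixed.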
7.  Stochastic variational learning in recurrent spiking networks 
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about “novelty” on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
PMCID: PMC3983494  PMID: 24772078
neural networks; variational learning; spiking neurons; synapses; action potentials
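The "local STDP modulated by a global factor" structure described above is often called a three-factor rule. Here is a minimal sketch: a local eligibility computed from an exponential STDP window, gated by a scalar modulatory signal. The window shape, constants, and the stand-in novelty signal are illustrative assumptions, not the paper's derived rule.

```python
import math

def stdp_window(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Classic exponential STDP window: pre-before-post potentiates."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

def modulated_update(w, pre_post_pairs, novelty):
    """Scale the summed local eligibility by a global modulatory factor."""
    eligibility = sum(stdp_window(post - pre) for pre, post in pre_post_pairs)
    return w + novelty * eligibility

w0 = 0.5
# Pre spike at 10 ms, post spike at 15 ms: a causal pairing.
w_new = modulated_update(w0, [(10.0, 15.0)], novelty=1.0)   # modulator present
w_flat = modulated_update(w0, [(10.0, 15.0)], novelty=0.0)  # modulator absent
```

With the modulator at zero the local pairing leaves the weight unchanged, which is the defining property of such gated rules.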
8.  Spike-timing prediction in cortical neurons with active dendrites 
A complete single-neuron model must correctly reproduce the firing of spikes and bursts. We present a study of a simplified model of deep pyramidal cells of the cortex with active dendrites. We hypothesized that we can model the soma and its apical dendrite with only two compartments, without significant loss in the accuracy of spike-timing predictions. The model is based on experimentally measurable impulse-response functions, which transfer the effect of current injected in one compartment to current reaching the other. Each compartment was modeled with a pair of non-linear differential equations and a small number of parameters that approximate the Hodgkin-and-Huxley equations. The predictive power of this model was tested on electrophysiological experiments where noisy current was injected in both the soma and the apical dendrite simultaneously. We conclude that a simple two-compartment model can predict spike times of pyramidal cells stimulated in the soma and dendrites simultaneously. Our results support the idea that regenerative activity in the apical dendrite is required to properly account for the dynamics of layer 5 pyramidal cells under in-vivo-like conditions.
PMCID: PMC4131408  PMID: 25165443
dendrites; neuron models; cortical neurons; spike train analysis; theoretical models
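The two-compartment idea above can be sketched with a somatic and a dendritic compartment coupled by a current proportional to their voltage difference. The linear coupling and all constants are illustrative stand-ins for the paper's measured impulse-response transfer functions.

```python
def two_compartment(i_soma, i_dend, dt=0.1, tau_s=15.0, tau_d=30.0,
                    g_c=0.2, v_thresh=1.0):
    """Somatic spike times for currents injected into both compartments."""
    v_s = v_d = 0.0
    spikes = []
    for k, (i_s, i_d) in enumerate(zip(i_soma, i_dend)):
        coupling = g_c * (v_d - v_s)        # current flowing dendrite -> soma
        v_s += dt * (-v_s / tau_s + i_s + coupling)
        v_d += dt * (-v_d / tau_d + i_d - coupling)
        if v_s >= v_thresh:                 # somatic spike and reset
            spikes.append(k * dt)
            v_s = 0.0
    return spikes

# Sustained dendritic drive alone can make the soma fire via the coupling:
spikes = two_compartment([0.0] * 4000, [0.25] * 4000)
```

In the paper the compartments are non-linear and the coupling is fitted from data; this sketch only shows the architecture being argued for.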
9.  Limits to high-speed simulations of spiking neural networks using general-purpose computers 
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular, spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand, a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand, network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. For such simulations, fast simulation speed rather than large neuron numbers is the crucial ingredient. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST, and NEURON, as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
PMCID: PMC4160969  PMID: 25309418
spiking neural networks; network simulator; synaptic plasticity; STDP; parallel computing; computational neuroscience
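The benchmark quantity in the study above is the real-time factor: wall-clock time divided by simulated biological time. A minimal sketch of that measurement, with a trivial stand-in workload rather than any of the benchmarked simulators:

```python
import time

def simulate_toy_network(n_neurons=1000, n_steps=1000, dt_ms=0.1):
    """A trivial Euler loop standing in for a real network simulation."""
    v = [0.0] * n_neurons
    for _ in range(n_steps):
        v = [x + dt_ms * (0.5 - x) for x in v]
    return n_steps * dt_ms              # simulated biological time in ms

start = time.perf_counter()
sim_ms = simulate_toy_network()
wall_ms = (time.perf_counter() - start) * 1000.0
realtime_factor = wall_ms / sim_ms      # < 1 means faster than real time
```

The paper's finding is that for plastic medium-sized networks this factor is hard to push much below 0.1 on general-purpose hardware, largely because of inter-process communication latency.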
10.  Synaptic Plasticity in Neural Networks Needs Homeostasis with a Fast Rate Detector 
PLoS Computational Biology  2013;9(11):e1003330.
Hebbian changes of excitatory synapses are driven by and further enhance correlations between pre- and postsynaptic activities. Hence, Hebbian plasticity forms a positive feedback loop that can lead to instability in simulated neural networks. To keep activity at healthy, low levels, plasticity must therefore incorporate homeostatic control mechanisms. We find in numerical simulations of recurrent networks with a realistic triplet-based spike-timing-dependent plasticity rule (triplet STDP) that homeostasis has to detect rate changes on a timescale of seconds to minutes to keep the activity stable. We confirm this result in a generic mean-field formulation of network activity and homeostatic plasticity. Our results strongly suggest the existence of a homeostatic regulatory mechanism that reacts to firing rate changes on the order of seconds to minutes.
Author Summary
Learning and memory in the brain are thought to be mediated through Hebbian plasticity. When a group of neurons is repetitively active together, their connections get strengthened. This can cause co-activation even in the absence of the stimulus that triggered the change. To avoid runaway behavior it is important to prevent neurons from forming excessively strong connections. This is achieved by regulatory homeostatic mechanisms that constrain the overall activity. Here we study the stability of background activity in a recurrent network model with a plausible Hebbian learning rule and homeostasis. We find that the activity in our model is unstable unless homeostasis reacts to rate changes on a timescale of minutes or faster. Since this timescale is incompatible with most known forms of homeostasis, this implies the existence of a previously unknown, rapid homeostatic regulatory mechanism capable of either gating the rate of plasticity or otherwise affecting synaptic efficacies on a short timescale.
PMCID: PMC3828150  PMID: 24244138
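The core argument, that the homeostatic rate detector must be fast, can be caricatured with a two-variable control loop: Hebbian positive feedback drives the firing rate up, while a corrective term driven by an exponentially averaged rate estimate pushes it back to a set point. The control law and all constants are illustrative assumptions, not the paper's mean-field equations.

```python
def homeostatic_sim(tau_detect_s, target_hz=5.0, gain=0.5, dt_s=0.1, steps=2000):
    """Rate grows via Hebbian positive feedback; homeostasis pushes it back.
    Returns the peak firing rate reached during the simulation."""
    rate, rate_est = target_hz, target_hz
    peak = rate
    for _ in range(steps):
        rate_est += dt_s * (rate - rate_est) / tau_detect_s  # rate detector
        hebbian_drift = 0.2 * rate                           # positive feedback
        homeostasis = -gain * rate * (rate_est - target_hz)  # corrective term
        rate = max(0.0, rate + dt_s * (hebbian_drift + homeostasis))
        peak = max(peak, rate)
    return peak

peak_fast = homeostatic_sim(tau_detect_s=1.0)    # detector on a seconds timescale
peak_slow = homeostatic_sim(tau_detect_s=500.0)  # detector on a ~10 min timescale
```

With a fast detector the rate stays near the set point; with a slow one, the positive feedback produces a large excursion before the controller notices, which is the instability the paper analyzes.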
13.  Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons 
PLoS Computational Biology  2013;9(4):e1003024.
Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
Author Summary
As every dog owner knows, animals repeat behaviors that earn them rewards. But what is the brain machinery that underlies this reward-based learning? Experimental research points to plasticity of the synaptic connections between neurons, with an important role played by the neuromodulator dopamine, but the exact way synaptic activity and neuromodulation interact during learning is not precisely understood. Here we propose a model explaining how reward signals might interplay with synaptic plasticity, and use the model to solve a simulated maze navigation task. Our model extends an idea from the theory of reinforcement learning: one group of neurons forms an “actor,” responsible for choosing the direction of motion of the animal. Another group of neurons, the “critic,” whose role is to predict the rewards the actor will gain, uses the mismatch between actual and expected reward to teach the synapses feeding both groups. Our learning agent learns to reliably navigate its maze to find the reward. Remarkably, the synaptic learning rule that we derive from theoretical considerations is similar to previous rules based on experimental evidence.
PMCID: PMC3623741  PMID: 23592970
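The critic's role above, learning value estimates whose temporal-difference (TD) error acts as a global teaching signal, can be shown in its standard tabular, discrete-time simplification on a one-dimensional "maze." This is deliberately not the paper's continuous-time spiking implementation; it only illustrates the TD idea the model builds on.

```python
import random

random.seed(0)
n_states, goal = 10, 9
V = [0.0] * n_states            # the critic's value estimates
alpha, gamma = 0.1, 0.95        # learning rate, discount factor

for _ in range(500):            # episodes: walk right with 80% probability
    s = 0
    while s != goal:
        s_next = min(s + 1, goal) if random.random() < 0.8 else max(s - 1, 0)
        reward = 1.0 if s_next == goal else 0.0
        value_next = 0.0 if s_next == goal else V[s_next]   # terminal value is 0
        td_error = reward + gamma * value_next - V[s]       # the teaching signal
        V[s] += alpha * td_error                            # critic update
        s = s_next
```

After training, value estimates rise toward the rewarded state; in the paper this same TD error is delivered as a neuromodulatory signal to both critic and actor synapses.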
14.  Changing the responses of cortical neurons from sub- to suprathreshold using single spikes in vivo 
eLife  2013;2:e00012.
Action potential (AP) patterns of sensory cortex neurons encode a variety of stimulus features, but how can a neuron change the feature to which it responds? Here, we show that in vivo a spike-timing-dependent plasticity (STDP) protocol—consisting of pairing a postsynaptic AP with visually driven presynaptic inputs—modifies a neuron's AP response in a bidirectional way that depends on the relative AP timing during pairing. Whereas postsynaptic APs repeatedly following presynaptic activation can convert subthreshold into suprathreshold responses, APs repeatedly preceding presynaptic activation reduce AP responses to visual stimulation. These changes were paralleled by a restructuring of the neuron's response to surround stimulus locations and of its membrane-potential time course. Computational simulations could reproduce the observed subthreshold voltage changes only when presynaptic temporal jitter was included. Together, this shows that STDP rules can modify the output patterns of sensory neurons and that the timing of single APs plays a crucial role in sensory coding and plasticity.
eLife digest
Nerve cells, called neurons, are one of the core components of the brain and form complex networks by connecting to other neurons via long, thin ‘wire-like’ processes called axons. Axons can extend across the brain, enabling neurons to form connections—or synapses—with thousands of others. It is through these complex networks that incoming information from sensory organs, such as the eye, is propagated through the brain and encoded.
The basic unit of communication between neurons is the action potential, often called a ‘spike’, which propagates along the network of axons and, through a chemical process at synapses, communicates with the postsynaptic neurons that the axon is connected to. These action potentials excite the neuron that they arrive at, and this excitatory process can generate a new action potential that then propagates along the axon to excite additional target neurons. In the visual areas of the cortex, neurons respond with action potentials when they ‘recognize’ a particular feature in a scene—a process called tuning. How a neuron becomes tuned to certain features in the world and not to others is unclear, as are the rules that enable a neuron to change what it is tuned to. What is clear, however, is that to understand this process is to understand the basis of sensory perception.
Memory storage and formation is thought to occur at synapses. The efficiency of signal transmission between neurons can increase or decrease over time, and this process is often referred to as synaptic plasticity. But for these synaptic changes to be transmitted to target neurons, the changes must alter the number of action potentials. Although it has been shown in vitro that the efficiency of synaptic transmission—that is the strength of the synapse—can be altered by changing the order in which the pre- and postsynaptic cells are activated (referred to as ‘Spike-timing-dependent plasticity’), this has never been shown to have an effect on the number of action potentials generated in a single neuron in vivo. It is therefore unknown whether this process is functionally relevant.
Now Pawlak et al. report that spike-timing-dependent plasticity in the visual cortex of anaesthetized rats can change the spiking of neurons in the visual cortex. They used a visual stimulus (a bar flashed up for half a second) to activate a presynaptic cell, and triggered a single action potential in the postsynaptic cell a very short time later. By repeatedly activating the cells in this way, they increased the strength of the synaptic connection between the two neurons. After a small number of these pairing activations, presenting the visual stimulus alone to the presynaptic cell was enough to trigger an action potential (a suprathreshold response) in the postsynaptic neuron—even though this was not the case prior to the pairing.
This study shows that timing rules known to change the strength of synaptic connections—and proposed to underlie learning and memory—have functional relevance in vivo, and that the timing of single action potentials can change the functional status of a cortical neuron.
PMCID: PMC3552422  PMID: 23359858
synaptic plasticity; STDP; visual cortex; circuits; in vivo; spiking patterns; rat
15.  The Silent Period of Evidence Integration in Fast Decision Making 
PLoS ONE  2013;8(1):e46525.
In a typical experiment on decision making, one out of two possible stimuli is displayed and observers decide which one was presented. Recently, Stanford and colleagues (2010) introduced a new variant of this classical one-stimulus presentation paradigm to investigate the speed of decision making. They found evidence for “perceptual decision making in less than 30 ms”. Here, we extended this one-stimulus compelled-response paradigm to a two-stimulus compelled-response paradigm in which a vernier was followed immediately by a second vernier with opposite offset direction. The two verniers and their offsets fuse. Only one vernier is perceived. When observers are asked to indicate the offset direction of the fused vernier, the offset of the second vernier dominates perception. Even for long vernier durations, the second vernier dominates decisions indicating that decision making can take substantial time. In accordance with previous studies, we suggest that our results are best explained with a two-stage model of decision making where a leaky evidence integration stage precedes a race-to-threshold process.
PMCID: PMC3549915  PMID: 23349660
16.  Reward-based learning under hardware constraints—using a RISC processor embedded in a neuromorphic substrate 
In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between the wafer and the conventional control computer system that is simulating the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, delay can cause a deviation of the updates compared with the non-delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
PMCID: PMC3778319  PMID: 24065877
neuromorphic hardware; wafer-scale integration; large-scale spiking neural networks; spike-timing dependent plasticity; reinforcement learning; hardware constraints analysis
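The finding that "probabilistic updates can increase the performance of low-resolution weights" refers to stochastic rounding: when a requested weight change is smaller than one discrete hardware step, a full step is applied with matching probability, so the update is unbiased on average. The 4-bit step size below is an illustrative assumption, and weights are left unbounded for simplicity (real hardware would clip them to a fixed range).

```python
import random

random.seed(42)
STEP = 1.0 / 15.0     # 4-bit weight resolution: 16 levels on [0, 1]

def probabilistic_update(w, dw):
    """Apply dw rounded stochastically to a whole number of weight steps."""
    n_steps, frac = divmod(dw / STEP, 1.0)
    if random.random() < frac:        # carry the sub-step remainder probabilistically
        n_steps += 1
    return w + n_steps * STEP

w_stoch, w_trunc = 0.0, 0.0
for _ in range(3000):
    w_stoch = probabilistic_update(w_stoch, 0.01)  # 0.01 < one step (~0.067)
    w_trunc += int(0.01 / STEP) * STEP             # truncation discards it all
```

Deterministic truncation silently drops every sub-step update, while the stochastic version accumulates their expected value, which is why it helps at low weight resolution.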
17.  Inference of neuronal network spike dynamics and topology from calcium imaging data 
Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
PMCID: PMC3871709  PMID: 24399936
calcium; action potential; reconstruction; connectivity; scale-free; hub neurons
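The forward model underlying this kind of spike inference can be sketched directly: each action potential adds a stereotyped calcium transient, and the trace is sampled at the acquisition rate with additive noise. The kernel shape, amplitudes, and noise level are illustrative, not the paper's calibrated values.

```python
import math
import random

random.seed(3)

def fluorescence(spike_times_s, duration_s=2.0, rate_hz=30.0,
                 amp=1.0, tau_decay_s=0.5, noise_sd=0.1):
    """Sum of exponentially decaying transients plus Gaussian sensor noise,
    sampled at the imaging frame rate."""
    n = int(duration_s * rate_hz)
    trace = []
    for i in range(n):
        t = i / rate_hz
        signal = sum(amp * math.exp(-(t - ts) / tau_decay_s)
                     for ts in spike_times_s if ts <= t)
        trace.append(signal + random.gauss(0.0, noise_sd))
    return trace

trace = fluorescence([0.5, 0.6, 1.5])   # a two-spike burst, then a lone spike
```

Spike-inference algorithms such as the peeling approach invert this generative model; the simulation framework in the paper varies the SNR and frame rate of exactly this kind of synthetic trace to benchmark how well the inversion works.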
18.  Coding and Decoding with Adapting Neurons: A Population Approach to the Peri-Stimulus Time Histogram 
PLoS Computational Biology  2012;8(10):e1002711.
The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a ‘quasi-renewal equation’ which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
Author Summary
How can information be encoded and decoded in populations of adapting neurons? A quantitative answer to this question requires a mathematical expression relating neuronal activity to the external stimulus, and, conversely, stimulus to neuronal activity. Although widely used equations and models exist for the special problem of relating external stimulus to the action potentials of a single neuron, the analogous problem of relating the external stimulus to the activity of a population has proven more difficult. There is a bothersome gap between the dynamics of single adapting neurons and the dynamics of populations. Moreover, if we ignore the single neurons and describe directly the population dynamics, we are faced with the ambiguity of the adapting neural code. The neural code of adapting populations is ambiguous because it is possible to observe a range of population activities in response to a given instantaneous input. Somehow the ambiguity is resolved by the knowledge of the population history, but how precisely? In this article we use approximation methods to provide mathematical expressions that describe the encoding and decoding of external stimuli in adapting populations. The theory presented here helps to bridge the gap between the dynamics of single neurons and that of populations.
PMCID: PMC3464223  PMID: 23055914
19.  Paradoxical Evidence Integration in Rapid Decision Processes 
PLoS Computational Biology  2012;8(2):e1002382.
Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. Here, we show that one-stage models cannot explain psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. We present a two-stage model where sensory information is integrated and buffered before it is fed into a drift diffusion process. The model is tested in a series of psychophysical experiments and explains both accuracy and reaction time distributions.
Author Summary
In models of decision making, evidence is accumulated until it crosses a threshold. The amount of evidence is directly related to the strength of the sensory input for the decision alternatives. Such one-stage models predict that if two stimulus alternatives are presented in succession, the stimulus alternative presented first dominates the decision, as the accumulated evidence will reach the threshold for this alternative first. Here, we show that for short stimulus durations decision making is not dominated by the first, but by the second stimulus. This result cannot be explained by classical one-stage decision models. We present a two-stage model where sensory input is first integrated before its outcome is fed into a classical decision process.
PMCID: PMC3280955  PMID: 22359494
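The two-stage model argued for above can be sketched as a leaky sensory buffer feeding a drift-diffusion stage. Because the buffer is leaky, evidence from the earlier stimulus partially decays before the decision stage sees the buffered outcome, so the second stimulus dominates, the paradoxical effect reported. All constants (leak, drift scaling, threshold, noise) are illustrative assumptions.

```python
import random

random.seed(7)

def two_stage_decision(dur1_ms, dur2_ms, leak=0.05, threshold=5.0,
                       noise_sd=0.3, max_steps=5000):
    """Stimulus 1 carries +1/ms evidence, stimulus 2 carries -1/ms."""
    # Stage 1: leaky integration of the two successive stimuli into one buffer.
    buffer = 0.0
    for _ in range(dur1_ms):
        buffer += 1.0 - leak * buffer
    for _ in range(dur2_ms):
        buffer += -1.0 - leak * buffer
    # Stage 2: the buffered outcome sets the drift of a diffusion-to-bound race.
    drift = buffer / 100.0
    x = 0.0
    for _ in range(max_steps):
        x += drift + random.gauss(0.0, noise_sd)
        if abs(x) >= threshold:
            return +1 if x > 0 else -1
    return 0                            # no decision within the deadline

choice = two_stage_decision(30, 30)     # equal durations, opposite offsets
```

With equal stimulus durations a one-stage integrator would end at zero net evidence, whereas the leaky buffer ends with the second stimulus winning, matching the psychophysical result.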
23.  Synaptic tagging and capture: a bridge from molecular to behaviour 
BMC Neuroscience  2011;12(Suppl 1):P122.
PMCID: PMC3240215
24.  Efficient modeling of neural activity using coupled renewal processes 
BMC Neuroscience  2011;12(Suppl 1):P123.
PMCID: PMC3240216
