astrocytes; neurons; signaling; tripartite synapse; mathematical model
Spinal motoneurons may display a variety of firing patterns, including bistability between repetitive firing and quiescence and, more rarely, bistability between two firing states of different frequencies. It was suggested in the past that firing bistability required the persistent L-type calcium current to be segregated in distal dendrites, far away from the spike-generating currents. However, this is not supported by more recent data. Using a two-compartment motoneuron model, we show that the different firing patterns may also result from the competition between the more proximal component of the dendritic L-type conductance and the calcium-sensitive potassium conductance responsible for the afterhyperpolarization (AHP). Further emphasizing this point, firing bistability may also be achieved when the L-type current is placed in the somatic compartment. However, this requires that the calcium-sensitive potassium conductance be triggered solely by the high-threshold calcium currents activated during spikes and not by calcium influx through the L-type current. This prediction was validated by dynamic-clamp experiments in vivo in lumbar motoneurons of deeply anesthetized cats, in which an artificial L-type current was added at the soma. Altogether, our results suggest that the dynamical interaction between the L-type and afterhyperpolarization currents is as fundamental as the dendritic segregation of the L-type calcium current for controlling the discharge of motoneurons.
bistability; persistent calcium current; afterhyperpolarization; modeling; dynamic clamp
Parkinson's disease; freezing of gait; response conflict; computational modeling
Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g., serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape these transformations, we modeled feed-forward networks of 7–22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity (STDP) rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
spike-timing dependent plasticity (STDP); computational modeling; network connectivity; biological neural networks; microcircuits
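As a concrete illustration of the kind of learning rule used in such simulations, a minimal pair-based STDP update can be sketched as follows; the amplitudes and time constants are generic illustrative values, not the parameters of the study:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    Both windows decay exponentially with the pair's time lag."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)     # causal pairing: LTP
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)   # anti-causal pairing: LTD
    return 0.0

def update_weight(w, dt, w_min=0.0, w_max=1.0):
    """Apply the pairwise rule and clip to the allowed weight range."""
    return min(w_max, max(w_min, w + stdp_dw(dt)))
```

With a slightly larger depression amplitude than potentiation amplitude, as here, uncorrelated pre/post timing drives weights down on average, so only reliably correlated pathways are strengthened.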
Synchrony in a presynaptic population leads to correlations in vesicle occupancy at the active sites for neurotransmitter release. The number of independent release sites per presynaptic neuron, a synaptic parameter recently shown to be modified during long-term plasticity, will modulate these correlations and therefore have a significant effect on the firing rate of the postsynaptic neuron. To understand how correlations from synaptic dynamics and from presynaptic synchrony shape the postsynaptic response, we study a model of multiple-release-site short-term plasticity and derive exact results for the cross-correlation function of vesicle occupancy and neurotransmitter release, as well as for the postsynaptic voltage variance. Using approximate forms for the postsynaptic firing rate in the limits of low and high correlations, we demonstrate that short-term depression leads to a maximum response for an intermediate number of presynaptic release sites, and that this leads to a tuning-curve response peaked at an optimal presynaptic synchrony set by the number of neurotransmitter release sites per presynaptic neuron. These effects arise because, above a certain level of correlation, activity in the presynaptic population is overly strong, resulting in wasteful depletion of the pool of releasable neurotransmitter. As the nervous system operates under constraints of efficient metabolism, it is likely that this phenomenon provides an activity-dependent constraint on network architecture.
long-term plasticity; short-term plasticity; synaptic depression; correlations and synchrony; voltage fluctuations
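A minimal Monte-Carlo sketch of multi-site short-term depression conveys the depletion effect described above; the site count, release probability, and refill time constant here are illustrative assumptions, not the paper's analytical model:

```python
import random

def simulate_release(n_sites=5, rate_hz=20.0, p_release=0.5,
                     tau_refill=0.5, t_max=200.0, seed=1):
    """Each site holds at most one vesicle, releases it with p_release on a
    presynaptic (Poisson) spike, and refills with time constant tau_refill (s).
    Returns the mean number of vesicles released per presynaptic spike."""
    rng = random.Random(seed)
    dt = 0.001                      # 1 ms time step
    occupied = [True] * n_sites
    released_total, n_spikes = 0, 0
    t = 0.0
    while t < t_max:
        # stochastic refill of empty sites
        for i in range(n_sites):
            if not occupied[i] and rng.random() < dt / tau_refill:
                occupied[i] = True
        # Poisson presynaptic spike
        if rng.random() < rate_hz * dt:
            n_spikes += 1
            for i in range(n_sites):
                if occupied[i] and rng.random() < p_release:
                    occupied[i] = False
                    released_total += 1
        t += dt
    return released_total / max(n_spikes, 1)
```

Raising the presynaptic rate depletes the occupancy faster than refill can restore it, so the per-spike release drops, which is the depression-induced wastage regime the abstract refers to.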
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single optimal estimate of the target's position to be compared with a single optimal estimate of the hand's position. Rather, it employs a more modular approach in which the overall behavior is built from multiple comparisons carried out concurrently in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare them to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance from one sensory modality toward a greater reliance on another, and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame.
sensory integration; motor control; maximum likelihood; reference frames
As a first step toward understanding neural activity, one should focus on the stability of the possible dynamical states. Even the characterization of an idealized regime, such as perfectly periodic spiking activity, reveals unexpected difficulties. In this paper we discuss a general approach to the linear stability of pulse-coupled neural networks for generic phase-response curves and post-synaptic response functions. In particular, we present: (1) a mean-field approach, developed under the hypothesis of an infinite network and small synaptic conductances; (2) a “microscopic” approach, which applies to finite but large networks. As a result, we find that there exist two classes of perturbations: those which are perfectly described by the mean-field approach, and those which are subject to finite-size corrections irrespective of the network size. The analysis of perfectly regular, asynchronous states reveals that their stability depends crucially on the smoothness of both the phase-response curve and the transmitted post-synaptic pulse. Numerical simulations suggest that this scenario extends to systems that are not covered by the perturbative approach. Altogether, we have described a series of tools for the stability analysis of various dynamical regimes of generic pulse-coupled oscillators, going beyond those currently invoked in the literature.
linear stability analysis; splay states; synchronization; neural networks; pulse coupled neurons; Floquet spectrum
Describing the collective activity of neural populations is a daunting task. Recent empirical studies in retina, however, suggest a vast simplification in how multi-neuron spiking occurs: the activity patterns of retinal ganglion cell (RGC) populations under some conditions are nearly completely captured by pairwise interactions among neurons. In other circumstances, higher-order statistics are required and appear to be shaped by input statistics and intrinsic circuit mechanisms. Here, we study the emergence of higher-order interactions in a model of the RGC circuit in which correlations are generated by common input. We quantify the impact of higher-order interactions by comparing the responses of mechanistic circuit models vs. “null” descriptions in which all higher-than-pairwise correlations have been accounted for by lower order statistics; these are known as pairwise maximum entropy (PME) models. We find that over a broad range of stimuli, output spiking patterns are surprisingly well captured by the pairwise model. To understand this finding, we study an analytically tractable simplification of the RGC model. We find that in the simplified model, bimodal input signals produce larger deviations from pairwise predictions than unimodal inputs. The characteristic light filtering properties of the upstream RGC circuitry suppress bimodality in light stimuli, thus removing a powerful source of higher-order interactions. This provides a novel explanation for the surprising empirical success of pairwise models.
retinal ganglion cells; maximum entropy distribution; stimulus-driven; correlations; computational model
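The pairwise maximum entropy (PME) "null" description can be fit by simple moment matching for small populations. The brute-force sketch below is illustrative, not the authors' code: it enumerates all binary patterns and adjusts fields and couplings by gradient ascent until the model reproduces the target single-unit means and pairwise moments.

```python
import itertools
import math

def fit_pme(target_mean, target_corr, n_iter=5000, lr=0.2):
    """Fit a pairwise maximum-entropy (Ising-like) model
    P(x) ~ exp(sum_i h_i*x_i + sum_{i<j} J_ij*x_i*x_j)
    to given means <x_i> and pairwise moments <x_i*x_j> (target_corr is a
    dict keyed by (i, j) pairs). Brute force over all 2^n binary states,
    so only suitable for small populations."""
    n = len(target_mean)
    states = list(itertools.product([0, 1], repeat=n))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    h = [0.0] * n
    J = {pair: 0.0 for pair in pairs}
    p = [1.0 / len(states)] * len(states)
    for _ in range(n_iter):
        # unnormalized log-weights and normalized probabilities of all states
        logw = [sum(h[i] * x[i] for i in range(n))
                + sum(J[i, j] * x[i] * x[j] for (i, j) in pairs)
                for x in states]
        z = sum(math.exp(lw) for lw in logw)
        p = [math.exp(lw) / z for lw in logw]
        # log-likelihood gradient = data moments minus model moments
        for i in range(n):
            model_mean = sum(pk * x[i] for pk, x in zip(p, states))
            h[i] += lr * (target_mean[i] - model_mean)
        for (i, j) in pairs:
            model_corr = sum(pk * x[i] * x[j] for pk, x in zip(p, states))
            J[i, j] += lr * (target_corr[i, j] - model_corr)
    return h, J, dict(zip(states, p))
```

Because the log-likelihood is concave in (h, J), this moment-matching ascent converges to the unique maximum entropy distribution consistent with the specified first- and second-order statistics.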
complex network; synchronization fluctuation; dynamical system; bifurcation; control parameter
Neural communication generates oscillations of electric potential in the extracellular medium. In turn, these oscillations feed back onto the electrochemical processes within the neurons, influencing the timing and the number of action potentials. It is unclear whether this influence should be considered only as noise or whether it plays some functional role in neural communication. Through computer simulations we investigated the effect of various sinusoidal extracellular oscillations on the timing and number of action potentials. Each simulation is based on a multicompartment model of a single neuron, which is stimulated through spatially distributed synaptic activations. A thorough analysis is conducted on a large number of simulations with different models of CA3 and CA1 pyramidal neurons, modeled with realistic morphologies and active ion conductances. We demonstrated that the influence of weak extracellular oscillations, which are commonly present in the brain, is rather stochastic and modest. We found that stronger fields, which occur spontaneously in the brain only in particular cases (e.g., during seizures) or can be induced externally, could significantly modulate spike timing.
extracellular oscillations; local field potentials; ephaptic coupling; nonsynaptic communication; multi-compartment model; NEURON simulation environment; pyramidal neurons
Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble.
multivariate point-process; simultaneous events; multinomial GLM; thalamic synchrony
Single-unit measurements have reported many different effects of attention on contrast-response (e.g., contrast-gain, response-gain, additive-offset dependent on visibility), while functional imaging measurements have more uniformly reported increases in response across all contrasts (additive-offset). The normalization model of attention elegantly predicts the diversity of effects of attention reported in single-units well-tuned to the stimulus, but what predictions does it make for more realistic populations of neurons with heterogeneous tuning? Are predictions in accordance with population-scale measurements? We used functional imaging data from humans to determine a realistic ratio of attention-field to stimulus-drive size (a key parameter for the model) and predicted effects of attention in a population of model neurons with heterogeneous tuning. We found that within the population, neurons well-tuned to the stimulus showed a response-gain effect, while less-well-tuned neurons showed a contrast-gain effect. Averaged across the population, these disparate effects of attention gave rise to additive-offsets in contrast-response, similar to reports in human functional imaging as well as population averages of single-units. Differences in predictions for single-units and populations were observed across a wide range of model parameters (ratios of attention-field to stimulus-drive size and the amount of baseline response modifiable by attention), offering an explanation for disparity in physiological reports. Thus, by accounting for heterogeneity in tuning of realistic neuronal populations, the normalization model of attention can not only predict responses of well-tuned neurons, but also the activity of large populations of neurons. More generally, computational models can unify physiological findings across different scales of measurement, and make links to behavior, but only if factors such as heterogeneous tuning within a population are properly accounted for.
contrast-response; spatial attention; contrast-gain; response-gain; additive-offset; efficient-selection; cueing; attention-field
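A scalar caricature of the normalization model conveys the basic computation: stimulus drive scaled by an attention field and divided by a normalization pool plus a semisaturation constant. The parameter values below are placeholders, not the fitted attention-field/stimulus-drive ratios from the study, and this single-number form cannot reproduce the population heterogeneity effects that are the paper's point.

```python
def attention_response(contrast, attn_gain=1.0, sigma=0.1):
    """Scalar normalization-model response: the attended stimulus drive
    (attn_gain * contrast) is divided by the same drive plus the
    semisaturation constant sigma."""
    drive = attn_gain * contrast
    return drive / (drive + sigma)
```

In this scalar limit, attention multiplies both numerator and denominator, so it shifts the contrast-response function leftward (a contrast-gain effect): responses increase at intermediate contrasts while the saturated response at high contrast is nearly unchanged.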
Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods in sufficient detail to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described. First, we describe the classical hybrid approach that combines finite element modeling of the potential field with the calculation of the neuron's response in a cable-equation framework (compartmentalized neuron models). Then, we present a “whole finite element” approach allowing the simultaneous calculation of the extracellular and intracellular potentials, by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the context of neural recording, but had never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show with an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods, to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.
finite element modeling; extracellular focal microstimulation; microelectrode arrays; neural prosthesis; brain implants; ground surface configuration; compartmentalized neuron models; thin-film approximation
consciousness; quantum state; mixed state; probability distribution; dominance duration
non-linear analysis; complex systems; fractal analysis; cognitive psychology; neurosciences
Studying neural coding through neural energy is a novel approach. In this paper, based on a previously proposed single-neuron model, we investigated the correlation between energy consumption and the parameters of cortical networks (number of neurons, coupling strength, and transmission delay) under oscillatory conditions. We found that the energy distribution varies in an orderly fashion as these parameters change, and that it is closely related to the synchronous oscillation of the neural network. We also compared this method with the traditional correlation-coefficient method, showing that the energy method performs as well as or better than the traditional one. It is novel that synchronous activity and neural network parameters can be studied by assessing energy distribution and consumption. The conclusions of this paper will therefore refine the framework of neural coding theory and contribute to our understanding of the coding mechanism of the cerebral cortex. They provide a strong theoretical foundation for a novel neural coding theory—energy coding.
neural network; nervous energy; neural coding; parameter
spectroscopy; fractal; power law; time series; molecular cloud; stars; perception; perception-action
The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry—also known as “open-loop feedback”—which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise in the network was very low. Moreover, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
gain control; feedforward network; subtractive; divisive; non-monotonic; linearization by delay; weakly electric fish
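The subtractive and divisive limits of gain control can be illustrated on a toy threshold-linear f-I curve. This is a didactic sketch, not the spiking network analyzed in the paper: subtractive inhibition shifts the curve rightward along the input axis, while divisive inhibition scales its slope down.

```python
def f_i_curve(current, threshold=1.0, gain=10.0):
    """Threshold-linear f-I curve: zero below threshold, linear above."""
    return max(0.0, gain * (current - threshold))

def with_feedforward_inhibition(current, g_inh, mode):
    """Two limiting regimes of inhibitory gain control on the f-I curve:
    'subtractive' shifts the effective input, 'divisive' rescales the output."""
    if mode == "subtractive":
        return f_i_curve(current - g_inh)       # rightward shift of threshold
    elif mode == "divisive":
        return f_i_curve(current) / (1.0 + g_inh)  # slope reduction
    raise ValueError(mode)
```

Subtractive inhibition changes which inputs elicit any output at all, whereas divisive inhibition preserves the threshold but compresses the response range, which is why the two regimes have very different consequences for information gating.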
A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which makes the control problem ill-posed. The multijoint system involves complex interaction torques between joints. To produce optimal motion in terms of energy consumption, cost-function-based optimization has commonly been used in previous work. Even if it is a fact that an optimal motor pattern is employed phenomenologically, there is no evidence for a physiological process in our central nervous system that resembles such a mathematical optimization. In this study, we aim to find a more primitive computational mechanism with a modular configuration that realizes adaptability and optimality without prior knowledge of the system dynamics. We propose a novel motor control paradigm based on tacit learning with task-space feedback. The accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. The paradigm is applied to a vertical cyclic reaching task that involves complex interaction torques. We evaluated whether the proposed paradigm can learn to optimize solutions with a three-joint, planar biomechanical model. The results demonstrate that the proposed method was valid for acquiring motor synergy and resulted in energy-efficient solutions for different load conditions. The pure feedback-control case is largely affected by the interaction torques. In contrast, with tacit learning the trajectory is corrected over time toward optimal solutions. Energy-efficient solutions were obtained through the emergence of motor synergy. During learning, the contribution from the feedforward controller is augmented, and the contribution from the feedback controller is significantly reduced, down to 12% with no load at hand and 16% with a 0.5 kg load. The proposed paradigm could provide an optimization process in a redundant system with a dynamic-model-free and cost-function-free approach.
feedback error learning; motor synergy; optimality; interaction torques; redundancy; Bernstein problem; tacit learning
Muscle activities underlying many motor behaviors can be generated by a small number of basic activation patterns with specific features shared across movement conditions. Such low dimensionality suggests that the central nervous system (CNS) relies on a modular organization to simplify control. However, the relationship between the dimensionality of muscle patterns and that of joint torques is not fixed, because of redundancy and non-linearity in mapping the former into the latter, and needs to be investigated. We compared the torques acting at four arm joints during fast reaching movements in different directions in the frontal and sagittal planes with the underlying muscle patterns. The dimensionality of the non-gravitational components of torques and muscle patterns in the spatial, temporal, and spatiotemporal domains was estimated by multidimensional decomposition techniques. The spatial organization of torques was captured by two or three generators, indicating that not all the available coordination patterns are employed by the CNS. A single temporal generator with a biphasic profile was identified, generalizing previous observations on a single plane. The number of spatiotemporal generators was equal to the product of the spatial and temporal dimensionalities, and their organization was essentially synchronous. Muscle pattern dimensionalities were higher than torque dimensionalities, but also higher than the minimum imposed by the inherent non-negativity of muscle activations. The spatiotemporal dimensionality of the muscle patterns was lower than the product of their spatial and temporal dimensionalities, indicating the existence of specific asynchronous coordination patterns. Thus, the larger dimensionalities of the muscle patterns may be required for the CNS to overcome the non-linearities of the musculoskeletal system and to flexibly generate endpoint trajectories with simple kinematic features using a limited number of building blocks.
modularity; reaching movements; human subjects; inverse dynamics; EMGs; muscle synergies; temporal components; joint torques
The decision-making behaviors of humans and animals adapt and then satisfy an “operant matching law” in certain types of tasks. This was first pointed out by Herrnstein in his foraging experiments on pigeons. The matching law has been one landmark for elucidating the underlying processes of decision making and its learning in the brain. An interesting question is whether decisions are made deterministically or probabilistically. Conventional learning models of the matching law are based on the latter idea; they assume that subjects learn the choice probabilities of the respective alternatives and decide stochastically according to those probabilities. However, it is unknown whether the matching law can be accounted for by a deterministic strategy. To answer this question, we propose several deterministic Bayesian decision-making models that hold certain incorrect beliefs about the environment. We show that a simple model produces behavior satisfying the matching law in static settings of a foraging task but not in dynamic settings. We found that a model holding the belief that the environment is volatile works well in the dynamic foraging task and exhibits undermatching, a slight deviation from the matching law observed in many experiments. This model also reproduces the double-exponential reward-history dependency of choice and the heavier-tailed run-length distribution recently reported in experiments on monkeys.
decision making; operant matching law; Bayesian inference; dynamic foraging task; heavy-tailed reward history dependency
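A toy stochastic "local matching" forager on a concurrent baited schedule illustrates the matching law the abstract discusses; this is a conventional income-tracking model of the kind the paper contrasts with its deterministic Bayesian models, and the schedule probabilities and learning rate below are illustrative assumptions.

```python
import random

def foraging_matching(bait_probs, n_trials=50000, tau=0.01, seed=3):
    """Concurrent baited schedule: on each trial, option a is armed with
    reward with probability bait_probs[a] and stays armed until visited.
    The agent tracks an exponentially weighted income (reward per trial)
    for each option and chooses in proportion to the incomes.
    Returns (choice counts, reward counts) per option."""
    rng = random.Random(seed)
    baited = [False, False]
    income = [0.1, 0.1]            # small positive start avoids extinction
    choices = [0, 0]
    rewards = [0.0, 0.0]
    for _ in range(n_trials):
        for a in (0, 1):           # arm the baits
            if rng.random() < bait_probs[a]:
                baited[a] = True
        p0 = income[0] / (income[0] + income[1] + 1e-9)
        a = 0 if rng.random() < p0 else 1
        r = 1.0 if baited[a] else 0.0
        baited[a] = False          # a visit collects (or finds no) bait
        choices[a] += 1
        rewards[a] += r
        for b in (0, 1):           # update both incomes every trial
            income[b] += tau * ((r if b == a else 0.0) - income[b])
    return choices, rewards
```

At the fixed point of this rule, the choice fraction for each option equals its fractional income, which is exactly Herrnstein's matching relation; stochastic fluctuations around the fixed point produce the slight undermatching seen in experiments.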
Electrical microstimulation studies provide some of the most direct evidence for the neural representation of muscle synergies. These synergies, i.e., coordinated activations of groups of muscles, have been proposed as building blocks for the construction of motor behaviors by the nervous system. Intraspinal or intracortical microstimulation (ICMS) has been shown to evoke muscle patterns that can be resolved into a small set of synergies similar to those seen in natural behavior. However, questions remain about the validity of microstimulation as a probe of neural function, particularly given the relatively long trains of suprathreshold stimuli used in these studies. Here, we examined whether muscle synergies evoked during ICMS in two rhesus macaques were similarly encoded by nearby motor cortical units during a purely voluntary behavior involving object reach, grasp, and carry movements. At each microstimulation site we identified the synergy most strongly evoked among those extracted from muscle patterns evoked over all microstimulation sites. For each cortical unit recorded at the same microstimulation site, we then identified the synergy most strongly encoded among those extracted from muscle patterns recorded during the voluntary behavior. We found that the synergy most strongly evoked at an ICMS site matched the synergy most strongly encoded by proximal units more often than expected by chance. These results suggest a common neural substrate for microstimulation-evoked motor responses and for the generation of muscle patterns during natural behaviors.
motor; movement; muscle; synergy; hand; macaque; grasping; cortex
In a previous study, Harris et al. (2002) found disruption of vibrotactile short-term memory after applying single-pulse transcranial magnetic stimulation (TMS) to primary somatosensory cortex (SI) early in the maintenance period, and suggested that this demonstrated a role for SI in vibrotactile memory storage. While such a role is compatible with recent suggestions that sensory cortex is the storage substrate for working memory, it stands in contrast to a relatively large body of evidence from human EEG and single-cell recording in primates that instead points to prefrontal cortex as the storage substrate for vibrotactile memory. In the present study, we use computational methods to demonstrate how Harris et al.'s results can be reproduced by TMS-induced activity in sensory cortex and subsequent feedforward interference with memory traces stored in prefrontal cortex, thereby reconciling discordant findings in the tactile memory literature.
short-term memory; working memory; scalar memory; TMS; vibrotactile; noise; computational modeling
To date, a number of studies have shown that the receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts the spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. A representation of auditory space is therefore learned in a purely unsupervised way, by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow behaviorally vital inferences about the environment.
efficient coding; natural sound statistics; binaural hearing; spectrotemporal receptive fields; auditory scene analysis
In this work we investigate the role of body dynamics in the complexity of kinematic patterns in a quadruped robot with compliant legs. Two gait patterns, lateral-sequence walk and trot, along with leg-length control patterns of different complexity, were implemented in a modular, feed-forward locomotion controller. The controller was tested on a small quadruped robot with a compliant, segmented leg design, and led to self-stable and self-stabilizing robot locomotion. Leg kinematics were recorded during in-air stepping and on-ground locomotion, and the number and shapes of motion primitives accounting for 95% of the variance of the kinematic leg data were extracted. This revealed that kinematic patterns resulting from feed-forward control had a lower complexity during in-air stepping (2–3 primitives) than during on-ground locomotion (~4 primitives), although both experiments applied identical motor patterns. The complexity of the on-ground kinematic patterns increased through ground contact and mechanical entrainment. The complexity of the observed on-ground kinematic data matches that reported for level-ground locomotion in legged animals. The results indicate that a very low complexity of modular, rhythmic, feed-forward motor control is sufficient for level-ground locomotion when combined with passively compliant legged hardware.
motion primitives; locomotion patterns; central pattern generator; quadruped robot; passive leg compliance; entrainment; principal component analysis; walk and trot
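The primitive-counting step (number of principal components explaining 95% of the kinematic variance) can be sketched in a few lines of linear algebra. The synthetic two-latent data in the usage note are purely illustrative, not the robot's recorded kinematics.

```python
import numpy as np

def n_primitives(data, var_threshold=0.95):
    """Number of principal components ("motion primitives") needed to
    explain var_threshold of the variance of kinematic data
    (rows = time samples, columns = joint angles)."""
    x = data - data.mean(axis=0)
    # eigen-decomposition of the joint-angle covariance matrix
    cov = np.cov(x, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, var_threshold) + 1)
```

Usage: mixing two latent sinusoids into six synthetic "joint angle" channels and calling `n_primitives` on the result recovers the underlying dimensionality of two, mirroring how the primitive counts in the study were obtained from leg-kinematics recordings.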