1.  The mapping of eccentricity and meridional angle onto orthogonal axes in the primary visual cortex: an activity-dependent developmental model 
Primate vision research has shown that in the retinotopic map of the primary visual cortex, eccentricity and meridional angle are mapped onto two orthogonal axes: whereas the eccentricity is mapped onto the nasotemporal axis, the meridional angle is mapped onto the dorsoventral axis. Theoretically, such a map has been approximated by a complex log map. Neural models with correlational learning have explained the development of other visual maps such as orientation maps and ocular-dominance maps. In this paper it is demonstrated that activity-based mechanisms can drive a self-organizing map (SOM) into such a configuration that dilations and rotations of a particular image (in this case a rectangular bar) are mapped onto orthogonal axes. We further demonstrate, using the Laterally Interconnected Synergetically Self Organizing Map (LISSOM) model with an appropriate boundary and realistic initial conditions, that a retinotopic map which maps eccentricity and meridional angle to the horizontal and vertical axes, respectively, can be developed. This developed map bears a strong resemblance to the complex log map. We also simulated lesion studies, which indicate that the lateral excitatory connections play a crucial role in the development of the retinotopic map.
PMCID: PMC4310300
development; complex-logarithmic; retinotopy; self-organizing; LISSOM; V1; plasticity
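The orthogonal-axes property described in this abstract is the defining feature of the complex-log map, which can be sketched in a few lines (a plain NumPy illustration of the target mapping, not the LISSOM model itself):

```python
import numpy as np

# The complex-log map w = log(z): a retinal point at eccentricity r and
# meridional angle theta, z = r * exp(i*theta), lands at
# w = log(r) + i*theta, so eccentricity and angle occupy orthogonal axes.
def complex_log_map(r, theta):
    w = np.log(r * np.exp(1j * theta))
    return w.real, w.imag   # (eccentricity axis, angle axis)

x_near, y_near = complex_log_map(2.0, 0.5)
x_far,  y_far  = complex_log_map(4.0, 0.5)   # same angle, larger eccentricity
x_rot,  y_rot  = complex_log_map(2.0, 1.0)   # same eccentricity, larger angle
```

Dilations (changes in r) move activity only along one axis and rotations (changes in theta) only along the other, which is exactly the configuration the SOM is driven toward.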
2.  Applying artificial vision models to human scene understanding 
How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective—the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)—have typically focused on single visual dimensions (e.g., size), rather than on the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally-measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN—the models that best accounted for the patterns obtained from PPA and TOS—were different from the GIST model that best accounted for the pattern obtained from RSC; (3) the best-performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method—NEIL (“Never-Ending-Image-Learner”), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes—showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.
PMCID: PMC4316773
scene processing; parahippocampal place area; retrosplenial cortex; transverse occipital sulcus; computer vision
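The similarity-matrix comparison at the heart of this study (representational similarity analysis) reduces to correlating the upper triangles of pairwise-similarity matrices. A minimal sketch, with random matrices standing in for real model features or BOLD patterns:

```python
import numpy as np

# Representational similarity analysis sketch: build a scene-by-scene
# similarity matrix for each "space" and correlate their upper triangles.
# The feature matrices here are random stand-ins, not real data.
rng = np.random.default_rng(0)

def similarity_matrix(features):
    return np.corrcoef(features)            # scene-by-scene similarity

def rsa_correlation(sim_a, sim_b):
    iu = np.triu_indices_from(sim_a, k=1)   # unique off-diagonal entries
    return np.corrcoef(sim_a[iu], sim_b[iu])[0, 1]

model_space = rng.normal(size=(20, 50))                 # 20 scenes, 50 features
neural_space = model_space + rng.normal(size=(20, 50))  # noisy "neural" copy
other_space = rng.normal(size=(20, 50))                 # unrelated space

r_related = rsa_correlation(similarity_matrix(model_space),
                            similarity_matrix(neural_space))
r_unrelated = rsa_correlation(similarity_matrix(model_space),
                              similarity_matrix(other_space))
```

A model whose feature space is related to the neural one yields a higher similarity-matrix correlation than an unrelated space, which is the logic behind ranking NEIL, SUN, and GIST against PPA, TOS, and RSC patterns.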
3.  Context-dependent memory decay is evidence of effort minimization in motor learning: a computational study 
Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed whether the trained movement or a non-trained movement is being performed, a recent study reported that the motor memory decays faster while the trained movement is performed than while non-trained movements are, i.e., the decay rate of motor memory is movement- or context-dependent. Although motor learning has been successfully modeled based on an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered this optimization. Here, I show analytically and numerically that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.
PMCID: PMC4316784
motor learning; neural network modeling; context-dependent memory decay; effort minimization; motor primitive
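The two-process account in this abstract is often written as a one-line trial-by-trial update. A sketch with illustrative retention and learning-rate values (not fitted to any dataset):

```python
# Two-process motor-learning sketch: on each trial the memory x decays by
# a retention factor A and is corrected by a fraction B of the movement
# error. A and B are illustrative, not fitted values.
A, B = 0.95, 0.2          # retention (decay) and error-correction gain
perturbation = 1.0        # imposed disturbance to be learned
x = 0.0                   # motor memory
errors = []
for trial in range(50):
    error = perturbation - x      # desired minus actual
    x = A * x + B * error         # decay plus error minimization
    errors.append(abs(error))
```

Because A < 1, the memory asymptotes below full compensation (here at B/(1 − A + B) = 0.8), leaving a residual error of 0.2; context-dependent decay amounts to letting the retention factor A depend on the movement currently being performed.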
4.  Enhanced polychronization in a spiking network with metaplasticity 
Computational models of metaplasticity have usually focused on the modeling of single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior. Our guiding assumption is that the primary purpose of metaplasticity is to regulate synaptic plasticity, by increasing it when input is low and decreasing it when input is high. For our experiments we adopt a model of metaplasticity that demonstrably has this effect for a single synapse; our primary interest is in how metaplasticity thus defined affects network-level phenomena. We focus on a network-level phenomenon called polychronicity, which has a potential role in representation and memory. A network with polychronicity has the ability to produce non-synchronous but precisely timed sequences of neural firing events that can arise from strongly connected groups of neurons called polychronous neural groups (Izhikevich et al., 2004). Polychronous groups (PNGs) develop readily when spiking networks are exposed to repeated spatio-temporal stimuli under the influence of spike-timing-dependent plasticity (STDP), but are sensitive to changes in synaptic weight distribution. We use a technique we have recently developed, called Response Fingerprinting, to show that PNGs formed in the presence of metaplasticity are significantly larger than those with no metaplasticity. A potential mechanism for this enhancement is proposed that links an inherent property of integrator-type neurons, called spike latency, to an increase in the tolerance of PNG neurons to jitter in their inputs.
PMCID: PMC4318347
metaplasticity; STDP; spiking network; polychronous neural group; memory; spike latency; synaptic weight; synaptic drive
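For readers unfamiliar with the plasticity rule that metaplasticity regulates here, a minimal additive STDP window looks like this (amplitudes and time constants are illustrative):

```python
import math

# Additive STDP: pre-before-post pairs potentiate, post-before-pre pairs
# depress, with exponentially decaying temporal windows.
A_PLUS, A_MINUS = 0.010, 0.012     # amplitudes (slight depression bias)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:                     # pre leads post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:                     # post leads pre: depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```

Metaplasticity, as modeled in the paper, would in effect scale amplitudes like `A_PLUS`/`A_MINUS` down when recent input has been high and up when it has been low.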
5.  Architectural constraints are a major factor reducing path integration accuracy in the rat head direction cell system 
Head direction cells fire to signal the direction in which an animal's head is pointing. They are able to track head direction using only internally-derived information (path integration). In this simulation study we investigate the factors that affect path integration accuracy. Specifically, two major limiting factors are identified: rise time, the time after stimulation it takes for a neuron to start firing, and the presence of symmetric non-offset within-layer recurrent collateral connectivity. On the basis of the latter, the important prediction is made that head direction cell regions directly involved in path integration will not contain this type of connectivity, giving a theoretical explanation for architectural observations. Increased neuronal rise time is found to slow path integration, and the slowing effect for a given rise time is found to be more severe in the context of short conduction delays. Further work is suggested on the basis of our findings, which represent a valuable contribution to our understanding of the head direction cell system.
PMCID: PMC4319401
spatial cognition; head direction cells; path integration; continuous attractor neural networks; attractor dynamics
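The rise-time effect can be caricatured in one dimension: the head-direction estimate integrates an internally derived angular-velocity signal, with a first-order lag standing in for neuronal rise time. This is a toy illustration of the slowing effect, not a reimplementation of the paper's attractor network:

```python
# Toy path integrator: a slowly rising velocity signal (rise_time) feeds
# the head-direction estimate; slower rise -> larger lag behind the true
# heading. All values are illustrative.
def integrate_heading(omega, rise_time, dt=0.001, steps=2000):
    true_hd, est, drive = 0.0, 0.0, 0.0
    for _ in range(steps):
        true_hd += omega * dt
        drive += (omega - drive) * dt / rise_time  # slow-rising velocity signal
        est += drive * dt
    return true_hd, est

true_f, est_f = integrate_heading(1.0, rise_time=0.005)   # fast-rising neurons
true_s, est_s = integrate_heading(1.0, rise_time=0.050)   # slow-rising neurons
```

The estimate trails the true heading by roughly omega × rise_time, so slower-rising neurons produce slower, lagging path integration, in the spirit of the simulation result above.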
6.  Data and model tango to aid the understanding of astrocyte-neuron signaling 
PMCID: PMC3900764  PMID: 24478686
astrocytes; neurons; signaling; tripartite synapse; mathematical model
7.  The dendritic location of the L-type current and its deactivation by the somatic AHP current both contribute to firing bistability in motoneurons 
Spinal motoneurons may display a variety of firing patterns including bistability between repetitive firing and quiescence and, more rarely, bistability between two firing states of different frequencies. It was suggested in the past that firing bistability required that the persistent L-type calcium current be segregated in distal dendrites, far away from the spike generating currents. However, this is not supported by more recent data. Using a two-compartment model of a motoneuron, we show that the different firing patterns may also result from the competition between the more proximal component of the dendritic L-type conductance and the calcium-sensitive potassium conductance responsible for afterhyperpolarization (AHP). Further emphasizing this point, firing bistability may also be achieved when the L-type current is placed in the somatic compartment. However, this requires that the calcium-sensitive potassium conductance be triggered solely by the high-threshold calcium currents activated during spikes and not by calcium influx through the L-type current. This prediction was validated by dynamic clamp experiments in vivo in lumbar motoneurons of deeply anesthetized cats in which an artificial L-type current was added at the soma. Altogether, our results suggest that the dynamical interaction between the L-type and afterhyperpolarization currents is as fundamental as the segregation of the calcium L-type current in dendrites for controlling the discharge of motoneurons.
PMCID: PMC3902208  PMID: 24478687
bistability; persistent calcium current; afterhyperpolarization; modeling; dynamic clamp
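The competition described here can be caricatured in a rate model: a persistent inward (L-type-like) term supplies positive feedback on the firing rate while a slower AHP-like variable supplies negative feedback. With the AHP gain modest, quiescent and firing states coexist. All parameters are illustrative, not a motoneuron fit:

```python
import math

def simulate(rate0, g_L=2.0, g_ahp=0.5, dt=0.001, steps=5000):
    r, a = rate0, 0.0                      # firing rate and AHP activation
    for _ in range(steps):
        drive = g_L * r - g_ahp * a - 0.5  # persistent inward minus AHP
        r_inf = 1.0 / (1.0 + math.exp(-8.0 * drive))
        r += (r_inf - r) * dt / 0.02       # fast rate dynamics
        a += (r - a) * dt / 0.2            # slow AHP tracks the rate
    return r

low = simulate(0.0)    # start quiescent -> settles near silence
high = simulate(1.0)   # start firing    -> sustained firing
```

The same parameters support two stable end states depending only on initial conditions, i.e., firing bistability; raising `g_ahp` above `g_L` makes the drive negative in the firing state and collapses it, mirroring the deactivating role of the AHP current.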
8.  Freezing of gait and response conflict in Parkinson's disease: computational directions 
PMCID: PMC3904072  PMID: 24478689
Parkinson's disease; freezing of gait; response conflict; computational modeling
9.  Spike-timing computation properties of a feed-forward neural network model 
Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g., serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape these transformations, we modeled feed-forward networks of 7–22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity (STDP) rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
PMCID: PMC3904091  PMID: 24478688
spike-timing dependent plasticity (STDP); computational modeling; network connectivity; biological neural networks; microcircuits
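A simple way to check for the structured spike-timing relationships this study looks for is a cross-correlogram between two spike trains. With synthetic trains (a stand-in for recorded or simulated data), a disynaptic delay shows up as a clear peak:

```python
import numpy as np

rng = np.random.default_rng(1)

def crosscorr_peak(src, dst, max_lag=20):
    # histogram of positive (dst - src) lags in 1 ms bins; return the
    # left edge of the most populated bin
    lags = (dst[None, :] - src[:, None]).ravel()
    lags = lags[(lags > 0) & (lags <= max_lag)]
    counts, edges = np.histogram(lags, bins=max_lag, range=(0, max_lag))
    return edges[np.argmax(counts)]

src = np.sort(rng.uniform(0, 10_000, 500))        # presynaptic spikes (ms)
dst = src + 6.5 + rng.normal(0, 0.3, src.size)    # relayed ~6.5 ms later
```

Uncorrelated background activity spreads lags evenly across the histogram, while reliable polysynaptic propagation concentrates them at the conduction delay, which is the signature the study reports appearing only with correlated input or parallel pathways.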
10.  Long-term plasticity determines the postsynaptic response to correlated afferents with multivesicular short-term synaptic depression 
Synchrony in a presynaptic population leads to correlations in vesicle occupancy at the active sites for neurotransmitter release. The number of independent release sites per presynaptic neuron, a synaptic parameter recently shown to be modified during long-term plasticity, will modulate these correlations and therefore have a significant effect on the firing rate of the postsynaptic neuron. To understand how correlations from synaptic dynamics and from presynaptic synchrony shape the postsynaptic response, we study a model of multiple-release-site short-term plasticity and derive exact results for the cross-correlation function of vesicle occupancy and neurotransmitter release, as well as the postsynaptic voltage variance. Using approximate forms for the postsynaptic firing rate in the limits of low and high correlations, we demonstrate that short-term depression leads to a maximum response for an intermediate number of presynaptic release sites, and that this leads to a tuning-curve response peaked at an optimal presynaptic synchrony set by the number of neurotransmitter release sites per presynaptic neuron. These effects arise because, above a certain level of correlation, activity in the presynaptic population is overly strong, resulting in wastage of the pool of releasable neurotransmitter. As the nervous system operates under constraints of efficient metabolism it is likely that this phenomenon provides an activity-dependent constraint on network architecture.
PMCID: PMC3906582  PMID: 24523691
long-term plasticity; short-term plasticity; synaptic depression; correlations and synchrony; voltage fluctuations
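The "wastage" intuition can be sketched with a toy depletion model: each spike releases available vesicles with some probability, emptied sites need time to refill, and highly synchronous bursts exhaust the pool. Parameters are illustrative, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(2)

def total_release(n_sites, spikes_per_burst, n_bursts, p=0.5, recover=0.2):
    occupied = np.ones(n_sites, dtype=bool)   # docked vesicles
    released = 0
    for _ in range(n_bursts):
        for _ in range(spikes_per_burst):     # near-synchronous spikes
            fire = occupied & (rng.random(n_sites) < p)
            released += int(fire.sum())
            occupied &= ~fire                 # site emptied by release
        occupied |= rng.random(n_sites) < recover  # partial refill
    return released

sparse = total_release(100, 1, 50)    # one spike per burst
packed = total_release(100, 10, 50)   # ten-fold presynaptic activity
```

Tenfold more spikes yields far less than tenfold more release, because synchronous bursts deplete the occupied sites; this saturation is the mechanism behind the tuning-curve peak at intermediate synchrony described above.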
11.  A modular theory of multisensory integration for motor control 
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimate of the target's position to be compared with a single optimal estimate of the hand. Rather, it employs a more modular approach in which the overall behavior is built from multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare them to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame.
PMCID: PMC3908447  PMID: 24550816
sensory integration; motor control; maximum likelihood; reference frames
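The maximum-likelihood combination mentioned in this abstract has a closed form for independent Gaussian estimates: inverse-variance weighting. A minimal sketch with made-up vision and proprioception values:

```python
# Inverse-variance (maximum-likelihood) cue combination: the fused
# estimate weights each cue by its reliability, and its variance is
# lower than either cue's alone. Numbers are illustrative.
def fuse(mu_a, var_a, mu_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)
    return mu, var

# vision (sharp) vs. proprioception (broad) estimates of hand position
mu, var = fuse(10.0, 1.0, 12.0, 4.0)
```

The concurrent, modular view applies this same combination several times, once per reference frame, with transformation noise added to each remapped signal (inflating its variance, and hence reducing its weight) before fusion.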
12.  Linear stability in networks of pulse-coupled neurons 
In a first step toward the comprehension of neural activity, one should focus on the stability of the possible dynamical states. Even the characterization of an idealized regime, such as that of a perfectly periodic spiking activity, reveals unexpected difficulties. In this paper we discuss a general approach to linear stability of pulse-coupled neural networks for generic phase-response curves and post-synaptic response functions. In particular, we present: (1) a mean-field approach developed under the hypothesis of an infinite network and small synaptic conductances; (2) a “microscopic” approach which applies to finite but large networks. As a result, we find that there exist two classes of perturbations: those which are perfectly described by the mean-field approach and those which are subject to finite-size corrections, irrespective of the network size. The analysis of perfectly regular, asynchronous, states reveals that their stability depends crucially on the smoothness of both the phase-response curve and the transmitted post-synaptic pulse. Numerical simulations suggest that this scenario extends to systems that are not covered by the perturbative approach. Altogether, we have described a series of tools for the stability analysis of various dynamical regimes of generic pulse-coupled oscillators, going beyond those that are currently invoked in the literature.
PMCID: PMC3912513  PMID: 24550817
linear stability analysis; splay states; synchronization; neural networks; pulse coupled neurons; Floquet spectrum
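A numerical companion to the stability analysis sketched in this abstract: two identical pulse-coupled phase oscillators, each advanced by a smooth phase-response curve (PRC) when its partner fires. Perturbing the synchronous state and iterating the firing map shows the perturbation decaying, i.e., linear stability of synchrony for this particular, purely illustrative PRC and coupling strength:

```python
def prc(phi):
    # smooth phase-response curve, vanishing at the spike (phi = 0 and 1)
    return phi ** 2 * (1.0 - phi)

def final_phase_lag(n_firings, eps=0.05, lag0=0.01):
    phi = [0.0, lag0]                         # small perturbation of synchrony
    for _ in range(n_firings):
        i = 0 if phi[0] >= phi[1] else 1      # next oscillator to fire
        dt = 1.0 - phi[i]
        phi = [phi[0] + dt, phi[1] + dt]      # advance to that firing
        phi[i] = 0.0                          # spike and reset
        j = 1 - i
        phi[j] += eps * prc(phi[j])           # partner receives the pulse
    return abs(phi[0] - phi[1])
```

Here the lag shrinks by roughly a factor (1 − eps) per cycle; a PRC with different slopes near the spike can make the same state unstable, which is the smoothness dependence the abstract highlights.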
13.  When do microcircuits produce beyond-pairwise correlations? 
Describing the collective activity of neural populations is a daunting task. Recent empirical studies in retina, however, suggest a vast simplification in how multi-neuron spiking occurs: the activity patterns of retinal ganglion cell (RGC) populations under some conditions are nearly completely captured by pairwise interactions among neurons. In other circumstances, higher-order statistics are required and appear to be shaped by input statistics and intrinsic circuit mechanisms. Here, we study the emergence of higher-order interactions in a model of the RGC circuit in which correlations are generated by common input. We quantify the impact of higher-order interactions by comparing the responses of mechanistic circuit models vs. “null” descriptions in which all higher-than-pairwise correlations have been accounted for by lower order statistics; these are known as pairwise maximum entropy (PME) models. We find that over a broad range of stimuli, output spiking patterns are surprisingly well captured by the pairwise model. To understand this finding, we study an analytically tractable simplification of the RGC model. We find that in the simplified model, bimodal input signals produce larger deviations from pairwise predictions than unimodal inputs. The characteristic light filtering properties of the upstream RGC circuitry suppress bimodality in light stimuli, thus removing a powerful source of higher-order interactions. This provides a novel explanation for the surprising empirical success of pairwise models.
PMCID: PMC3915758  PMID: 24567715
retinal ganglion cells; maximum entropy distribution; stimulus-driven; correlations; computational model
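The key explanatory step, that upstream filtering suppresses bimodality in the common input, is easy to demonstrate directly. Here excess kurtosis serves as a crude bimodality score (−2 for a two-state signal, 0 for a Gaussian); the boxcar filter is a generic low-pass stand-in for retinal temporal filtering:

```python
import numpy as np

rng = np.random.default_rng(3)

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)   # -2 for two-point, 0 for Gaussian

raw = rng.choice([-1.0, 1.0], size=100_000)   # strongly bimodal input
kernel = np.ones(20) / 20.0                   # boxcar low-pass filter
filtered = np.convolve(raw, kernel, mode="valid")
```

The raw two-state signal sits at the bimodal extreme, while the filtered version is nearly Gaussian; per the abstract, removing bimodality in this way removes a powerful source of beyond-pairwise correlations.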
14.  Bifurcation analysis of “synchronization fluctuation”: a diagnostic measure of brain epileptic states 
PMCID: PMC3915881  PMID: 24567716
complex network; synchronization fluctuation; dynamical system; bifurcation; control parameter
15.  Influence of extracellular oscillations on neural communication: a computational perspective 
Neural communication generates oscillations of electric potential in the extracellular medium. In feedback, these oscillations affect the electrochemical processes within the neurons, influencing the timing and the number of action potentials. It is unclear whether this influence should be considered only as noise or whether it has some functional role in neural communication. Through computer simulations we investigated the effect of various sinusoidal extracellular oscillations on the timing and number of action potentials. Each simulation is based on a multicompartment model of a single neuron, which is stimulated through spatially distributed synaptic activations. A thorough analysis is conducted on a large number of simulations with different models of CA3 and CA1 pyramidal neurons, modeled with realistic morphologies and active ion conductances. We demonstrated that the influence of the weak extracellular oscillations, which are commonly present in the brain, is rather stochastic and modest. We found that stronger fields, which are spontaneously present in the brain only in some particular cases (e.g., during seizures) or which can be induced externally, could significantly modulate spike timings.
PMCID: PMC3916728  PMID: 24570661
extracellular oscillations; local field potentials; ephaptic coupling; nonsynaptic communication; multi-compartment model; NEURON simulation environment; pyramidal neurons
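The weak-versus-strong contrast can be illustrated with a single leaky integrate-and-fire unit in place of the paper's detailed multicompartment models: a sinusoidal term added to the drive stands in for the extracellular field, and we measure how far spike times move. Amplitudes and time constants are illustrative:

```python
import math

def spike_times(field_amp, t_max=1.0, dt=1e-4):
    v, spikes, t = 0.0, [], 0.0
    while t < t_max:
        drive = 1.5 + field_amp * math.sin(2 * math.pi * 8.0 * t)  # 8 Hz field
        v += dt * (drive - v) / 0.02          # leaky integration, tau = 20 ms
        if v >= 1.0:                          # threshold crossing
            spikes.append(t)
            v = 0.0
        t += dt
    return spikes

base = spike_times(0.0)
weak = spike_times(0.02)     # weak field: small, jitter-like shifts
strong = spike_times(0.5)    # strong field: clear spike-time modulation

def mean_shift(a, b):
    # mean distance from each reference spike to the nearest perturbed one
    return sum(min(abs(t - s) for s in b) for t in a) / len(a)
```

Weak fields barely move the spike train, while strong fields reorganize it, matching the modest-versus-significant distinction the simulations above report.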
16.  Algorithms for the analysis of ensemble neural spiking activity using simultaneous-event multivariate point-process models 
Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble.
PMCID: PMC3918645  PMID: 24575001
multivariate point-process; simultaneous events; multinomial GLM; thalamic synchrony
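The thinning algorithms derived for the SEMPP model build on the standard Lewis–Shedler thinning scheme for simulating a point process with a time-varying rate, which is short enough to show in full (a univariate sketch, not the marked-process version in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_thinning(rate_fn, rate_max, t_max):
    # draw candidate events at the bounding rate; keep each with
    # probability rate(t) / rate_max
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(events)
        if rng.random() < rate_fn(t) / rate_max:
            events.append(t)

rate = lambda t: 20.0 * (1.0 + np.sin(2.0 * np.pi * t))  # spikes/s, >= 0
spikes = simulate_thinning(rate, rate_max=40.0, t_max=100.0)
```

The expected count is the integral of the rate (here 2000 spikes over 100 s); the marked-point-process representation in the paper extends this idea so that simultaneous events across neurons can be generated and then checked with the time-rescaling theorem.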
17.  Differing effects of attention in single-units and populations are well predicted by heterogeneous tuning and the normalization model of attention 
Single-unit measurements have reported many different effects of attention on contrast-response (e.g., contrast-gain, response-gain, additive-offset dependent on visibility), while functional imaging measurements have more uniformly reported increases in response across all contrasts (additive-offset). The normalization model of attention elegantly predicts the diversity of effects of attention reported in single-units well-tuned to the stimulus, but what predictions does it make for more realistic populations of neurons with heterogeneous tuning? Are predictions in accordance with population-scale measurements? We used functional imaging data from humans to determine a realistic ratio of attention-field to stimulus-drive size (a key parameter for the model) and predicted effects of attention in a population of model neurons with heterogeneous tuning. We found that within the population, neurons well-tuned to the stimulus showed a response-gain effect, while less-well-tuned neurons showed a contrast-gain effect. Averaged across the population, these disparate effects of attention gave rise to additive-offsets in contrast-response, similar to reports in human functional imaging as well as population averages of single-units. Differences in predictions for single-units and populations were observed across a wide range of model parameters (ratios of attention-field to stimulus-drive size and the amount of baseline response modifiable by attention), offering an explanation for disparity in physiological reports. Thus, by accounting for heterogeneity in tuning of realistic neuronal populations, the normalization model of attention can not only predict responses of well-tuned neurons, but also the activity of large populations of neurons. More generally, computational models can unify physiological findings across different scales of measurement, and make links to behavior, but only if factors such as heterogeneous tuning within a population are properly accounted for.
PMCID: PMC3928538  PMID: 24600380
contrast-response; spatial attention; contrast-gain; response-gain; additive-offset; efficient-selection; cueing; attention-field
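A one-dimensional toy version of the normalization model makes the attention-field-size parameter concrete: a Gaussian attention field multiplies a Gaussian stimulus drive, which is then divided by a broadly pooled suppressive drive. All widths, gains, and the flat pooling are illustrative simplifications of the model discussed above:

```python
import numpy as np

def response(contrast, attn_width=None, stim_width=5.0, gain=2.0, sigma=0.1):
    x = np.linspace(-60, 60, 241)                 # spatial channels
    stim_drive = contrast * np.exp(-x**2 / (2 * stim_width**2))
    attn = np.ones_like(x)
    if attn_width is not None:                    # attentional gain field
        attn += (gain - 1.0) * np.exp(-x**2 / (2 * attn_width**2))
    excitatory = attn * stim_drive
    suppressive = excitatory.mean()               # broadly pooled normalization
    return excitatory[x.size // 2] / (suppressive + sigma)

def attn_ratio(contrast, attn_width):
    return response(contrast, attn_width) / response(contrast)

r_small = attn_ratio(1.0, attn_width=5.0)    # focused attention field
r_large = attn_ratio(1.0, attn_width=60.0)   # broad attention field
```

At low contrast both field sizes boost the response by nearly the full gain, but at high contrast the broad field also scales the suppressive pool, pulling its attention ratio back toward 1 (a contrast-gain-like effect), while the focused field retains more of the multiplicative boost (response-gain-like), which is the ratio dependence the abstract exploits.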
18.  Current approaches to model extracellular electrical neural microstimulation 
Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods with sufficient details to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described: First we describe the classical hybrid approach that combines the finite element modeling of the potential field with the calculation of the neuron's response in a cable equation framework (compartmentalized neuron models). Then, we present a “whole finite element” approach allowing the simultaneous calculation of the extracellular and intracellular potentials, by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the frame of neural recording, but has never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show on an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.
PMCID: PMC3928616  PMID: 24600381
finite element modeling; extracellular focal microstimulation; microelectrode arrays; neural prosthesis; brain implants; ground surface configuration; compartmentalized neuron models; thin-film approximation
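Step (1) above has a closed form in the simplest setting that needs no finite elements: a point-source electrode in an infinite homogeneous medium, where V = I/(4πσr), and the classic activating function (second spatial difference of the extracellular potential along an axon) predicts where depolarization is strongest. All geometry and values are illustrative:

```python
import numpy as np

sigma = 0.3        # S/m, illustrative gray-matter conductivity
current = -10e-6   # A, cathodic point-source stimulus
electrode = np.array([0.0, 0.0, 50e-6])   # 50 um above the fiber

x = np.arange(-500e-6, 500.5e-6, 10e-6)   # axon compartments along x
nodes = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
r = np.linalg.norm(nodes - electrode, axis=1)
v_ext = current / (4.0 * np.pi * sigma * r)     # extracellular potential

activating = np.diff(v_ext, 2)    # second spatial difference along the axon
peak = np.argmax(activating) + 1  # compartment most depolarized
```

For a cathodic source the potential trough, and hence the activating-function peak, sits directly under the electrode; the hybrid and whole-FEM approaches in the paper generalize exactly this computation to realistic geometries and membrane dynamics.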
19.  A more realistic quantum mechanical model of conscious perception during binocular rivalry 
PMCID: PMC3929835  PMID: 24600383
consciousness; quantum state; mixed state; probability distribution; dominance duration
20.  Unveiling complexity: non-linear and fractal analysis in neuroscience and cognitive psychology 
PMCID: PMC3930866  PMID: 24600384
non-linear analysis; complex systems; fractal analysis; cognitive psychology; neurosciences
21.  Energy distribution property and energy coding of a structural neural network 
Studying neural coding through neural energy is a novel viewpoint. In this paper, based on a previously proposed single-neuron model, we investigated the correlation between energy consumption and the parameters of cortical networks (number of neurons, coupling strength, and transform delay) under an oscillatory condition. We found that the energy distribution varies in an orderly way as these parameters change and that it is closely related to the synchronous oscillation of the neural network. We also compared this method with the traditional relative-coefficient method, and the energy method performs as well as or better than the traditional one. Assessing energy distribution and consumption thus offers a novel way to study synchronous activity and neural network parameters. The conclusions of this paper will refine the framework of neural coding theory and contribute to our understanding of the coding mechanism of the cerebral cortex, providing a strong theoretical foundation for a novel neural coding theory: energy coding.
PMCID: PMC3930871  PMID: 24600382
neural network; nervous energy; neural coding; parameter
22.  Astronomical apology for fractal analysis: spectroscopy's place in the cognitive neurosciences 
PMCID: PMC3934308  PMID: 24616693
spectroscopy; fractal; power law; time series; molecular cloud; stars; perception; perception-action
23.  Subtractive, divisive and non-monotonic gain control in feedforward nets linearized by noise and delays 
The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry—also known as “open-loop feedback”—which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise was very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
PMCID: PMC3934558  PMID: 24616694
gain control; feedforward network; subtractive; divisive; non-monotonic; linearization by delay; weakly electric fish
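The three regimes above can be sketched with a simple threshold-linear rate unit. This is a toy stand-in for the paper's spiking model, and the threshold and inhibition strengths are illustrative assumptions: subtractive inhibition shifts the f-I curve rightward, divisive inhibition scales its slope, and inhibition that grows faster than excitation makes the curve non-monotonic.

```python
def relu(x):
    # threshold-linear ("rectified") activation
    return x if x > 0.0 else 0.0

def output_rate(I, g, mode, theta=0.5):
    """f-I curve of a rate unit receiving input I and feedforward
    inhibition of strength g. Toy illustration, not the paper's model."""
    base = relu(I - theta)                 # uninhibited f-I curve
    if mode == "subtractive":              # inhibition shifts the threshold
        return relu(I - theta - g)
    if mode == "divisive":                 # inhibition scales the gain
        return base / (1.0 + g)
    if mode == "non-monotonic":            # inhibition grows supralinearly with I
        return relu(base - g * I * I)
    raise ValueError(mode)

inputs = [0.2 * k for k in range(11)]      # I = 0.0 .. 2.0
curves = {m: [output_rate(I, 0.4, m) for I in inputs]
          for m in ("subtractive", "divisive", "non-monotonic")}
```

The non-monotonic curve rises and then falls back to zero at high input, while the subtractive and divisive curves stay monotonic, matching the qualitative distinction drawn in the abstract.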
24.  Synergetic motor control paradigm for optimizing energy efficiency of multijoint reaching via tacit learning 
The human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which poses an ill-posed problem, and the multijoint system involves complex interaction torques between joints. To produce motion that is optimal in terms of energy consumption, previous work has commonly relied on cost-function-based optimization. Yet even if optimal motor patterns are employed phenomenologically, there is no evidence of a physiological process in our central nervous system resembling such mathematical optimization. In this study, we aim to find a more primitive computational mechanism with a modular configuration that realizes adaptability and optimality without prior knowledge of the system dynamics. We propose a novel motor control paradigm based on tacit learning with task-space feedback, in which the accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. We applied it to vertical cyclic reaching, which involves complex interaction torques, and evaluated whether the proposed paradigm can learn to optimize solutions with a 3-joint, planar biomechanical model. The results demonstrate that the proposed method acquired motor synergy and produced energy-efficient solutions under different load conditions. With feedback control alone, the trajectory is strongly affected by the interaction torques; with tacit learning, in contrast, the trajectory is corrected over time toward optimal solutions. Energy-efficient solutions were obtained through the emergence of motor synergy. During learning, the contribution of the feedforward controller is augmented while that of the feedback controller is minimized, down to 12% with no load at the hand and 16% with a 0.5 kg load. The proposed paradigm thus provides an optimization process for redundant systems using a dynamic-model-free and cost-function-free approach.
PMCID: PMC3937612  PMID: 24616695
feedback error learning; motor synergy; optimality; interaction torques; redundancy; Bernstein problem; tacit learning
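The core loop described above, in which the feedback command is accumulated into the feedforward command over repeated cycles, can be sketched in one dimension. This is an illustrative toy assuming a trivial plant with a constant load-like disturbance, not the paper's 3-joint arm model:

```python
def tacit_cycles(target, disturbance, cycles=30, kp=0.5):
    """Total command u = u_ff + u_fb; at every step the feedback command
    is accumulated into the feedforward command (tacit learning).
    Plant (toy assumption): output y = u - disturbance."""
    T = len(target)
    u_ff = [0.0] * T
    fb_peak = []                                # largest feedback command per cycle
    for _ in range(cycles):
        peak = 0.0
        for t in range(T):
            y_ff = u_ff[t] - disturbance        # output under feedforward alone
            u_fb = kp * (target[t] - y_ff)      # task-space feedback correction
            u_ff[t] += u_fb                     # accumulate into feedforward
            peak = max(peak, abs(u_fb))
        fb_peak.append(peak)
    return u_ff, fb_peak
```

Each cycle shrinks the residual error by a factor (1 - kp), so the feedback contribution decays geometrically while the feedforward command converges to target + disturbance, mirroring the reported shift of work from the feedback to the feedforward controller.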
25.  Dimensionality of joint torques and muscle patterns for reaching 
Muscle activities underlying many motor behaviors can be generated by a small number of basic activation patterns with specific features shared across movement conditions. Such low dimensionality suggests that the central nervous system (CNS) relies on a modular organization to simplify control. However, the relationship between the dimensionality of muscle patterns and that of joint torques is not fixed, because of redundancy and non-linearity in mapping the former into the latter, and needs to be investigated. We compared the torques acting at four arm joints during fast reaching movements in different directions in the frontal and sagittal planes and the underlying muscle patterns. The dimensionality of the non-gravitational components of torques and muscle patterns in the spatial, temporal, and spatiotemporal domains was estimated by multidimensional decomposition techniques. The spatial organization of torques was captured by two or three generators, indicating that not all the available coordination patterns are employed by the CNS. A single temporal generator with a biphasic profile was identified, generalizing previous observations on a single plane. The number of spatiotemporal generators was equal to the product of the spatial and temporal dimensionalities and their organization was essentially synchronous. Muscle pattern dimensionalities were higher than torque dimensionalities, but also higher than the minimum imposed by the inherent non-negativity of muscle activations. The spatiotemporal dimensionality of the muscle patterns was lower than the product of their spatial and temporal dimensionalities, indicating the existence of specific asynchronous coordination patterns. Thus, the larger dimensionalities of the muscle patterns may be required for the CNS to overcome the non-linearities of the musculoskeletal system and to flexibly generate endpoint trajectories with simple kinematic features using a limited number of building blocks.
PMCID: PMC3939605  PMID: 24624078
modularity; reaching movements; human subjects; inverse dynamics; EMGs; muscle synergies; temporal components; joint torques
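Decompositions of this kind are typically non-negative factorizations. The sketch below is a minimal multiplicative-update NMF (Lee-Seung rules); the toy "EMG" matrix, its size, and the chosen rank are illustrative assumptions, not the paper's data. It recovers two spatial synergies from a 6-muscle by 8-sample pattern:

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    """Multiplicative-update NMF: V (m x n, non-negative) ~ W @ H,
    with W (m x k) and H (k x n) kept non-negative throughout."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update temporal coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update spatial synergies
    return W, H

# Toy "EMG" data: 6 muscles x 8 samples built from 2 known synergies.
W_true = np.array([[1.0, 0.0], [0.8, 0.0], [0.5, 0.5],
                   [0.0, 1.0], [0.0, 0.7], [0.2, 0.9]])
t = np.linspace(0.0, np.pi, 8)
H_true = np.vstack([np.sin(t), np.sin(t) ** 2])  # two non-negative time courses
V = W_true @ H_true
W, H = nmf(V, 2)
```

The model order (here k = 2) is what the abstract calls the dimensionality; in practice it is chosen by how much variance is accounted for as k increases.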
