Controversy exists regarding whether bimanual skill learning can generalize to unimanual performance. For example, some investigators showed that dynamic adaptation could only partially generalize between bilateral and unilateral movement conditions, while others demonstrated complete generalization of visuomotor adaptation. Here, we identified three potential factors that might have contributed to the discrepancy between the two sets of findings. In our first experiment, subjects performed reaching movements toward eight targets bilaterally with a novel force field applied to both arms, then unilaterally with the force field applied to one arm. Results showed that the dynamic adaptation generalized completely from bilateral to unilateral movements. In our second experiment, the same force field was only applied to one arm during both bilateral and unilateral movements. Results indicated complete transfer again. Finally, our subjects performed reaching movements toward a single target with the force field or a novel visuomotor rotation applied only to one arm during both bilateral and unilateral movements. The reduced breadth of experience obtained during bilateral movements resulted in incomplete transfer, which explains previous findings of limited generalization. These findings collectively suggest a substantial overlap between the neural processes underlying bilateral and unilateral movements, supporting the idea that bilateral training, often employed in stroke rehabilitation, is a valid method for improving unilateral performance. However, our findings also suggest that while the neural representations developed during bilateral training can generalize to facilitate unilateral performance, the extent of generalization may depend on the breadth of experience obtained during bilateral training.
The Lombard effect describes the automatic and involuntary increase in vocal intensity that speakers exhibit in a noisy environment. Previous studies of the Lombard effect have typically focused on the relationship between speaking and hearing. Automatic and involuntary increases in motor output have also been noted in studies of finger force production, an effect attributed to mechanisms of sensory attenuation. The present study tested the hypothesis that sensory attenuation mechanisms also underlie expression of the Lombard effect. Participants vocalized phonemes in time with a metronome, while auditory and visual feedback of their performance were manipulated or removed during the course of the trial. We demonstrate that providing a visual reference to calibrate somatosensory-based judgments of current vocal intensity resulted in reduced expression of the Lombard effect. Our results suggest that sensory attenuation effects typically seen in fingertip force production play an important role in the control of speech volume.
The control architecture underlying human reaching has been established, at least in broad outline. However, despite extensive research, the control architecture underlying human locomotion remains unclear. Some studies show evidence of high-level control focused on lower-limb trajectories; others suggest that nonlinear oscillators such as lower-level rhythmic central pattern generators (CPGs) play a significant role. To resolve this ambiguity, we reasoned that if a nonlinear oscillator contributes to locomotor control, human walking should exhibit dynamic entrainment to periodic mechanical perturbation; entrainment is a distinctive behavior of nonlinear oscillators. Here we present the first behavioral evidence that nonlinear neuro-mechanical oscillators contribute to the production of human walking, albeit weakly. As unimpaired human subjects walked at constant speed, we applied periodic torque pulses to the ankle at periods different from their preferred cadence. The gait period of 18 out of 19 subjects entrained to this mechanical perturbation, converging to match the perturbation period. Significantly, entrainment occurred only if the perturbation period was close to subjects' preferred walking cadence: it exhibited a narrow basin of entrainment. Further, regardless of the phase within the walking cycle at which the perturbation was initiated, subjects' gait synchronized or phase-locked with the mechanical perturbation at a phase of gait where it assisted propulsion. These results were affected neither by auditory feedback nor by a distractor task. However, the convergence to phase-locking was slow. These characteristics indicate that nonlinear neuro-mechanical oscillators make at most a modest contribution to human walking. Our results suggest that human locomotor control is not organized, as reaching is, to meet a predominantly kinematic specification, but is instead hierarchically organized, with a semi-autonomous peripheral oscillator operating under episodic supervisory control.
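The entrainment signature described above, convergence to the perturbation period only when it lies near the preferred cadence, together with phase-locking at a particular phase, is exactly what a periodically forced nonlinear oscillator exhibits. The abstract specifies no model, so the following Python sketch uses a generic Adler-type phase oscillator with assumed values for the preferred period, coupling strength, and forcing periods, purely to illustrate how a narrow basin of entrainment arises:

import numpy as np

def entrains(forcing_period, preferred_period=1.2, coupling=0.3,
             t_max=200.0, dt=0.001):
    """Simulate a weakly forced phase oscillator (Adler equation) and report
    whether its phase locks to the periodic perturbation."""
    omega0 = 2 * np.pi / preferred_period      # natural frequency of the oscillator
    omega_f = 2 * np.pi / forcing_period       # frequency of the perturbation
    phi = 0.0
    phase_diff = []
    for t in np.arange(0.0, t_max, dt):
        # the periodic forcing pulls the oscillator phase toward its own phase
        phi += (omega0 + coupling * np.sin(omega_f * t - phi)) * dt
        phase_diff.append(omega_f * t - phi)
    tail = np.array(phase_diff[-int(10.0 / dt):])
    return np.ptp(tail) < 0.1                  # no residual drift => phase-locked

if __name__ == "__main__":
    for T in (1.0, 1.15, 1.25, 1.5):           # assumed forcing periods, in seconds
        print(f"forcing period {T:.2f} s -> entrained: {entrains(T)}")

With these assumed parameters, only forcing periods close to the oscillator's preferred period produce phase-locking, mirroring the narrow basin of entrainment reported above.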
Grasping is a prototype of human motor coordination. Nevertheless, it is not known what determines the typical movement patterns of grasping. One way to approach this issue is by building models. We developed a model based on the movements of the individual digits. In our model the following objectives were taken into account for each digit: move smoothly to the preselected goal position on the object without hitting other surfaces, arrive at about the same time as the other digit and never move too far from the other digit. These objectives were implemented by regarding the tips of the digits as point masses with a spring between them, each attracted to its goal position and repelled from objects' surfaces. Their movements were damped. Using a single set of parameters, our model can reproduce a wider variety of experimental findings than any previous model of grasping. Apart from reproducing known effects (even the angles under which digits approach trapezoidal objects' surfaces, which no other model can explain), our model predicted that the increase in maximum grip aperture with object size should be greater for blocks than for cylinders. A survey of the literature shows that this is indeed how humans behave. The model can also adequately predict how single digit pointing movements are made. This supports the idea that grasping kinematics follow from the movements of the individual digits.
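The digit-level account above lends itself to a compact simulation. The sketch below is not the authors' implementation: it omits the surface-repulsion term, and the masses, stiffnesses, damping, rest length, and start and goal positions are assumed values. It simply treats the two fingertips as damped point masses, each attracted to its goal, joined by a spring, and reports the resulting maximum grip aperture.

import numpy as np

def simulate_digits(starts, goals, k_goal=30.0, k_pair=5.0, rest_len=0.08,
                    damping=8.0, mass=0.05, dt=0.001, t_max=1.0):
    """Minimal digit-level grasp sketch: each fingertip is a damped point mass
    attracted to its goal position, with a spring between the two tips that
    keeps them from drifting too far apart (surface repulsion omitted)."""
    pos = np.array(starts, dtype=float)        # (2, 2): thumb and index, x/y in metres
    vel = np.zeros_like(pos)
    goals = np.array(goals, dtype=float)
    trajectory = [pos.copy()]
    for _ in range(int(t_max / dt)):
        f = k_goal * (goals - pos)             # attraction of each digit to its goal
        d = pos[1] - pos[0]                    # vector between the two tips
        dist = np.linalg.norm(d) + 1e-9
        pair = k_pair * (dist - rest_len) * d / dist
        f[0] += pair                           # spring pushes tips apart when compressed
        f[1] -= pair
        f -= damping * vel                     # damping of the digit movements
        vel += (f / mass) * dt
        pos += vel * dt
        trajectory.append(pos.copy())
    return np.array(trajectory)

if __name__ == "__main__":
    traj = simulate_digits(starts=[[0.0, 0.0], [0.02, 0.0]],
                           goals=[[0.30, -0.03], [0.30, 0.03]])
    aperture = np.linalg.norm(traj[:, 1] - traj[:, 0], axis=1)
    print(f"maximum grip aperture: {aperture.max():.3f} m")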
Recent behavioral neuroscience research has shown that elementary reactive behavior can be improved by cross-modal sensory interactions, thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to the ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A well-known central issue in juggling lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes and posture. Here, we tested whether providing additional timing information about ball and hand motions, using periodic external auditory and tactile stimulation (the latter presented at the wrists), improved jugglers' behavior. One specific combination of auditory and tactile metronomes led to a decrease in the spatiotemporal variability of the jugglers' performance: a simple sound paired with left and right tactile cues presented in antiphase to each other, which corresponded to the temporal pattern of hand movements in the juggling task. In contrast, no improvement was obtained with other auditory and tactile combinations, and performance was even degraded when tactile events were presented alone. The nervous system thus appears able to efficiently integrate environmental information conveyed by different sensory modalities, but only if that information matches specific features of the coordination pattern. We discuss the possible implications of these results for understanding the neural integration processes involved in audio-tactile interaction during complex voluntary movement, taking into account the well-known gating effect of movement on vibrotactile perception.
Memory consolidation for a trained sequence of finger opposition movements, in 9- and 12-year-old children, was recently found to be significantly less susceptible to interference by a subsequent training experience than that of 17-year-olds. It was suggested that, in children, the experience of training on any sequence of finger movements may affect the performance of the sequence elements (component movements) rather than the sequence as a unit; the latter has been implicated in adults' learning of the task. This hypothesis implied a possible childhood advantage in the ability to transfer gains from a trained sequence to the reversed, untrained sequence of movements. Here we report the results of transfer tests undertaken to test this proposal in 9-, 12-, and 17-year-olds after training on the finger-to-thumb opposition sequence (FOS) learning task. Our results show that, in all three age groups tested, the performance gains in the trained sequence partially transferred from the left (trained) hand to the untrained hand at 48 hours after a single training session. However, there was very little transfer of the gains from the trained sequence to the untrained, reversed sequence performed by either hand. The results indicate sequence-specific post-training gains in FOS performance, as opposed to a general improvement in performance of the individual component movements that comprised both the trained and untrained sequences. These results do not support the proposal that the reduced susceptibility to interference in children before adolescence reflects a difference in movement syntax representation after training.
Why do motorcyclists crash on bends? To address this question we examined the riding styles of three groups of motorcyclists on a motorcycle simulator. Novice, experienced and advanced motorcyclists navigated a series of combined left and right bends while their speed and lane position were recorded. Each rider encountered an unexpected hazard on both a left- and right-hand bend section. Upon seeing the hazards, all riders decreased their speed before steering to avoid the hazard. Experienced riders tended to follow more of a racing line through the bends, which resulted in them having to make the most severe changes to their position to avoid a collision. Advanced riders adopted the safest road positions, choosing a position which offered greater visibility through the bends. As a result, they did not need to alter their road position in response to the hazard. Novice riders adopted similar road positions to experienced riders on the left-hand bends, but their road positions were more similar to advanced riders on right-hand bends, suggesting that they were more aware of the risks associated with right bends. Novice riders also adopted a safer position on post-hazard bends whilst the experienced riders failed to alter their behaviour even though they had performed the greatest evasive manoeuvre in response to the hazards. Advanced riders did not need to alter their position as their approach to the bends was already optimal. The results suggest that non-advanced riders were more likely to choose an inappropriate lane position than an inappropriate speed when entering a bend. Furthermore, the findings support the theory that expertise is achieved as a result of relearning, with advanced training overriding ‘bad habits’ gained through experience alone.
Visual and somatosensory signals participate together in providing an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then, we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors, but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smallest closer to the body. The stability of estimation errors across conditions and time suggests the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
Fitts' law is an empirical rule of thumb that predicts the time it takes people, under time pressure, to reach with some pointer a target of width W located at distance D. It has traditionally been assumed that the predictor of movement time must be some mathematical transform of the quotient D/W, called the index of difficulty (ID) of the movement task. We ask about the scale of measurement involved in this independent variable. We show that because there is no such thing as a zero-difficulty movement, the IDs of the literature run on non-ratio scales of measurement. One notable consequence is that, contrary to a widespread belief, the value of the y-intercept of Fitts' law is uninterpretable. To improve the traditional Fitts paradigm, we suggest grounding difficulty on relative target tolerance W/D, which has a physical zero, unlike relative target distance D/W. If no one can explain what is meant by a zero-difficulty movement task, everyone can understand what is meant by a target layout whose relative tolerance W/D is zero, and hence whose relative intolerance 1 − W/D is 1, or 100%. We use the data of Fitts' famous tapping experiment to illustrate these points. Beyond the scale-of-measurement issue, there is reason to doubt that task difficulty is the right object to try to measure in basic research on Fitts' law, since target-layout manipulations have never provided users of the traditional Fitts paradigm with satisfactory control over the variations of the speed and accuracy of movements. We advocate the trade-off paradigm, a recently proposed alternative, which is immune to this criticism.
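To make the quantities concrete, the sketch below contrasts the traditional index of difficulty with the relative-tolerance measure advocated above. The distances, widths, and movement times are made-up illustrative values, not Fitts' data.

import numpy as np

# Illustrative target layouts and made-up movement times (ms)
D = np.array([5.0, 10.0, 20.0, 40.0])        # target distance, cm
W = np.array([2.0, 2.0, 2.0, 2.0])           # target width, cm
MT = np.array([350.0, 450.0, 550.0, 650.0])  # movement time, ms

# Traditional index of difficulty (Fitts, 1954): ID = log2(2D / W)
ID = np.log2(2 * D / W)
b, a = np.polyfit(ID, MT, 1)                 # fit MT ≈ a + b * ID
print(f"traditional fit: MT ≈ {a:.0f} + {b:.0f} * ID")

# Alternative predictor discussed above: relative target tolerance W/D,
# which has a true physical zero (W/D = 0 means zero tolerance).
tolerance = W / D
print("relative tolerance W/D:", np.round(tolerance, 3))
print("relative intolerance 1 - W/D:", np.round(1 - tolerance, 3))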
In previous studies of interference in vibrotactile working memory, subjects were presented with an interfering distractor stimulus during the delay period between the target and probe stimuli in a delayed match-to-sample task. The accuracy of same/different decisions indicated that feature overwriting was the mechanism of interference. However, the distractor was presented late in the delay period and may therefore have interfered with the decision-making process rather than with the maintenance of stored information. The present study varies the timing of distractor onset (either early, in the middle, or late in the delay period) and demonstrates both overwriting and non-overwriting forms of interference.
Emotional stimuli can be processed even when participants perceive them without conscious awareness, but the extent to which unconsciously processed emotional stimuli influence implicit memory after short and long delays is not fully understood. We addressed this issue by measuring a subliminal affective priming effect in Experiment 1 and a long-term priming effect in Experiment 2. In Experiment 1, a flashed fearful or neutral face masked by a scrambled face was presented three times; then a target face (either fearful or neutral) was presented, and participants were asked to make a fearful/neutral judgment. We found that, relative to a neutral prime face (neutral–fear condition), a fearful prime face speeded up participants' reaction to a fearful target (fear–fear condition) when they were not aware of the masked prime face. This response pattern did not apply to the neutral target. In Experiment 2, participants were first presented with masked faces six times during encoding. Three minutes later, they were asked to make a fearful/neutral judgment for the same face with a congruent expression, the same face with an incongruent expression, or a new face. Participants showed a significant priming effect for the fearful faces but not for the neutral faces, regardless of their awareness of the masked faces during encoding. These results provide evidence that unconsciously processed stimuli can enhance emotional memory after both short and long delays, indicating that emotion can enhance memory processing whether the stimuli are encoded consciously or unconsciously.
Studies investigating joint actions have suggested a central role for the putative mirror neuron system (pMNS) because of the close link between perception and action provided by these brain regions. In contrast, our previous functional magnetic resonance imaging (fMRI) experiment demonstrated that the BOLD response of the pMNS does not suggest that it directly integrates observed and executed actions during joint actions. To test whether the pMNS might contribute indirectly to the integration process by sending information to brain areas responsible for this integration (integration network), here we used Granger causality mapping (GCM). We explored the directional information flow between the anterior sites of the pMNS and previously identified integrative brain regions. We found that the left BA44 sent more information than it received to both the integration network (left thalamus, right middle occipital gyrus and cerebellum) and more posterior nodes of the pMNS (BA2). During joint actions, two anatomically separate networks therefore seem effectively connected, with information flowing predominantly from anterior to posterior areas of the brain. These findings suggest that the pMNS is involved indirectly in joint actions by transforming observed and executed actions into a common code and is part of a generative model that could predict the future somatosensory and visual consequences of observed and executed actions in order to overcome otherwise inevitable neural delays.
Motor learning is dependent upon plasticity in motor areas of the brain, but does it occur in isolation, or does it also result in changes to sensory systems? We examined changes to somatosensory function that occur in conjunction with motor learning. We found that even after periods of training as brief as 10 minutes, sensed limb position was altered and the perceptual change persisted for 24 hours. The perceptual change was reflected in subsequent movements: limb movements following learning deviated from the pre-learning trajectory by an amount that did not differ in magnitude from the perceptual shift and was in the same direction. Crucially, the perceptual change was dependent upon motor learning. When the limb was displaced passively such that subjects experienced similar kinematics but without learning, no sensory change was observed. The findings indicate that motor learning affects not only motor areas of the brain but changes sensory function as well.
Keywords: motor learning; sensory plasticity; somatosensory
In visual psychophysics, precise display timing, particularly for brief stimulus presentations, is often required. The aim of this study was to systematically review the commonly applied methods for the computation of stimulus durations in psychophysical experiments and to contrast them with the true luminance signals of stimuli on computer displays.
In a first step, we systematically scanned the citation index Web of Science for studies with experiments with stimulus presentations for brief durations. Articles which appeared between 2003 and 2009 in three different journals were taken into account if they contained experiments with stimuli presented for less than 50 milliseconds. The 79 articles that matched these criteria were reviewed for their method of calculating stimulus durations. For those 75 studies where the method was either given or could be inferred, stimulus durations were calculated by the sum of frames (SOF) method. In a second step, we describe the luminance signal properties of the two monitor technologies which were used in the reviewed studies, namely cathode ray tube (CRT) and liquid crystal display (LCD) monitors. We show that SOF is inappropriate for brief stimulus presentations on both of these technologies. In extreme cases, SOF specifications and true stimulus durations are even unrelated. Furthermore, the luminance signals of the two monitor technologies are so fundamentally different that the duration of briefly presented stimuli cannot be calculated by a single method for both technologies. Statistics over stimulus durations given in the reviewed studies are discussed with respect to different duration calculation methods.
The SOF method for duration specification, which was clearly dominant in the reviewed studies, leads to serious misspecifications, particularly for brief stimulus presentations. We strongly discourage its use for brief stimulus presentations on CRT and LCD monitors.
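As a concrete illustration of the criticism, the sketch below contrasts an SOF specification with what the luminance signal can actually deliver. The refresh rate, phosphor pulse width, and LCD transition times are assumed round numbers for illustration, not measurements from the reviewed studies.

def sof_duration_ms(n_frames, refresh_hz=100.0):
    """Sum-of-frames (SOF) specification: number of frames times frame period."""
    return n_frames * 1000.0 / refresh_hz

# SOF says a 1-frame stimulus at 100 Hz lasts 10 ms...
print("SOF duration:", sof_duration_ms(1), "ms")

# ...but on a CRT each pixel is lit only by a brief phosphor pulse per frame
# (assumed pulse width ~1 ms), so the light is pulsed rather than continuous:
crt_pulse_ms = 1.0
print("time the CRT pixel is actually luminous per frame: about", crt_pulse_ms, "ms")

# On an LCD the pixel ramps between luminance levels with finite rise and fall
# times (assumed values), so onset and offset are smeared rather than crisp:
lcd_rise_ms, lcd_fall_ms = 5.0, 8.0
print("LCD transitions add roughly", lcd_rise_ms + lcd_fall_ms,
      "ms of ramping around the nominal frame duration")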
Several firing patterns experimentally observed in neural populations have been successfully correlated to animal behavior. Population bursting, here regarded as a period of high firing rate followed by a period of quiescence, is typically observed in groups of neurons during behavior. Biophysical membrane-potential models of single-cell bursting involve at least three equations. Extending such models to study the collective behavior of neural populations involves thousands of equations and can be very expensive computationally. For this reason, low-dimensional population models that capture biophysical aspects of networks are needed. The present paper uses a firing-rate model to study mechanisms that trigger and stop transitions between tonic and phasic population firing. These mechanisms are captured through a two-dimensional system, which can potentially be extended to include interactions between different areas of the nervous system with a small number of equations. The typical behavior of midbrain dopaminergic neurons in the rodent is used as an example to illustrate and interpret our results. The model presented here can be used as a building block to study interactions between networks of neurons. This theoretical approach may help contextualize and understand the factors involved in regulating burst firing in populations and how it may modulate distinct aspects of behavior.
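The abstract does not give the model equations, but a generic two-variable firing-rate sketch, with assumed parameters and not the authors' formulation, shows how a low-dimensional system can alternate between high-rate episodes and quiescence: strong recurrent excitation makes the population rate bistable, and a slow adaptation variable carries it back and forth between the two branches.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_population(t_max=5.0, dt=0.0005, w=8.0, g=6.0, I=-1.0,
                        tau_r=0.01, tau_a=0.5):
    """Two-variable firing-rate sketch: population rate r with recurrent
    excitation w and slow adaptation a, integrated with forward Euler."""
    n = int(t_max / dt)
    r = np.zeros(n)
    a = np.zeros(n)
    r[0] = 0.1
    for i in range(n - 1):
        r[i + 1] = r[i] + dt * (-r[i] + sigmoid(w * r[i] - a[i] + I)) / tau_r
        a[i + 1] = a[i] + dt * (-a[i] + g * r[i]) / tau_a
    return r, a

if __name__ == "__main__":
    r, a = simulate_population()
    bursting = r > 0.5                        # crude split into burst vs quiescence
    switches = np.count_nonzero(np.diff(bursting.astype(int)) != 0)
    print(f"fraction of time in the high-rate state: {bursting.mean():.2f}")
    print(f"number of transitions between states over 5 s: {switches}")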
A predictive component can contribute to the command signal for smooth pursuit. This is readily demonstrated by the fact that low frequency sinusoidal target motion can be tracked with zero time delay or even with a small lead. The objective of this study was to characterize the predictive contributions to pursuit tracking more precisely by developing analytical models for predictive smooth pursuit. Subjects tracked a small target moving in two dimensions. In the simplest case, the periodic target motion was composed of the sums of two sinusoidal motions (SS), along both the horizontal and the vertical axes. Motions following the same or similar paths, but having a richer spectral composition, were produced by having the target follow the same path but at a constant speed (CS), and by combining the horizontal SS velocity with the vertical CS velocity and vice versa. Several different quantitative models were evaluated. The predictive contribution to the eye tracking command signal could be modeled as a low-pass filtered target acceleration signal with a time delay. This predictive signal, when combined with retinal image velocity at the same time delay, as in classical models for the initiation of pursuit, gave a good fit to the data. The weighting of the predictive acceleration component was different in different experimental conditions, being largest when target motion was simplest, following the SS velocity profiles.
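The model class described, delayed retinal-slip velocity combined with a delayed, low-pass-filtered target-acceleration signal driving eye acceleration, can be sketched as follows. The gains, delay, filter time constant, and target motion are assumed illustrative values, not the parameters fitted in the study.

import numpy as np

dt = 0.001                                    # time step, s
t = np.arange(0.0, 10.0, dt)
# Illustrative SS-type target: sum of two sinusoids along one axis (made-up values)
target_pos = 5.0 * np.sin(2 * np.pi * 0.4 * t) + 2.0 * np.sin(2 * np.pi * 0.9 * t)
target_vel = np.gradient(target_pos, dt)
target_acc = np.gradient(target_vel, dt)

delay_n = int(0.1 / dt)                       # assumed 100 ms sensorimotor delay
tau_lp, g_vel, g_acc = 0.15, 10.0, 0.5        # assumed filter constant and gains

# First-order low-pass filter of target acceleration (the predictive input)
acc_lp = np.zeros_like(target_acc)
for i in range(1, len(t)):
    acc_lp[i] = acc_lp[i - 1] + dt * (target_acc[i] - acc_lp[i - 1]) / tau_lp

# Eye acceleration is driven by delayed retinal slip plus the delayed predictive term
eye_vel = np.zeros_like(t)
for i in range(1, len(t)):
    j = i - delay_n
    slip = target_vel[j] - eye_vel[j] if j >= 0 else 0.0   # retinal image velocity
    pred = acc_lp[j] if j >= 0 else 0.0                    # predictive component
    eye_vel[i] = eye_vel[i - 1] + dt * (g_vel * slip + g_acc * pred)

rms_err = np.sqrt(np.mean((eye_vel[2000:] - target_vel[2000:]) ** 2))
print(f"RMS pursuit velocity error after 2 s: {rms_err:.2f} deg/s")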
Adaptation to deterministic force perturbations during reaching movements has been studied extensively over the last few decades. Here, we use this methodology to explore the ability of the brain to adapt to a delayed velocity-dependent force field. Two groups of subjects performed a standard reaching experiment under a velocity-dependent force field. The force was either proportional to the current velocity (Control) or lagged it by 50 ms (Test). The results demonstrate clear adaptation to the delayed force perturbations. Deviations from a straight line during catch trials were shifted in time compared to post-adaptation to a non-delayed velocity-dependent field (Control), indicating an expectation of the delayed force field. Adaptation to force fields is considered to be a process in which the motor system predicts the forces to be expected based on the state that a limb will assume in response to motor commands. This study demonstrates for the first time that the temporal window of this prediction need not be fixed. This is relevant to the ability of the adaptive mechanisms to compensate for variability in the transmission of information across the sensory-motor system.
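A minimal sketch of the perturbation contrast, with an assumed field gain and an assumed bell-shaped reach velocity profile, shows how the 50 ms lag simply shifts the force profile in time relative to the non-delayed field:

import numpy as np

def field_force(velocity, t_index, dt=0.001, gain=13.0, delay_s=0.050, delayed=True):
    """Velocity-dependent force field: with delayed=True the force depends on
    the hand velocity 50 ms in the past (Test); otherwise on the current
    velocity (Control). Gain and field orientation are assumed values."""
    lag = int(delay_s / dt) if delayed else 0
    j = max(t_index - lag, 0)
    vx, vy = velocity[j]
    return gain * np.array([-vy, vx])          # velocity rotated 90 degrees and scaled

if __name__ == "__main__":
    dt = 0.001
    t = np.arange(0.0, 0.6, dt)
    speed = np.sin(np.pi * t / t[-1]) ** 2 * 0.5   # assumed bell-shaped forward speed, m/s
    vel = np.column_stack([np.zeros_like(speed), speed])
    f_now = np.array([field_force(vel, i, dt, delayed=False) for i in range(len(t))])
    f_lag = np.array([field_force(vel, i, dt, delayed=True) for i in range(len(t))])
    shift = (np.argmax(np.abs(f_lag[:, 0])) - np.argmax(np.abs(f_now[:, 0]))) * dt
    print(f"peak lateral force occurs {shift * 1000:.0f} ms later in the delayed field")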
Relatively few studies have documented how proprioception varies across the workspace of the human arm. Here we examined proprioceptive function across a horizontal planar workspace, using a new method that avoids active movement and interactions with other sensory modalities. We systematically mapped both proprioceptive acuity (sensitivity to hand position change) and bias (perceived location of the hand) across a horizontal-plane 2D workspace. Proprioception of both the left and right arms was tested at nine workspace locations and in two orthogonal directions (left-right and forward-backward). Subjects made repeated judgments about the position of their hand with respect to a remembered proprioceptive reference position, while grasping the handle of a robotic linkage that passively moved their hand to each judgment location. To rule out the possibility that the memory component of the proprioceptive testing procedure may have influenced our results, we repeated the procedure in a second experiment using a persistent visual reference position. Both methods yielded qualitatively similar findings. Proprioception is not uniform across the workspace. Acuity was greater for limb configurations in which the hand was closer to the body, and was greater in the forward-backward direction than in the left-right direction. A robust difference in proprioceptive bias was observed across both experiments: at all workspace locations, the left hand was perceived to be to the left of its actual position, and the right hand was perceived to be to the right of its actual position. Finally, bias was smaller for hand positions closer to the body. The results of this study provide a systematic map of proprioceptive acuity and bias across the workspace of the limb that may be used to augment computational models of sensory-motor control and to inform clinical assessment of sensory function in patients with sensory-motor deficits.
During locomotion, vision is used to perceive environmental obstacles that could potentially threaten stability; locomotor action is then modified to avoid these obstacles. Various factors such as lighting and texture can make these environmental obstacles appear larger or smaller than their actual size. It is unclear if gait is adapted based on the actual or perceived height of these environmental obstacles. The purposes of this study were to determine if visually guided action is scaled to visual perception, and to determine if task experience influenced how action is scaled to perception.
Participants judged the height of two obstacles before and after stepping over each of them 50 times. An illusion made obstacle one appear larger than obstacle two, even though they were identical in size. The influence of task experience was examined by comparing the perception-action relationship during the first five obstacle crossings (1–5) with the last five obstacle crossings (46–50). In the first set of trials, obstacle one was perceived to be 2.0 cm larger than obstacle two and subjects stepped 2.7 cm higher over obstacle one. After walking over the obstacle 50 times, the toe elevation was not different between obstacles, but obstacle one was still perceived as 2.4 cm larger.
There was evidence of locomotor adaptation, but no evidence of perceptual adaptation with experience. These findings add to research demonstrating that while the motor system can be influenced by perception, it can also operate independently of perception.
The power law provides an efficient description of the amplitude spectra of natural scenes. Psychophysical studies have shown that the forms of the amplitude spectra are clearly related to human visual performance, indicating that the statistical parameters of natural scenes are represented in the nervous system. However, the underlying neuronal computation that accounts for the perception of these natural image statistics has not been thoroughly studied. We propose a theoretical framework for neuronal encoding and decoding of the image statistics, based on hypothesized population activities of spatial-frequency-selective neurons of the kind observed in the early visual cortex. The model predicts that frequency-tuned neurons have asymmetric tuning curves as functions of the amplitude-spectrum falloff. To investigate the ability of this neural population to encode the statistical parameters of input images, we analyze the Fisher information of the stochastic population code, relating it to the psychophysically measured human ability to discriminate natural image statistics. The nature of the discrimination thresholds suggested by the computational model is consistent with experimental data from previous studies. Of particular interest, a reported qualitative disparity between performance in the fovea and parafovea can be explained by a difference in the distribution of neurons' preferred frequencies in the current model. The threshold shows a peak at a small falloff parameter when the neuronal preferred spatial frequencies are narrowly distributed, whereas the threshold peak vanishes for a neural population with a more broadly distributed frequency preference. These results demonstrate that the distributional properties of neuronal stimulus preferences can play a crucial role in linking microscopic neurophysiological phenomena and macroscopic human behaviors.
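The decoding analysis described, converting the Fisher information of a stochastic population code into a discrimination threshold for the spectral falloff, can be sketched with a toy encoding model. The assumption that each neuron's mean rate tracks the normalized stimulus amplitude at its preferred spatial frequency, and all rates, frequency ranges, and falloff values, are illustrative stand-ins rather than the model used in the study.

import numpy as np

def fisher_info(alpha, pref_freqs, base_rate=20.0, eps=0.5, dalpha=1e-4):
    """Fisher information about the falloff alpha for independent Poisson
    neurons whose mean rates follow a toy encoding of a 1/f^alpha spectrum."""
    def rates(a):
        amp = pref_freqs ** (-a)               # spectrum amplitude at each preferred frequency
        amp = amp / amp.sum()                  # normalize total drive
        return base_rate * amp + eps           # eps keeps rates strictly positive
    f0, f1 = rates(alpha), rates(alpha + dalpha)
    df = (f1 - f0) / dalpha                    # numerical derivative with respect to alpha
    return np.sum(df ** 2 / f0)                # Poisson-population Fisher information

if __name__ == "__main__":
    narrow = np.linspace(2.0, 4.0, 50)         # narrowly distributed preferred frequencies
    broad = np.exp(np.linspace(np.log(0.5), np.log(16.0), 50))  # broadly distributed
    for a in np.linspace(0.2, 2.0, 7):
        thr_narrow = 1.0 / np.sqrt(fisher_info(a, narrow))
        thr_broad = 1.0 / np.sqrt(fisher_info(a, broad))
        print(f"alpha={a:.1f}  threshold(narrow)={thr_narrow:.3f}  "
              f"threshold(broad)={thr_broad:.3f}")

The discrimination threshold is taken here as the inverse square root of the Fisher information, the usual Cramér–Rao-style bound for an efficient decoder.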
When exposed to a continuous directional discrepancy between movements of a visible hand cursor and the actual hand (visuomotor rotation), subjects adapt their reaching movements so that the cursor is brought to the target. Abrupt removal of the discrepancy after training induces reaching error in the direction opposite to the original discrepancy, which is called an aftereffect. Previous studies have shown that training with gradually increasing visuomotor rotation results in a larger aftereffect than with a suddenly increasing one. Although the aftereffect difference implies a difference in the learning process, it is still unclear whether the learned visuomotor transformations are qualitatively different between the training conditions.
We examined the qualitative changes in the visuomotor transformation after learning of the sudden and gradual visuomotor rotations. Learning of the sudden rotation led to a significant increase in the reaction time for arm-movement initiation, after which the reaching error decreased, indicating that this learning is associated with an increased computational load in motor preparation (planning). In contrast, learning of the gradual rotation did not change the reaction time but resulted in an increase in the gain of feedback control, suggesting that online adjustment of the reach contributes to learning of the gradual rotation. When the online cursor feedback was eliminated during learning of the gradual rotation, the reaction time increased, indicating that additional computations are involved in the learning of the gradual rotation.
The results suggest that the change in the motor planning and online feedback adjustment of the movement are involved in the learning of the visuomotor rotation. The contributions of those computations to the learning are flexibly modulated according to the visual environment. Such multiple learning strategies would be required for reaching adaptation within a short training period.
A central objective in neuroscience is to understand how neurons interact. Such functional interactions have been estimated using signals recorded with different techniques and, consequently, different temporal resolutions. For example, spike data often have sub-millisecond resolution, while some imaging techniques may have a resolution of many seconds. Here we use multi-electrode spike recordings to ask how similar functional connectivity inferred from slower-timescale signals is to that inferred from fast-timescale signals. We find that functional connectivity is relatively robust to low-pass filtering, dropping by about 10% when low-pass filtering at 10 Hz and by about 50% when low-pass filtering down to about 1 Hz, and that estimates are robust to high levels of additive noise. Moreover, there is a weak correlation for physiological filters such as hemodynamic or Ca2+ impulse responses and for filters based on local field potentials. We address the origin of these correlations using simulation techniques and find evidence that the similarity between functional connectivity estimated across timescales is due to processes that do not depend on fast pair-wise interactions alone. Rather, it appears that connectivity on multiple timescales or common input related to stimuli or movement drives the observed correlations. Despite this qualification, our results suggest that techniques with intermediate temporal resolution may yield good estimates of the functional connections between individual neurons.
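A much-simplified sketch of the comparison, using peak cross-correlation of a toy coupled spike-train pair as a crude connectivity proxy and a first-order low-pass filter as a stand-in for physiological impulse responses (rates, coupling strength, and cutoffs are all assumed values), illustrates how one can ask whether a connectivity estimate survives low-pass filtering:

import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                                     # 1 ms bins
n = 60_000                                     # 60 s of simulated data
# Toy coupled pair: neuron B's firing probability is elevated for 5 ms after
# each spike of neuron A (rates and coupling strength are assumed values).
spikes_a = (rng.random(n) < 5 * dt).astype(float)           # ~5 Hz Poisson train
drive = np.convolve(spikes_a, np.ones(5), mode="full")[:n]
spikes_b = (rng.random(n) < 3 * dt + 0.02 * drive).astype(float)

def lowpass(x, cutoff_hz):
    """First-order low-pass filter, a crude stand-in for hemodynamic/Ca2+ kernels."""
    alpha = dt / (dt + 1.0 / (2 * np.pi * cutoff_hz))
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def connectivity(x, y, max_lag=20):
    """Crude functional-connectivity proxy: peak normalized cross-correlation
    over lags of 0..max_lag bins (signal y lagging signal x)."""
    x = x - x.mean()
    y = y - y.mean()
    best = 0.0
    for lag in range(max_lag + 1):
        c = np.dot(x[:len(x) - lag], y[lag:]) / ((len(x) - lag) * x.std() * y.std())
        best = max(best, c)
    return best

print(f"connectivity from raw 1 ms bins: {connectivity(spikes_a, spikes_b):.4f}")
for cutoff in (100.0, 10.0, 1.0):
    filt = connectivity(lowpass(spikes_a, cutoff), lowpass(spikes_b, cutoff))
    print(f"after low-pass filtering at {cutoff:>5.1f} Hz: {filt:.4f}")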
Motor lateralization in humans has primarily been characterized as “handedness”, resulting in the view that one arm-hemisphere system is specialized for all aspects of movement while the other is simply a weaker analogue. We have proposed an alternative view: that motor lateralization reflects proficiency of each arm for complementary functions, arising from a specialization of each hemisphere for distinct movement control mechanisms. However, before this idea of hemispheric specialization can be accepted, it is necessary to precisely identify these distinct, lateralized mechanisms. Here we show in right-handers that dominant arm movements rely on predictive mechanisms that anticipate and account for the dynamic properties of the arm, while the non-dominant arm optimizes positional stability by specifying impedance around equilibrium positions. In a targeted-reaching paradigm, we covertly and occasionally shifted the hand's starting location either orthogonal to or collinear with a particular direction of movement. On trials in which the start positions were shifted collinearly with the movement direction, we did not observe any strong interlimb differences. However, on trials in which start positions were shifted orthogonally, the dominant arm largely maintained the direction and straightness of its trajectory, while the non-dominant arm deviated towards the previously learned goal position, consistent with the hypothesized control specialization of each arm-hemisphere system. These results bring together two competing theories about mechanisms of movement control and suggest that they coexist in the brain, in different hemispheres. These findings also question the traditional view of handedness, because specialized mechanisms for each arm-hemisphere system were identified within a group of right-handers. It is likely that such hemispheric specialization emerged to accommodate increasing motor complexity during evolution.
Learning is often understood as an organism's gradual acquisition of the association between a given sensory stimulus and the correct motor response. Mathematically, this corresponds to regressing a mapping between the set of observations and the set of actions. Recently, however, it has been shown both in cognitive and motor neuroscience that humans are not only able to learn particular stimulus-response mappings, but are also able to extract abstract structural invariants that facilitate generalization to novel tasks. Here we show how such structure learning can enhance facilitation in a sensorimotor association task performed by human subjects. Using regression and reinforcement learning models we show that the observed facilitation cannot be explained by these basic models of learning stimulus-response associations. We show, however, that the observed data can be explained by a hierarchical Bayesian model that performs structure learning. In line with previous results from cognitive tasks, this suggests that hierarchical Bayesian inference might provide a common framework to explain both the learning of specific stimulus-response associations and the learning of abstract structures that are shared by different task environments.
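The advantage conferred by structure learning can be illustrated with a toy example that is neither the authors' task nor their hierarchical Bayesian model: if a learner has extracted the structural invariant that every new mapping belongs to a one-parameter family (here, planar rotations, an assumed stand-in), it needs far fewer trials on a novel task than a learner that must regress an unconstrained mapping.

import numpy as np

rng = np.random.default_rng(1)

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def new_task_error(n_trials, knows_structure, noise=0.1, n_rep=500):
    """Average prediction error on a novel task (a random 2D rotation) after
    n_trials noisy stimulus-response observations, for a learner that fits a
    single rotation angle versus one that fits an unconstrained 2x2 map."""
    errs = []
    for _ in range(n_rep):
        A = rotation(rng.uniform(-np.pi, np.pi))              # the new task's true mapping
        X = rng.normal(size=(n_trials, 2))                    # observed stimuli
        Y = X @ A.T + noise * rng.normal(size=(n_trials, 2))  # noisy observed responses
        if knows_structure:
            # structure-aware learner: estimate only the rotation angle
            num = np.sum(X[:, 0] * Y[:, 1] - X[:, 1] * Y[:, 0])
            den = np.sum(X[:, 0] * Y[:, 0] + X[:, 1] * Y[:, 1])
            A_hat = rotation(np.arctan2(num, den))
        else:
            # flat learner: unconstrained least-squares estimate of the mapping
            A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
        x_test = rng.normal(size=2)
        errs.append(np.linalg.norm(A @ x_test - A_hat @ x_test))
    return float(np.mean(errs))

if __name__ == "__main__":
    for n in (1, 2, 5, 20):
        print(f"{n:>2} trials: structure-aware error {new_task_error(n, True):.3f}"
              f" vs flat error {new_task_error(n, False):.3f}")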