We present in outline a theory of sensorimotor control based on dynamic primitives, which we define as attractors. To account for the broad class of human interactive behaviors—especially tool use—we propose three distinct primitives: submovements, oscillations and mechanical impedances, the latter necessary for interaction with objects. Due to fundamental features of the neuromuscular system, most notably its slow response, we argue that encoding in terms of parameterized primitives may be an essential simplification required for learning, performance, and retention of complex skills. Primitives may be combined simultaneously and sequentially to produce observable forces and motions. This may be achieved by defining a virtual trajectory composed of submovements and/or oscillations interacting with impedances. Identifying primitives requires care: in principle, overlapping submovements would be sufficient to compose all observed movements, but biological evidence shows that oscillations are a distinct primitive. Conversely, we suggest that kinematic synergies, frequently discussed as primitives of complex actions, may be an emergent consequence of neuromuscular impedance. To illustrate how these dynamic primitives may account for complex actions, we briefly review three types of interactive behaviors: constrained motion, impact tasks, and manipulation of dynamic objects.
Discrete; submovement; rhythmic; oscillation; impedance; primitive
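As an illustration of how submovements might superimpose into a virtual trajectory, here is a minimal sketch using minimum-jerk speed profiles; all parameters (durations, amplitudes, timing) are hypothetical and not taken from the article:

```python
import numpy as np

def submovement_speed(t, t0, dur, amp):
    """Minimum-jerk speed profile of one submovement of extent `amp`."""
    s = np.clip((t - t0) / dur, 0.0, 1.0)          # normalized time in [0, 1]
    return amp * 30.0 * s**2 * (1.0 - s)**2 / dur  # bell-shaped, zero at ends

t = np.linspace(0.0, 1.5, 1501)                    # time axis (s)
# Two overlapping submovements blend into one smooth virtual trajectory.
v = submovement_speed(t, 0.0, 0.6, 0.1) + submovement_speed(t, 0.4, 0.6, 0.1)
x = np.cumsum(v) * (t[1] - t[0])                   # integrate speed to position
```

The summed speed trace is unimodal or bimodal depending on overlap, while the position trace remains smooth; the total displacement is the sum of the two submovement extents.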
Walknet comprises an artificial neural network that allows for the simulation of a considerable body of behavioral data obtained from walking and standing stick insects. It has been tested in kinematic and dynamic simulations as well as on a number of six-legged robots. Over the years, various expansions of this network have been developed, leading to different versions of Walknet. This review summarizes the most important biological findings described by Walknet and how they can be simulated. Walknet shows how a number of properties observed in insects may emerge from a decentralized architecture. Examples are the continuum of so-called “gaits”; coordination of up to 18 leg joints during stance when walking forward or backward over uneven surfaces and negotiating curves; dealing with leg loss; and the ability to follow motion trajectories without explicit precalculation. The different Walknet versions are compared with other approaches describing insect-inspired hexapod walking. Finally, we briefly address the ability of this decentralized reactive controller to form the basis for the simulation of higher-level cognitive faculties exceeding the capabilities of insects.
Insect locomotion; Motor control; Decentralized architecture
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
Navigation; 3D perception; Virtual reality; Stereopsis; Motion parallax; Computational modelling
Many auditory neurons possess low-threshold potassium currents (IKLT) that enhance their responsiveness to rapid and coincident inputs. We present recordings from gerbil medial superior olivary (MSO) neurons in vitro and modeling results that illustrate how IKLT improves the detection of brief signals, of weak signals in noise, and of the coincidence of signals (as needed for sound localization). We quantify the enhancing effect of IKLT on temporal processing with several measures: signal-to-noise ratio (SNR), reverse correlation or spike-triggered averaging of input currents, and inter-aural time difference (ITD) tuning curves. To characterize how IKLT, which activates below spike threshold, influences a neuron’s voltage rise toward threshold, i.e., how it filters the inputs, we focus first on the response to weak and noisy signals. Cells and models were stimulated with a computer-generated steady barrage of random inputs, mimicking weak synaptic conductance transients (the “noise”), together with a larger but still subthreshold postsynaptic conductance, EPSG (the “signal”). Reduction of IKLT decreased the SNR, mainly due to an increase in spontaneous firing (more “false positives”). The spike-triggered reverse correlation indicated that IKLT shortened the integration time for spike generation. IKLT also heightened the model’s timing selectivity for coincidence detection of simulated binaural inputs. Further, ITD tuning was shifted in favor of a slope code rather than a place code by precise and rapid inhibition onto MSO cells (Brand et al. 2002). In several ways, low-threshold outward currents are seen to shape integration of weak and strong signals in auditory neurons.
In this paper, we present a novel method for the identification of synchronization effects in multichannel electrocorticograms (ECoG). Based on autoregressive modeling, we define a dependency measure termed the extrinsic-to-intrinsic power ratio (EIPR), which quantifies directed coupling effects in the time domain. A dynamic input channel selection algorithm ensures reliable estimation of the model parameters despite the strong spatial correlation among the large number of ECoG channels involved. We compare EIPR to the partial directed coherence, show its ability to indicate Granger causality, and successfully validate a signal model. Applying EIPR to ictal ECoG data from patients suffering from temporal lobe epilepsy allows us to identify the electrodes of the seizure onset zone. The results obtained with the proposed method are in good accordance with the clinical findings.
Epilepsy; ECoG; Partial directed coherence; Synchronization; Dynamic input channel selection
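The EIPR itself is not reproduced here, but the underlying idea, that directed coupling can be read from how much an extrinsic channel's past improves an intrinsic autoregressive fit, can be sketched with a toy bivariate Granger-style test on synthetic data (coefficients and series length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x drives y

def residual_var(target, predictors):
    """One-step AR prediction by least squares; returns residual variance."""
    X = np.column_stack([p[:-1] for p in predictors])
    b, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    return np.var(target[1:] - X @ b)

# Granger-style index: log ratio of the intrinsic (own-past) model's
# residual variance to that of the model including the other channel.
g_xy = np.log(residual_var(y, [y]) / residual_var(y, [y, x]))  # x -> y
g_yx = np.log(residual_var(x, [x]) / residual_var(x, [x, y]))  # y -> x
```

With this construction the x-to-y index is clearly positive while the reverse index stays near zero, recovering the imposed direction of coupling.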
The superior colliculus (SC) integrates relevant sensory information (visual, auditory, somatosensory) from several cortical and subcortical structures to program orientation responses to external events. However, this capacity is not present at birth and is acquired only through interactions with cross-modal events during maturation. Mathematical models provide a quantitative framework, valuable in helping to clarify the specific neural mechanisms underlying the maturation of multisensory integration in the SC. We extended a neural network model of the adult SC (Cuppini et al. 2010) to describe the development of this phenomenon from an immature state, based on known or suspected anatomy and physiology, in which: 1) AES afferents are present but weak, 2) responses are driven by non-AES afferents, and 3) the visual inputs have only marginal spatial tuning. Sensory experience was modelled by repeatedly presenting modality-specific and cross-modal stimuli. Synapses in the network were modified by simple Hebbian learning rules. As a consequence of this exposure, 1) receptive fields shrank and came into spatial register, and 2) SC neurons gained the characteristic adult integrative properties: enhancement, depression, and inverse effectiveness. Importantly, the unique architecture of the model guided the development so that integration became dependent on the relationship between the cortical input and the SC. Manipulating the statistics of experience during development changed the integrative profiles of the neurons, and the results matched those of physiological studies well.
Visual-acoustic neurons; Anterior ectosylvian sulcus; Enhancement; Hebb rule; Learning mechanisms; Inverse effectiveness principle; Neural network modeling
We have suggested that the mirror-neuron system might be usefully understood as implementing Bayes-optimal perception of actions emitted by oneself or others. To substantiate this claim, we present neuronal simulations that show the same representations can prescribe motor behavior and encode motor intentions during action–observation. These simulations are based on the free-energy formulation of active inference, which is formally related to predictive coding. In this scheme, (generalised) states of the world are represented as trajectories. When these states include motor trajectories they implicitly entail intentions (future motor states). Optimizing the representation of these intentions enables predictive coding in a prospective sense. Crucially, the same generative models used to make predictions can be deployed to predict the actions of self or others by simply changing the bias or precision (i.e. attention) afforded to proprioceptive signals. We illustrate these points with simulations of handwriting that demonstrate neuronally plausible generation and recognition of itinerant (wandering) motor trajectories. We then use the same simulations to produce synthetic electrophysiological responses to violations of intentional expectations. Our results affirm that a Bayes-optimal approach provides a principled framework, which accommodates current thinking about the mirror-neuron system. Furthermore, it endorses the general formulation of action as active inference.
Action–observation; Mirror-neuron system; Inference; Precision; Free-energy; Perception; Generative models; Predictive coding
The acoustic startle reflex (ASR), a defensive response, is a contraction of the skeletal and facial muscles in response to an abrupt, intense (>80 dB) auditory stimulus; it has been extensively studied in rats and humans. Prepulse inhibition (PPI) of the ASR is the normal suppression of the startle reflex when an intense stimulus is preceded by a weak, non-startling prestimulus. PPI, a measure of sensorimotor gating, is impaired in various neuropsychiatric disorders, including schizophrenia, and is modulated by cognitive and emotional contexts such as fear and attention. We have modeled the fear modulation of PPI of the ASR based on its anatomical substrates, taking into account data from behaving rats and humans. The model replicates the principal features of both phenomena and predicts underlying neural mechanisms. In addition, the model yields testable predictions.
Acoustic startle reflex; Prepulse inhibition; Fear modulation; PPI computational model
Habituation is a generic property of the neural response to repeated stimuli. Its strength often increases as inter-stimulus relaxation periods decrease. We propose a simple, broadly applicable control structure that enables a neural mass model of the evoked EEG response to exhibit habituated behavior. A key motivation for this investigation is the ongoing effort to develop model-based reconstruction of multimodal functional neuroimaging data. The control structure proposed here is illustrated and validated in the context of a biophysical neural mass model, developed by Riera et al. (Hum Brain Mapp 27(11):896–914, 2006; 28(4):335–354, 2007), and of simplifications thereof, using data from rat EEG responses to medial nerve stimuli presented at frequencies from 1 to 8 Hz. Performance was tested by predicting both the response to the next stimulus based on the current one and the responses to continued stimulus trains over 4-s time intervals based on the first stimulus in the interval, with similar success statistics. These tests demonstrate the ability of simple generative models to capture key features of the evoked response, including habituation.
Neural mass model; Evoked response; Habituation prediction; Low order model
Short-term synaptic plasticity acts as a time- and firing rate-dependent filter that mediates the transmission of information across synapses. In the avian auditory brainstem, specific forms of plasticity are expressed at different terminals of the same auditory nerve fibers and contribute to the divergence of acoustic timing and intensity information. To identify key differences in the plasticity properties, we made patch-clamp recordings from neurons in the cochlear nucleus responsible for intensity coding, nucleus angularis, and measured the time course of the recovery of excitatory postsynaptic currents following short-term synaptic depression. These synaptic responses showed a very rapid recovery, following a bi-exponential time course with a fast time constant of ~40 ms and a dependence on the presynaptic activity levels, resulting in a crossing over of the recovery trajectories following high-rate versus low-rate stimulation trains. We also show that the recorded recovery in the intensity pathway differs from similar recordings in the timing pathway, specifically the cochlear nucleus magnocellularis, in two ways: (1) a fast recovery that was not due to recovery from postsynaptic receptor desensitization and (2) a recovery trajectory that was characterized by a non-monotonic bump that may be due in part to facilitation mechanisms more prevalent in the intensity pathway. We tested whether a previously proposed model of synaptic transmission based on vesicle depletion and sequential steps of vesicle replenishment could account for the recovery responses, and found it was insufficient, suggesting an activity-dependent feedback mechanism is present. We propose that the rapid recovery following depression allows improved coding of natural auditory signals that often consist of sound bursts separated by short gaps.
Auditory nerve; Cochlear nucleus; Angularis; Magnocellularis; Short-term depression; Short-term facilitation; Vesicle cycling
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made by optimizing a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of the uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem.
Inverse optimization; Optimization; Uniqueness Theorem; Cost function; Grasping; Force sharing
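A minimal forward counterpart of the kind of problem being inverted here: minimizing an additive cubic cost of normalized finger forces under a linear total-force constraint. The maximal-force values and the required total force below are hypothetical, chosen only to illustrate the structure:

```python
import numpy as np

# Hypothetical maximal forces of the index..little fingers (N) and a
# required total force; neither value is taken from the article.
f_max = np.array([60.0, 50.0, 35.0, 25.0])
F_total = 40.0

# Minimize sum_i (f_i / f_max_i)^3 subject to sum_i f_i = F_total.
# The Lagrange conditions 3 f_i^2 / f_max_i^3 = const imply
# f_i proportional to f_max_i^(3/2).
f = F_total * f_max**1.5 / np.sum(f_max**1.5)

marginal_cost = 3.0 * f**2 / f_max**3  # equal across fingers at the optimum
```

Because the cost is additive and strictly convex on the constraint set, the optimum is unique; the inverse problem asks whether such a cost can be recovered from observed force-sharing data.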
A biologically detailed model of the binaural avian nucleus laminaris is constructed, as a two-dimensional array of multicompartment, conductance-based neurons, along tonotopic and interaural time delay (ITD) axes. The model is based primarily on data from chick nucleus laminaris. Typical chick-like parameters support ITD discrimination up to 2 kHz, and enhancements for the barn owl support ITD discrimination up to 6 kHz. The dendritic length gradient of NL is explained concisely. The response to binaural out-of-phase input is suppressed well below the response to monaural input (without any spontaneous activity on the opposite side), implicating active potassium channels as crucial to good ITD discrimination.
Animals, including humans, use interaural time differences (ITDs) that arise from different sound path lengths to the two ears as a cue of horizontal sound source location. The nature of the neural code for ITD is still controversial. Current models differentiate between two population codes: either a map-like rate-place code of ITD along an array of neurons, consistent with a large body of data in the barn owl, or a population rate code, consistent with data from small mammals. Recently, it was proposed that these different codes reflect optimal coding strategies that depend on head size and sound frequency. The chicken makes an excellent test case of this proposal because its physical prerequisites are similar to those of small mammals, yet it shares a more recent common ancestry with the owl. We show here that, as in the barn owl, the brainstem nucleus laminaris in mature chickens displayed the major features of a place code of ITD. ITD was topographically represented in the maximal responses of neurons along each isofrequency band, covering approximately the contralateral acoustic hemisphere. Furthermore, the represented ITD range appeared to change with frequency, consistent with a pressure gradient receiver mechanism in the avian middle ear. At very low frequencies, below 400 Hz, maximal neural responses were symmetrically distributed around zero ITD and it remained unclear whether there was a topographic representation. These findings do not agree with the above predictions for optimal coding and thus revive the discussion as to what determines the neural coding strategies for ITDs.
Auditory; Hearing; Sound localization; Sensory
This Prospects presents the problems that must be solved by the vertebrate nervous system in the process of sensorimotor integration and motor control. The concepts of efference copy and inverse model are defined, and multiple biological mechanisms are described, including those that form the basis of integration, extrapolation, and comparison/cancellation operations. Open questions for future research include the biological basis of continuous and distributed vs. modular control, and somatosensory-motor coordination.
The outer retina removes the first-order correlation, the background light level, and thus more efficiently transmits contrast. This removal is accomplished by negative feedback from horizontal cell to photoreceptors. However, the optimal feedback gain to maximize the contrast sensitivity and spatial resolution is not known. The objective of this study was to determine, from the known structure of the outer retina, the synaptic gains that optimize the response to spatial and temporal contrast within natural images. We modeled the outer retina as a continuous 2D extension of the discrete 1D model of Yagi et al. (Proc Int Joint Conf Neural Netw 1: 787–789, 1989). We determined the spatio-temporal impulse response of the model using small-signal analysis, assuming that the stimulus did not perturb the resting state of the feedback system. In order to maximize the efficiency of the feedback system, we derived the relationships between time constants, space constants, and synaptic gains that give the fastest temporal adaptation and the highest spatial resolution of the photoreceptor input to bipolar cells. We found that feedback which directly modulated photoreceptor calcium channel activation, as opposed to changing photoreceptor voltage, provides faster adaptation to light onset and higher spatial resolution. The optimal solution suggests that the feedback gain from horizontal cells to photoreceptors should be ~0.5. The model can be extended to retinas that have two or more horizontal cell networks with different space constants. The theoretical predictions closely match experimental observations of outer retinal function.
Lateral inhibition; Feedback; Horizontal cell; Network; Gain
Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little-known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates—slow (45°/s) and fast (180°/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one’s overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. 
This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing “standard” model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model.
OVAR; Model; Vestibular; Perception
Synchronously spiking neurons have been observed in the cerebral cortex and the hippocampus. In computer models, synchronous spike volleys may be propagated across appropriately connected neuron populations. However, it is unclear how the appropriate synaptic connectivity is set up during development and maintained during adult learning. We performed computer simulations to investigate the influence of temporally asymmetric Hebbian synaptic plasticity on the propagation of spike volleys. In addition to feedforward connections, recurrent connections were included between and within neuron populations, and spike transmission delays varied due to axonal, synaptic and dendritic transmission. We found that repeated presentations of input volleys decreased the synaptic conductances of intragroup and feedback connections, while synaptic conductances of feedforward connections with short delays became stronger than those of connections with longer delays. These adaptations led to the synchronization of spike volleys as they propagated across neuron populations. These findings suggest that temporally asymmetric Hebbian learning may enhance synchronized spiking within small populations of neurons in cortical and hippocampal areas, and that familiar stimuli may produce synchronized spike volleys that are rapidly propagated across neural tissue.
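A minimal sketch of the temporally asymmetric Hebbian (STDP) window assumed in such simulations; the amplitudes and time constant below are generic textbook values, not those used in the study:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Temporally asymmetric Hebbian window.
    dt_ms = t_post - t_pre: pre-before-post (> 0) potentiates,
    post-before-pre (< 0) depresses."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_ms),
                    -a_minus * np.exp(dt / tau_ms))

# Feedforward connections with short pre-to-post delays gain the most;
# feedback connections (post fires before pre) are weakened.
dw = stdp_dw(np.array([-10.0, 2.0, 20.0]))
```

The asymmetry of this window is what selectively strengthens short-delay feedforward connections and weakens feedback ones in the scenario described above.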
Properties of neural controllers for closed-loop sensorimotor behavior can be inferred with system identification. Under the standard paradigm, the closed-loop system is perturbed (input), measurements are taken (output), and the relationship between input and output reveals features of the system under study. Here we show that under common assumptions made about such systems (e.g. the system implements optimal control with a penalty on mechanical, but not sensory, states) important aspects of the neural controller (its zeros mask the modes of the sensors) remain hidden from standard system identification techniques. Only by perturbing or measuring the closed-loop system “between” the sensor and the control can these features be exposed with closed-loop system identification methods; while uncommon, there exist noninvasive techniques such as galvanic vestibular stimulation that perturb between sensor and controller in this way.
Closed-loop system identification; Optimal motor control; Sensory dynamics; Pole-zero cancellation
Sound localization requires comparison between the inputs to the left and right ears. One important aspect of this comparison is the differences in arrival time to each side, also called interaural time difference (ITD). A prevalent model of ITD detection, consisting of delay lines and coincidence-detector neurons, was proposed by Jeffress (J Comp Physiol Psychol 41:35–39, 1948). As an extension of the Jeffress model, the process of detecting and encoding ITD has been compared to an effective cross-correlation between the input signals to the two ears. Because the cochlea performs a spectrotemporal decomposition of the input signal, this cross-correlation takes place over narrow frequency bands. Since the cochlear tonotopy is arranged in series, sounds of different frequencies will trigger neural activity with different temporal delays. Thus, the matching of the frequency tuning of the left and right inputs to the cross-correlator units becomes a ‘timing’ issue. These properties of auditory transduction gave theoretical support to an alternative model of ITD detection based on a bilateral mismatch in frequency tuning, called the ‘stereausis’ model. Here we first review the current literature on the owl’s nucleus laminaris, the equivalent to the medial superior olive of mammals, which is the site where ITD is detected. Subsequently, we use reverse correlation analysis and stimulation with uncorrelated sounds to extract the effective monaural inputs to the cross-correlator neurons. We show that when the left and right inputs to the cross-correlators are defined in this manner, the computation performed by coincidence-detector neurons satisfies conditions of cross-correlation theory. We also show that the spectra of left and right inputs are matched, which is consistent with predictions made by the classic model put forth by Jeffress.
Barn owl; Interaural time difference; Cross-correlation; Coincidence detection; Cochlear delays; Sound localization; Nucleus laminaris; Stereausis
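The effective cross-correlation described above can be sketched with broadband noise in place of cochlear-filtered inputs; the sample rate, signal length, and delay below are arbitrary illustrative choices:

```python
import numpy as np

fs = 48_000                          # sample rate (Hz), arbitrary
itd_samples = 12                     # true delay: 12 / fs = 250 microseconds
rng = np.random.default_rng(1)
left = rng.normal(size=4096)
right = np.roll(left, itd_samples)   # right-ear signal lags the left

# Cross-correlate over a range of candidate lags; the peak lag is the
# cross-correlator's ITD estimate.
lags = np.arange(-40, 41)
xcorr = [np.dot(left, np.roll(right, -k)) for k in lags]
itd_est = lags[int(np.argmax(xcorr))]
```

In the biological circuit, each coincidence-detector neuron effectively evaluates one such lag, and the population of neurons covers the range of candidate lags.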
Spike-frequency adaptation is the reduction of a neuron's firing rate to a stimulus of constant intensity. In the locust, the Lobula Giant Movement Detector (LGMD) is a visual interneuron that exhibits rapid adaptation to both current injection and visual stimuli. Here, a reduced compartmental model of the LGMD is employed to explore adaptation's role in selectivity for stimuli whose intensity changes with time. We show that supralinearly increasing current injection stimuli are best at driving a high spike count in the response, while linearly increasing current injection stimuli (i.e., ramps) are best at attaining large firing rate changes in an adapting neuron. This result is extended with in vivo experiments showing that the LGMD's response to translating stimuli having a supralinear velocity profile is larger than the response to constant or linearly increasing velocity translation. Furthermore, we show that the LGMD's preference for approaching versus receding stimuli can partly be accounted for by adaptation. Finally, we show that the LGMD's adaptation mechanism appears well tuned to minimize sensitivity for the level of basal input.
Spike-frequency adaptation; Single neuron computation; LGMD; DCMD; Insect vision; Collision avoidance
The Wilson–Cowan (1973) model of interacting neuron populations appeared in one of the most influential papers published in Biological Cybernetics (Kybernetik). That paper and a companion paper published in 1972 have been cited over 1000 times. Rather than focus on the microscopic properties of neurons, Wilson and Cowan analyzed the collective properties of large numbers of neurons using methods from statistical mechanics, based on the mean-field approach. New experimental techniques to measure neuronal activity at the level of large populations are now available to test these models, including optical recording of brain activity with intrinsic signals and voltage sensitive dyes, and new methods for analyzing EEG and MEG. These measurement techniques have revealed patterns of coherent activity that span centimetres of tissue in the cerebral cortex. Here the underlying ideas are reviewed in a historic context.
The coordination of digits during combined force/torque production tasks was further studied using the data presented in the companion paper [Zatsiorsky et al. Biol Cybern this issue, Part I]. Optimization was performed using as criteria the cubic norms of (a) finger forces, (b) finger forces normalized with respect to the maximal forces measured in single-finger tasks, (c) finger forces normalized with respect to the maximal forces measured in a four-finger task, and (d) finger forces normalized with respect to the maximal moments that can be generated by the fingers. All four criteria failed to predict antagonist finger moments when these moments were not imposed by the task mechanics. Reconstruction of neural commands: The vector of neural commands c was reconstructed from the equation c = W⁻¹F, where W is the finger interconnection weight matrix and F is the vector of finger forces. The neural commands ranged from zero (no voluntary force production) to one (maximal voluntary contraction). For fingers producing moments counteracting the external torque (‘agonist’ fingers), the intensity of the neural commands was well correlated with the relative finger forces normalized to the maximal forces in a four-finger task. When fingers produced moments in the direction of the external torque (‘antagonist’ fingers), the relative finger forces were always larger than those expected from the intensity of the corresponding neural commands. The individual finger forces were decomposed into forces due to ‘direct’ commands and forces induced by enslaving effects. Optimization of the neural commands resulted in the best correspondence between actual and predicted finger forces. The antagonist moments are, at least in part, due to enslaving effects: strong commands to agonist fingers also activated antagonist fingers.
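A minimal numerical sketch of the command reconstruction c = W⁻¹F and the decomposition into direct and enslaved forces; the interconnection matrix and force vector below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Hypothetical 4x4 finger interconnection matrix W (N per unit command):
# diagonal entries are direct gains, off-diagonal entries are enslaving.
W = np.array([[25.0,  3.0,  1.5,  1.0],
              [ 3.0, 30.0,  4.0,  1.5],
              [ 1.5,  4.0, 22.0,  3.5],
              [ 1.0,  1.5,  3.5, 16.0]])
F = np.array([12.0, 18.0, 10.0, 4.0])  # measured finger forces (N), made up

c = np.linalg.solve(W, F)              # neural commands, ideally in [0, 1]
F_direct = np.diag(W) * c              # forces due to 'direct' commands
F_enslaved = F - F_direct              # forces induced by enslaving effects
```

A finger can thus show substantial force with only a weak direct command, which is how enslaving can account for the antagonist moments described above.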
We offer a hypothesis on the organization of multi-effector motor synergies and illustrate it with the task of force production with a set of fingers. A physical metaphor, a leaking bucket, is analyzed to demonstrate that an inanimate structure can show apparent error compensation among its elements. A neural model is developed using tunable back-coupling loops as means of assuring error compensation in a task-specific way. The model demonstrates non-trivial features of multi-finger interaction such as delayed emergence of force stabilizing synergies and simultaneous stabilization of the total force and total moment produced by the fingers. The hypothesis suggests that neurophysiological structures involving short-latency feedback may play a central role in the formation of motor synergies.
This study examines various optimization criteria as potential sources of constraints that eliminate (or at least reduce the degree of) mechanical redundancy in prehension. A model of nonvertical grasping mimicking the experimental conditions of Pataky et al. (current issue) was developed and numerically optimized. Several cost functions compared well with experimental data, including energy-like functions, entropy-like functions, and a “motor command” function. A tissue deformation function failed to predict finger forces. In the prehension literature, the “safety margin” (SM) measure has been used to describe grasp quality. We demonstrate here that the SM is an inappropriate measure for nonvertical grasps. We introduce a new measure, the “generalized safety margin” (GSM), which reduces to the SM for vertical and two-digit grasps. It was found that a close-to-constant GSM accounts for many of the finger force patterns that are observed when grasping an object oriented arbitrarily with respect to the gravity field. It was hypothesized that, when determining finger forces, the CNS assumes that a grasped object is more slippery than it actually is. An “operative friction coefficient” of approximately 30% of the actual coefficient accounted for the offset between experimental and optimized data. The data suggest that the CNS utilizes an optimization strategy when coordinating finger forces during grasping.
The mechanical complexities of rotating an object through the gravity field present a formidable challenge to the human central nervous system (CNS). The current study documents the finger force patterns selected by the CNS when performing one-, two-, and four-finger grasping while holding an object statically at various orientations with respect to vertical. Numerous mechanically “unnecessary” behaviors were observed. These included: nonzero tangential forces for horizontal handle orientations, large internal forces (i.e., those in excess of equilibrium requirements) for all orientations, and safety margins between 50 and 90%. Additionally, none of the investigated measures were constant across orientations or could be represented as a simple trigonometric function of orientation. Nonetheless, all measures varied in systematic (and sometimes symmetric) ways with orientation. The results suggest that the CNS selects force patterns that are based on mechanical principles but also that are not simply related to object orientation. This study is complemented by a second paper that provides an in-depth analysis of the mechanics of nonvertical grasping and accounts for many of the observed results with numerical optimization (see Part II - current issue). Together, the papers demonstrate that the CNS is likely to utilize optimization processes when controlling prehensile actions.