HHS Public Access; Author Manuscript; accepted for publication in a peer-reviewed journal.
Exp Brain Res. Author manuscript; available in PMC 2010 July 1.
Published in final edited form as:
PMCID: PMC2853943

Internal models and neural computation in the vestibular system


The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem–cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have provided new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception.

Keywords: Vestibular, Computation, Internal model, Reference frame transformation, Eye movement, Motor control, Sensorimotor, Reafference, Motion estimation


Vision, hearing, smell, taste, and touch are the five senses we commonly recognize as providing us with information about our environment and our interaction with it. A less well recognized but exquisitely sensitive set of sensors, the vestibular organs in the inner ear, provides us with a vital sixth sense: the sense of our motion and orientation in space. In particular, three roughly orthogonal sets of semicircular canals measure how the head rotates in three dimensions (3D). They are complemented by two otolith organs (the utricle and saccule) that measure linear accelerations including how the head translates and how it is positioned relative to gravity. Even creatures with relatively simple nervous systems (e.g., jellyfish, crustaceans) have basic graviceptors that provide information about orientation with respect to gravity that is critical for survival (Sandeman and Okajima 1972; Singla 1975).

The vestibular system plays a vital role in everyday life, contributing to gaze stabilization (Barnes 1993; Raphan and Cohen 2002; Angelaki 2004; Cullen and Roy 2004), balance and postural control (Inglis et al. 1995; Allum and Honegger 1998; Buchanan and Horak 2001; Horak et al. 2001; Cathers et al. 2005; Maurer et al. 2006; Stapley et al. 2006; Macpherson et al. 2007), spatial navigation (Andersen 1997; Stackman and Taube 1997; Page and Duffy 2003; Bremmer 2005; Day and Fitzpatrick 2005; Gu et al. 2007; Taube 2007), spatial perception and memory (Berthoz et al. 1995; Israel et al. 1997; Van Beuzekom et al. 2001; Stackman et al. 2002; Brandt et al. 2005; Klier et al. 2005; Li and Angelaki 2005; Klier and Angelaki 2008; Vingerhoets et al. 2008), voluntary movement planning (DiZio and Lackner 2001; Mars et al. 2003; Bresciani et al. 2005; Bockisch and Haslwanter 2007; Raptis et al. 2007), and autonomic function (Yates 1992; Balaban and Porter 1998; Yates and Bronstein 2005). However, unlike most other senses, we are typically not consciously aware of its contribution until uncertainty in interpreting vestibular signals or conflicts with other sensory cues give rise to illusions or motion sickness. Its essential contribution is felt most acutely when vestibular system function is compromised (e.g., due to vestibular hair cell loss, vestibular neuritis, central and peripheral lesions etc.) resulting in problems of disorientation, loss of balance and postural control, loss of visual acuity, and perceptual distortions (Curthoys et al. 1991; Halmagyi et al. 1991; Curthoys and Halmagyi 1995; Karnath and Dieterich 2006; Dieterich 2007). Being phylogenetically old, the vestibular system can also provide unique insights into the foundations upon which the computational strategies used widely by the brain are organized.

Much of the processing of vestibular signals occurs in the brainstem and cerebellum, where there is already strong multimodal convergence with optokinetic and proprioceptive information (Waespe and Henn 1977, 1981; Boyle and Pompeiano 1980, 1981; Boyle et al. 1985; Wilson et al. 1990; Buttner et al. 1991; Barmack and Shojaku 1995; McCrea et al. 1999; Wylie and Frost 1999; Gdowski and McCrea 2000; Barmack 2003). In addition, many of the secondary neurons receiving direct primary afferent inputs are also premotor cells that project directly to extraocular motoneurons (McCrea et al. 1980, 1987; Scudder and Fuchs 1992). Thus, beyond its obvious functional importance, the vestibular system also represents an ideal model system for studying broad principles of sensory processing ranging from multisensory integration for spatial motion estimation to the sensorimotor transformations required for motor control. While most recent reviews have concentrated on specific aspects of vestibular system function (e.g., gaze stabilization: Barnes 1993; Raphan and Cohen 2002; Angelaki 2004; Cullen and Roy 2004; Angelaki and Hess 2005; motor learning: du Lac et al. 1995; Raymond et al. 1996; Blazquez et al. 2004; Boyden et al. 2004; postural control and locomotion: Bent et al. 2005; Deliagina et al. 2008; spatial memory and visuo-spatial updating: Klier and Angelaki 2008; Smith et al. 2009; cortical multisensory integration: Andersen 1997; Fukushima 1997; Angelaki et al. 2009), the goal here is to focus on early (i.e., subcortical) vestibular processing (see also Angelaki and Cullen 2008) and how it has contributed to our understanding of neural computation.

Historically, computational approaches have always been an integral part of studies of the vestibular system. This trend was initiated early by pioneers who used control systems theory to establish the basic sensorimotor transformations by which vestibular signals are converted into the motor commands that drive compensatory eye movements (i.e., vestibulo-ocular reflexes, VOR) during head motion. The success of this approach was facilitated by at least four important factors: (1) vestibular stimuli can be precisely controlled, thus ensuring that the “input” can be easily quantified. This is also true for the “output”: eye movements can be very accurately measured (Robinson 1963); (2) all processing stages in the VOR, from primary afferents to extraocular motor neurons, take place within interconnected brainstem and cerebellar regions that are easily accessible for electrode recordings; (3) the eye represents a very simple motor plant both because it is a single joint system and because it carries a negligible load; (4) to a first approximation, the simplest processing in the VOR pathways is linear. As a result, it was possible not only to theoretically predict exactly which transformations need to take place to convert vestibular signals into an appropriate motor output, but also to identify experimentally the neural correlates for these transformations.

Such studies continue to provide new insights for sensorimotor control. However, more recently, research in the field has increasingly focused on more complex and often nonlinear or “context-dependent” computations. As reviewed below, the vestibular system provides an excellent model for identifying the neural correlates of contemporary principles of motor control (e.g., internal models, reafference versus exafference, and reference frame transformations) both because of its relative simplicity (e.g., as compared to the circuits for limb control) and because it is possible to precisely control and measure both the inputs to the system and its neural or behavioral outputs. Most recent work in the vestibular system has focused on the following research questions: (1) the sensorimotor transformations for reflex generation; (2) the neural computations for inertial motion estimation; (3) the computations to distinguish active from passive movements; (4) the integration of vestibular and proprioceptive signals for body motion estimation. Therefore, in this review we will first discuss experimental and theoretical evidence for internal models in the VOR and their neural correlates in vestibular nuclei (VN) neurons that are sensitive to eye movements. Then we will shift our attention to another group of VN neurons without sensitivity to eye movements and summarize their role in the computation of inertial motion and in the distinction between actively- versus passively-generated head movements. The last topic we will review is how vestibular signals can be used to estimate not only head but also body motion. Note that, throughout this review, we concentrate on the subcortical processing of vestibular information (for reviews about cortical processing, see Fukushima 1997; Guldin and Grusser 1998; Angelaki et al. 2009).

A common theme throughout is the concept of internal models. In recent years, the term “internal model” has been used in a variety of contexts to refer to anything from an explicit neural representation of the dynamic properties of a motor plant or sensor (Shidara et al. 1993; Shadmehr and Mussa-Ivaldi 1994; Wolpert and Miall 1996; Kawato 1999; Green et al. 2007) to the representation of a solution to a specific equation that needs to be solved (Merfeld et al. 1999; Angelaki et al. 2004; Zago et al. 2004, 2009; Green et al. 2005). Here, we use the term in its broadest sense to refer to any neural representation of a specific computation that needs to be performed. The internal model concept is emphasized here because, as reviewed below, this is perhaps the only sensorimotor system for which neural correlates of internal models have been explicitly identified.

Internal models for slow eye movements in the vestibulo-ocular reflex

An essential role of the vestibular system is to ensure stable viewing of the world by eliciting short-latency reflexive eye movements to compensate for head movement, known as the vestibulo-ocular reflexes (VORs). Early studies of the vestibulo-ocular pathways have provided the groundwork for understanding basic sensorimotor transformations and have elucidated principles that have broad application to all types of motor control. In particular, in any motor system, the brain must compute motor commands from signals that provide a representation of desired action. The required computations often rely on internal representations of the dynamic properties of the motor plant to be controlled. Such “internal models”, which now constitute a general theoretical concept in motor control, may be used either to transform desired action into appropriate motor commands (“inverse model”) or conversely, to predict the consequences of motor commands on behavior (“forward model”) (Shidara et al. 1993; Shadmehr and Mussa-Ivaldi 1994; Wolpert and Miall 1996; Kawato 1999). Some of the earliest and most parsimonious evidence for such models and their neural implementation comes from studies of sensorimotor processing in the vestibulo-ocular pathways. Next we describe the sensorimotor transformations in the rotational vestibulo-ocular reflex (RVOR) and the concepts that have emerged thus far.

The requirement for an inverse dynamic model in the RVOR

The need for an inverse dynamic model in the RVOR (Fig. 1a) was first articulated by David Robinson and his colleagues in the 1970s (Skavenski and Robinson 1973). Their hypothesis, which has remained influential in motor control, was based on three basic observations: (1) afferents from the semicircular canals encode head velocity over a broad frequency range (> ~ 0.03 Hz); (2) having little inertia, the mechanics of the eyeball are dominated by visco-elastic forces such that the relationship between eye position and motoneuron firing rates can be approximated by a first-order low-pass filter with a bandwidth of ~ 0.5–0.6 Hz (Robinson 1964, 1965, 1970). As a result, if the semicircular canal afferent signals were simply projected in a feed-forward fashion directly to extraocular motoneurons, eye velocity would be proportional to head velocity only for frequencies above ~ 0.5 Hz (Fig. 1b; blue curve labeled “no inverse model”). Yet, (3) it has been shown experimentally that the compensatory RVOR bandwidth is broad, extending to very low frequencies (Fig. 1b; red curve labeled “with inverse model”; Buettner et al. 1981; Mizukoshi et al. 1983; Paige and Sargent 1991; Angelaki et al. 1996). The difference between the red and blue curves in Fig. 1b implies an additional processing stage (“inverse model” in Fig. 1a), whereby premotor circuits compensate for the dynamics of the eyeball by “filtering” canal afferent signals with an inverse dynamic model of the eye plant.
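This frequency-domain argument can be checked with a minimal numerical sketch. The numbers below (a first-order eye plant with an assumed time constant of 0.3 s, i.e., a corner frequency near 0.5 Hz, and unit gains) are illustrative only, not fitted physiological values:

```python
import numpy as np

tau = 0.3                                  # assumed eye-plant time constant (s): ~0.5 Hz corner
f = np.array([0.02, 0.1, 0.5, 2.0, 5.0])   # stimulus frequencies (Hz)
s = 1j * 2 * np.pi * f                     # Laplace variable on the imaginary axis

plant = 1.0 / (tau * s + 1)                # eye position per unit motor command (low-pass)

# (1) No inverse model: the canal head-velocity signal is sent straight to
#     motoneurons (scaled so the high-frequency gain is 1).  The resulting
#     eye-velocity/head-velocity gain is high-pass:
gain_no_inv = np.abs(s * plant * tau)

# (2) With an inverse model (tau*s + 1) acting on the desired eye position
#     (head velocity integrated to -H/s), the plant dynamics cancel and the
#     gain is flat across frequency:
command = (tau * s + 1) / s                # inverse plant applied to desired position
gain_inv = np.abs(s * plant * command)
```

Printing the two arrays reproduces the qualitative blue versus red curves of Fig. 1b: the first rises from near zero toward 1 with frequency, while the second stays at 1 throughout.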

Fig. 1
The sensorimotor processing underlying eye movement generation in the RVOR. a Angular velocity signals from the semicircular canals are processed by an inverse dynamic model of the eye plant before being conveyed onto extraocular motoneurons (MNs). b ...

Robinson and colleagues also pioneered the first plausible implementation of such an inverse dynamic model that became well-known as the “parallel-pathway” model (Fig. 1c; Skavenski and Robinson 1973; Robinson 1981): They proposed that velocity signals were conveyed to motoneurons (MN) both directly and indirectly via a “neural integrator” (∫ in Fig. 1c). Together the two pathways compensate for the viscoelastic properties of the eyeball and are thought to comprise an inverse dynamic model of a simplified (first-order) eye plant. In an alternative representation the integration was implemented in a distributed fashion via positive feedback loops through a forward model of the eye plant (Fig. 1d; Galiana and Outerbridge 1984; Galiana 1991). These two descriptions produce equivalent sensorimotor transformations and the same VOR response characteristics (i.e., same blue to red curve transformation in Fig. 1b; see Green et al. 2007, supplemental material, for details).
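The parallel-pathway scheme can also be sketched in the time domain. The toy forward-Euler simulation below (again assuming a 0.3 s plant time constant) drives a first-order eye plant with the sum of a direct velocity path and a neural-integrator path, and shows that the integrator pathway is what preserves compensation at low frequency:

```python
import numpy as np

tau, dt = 0.3, 0.001                       # assumed plant time constant (s); Euler step (s)
t = np.arange(0, 60, dt)
head_vel = np.sin(2 * np.pi * 0.05 * t)    # low-frequency (0.05 Hz) head rotation

def simulate(use_integrator):
    """Parallel-pathway RVOR: direct velocity path + neural integrator (Fig. 1c)."""
    eye, integ = 0.0, 0.0
    eye_vel = np.zeros_like(t)
    for i, hv in enumerate(head_vel):
        desired_vel = -hv                  # compensatory (oppositely directed) eye velocity
        if use_integrator:
            integ += desired_vel * dt      # neural-integrator state
        u = tau * desired_vel + integ      # motor command = direct + integrated pathways
        d_eye = (u - eye) / tau            # first-order eye plant: tau*E' + E = u
        eye += d_eye * dt
        eye_vel[i] = d_eye
    return eye_vel

steady = t > 40                            # discard the initial transient
gain_with = np.max(np.abs(simulate(True)[steady]))
gain_without = np.max(np.abs(simulate(False)[steady]))
```

With both pathways the eye-velocity gain re head velocity is close to 1 even at 0.05 Hz; with the direct path alone it collapses to roughly 0.1, i.e., the uncompensated high-pass behavior.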

Both implementations make an important prediction regarding the properties of the neurons driving the RVOR: there should exist premotor neurons whose firing rates are closely correlated with eye position, reflecting the output of the neural integrator in Fig. 1c or the output of the forward model in Fig. 1d. Further expanded and more complex models consistent with this notion also predicted the existence of neurons which encode various combinations of head velocity and eye movement-related signals (Cannon et al. 1983; Galiana and Outerbridge 1984; Cannon and Robinson 1985; Arnold and Robinson 1991; Cova and Galiana 1996; Green and Galiana 1996; Hazel et al. 2002). As described next, recordings from brainstem and cerebellar neurons have provided solid experimental evidence consistent with these predictions.

Neural correlates for an inverse model

The first neurophysiological support for the existence of an inverse eye plant model that includes a neural integrator came from the discovery of “burst-tonic” and “tonic” neurons in the prepositus hypoglossi (PH) and adjacent medial vestibular nuclei (VN) (collectively referred to here as PH–BT cells). As shown in Fig. 2a, PH–BT neurons have firing rates that correlate closely with eye position during static fixation and low-frequency slow eye movements (Baker and Berthoz 1975; Lopez-Barneo et al. 1982; Escudero et al. 1992, 1996; McFarland and Fuchs 1992) and they do not respond to head movements in the absence of eye movement during fixation of a target that moves with the head (i.e., during RVOR suppression; McFarland and Fuchs 1992; Cullen et al. 1993; Green et al. 2007). Consequently, PH–BT neurons were thought to encode the eye position component of the inverse dynamic model (e.g., E* in Fig. 1c, d).

Fig. 2
Premotor eye-movement-sensitive cell types implicated in the brainstem RVOR pathways. a Burst-tonic (PH–BT) neuron recorded in the PH. Modified and reprinted with permission from McFarland and Fuchs (1992). b Eye-contralateral (Type I) position-vestibular-pause ...

Other populations of VN neurons that became known as “position-vestibular-pause” (PVP, Fig. 2b) and “eye-head” (EH, Fig. 2c) cells were shown to carry different combinations of head velocity and eye position (and/or eye velocity) signals (King et al. 1976; Lisberger and Miles 1980; Chubb et al. 1984; Tomlinson and Robinson 1984; Scudder and Fuchs 1992; Cullen et al. 1993; Cullen and McCrea 1993; Lisberger et al. 1994c). Many PVP and EH neurons receive monosynaptic canal inputs and make direct projections to extraocular motoneurons, thus being identified as putative interneurons in the shortest-latency VOR pathways (McCrea et al. 1980, 1987; Scudder and Fuchs 1992). Depending on whether PVP and EH cells prefer contralaterally or ipsilaterally directed eye movements, they can be further subdivided into “eye-contra” and “eye-ipsi” cell types. Notably, only the eye-contra (also widely known as “type I”) PVP and EH subgroups appear to make the bulk of direct projections to motoneurons and are thus considered the main premotor VN neurons in the RVOR pathways (McCrea et al. 1980, 1987; Scudder and Fuchs 1992).

The PVP and EH cell types can be distinguished by the way they combine head and eye movement signals. PVP cells increase their activities for head rotation in one direction during RVOR suppression (i.e., stabilization of a target that moves with the head so that the eyes do not move) and for eye rotation in the opposite direction during head-stationary smooth target tracking (smooth pursuit); as a result, response modulation is largest during stable-gaze RVOR when the eyes move in the opposite direction to head motion as animals fixate a world-fixed target (Fig. 2b). In contrast, the preferences of EH cells for head rotation during RVOR suppression and for eye rotation during smooth pursuit are in the same direction, such that response modulation is reduced during stable-gaze RVOR (Fig. 2c; Scudder and Fuchs 1992). As EH cells typically exhibit larger responses during pursuit as compared to RVOR suppression, their modulation during stable gaze RVOR is often dominated by eye-movement-related activity (Scudder and Fuchs 1992; Cullen et al. 1993; Lisberger et al. 1994c). As a result, some EH cells with large pursuit responses show an apparent reversal in preferred direction during RVOR suppression (when the eyes do not move) as compared to RVOR stable gaze conditions (when compensatory eye movements are elicited; e.g., Fig. 2c). This “oppositely-directed” activity is presumed responsible for canceling out the strong PVP modulation (e.g., Fig. 2b) at the motoneuron level during RVOR suppression (Scudder and Fuchs 1992; Cullen et al. 1993; Cullen and McCrea 1993). Thus, in conjunction with PH–BT cells, PVP and EH cells are generally presumed to provide motoneurons with the correct combination of velocity and position-like signals to compensate for the plant dynamics during slow eye movements.
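The complementary signal combinations carried by PVP and EH cells can be illustrated with a toy linear firing-rate model. The sensitivity values below are invented purely for illustration; only their relative signs (opposing head/eye preferences for PVP, matching ones for EH) reflect the physiology described above:

```python
import numpy as np

t = np.linspace(0, 2, 2000)
head_vel = 30 * np.sin(np.pi * t)          # 0.5 Hz head rotation (deg/s)
zero = np.zeros_like(t)

def modulation(head_gain, eye_gain, head, eye):
    """Peak-to-peak modulation of a linear head+eye firing-rate model (illustrative)."""
    rate = head_gain * head + eye_gain * eye
    return rate.max() - rate.min()

# hypothetical sensitivities: PVP head/eye gains carry opposite signs,
# EH head/eye gains the same sign
pvp_vor  = modulation(0.5, -0.8, head_vel, -head_vel)  # stable-gaze RVOR (eye = -head)
pvp_supp = modulation(0.5, -0.8, head_vel, zero)       # RVOR suppression (eye still)
eh_vor   = modulation(0.3,  0.9, head_vel, -head_vel)  # stable-gaze RVOR
eh_purs  = modulation(0.3,  0.9, zero, head_vel)       # head-fixed smooth pursuit
```

As in Fig. 2b, c, the PVP modulation is largest during stable-gaze RVOR (head and eye terms add), whereas the EH modulation during stable-gaze RVOR is smaller than during pursuit because its head and eye terms partially cancel.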

Learning and viewing context-related changes in VOR amplitude are often accompanied by significant changes in the depth of modulation of EH neurons. These cells are thus thought to play a particularly important role in the online contextual modulation of the VOR with viewing location (McConville et al. 1996; Chen-Huang and McCrea 1999a, b; Meng and Angelaki 2006) as well as in long-term adaptive reflex changes brought about by altered visual-vestibular mismatch stimuli (Lisberger et al. 1994b). A subset of EH (but not PVP) cells, known as floccular-target neurons (FTNs), receive direct inhibitory projections from the cerebellar flocculus and exhibit properties appropriate to drive changes in reflex gain during motor learning (Lisberger et al. 1994b, c). FTNs and their connectivity with the cerebellar flocculus/ventral paraflocculus have provided an excellent model system for studying the neural, cellular, and genetic basis of a simple form of motor learning (see reviews by Lisberger 1988; du Lac et al. 1995; Raymond et al. 1996; Blazquez et al. 2004; Boyden et al. 2004).

This brief summary emphasizes a widely accepted notion that the several types of eye-movement-sensitive premotor neurons collectively contribute to computing an inverse dynamic model of the eye plant. The distributed nature of this inverse model is supported both by the high level of neuronal interconnectivity and by eye-movement deficits consistent with a loss of integration after lesions to many brainstem and cerebellar areas (Zee et al. 1981; Cannon and Robinson 1987; Godaux et al. 1993; Mettens et al. 1994; Kaneko 1997, 1999). Recently, aspects of this theoretical construct have been reconsidered and extended, leading to new insights into the organization of the system that reveal close parallels with other motor systems (e.g., limb control). These insights have been brought about by considering another reflex type, the translational vestibulo-ocular reflex (TVOR), that generates compensatory eye movements during translation (e.g., during locomotion). Next we describe how differences between the RVOR and TVOR have helped probe the concepts of internal models and their neural implementation.

Insights from the translational vestibulo-ocular reflex

The TVOR differs from the RVOR in many respects (reviewed in Angelaki 2004; Angelaki and Hess 2005), including the basic dynamic transformations required to convert sensory signals to motor commands. In particular, unlike other types of eye movements including saccades, smooth target tracking, and the RVOR that are all driven by velocity-like signals, the sensory drive for the TVOR provided by otolith afferents is encoded in terms of linear acceleration (Fernandez and Goldberg 1976a, b). Behaviorally, the TVOR also has a much narrower dynamic range and is robust only at frequencies above the eye plant bandwidth (>~0.5–1 Hz; Paige and Tomko 1991a; Telford et al. 1997; Angelaki 1998).

These differences both at sensory and at motor levels imply that ultimately different sensorimotor processing is required for the TVOR versus the RVOR. But to what extent are common computational strategies employed? Recall that the broad RVOR bandwidth has been used as the main argument for the existence of an inverse dynamic model that compensates for the eye plant dynamics (Fig. 1a, b; Skavenski and Robinson 1973). However, using a similar logic, no such compensation is needed for the TVOR: the high-pass dynamics of the TVOR (e.g., similar to those in Fig. 1b, blue curve “no inverse model”) would either argue against an inverse plant model or at best suggest that such processing may be unnecessary (Green and Galiana 1998; Musallam and Tomlinson 1999; Angelaki et al. 2001). In principle, only an integrator is necessary for the TVOR to convert linear acceleration into the velocity-like signals required to drive the reflex at higher frequencies. Thus, one way that otolith signals could be processed is by only utilizing the integrator pathway in Robinson’s parallel pathway diagram (Fig. 3a; Green and Galiana 1998; Musallam and Tomlinson 1999; Angelaki et al. 2001).
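The integrator-only idea can be sketched numerically: a single temporal integration (made slightly leaky here to avoid drift) converts an otolith-like linear-acceleration signal into the velocity-like drive the reflex needs at higher frequencies. The stimulus frequency and leak time constant below are arbitrary illustrative choices:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
w = 2 * np.pi * 2.0                        # 2 Hz translation, within the TVOR range
accel = np.cos(w * t)                      # otolith afferent drive: linear acceleration

tau_leak = 5.0                             # assumed long leak time constant (s)
v, vel_est = 0.0, np.zeros_like(t)
for i, a in enumerate(accel):
    v += (a - v / tau_leak) * dt           # leaky temporal integration of acceleration
    vel_est[i] = v

# After the transient, the estimate matches the true integral sin(w*t)/w,
# i.e., a velocity-like signal lagging acceleration by ~90 degrees
steady = t > 5
err = np.max(np.abs(vel_est[steady] - np.sin(w * t[steady]) / w))
```

At frequencies well above the leak corner the integrator output is essentially the exact integral, consistent with the TVOR being robust only above the eye-plant bandwidth.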

Fig. 3
Schematic illustration of two hypotheses for the dynamic processing in the TVOR. a Distributed dynamic processing hypothesis, whereby the internal model is not fully implemented in the TVOR pathways. Otolith signals are processed by only the neural integrator ...

While the scheme shown in Fig. 3a represents the most efficient strategy for processing otolith signals in the TVOR, it nonetheless has a disadvantage. As a common inverse model would not be shared by all sensorimotor systems that drive the same effector (the eyeball in this case), the way that premotor neurons encode information about eye movement would depend on the sensory stimulus (see Green and Galiana 1998; Green et al. 2007 for details). Alternatively, a common inverse model might be shared by multiple sensorimotor systems to ensure that at least some premotor neurons always encode a consistent eye movement representation even when the dynamics of both the motor output and the sensory input differ. In this case, however, the processing in the TVOR would be less efficient; otolith linear acceleration signals would need to be preprocessed first (i.e., upstream of the inverse plant model; “prefiltering” stage; Fig. 3b; Paige and Tomko 1991a, b; Telford et al. 1997) both to make them compatible with the velocity-like eye movement drive from other sensory sources and to provide the high-pass properties that are observed behaviorally in the TVOR. What strategy does the brain use? One that optimizes the use of existing circuitry to perform multiple distinct sensorimotor transformations (Fig. 3a) or one that relies on a common internal model, despite the need for additional processing, with the goal of maintaining consistent internal state estimates (Fig. 3b)?
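Because the two schemes must reproduce the same behavior, they cannot be distinguished from input-output measurements alone, which is why single-unit evidence is needed. A frequency-domain sketch makes the equivalence explicit; here the prefilter transfer function is chosen purely so that the two pipelines match (an illustrative assumption, not a measured filter):

```python
import numpy as np

tau = 0.3                                  # assumed eye-plant time constant (s)
s = 1j * 2 * np.pi * np.array([0.2, 0.5, 1.0, 2.0, 4.0])   # test frequencies (Hz)
plant = 1.0 / (tau * s + 1)
accel = 1.0                                # unit otolith (linear-acceleration) input

# Fig. 3a: acceleration enters only the integrator pathway of the parallel model
u_a = accel / s                            # integrated acceleration = velocity-like command
eyevel_a = s * plant * u_a

# Fig. 3b: acceleration is prefiltered, then passed through the full inverse model
v = accel / (tau * s + 1)                  # prefiltered, velocity-like central drive
u_b = (tau * s + 1) / s * v                # inverse plant applied to desired position v/s
eyevel_b = s * plant * u_b
```

The motor outputs are identical at every frequency, but the intermediate signals (u_a versus v and u_b) differ, which is exactly what distinguishes the neural predictions of the two hypotheses.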

Single unit recordings from PH–BT, PVP, and EH cells during both rotation and translation have revealed distinctions in the way particular neural subpopulations encode rotational versus translational signals (Angelaki et al. 2001; Meng et al. 2005; Meng and Angelaki 2006; Green et al. 2007). Nonetheless, strong support has been provided for the prefiltering stage in Fig. 3b (Green et al. 2007). Both “eye-contra” PVP and PH–BT cells (but not EH and “eye-ipsi” PVP cells) exhibit modulations that lag eye velocity (i.e., are more closely in phase with eye position) at 0.5 Hz, suggesting that a second temporal integration of otolith signals must take place centrally in the TVOR pathways (i.e., compatible with a prefiltering stage). In addition, unlike all other cell types, PH–BT cells exhibited response dynamics relative to eye movement that were identical during rotation and translation. Thus, canal and otolith signals appear to be processed by a common inverse model to create a consistent estimate of the motor output at the level of PH–BT neurons (Green et al. 2007). Furthermore, as outlined next, a direct comparison of neural responses during rotation and translation with those of extraocular motor neurons has provided new insights regarding the neural basis of the inverse dynamic model and the role of PH–BT cells.

Neural correlates for an inverse model reconsidered

Recall that the prevailing theoretical conceptualizations emphasized the notion that PH–BT neurons encode an internal estimate of eye position, a signal representing the output of the neural integrator (E* in Fig. 1c) or of a forward eye plant model (E* in Fig. 1d). Yet, when the critical experiment to test such a presumption was performed, it was shown that PH–BT cell dynamics are identical to those of extraocular motoneurons (Fig. 4a; Green et al. 2007). Thus, PH–BT cells are not the output of the neural integrator portion of the inverse model, as previously assumed. Instead, they appear to represent the output of the inverse model itself, encoding an efference copy of the motor command signal (Green et al. 2007). In retrospect, this finding is not surprising. Motoneurons are involved only in generating the movement, and the control of eye movements does not rely on online feedback from muscle spindles (Keller and Robinson 1971; Guthrie et al. 1983). Thus, PH–BT cells must play the important role of distributing an efference copy of the motor command (output of the inverse dynamic model) to different premotor and sensory areas, where it can be used for multiple purposes, including updating the brain about ongoing eye movements (McCrea and Baker 1985; Belknap and McCrea 1988; Green et al. 2007).

Fig. 4
Evidence that tonic and burst-tonic cells in the PH and adjacent medial VN (PH–BT cells) represent the output of a common inverse model. a Comparison of the dynamic characteristics of PH–BT cells with those of extraocular motoneurons. ...

Notably, the requirement for dedicated neuronal populations that carry an efference copy of motor command signals is found in contemporary theories of limb control, which suggest that sensorimotor transformations may rely on complementary forward and inverse models of the sensors and motor actuators (e.g., Wolpert and Kawato 1998). Accordingly, ongoing eye movement can be estimated by feeding the efference copy signal of the motor command (i.e., the output of the inverse model) through a forward model. The output of the forward model would then predict the estimated eye movement consequences of this motor command (Fig. 4b). Such a signal can be used to update the brain about ongoing eye movement and to correct online for any errors between predicted and desired action by subsequently refining the motor command.
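A minimal closed-loop sketch of the Fig. 4b architecture, under the simplifying assumptions of a first-order plant and a forward model that copies it exactly: the command is refined online from the error between desired and predicted (not sensed) eye position, so accurate behavior is achieved without any proprioceptive feedback. All gains are arbitrary:

```python
import numpy as np

tau, dt = 0.3, 0.001                       # assumed plant time constant (s); Euler step
t = np.arange(0, 3, dt)
desired = np.where(t >= 0.2, 10.0, 0.0)    # desired eye position (deg): a step

k = 20.0                                   # arbitrary gain on the internal error signal
eye, pred = 0.0, 0.0
eye_trace = np.zeros_like(t)
for i, d in enumerate(desired):
    u = pred + tau * k * (d - pred)        # command refined by predicted-vs-desired error
    pred += (u - pred) / tau * dt          # forward model: internal copy of the plant,
                                           # driven by the efference copy of u
    eye += (u - eye) / tau * dt            # actual eye plant (no sensory feedback used)
    eye_trace[i] = eye
```

The eye settles on the desired position, and because the forward model is exact, the prediction tracks the actual eye perfectly; with a mismatched forward model the same loop would accumulate error, which is where adaptive (e.g., cerebellar) recalibration is thought to come in.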

A forward model in the cerebellum?

If PH–BT neurons in the PH and VN represent the output of the inverse dynamic model for slow eye movements, where might the proposed forward model be? Green et al. (2007) (also see Glasauer 2003) suggested the cerebellum as one likely site for the implementation of the forward model. Indeed, the cerebellum has been implicated in the implementation of forward and inverse dynamic models both for limb control (Ito 1970; Miall et al. 1993; Wolpert and Kawato 1998; Kawato et al. 2003) and for eye movements (Shidara et al. 1993; Gomi et al. 1998; Glasauer 2003; Ghasia et al. 2008; Lisberger 2009). In support of this notion, BT neurons in PH/VN and paramedian tract are known to project to the flocculus (Langer et al. 1985b; McCrea and Baker 1985; Belknap and McCrea 1988; Buttner-Ennever et al. 1989; Nakamagoe et al. 2000). Feedback from a presumed forward model in the flocculus could be used to update the brainstem motor command signal via Purkinje cell projections onto FTNs (i.e., EH-type cells) in the VN (Langer et al. 1985a; Lisberger et al. 1994c).

Support for such a proposed role for the cerebellar flocculus and its projections onto EH cells in computing a forward model comes from a recent study which examined how various cell populations encode 3D ocular kinematics during smooth pursuit eye movements (Ghasia et al. 2008). In particular, visually-guided eye movements in 3D are subject to kinematic constraints such that eye positions always lie in what is known as Listing’s plane. To achieve this, the axis of eye rotation during movements initiated from eccentric positions must tilt out of this plane in the same direction as gaze, by approximately half as much (half-angle rule) (Tweed and Vilis 1990). As extraocular motoneurons do not encode the half-angle rule (Ghasia and Angelaki 2005) this property appears to be generated by the mechanical characteristics of the eyeball (Miller 1989; Demer et al. 2000; Kono et al. 2002; Klier et al. 2006). As a result, there is a clear distinction between the motor command and the resulting eye movement, a difference that provides a unique opportunity to investigate the neural substrates for inverse versus forward models. Indeed, PH–BT cells, like motoneurons, showed little systematic eye position dependence consistent with the half-angle rule (Ghasia et al. 2008), providing further support for the proposal that PH–BT neurons represent the output of the inverse model.
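The half-angle rule itself follows directly from rotation kinematics. The sketch below uses rotation-vector (Rodrigues) coordinates r = tan(angle/2)·axis, with the x-axis taken (by convention, an assumption of this sketch) as the torsional axis so that Listing's plane is r_x = 0; the standard identity ω = 2(ṙ + r × ṙ)/(1 + |r|²) then gives the angular velocity:

```python
import numpy as np

def angular_velocity(r, rdot):
    """Angular velocity from a rotation vector r = tan(angle/2)*axis and its derivative."""
    return 2.0 * (rdot + np.cross(r, rdot)) / (1.0 + r @ r)

theta = np.radians(30.0)                       # horizontal gaze eccentricity
r = np.array([0.0, 0.0, np.tan(theta / 2)])    # eye position in Listing's plane (r_x = 0)
rdot = np.array([0.0, 1.0, 0.0])               # a purely vertical eye movement

w = angular_velocity(r, rdot)
tilt = np.degrees(np.arctan2(abs(w[0]), abs(w[1])))   # axis tilt out of Listing's plane
```

For a 30 deg eccentric gaze the rotation axis tilts 15 deg out of Listing's plane, i.e., exactly half the gaze angle, even though the eye positions themselves never leave the plane.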

Combined, the studies of Green et al. (2007) and Ghasia et al. (2008) thus show that the firing rates of PH–BT neurons are both dynamically (i.e., in terms of their frequency response characteristics; Fig. 4a) and kinematically (i.e., in terms of their firing properties during 3D eye movements) identical to the firing rates of extraocular motoneurons. In contrast to motoneurons and PH–BT cells, EH neurons showed a systematic dependency on eye position that might be consistent with the half-angle rule, suggesting that they carry signals more closely related to the actual executed eye velocity (Ghasia et al. 2008). As many EH cells receive projections from the cerebellar flocculus, such signals could be conveyed from a forward model in the cerebellum. Yet, there are reasons to suggest that this may not simply be a forward model of the eye plant. In particular, many Purkinje cells in the cerebellar flocculus do not simply encode eye velocity but rather seem to combine eye and head velocity signals to compute an estimate of gaze velocity (Lisberger and Fuchs 1978; Miles et al. 1980; Stone and Lisberger 1990; Lisberger et al. 1994a). This has led to the speculation that if the cerebellar flocculus computes a forward model it may in fact be a model of the combined eye-head gaze system (Lisberger 2009). At present, no firm conclusion has been reached regarding the nature of and neural correlates for such a hypothesized forward model.

These recent advances emphasize a conceptual organization for the vestibulo-ocular system that closely parallels that proposed for limb control. Interpretations of experimental data in this context (e.g., Green et al. 2007; Chen-Harris et al. 2008; Ethier et al. 2008; Ghasia et al. 2008; Lisberger 2009) are thus likely to shed valuable new insights into neural strategies for sensorimotor processing, motor control, and learning that are relevant for all types of reflexive and goal-directed voluntary movement. Yet, while the vestibular system’s contribution to gaze stabilization is arguably one of its most well-studied functions, the vestibular sensors also provide important sensory cues for spatial orientation and self-motion perception. Next, we describe how signals from the two vestibular sensors interact and we focus on another neuron type, known as “vestibular-only” (VO) cells, which are distinct from the cell populations described above in that they do not carry signals related to eye movement.

Computations for inertial motion estimation in the brainstem and cerebellum

Early studies showed that VO cells in the VN and cerebellum also behave as a distributed neural integrator (Cohen et al. 1977, 1981; Raphan et al. 1977, 1979; Waespe and Henn 1977; Katz et al. 1991; Reisine and Raphan 1992; Yokota et al. 1992; Wearne et al. 1997a, 1998). The original function ascribed to this integrative network, which became popularly known as the “velocity storage” integrator, was to compensate for the high-pass dynamic properties of the semicircular canals, with the goal of improving or storing central estimates of angular velocity (Raphan et al. 1977, 1979; Robinson 1977). Thus, this network too appeared to be computing an inverse model, but this time not of the dynamics of the eye but instead of the semicircular canals. However, subsequent experiments revealed that this so-called “velocity storage” network also exhibits complex spatial properties that depend on head orientation with respect to gravity (Raphan et al. 1981; Harris 1987; Raphan and Cohen 1988; Dai et al. 1991; Merfeld et al. 1993b; Angelaki and Hess 1994, 1995; Wearne et al. 1997b, 1998). These observations pointed to a broader role for this VO-cell network in integrating multisensory signals (i.e., optokinetic, vestibular, and somatosensory) to compute internal estimates of inertial self-motion (Merfeld et al. 1993a; Angelaki and Hess 1994, 1995; Merfeld 1995; Glasauer and Merfeld 1997; Hess and Angelaki 1997; Zupan et al. 2002; Green and Angelaki 2003, 2004). In this regard, the nomenclature used to describe VO neurons is misleading, as many respond not only to vestibular stimuli but also to full-field optokinetic and/or proprioceptive stimulation (Waespe and Henn 1977, 1981; Boyle and Pompeiano 1980, 1981; Boyle et al. 1985; Kasper et al. 1988; Wilson et al. 1990; Buttner et al. 1991; Barmack and Shojaku 1995; McCrea et al. 1999; Wylie and Frost 1999; Gdowski and McCrea 2000; Barmack 2003; Bryan and Angelaki 2009).
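
The compensation attributed to velocity storage can be caricatured with a first-order simulation (a hedged Python sketch; the time constants and coupling gain are illustrative values, not fitted parameters). A leaky integrator charged by the decaying canal signal prolongs the central velocity estimate during constant-velocity rotation:

```python
dt = 0.001
tau_canal = 6.0    # canal high-pass time constant (~6 s, assumed)
tau_store = 15.0   # storage leak time constant (illustrative)
k_store = 0.12     # coupling gain from canal signal into storage (illustrative)
omega = 60.0       # constant-velocity yaw rotation, deg/s, stepped on at t = 0

hp_state = 0.0     # low-pass state; canal output is the high-pass complement
store = 0.0        # stored velocity (leaky integrator)
canal_trace, est_trace = [], []
for i in range(int(30.0 / dt)):
    hp_state += dt * (omega - hp_state) / tau_canal
    canal = omega - hp_state              # decays as exp(-t / tau_canal)
    store += dt * (k_store * canal - store / tau_store)
    canal_trace.append(canal)
    est_trace.append(canal + store)       # central velocity estimate

i15 = int(15.0 / dt)                      # compare signals at t = 15 s
print(round(canal_trace[i15], 1), round(est_trace[i15], 1))
```

By 15 s the raw canal signal has decayed to a few deg/s while the central estimate remains several times larger, illustrating in what sense the network "stores" velocity.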

Among the most important computations implemented by the VO neuron network is the resolution of an ambiguity in interpreting sensory otolith signals. Below we summarize evidence suggesting that VO cells within the VN, rostral fastigial nucleus of the cerebellum (rFN) and nodulus/uvula of the caudal cerebellar vermis (NU, lobules IX and X) implement an internal model of the solution to a fundamental physical law necessary to resolve this sensory ambiguity. We start with a brief description of the problem.

The tilt/translation ambiguity

The ambiguity arises because: (1) we move within a gravitational environment; (2) the otolith organs, like any other linear accelerometer, transduce both inertial (translational, t) and gravitational (g) accelerations, thereby providing information about net acceleration (a = t − g; Einstein’s equivalence principle; Einstein 1908). Thus, changes in the firing rate of otolith afferents are ambiguous in terms of the type of motion they encode: they could reflect either translation or a head reorientation relative to gravity (i.e., tilt) or combinations of these motions.

A key difference between translation and tilt is that as the head is reoriented relative to gravity it is also simultaneously rotated. Thus, tilts typically activate rotational sensors (e.g., the canals). In contrast, the semicircular canals are not stimulated during pure translation. In theory, therefore, the ambiguity can be resolved by combining otolith signals with estimates of head rotation (e.g., from semicircular canal, visual and/or proprioceptive cues). In recent years, a number of theoretical and behavioral studies have illustrated that rotational cues can be used to explicitly separate the net gravito-inertial acceleration signal sensed by the otoliths into central estimates of gravity and translational acceleration (Merfeld et al. 1993a, 1999; Merfeld 1995; Merfeld and Young 1995; Glasauer and Merfeld 1997; Angelaki et al. 1999; Bos and Bles 2002; Merfeld and Zupan 2002; Zupan et al. 2002; Green and Angelaki 2003, 2004; MacNeilage et al. 2007).

Theoretically, the way that rotational cues should be combined with net acceleration signals to resolve the sensory ambiguity is described by the following equation:

t = a + g, where g = − ∫ ω × g dt      (1)

Equation 1 states that to estimate translation, t, the otolith net acceleration signal, a, must be combined with an independent estimate of head tilt (g = − ∫ω × g dt) computed from an extra-otolith rotation estimate, ω. The ∫ω × g term (where ∫ is an integration and × is a vector cross-product) describes the computations that take into account an initial estimate of head orientation (initial g state from static otolith and/or proprioceptive cues) to transform a head-referenced angular velocity signal, ω (e.g., from the canals) into an updated estimate of dynamic tilt relative to gravity, g.
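
A minimal numerical sketch of Eq. 1 (our illustration, using simple Euler integration and arbitrary motion profiles) shows how canal signals disambiguate the otolith signal: with the rotation estimate available, the internal model recovers translation for both pure tilt and pure translation, whereas ignoring the canal input causes a pure tilt to be misread as translation:

```python
import math

G = 9.81   # gravitational acceleration, m/s^2
dt = 0.001
T = 2.0

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def peak_translation_error(omega_fn, accel_fn, use_canals=True):
    """Simulate Eq. 1 in head coordinates and return the peak error of the
    translation estimate t_hat = a + g_hat, with g_hat = -integral(omega x g_hat)."""
    g_true = [0.0, 0.0, -G]      # gravity in head coordinates, starting upright
    g_hat = [0.0, 0.0, -G]       # internal estimate, same initial condition
    err = 0.0
    for i in range(int(T / dt)):
        t_now = i * dt
        w = omega_fn(t_now)                  # canal signal (head-frame omega)
        t_true = accel_fn(t_now)             # true translational acceleration
        a = [t_true[k] - g_true[k] for k in range(3)]   # otolith: a = t - g
        t_hat = [a[k] + g_hat[k] for k in range(3)]     # Eq. 1: t = a + g
        err = max(err, max(abs(t_hat[k] - t_true[k]) for k in range(3)))
        # propagate true and estimated gravity: dg/dt = -(omega x g)
        cg = cross(w, g_true)
        ch = cross(w if use_canals else [0.0, 0.0, 0.0], g_hat)
        g_true = [g_true[k] - dt * cg[k] for k in range(3)]
        g_hat = [g_hat[k] - dt * ch[k] for k in range(3)]
    return err

roll = lambda t: [0.3 * math.sin(2 * math.pi * t), 0.0, 0.0]   # rad/s
still = lambda t: [0.0, 0.0, 0.0]
sway = lambda t: [0.0, 2.0 * math.sin(2 * math.pi * t), 0.0]   # m/s^2

tilt_err = peak_translation_error(roll, still)              # pure tilt
trans_err = peak_translation_error(still, sway)             # pure translation
no_canal_err = peak_translation_error(roll, still, False)   # tilt, canals ignored
print(tilt_err, trans_err, no_canal_err)
```

In the first two cases the translation estimate tracks the true translation (zero and a 2 m/s² sinusoid, respectively); in the third, the frozen gravity estimate causes a spurious translation signal of almost 1 m/s².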

Experimental support for a role for rotational signals in estimating translation (as predicted by Eq. 1) was provided in a series of elegant human and monkey behavioral studies. Merfeld and colleagues (Merfeld et al. 1999, 2001; Zupan et al. 2000) reasoned that if canal signals are inaccurate they would give rise to an inaccurate estimate of gravity (i.e., tilt) and consequently an inaccurate estimate of translation (i.e., an incorrect central estimate of g = − ∫ω × g in Eq. 1 results in an incorrect estimate of t). They then took advantage of the fact that the canals provide an inaccurate estimate of angular velocity at low frequencies to reveal a systematic pattern of “erroneous” ocular responses in humans consistent with the hypothesis that canal signals had contributed to an internal, albeit incorrect, estimate of translational motion (Merfeld et al. 1999; Zupan et al. 2000).

At about the same time, Angelaki and colleagues (Angelaki et al. 1999; Green and Angelaki 2003) used combinations of tilt and translation stimuli (e.g., Fig. 5a, top) at higher frequencies (>0.1 Hz), where canal estimates of angular velocity are accurate, to demonstrate that signals from the semicircular canals directly contribute to the generation of the TVOR in monkeys. Similar types of stimuli have subsequently been used to show that canal signals contribute to tilt/translation discrimination in human perceptual responses (Merfeld et al. 2005a, b) as well as to tilt perception in monkeys (Lewis et al. 2008). An exception to this finding is the human TVOR, where tilts and translations are not ideally distinguished. Instead, the human TVOR appears to rely predominantly on an alternative, but non-ideal, “filtering” strategy in which higher-frequency otolith stimuli are interpreted as translations while low-frequency stimuli are interpreted as tilts (Mayne 1974; Paige and Tomko 1991a; Merfeld et al. 2005a, b). More generally, a combination of “filtering” and “otolith–canal” convergence strategies is likely to be used to varying extents. In addition, contemporary theories based on Bayesian inference suggest that experimental findings consistent with the predictions of both strategies may be obtained using a zero inertial acceleration prior (i.e., it is more likely that we are stationary rather than moving; Laurens and Droulez 2007; MacNeilage et al. 2007; for a review, see Angelaki et al. 2010).
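
The "filtering" strategy can itself be sketched as a complementary low-pass/high-pass split of the otolith-like signal (the crossover frequency below is illustrative and not a model of the actual human TVOR): low-frequency content is assigned to tilt, high-frequency content to translation:

```python
import math

fc = 0.3                        # crossover frequency in Hz (illustrative)
tau = 1.0 / (2.0 * math.pi * fc)
dt = 0.001

def split(freq, amp=1.0, T=40.0):
    """Complementary filtering of an otolith-like sinusoid:
    low-pass output -> tilt interpretation, residual -> translation."""
    lp = 0.0
    tilt_peak = trans_peak = 0.0
    for i in range(int(T / dt)):
        a = amp * math.sin(2 * math.pi * freq * i * dt)
        lp += dt * (a - lp) / tau       # low-pass = tilt estimate
        hp = a - lp                     # complement = translation estimate
        if i * dt > T / 2:              # measure after transients settle
            tilt_peak = max(tilt_peak, abs(lp))
            trans_peak = max(trans_peak, abs(hp))
    return tilt_peak, trans_peak

lo = split(0.02)   # low-frequency stimulus: read out mostly as tilt
hi = split(2.0)    # high-frequency stimulus: read out mostly as translation
print(lo, hi)
```

The low-frequency stimulus emerges almost entirely in the tilt channel and the high-frequency stimulus in the translation channel, which is exactly the non-ideal behavior the filtering hypothesis predicts for stimuli that violate its frequency assumption.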

Fig. 5
Evidence for a neural resolution to the tilt/translation ambiguity. Responses of an otolith afferent (a) and a mainly translation-coding rostral VN neuron (b) recorded during four tilt/translation stimulus combinations in darkness. Modified and replotted ...

Importantly, however, under conditions where tilts and translations are appropriately distinguished behavioral studies have confirmed that (1) semicircular canal signals play a critical role in the estimation of translational self-motion and (2) the dynamic processing of canal-derived rotational signals is consistent with the integration implied by Eq. 1 (Green and Angelaki 2003). As will be shown next, the otolith-canal convergence necessary to implement Eq. 1 takes place on VO cells within brainstem–cerebellar circuits that involve the VN, rFN, and NU (Angelaki et al. 2004; Green et al. 2005; Shaikh et al. 2005; Yakusheva et al. 2007).

Neural correlates of the internal model that resolves the tilt/translation ambiguity

To investigate how and where neurons combine canal and otolith signals to distinguish tilts and translations, neural responses were recorded during four stimuli: translation, roll tilt, and simultaneous combinations of the two motions in which translational and gravitational accelerations either summed (Tilt + Translation stimulus) or canceled one another out (Tilt − Translation stimulus; Fig. 5a, top). Unlike the responses of otolith afferents, which encode the net linear acceleration (Fig. 5a), many neurons in the VN and rFN modulated strongly during translation, and only marginally during tilt, as illustrated with the example VN cell in Fig. 5b. Note, in particular, that this VN neuron responded during Tilt − Translation motion, even though the dynamic linear acceleration stimulus to the otoliths was close to zero (see the lack of modulation of the otolith afferent, Fig. 5a). Thus, the robust response of central neurons to Tilt − Translation motion nicely reveals the underlying semicircular canal contribution to constructing an estimate of translation (Angelaki et al. 2004; Green et al. 2005; Shaikh et al. 2005; Yakusheva et al. 2007). Indeed, neural responses during Tilt − Translation disappeared after the semicircular canals were inactivated by plugging (Shaikh et al. 2005; Yakusheva et al. 2007).

The extent to which individual neurons in the VN, rFN, and NU reflected a neural coding of translation versus net acceleration is summarized in Fig. 5c, which compares the normalized correlation coefficients of the fits of each model to cell responses to the stimuli in Fig. 5a,b. Data points falling in the upper left quadrant represent neurons that were significantly more translation-coding. Cells in the lower right quadrant were significantly more net acceleration-coding. Whereas VN and rFN neurons spanned the whole range from translation to net acceleration coding, all NU Purkinje cell responses correlated best with translation (i.e., the output of an internal model of the solution to Eq. 1). Perhaps most importantly, quantitative analyses showed that for most neurons (including those that did not explicitly encode translation) the otolith and canal-derived signals converging onto each cell (terms a and − ∫ω × g, respectively) were spatially and temporally aligned, as necessary to implement an internal model of the solution to Eq. 1 (Angelaki et al. 2004; Green et al. 2005; Shaikh et al. 2005; Yakusheva et al. 2007).
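
The model-comparison logic of Fig. 5c can be illustrated with synthetic data (a hypothetical cell with an arbitrary mixing weight and noise level; not the actual analysis pipeline of the cited studies). A simulated response across the four stimulus conditions is correlated against a pure-translation profile and a net-acceleration profile:

```python
import math, random

def pearson(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

f, A, dt = 0.5, 1.0, 0.01
times = [i * dt for i in range(int(4.0 / dt))]   # two cycles per condition

# per-condition (translation, gravity) amplitudes:
# translation, tilt, Tilt + Translation, Tilt - Translation
conditions = [(A, 0.0), (0.0, A), (A, A), (A, -A)]
translation, net_accel = [], []
for ta, ga in conditions:
    for t in times:
        s = math.sin(2 * math.pi * f * t)
        translation.append(ta * s)
        net_accel.append((ta + ga) * s)   # otolith signal sums both components

random.seed(1)
alpha = 0.9   # synthetic cell: mostly translation-coding (assumed weight)
response = [alpha * tr + (1 - alpha) * na + random.gauss(0, 0.05)
            for tr, na in zip(translation, net_accel)]

r_trans = pearson(response, translation)
r_net = pearson(response, net_accel)
print(round(r_trans, 2), round(r_net, 2))   # translation model fits better
```

A cell like this one would fall in the upper left quadrant of Fig. 5c; reversing the mixing weight would move it to the lower right.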

Thus, in summary, brainstem and cerebellar neurons were shown to carry the appropriate signals to distinguish translation in the horizontal plane from small tilts from an upright orientation and to explicitly construct a central representation of translation. Under these conditions (i.e., translation and small tilts from upright), otolith signals are combined with both spatially matched and dynamically transformed (i.e., temporally integrated) canal signals to resolve the tilt/translation ambiguity (Green et al. 2005). However, the head is not always upright. As described next, the specific way that otolith and canal signals must combine to resolve the sensory ambiguity problem in 3D depends critically on head orientation.

Evidence for a reference frame transformation of rotational signals

Let us return to Eq. 1, which shows that the component of acceleration due to head reorientation relative to gravity must first be computed using rotational signals (i.e., the term − ∫ω × g). As emphasized in Fig. 6a, because the canals are fixed in the head whereas the gravity vector is fixed in space, different sets of canals signal a reorientation relative to gravity when the head is upright (i.e., vertical canals), as compared to when the head is pitched forward or backward (i.e., horizontal canals). Thus, in general, the way that otolith and canal signals must combine to distinguish tilts and translations is head-orientation-dependent (Green and Angelaki 2004, 2007; Green et al. 2005). This is exactly what the vector cross-product term g = − ∫ω × g in Eq. 1 implies: the brain must combine head-centered rotational information, ω, nonlinearly (multiplicatively) with a current estimate of head orientation, g, to compute a new updated tilt estimate.

Fig. 6
A reference frame transformation of canal signals is required to discriminate tilts and translations in 3D. a Only rotations about an earth-horizontal axis (but not an earth-vertical axis) reorient the head relative to gravity (e.g., thick green and red ...

For small rotations from different static head orientations, this computation can be thought of as approximately equivalent to transforming a head-centered representation of angular velocity (e.g., from the canals) into a world-centered representation of the earth-horizontal rotation component (Green and Angelaki 2004, 2007; Green et al. 2005; Yakusheva et al. 2007). Specifically, as illustrated schematically in Fig. 6b, the rotation component about the earth-horizontal axis, ωEH, corresponds to the component of rotation that signals a change in head orientation with respect to gravity. Integration of this signal yields an estimate of dynamic tilt (gdyn ≈ ∫ωEH), which can then be combined with otolith signals to extract an estimate of translation, t. Accordingly, an important theoretical prediction for cells that encode the output of an internal model of Eq. 1 (i.e., the NU cells that encode translation) is that they should combine otolith signals with canal signals that have been transformed into a spatially-referenced signal (i.e., an estimate of ωEH). At present, this prediction indeed appears to hold for the simple spike responses of NU Purkinje cells, which exhibit a robust canal-derived ωEH signal during Tilt − Translation motion from an upright orientation (Fig. 6c) but do not respond to rotations about an earth-vertical axis (Fig. 6d; Yakusheva et al. 2007). That the responses of these neurons reflect the full vector cross-product computation of Eq. 1 required to estimate ωEH and compute translation in 3D remains to be explicitly shown by examining their responses across multiple head orientations.
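
For a static head orientation, the extraction of ωEH reduces to a vector projection, sketched below (our illustration; the sign and axis conventions are assumptions). The component of ω parallel to gravity is removed, so a rotation about an earth-vertical axis produces no tilt signal, whereas the same head-fixed rotation in a different head orientation does:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return [c / m for c in v]

def omega_eh(omega, g):
    """Earth-horizontal rotation component: subtract the part of omega
    parallel to gravity, since rotation about an earth-vertical axis
    does not reorient the head relative to gravity."""
    gu = normalize(g)
    k = sum(w * gi for w, gi in zip(omega, gu))
    return [w - k * gi for w, gi in zip(omega, gu)]

# Convention (assumed): with the head upright, gravity lies along head -z.
yaw = [0.0, 0.0, 1.0]                            # head-fixed yaw axis, rad/s
eh_upright = omega_eh(yaw, [0.0, 0.0, -9.81])    # yaw is earth-vertical here
eh_nose_down = omega_eh(yaw, [9.81, 0.0, 0.0])   # head pitched 90 deg (assumed
                                                 # gravity now along head +x)
print(eh_upright, eh_nose_down)
```

Upright, the head-fixed yaw rotation yields ωEH ≈ 0 (no tilt); with the head pitched 90°, the identical canal signal is entirely earth-horizontal and signals tilt, which is why the combination rule must be head-orientation-dependent.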

The types of context-dependent (in this case head-orientation-dependent) computations that are required to estimate inertial motion are similar to those required for many other sensorimotor problems, such as planning limb movements where the way muscles are activated for the same movement direction depends on starting limb posture (Buneo et al. 1997; Scott and Kalaska 1997; Scott et al. 1997; Sergio and Kalaska 2003; Buneo and Andersen 2006; Ajemian et al. 2008). A better understanding of how such computations take place within the VO cell network and the role of the cerebellum in this process is thus likely to be of broad general relevance for understanding the processing strategies employed across multiple sensorimotor systems. As will be shown next, VO cells also participate in another fundamental computation: that of distinguishing actively-generated from passively-applied head movements (see also review by Angelaki and Cullen 2008).

Actively- versus passively-generated movements: the concept of reafference

Until recently, the vestibular system had been exclusively studied in head-restrained animals, by moving the head and body together using an externally applied stimulus. As a result, our understanding of vestibular processing was limited to the neuronal encoding of vestibular exafference (i.e., vestibular signals arising from motions applied by the external environment). More recently, investigators in the field have compared neural responses during self-generated head movements to those during more traditional “passive” vestibular stimulation (McCrea et al. 1999; Roy and Cullen 2001). While vestibular afferents reliably encode head motion during active movements (Cullen and Minor 2002; Sadeghi et al. 2007; Jamali et al. 2009), neural responses in the VN can be dramatically attenuated (Fig. 7, compare a and b; see also Boyle et al. 1996; McCrea et al. 1999; Roy and Cullen 2001). What is even more striking is that these same vestibular neurons continue to selectively respond to passively applied head motion when a monkey generates active head-on-body movements (Fig. 7c; Roy and Cullen 2001; Cullen and Roy 2004). Furthermore, cognitive signals appear to play no role, as neural responses are not attenuated when the monkey uses a steering wheel to drive its own passive whole-body rotation (Roy and Cullen 2001). This selective suppression of self-generated vestibular activity during active head movements is specific to the class of VO neurons found in the VN and rFN regions that are interconnected with the NU (Cullen and Roy 2004). Notably, these are the same areas involved in computing inertial motion (described above), although it remains to be determined whether the same neurons that extract such estimates also show a selective suppression of activity during active head movements.

Fig. 7
Neurons in the vestibular nuclei distinguish between sensory inputs that result from our own actions versus from externally applied motion. Responses of a VO neuron (gray filled traces) in the VN during a passive head movements (whole-body rotation); ...

These findings are of particular importance for understanding how the brain differentiates between sensory inputs that arise from changes in the world and those that result from our own voluntary actions. As pointed out by von Helmholtz (1925), this dilemma is notably experienced during eye movements: although targets rapidly jump across the retina as we move our eyes to make saccades, we never see the world move over our retina. Yet, tapping on the canthus of the eye to displace the retinal image (as during a saccadic eye movement) results in an illusory shift of the visual world.

The concept of internal models, outlined in previous sections, is ultimately tied to the dilemma of distinguishing sensory inputs that arise from external sources from those that result from self-generated movements. To address this problem von Holst and Mittelstaedt (1950) proposed the “Principle of Reafference”, where a copy of the expected sensory results of a motor command is subtracted from the actual sensory signal, thereby eliminating the portion of the sensory signal resulting from the motor command (termed “reafference”) to create a perception of the outside world (termed “exafference”). An internal estimate of the reafferent signal can be derived by processing a motor efference copy signal via an internal model of the motor system to create an internal prediction of the sensory consequences of that motor command (Wolpert et al. 1995; Decety 1996; Farrer et al. 2003; Fig. 8a).
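
At its core, the Principle of Reafference is a subtraction, sketched below in Python (a unity-gain forward model is used for illustration; a real forward model would capture the dynamics of the head plant):

```python
def forward_model(command_velocity):
    """Predicted reafferent head velocity for a neck motor command.
    (Unity gain for illustration; a real forward model captures the
    dynamics of the motor plant, not just a static gain.)"""
    return 1.0 * command_velocity

def central_vestibular_output(afferent, command_velocity):
    # Principle of Reafference: the actual sensory signal minus the
    # internally predicted sensory consequence of the motor command
    return afferent - forward_model(command_velocity)

# purely active 50 deg/s head turn: the reafference is cancelled
active_out = central_vestibular_output(afferent=50.0, command_velocity=50.0)
# the same active turn plus a passive 30 deg/s platform rotation:
# only the externally applied (exafferent) component survives
combined_out = central_vestibular_output(afferent=80.0, command_velocity=50.0)
print(active_out, combined_out)   # 0.0 30.0
```

The subtraction leaves the exafferent 30 deg/s intact while nulling the self-generated 50 deg/s, mirroring the selective responses of VO neurons in Fig. 7.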

Fig. 8
An internal model of the sensory consequences of active head motion is used to selectively suppress reafferent activity in the VN. a Schematic to explain how vestibular sensitivity to active head movements could be selectively attenuated. During an active ...

Recently, a series of elegant experiments by Cullen and colleagues (Roy and Cullen 2004) has shown that such a mechanism underlies the selective elimination of sensitivity to active head movement. In principle, either neck proprioceptive signals or an efference copy of the neck motor command might be responsible. However, in the rhesus monkey, passive activation of neck proprioceptors did not significantly alter VN neural sensitivities to head rotation (Fig. 7d; Roy and Cullen 2001, 2004). Similarly, when head-restrained monkeys were encouraged to attempt to move their heads, even though they produced the motor commands to generate head torques comparable to those generated during large gaze shifts (i.e., when the head actually does move), this had no effect on neural responses (Roy and Cullen 2004). Thus, neither neck motor efference copy nor proprioceptive cues alone were sufficient to account for the elimination of neuronal sensitivity to active as compared to passive head rotation (i.e., compare Fig. 8b and c). Instead, by experimentally controlling the correspondence between intended and actual head movement (Fig. 8d; see legend for details), Roy and Cullen (2004) showed that a “cancellation signal” is generated only when the activation of neck proprioceptors matches the motor-generated expectation (Fig. 8a). In agreement with von Holst and Mittelstaedt’s (1950) original hypothesis, an internal model of the sensory consequences of active head motion is used to selectively suppress reafference at the level of the vestibular nuclei.
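
The key refinement is that the subtraction is gated: a cancellation signal is produced only when proprioceptive feedback matches the motor-based prediction. A toy sketch (scalar signals and a fixed tolerance are our illustrative assumptions; the experiments do not specify a simple threshold):

```python
def cancellation_signal(predicted_neck_feedback, actual_neck_feedback,
                        predicted_reafference, tolerance=5.0):
    """Reafference cancellation gated on the proprioceptive match
    (illustrative fixed tolerance; real gating is unlikely to be a
    simple scalar threshold)."""
    if abs(predicted_neck_feedback - actual_neck_feedback) <= tolerance:
        return predicted_reafference   # intended movement occurred: cancel
    return 0.0                         # mismatch: no cancellation signal

# active head-on-body turn: neck feedback matches the motor-based
# prediction, so the 60 deg/s vestibular reafference is suppressed
active = 60.0 - cancellation_signal(60.0, 58.0, 60.0)
# passive whole-body rotation: no neck command and no neck feedback,
# so nothing is cancelled and the response to passive motion is preserved
passive = 60.0 - cancellation_signal(0.0, 0.0, 0.0)
print(active, passive)   # 0.0 60.0
```

The gate reproduces the qualitative pattern of Fig. 8: suppression during normal active movement, full responses during passive motion, and (because the gate closes on a mismatch) restored responses when intended and actual movement disagree.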

The finding that vestibular reafference is suppressed early in sensory processing has clear analogies with other sensory systems, most notably the electrosensory system of the mormyrid fish: cerebellum-like electrosensory lobes provide the signal that is used to cancel the sensory response to self-generated stimulation (Bell 1981; Mohr et al. 2003; Sawtell et al. 2007; Bell et al. 2008). Identifying the neural representations of the cancellation signal for vestibular reafference promises to be an interesting area of investigation and the cerebellum is a likely site (see Cullen and Roy 2004 and Angelaki and Cullen 2008). Next we describe why vestibular/proprioceptive integration is also required to compute the motion of the body.

Vestibular contribution to the computation of body motion

Although vestibular sensory cues are sufficient to estimate head motion and orientation, the ability to perform daily tasks, such as estimating our heading direction during locomotion and executing appropriate postural responses, requires knowledge of the orientation and motion of the body. In conjunction with proprioceptive signals, vestibular cues are known to contribute to such body motion estimates (Mergner et al. 1981, 1991; Blouin et al. 2007). To use vestibular sensory information to help estimate body motion, the brain is faced with two computational tasks (Fig. 9). The first (Fig. 9; “reference frame transformation”) arises because our vestibular sensors are fixed in the head. As a result, the way in which individual sensors are stimulated as the body moves depends critically on how the head is statically oriented with respect to the body. For example, during forward locomotion with the head also facing forward, the otoliths are stimulated along the axis between the nose and the back of the head (naso-occipital axis; Fig. 9, top inset, center panel). However, the same body motion with the head turned far to the left or to the right stimulates the otoliths mainly along the axis between the ears (i.e., interaural axis; Fig. 9, top inset, left and right panels). The problem is thus similar to that of using canal signals to estimate head tilt across different head orientations with respect to gravity. To correctly interpret the relationship between the pattern of sensory vestibular activation and actual motion, vestibular signals must undergo a reference frame transformation. In the case of estimating body motion, the transformation is from a head-centered to a body-centered reference frame. Such a computation requires a nonlinear interaction between dynamic vestibular estimates of head motion and neck proprioceptive estimates of static head orientation with respect to the body.
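
For the horizontal plane, this first computational task reduces to rotating the head-centered acceleration vector by the static head-on-body yaw angle (a sketch with assumed axis conventions; our illustration of the required transformation, not a model of the neural circuit):

```python
import math

def head_to_body(accel_head, head_on_body_yaw_deg):
    """Rotate a head-centered horizontal-plane acceleration into body
    coordinates given the static head-on-body yaw angle.
    Conventions (assumed): x = forward, y = leftward, positive yaw = head
    turned to the left relative to the body."""
    th = math.radians(head_on_body_yaw_deg)
    ax, ay = accel_head   # x: naso-occipital axis, y: interaural axis
    return (ax * math.cos(th) - ay * math.sin(th),
            ax * math.sin(th) + ay * math.cos(th))

# With the head turned 90 deg to the left, forward body motion is sensed
# along the head's interaural axis (toward the head's right: a_y = -1)
bx, by = head_to_body((0.0, -1.0), 90.0)
print(bx, by)   # ~(1.0, 0.0): recovered as forward motion in body coordinates
```

The same interaural otolith activation would signal sideways motion with the head facing forward; only the combination with static neck information disambiguates the two.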

Fig. 9
Computations to estimate body motion. Conceptually, this requires two computational tasks. The first (left, “reference frame transformation”) transforms head-centered vestibular estimates of motion into a body-centered reference frame. ...

Recently, neurons whose responses are consistent with such a transformation have been identified in the rFN. Specifically, Shaikh et al. (2004) dissociated head and body motions by examining neural responses in the rFN and VN when a monkey was translated in different horizontal-plane directions with the head fixed at different static positions relative to the trunk. Cells that encode motion in a body-centered reference frame should respond preferentially to a given direction of body motion independently of head orientation. In contrast, if a cell encodes motion in a head-centered reference frame, its preferred movement direction with respect to the body should systematically shift as the head is reoriented, to maintain alignment with a particular axis in head coordinates. Most neurons in the rostral VN demonstrated responses consistent with the shift expected for a head-centered reference frame. In contrast, most rFN neurons also showed a shift, but through a smaller angle than that of the head. As a result, their responses typically reflected encoding of motion in a frame intermediate between head- and body-centered.

Similar observations were made by Kleine et al. (2004) when body and head reference frames were dissociated during rotation by considering pitch and roll rotations for different static horizontal-plane head positions relative to the trunk. Responses were not consistent with encoding of motion in a head-centered reference frame but rather one that was closer to body-centered. These observations suggest a potential role for the rFN in transforming vestibular signals into the appropriate reference frame for estimating body motion. This is compatible with the fact that the rFN represents a major target for projections from the anterior vermis (Voogd and Glickstein 1998) which has been implicated in vestibular-proprioceptive interactions for limb and postural control (Manzoni et al. 1997, 1999; Bruschini et al. 2006).

Importantly, while the finding that vestibular signals in the rFN have been at least partially transformed into body-centered coordinates is consistent with the hypothesis that they are used to estimate body motion, it does not prove that body motion is what they encode. Estimating body motion requires a second computational step: motions of the body must be distinguished from motions of the head with respect to the body (Fig. 9; “Body motion computation”). In particular, whereas the vestibular sensors will be stimulated in a similar fashion regardless of whether the head moves alone or the head and body move in tandem, to estimate body motion the two must be distinguished. The latter computation requires the integration of vestibular signals with dynamic neck proprioceptive inputs.

Despite an early convergence of vestibular and proprioceptive signals in the vestibular nuclei (Boyle and Pompeiano 1981; Kasper et al. 1988; Wilson et al. 1990; Gdowski and McCrea 2000), an explicit neural correlate for body (i.e., trunk) motion has been difficult to identify. For example, during passive movements VN neurons in rhesus monkeys encode motion of the head rather than the body (Roy and Cullen 2001, 2004), although there is evidence for a more mixed representation in squirrel monkeys (Gdowski and McCrea 2000). However, an elegant recent study by Brooks and Cullen (2009) showed that a neural correlate for body motion indeed exists in the macaque rFN. In particular, they showed that approximately half of rFN neurons responded robustly either to vestibular stimulation alone when the head and body were moved in tandem (i.e., whole-body rotation; Fig. 10a) or to neck proprioceptive stimulation alone when the body was passively moved beneath the head (Fig. 10b). In contrast, when the head was passively moved relative to the stationary body, proprioceptive and vestibular signals combined to cancel one another out (Fig. 10c). Thus, these neurons specifically encoded body motion. Importantly, the authors also showed that neural sensitivities to neck proprioceptive stimulation during body-under-head rotation varied as a function of static head orientation with respect to the body (Fig. 10d). This modulation in sensitivity was closely matched by similar head-position-dependent changes in sensitivity to vestibular stimulation during whole-body rotation (Fig. 10e). As a result, it was shown that vestibular and proprioceptive signals are not simply summed linearly to estimate body motion. Rather, the brain takes into account the specific nonlinear processing of vestibular signals required to match them to proprioceptive signals across head positions and compute accurate estimates of body motion (Fig. 10f).
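
The logic of these three conditions can be summarized in a toy computation (scalar signals and a free gain parameter are our simplifications; the measured sensitivity changes are head-position-dependent and nonlinear, not a single scalar):

```python
def body_velocity(head_in_space, head_on_body, head_position_gain=1.0):
    """Body-in-space velocity from a vestibular estimate of head-in-space
    velocity and a neck proprioceptive estimate of head-on-body velocity.
    head_position_gain is a placeholder for the head-orientation-dependent
    sensitivity matching reported by Brooks and Cullen (2009)."""
    # body-in-space = head-in-space - head-on-body; with matched gains the
    # two inputs cancel exactly when only the head moves on the body
    return head_position_gain * (head_in_space - head_on_body)

whole_body = body_velocity(40.0, 0.0)        # head and body rotated together
body_under_head = body_velocity(0.0, -40.0)  # body rotated beneath the head
head_alone = body_velocity(40.0, 40.0)       # head moved on stationary body
print(whole_body, body_under_head, head_alone)   # 40.0 40.0 0.0
```

The cancellation in the head-alone case only works if the vestibular and proprioceptive sensitivities are matched at every static head position, which is why the observed gain matching across head positions (Fig. 10d, e) is essential.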

Fig. 10
Evidence for coding of body motion in the rFN. Responses of a body-motion-encoding rFN neuron during a passive whole-body rotation that stimulated the semicircular canals, b passive body-under-head rotation that stimulated neck proprioceptors, and c passive ...

This latter observation (i.e., a dependence of yaw vestibular responses on static yaw head-re-body position) is of particular note because in the Brooks and Cullen (2009) study both the head and the body were always moved about a common axis (i.e., the body/head yaw axis). This experimental manipulation differs fundamentally from the reference frame studies of Shaikh et al. (2004) and Kleine et al. (2004), where the direction of motion was systematically varied and spatial tuning curves were constructed at different static head positions with respect to the body (i.e., thereby dissociating head and body reference frames; first computational step in Fig. 9, top inset). Instead, by considering body and head motions under conditions where the head/body axes of rotation were always coincident, the Brooks and Cullen study unmasked an additional stage of nonlinear processing (i.e., the head-position-dependent processing within the second computational step in Fig. 9; see bottom inset).

At present, the reason for this second nonlinear computational step in estimating body motion remains unknown. Indeed, if both vestibular and proprioceptive inputs provided head-position-independent estimates of rotation, such nonlinear processing would not be required to compute body motion when the head and body are rotated about a common axis. There is no experimental evidence to suggest that the way semicircular canal afferents encode head motion depends on head orientation with respect to the body. Thus, it is logical to speculate that the nonlinear processing in the second computational step arises because of a nonlinear proprioceptive encoding of body motion. This might be a result of changes in the relative lengths of different neck muscles as the head is reoriented relative to the body. Consequently, to distinguish head from body motion, vestibular signals also need to be processed to encode motion in the same head-orientation-dependent way as neck proprioceptors.

While, at present, this interpretation remains speculative it can again be related to the concept of internal models. Specifically, to combine multisensory signals that encode similar information in different ways, the brain must effectively implement the computations necessary to “match the codes up”. In the case of body-motion-encoding rFN cells, this might be accomplished by processing vestibular signals using an internal model of the way neck proprioceptors encode body motion (Fig. 9; bottom inset). This internal model might also be thought of as implementing a further transformation of body-centered vestibular motion estimates into a neck muscle-centered reference frame.

More generally, when the axes of body and head motion are different, a head-to-body (or muscle)-centered reference frame transformation of vestibular signals is required to match vestibular and proprioceptive motion codes before combining the two to estimate body motion (Fig. 9). Future work will be required to establish whether the neurons that show evidence for a body-centered representation of vestibular signals are the same neurons that encode body motion and whether in effect the two sets of computations occur simultaneously within a common population of neurons (i.e., as opposed to the distinct stages suggested in Fig. 9). Again, the cerebellar cortex (either anterior vermis or NU) represents a likely site. Furthermore, because at least some rFN neurons encode inertial self-motion (i.e., they encode translation as opposed to tilt; Angelaki et al. 2004) and distinguish passive from active movements (Brooks and Cullen 2007), it will be important to address the extent to which these populations overlap with those encoding body motion. Ultimately such investigations promise to yield new insights into how multisensory signals are integrated and processed by the CNS to create consistent motion representations for different behavioral and perceptual purposes.


A fundamental goal of systems neuroscience is to elucidate the strategies by which sensory signals are transformed into central representations that give rise to behavior, and how behavior in turn influences the interpretation of sensory information. Over the years, the vestibular system has served as an excellent model framework for investigating the neural correlates for such transformations. Among the earliest theoretical concepts promoted by studies of the vestibular system was the need for processing of sensory signals by an internal model of the dynamics of the motor effector—the eye plant. Since that time, studies of the vestibular system have continued to provide new insights into increasingly complex, and often nonlinear, computations involved in combining multisensory signals to create different motion representations that may serve a variety of motor and perceptual purposes. Here, we have summarized recent advances in elucidating the neural correlates for four computational problems: the sensorimotor transformations for reflex generation, the resolution of a sensory ambiguity for inertial motion estimation, the ability to distinguish active from passive movements, and the integration of vestibular and proprioceptive signals for body motion estimation. Each relates to the concept of the “internal model”, which has become popular in recent years as a means of describing particular classes of neural computations (e.g., representation of the dynamics of a sensor or effector) common to multiple sensorimotor systems. Understanding whether, how, and where such models are implemented is thus of great importance for understanding sensorimotor processing, and the vestibular system has provided an excellent experimental model in which to do so.

Perhaps the most widely accepted use of the “internal model” concept is in motor control (e.g., complementary forward and inverse models of the sensors and motor actuators; Wolpert and Kawato 1998). Implicit in these theories are the notions that: (1) motor commands are computed by processing sensory or behavioral goal-directed information via an inverse model of the effector to be controlled; (2) a common inverse model should be shared by all sensorimotor systems that drive the same effector; (3) there should exist populations of neurons at the output of such a model that encode an efference copy of the motor command; and (4) the efference copy should be conveyed to a forward model of the effector to generate a prediction of the consequences of that motor command, a signal that is critical for online refinement and updating of the motor command.
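Notions (1) and (4) can be made concrete with a minimal sketch. The first-order leaky-integrator “plant” below is a toy stand-in chosen only for illustration; its dynamics, parameters, and function names are assumptions, not the oculomotor system's actual equations.

```python
class Plant:
    """Toy first-order motor plant (leaky integrator); dt in seconds."""
    def __init__(self, tau=0.2, dt=0.01):
        self.tau, self.dt, self.x = tau, dt, 0.0

    def step(self, u):
        # Discrete Euler update of dx/dt = -x/tau + u.
        self.x += self.dt * (-self.x / self.tau + u)
        return self.x

def inverse_model(x_desired, x_now, tau=0.2, dt=0.01):
    # Notion (1): turn a desired state into a motor command by
    # inverting the assumed plant dynamics.
    return (x_desired - x_now) / dt + x_now / tau

def forward_model(x_now, u, tau=0.2, dt=0.01):
    # Notion (4): use an efference copy of u to predict the sensory
    # consequence of the command before feedback arrives.
    return x_now + dt * (-x_now / tau + u)
```

Because both internal models share the same description of the plant, the forward model's prediction matches the plant's actual next state; comparing that prediction with delayed sensory feedback is what supports online refinement of the command.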

These concepts, which have been particularly influential in the field of limb control, have nonetheless remained mostly conceptual, largely due to the difficulty in identifying neural correlates. As reviewed here, support for such an organization and indeed direct neurophysiological correlates for many of these general concepts have been provided by studying the sensorimotor processing in the vestibular system. Recent studies have further emphasized that, regardless of the sensory drive, particular groups of neurons encode consistent information about the current or predicted state of the effector. Importantly, dedicated populations of neurons have been shown to explicitly encode an efference copy of the oculomotor command (Green et al. 2007), a concept that in other sensorimotor systems has been proposed in computational models but remained largely unconfirmed at the neurophysiological level (but also see Sommer and Wurtz 2002, 2008). Furthermore, distinct populations of cells known to receive projections from the cerebellar flocculus carry signals more closely kinematically related to the actual eye movement than the motor command (Ghasia et al. 2008), thus providing preliminary evidence for a forward model in the cerebellum. The explicit existence of such a forward eye plant model in the cerebellum remains hypothetical at present and provides an excellent direction for future work.

Support for the implementation of internal models as a general theoretical concept has also been provided by several recent studies characterizing the properties of a particularly interesting class of brainstem–cerebellar vestibular neurons, the activity of which is not correlated with eye movements (VO cells). Among the important computations that they perform is to distinguish between sensory signals that result from our own actions (i.e., those that arise from self-generated behaviors, such as active voluntary movements) versus those arising from changes in the external world (e.g., passive perturbations applied by the environment). Evidence has been provided that the computations involve processing efference copies of neck motor commands by a forward model (this time of the “neck plant”), the output of which reflects the expected sensory consequences of those commands, and comparing them with actual sensory feedback from neck proprioceptors. When the prediction matches the sensory input, a signal appropriate to cancel vestibular reafference is generated (Cullen and Roy 2004; Roy and Cullen 2004). While the neural correlates for the proposed forward model and source of the “cancellation” signal remain to be explicitly identified, these observations provide strong neurophysiological support for well-established theoretical notions of sensorimotor system organization. These will undoubtedly help to guide future research in other areas, such as limb control where the more complicated multijoint nature of the plant itself and its varied interactions with the environment (e.g., support of different loads and use of different tools) introduce additional complexities in elucidating basic organizational principles and their neural correlates.
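The prediction-versus-feedback comparison just described can be sketched in a few lines. The linear “neck plant” gain and the exact cancellation rule below are illustrative assumptions standing in for the proposed (and still unidentified) forward model, not the measured circuit.

```python
def neck_forward_model(neck_command, gain=1.0):
    """Predicted proprioceptive feedback for an efference copy of a
    neck motor command (toy linear neck plant; the gain is assumed)."""
    return gain * neck_command

def vo_response(head_velocity, neck_command, proprio_feedback, tol=1e-6):
    # Compare the forward model's prediction with actual neck
    # proprioceptive feedback.
    predicted = neck_forward_model(neck_command)
    if abs(predicted - proprio_feedback) < tol:
        # Prediction matches: the head-on-body movement was
        # self-generated, so the expected reafference is cancelled.
        return head_velocity - neck_command
    # Mismatch: the movement was externally applied; respond fully.
    return head_velocity
```

In this caricature, an active head-on-body turn (command, feedback, and head velocity all in register) evokes no response, whereas the same head velocity applied passively drives the cell fully, qualitatively like the VO-cell behavior reviewed above.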

While internal models of the physical characteristics of a motor plant (or sensorimotor process) have been particularly influential in motor control theory, recent studies of the vestibular system have also emphasized the need for internal models to combine and transform multisensory signals into meaningful information about our interaction with the environment that can ultimately be used for both motor and perceptual purposes. One such example reviewed here is the implementation of an internal model of the computations required to resolve the “tilt/translation” ambiguity that arises in interpreting sensory signals from otolith afferents (Angelaki et al. 2004; Green et al. 2005).
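The computation referred to here has a compact standard form: otolith afferents sense only the net gravito-inertial acceleration f = g − a (using one common sign convention), so an internal estimate of gravity g, updated from canal rotation signals via dg/dt = −ω × g, allows translation to be recovered as a = g − f. The discrete-time sketch below uses simple Euler integration and noise-free signals, both simplifying assumptions.

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def estimate_translation(omega_seq, f_seq, g0, dt=0.001):
    """Track gravity with canal signals (dg/dt = -omega x g), then
    recover translation as a = g - f at each time step."""
    g = list(g0)
    accels = []
    for omega, f in zip(omega_seq, f_seq):
        c = cross(omega, g)
        g = [g[i] - dt * c[i] for i in range(3)]   # internal gravity estimate
        accels.append([g[i] - f[i] for i in range(3)])  # a = g - f
    return accels
```

For a stationary, upright head that then translates, the gravity estimate stays put (the canals report no rotation) and the otolith signal is attributed entirely to translation; during a canal-confirmed tilt, the gravity estimate rotates with the head and no translation is inferred.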

Similar considerations apply to the problem of distinguishing body and head motion. Whereas either vestibular or neck proprioceptive signals alone provide ambiguous information about whether the head, body or both are in motion, recent studies have shown that this problem can be resolved by combining vestibular and neck proprioceptive signals in a very specific fashion (Brooks and Cullen 2009; Kleine et al. 2004; Shaikh et al. 2004). A likely, although speculative, interpretation of recent findings is that to ensure that vestibular and proprioceptive signals combine correctly (i.e., the signals match up), vestibular signals must first be processed by an internal model of the nonlinear way that neck proprioceptors encode information about body motion with respect to the head.

In summary, studies of the vestibular system have played an influential and important role not only in identifying what transformations need to be performed to solve specific problems, but also in explicitly providing neurophysiological evidence for the necessary computations. In so doing, these studies have provided support for general concepts of sensorimotor organization (e.g., implementation of forward/inverse models, concept of reafference, and reference frame transformations) that are relevant for all sensorimotor systems. Importantly, the solid neurophysiological foundation for such concepts provides unique opportunities to further investigate critical details regarding strategies for their implementation and use. For example, what are the specific roles of particular brain areas (e.g., cerebellum) in implementing aspects of the required computations (e.g., forward model representations, nonlinear context-dependent processing)? How are internal model representations learned and modified, both over the long term and from moment to moment, depending on behavioral context? Theories of motor skill learning in the limb control system suggest that the learning process involves changes within neural populations that compute inverse and/or forward models of the motor effector and the environment (e.g., a tool) with which it interacts (Shadmehr 2004). Yet, because the neural correlates for such internal models remain poorly established, it has been difficult to provide explicit neural evidence for such theories and to confirm which models are modified under a particular set of conditions (e.g., forward and/or inverse models; model of the effector vs. representation of its interaction with a particular tool; Wolpert and Kawato 1998; Haruno et al. 2001; Cothros et al. 2006; Kluzik et al. 2008; Wagner and Smith 2008; but see Li et al. 2001; Padoa-Schioppa et al. 2002).
In contrast, because significant progress has been made in identifying both the neural correlates for internal models as well as those for motor learning in the vestibular system, this task now becomes tangible. Lessons learned by studying the neural processing of vestibular signals for the control of eye and head movements are thus likely to provide new insights into salient strategies for motor skill learning in the more complicated limb control system.

Similarly, the multisensory integration strategies and nonlinear context-dependent computations (e.g., those that depend on head orientation with respect to gravity or the body) required to resolve problems such as the tilt/translation ambiguity or the computation of body motion have broad relevance to a wide variety of problems, ranging from the sensorimotor processing that implements reference frame transformations (Salinas and Abbott 1995; Andersen 1997; Shaikh et al. 2004; Smith and Crawford 2005; Buneo and Andersen 2006; Batista et al. 2007; Green and Angelaki 2007; Yakusheva et al. 2007; Blohm et al. 2009) to the integration of multisensory signals to create meaningful representations of our environment (Driver and Noesselt 2008; Stein and Stanford 2008; Angelaki et al. 2009). The vestibular system represents a particularly good model system for studying the neural correlates of some of these more complex computations because of the solid framework, built on the foundations of control systems theory, for understanding much of the basic dynamic processing of sensory signals. Studies of the vestibular system will thus undoubtedly continue to provide important new insights into neural processing and computation in the brain.


Supported by NIH grants DC04260 and EY12814 and a chercheur boursier salary award from the Fonds de la recherche en santé du Québec (FRSQ).


Abbreviations

VOR: Vestibulo-ocular reflex
RVOR: Rotational vestibulo-ocular reflex
TVOR: Translational vestibulo-ocular reflex
VN: Vestibular nuclei
PH: Prepositus hypoglossi
rFN: Rostral fastigial deep cerebellar nuclei
NU: Nodulus and ventral uvula regions of the caudal cerebellar vermis
T/BT: “Tonic” and “burst-tonic” neurons in the PH and adjacent medial VN
PVP: “Position-vestibular-pause” VN cell type
EH: “Eye-head” VN cell type
VO: “Vestibular-only” VN cell type
FTN: “Floccular-target-neuron” VN cell type

Contributor Information

Andrea M. Green, Dépt. de Physiologie, Université de Montréal, 2960 Chemin de la Tour, Rm. 4141, Montreal, QC H3T 1J4, Canada.

Dora E. Angelaki, Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO, USA.


  • Ajemian R, Green A, Bullock D, Sergio L, Kalaska J, Grossberg S. Assessing the function of motor cortex: single-neuron models of how neural response is modulated by limb biomechanics. Neuron. 2008;58:414–428. [PubMed]
  • Allum JH, Honegger F. Interactions between vestibular and proprioceptive inputs triggering and modulating human balance-correcting responses differ across muscles. Exp Brain Res. 1998;121:478–494. [PubMed]
  • Andersen RA. Multimodal integration for the representation of space in the posterior parietal cortex. Philos Trans R Soc Lond B Biol Sci. 1997;352:1421–1428. [PMC free article] [PubMed]
  • Angelaki DE. Three-dimensional organization of otolith-ocular reflexes in rhesus monkeys. III. Responses to translation. J Neurophysiol. 1998;80:680–695. [PubMed]
  • Angelaki DE. Eyes on target: what neurons must do for the vestibuloocular reflex during linear motion. J Neurophysiol. 2004;92:20–35. [PubMed]
  • Angelaki DE, Cullen KE. Vestibular system: the many facets of a multimodal sense. Annu Rev Neurosci. 2008;31:125–150. [PubMed]
  • Angelaki DE, Hess BJ. Inertial representation of angular motion in the vestibular system of rhesus monkeys. I. Vestibuloocular reflex. J Neurophysiol. 1994;71:1222–1249. [PubMed]
  • Angelaki DE, Hess BJ. Inertial representation of angular motion in the vestibular system of rhesus monkeys. II. Otolith-controlled transformation that depends on an intact cerebellar nodulus. J Neurophysiol. 1995;73:1729–1751. [PubMed]
  • Angelaki DE, Hess BJ. Self-motion-induced eye movements: effects on visual acuity and navigation. Nat Rev Neurosci. 2005;6:966–976. [PubMed]
  • Angelaki DE, Hess BJ, Arai Y, Suzuki J. Adaptation of primate vestibuloocular reflex to altered peripheral vestibular inputs. I. Frequency-specific recovery of horizontal VOR after inactivation of the lateral semicircular canals. J Neurophysiol. 1996;76:2941–2953. [PubMed]
  • Angelaki DE, McHenry MQ, Dickman JD, Newlands SD, Hess BJ. Computation of inertial motion: neural strategies to resolve ambiguous otolith information. J Neurosci. 1999;19:316–327. [PubMed]
  • Angelaki DE, Green AM, Dickman JD. Differential sensorimotor processing of vestibulo-ocular signals during rotation and translation. J Neurosci. 2001;21:3968–3985. [PubMed]
  • Angelaki DE, Shaikh AG, Green AM, Dickman JD. Neurons compute internal models of the physical laws of motion. Nature. 2004;430:560–564. [PubMed]
  • Angelaki DE, Gu Y, DeAngelis GC. Multisensory integration: psychophysics, neurophysiology, and computation. Curr Opin Neurobiol. 2009;19(4):452–458. [PMC free article] [PubMed]
  • Angelaki DE, Klier EM, Snyder LH. A vestibular sensation: probabilistic approaches to spatial perception. Neuron. 2010 in press. [PMC free article] [PubMed]
  • Arnold DB, Robinson DA. A learning network model of the neural integrator of the oculomotor system. Biol Cybern. 1991;64:447–454. [PubMed]
  • Baker R, Berthoz A. Is the prepositus hypoglossi nucleus the source of another vestibulo-ocular pathway? Brain Res. 1975;86:121–127. [PubMed]
  • Balaban CD, Porter JD. Neuroanatomic substrates for vestibulo-autonomic interactions. J Vestib Res. 1998;8:7–16. [PubMed]
  • Barmack NH. Central vestibular system: vestibular nuclei and posterior cerebellum. Brain Res Bull. 2003;60:511–541. [PubMed]
  • Barmack NH, Shojaku H. Vestibular and visual climbing fiber signals evoked in the uvula-nodulus of the rabbit cerebellum by natural stimulation. J Neurophysiol. 1995;74:2573–2589. [PubMed]
  • Barnes GR. Visual-vestibular interaction in the control of head and eye movement: the role of visual feedback and predictive mechanisms. Prog Neurobiol. 1993;41:435–472. [PubMed]
  • Batista AP, Santhanam G, Yu BM, Ryu SI, Afshar A, Shenoy KV. Reference frames for reach planning in macaque dorsal premotor cortex. J Neurophysiol. 2007;98:966–983. [PubMed]
  • Belknap DB, McCrea RA. Anatomical connections of the prepositus and abducens nuclei in the squirrel monkey. J Comp Neurol. 1988;268:13–28. [PubMed]
  • Bell CC. An efference copy which is modified by reafferent input. Science. 1981;214:450–453. [PubMed]
  • Bell CC, Han V, Sawtell NB. Cerebellum-like structures and their implications for cerebellar function. Annu Rev Neurosci. 2008;31:1–24. [PubMed]
  • Bent LR, McFadyen BJ, Inglis JT. Vestibular contributions during human locomotor tasks. Exerc Sport Sci Rev. 2005;33:107–113. [PubMed]
  • Berthoz A, Israel I, Georges-Francois P, Grasso R, Tsuzuku T. Spatial memory of body linear displacement: what is being stored? Science. 1995;269:95–98. [PubMed]
  • Blazquez PM, Hirata Y, Highstein SM. The vestibulo-ocular reflex as a model system for motor learning: what is the role of the cerebellum? Cerebellum. 2004;3:188–192. [PubMed]
  • Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex. 2009;19:1372–1393. [PubMed]
  • Blouin J, Teasdale N, Mouchnino L. Vestibular signal processing in a subject with somatosensory deafferentation: the case of sitting posture. BMC Neurol. 2007;7:25. [PMC free article] [PubMed]
  • Bockisch CJ, Haslwanter T. Vestibular contribution to the planning of reach trajectories. Exp Brain Res. 2007;182:387–397. [PubMed]
  • Bos JE, Bles W. Theoretical considerations on canal-otolith interaction and an observer model. Biol Cybern. 2002;86:191–207. [PubMed]
  • Boyden ES, Katoh A, Raymond JL. Cerebellum-dependent learning: the role of multiple plasticity mechanisms. Annu Rev Neurosci. 2004;27:581–609. [PubMed]
  • Boyle R, Pompeiano O. Responses of vestibulospinal neurons to sinusoidal rotation of neck. J Neurophysiol. 1980;44:633–649. [PubMed]
  • Boyle R, Pompeiano O. Convergence and interaction of neck and macular vestibular inputs on vestibulospinal neurons. J Neurophysiol. 1981;45:852–868. [PubMed]
  • Boyle R, Buttner U, Markert G. Vestibular nuclei activity and eye movements in the alert monkey during sinusoidal optokinetic stimulation. Exp Brain Res. 1985;57:362–369. [PubMed]
  • Boyle R, Belton T, McCrea RA. Responses of identified vestibulospinal neurons to voluntary eye and head movements in the squirrel monkey. Ann N Y Acad Sci. 1996;781:244–263. [PubMed]
  • Brandt T, Schautzer F, Hamilton DA, Bruning R, Markowitsch HJ, Kalla R, Darlington C, Smith P, Strupp M. Vestibular loss causes hippocampal atrophy and impaired spatial memory in humans. Brain. 2005;128:2732–2741. [PubMed]
  • Bremmer F. Navigation in space—the role of the macaque ventral intraparietal area. J Physiol. 2005;566:29–35. [PubMed]
  • Bresciani JP, Gauthier GM, Vercher JL, Blouin J. On the nature of the vestibular control of arm-reaching movements during whole-body rotations. Exp Brain Res. 2005;164:431–441. [PubMed]
  • Brooks J, Cullen KE. Reference frames and reafference in the rostral fastigial nucleus. Soc Neurosci Abstr. 2007;33
  • Brooks JX, Cullen KE. Multimodal integration in rostral fastigial nucleus provides an estimate of body movement. J Neurosci. 2009;29:10499–10511. [PMC free article] [PubMed]
  • Bruschini L, Andre P, Pompeiano O, Manzoni D. Responses of Purkinje-cells of the cerebellar anterior vermis to stimulation of vestibular and somatosensory receptors. Neuroscience. 2006;142:235–245. [PubMed]
  • Bryan AS, Angelaki DE. Optokinetic and vestibular responsiveness in the macaque rostral vestibular and fastigial nuclei. J Neurophysiol. 2009;101:714–720. [PubMed]
  • Buchanan JJ, Horak FB. Vestibular loss disrupts control of head and trunk on a sinusoidally moving platform. J Vestib Res. 2001;11:371–389. [PubMed]
  • Buettner UW, Henn V, Young LR. Frequency response of the vestibulo-ocular reflex (VOR) in the monkey. Aviat Space Environ Med. 1981;52:73–77. [PubMed]
  • Buneo CA, Andersen RA. The posterior parietal cortex: sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia. 2006;44:2594–2606. [PubMed]
  • Buneo CA, Soechting JF, Flanders M. Postural dependence of muscle actions: implications for neural control. J Neurosci. 1997;17:2128–2142. [PubMed]
  • Buttner U, Fuchs AF, Markert-Schwab G, Buckmaster P. Fastigial nucleus activity in the alert monkey during slow eye and head movements. J Neurophysiol. 1991;65:1360–1371. [PubMed]
  • Buttner-Ennever JA, Horn AK, Schmidtke K. Cell groups of the medial longitudinal fasciculus and paramedian tracts. Rev Neurol (Paris) 1989;145:533–539. [PubMed]
  • Cannon SC, Robinson DA. An improved neural-network model for the neural integrator of the oculomotor system: more realistic neuron behavior. Biol Cybern. 1985;53:93–108. [PubMed]
  • Cannon SC, Robinson DA. Loss of the neural integrator of the oculomotor system from brain stem lesions in monkey. J Neurophysiol. 1987;57:1383–1409. [PubMed]
  • Cannon SC, Robinson DA, Shamma S. A proposed neural network for the integrator of the oculomotor system. Biol Cybern. 1983;49:127–136. [PubMed]
  • Cathers I, Day BL, Fitzpatrick RC. Otolith and canal reflexes in human standing. J Physiol. 2005;563:229–234. [PubMed]
  • Chen-Harris H, Joiner WM, Ethier V, Zee DS, Shadmehr R. Adaptive control of saccades via internal feedback. J Neurosci. 2008;28:2804–2813. [PMC free article] [PubMed]
  • Chen-Huang C, McCrea RA. Effects of viewing distance on the responses of horizontal canal-related secondary vestibular neurons during angular head rotation. J Neurophysiol. 1999a;81:2517–2537. [PubMed]
  • Chen-Huang C, McCrea RA. Effects of viewing distance on the responses of vestibular neurons to combined angular and linear vestibular stimulation. J Neurophysiol. 1999b;81:2538–2557. [PubMed]
  • Chubb MC, Fuchs AF, Scudder CA. Neuron activity in monkey vestibular nuclei during vertical vestibular stimulation and eye movements. J Neurophysiol. 1984;52:724–742. [PubMed]
  • Cohen B, Matsuo V, Raphan T. Quantitative analysis of the velocity characteristics of optokinetic nystagmus and optokinetic after-nystagmus. J Physiol. 1977;270:321–344. [PubMed]
  • Cohen B, Henn V, Raphan T, Dennett D. Velocity storage, nystagmus, and visual-vestibular interactions in humans. Ann N Y Acad Sci. 1981;374:421–433. [PubMed]
  • Cothros N, Wong JD, Gribble PL. Are there distinct neural representations of object and limb dynamics? Exp Brain Res. 2006;173:689–697. [PubMed]
  • Cova AC, Galiana HL. A bilateral model integrating vergence and the vestibulo-ocular reflex. Exp Brain Res. 1996;107:435–452. [PubMed]
  • Cullen KE, McCrea RA. Firing behavior of brain stem neurons during voluntary cancellation of the horizontal vestibuloocular reflex. I. Secondary vestibular neurons. J Neurophysiol. 1993;70:828–843. [PubMed]
  • Cullen KE, Minor LB. Semicircular canal afferents similarly encode active and passive head-on-body rotations: implications for the role of vestibular efference. J Neurosci. 2002;22:RC226. [PubMed]
  • Cullen KE, Roy JE. Signal processing in the vestibular system during active versus passive head movements. J Neurophysiol. 2004;91:1919–1933. [PubMed]
  • Cullen KE, Chen-Huang C, McCrea RA. Firing behavior of brain stem neurons during voluntary cancellation of the horizontal vestibuloocular reflex. II Eye movement related neurons. J Neurophysiol. 1993;70:844–856. [PubMed]
  • Curthoys IS, Halmagyi GM. Vestibular compensation: a review of the oculomotor, neural, and clinical consequences of unilateral vestibular loss. J Vestib Res. 1995;5:67–107. [PubMed]
  • Curthoys IS, Halmagyi GM, Dai MJ. The acute effects of unilateral vestibular neurectomy on sensory and motor tests of human otolithic function. Acta Otolaryngol Suppl. 1991;481:5–10. [PubMed]
  • Dai MJ, Raphan T, Cohen B. Spatial orientation of the vestibular system: dependence of optokinetic after-nystagmus on gravity. J Neurophysiol. 1991;66:1422–1439. [PubMed]
  • Day BL, Fitzpatrick RC. Virtual head rotation reveals a process of route reconstruction from human vestibular signals. J Physiol. 2005;567:591–597. [PubMed]
  • Decety J. Neural representations for action. Rev Neurosci. 1996;7:285–297. [PubMed]
  • Deliagina TG, Beloozerova IN, Zelenin PV, Orlovsky GN. Spinal and supraspinal postural networks. Brain Res Rev. 2008;57:212–221. [PMC free article] [PubMed]
  • Demer JL, Oh SY, Poukens V. Evidence for active control of rectus extraocular muscle pulleys. Invest Ophthalmol Vis Sci. 2000;41:1280–1290. [PubMed]
  • Dieterich M. Central vestibular disorders. J Neurol. 2007;254:559–568. [PubMed]
  • DiZio P, Lackner JR. Coriolis-force-induced trajectory and endpoint deviations in the reaching movements of labyrinthine-defective subjects. J Neurophysiol. 2001;85:784–789. [PubMed]
  • Driver J, Noesselt T. Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments. Neuron. 2008;57:11–23. [PMC free article] [PubMed]
  • du Lac S, Raymond JL, Sejnowski TJ, Lisberger SG. Learning and memory in the vestibulo-ocular reflex. Annu Rev Neurosci. 1995;18:409–441. [PubMed]
  • Einstein A. Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen. Jahrb Radioakt. 1908;4:411–462.
  • Escudero M, de la Cruz RR, Delgado-Garcia JM. A physiological study of vestibular and prepositus hypoglossi neurones projecting to the abducens nucleus in the alert cat. J Physiol. 1992;458:539–560. [PubMed]
  • Escudero M, Cheron G, Godaux E. Discharge properties of brain stem neurons projecting to the flocculus in the alert cat. II Prepositus hypoglossal nucleus. J Neurophysiol. 1996;76:1775–1785. [PubMed]
  • Ethier V, Zee DS, Shadmehr R. Changes in control of saccades during gain adaptation. J Neurosci. 2008;28:13929–13937. [PMC free article] [PubMed]
  • Farrer C, Franck N, Paillard J, Jeannerod M. The role of proprioception in action recognition. Conscious Cogn. 2003;12:609–619. [PubMed]
  • Fernandez C, Goldberg JM. Physiology of peripheral neurons innervating otolith organs of the squirrel monkey. I. Response to static tilts and to long-duration centrifugal force. J Neurophysiol. 1976a;39:970–984. [PubMed]
  • Fernandez C, Goldberg JM. Physiology of peripheral neurons innervating otolith organs of the squirrel monkey. III. Response dynamics. J Neurophysiol. 1976b;39:996–1008. [PubMed]
  • Fukushima K. Corticovestibular interactions: anatomy, electrophysiology, and functional considerations. Exp Brain Res. 1997;117:1–16. [PubMed]
  • Galiana HL. A nystagmus strategy to linearize the vestibuloocular reflex. IEEE Trans Biomed Eng. 1991;38:532–543. [PubMed]
  • Galiana HL, Outerbridge JS. A bilateral model for central neural pathways in vestibuloocular reflex. J Neurophysiol. 1984;51:210–241. [PubMed]
  • Gdowski GT, McCrea RA. Neck proprioceptive inputs to primate vestibular nucleus neurons. Exp Brain Res. 2000;135:511–526. [PubMed]
  • Ghasia FF, Angelaki DE. Do motoneurons encode the noncommutativity of ocular rotations? Neuron. 2005;47:281–293. [PubMed]
  • Ghasia FF, Meng H, Angelaki DE. Neural correlates of forward and inverse models for eye movements: evidence from three-dimensional kinematics. J Neurosci. 2008;28:5082–5087. [PMC free article] [PubMed]
  • Glasauer S. Cerebellar contribution to saccades and gaze holding: a modeling approach. Ann N Y Acad Sci. 2003;1004:206–219. [PubMed]
  • Glasauer S, Merfeld D, editors. Modelling three-dimensional vestibular responses during complex motion stimulation. Harwood Academic; Amsterdam: 1997.
  • Godaux E, Mettens P, Cheron G. Differential effect of injections of kainic acid into the prepositus and the vestibular nuclei of the cat. J Physiol. 1993;472:459–482. [PubMed]
  • Gomi H, Shidara M, Takemura A, Inoue Y, Kawano K, Kawato M. Temporal firing patterns of Purkinje cells in the cerebellar ventral paraflocculus during ocular following responses in monkeys. I. Simple spikes. J Neurophysiol. 1998;80:818–831. [PubMed]
  • Green AM, Angelaki DE. Resolution of sensory ambiguities for gaze stabilization requires a second neural integrator. J Neurosci. 2003;23:9265–9275. [PubMed]
  • Green AM, Angelaki DE. An integrative neural network for detecting inertial motion and head orientation. J Neurophysiol. 2004;92:905–925. [PubMed]
  • Green AM, Angelaki DE. Coordinate transformations and sensory integration in the detection of spatial orientation and self-motion: from models to experiments. Prog Brain Res. 2007;165:155–180. [PubMed]
  • Green A, Galiana HL. Exploring sites for short-term VOR modulation using a bilateral model. Ann N Y Acad Sci. 1996;781:625–628. [PubMed]
  • Green AM, Galiana HL. Hypothesis for shared central processing of canal and otolith signals. J Neurophysiol. 1998;80:2222–2228. [PubMed]
  • Green AM, Shaikh AG, Angelaki DE. Sensory vestibular contributions to constructing internal models of self-motion. J Neural Eng. 2005;2:S164–S179. [PubMed]
  • Green AM, Meng H, Angelaki DE. A reevaluation of the inverse dynamic model for eye movements. J Neurosci. 2007;27:1346–1355. [PubMed]
  • Gu Y, DeAngelis GC, Angelaki DE. A functional link between area MSTd and heading perception based on vestibular signals. Nat Neurosci. 2007;10:1038–1047. [PMC free article] [PubMed]
  • Guldin WO, Grusser OJ. Is there a vestibular cortex? Trends Neurosci. 1998;21:254–259. [PubMed]
  • Guthrie BL, Porter JD, Sparks DL. Corollary discharge provides accurate eye position information to the oculomotor system. Science. 1983;221:1193–1195. [PubMed]
  • Halmagyi GM, Curthoys IS, Todd MJ, D’Cruz DM, Cremer PD, Henderson CJ, Staples MJ. Unilateral vestibular neurectomy in man causes a severe permanent horizontal vestibuloocular reflex deficit in response to high-acceleration ampullofugal stimulation. Acta Otolaryngol Suppl. 1991;481:411–414. [PubMed]
  • Harris LR. Vestibular and optokinetic eye movements evoked in the cat by rotation about a tilted axis. Exp Brain Res. 1987;66:522–532. [PubMed]
  • Haruno M, Wolpert DM, Kawato M. Mosaic model for sensorimotor learning and control. Neural Comput. 2001;13:2201–2220. [PubMed]
  • Hazel TR, Sklavos SG, Dean P. Estimation of premotor synaptic drives to simulated abducens motoneurons for control of eye position. Exp Brain Res. 2002;146:184–196. [PubMed]
  • Hess BJ, Angelaki DE. Inertial vestibular coding of motion: concepts and evidence. Curr Opin Neurobiol. 1997;7:860–866. [PubMed]
  • Horak FB, Earhart GM, Dietz V. Postural responses to combinations of head and body displacements: vestibular-somatosensory interactions. Exp Brain Res. 2001;141:410–414. [PubMed]
  • Inglis JT, Shupert CL, Hlavacka F, Horak FB. Effect of galvanic vestibular stimulation on human postural responses during support surface translations. J Neurophysiol. 1995;73:896–901. [PubMed]
  • Israel I, Grasso R, Georges-Francois P, Tsuzuku T, Berthoz A. Spatial memory and path integration studied by self-driven passive linear displacement. I. Basic properties. J Neurophysiol. 1997;77:3180–3192. [PubMed]
  • Ito M. Neurophysiological aspects of the cerebellar motor control system. Int J Neurol. 1970;7:162–176. [PubMed]
  • Jamali M, Sadeghi SG, Cullen KE. Response of vestibular nerve afferents innervating utricle and saccule during passive and active translations. J Neurophysiol. 2009;101:141–149. [PubMed]
  • Kaneko CR. Eye movement deficits after ibotenic acid lesions of the nucleus prepositus hypoglossi in monkeys. I. Saccades and fixation. J Neurophysiol. 1997;78:1753–1768. [PubMed]
  • Kaneko CR. Eye movement deficits following ibotenic acid lesions of the nucleus prepositus hypoglossi in monkeys. II. Pursuit, vestibular, and optokinetic responses. J Neurophysiol. 1999;81:668–681. [PubMed]
  • Karnath HO, Dieterich M. Spatial neglect–a vestibular disorder? Brain. 2006;129:293–305. [PubMed]
  • Kasper J, Schor RH, Wilson VJ. Response of vestibular neurons to head rotations in vertical planes. II. Response to neck stimulation and vestibular-neck interaction. J Neurophysiol. 1988;60:1765–1778. [PubMed]
  • Katz E, Vianney de Jong JM, Buettner-Ennever J, Cohen B. Effects of midline medullary lesions on velocity storage and the vestibulo-ocular reflex. Exp Brain Res. 1991;87:505–520. [PubMed]
  • Kawato M. Internal models for motor control and trajectory planning. Curr Opin Neurobiol. 1999;9:718–727. [PubMed]
  • Kawato M, Kuroda T, Imamizu H, Nakano E, Miyauchi S, Yoshioka T. Internal forward models in the cerebellum: fMRI study on grip force and load force coupling. Prog Brain Res. 2003;142:171–188. [PubMed]
  • Keller EL, Robinson DA. Absence of a stretch reflex in extraocular muscles of the monkey. J Neurophysiol. 1971;34:908–919. [PubMed]
  • King WM, Lisberger SG, Fuchs AF. Responses of fibers in medial longitudinal fasciculus (MLF) of alert monkeys during horizontal and vertical conjugate eye movements evoked by vestibular or visual stimuli. J Neurophysiol. 1976;39:1135–1149. [PubMed]
  • Kleine JF, Guan Y, Kipiani E, Glonti L, Hoshi M, Buttner U. Trunk position influences vestibular responses of fastigial nucleus neurons in the alert monkey. J Neurophysiol. 2004;91:2090–2100. [PubMed]
  • Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience. 2008;156:801–818. [PMC free article] [PubMed]
  • Klier EM, Angelaki DE, Hess BJ. Roles of gravitational cues and efference copy signals in the rotational updating of memory saccades. J Neurophysiol. 2005;94:468–478. [PubMed]
  • Klier EM, Meng H, Angelaki DE. Three-dimensional kinematics at the level of the oculomotor plant. J Neurosci. 2006;26:2732–2737. [PubMed]
  • Kluzik J, Diedrichsen J, Shadmehr R, Bastian AJ. Reach adaptation: what determines whether we learn an internal model of the tool or adapt the model of our arm? J Neurophysiol. 2008;100:1455–1464. [PubMed]
  • Kono R, Poukens V, Demer JL. Quantitative analysis of the structure of the human extraocular muscle pulley system. Invest Ophthalmol Vis Sci. 2002;43:2923–2932. [PubMed]
  • Langer T, Fuchs AF, Chubb MC, Scudder CA, Lisberger SG. Floccular efferents in the rhesus macaque as revealed by autoradiography and horseradish peroxidase. J Comp Neurol. 1985a;235:26–37. [PubMed]
  • Langer T, Fuchs AF, Scudder CA, Chubb MC. Afferents to the flocculus of the cerebellum in the rhesus macaque as revealed by retrograde transport of horseradish peroxidase. J Comp Neurol. 1985b;235:1–25. [PubMed]
  • Laurens J, Droulez J. Bayesian processing of vestibular information. Biol Cybern. 2007;96:389–404. [PubMed]
  • Lewis RF, Haburcakova C, Merfeld DM. Roll tilt psychophysics in rhesus monkeys during vestibular and visual stimulation. J Neurophysiol. 2008;100:140–153. [PubMed]
  • Li N, Angelaki DE. Updating visual space during motion in depth. Neuron. 2005;48:149–158. [PubMed]
  • Li CS, Padoa-Schioppa C, Bizzi E. Neuronal correlates of motor performance and motor learning in the primary motor cortex of monkeys adapting to an external force field. Neuron. 2001;30:593–607. [PubMed]
  • Lisberger SG. The neural basis for motor learning in the vestibulo-ocular reflex in monkeys. Trends Neurosci. 1988;11:147–152. [PubMed]
  • Lisberger SG. Internal models of eye movement in the floccular complex of the monkey cerebellum. Neuroscience. 2009;162(3):763–776. [PMC free article] [PubMed]
  • Lisberger SG, Fuchs AF. Role of primate flocculus during rapid behavioral modification of vestibuloocular reflex. I. Purkinje cell activity during visually guided horizontal smooth-pursuit eye movements and passive head rotation. J Neurophysiol. 1978;41:733–763. [PubMed]
  • Lisberger SG, Miles FA. Role of primate medial vestibular nucleus in long-term adaptive plasticity of vestibuloocular reflex. J Neurophysiol. 1980;43:1725–1745. [PubMed]
  • Lisberger SG, Pavelko TA, Bronte-Stewart HM, Stone LS. Neural basis for motor learning in the vestibuloocular reflex of primates. II. Changes in the responses of horizontal gaze velocity Purkinje cells in the cerebellar flocculus and ventral paraflocculus. J Neurophysiol. 1994a;72:954–973. [PubMed]
  • Lisberger SG, Pavelko TA, Broussard DM. Neural basis for motor learning in the vestibuloocular reflex of primates. I. Changes in the responses of brain stem neurons. J Neurophysiol. 1994b;72:928–953. [PubMed]
  • Lisberger SG, Pavelko TA, Broussard DM. Responses during eye movements of brain stem neurons that receive monosynaptic inhibition from the flocculus and ventral paraflocculus in monkeys. J Neurophysiol. 1994c;72:909–927. [PubMed]
  • Lopez-Barneo J, Darlot C, Berthoz A, Baker R. Neuronal activity in prepositus nucleus correlated with eye movement in the alert cat. J Neurophysiol. 1982;47:329–352. [PubMed]
  • MacNeilage PR, Banks MS, Berger DR, Bulthoff HH. A Bayesian model of the disambiguation of gravitoinertial force by visual cues. Exp Brain Res. 2007;179:263–290. [PubMed]
  • Macpherson JM, Everaert DG, Stapley PJ, Ting LH. Bilateral vestibular loss in cats leads to active destabilization of balance during pitch and roll rotations of the support surface. J Neurophysiol. 2007;97:4357–4367. [PubMed]
  • Manzoni D, Andre P, Pompeiano O. Changes in gain and spatiotemporal properties of the vestibulospinal reflex after injection of a GABA-A agonist in the cerebellar anterior vermis. J Vestib Res. 1997;7:7–20. [PubMed]
  • Manzoni D, Pompeiano O, Bruschini L, Andre P. Neck input modifies the reference frame for coding labyrinthine signals in the cerebellar vermis: a cellular analysis. Neuroscience. 1999;93:1095–1107. [PubMed]
  • Mars F, Archambault PS, Feldman AG. Vestibular contribution to combined arm and trunk motion. Exp Brain Res. 2003;150:515–519. [PubMed]
  • Maurer C, Mergner T, Peterka RJ. Multisensory control of human upright stance. Exp Brain Res. 2006;171:231–250. [PubMed]
  • Mayne RA. A systems concept of the vestibular organs. In: Kornhuber HH, editor. Handbook of sensory physiology: the vestibular system. Springer; New York: 1974. pp. 493–580.
  • McConville KM, Tomlinson RD, Na EQ. Behavior of eye-movement-related cells in the vestibular nuclei during combined rotational and translational stimuli. J Neurophysiol. 1996;76:3136–3148. [PubMed]
  • McCrea RA, Baker R. Anatomical connections of the nucleus prepositus of the cat. J Comp Neurol. 1985;237:377–407. [PubMed]
  • McCrea RA, Yoshida K, Berthoz A, Baker R. Eye movement related activity and morphology of second order vestibular neurons terminating in the cat abducens nucleus. Exp Brain Res. 1980;40:468–473. [PubMed]
  • McCrea RA, Strassman A, May E, Highstein SM. Anatomical and physiological characteristics of vestibular neurons mediating the horizontal vestibulo-ocular reflex of the squirrel monkey. J Comp Neurol. 1987;264:547–570. [PubMed]
  • McCrea RA, Gdowski GT, Boyle R, Belton T. Firing behavior of vestibular neurons during active and passive head movements: vestibulo-spinal and other non-eye-movement related neurons. J Neurophysiol. 1999;82:416–428. [PubMed]
  • McFarland JL, Fuchs AF. Discharge patterns in nucleus prepositus hypoglossi and adjacent medial vestibular nucleus during horizontal eye movement in behaving macaques. J Neurophysiol. 1992;68:319–332. [PubMed]
  • Meng H, Angelaki DE. Neural correlates of the dependence of compensatory eye movements during translation on target distance and eccentricity. J Neurophysiol. 2006;95:2530–2540. [PubMed]
  • Meng H, Green AM, Dickman JD, Angelaki DE. Pursuit–vestibular interactions in brain stem neurons during rotation and translation. J Neurophysiol. 2005;93:3418–3433. [PubMed]
  • Merfeld DM. Modeling the vestibulo-ocular reflex of the squirrel monkey during eccentric rotation and roll tilt. Exp Brain Res. 1995;106:123–134. [PubMed]
  • Merfeld DM, Young LR. The vestibulo-ocular reflex of the squirrel monkey during eccentric rotation and roll tilt. Exp Brain Res. 1995;106:111–122. [PubMed]
  • Merfeld DM, Zupan LH. Neural processing of gravitoinertial cues in humans. III. Modeling tilt and translation responses. J Neurophysiol. 2002;87:819–833. [PubMed]
  • Merfeld DM, Young LR, Oman CM, Shelhamer MJ. A multidimensional model of the effect of gravity on the spatial orientation of the monkey. J Vestib Res. 1993a;3:141–161. [PubMed]
  • Merfeld DM, Young LR, Paige GD, Tomko DL. Three dimensional eye movements of squirrel monkeys following postrotatory tilt. J Vestib Res. 1993b;3:123–139. [PubMed]
  • Merfeld DM, Zupan L, Peterka RJ. Humans use internal models to estimate gravity and linear acceleration. Nature. 1999;398:615–618. [PubMed]
  • Merfeld DM, Zupan LH, Gifford CA. Neural processing of gravito-inertial cues in humans. II. Influence of the semicircular canals during eccentric rotation. J Neurophysiol. 2001;85:1648–1660. [PubMed]
  • Merfeld DM, Park S, Gianna-Poulin C, Black FO, Wood S. Vestibular perception and action employ qualitatively different mechanisms. I. Frequency response of VOR and perceptual responses during Translation and Tilt. J Neurophysiol. 2005a;94:186–198. [PubMed]
  • Merfeld DM, Park S, Gianna-Poulin C, Black FO, Wood S. Vestibular perception and action employ qualitatively different mechanisms. II. VOR and perceptual responses during combined Tilt&Translation. J Neurophysiol. 2005b;94:199–205. [PubMed]
  • Mergner T, Anastasopoulos D, Becker W, Deecke L. Discrimination between trunk and head rotation: a study comparing neuronal data from the cat with human psychophysics. Acta Psychol (Amst) 1981;48:291–301. [PubMed]
  • Mergner T, Siebold C, Schweigart G, Becker W. Human perception of horizontal trunk and head rotation in space during vestibular and neck stimulation. Exp Brain Res. 1991;85:389–404. [PubMed]
  • Mettens P, Godaux E, Cheron G, Galiana HL. Effect of muscimol microinjections into the prepositus hypoglossi and the medial vestibular nuclei on cat eye movements. J Neurophysiol. 1994;72:785–802. [PubMed]
  • Miall RC, Weir DJ, Wolpert DM, Stein JF. Is the cerebellum a Smith predictor? J Mot Behav. 1993;25:203–216. [PubMed]
  • Miles FA, Fuller JH, Braitman DJ, Dow BM. Long-term adaptive changes in primate vestibuloocular reflex. III. Electro-physiological observations in flocculus of normal monkeys. J Neurophysiol. 1980;43:1437–1476. [PubMed]
  • Miller JM. Functional anatomy of normal human rectus muscles. Vision Res. 1989;29:223–240. [PubMed]
  • Mizukoshi K, Kobayashi H, Ohashi N, Watanabe Y. Quantitative analysis of the human visual vestibulo-ocular reflex in sinusoidal rotation. Acta Otolaryngol Suppl. 1983;393:58–64. [PubMed]
  • Mohr C, Roberts PD, Bell CC. The mormyromast region of the mormyrid electrosensory lobe. I. Responses to corollary discharge and electrosensory stimuli. J Neurophysiol. 2003;90:1193–1210. [PubMed]
  • Musallam WS, Tomlinson RD. Model for the translational vestibuloocular reflex (VOR). J Neurophysiol. 1999;82:2010–2014. [PubMed]
  • Nakamagoe K, Iwamoto Y, Yoshida K. Evidence for brainstem structures participating in oculomotor integration. Science. 2000;288:857–859. [PubMed]
  • Padoa-Schioppa C, Li CS, Bizzi E. Neuronal correlates of kinematics-to-dynamics transformation in the supplementary motor area. Neuron. 2002;36:751–765. [PubMed]
  • Page WK, Duffy CJ. Heading representation in MST: sensory interactions and population encoding. J Neurophysiol. 2003;89:1994–2013. [PubMed]
  • Paige GD, Sargent EW. Visually-induced adaptive plasticity in the human vestibulo-ocular reflex. Exp Brain Res. 1991;84:25–34. [PubMed]
  • Paige GD, Tomko DL. Eye movement responses to linear head motion in the squirrel monkey. I. Basic characteristics. J Neurophysiol. 1991a;65:1170–1182. [PubMed]
  • Paige GD, Tomko DL. Eye movement responses to linear head motion in the squirrel monkey. II. Visual-vestibular interactions and kinematic considerations. J Neurophysiol. 1991b;65:1183–1196. [PubMed]
  • Raphan T, Cohen B. Organizational principles of velocity storage in three dimensions. The effect of gravity on cross-coupling of optokinetic after-nystagmus. Ann N Y Acad Sci. 1988;545:74–92. [PubMed]
  • Raphan T, Cohen B. The vestibulo-ocular reflex in three dimensions. Exp Brain Res. 2002;145:1–27. [PubMed]
  • Raphan T, Matsuo V, Cohen B. A velocity storage mechanism responsible for optokinetic nystagmus (OKN), optokinetic after-nystagmus (OKAN) and vestibular nystagmus. In: Baker R, Berthoz A, editors. Control of gaze by brain stem neurons. Elsevier; Amsterdam: 1977.
  • Raphan T, Matsuo V, Cohen B. Velocity storage in the vestibulo-ocular reflex arc (VOR). Exp Brain Res. 1979;35:229–248. [PubMed]
  • Raphan T, Cohen B, Henn V. Effects of gravity on rotatory nystagmus in monkeys. Ann N Y Acad Sci. 1981;374:44–55. [PubMed]
  • Raptis HA, Dannenbaum E, Paquet N, Feldman AG. Vestibular system may provide equivalent motor actions regardless of the number of body segments involved in the task. J Neurophysiol. 2007;97:4069–4078. [PubMed]
  • Raymond JL, Lisberger SG, Mauk MD. The cerebellum: a neuronal learning machine? Science. 1996;272:1126–1131. [PubMed]
  • Reisine H, Raphan T. Neural basis for eye velocity generation in the vestibular nuclei of alert monkeys during off-vertical axis rotation. Exp Brain Res. 1992;92:209–226. [PubMed]
  • Robinson DA. A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Trans Biomed Eng. 1963;10:137–145. [PubMed]
  • Robinson DA. The mechanics of human saccadic eye movement. J Physiol. 1964;174:245–264. [PubMed]
  • Robinson DA. The mechanics of human smooth pursuit eye movement. J Physiol. 1965;180:569–591. [PubMed]
  • Robinson DA. Oculomotor unit behavior in the monkey. J Neurophysiol. 1970;33:393–403. [PubMed]
  • Robinson DA. Vestibular and optokinetic symbiosis: an example of explaining by modeling. In: Baker R, Berthoz A, editors. Control of gaze by brain stem neurons. Elsevier; Amsterdam: 1977.
  • Robinson DA. The use of control systems analysis in the neurophysiology of eye movements. Annu Rev Neurosci. 1981;4:463–503. [PubMed]
  • Roy JE, Cullen KE. Selective processing of vestibular reafference during self-generated head motion. J Neurosci. 2001;21:2131–2142. [PubMed]
  • Roy JE, Cullen KE. Dissociating self-generated from passively applied head motion: neural mechanisms in the vestibular nuclei. J Neurosci. 2004;24:2102–2111. [PubMed]
  • Sadeghi SG, Minor LB, Cullen KE. Response of vestibular-nerve afferents to active and passive rotations under normal conditions and after unilateral labyrinthectomy. J Neurophysiol. 2007;97:1503–1514. [PubMed]
  • Salinas E, Abbott LF. Transfer of coded information from sensory to motor networks. J Neurosci. 1995;15:6461–6474. [PubMed]
  • Sandeman DC, Okajima A. Statocyst-induced eye movement in the crab Scylla serrata. I. The sensory input from the statocyst. J Exp Biol. 1972;57:187–204. [PubMed]
  • Sawtell NB, Williams A, Bell CC. Central control of dendritic spikes shapes the responses of Purkinje-like cells through spike timing-dependent synaptic plasticity. J Neurosci. 2007;27:1552–1565. [PubMed]
  • Scott SH, Kalaska JF. Reaching movements with similar hand paths but different arm orientations. I. Activity of individual cells in motor cortex. J Neurophysiol. 1997;77:826–852. [PubMed]
  • Scott SH, Sergio LE, Kalaska JF. Reaching movements with similar hand paths but different arm orientations. II. Activity of individual cells in dorsal premotor cortex and parietal area 5. J Neurophysiol. 1997;78:2413–2426. [PubMed]
  • Scudder CA, Fuchs AF. Physiological and behavioral identification of vestibular nucleus neurons mediating the horizontal vestibuloocular reflex in trained rhesus monkeys. J Neurophysiol. 1992;68:244–264. [PubMed]
  • Sergio LE, Kalaska JF. Systematic changes in motor cortex cell activity with arm posture during directional isometric force generation. J Neurophysiol. 2003;89:212–228. [PubMed]
  • Shadmehr R. Generalization as a behavioral window to the neural mechanisms of learning internal models. Hum Mov Sci. 2004;23:543–568. [PMC free article] [PubMed]
  • Shadmehr R, Mussa-Ivaldi FA. Adaptive representation of dynamics during learning of a motor task. J Neurosci. 1994;14:3208–3224. [PubMed]
  • Shaikh AG, Meng H, Angelaki DE. Multiple reference frames for motion in the primate cerebellum. J Neurosci. 2004;24:4491–4497. [PubMed]
  • Shaikh AG, Green AM, Ghasia FF, Newlands SD, Dickman JD, Angelaki DE. Sensory convergence solves a motion ambiguity problem. Curr Biol. 2005;15:1657–1662. [PubMed]
  • Shidara M, Kawano K, Gomi H, Kawato M. Inverse-dynamics model eye movement control by Purkinje cells in the cerebellum. Nature. 1993;365:50–52. [PubMed]
  • Singla CL. Statocysts of hydromedusae. Cell Tissue Res. 1975;158:391–407. [PubMed]
  • Skavenski AA, Robinson DA. Role of abducens neurons in vestibuloocular reflex. J Neurophysiol. 1973;36:724–738. [PubMed]
  • Smith MA, Crawford JD. Distributed population mechanism for the 3-D oculomotor reference frame transformation. J Neurophysiol. 2005;93:1742–1761. [PubMed]
  • Smith PF, Darlington CL, Zheng Y. Move it or lose it—is stimulation of the vestibular system necessary for normal spatial memory? Hippocampus. 2009; in press. [PubMed]
  • Sommer MA, Wurtz RH. A pathway in primate brain for internal monitoring of movements. Science. 2002;296:1480–1482. [PubMed]
  • Sommer MA, Wurtz RH. Brain circuits for the internal monitoring of movements. Annu Rev Neurosci. 2008;31:317–338. [PMC free article] [PubMed]
  • Stackman RW, Taube JS. Firing properties of head direction cells in the rat anterior thalamic nucleus: dependence on vestibular input. J Neurosci. 1997;17:4349–4358. [PMC free article] [PubMed]
  • Stackman RW, Clark AS, Taube JS. Hippocampal spatial representations require vestibular input. Hippocampus. 2002;12:291–303. [PMC free article] [PubMed]
  • Stapley PJ, Ting LH, Kuifu C, Everaert DG, Macpherson JM. Bilateral vestibular loss leads to active destabilization of balance during voluntary head turns in the standing cat. J Neurophysiol. 2006;95:3783–3797. [PubMed]
  • Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci. 2008;9:255–266. [PubMed]
  • Stone LS, Lisberger SG. Visual responses of Purkinje cells in the cerebellar flocculus during smooth-pursuit eye movements in monkeys. I. Simple spikes. J Neurophysiol. 1990;63:1241–1261. [PubMed]
  • Taube JS. The head direction signal: origins and sensory-motor integration. Annu Rev Neurosci. 2007;30:181–207. [PubMed]
  • Telford L, Seidman SH, Paige GD. Dynamics of squirrel monkey linear vestibuloocular reflex and interactions with fixation distance. J Neurophysiol. 1997;78:1775–1790. [PubMed]
  • Tomlinson RD, Robinson DA. Signals in vestibular nucleus mediating vertical eye movements in the monkey. J Neurophysiol. 1984;51:1121–1136. [PubMed]
  • Tweed D, Vilis T. Geometric relations of eye position and velocity vectors during saccades. Vision Res. 1990;30:111–127. [PubMed]
  • Van Beuzekom AD, Medendorp WP, Van Gisbergen JA. The subjective vertical and the sense of self orientation during active body tilt. Vision Res. 2001;41:3229–3242. [PubMed]
  • Vingerhoets RA, Medendorp WP, Van Gisbergen JA. Body-tilt and visual verticality perception during multiple cycles of roll rotation. J Neurophysiol. 2008;99:2264–2280. [PubMed]
  • von Helmholtz H. Handbuch der Physiologischen Optik [Treatise on physiological optics]. Southall JPC, translator. Optical Society of America; Rochester: 1925.
  • von Holst E, Mittelstaedt H. Das Reafferenzprinzip [The reafference principle]. Naturwissenschaften. 1950;37:464–476.
  • Voogd J, Glickstein M. The anatomy of the cerebellum. Trends Neurosci. 1998;21:370–375. [PubMed]
  • Waespe W, Henn V. Neuronal activity in the vestibular nuclei of the alert monkey during vestibular and optokinetic stimulation. Exp Brain Res. 1977;27:523–538. [PubMed]
  • Waespe W, Henn V. Visual-vestibular interaction in the flocculus of the alert monkey. II. Purkinje cell activity. Exp Brain Res. 1981;43:349–360. [PubMed]
  • Wagner MJ, Smith MA. Shared internal models for feedforward and feedback control. J Neurosci. 2008;28:10663–10673. [PubMed]
  • Wearne S, Raphan T, Cohen B. Contribution of vestibular commissural pathways to spatial orientation of the angular vestibuloocular reflex. J Neurophysiol. 1997a;78:1193–1197. [PubMed]
  • Wearne S, Raphan T, Waespe W, Cohen B. Control of the three-dimensional dynamic characteristics of the angular vestibulo-ocular reflex by the nodulus and uvula. Prog Brain Res. 1997b;114:321–334. [PubMed]
  • Wearne S, Raphan T, Cohen B. Control of spatial orientation of the angular vestibuloocular reflex by the nodulus and uvula. J Neurophysiol. 1998;79:2690–2715. [PubMed]
  • Wilson VJ, Yamagata Y, Yates BJ, Schor RH, Nonaka S. Response of vestibular neurons to head rotations in vertical planes. III. Response of vestibulocollic neurons to vestibular and neck stimulation. J Neurophysiol. 1990;64:1695–1703. [PubMed]
  • Wolpert DM, Kawato M. Multiple paired forward and inverse models for motor control. Neural Netw. 1998;11:1317–1329. [PubMed]
  • Wolpert DM, Miall RC. Forward models for physiological motor control. Neural Netw. 1996;9:1265–1279. [PubMed]
  • Wolpert DM, Ghahramani Z, Jordan MI. An internal model for sensorimotor integration. Science. 1995;269:1880–1882. [PubMed]
  • Wylie DR, Frost BJ. Complex spike activity of Purkinje cells in the ventral uvula and nodulus of pigeons in response to translational optic flow. J Neurophysiol. 1999;81:256–266. [PubMed]
  • Yakusheva TA, Shaikh AG, Green AM, Blazquez PM, Dickman JD, Angelaki DE. Purkinje cells in posterior cerebellar vermis encode motion in an inertial reference frame. Neuron. 2007;54:973–985. [PubMed]
  • Yates BJ. Vestibular influences on the sympathetic nervous system. Brain Res Brain Res Rev. 1992;17:51–59. [PubMed]
  • Yates BJ, Bronstein AM. The effects of vestibular system lesions on autonomic regulation: observations, mechanisms, and clinical implications. J Vestib Res. 2005;15:119–129. [PubMed]
  • Yokota J, Reisine H, Cohen B. Nystagmus induced by electrical stimulation of the vestibular and prepositus hypoglossi nuclei in the monkey: evidence for site of induction of velocity storage. Exp Brain Res. 1992;92:123–138. [PubMed]
  • Zago M, Bosco G, Maffei V, Iosa M, Ivanenko YP, Lacquaniti F. Internal models of target motion: expected dynamics overrides measured kinematics in timing manual interceptions. J Neurophysiol. 2004;91:1620–1634. [PubMed]
  • Zago M, McIntyre J, Senot P, Lacquaniti F. Visuo-motor coordination and internal models for object interception. Exp Brain Res. 2009;192:571–604. [PubMed]
  • Zee DS, Yamazaki A, Butler PH, Gucer G. Effects of ablation of flocculus and paraflocculus on eye movements in primate. J Neurophysiol. 1981;46:878–899. [PubMed]
  • Zupan LH, Peterka RJ, Merfeld DM. Neural processing of gravito-inertial cues in humans. I. Influence of the semicircular canals following post-rotatory tilt. J Neurophysiol. 2000;84:2001–2015. [PubMed]
  • Zupan LH, Merfeld DM, Darlot C. Using sensory weighting to model the influence of canal, otolith and visual cues on spatial orientation and eye movements. Biol Cybern. 2002;86:209–230. [PubMed]