Neuron. Author manuscript; available in PMC 2010 December 10. PMCID: PMC2811884

Using A Compound Gain Field To Compute A Reach Plan


A gain field, the scaling of a tuned neuronal response by a postural signal, may help support neuronal computation. Here we characterize eye and hand position gain fields in the parietal reach region (PRR). Eye and hand gain fields in individual PRR neurons are similar in magnitude but opposite in sign to one another. This systematic arrangement produces a compound gain field that is proportional to the distance between gaze location and initial hand position. As a result, the visual response to a target for an upcoming reach is scaled by the initial gaze-to-hand distance. Such a scaling is similar to what would be predicted in a neural network that mediates between eye- and hand-centered representations of target location. This systematic arrangement supports a role of PRR in visually-guided reaching and provides strong evidence that gain fields are used for neural computations.


For over twenty years gain fields have been proposed to comprise a mechanism for neural computation. Modulations of visually-evoked responses by eye position were first reported in area 7a and the lateral intraparietal area (LIP) (Andersen and Mountcastle, 1983; Andersen et al., 1990) and were subsequently found in many other cortical and subcortical structures, including V1 (Weyand and Malpeli, 1993), V3A (Galletti and Battaglini, 1989), the dorsal premotor cortex (Boussaoud et al., 1998), parieto-occipital area or V6A (Galletti et al., 1995; Nakamura et al., 1999), superior colliculus (Van Opstal et al., 1995; Groh and Sparks, 1996), and lateral geniculate nucleus (Lal and Friedlander, 1990). Gain fields have been postulated for head position in LIP (Brotchie et al., 1995), attention in V4 (Connor et al., 1996; but see Boynton 2009), viewing distance in V4 (Dobbins et al., 1998), and eye and head velocity in the dorsal medial superior temporal area (Bradley et al., 1996). A topographical arrangement of gain fields has been suggested in 7a and the dorsal parietal area (Siegel et al., 2003). Gain field modulations may underlie more complex computations such as translation-invariance in inferior temporal cortex (Salinas and Thier, 2000; Salinas and Sejnowski, 2001). In summary, gain fields appear in many parts of the brain, in both dorsal and ventral streams, and have been suggested to be a universal mechanism for neural computations (Salinas and Sejnowski, 2001).

Zipser and Andersen realized that eye position gain fields might be used to transform the reference frame of eye-centered visual responses into head-centered responses, and built a neural network as an existence proof of this idea (Zipser and Andersen, 1988). They used back-propagation to train a three layer network with tuned visual inputs (similar to those of V1 and other early visual areas) and a linear eye position input (similar to those found in brainstem eye position neurons and, more recently, in primary somatosensory cortex, Wang et al., 2007) to produce head-centered outputs. The nodes within the middle “hidden” layer have tuned visual responses that are gain modulated by eye position, similar to LIP and 7a neurons. The findings generalize to other training algorithms, architectures and reference frame transformations (Mazzoni et al., 1991; Burnod et al., 1992; Pouget and Sejnowski, 1994; Salinas and Abbott, 1995; Salinas and Abbott, 1996; Xing and Andersen, 2000; White and Snyder, 2004; Smith and Crawford, 2005; Brozovic et al., 2007; Blohm et al., 2009). Based on these data, the hypothesis that gain fields help mediate spatial computations for action is now generally accepted.

In the current study, we present novel findings regarding gain fields in the parietal reach region (PRR). PRR neurons in the posterior portion of the intraparietal sulcus (IPS) are more active when planning a reach than a saccade, and have been proposed to play a role in planning visually-guided arm movements (Snyder et al., 1997; Andersen et al., 1998; Calton et al., 2002). PRR straddles the boundary between the medial intraparietal area (MIP) and V6A (Snyder et al., 1997; Calton et al., 2002; Chang et al., 2008). Tuned PRR neurons encode the target for an upcoming reach to a visual or auditory target, or discharge during reaching movements (Caminiti et al., 1996; Galletti et al., 1997; Batista et al., 1999; Cohen and Andersen, 2000; Battaglia-Mayer et al., 2001; Fattori et al., 2001; Buneo et al., 2002; Marzocchi et al., 2008). Under certain circumstances, PRR activity predicts reach reaction time and endpoint (Snyder et al., 2006; Chang et al., 2008; Quian Quiroga et al., 2006). Eye and hand position effects in PRR have been reported (Andersen et al., 1998; PhD thesis, Batista, 1999; Cohen and Andersen, 2000; Buneo et al., 2002; Marzocchi et al., 2008) but not quantified.

We now report that eye and hand position gain fields in PRR are systematically configured to encode the distance between the point of gaze fixation and the position of the hand. We refer to this as “eye-hand distance,” and the gain mechanism based on this distance, “eye-hand distance gain field,” or simply “distance gain field”. We define a hand-centered representation of target position as the location of the target in a coordinate system whose origin coincides with the location of the hand, or, equivalently, a vector extending from the location of the hand to the target. In a two-dimensional system, eye-hand distance is required to transform eye-centered visual target information into hand-centered visual target information (Bullock and Grossberg, 1988; Buneo et al., 2002; Shadmehr and Wise, 2005; Blohm and Crawford, 2007). The eye- to hand-centered transformation is crucial for reconciling information from different modalities related to arm movements and for generating a motor command to a visible target. The identification of an explicit eye-hand distance gain field supports a role of PRR in these processes, and adds to the evidence that gain fields are indeed used by the brain for certain spatial computations.


We recorded neuronal activity in PRR and identified 259 well-isolated, stable cells that showed spatial tuning (see Experimental Procedures). For each neuron, we first mapped its preferred direction and then ran a delayed visually-guided reaching task designed to measure eye and hand position gain fields. The task began with the animal touching an initial hand position target and looking at an initial eye position target. There were five different configurations of initial eye and hand positions: eyes and hand aligned at center, hand at center and eyes to the right or left, and eyes at center and hand to the right or left (Figure 1A). A peripheral reach target then appeared at one of eight locations, five of which were arrayed about the preferred direction (Figure 1A). After a variable delay the initial target shrank in size, cueing a reach (but no eye movement) to the peripheral target (Figure 1B).

Figure 1
Behavioral Task and Anatomical Locations of Recorded PRR Neurons

Cells were recorded from two animals (102 and 157 from monkey G and S, respectively). The reconstructed recording locations for neurons straddle the border between V6A and MIP (Lewis and Van Essen, 2000a; Lewis and Van Essen, 2000b; C. Galletti, personal communication), with a few cells on the lateral bank (Figure 1C). These locations match those reported in previous studies of PRR (Snyder et al., 1997; Calton et al., 2002; Chang et al., 2008).

Animals performed the task well, successfully completing 89% and 96% of the initiated trials (monkeys G and S, respectively) with a median reach response latency of 238 ± 76 ms and 246 ± 55 ms (± s.d.). Table 1 shows median eye and hand distance from the initial eye targets, initial hand targets, and final reach targets.

Table 1
Median absolute distance of eye and hand from the initial hand, initial eye and final reach targets

Single Neuron: Eye and Hand Gain Fields

Responses to the five targets near the preferred direction were tuned, and for most neurons the tuning was a function of the initial eye and hand configuration. For the example neuron in Figure 2A, when the eye and hand were initially aligned at the central position (Aligned), the delay period activity was strongest for the center target (T3). When the starting eye position was displaced to the left (Eyes Left) or right (Eyes Right), the greatest delay activity was evoked by a target shifted one position to the left (T2) or to the right (between T3 and T4), respectively. In contrast, when the starting hand positions were displaced to the left or right, the peak did not shift, but instead remained at the central target (Hand Left and Hand Right). Tuning shifts with changes in initial eye position but not with changes in initial hand position are consistent with an eye-centered representation of target location (Batista et al., 1999).

Figure 2
Eye and Hand Position Gain Fields in a Single PRR Neuron Are Similar in Strength But Opposite in Sign

Activity not only shifted with changes in initial configuration; it also showed systematic increases or decreases in amplitude (Figure 2A). Peak activity was much greater when initial fixation was to the left compared to the right (middle row, Eyes Left versus Eyes Right; 17.50 ± 3.09 sp/s versus 8.75 ± 2.06 sp/s [mean ± s.e.m.]; Wilcoxon rank sum test, p = 0.08), and activity was much greater in Hand Right than in Hand Left (bottom row; Hand Left: 13.04 ± 3.21 sp/s; Hand Right: 23.21 ± 2.51 sp/s; p < 0.05). The eye and hand gain effects were opposite in direction but similar in magnitude; there was no significant difference between the peak activity for Eyes Left versus Hand Right (p = 0.13), nor between Eyes Right versus Hand Left (p = 0.43).

In order to more precisely measure gain field effects, the data were fit to a seven parameter model that allowed Gaussian tuning in a frame of reference centered on the fixation point (eye-centered), on the starting hand position (hand-centered), or on any point along a line connecting those two points (Equation 1). The model included separate eye and hand position gain field terms. The fit for the example neuron (Figure 2B) accounted for 89% of the variance across conditions (r2 = 0.89). This fit included an eye gain field of −0.76 sp/s per deg and a hand gain field of 0.85 sp/s per deg. The unsigned amplitudes (0.76 and 0.85 sp/s/deg) were statistically indistinguishable (two-tailed t test, p = 0.79). To test the significance of the two gain fields, we compared the seven parameter “full” model with two “reduced” six parameter models, each the same as the original model but one lacking an eye gain field and the other lacking a hand gain field (Experimental Procedures). The full model accounted for significantly more variance than either of the two reduced models, demonstrating that both the eye and hand position gain fields were highly significant (two-tailed sequential F test, F = 12.21 and 16.25, respectively, p < 0.00001 for both comparisons).

Population: Model Fit

A total of 259 neurons were recorded from PRR in two monkeys. Model fits were judged by how well the model accounted for firing rate, initially using several different tests. For the delay interval, a Chi-square goodness-of-fit test accepted 161 (62%) of the neurons (p > 0.05). For these cells, the median Gaussian amplitude was 9.87 sp/s, and the median variance explained was 67%. Of the 259 cells, 61% (158) had an r² value of at least 50%.

The variance explained (r²) criterion (> 50%) accepted some cells that showed minimal modulation across conditions (target position, initial eye and hand position), and for these neurons the fit sometimes appeared spurious (see Supplemental Figure S1 for an example). We therefore combined the amplitude of Gaussian tuning described by the model and the variance explained (r²) by multiplying the two factors together to produce a single measure of “spike-variance explained”, with units of spikes per second. Roughly, spike-variance explained quantifies the variance in our responses (in sp/s) that was driven by our manipulations of target position. For example, a cell with an r² of 60% that showed 40 sp/s of modulation to target positions would have 24 sp/s of spike-variance explained. A cell with an r² of 15% and 40 sp/s of modulation, or a cell with an r² of 60% and 10 sp/s of modulation, would each have a spike-variance explained of only 6 sp/s.
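As a minimal sketch of this measure (not the original analysis code; function and variable names are ours), spike-variance explained is simply the product of the variance explained by the fit and the fitted Gaussian amplitude:

```python
import numpy as np

def spike_variance_explained(observed_rates, predicted_rates, gaussian_amplitude):
    """Combine goodness of fit (r^2) with the fitted Gaussian amplitude (sp/s).

    observed_rates, predicted_rates : mean firing rates across the 25 conditions
    gaussian_amplitude              : fitted peak modulation (pa), in spikes/s
    """
    residual_ss = np.sum((observed_rates - predicted_rates) ** 2)
    total_ss = np.sum((observed_rates - np.mean(observed_rates)) ** 2)
    r_squared = 1.0 - residual_ss / total_ss
    return r_squared * gaussian_amplitude

# Worked examples from the text:
# r^2 = 0.60 with 40 sp/s of modulation -> 24 sp/s of spike-variance explained
# r^2 = 0.15 with 40 sp/s, or r^2 = 0.60 with 10 sp/s -> 6 sp/s
```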

Using spike-variance explained, 176 of 259 cells (67%) met or exceeded a criterion of 2 sp/s, and 103 cells (40%) met or exceeded a criterion of 5 sp/s (61 and 42 cells for monkey G and S, respectively; monkey G cells are plotted in Figure 1C). Each criterion (Chi-square test, variance explained greater than 50%, spike-variance explained greater or equal to 2 or 5 sp/s) resulted in a similar conclusion with regard to eye and hand gain fields (see below). Note that none of these criteria required that the gain field terms of the model be different from zero.

Population: Eye and Hand Gain Fields Are Negatively Correlated

Across neurons with at least 5 sp/s of spike-variance explained, the median absolute eye position gain field was 3.44% of peak activity per deg and the median absolute hand position gain field was 2.08%/deg. Within each cell, eye and hand position gain fields were negatively correlated (Spearman’s rank correlation, r = −0.61, p < 0.00001; type II regression slope = −0.74; Figure 3A and Supplemental Figure 2). This relationship was clearly present when the data from each monkey were considered separately (monkey G: r = −0.68, p < 0.00001, slope = −0.78; monkey S: r = −0.47, p < 0.005, slope = −0.67). Only 4 cells showed a significant difference (two-tailed t-test, p < 0.05) between the fitted eye position gain field parameter and the negative of the fitted hand position gain field parameter. The number of cells showing a difference is not significantly different from that expected by chance, even if all cells are in fact correlated, given the criterion p value of 0.05 (proportion test, p = 0.77). The data points are evenly distributed about the negative unity line (y = −x), consistent with the two gain fields being similar in magnitude but opposite in sign (blue oblique marginal histogram, Figure 3A). If we eliminate those cells for which neither gain field is significant (24 cells, based on sequential F tests), there is an even stronger negative relationship between the eye and hand gain fields (Spearman’s rank correlation, r = −0.66, p < 0.00001, reg. slope = −0.77).
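For illustration, a sketch of how this cell-by-cell comparison could be computed (the gain field values below are hypothetical, and the type II slope here is a standard major-axis estimate, which may differ in detail from the procedure used in the paper):

```python
import numpy as np
from scipy.stats import spearmanr

def type2_slope(x, y):
    """Major-axis (type II) regression slope from the covariance matrix."""
    cov = np.cov(x, y)
    sxx, syy, sxy = cov[0, 0], cov[1, 1], cov[0, 1]
    # Slope of the first principal axis of the (x, y) scatter.
    return (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

# eye_gf and hand_gf: fitted gain fields (% of peak per deg) for the accepted cells
eye_gf = np.array([3.2, -1.5, 4.0, -2.8, 0.9])     # hypothetical values
hand_gf = np.array([-2.9, 1.8, -3.6, 2.5, -1.1])   # hypothetical values
rho, p = spearmanr(eye_gf, hand_gf)
slope = type2_slope(eye_gf, hand_gf)
print(f"Spearman r = {rho:.2f}, p = {p:.3f}, type II slope = {slope:.2f}")
```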

Figure 3
PRR Neurons Encode the Distance Between the Initial Eye and the Hand Positions Using Negatively Coupled Eye and Hand Position Gain Fields

If the acceptance criterion is relaxed to 2 sp/s of spike-variance explained in order to include 67% of our recorded cells, the significant negative coupling between eye and hand gain fields remains (174 cells, r = −0.29, p < 0.0001; reg. slope = −0.67; Figure 3B). Even when all the cells for which our model converged are considered (254 cells, 98%), a slope of −0.81 and significant negative correlation (r = −0.14, p < 0.05) are obtained (data not shown).

The negative correlation between the two gain fields was not restricted to the delay period. From each cell we selected the interval -- visual, delay, or peri-movement -- with the largest variance explained of the three. Cells were included if the model explained at least 5 sp/s of spike-variance (155 cells) in that interval. The strong negative relationship between eye and hand gain fields persisted (r = −0.56, p < 0.00001, reg. slope = −0.77; 134 cells with either significant eye or hand gain field: r = −0.62, p < 0.00001, reg. slope = −0.73; Figure 3C). Again, only 4 cells showed a significant difference between the fitted eye position gain field parameter and the negative of the fitted hand position gain field parameter (not different from chance by proportion test, p = 0.23). We also observed the negative correlation between the two gain fields when we looked at each interval alone, without pooling across intervals (visual interval: r = −0.53, p < 0.00001, reg. slope = −0.91, n = 103; peri-movement interval, r = −0.64, p < 0.00001, reg. slope = −0.80, n = 117).

The distance gain field was established as soon as the animals acquired the initial eye and hand targets. Prior to the appearance of a final reach target but after acquiring the eye and hand initial targets, the two gain fields were already negatively correlated (71 cells with at least 5 sp/s spike-variance explained: r = −0.63, p < 0.00001, reg. slope = −0.68; 143 cells with at least 2 sp/s spike-variance explained: r = −0.49, p < 0.00001, reg. slope = −0.80; all cells: r = −0.38, p < 0.00001, reg. slope = −0.80) (Supplemental Figure 3). The gain fields established before and after the onset of a reach target were strongly correlated. This strong correlation was present both for eye gain fields (r = 0.52, p < 0.00001, n = 104) and for hand gain fields (r = 0.38, p < 0.0001, n = 104) (Supplemental Figure 4).

The slopes of the regressions in Figure 3 are close to −1 but significantly smaller in magnitude (linear regression, all p < 0.05). This was also true when the data from each monkey were considered separately, regardless of the criteria used to select neurons. See Supplemental Material for more detail.

Negatively Correlated Gain Fields Are Equivalent to a Gain Field for Eye-Hand Distance

Our “full model” (Equation 1 in Experimental Procedures) posits that eye and hand gain fields add together. In other words, when the eye and hand are displaced together, the combination of an eye gain field of 3%/deg and a hand gain field of −3%/deg would produce a total modulation of 0%. More generally, gain fields of similar magnitude but opposite sign that add together would be identical to a single gain field encoding the signed distance between the fixation point and the hand (Equation 2). To test this idea, we fit the data using a six parameter model in which the two separate gain field terms from the full model were replaced with a single gain field term for eye-to-hand distance. Despite having one fewer parameter, the new model has similar spike-variance explained for most neurons (Figure 4A). The median distance gain field in this model is 2.30%/deg. On average, the full model explains only 0.43 sp/s of variance more than the model with a single distance gain field. Using the Bayesian Information Criterion (k = 25), the fit was judged to be better in the distance gain field model than in the two gain field model in 66% of the cells.
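A sketch of how the two fits might be compared per cell with BIC under a Gaussian-error assumption (we read “k = 25” as the 25 fitted conditions; the exact formulation used in the paper may differ, and all names below are ours):

```python
import numpy as np

def bic(observed, predicted, n_params):
    """BIC for a least-squares fit, assuming Gaussian errors."""
    n = len(observed)                          # 25 conditions in this design
    rss = np.sum((np.asarray(observed) - np.asarray(predicted)) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Lower BIC is better; compare the 7-parameter full model with the
# 6-parameter single distance gain field model for each cell:
# distance_model_wins = bic(rates, pred_distance, 6) < bic(rates, pred_full, 7)
```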

Figure 4
A Single Distance Gain Field Model Does Just as Well to Explain the Data as the Separate Eye and Hand Gain Field Model

To test whether a different ratio of eye gain field to hand gain field might provide a better fit to the data, we fit the data to a range of models with a single gain field term, varying the ratio of the hand to eye gain field from −4.5:1 to 4.5:1 in increments of 0.2. Figure 4B shows the eightieth percentile of spike-variance explained per cell. (We plot the eightieth percentile of spike-variance explained in order to capture the effect of the ratio on the well-fit cells, maximizing the sensitivity of the measure.) The maximum spike-variance explained occurs when the eye gain is equal to −1 times the hand gain, although values slightly less than −1 perform nearly as well.

A Trained Neural Network Produces Eye-Hand Distance Gain Fields

To test for a potential role of eye and hand position gain fields in visually-guided reach transformations, we trained a three-layer network to perform a coordinate transformation analogous to that of the Zipser-Andersen network (Figure 5A). Feed-forward models have been shown to accurately account for realistic reference frame transformations in two or three dimensions (Zipser and Andersen, 1988; Blohm and Crawford, 2007; Blohm et al., 2009). Tuned visual inputs similar to those of V1 and separate linear eye position inputs (similar to those found in the brainstem eye position neurons) and linear hand position inputs were provided, and the network was trained via back-propagation to produce tuned hand-centered outputs. The hidden layer nodes of these trained networks had tuned eye-centered visual responses that were gain field modulated by both eye and hand position, and these gain field coefficients were negatively correlated with one another, very much as in PRR (Figure 5B). Across nodes, the eye and hand position gain field strengths span a wide range (1.21 ± 5.39 and 1.16 ± 5.06%/deg [unsigned median ± s.d.], respectively; both significantly different from zero: p < 0.00001, Wilcoxon signed rank), but in any one node, the eye and hand gain fields were similar in magnitude but opposite in sign to one another (Spearman’s rank correlation, r = −0.95, p < 0.00001). This results in a gain field representation of eye-hand distance.

Figure 5
A Neural Network Also Uses a Systematic Arrangement of Eye and Hand Gain Fields


Neurons in PRR selectively encode spatial targets for upcoming reaches using a receptive field code (Snyder et al., 1997; Batista et al., 1999). We now show that visual, memory, and motor responses are modulated in proportion to eye and hand position (gain fields). Within each PRR neuron, eye and hand gain fields are similar in strength but opposite in sign to one another (Figures 2 and 3). Two individual gain fields that are systematically related in this manner and whose effects add linearly are almost indistinguishable from a single gain field for the distance between gaze location and initial hand position (eye-hand distance).

There are at least two roles that a gain field for eye-hand distance might play in PRR. First, the gain field could help implement a transformation from eye- to hand-centered coordinates. Visual information in the early visual areas that project to PRR is referenced to the fovea (eye-centered), and many neurons in PRR are eye-centered (Batista et al., 1999). Motor commands for a reaching movement are necessarily referenced to muscles or joints. Compared to PRR, primary motor and premotor cortex use representations that are closer to “motor coordinates”, e.g., a hand-centered frame of reference that takes into account arm geometry (Caminiti et al., 1991; Scott et al., 1997; Herter et al., 2007; but see also Pesaran et al., 2006; Batista et al., 2007). Eye-hand distance gain fields in PRR might help to mediate the first step in this transformation. A second possibility is that PRR neurons may integrate information from different sensory streams into a single representation of target position. We discuss each of these possibilities in turn.

Reference Frame Transformation

Transforming a location from an eye-centered to a hand-centered reference frame in one or two dimensions requires subtracting the eye-hand distance from the eye-centered target location (Bullock and Grossberg, 1988; Shadmehr and Wise, 2005). In three dimensions, movements of the eyes to tertiary (oblique) positions or roll movements of the head will rotate the retina about the line of sight, complicating these transformations (Crawford and Vilis, 1991; Tweed and Vilis, 1987; Smith and Crawford, 2005; Blohm and Crawford, 2007; Blohm et al., 2009). Even secondary eye positions can lead to non-linear reference frame transformations if the hand vector is orthogonal to the direction of eye deviation (e.g., a horizontal hand movement executed when the eyes are positioned upward) (Crawford et al., 2000). In many cases, however, the computation reduces to the vector subtraction of the eye-hand distance vector (e.g., when the eye-hand distance is small, or when the eyes are centered and the head is upright; e.g., see Crawford et al., 2004). Vector subtraction is trivial when both postural information and target location are encoded using a proportional rate code in a Cartesian coordinate frame. Indeed, posterior parietal neurons often encode postural information (eye, hand, or head position) using a rate code (Andersen and Mountcastle, 1983; Brotchie et al., 1995). However, the visual system uses a receptive field code, or place code, to represent target locations (Poggio, 1990). Implementing a vector subtraction using a combination of place-coded and rate-coded signals is not trivial (Mays and Sparks, 1980). A neural network solves this task with gain fields in the hidden layer (see Introduction). We now report that both a neural network model (Figure 5) and individual neurons from PRR (Figures 2 and 3) contain eye and hand position gain fields, and that in both systems these gain fields are similar in strength but opposite in sign. These findings are consistent with PRR transforming visual spatial information from an eye- to a hand-centered frame of reference (Figure 6).
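In the simple planar case described here, the transformation amounts to a vector subtraction of the eye-hand distance. A toy sketch of that arithmetic (positions and names are illustrative, and this is not the place-coded network computation itself):

```python
import numpy as np

# All positions in degrees, expressed in a common (e.g., head-fixed) 2-D frame.
eye_pos = np.array([7.5, 0.0])                 # gaze fixation position
hand_pos = np.array([-7.5, 0.0])               # initial hand position
target_eye_centered = np.array([10.0, 5.0])    # target location relative to the fovea

eye_hand_distance = eye_pos - hand_pos         # the quantity the compound gain field encodes
target_hand_centered = target_eye_centered + eye_hand_distance
# target relative to the hand = target relative to the eye + (eye position - hand position)
print(target_hand_centered)                    # [25.  5.]
```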

Figure 6
A Schematic of a Proposed Coordinate Transformation Mediated by PRR

Sensory Integration

A possible alternative role for PRR is that it may integrate information from different sensory systems to produce a single unified representation (Lacquaniti and Caminiti, 1998). In order to perform a reach to a visible target, the brain must reconcile visual and proprioceptive or efference copy information (Sober and Sabes, 2003; Sober and Sabes, 2005; Ren et al., 2006). We have just shown that PRR contains gain fields for eye-hand distance. These representations are rate coded, consistent with a derivation from proprioception or efference copy signals (see discussion below). PRR also contains eye-centered representations of target locations, coded using receptive fields, consistent with a derivation from visual input (Batista et al., 1999). PRR might also receive visually-derived information regarding arm position. Some PRR neurons code target location in hand-centered coordinates using a receptive field code (Chang and Snyder, unpublished observations; PhD thesis, Batista, 1999; Fig. 4e in Buneo et al., 2002). While this representation could reflect the output of a reference frame transformation (see previous section), it could also reflect an independent input, derived directly from visual information (Graziano, 1999; Graziano et al., 2000; Buneo et al., 2002). In this case, the three pieces of information (proprioceptive information about eye-hand distance, visual information about target position relative to the eye, and visual information about target position relative to the arm) would be derived from two independent sources (vision and proprioception). Information from these two sources will contain uncorrelated noise, and so should be combined in a statistically optimal fashion to obtain the best possible estimate of target location. Pouget and colleagues have proposed that a reciprocal neural network can be used for this task (Pouget and Sejnowski, 1994; Pouget and Snyder, 2000; Pouget et al., 2002; Avillac et al., 2005). Their proposal suggests that the intermediate layer, like PRR, would show gain fields. Furthermore, PRR is also involved in planning auditory-guided reaches (Cohen and Andersen, 2000). Therefore it is plausible that PRR plays a role in reconciling information from different sensory systems (visual, proprioceptive, auditory) for the ultimate goal of planning reaching movements.

Gain Fields

Eye position gain fields are particularly common in visual pathways (Lal and Friedlander, 1990; Weyand and Malpeli, 1993; Galletti and Battaglini, 1989). Eye position gain field signals are more likely to derive from efference copy than from proprioception (Prevosto et al., 2009). Proprioceptive signals could come from somatosensory area 3a, but these signals are too slow to mediate spatial computations for action (Wang et al., 2007). Furthermore, extraocular muscle proprioception is not necessary for monkeys to perform either double step saccades (Guthrie et al., 1983) or visually-guided reaching (Lewis et al., 1998), suggesting that efferent eye position signals are sufficient. Although the idea that gain fields support neuronal computations is well-supported by modeling studies, direct evidence that gain fields are actually used for computation has been lacking.

LIP has been proposed to play a role in identifying salient targets to which a saccade might be directed (Snyder et al., 1997; Gottlieb et al., 1998; Kusunoki et al., 2000; Goldberg et al., 2002; Dickinson et al., 2003). Since visual information arrives in an eye-centered frame of reference, and since saccades are performed in an eye-centered frame of reference, is there a need for eye position signals? Parsing a gaze movement into an eye and head component requires information about current eye-in-head position. However, this computation could take place downstream of the parietal cortex, in the colliculus or brainstem (Mays and Sparks, 1980; Robinson et al., 1990; Crawford et al., 1991; Van Opstal et al., 1995; Groh and Sparks, 1996; Pare and Guitton, 1998; Groh et al., 2001). One argument for the use of eye position signals is that accurate execution of large saccades from tertiary eye positions (associated with ocular rotation about the line of sight) and in some cases from secondary eye positions, requires a knowledge of eye position (Crawford and Guitton, 1997; Crawford et al., 2000). Despite this argument, the role of eye position gain fields remains uncertain (Wang et al., 2007; Blohm et al., 2009).

A stronger argument could be made for gain fields subserving computation if the format of a gain field could be shown to be clearly related to the presumed function of an area, where that function was determined independently of the observation of the gain field. Area LIP does not provide such an example, since LIP was first proposed to play a role in reference frame transformations involving eye position precisely because eye position gain fields were observed in its neurons (Zipser and Andersen, 1988). Furthermore, the eye position gain fields in LIP, as in many other cortical areas, do not show a systematic relationship with, for example, receptive field location. While LIP receptive fields are primarily contralateral, gain fields are oriented in all directions (unpublished observations; Bremmer et al., 1998). (Similarly, eye gain fields in PMd operate in both contralateral and ipsilateral directions, Boussaoud et al., 1998.) This lack of systematicity is not evidence against the use of gain fields, because theoretical models of coordinate transformations indicate that a population of cells coding a diverse combination of variables, including gain fields, may be particularly desirable (Pouget and Sejnowski, 1994; Pouget and Snyder, 2000; Pouget et al., 2002; Blohm et al., 2009). However, the identification of a systematic relationship between gain fields and a computational goal would strengthen the claim that gain fields are in fact used for neural computations.

Coding of eye-hand distance using a compound gain field is exactly the form that one would expect if gain fields are to play a critical role in mediating between eye- and hand-centered representations of target locations (Bullock and Grossberg, 1988; Desmurget et al., 1999; Shadmehr and Wise, 2005; Buneo et al., 2002). The fact that this relationship exists is strong evidence that PRR gain fields are constructed for a specific computational purpose. While computational models suggest that a more haphazard arrangement of eye and hand gain fields could be used instead, such a combinatorial representation suffers from the curse of dimensionality: as more variables (e.g., target position, eye position, and hand position) are added to a network using such a non-systematic organization, exponentially more neurons are required for the representation. By requiring eye and hand gain fields to be similar and opposite, many fewer neurons are required to accomplish particular transformations.

There is no reason to believe that compound gain fields are unique to PRR. In fact, some LIP neurons have gain fields for both eye-in-head position and for head-on-body position. These gain fields tend to be matched in scale, and therefore can be thought of as comprising a compound gain field that encodes gaze relative to body (Snyder et al. 1998). Other potential modulatory influences have either not yet been tested or not been tested in a paradigm that will distinguish effects due to gain fields versus effects due to tuning shifts (Mullette-Gillman et al., 2009). An area involved in orienting the head towards visible targets (speculatively, area VIP), or an area involved in orienting the eyes to auditory targets, for example, might contain compound gain fields for the distance between head and eye position. Many other configurations are possible and have yet to be tested.

We sampled five configurations of eye and hand position and assumed that the gain field effects are linear between the two gain fields, that is, additive (Equation 1) rather than, for example, multiplicative (Equation 3). Our conclusion that eye-hand distance is coded by the combined gain fields is dependent on this assumption. However, unless the eye, hand, and target are a substantial distance apart, there is minimal difference between the additive and multiplicative models (see Equation 3 and Equation 4, and associated text in Experimental Procedures and Supplemental Material). It is even possible that non-linearities in the combination of eye and hand gain fields could explain the inaccuracies seen with reaching to targets in the far periphery (Bock, 1986; Enright, 1995; Henriques et al., 1998; Lewald and Ehrenstein, 2000; Medendorp and Crawford, 2002). Another interesting possibility is that these inaccuracies arise from the fact that the magnitude of the eye gain fields is slightly larger than that of the arm gain fields; this possibility could be tested in a modeling study.

In three dimensions, the transformation between eye-centered and hand-centered target location is more complex than merely shifting the origin by the eye-hand distance (Soechting and Flanders, 1992; McIntyre et al., 1997; Blohm and Crawford, 2007; Blohm et al., 2009). Whether the representations in PRR can account for these higher order issues is a matter for future study. It is also an open question whether and how PRR gain fields might account for eye-hand distances in depth.

In summary, the finding that eye and hand gain fields in individual PRR neurons are systematically related to one another provides strong physiological support for the hypothesis that gain fields are indeed used to perform specific computational tasks, and strongly supports the idea that PRR is involved in encoding targets for visually-guided reaching.

Experimental Procedures

Behavioral Tasks

We recorded neurons from two monkeys (Macaca mulatta) (see Supplemental Methods for general recording procedures). In the preferred direction mapping task, animals made center-out arm movements while maintaining central fixation. Animals first fixated and pointed at a blue center target (2.4° × 2.4°, within 4° radius). A peripheral target (2.4° × 2.4°) appeared at one of 16 locations at 12–14° eccentricity. Following a variable delay period (800 – 1200 ms), the center target shrank to a single pixel (0.3° × 0.3°) to signal the animal to make a reaching movement to the target without breaking eye fixation. This task was used to determine the preferred direction, that is, the direction associated with the maximum neuronal response.

In the gain field task (Figure 1A), one initial “eye” target and one initial “hand” target were illuminated simultaneously (both 0.9° × 0.9°). Monkeys first fixated the initial eye target at one of three possible positions (P1-P3; spaced 7.5° apart), then touched the initial hand target (P1-P3). One possible target (either the initial eye or hand target) was always at the center of the screen, directly in front of the animal. The other two possible targets were located ± 7.5° along an imaginary line through the center of the screen and perpendicular to the cell’s preferred direction, as determined in the preferred direction mapping task. Five different configurations of the starting eye fixation (orbital eye position) and hand (pointed position) targets were used (see box in Figure 1A). Four hundred and fifty ms after the animal touched and fixated the initial hand and eye target, a peripheral target (2.4° × 2.4°) for a final reach appeared at one of eight possible target locations. On each trial, animals maintained the initial eye and hand position (within 4° and 5° of the center, respectively) for a variable delay period (900 – 1300 ms) after the peripheral target onset. The initial eye and hand targets then shrank to a single pixel, cueing the animal to touch the peripheral target (within 5–6°) without moving the eyes from the eye target. For the current study, we describe only the five targets in or near the preferred direction (T1-T5 in Figure 1A; spaced 7.5° apart), lying on a line perpendicular to the preferred direction and 12–14° away from the center target (P2 to T3). There was also one target opposite to the preferred direction and two targets orthogonal to the preferred direction, all at 12–14° eccentricity. These three additional targets lay well outside the response field of the cells, and were included only to make target position less predictable, expanding the range of target locations from ± 45 deg to a full 360 deg. For each cell we collected 8.0 ± 1.1 repetitions (mean and mode ± s.d.) of each trial type.

Data Analysis

We computed the mean spike rate in a 200 ms “visual” interval (50 to 250 ms from target onset time), in a 700 ms delay period (850 to 150 ms before the time of the go signal), and in a 250 ms peri-movement period (200 ms before to 50 ms after movement onset). Similar results were obtained using slightly different time intervals and alignment points (e.g., a delay period from 150 to 850 ms after target onset). In order to examine the relationship between eye and hand gain fields after acquisition of the initial eye and hand targets but prior to the onset of a final reach target, we analyzed the activity from 400 ms before target onset (100 ms after acquiring the initial eye and hand targets) to 25 ms before target onset (“pre-target interval”).

We fitted mean spike rates from the visual, delay or movement intervals from individual cells in the 25 principal conditions (5 initial conditions × 5 targets) to a non-linear seven parameter model:

$$\text{Firing rate} = pa \times \exp\!\left(\frac{-(\theta - mid)^2}{2 \times sd^2}\right) \times \bigl(1 + E \times g_{Eye} + H \times g_{Hand}\bigr) + k,$$
$$\text{where } \theta = \tan^{-1}\!\left(\frac{T - \bigl(weight \times E + (1 - weight) \times H\bigr)}{ecc}\right) \qquad \text{(Equation 1)}$$

The model combines Gaussian tuning for a peripheral target with eye and hand gain fields. We refer to Equation 1 as “the full model” in the text. The fit was performed using the nls function in the R statistics package. The model inputs were the 25 mean firing rates (spikes/s), the eccentricity of the central target (ecc, the distance between P2 and T3 in Figure 1A), the target displacement away from the central target (T, the distance between the target and T3), and the displacement of the initial eye (E) and hand (H) targets from the center position (P2). All distances are in degrees of visual angle. The parameters that were fit from these data were the baseline (k) and peak amplitude of modulation (pa) (spikes/s); the offset of the Gaussian tuning curve from the central target (T3) (mid) and its standard deviation (sd), both in degrees of visual angle; the amplitudes of the eye position gain field (gEye) and the hand position gain field (gHand), both in fractional modulation per degree; and a unitless weight parameter (weight). The weight parameter determined the frame of reference for the Gaussian tuning, with weights of 1 or 0 corresponding to eye- or hand-centered tuning, respectively. Note that both our eye- and hand-centered frames of reference are constrained to lie within the plane of the screen on which targets were presented and touches were performed. Because the screen was flat, the distance of the points from the eyes and body changed with eccentricity. We did not take this into account in our model.

During the fitting procedure, the parameters were constrained as follows: from −5 to 100 sp/s for k, from 0 to 300 sp/s for pa, −1.5 to 2.5 for weight, −0.15 to +0.15 (−15% to +15%) of modulation per degree for gEye and gHand, −45° to 45° for mid, and 15° to 60° for sd. These constraints were based on previously recorded data and on inspection of model fits. The fitting procedure was identical for all of the alternate models, described below and in the Results: the model with a single distance gain field term (Equation 2), the model with no gain field terms, and the multiplicative gain field model (Equation 3).
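A sketch of how Equation 1 could be fit with bounded non-linear least squares (the paper used the nls function in R; this is a scipy analogue under the same parameterization and constraints, with variable names and the fixed eccentricity chosen by us):

```python
import numpy as np
from scipy.optimize import curve_fit

ECC = 13.0  # eccentricity of the central target (deg); 12-14 deg in the experiments

def full_model(X, k, pa, mid, sd, g_eye, g_hand, weight):
    """Equation 1: Gaussian target tuning scaled by additive eye and hand gain fields."""
    T, E, H = X  # target, eye, and hand displacements from the center position (deg)
    theta = np.degrees(np.arctan((T - (weight * E + (1 - weight) * H)) / ECC))
    gauss = np.exp(-(theta - mid) ** 2 / (2 * sd ** 2))
    return pa * gauss * (1 + E * g_eye + H * g_hand) + k

# rates: mean firing rates for the 25 conditions; T, E, H: matching design vectors
#                k    pa   mid  sd  g_eye  g_hand weight
lower_bounds = [-5,    0, -45, 15, -0.15, -0.15, -1.5]
upper_bounds = [100, 300,  45, 60,  0.15,  0.15,  2.5]
# p0 = [0, 20, 0, 25, 0, 0, 1]   # starting guess inside the bounds
# params, _ = curve_fit(full_model, (T, E, H), rates, p0=p0,
#                       bounds=(lower_bounds, upper_bounds))
```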

The details of the paradigm, including the number of targets, target spacing, and eccentricity, were established using a series of simulations. We simulated neuronal responses to a wide variety of task designs, using idealized cells whose characteristics (tuning width, response variability, etc.) were derived from PRR cells we had recorded in previous studies (Snyder et al., 1997; Calton et al., 2002; Chang et al., 2008). We varied the task parameters, used our idealized cells to generate artificial data, and then analyzed those data in order to optimize the task design and to ensure that the fitting procedure was reliable.

Details of Gain Field Modeling

If eye and hand position gain fields are negatively coupled, that is, if changes in eye position (E) result in a response modulation of ‘E x g’, and changes in hand position (H) result in a modulation of ‘H x (−g)’, then the gain field portion of Equation 1 can be simplified:

$$1 + E \times g_{Eye} + H \times g_{Hand} = 1 + E \times g + H \times (-g) = 1 + (E - H) \times g_{Dist} \qquad \text{(Equation 2)}$$
In words, the eye and hand position gain fields are replaced by a single gain field for the signed distance between the eyes and the hand.

Equation 2 assumes that eye and hand gain fields simply add together in a linear fashion. An alternate model embodies a multiplicative relationship between the eye and hand gain fields:

$$\text{Firing rate} = pa \times \exp\!\left(\frac{-(\theta - mid)^2}{2 \times sd^2}\right) \times \bigl(1 + E \times g_{Eye}\bigr) \times \bigl(1 + H \times g_{Hand}\bigr) + k,$$
$$\text{where } \theta = \tan^{-1}\!\left(\frac{T - \bigl(weight \times E + (1 - weight) \times H\bigr)}{ecc}\right) \qquad \text{(Equation 3)}$$

In our task conditions, parameters ‘E’ and ‘H’ were never both non-zero. As a result, the fit of our data to this model is identical to the fit of the full model (Equation 1). More generally, however, the difference between the additive and multiplicative model is small under many circumstances:

$$\bigl(1 + E \times g\bigr)\bigl(1 - H \times g\bigr) = 1 + (E - H) \times g - E \times H \times g^2 \qquad \text{(Equation 4)}$$
The difference term, $E \times H \times g^2$, is small when the eye or hand positions are close to central position, as was the case in our design (7.5°) and as is often (though not always) the case in normal behavior, since primates tend to aim their eyes and head toward the center of their workspace. If we had used starting eye and hand positions 50 deg apart then the additive and multiplicative models would have differed by 33% (based on the median value of the measured gain field strength, 2.3% per deg). However, in our paradigm, with the eye and hand within 15° of one another, the two models differ by less than 3%. Thus, in our task and in many natural behaviors, similar and opposite eye and hand gain fields can be approximated using a single gain field term based on the signed distance between the eyes and the hand (Equation 4).
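A quick numeric check of the worked numbers above (a sketch; g is taken as the median gain field strength of 2.3% per deg):

```python
g = 0.023                        # median gain field strength: 2.3% per deg
for half_sep in (25.0, 7.5):     # eye and hand placed symmetrically about the center
    E, H = half_sep, -half_sep
    additive = 1 + (E - H) * g                    # Equation 2 form
    multiplicative = (1 + E * g) * (1 - H * g)    # Equation 3 with g_hand = -g
    print(2 * half_sep, round(abs(multiplicative - additive), 3))
# 50 deg separation -> difference of ~0.33 (33%); 15 deg separation -> ~0.03 (3%)
```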

Evaluation of model fits

A total of 259 neurons were recorded from PRR in two monkeys. Model fits were judged based on how well the model accounted for firing rate. We took both the strength of the Gaussian tuning and the overall variance explained by the model into account. We combined these two factors into a single measure by multiplying variance explained (r2) by the peak modulation of the Gaussian fit (sp/s) to obtain “spike-variance explained” (sp/s). We accepted neurons with a criterion value of ≥ 5 sp/s of spike-variance explained. Acceptance criteria based on different criterion values of spike-variance explained (e.g., Figure 3B), on variance explained alone, or Chi-square tests of the goodness of fit, all resulted in similar conclusions regarding the relationship between eye and hand gain fields. Even when we considered all cells for which the model converged on a solution (255, or 98 %), the hand and eye gain field parameters were inversely related.

Model Comparisons

A sequential F test was used to compare the quality of fits between the full model and a reduced model that lacked either an eye gain field (gEye), a hand gain field term (gHand), or both gain terms. Cells were classified as having a gain field if they showed a significant improvement in the full model compared to at least one of the two reduced models (without gEye or gHand) (F test, p < 0.025). The F ratio for each cell was obtained using

$$F = \frac{(RSS.reduced - RSS.full)\;/\;(N.full - N.reduced)}{RSS.full\;/\;DF.full}$$
RSS.reduced and RSS.full refer to the residual sums of squares for the reduced and the full model, respectively. Similarly, N.full and N.reduced refer to the number of parameters for each model, and DF.full refers to the degrees of freedom of the full model.
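For illustration, a sketch of this nested-model comparison using the standard sequential F-test formula (our reconstruction from the terms above; variable names are ours):

```python
from scipy.stats import f as f_dist

def sequential_f_test(rss_full, rss_reduced, n_full, n_reduced, n_obs):
    """Does the full model fit significantly better than the nested reduced model?"""
    df_num = n_full - n_reduced        # extra parameters in the full model
    df_den = n_obs - n_full            # degrees of freedom of the full model
    F = ((rss_reduced - rss_full) / df_num) / (rss_full / df_den)
    p = f_dist.sf(F, df_num, df_den)   # upper-tail probability
    return F, p

# e.g., full model (7 parameters) vs. a reduced model lacking gEye (6 parameters),
# fit to the 25 condition means:
# F, p = sequential_f_test(rss_full, rss_without_geye, 7, 6, 25)
```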

In order to ask whether the ratio of eye to hand gain fields was really −1 or some other ratio (Figure 4B), we fit the data from all cells to a series of models similar to Equation 2 but with different fixed ratios between the eye and hand position gain field terms. Ratios of −4.5:1 to +4.5:1, at intervals of 0.2, were tested. Across each of the 45 models, cells for which the model did not converge were excluded (no more than 5% of all cells).

Neural Network Simulations

A three-layer feed-forward network was used for the simulation (Zipser and Andersen, 1988). The input layer consisted of 61 retinal inputs (mapped from −120 to +120°), two eye position inputs and two hand position inputs. The retinal inputs were activated in accordance with a Gaussian input representing an eye-centered target location (peak locations ranged from −45 to 45°, sd = 11°). The eye and hand position inputs were configured to be activated proportional to eye and hand position in the range from −20 to +20°. The network was trained using a back-propagation algorithm to compute a tuning curve for the hand-centered location of a target according to the equation

$$HC = EC - (HP - EP) = EC + EP - HP,$$
where HC, EC, EP, and HP represent hand-centered target location, eye-centered target location, eye position, and hand position, respectively. There were 24 hidden units and 61 output units. Output units were mapped from −120 to +120°.
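A minimal sketch of such a simulation (a 1-D version with sigmoid hidden units and plain online back-propagation; the activation function, learning rate, number of training steps, and the push-pull coding of the position inputs are our assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

retina_pref = np.linspace(-120, 120, 61)   # preferred locations of the 61 retinal inputs
output_pref = np.linspace(-120, 120, 61)   # preferred locations of the 61 output units
SD = 11.0                                  # Gaussian tuning width (deg)

def gaussian_pop(center, prefs, sd=SD):
    return np.exp(-(prefs - center) ** 2 / (2 * sd ** 2))

def make_trial():
    ec = rng.uniform(-45, 45)              # eye-centered target location
    ep = rng.uniform(-20, 20)              # eye position
    hp = rng.uniform(-20, 20)              # hand position
    hc = ec + ep - hp                      # hand-centered target (equation above)
    x = np.concatenate([gaussian_pop(ec, retina_pref),
                        [ep / 20.0, -ep / 20.0, hp / 20.0, -hp / 20.0]])
    y = gaussian_pop(hc, output_pref)
    return x, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = 65, 24, 61
W1 = rng.normal(0, 0.1, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_out, n_hid)); b2 = np.zeros(n_out)
lr = 0.05

for step in range(50000):                  # plain online back-propagation on squared error
    x, y = make_trial()
    h = sigmoid(W1 @ x + b1)               # hidden units: where gain fields emerge
    out = sigmoid(W2 @ h + b2)
    d_out = (out - y) * out * (1 - out)
    d_hid = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid

# After training, hidden-unit responses to a visual target can be probed at different
# eye and hand positions to estimate each unit's eye and hand gain fields.
```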

Supplementary Material



This work was supported by the National Eye Institute and the National Institute of Mental Health. We thank J. Douglas Crawford for helpful discussions regarding reference frame transformations, Greg DeAngelis for assistance in experimental design, Aaron Batista, Krishna Shenoy, and Chris Fetsch for comments on the manuscript, and Michael Goldberg for discussions on gain fields. We also thank Jason Vytlacil and Justin Baker for MR imaging, and Tom Malone, Emelia Proctor and Trevor Shew for technical assistance.



Supplemental Information

The Supplemental Information includes supplemental results, discussion, and 5 figures, and can be found online.


  • Andersen RA, Bracewell RM, Barash S, Gnadt JW, Fogassi L. Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. J Neurosci. 1990;10:1176–1196. [PubMed]
  • Andersen RA, Mountcastle VB. The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J Neurosci. 1983;3:532–548. [PubMed]
  • Andersen RA, Snyder LH, Batista AP, Buneo CA, Cohen YE. Posterior parietal areas specialized for eye movements (LIP) and reach (PRR) using a common coordinate frame. Novartis Found Symp. 1998;218:109–22. discussion 122–8, 171–5. [PubMed]
  • Avillac M, Deneve S, Olivier E, Pouget A, Duhamel JR. Reference frames for representing visual and tactile locations in parietal cortex. Nat Neurosci. 2005;8:941–949. [PubMed]
  • Batista AP. PhD dissertation. California Institute of Technology; 1999. Contributions of Parietal Cortex to Reach Planning; pp. 81–82.
  • Batista AP, Buneo CA, Snyder LH, Andersen RA. Reach plans in eye-centered coordinates. Science. 1999;285:257–260. [PubMed]
  • Batista AP, Santhanam G, Yu BM, Ryu SI, Afshar A, Shenoy KV. Reference frames for reach planning in macaque dorsal premotor cortex. J Neurophysiol. 2007;98:966–983. [PubMed]
  • Battaglia-Mayer A, Ferraina S, Genovesio A, Marconi B, Squatrito S, Molinari M, Lacquaniti F, Caminiti R. Eye-hand coordination during reaching. II. An analysis of the relationships between visuomanual signals in parietal cortex and parieto-frontal association projections. Cereb Cortex. 2001;11:528–544. [PubMed]
  • Blohm G, Crawford JD. Computations for geometrically accurate visually guided reaching in 3-D space. J Vis. 2007;7:4.1–422. [PubMed]
  • Blohm G, Keith GP, Crawford JD. Decoding the cortical transformations for visually guided reaching in 3D space. Cereb Cortex. 2009;19:1372–1393. [PubMed]
  • Bock O. Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Exp Brain Res. 1986;64:476–482. [PubMed]
  • Boussaoud D, Jouffrais C, Bremmer F. Eye position effects on the neuronal activity of dorsal premotor cortex in the macaque monkey. J Neurophysiol. 1998;80:1132–1150. [PubMed]
  • Boynton GM. A framework for describing the effects of attention on visual responses. Vision Res. 2009;49:1129–1143. [PMC free article] [PubMed]
  • Bradley DC, Maxwell M, Andersen RA, Banks MS, Shenoy KV. Mechanisms of heading perception in primate visual cortex. Science. 1996;273:1544–1547. [PubMed]
  • Bremmer F, Pouget A, Hoffmann KP. Eye position encoding in the macaque posterior parietal cortex. Eur J Neurosci. 1998;10:153–160. [PubMed]
  • Brotchie PR, Andersen RA, Snyder LH, Goodman SJ. Head position signals used by parietal neurons to encode locations of visual stimuli. Nature. 1995;375:232–235. [PubMed]
  • Brozovic M, Gail A, Andersen RA. Gain mechanisms for contextually guided visuomotor transformations. J Neurosci. 2007;27:10588–10596. [PubMed]
  • Bullock D, Grossberg S. Neural dynamics of planned arm movements: emergent invariants and speed-accuracy properties during trajectory formation. Psychol Rev. 1988;95:49–90. [PubMed]
  • Buneo CA, Jarvis MR, Batista AP, Andersen RA. Direct visuomotor transformations for reaching. Nature. 2002;416:632–636. [PubMed]
  • Burnod Y, Grandguillaume P, Otto I, Ferraina S, Johnson PB, Caminiti R. Visuomotor transformations underlying arm movements toward visual targets: a neural network model of cerebral cortical operations. J Neurosci. 1992;12:1435–1453. [PubMed]
  • Calton JL, Dickinson AR, Snyder LH. Non-spatial, motor-specific activation in posterior parietal cortex. Nat Neurosci. 2002;5:580–588. [PubMed]
  • Caminiti R, Ferraina S, Johnson PB. The sources of visual information to the primate frontal lobe: a novel role for the superior parietal lobule. Cereb Cortex. 1996;6:319–328. [PubMed]
  • Caminiti R, Johnson PB, Galli C, Ferraina S, Burnod Y. Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J Neurosci. 1991;11:1182–1197. [PubMed]
  • Chang SW, Dickinson AR, Snyder LH. Limb-specific representation for reaching in the posterior parietal cortex. J Neurosci. 2008;28:6128–6140. [PMC free article] [PubMed]
  • Cohen YE, Andersen RA. Reaches to sounds encoded in an eye-centered reference frame. Neuron. 2000;27:647–652. [PubMed]
  • Connor CE, Gallant JL, Preddie DC, Van Essen DC. Responses in area V4 depend on the spatial relationship between stimulus and attention. J Neurophysiol. 1996;75:1306–1308. [PubMed]
  • Crawford JD, Cadera W, Vilis T. Generation of torsional and vertical eye position signals by the interstitial nucleus of Cajal. Science. 1991;252:1551–1553. [PubMed]
  • Crawford JD, Guitton D. Visual-motor transformations required for accurate and kinematically correct saccades. J Neurophysiol. 1997;78:1447–1467. [PubMed]
  • Crawford JD, Henriques DY, Vilis T. Curvature of visual space under vertical eye rotation: implications for spatial vision and visuomotor control. J Neurosci. 2000;20:2360–2368. [PubMed]
  • Crawford JD, Medendorp WP, Marotta JJ. Spatial transformations for eye-hand coordination. J Neurophysiol. 2004;92:10–19. [PubMed]
  • Crawford JD, Vilis T. Axes of eye rotation and Listing’s law during rotations of the head. J Neurophysiol. 1991;65:407–423. [PubMed]
  • Desmurget M, Epstein CM, Turner RS, Prablanc C, Alexander GE, Grafton ST. Role of the posterior parietal cortex in updating reaching movements to a visual target. Nat Neurosci. 1999;2:563–567. [PubMed]
  • Dickinson AR, Calton JL, Snyder LH. Nonspatial saccade-specific activation in area LIP of monkey parietal cortex. J Neurophysiol. 2003;90:2460–2464. [PubMed]
  • Dobbins AC, Jeo RM, Fiser J, Allman JM. Distance modulation of neural activity in the visual cortex. Science. 1998;281:552–555. [PubMed]
  • Enright JT. The non-visual impact of eye orientation on eye-hand coordination. Vision Res. 1995;35:1611–1618. [PubMed]
  • Fattori P, Gamberini M, Kutz DF, Galletti C. ‘Arm-reaching’ neurons in the parietal area V6A of the macaque monkey. Eur J Neurosci. 2001;13:2309–2313. [PubMed]
  • Ferraina S, Brunamonti E, Giusti MA, Costa S, Genovesio A, Caminiti R. Reaching in depth: hand position dominates over binocular eye position in the rostral superior parietal lobule. J Neurosci. 2009;29:11461–11470. [PubMed]
  • Gail A, Andersen RA. Neural dynamics in monkey parietal reach region reflect context-specific sensorimotor transformations. J Neurosci. 2006;26:9376–9384. [PubMed]
  • Galletti C, Battaglini PP. Gaze-dependent visual neurons in area V3A of monkey prestriate cortex. J Neurosci. 1989;9:1112–1125. [PubMed]
  • Galletti C, Battaglini PP, Fattori P. Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. Eur J Neurosci. 1995;7:2486–2501. [PubMed]
  • Galletti C, Fattori P, Kutz DF, Battaglini PP. Arm movement-related neurons in the visual area V6A of the macaque superior parietal lobule. Eur J Neurosci. 1997;9:410–413. [PubMed]
  • Galletti C, Fattori P, Kutz DF, Gamberini M. Brain location and visual topography of cortical area V6A in the macaque monkey. Eur J Neurosci. 1999;11:575–582. [PubMed]
  • Goldberg ME, Bisley J, Powell KD, Gottlieb J, Kusunoki M. The role of the lateral intraparietal area of the monkey in the generation of saccades and visuospatial attention. Ann N Y Acad Sci. 2002;956:205–215. [PubMed]
  • Gottlieb JP, Kusunoki M, Goldberg ME. The representation of visual salience in monkey parietal cortex. Nature. 1998;391:481–484. [PubMed]
  • Graziano MS. Where is my arm? The relative role of vision and proprioception in the neuronal representation of limb position. Proc Natl Acad Sci U S A. 1999;96:10418–10421. [PubMed]
  • Graziano MS, Cooke DF, Taylor CS. Coding the location of the arm by sight. Science. 2000;290:1782–1786. [PubMed]
  • Groh JM, Sparks DL. Saccades to somatosensory targets. III. eye-position-dependent somatosensory activity in primate superior colliculus. J Neurophysiol. 1996;75:439–453. [PubMed]
  • Groh JM, Trause AS, Underhill AM, Clark KR, Inati S. Eye position influences auditory responses in primate inferior colliculus. Neuron. 2001;29:509–518. [PubMed]
  • Guthrie BL, Porter JD, Sparks DL. Corollary discharge provides accurate eye position information to the oculomotor system. Science. 1983;221:1193–1195. [PubMed]
  • Henriques DY, Klier EM, Smith MA, Lowy D, Crawford JD. Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci. 1998;18:1583–1594. [PubMed]
  • Herter TM, Kurtzer I, Cabel DW, Haunts KA, Scott SH. Characterization of torque-related activity in primary motor cortex during a multijoint postural task. J Neurophysiol. 2007;97:2887–2899. [PubMed]
  • Kusunoki M, Gottlieb J, Goldberg ME. The lateral intraparietal area as a salience map: the representation of abrupt onset, stimulus motion, and task relevance. Vision Res. 2000;40:1459–1468. [PubMed]
  • Lacquaniti F, Caminiti R. Visuo-motor transformations for arm reaching. Eur J Neurosci. 1998;10:195–203. [PubMed]
  • Lal R, Friedlander MJ. Effect of passive eye position changes on retinogeniculate transmission in the cat. J Neurophysiol. 1990;63:502–522. [PubMed]
  • Lewald J, Ehrenstein WH. Visual and proprioceptive shifts in perceived egocentric direction induced by eye-position. Vision Res. 2000;40:539–547. [PubMed]
  • Lewis JW, Van Essen DC. Mapping of architectonic subdivisions in the macaque monkey, with emphasis on parieto-occipital cortex. J Comp Neurol. 2000a;428:79–111. [PubMed]
  • Lewis JW, Van Essen DC. Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey. J Comp Neurol. 2000b;428:112–137. [PubMed]
  • Lewis RF, Gaymard BM, Tamargo RJ. Efference copy provides the eye position information required for visually guided reaching. J Neurophysiol. 1998;80:1605–1608. [PubMed]
  • Marzocchi N, Breveglieri R, Galletti C, Fattori P. Reaching activity in parietal area V6A of macaque: eye influence on arm activity or retinocentric coding of reaching movements? Eur J Neurosci. 2008;27:775–789. [PMC free article] [PubMed]
  • Mays LE, Sparks DL. Dissociation of visual and saccade-related responses in superior colliculus neurons. J Neurophysiol. 1980;43:207–232. [PubMed]
  • Mazzoni P, Andersen RA, Jordan MI. A more biologically plausible learning rule for neural networks. Proc Natl Acad Sci U S A. 1991;88:4433–4437. [PubMed]
  • McIntyre J, Stratta F, Lacquaniti F. Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. J Neurophysiol. 1997;78:1601–1618. [PubMed]
  • Medendorp WP, Crawford JD. Visuospatial updating of reaching targets in near and far space. Neuroreport. 2002;13:633–636. [PubMed]
  • Mullette-Gillman OA, Cohen YE, Groh JM. Motor-related signals in the intraparietal cortex encode locations in a hybrid, rather than eye-centered reference frame. Cereb Cortex. 2009;19:1761–1775. [PMC free article] [PubMed]
  • Nakamura K, Chung HH, Graziano MS, Gross CG. Dynamic representation of eye position in the parieto-occipital sulcus. J Neurophysiol. 1999;81:2374–2385. [PubMed]
  • Pare M, Guitton D. Brain stem omnipause neurons and the control of combined eye-head gaze saccades in the alert cat. J Neurophysiol. 1998;79:3060–3076. [PubMed]
  • Pesaran B, Nelson MJ, Andersen RA. Dorsal premotor neurons encode the relative position of the hand, eye, and goal during reach planning. Neuron. 2006;51:125–134. [PMC free article] [PubMed]
  • Poggio T. A theory of how the brain might work. Cold Spring Harb Symp Quant Biol. 1990;55:899–910. [PubMed]
  • Pouget A, Deneve S, Duhamel JR. A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci. 2002;3:741–747. [PubMed]
  • Pouget A, Sejnowski TJ. A neural model of the cortical representation of egocentric distance. Cereb Cortex. 1994;4:314–329. [PubMed]
  • Pouget A, Snyder LH. Computational approaches to sensorimotor transformations. Nat Neurosci. 2000;3(Suppl):1192–1198. [PubMed]
  • Prevosto V, Graf W, Ugolini G. Posterior parietal cortex areas MIP and LIPv receive eye position and velocity inputs via ascending preposito-thalamo-cortical pathways. Eur J Neurosci. 2009;30:1151–1161. [PubMed]
  • Quian Quiroga R, Snyder LH, Batista AP, Cui H, Andersen RA. Movement intention is better predicted than attention in the posterior parietal cortex. J Neurosci. 2006;26:3615–3620. [PubMed]
  • Ren L, Khan AZ, Blohm G, Henriques DY, Sergio LE, Crawford JD. Proprioceptive guidance of saccades in eye-hand coordination. J Neurophysiol. 2006;96:1464–1477. [PubMed]
  • Robinson DL, McClurkin JW, Kertzman C. Orbital position and eye movement influences on visual responses in the pulvinar nuclei of the behaving macaque. Exp Brain Res. 1990;82:235–246. [PubMed]
  • Salinas E, Abbott LF. Transfer of coded information from sensory to motor networks. J Neurosci. 1995;15:6461–6474. [PubMed]
  • Salinas E, Abbott LF. A model of multiplicative neural responses in parietal cortex. Proc Natl Acad Sci U S A. 1996;93:11956–11961. [PubMed]
  • Salinas E, Sejnowski TJ. Gain modulation in the central nervous system: where behavior, neurophysiology, and computation meet. Neuroscientist. 2001;7:430–440. [PMC free article] [PubMed]
  • Salinas E, Thier P. Gain modulation: a major computational principle of the central nervous system. Neuron. 2000;27:15–21. [PubMed]
  • Scherberger H, Jarvis MR, Andersen RA. Cortical local field potential encodes movement intentions in the posterior parietal cortex. Neuron. 2005;46:347–354. [PubMed]
  • Scott SH, Sergio LE, Kalaska JF. Reaching movements with similar hand paths but different arm orientations. II. Activity of individual cells in dorsal premotor cortex and parietal area 5. J Neurophysiol. 1997;78:2413–2426. [PubMed]
  • Shadmehr R, Wise SP. The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning. Cambridge, MA: MIT Press; 2005.
  • Siegel RM, Raffi M, Phinney RE, Turner JA, Jando G. Functional architecture of eye position gain fields in visual association cortex of behaving monkey. J Neurophysiol. 2003;90:1279–1294. [PubMed]
  • Smith MA, Crawford JD. Distributed population mechanism for the 3-D oculomotor reference frame transformation. J Neurophysiol. 2005;93:1742–1761. [PubMed]
  • Snyder LH, Batista AP, Andersen RA. Coding of intention in the posterior parietal cortex. Nature. 1997;386:167–170. [PubMed]
  • Snyder LH, Grieve KL, Brotchie P, Andersen RA. Separate body- and world-referenced representations of visual space in parietal cortex. Nature. 1998;394:887–891. [PubMed]
  • Snyder LH, Dickinson AR, Calton JL. Preparatory delay activity in the monkey parietal reach region predicts reach reaction times. J Neurosci. 2006;26:10091–10099. [PubMed]
  • Sober SJ, Sabes PN. Multisensory integration during motor planning. J Neurosci. 2003;23:6982–6992. [PubMed]
  • Sober SJ, Sabes PN. Flexible strategies for sensory integration during motor planning. Nat Neurosci. 2005;8:490–497. [PMC free article] [PubMed]
  • Soechting JF, Flanders M. Moving in three-dimensional space: frames of reference, vectors, and coordinate systems. Annu Rev Neurosci. 1992;15:167–191. [PubMed]
  • Tweed D, Vilis T. Implications of rotational kinematics for the oculomotor system in three dimensions. J Neurophysiol. 1987;58:832–849. [PubMed]
  • Van Opstal AJ, Hepp K, Suzuki Y, Henn V. Influence of eye position on activity in monkey superior colliculus. J Neurophysiol. 1995;74:1593–1610. [PubMed]
  • Wang X, Zhang M, Cohen IS, Goldberg ME. The proprioceptive representation of eye position in monkey primary somatosensory cortex. Nat Neurosci. 2007;10:640–646. [PubMed]
  • Weyand TG, Malpeli JG. Responses of neurons in primary visual cortex are modulated by eye position. J Neurophysiol. 1993;69:2258–2260. [PubMed]
  • White RL 3rd, Snyder LH. A neural network model of flexible spatial updating. J Neurophysiol. 2004;91:1608–1619. [PubMed]
  • Xing J, Andersen RA. Models of the posterior parietal cortex which perform multimodal integration and represent space in several coordinate frames. J Cogn Neurosci. 2000;12:601–614. [PubMed]
  • Zipser D, Andersen RA. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature. 1988;331:679–684. [PubMed]