Cereb Cortex. 2009 August; 19(8): 1761–1775.
Published online 2008 December 9. doi: 10.1093/cercor/bhn207
PMCID: PMC2705694

Motor-Related Signals in the Intraparietal Cortex Encode Locations in a Hybrid, rather than Eye-Centered Reference Frame

Abstract

The reference frame used by intraparietal cortex neurons to encode locations is controversial. Many previous studies have suggested eye-centered coding, whereas we have reported that visual and auditory signals employ a hybrid reference frame (i.e., a combination of head- and eye-centered information) (Mullette-Gillman et al. 2005). One possible explanation for this discrepancy is that sensory-related activity, which we studied previously, is hybrid, whereas motor-related activity might be eye centered. Here, we examined the reference frame of visual and auditory saccade-related activity in the lateral and medial banks of the intraparietal sulcus (areas lateral intraparietal area [LIP] and medial intraparietal area [MIP]) of 2 rhesus monkeys. We recorded from 275 single neurons as monkeys performed visual and auditory saccades from different initial eye positions. We found that both visual and auditory signals reflected a hybrid of head- and eye-centered coordinates during both target and perisaccadic task periods rather than shifting to an eye-centered format as the saccade approached. This account differs from numerous previous recording studies. We suggest that the geometry of the receptive field sampling in prior studies was biased in favor of an eye-centered reference frame. Consequently, the overall hybrid nature of the reference frame was overlooked because the non–eye-centered response patterns were not fully characterized.

Keywords: coordinate transformation, eye position, posterior parietal cortex, primate, reference frame, saccade

Introduction

The intraparietal cortex is implicated in the processing of spatial information and likely plays a role in guiding attention to, remembering, and responding to the locations of sensory stimuli (for reviews, see Andersen and Gnadt 1989; Colby and Goldberg 1999). The “frame of reference” of signals in the intraparietal cortex is currently a matter of controversy. Here, we define frame of reference operationally to mean the body part relative to which the response fields show the best alignment. For example, in an eye-centered reference frame, the response fields maintain a consistent position with respect to the direction of the eyes as they move with respect to the head, whereas in a head-centered reference frame (In the present study, the head is immobilized with respect to the world. Thus, head-, body- and world-centered reference frames are all stable with respect to each other in our experiments. For convenience, we will refer to this collection of potential reference frames as head centered.), the response fields maintain a consistent position with respect to the head irrespective of eye movements. This definition is agnostic about potential changes in the magnitude of the response at different fixation positions.

Several recording studies have demonstrated that visual signals are heavily influenced by eye position (e.g., Andersen and Mountcastle 1983; Andersen et al. 1985, 1990; Batista et al. 1999) but have described the reference frame as predominantly eye centered despite this eye position influence—in other words, these studies have suggested that the response fields align in an eye-centered reference frame and that only the response magnitude (i.e., gain) varies at different eye positions. Related studies involving the “double step” paradigm have investigated visual and visual memory response patterns before and after the eyes move to a new location. The findings from these studies have been described as being consistent with an eye-centered reference frame that is updated when the eyes move (Duhamel et al. 1992; Colby et al. 1995). Similarly, microstimulation studies in head-unrestrained animals have found that saccades evoked by electrically activating the intraparietal cortex have a constant direction and amplitude with respect to the eye, regardless of initial eye position, again suggesting an eye-centered reference frame (Constantin et al. 2007; see also Thier and Andersen 1998).

In contrast, we recently obtained results that were inconsistent with a predominantly eye-centered reference frame in the intraparietal cortex. We investigated visual sensory signals by sampling slices of the response fields for multiple fixation positions (Mullette-Gillman et al. 2005; see also Snyder 2005). Our analysis method focused on the alignment of the response fields, setting aside any potential gain modulations. We reported that the reference frames of individual neurons ranged from predominantly eye centered to predominantly head centered, with most neurons reflecting an intermediate, or hybrid, reference frame in which the neural discharge patterns were not uniquely determined by target location in any single, pure reference frame. We observed a similar pattern for auditory signals, consistent with previous results (Stricanne et al. 1996).

In this study, we explore possible explanations for these conflicting findings. Experimentally, we consider the possibility that we missed eye-centered activity by focusing on sensory-related activity in our previous study. Accordingly, in this study, we focus on the motor-related activity in LIP. Motor-related activity might be a better measure of what each individual neuron “votes” for during the read out process. We investigated the motor-related representation of visual and auditory targets in lateral and medial intraparietal neurons in monkeys performing a delayed saccade task.

We found that both visual and auditory signals continue to be encoded in a hybrid reference frame at the time of the movement, just as they are during the sensory response period. Given our failure to find evidence for a predominantly eye-centered representation in the intraparietal cortex in either the sensory- or motor-related activity periods, we consider other explanations. We reevaluate numerous prior studies and conclude that the geometry of how the response fields were sampled may have biased these studies’ results to favor eye-centered coordinates. We concur with previous studies that eye position interacts with visual signals to produce response patterns in the intraparietal cortex that are not dictated strictly by the pattern of illumination on the retina (Andersen et al. 1985, 1990; Batista et al. 1999; Cohen and Andersen 2000), but we conclude that the resulting representation includes eye-centered, head-centered, and hybrid-response patterns.

Materials and Methods

The neuronal data set described here has been the subject of a previous study (Mullette-Gillman et al. 2005). In brief, 275 neurons from the right IPS (the lateral and medial banks of the intraparietal sulcus, areas LIP and MIP) of 2 rhesus monkeys (1 male, 1 female) were recorded while the animals performed a saccade task to either visual or auditory targets from several initial eye positions. We confirmed the locations of these recording sites using magnetic resonance imaging (MRI) as has been presented previously (Mullette-Gillman et al. 2005).

Stimuli and Behavioral Task

Targets were presented from a stimulus array of 9 speakers with a light-emitting diode (LED) attached to each speaker's face (Fig. 1a). The speakers were placed from 24° left to 24° right of the monkey in 6° increments, at an elevation of 0° relative to straight ahead. Additional LEDs that served as fixation positions were located 12° right, 0° (center), and 12° left at an elevation of ±18°. Either the upper row (+18°) or the lower row (−18°) of fixation positions was chosen for use during recording of each individual neuron. After isolating a neuron but before beginning the experiment, we qualitatively tested whether the neural activity was more effectively driven by the targets when the monkey fixated the upper row of fixation lights versus the lower row. To ensure that we obtained adequate numbers of trials per condition, we limited further testing to the row that allowed us to most effectively test the reference frame of the neuron's response field. With this procedure, the mean number of trials per condition was 6.7 (standard deviation [SD] 1.4) for an average of 361.8 trials per neuron; each trial condition was 1 of the 3 eye positions, 1 of the 9 target locations, and 1 of the 2 modalities. We note that prescreening at each fixation row introduced a modest selection bias in favor of detecting eye-centered versus head-centered neurons: We probed 2 sets of eye-centered locations (above and below fixation) but only one fixed set of head-centered locations before continuing the main experiment. Auditory targets were band-pass white noise bursts (500 Hz–18 kHz; rise time of 10 ms) at 50 ± 2 dB sound pressure level (“A” weighting; Bruel and Kjaer Model 2237 integrating sound level meter with Model 4137 condenser microphone).

Figure 1.
Experimental design. (a) Fixation LEDs were located 12° right, 0°, and 12° left at an elevation of ±18°; either the upper set or the lower set were used for the characterization of any given neuron. Saccade targets ...

Monkeys performed an overlap-delayed saccade task (Fig. 1b) to auditory and visual targets (all conditions randomly interleaved). The task began with the onset of an LED that the monkey was required to fixate. After 900–1300 ms, a sensory target (either auditory or visual) was presented. Following a delay of 600–900 ms, the fixation light was extinguished and the monkey had 500 ms to shift its gaze to the location of the still-present target. After successful completion of a trial, the monkey received a juice or water reward. In some sessions, the monkeys also performed a memory-guided saccade task to aid in functionally defining areas LIP/MIP; results from this task were discussed in Mullette-Gillman et al. (2005) and are not considered further here.

Data Analysis

Action potentials were analyzed during several periods of time: prior to the onset of the target stimulus (baseline period), the response to the target stimulus (target period), and the initiation of the saccade (perisaccade period). Saccade onset was detected using a velocity-based algorithm (EyeMove software). The baseline period was defined as the 300-ms period prior to target onset. The target period was the 450-ms period that began 50 ms after target onset. The saccadic period began 150 ms before saccade initiation and ended 100 ms after saccade initiation (250 ms total length). (The ending point of this window was chosen so that it would include the full duration of most saccades without being contaminated by new visual responses after the saccade. The average saccade duration was 58.2 ms for visual saccades and 57.4 ms for auditory saccades. The SDs were 23.9 and 23.0 ms. Therefore, the upper ends of the 95% confidence intervals (CIs; mean + 1.96 × SD) of the saccade durations were 105 and 102 ms, respectively.) We report neural data in terms of firing rate: the number of action potentials divided by the length of the analysis period (i.e., spikes per second).
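To make the windowing and the CI arithmetic concrete, here is a minimal Python sketch; the function and window names are ours (not from the original analysis code), and spike times are assumed to be in seconds relative to each alignment event.

```python
import numpy as np

def firing_rate(spike_times, t_start, t_end):
    """Number of spikes in [t_start, t_end) divided by the window length (s)."""
    spike_times = np.asarray(spike_times)
    n = np.count_nonzero((spike_times >= t_start) & (spike_times < t_end))
    return n / (t_end - t_start)

# Analysis windows relative to their alignment events, in seconds
BASELINE = (-0.300, 0.000)     # 300 ms before target onset
TARGET = (0.050, 0.500)        # 450 ms beginning 50 ms after target onset
PERISACCADE = (-0.150, 0.100)  # 150 ms before to 100 ms after saccade onset

# Upper end of the 95% CI on saccade duration: mean + 1.96 * SD
for label, mean_ms, sd_ms in (("visual", 58.2, 23.9), ("auditory", 57.4, 23.0)):
    print(f"{label}: {mean_ms + 1.96 * sd_ms:.0f} ms")  # 105 and 102 ms
```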

Responsiveness and Spatial Tuning

Neurons were included for further analysis if their firing rate was modulated by the target location (Table 1). This modulation was assessed by an analysis of variance (ANOVA) with target location and fixation position as the independent factors. Neurons were defined as being “modulated” by the task if the ANOVA revealed a reliable (P < 0.05) main effect for target location or a reliable interaction between target location and fixation position. This test was conducted on the firing rates elicited during the target or perisaccadic periods. For each time period and for each target modality, the ANOVA was conducted twice: 1) when target location was defined with respect to the head and 2) when target location was defined with respect to the eyes. Locations that were not tested in both reference frames were excluded from the analysis.
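A minimal sketch of this inclusion test, assuming the trial data sit in a pandas DataFrame with hypothetical column names (rate, target_loc, fix_pos); the paper does not specify the software used for its ANOVA. The test would be run twice per neuron and time period, once with target_loc coded relative to the head and once relative to the eyes.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def modulated(df: pd.DataFrame, alpha: float = 0.05) -> bool:
    """Two-way ANOVA of firing rate with target location and fixation
    position as independent factors. Returns True if the main effect of
    target location or the target x fixation interaction is reliable."""
    model = ols("rate ~ C(target_loc) * C(fix_pos)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)
    p_target = table.loc["C(target_loc)", "PR(>F)"]
    p_interaction = table.loc["C(target_loc):C(fix_pos)", "PR(>F)"]
    return (p_target < alpha) or (p_interaction < alpha)
```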

Table 1
Results of statistical analysis on sensitivity to the location of the target during the target and perisaccadic periods

Quantitative Analyses of Reference Frame

To quantify the reference frame in which neurons code spatial information, we compared the alignment of each neuron's spatial tuning functions when defined in an eye-centered versus head-centered reference frame (see Mullette-Gillman et al. 2005). For each neuron and response period, we calculated the average firing rate for each target location to generate a spatial tuning function for each of the 3 initial eye positions. Next, we tested whether the 3 spatial tuning functions aligned better when target location was defined with respect to the eyes or with respect to the head. (Because we did not vary the monkeys’ head position, we cannot disambiguate between head-, body-, and world-centered coordinates, but these reference frames maintained a constant relationship with each other in our experiments.) Specifically, we calculated the correlation coefficient between the spatial tuning functions at the left versus center and right versus center initial fixation positions and then averaged these 2 values. This average correlation coefficient was calculated in 8 different ways for each neuron (a 2 × 2 × 2 design): the 2 response periods (target and perisaccade), the 2 reference frames (target locations defined with respect to the head or with respect to the eyes), and the 2 sensory modalities. For this analysis, we only included target locations that were present for all 3 fixation positions in both the head- and eye-centered frames of reference (n = 5 locations: −12°, −6°, 0°, 6°, and 12°).
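The alignment computation can be sketched as follows in Python; the array layout and names are our own reconstruction from the geometry described above (9 head-centered speaker locations, fixations at −12°, 0°, and +12°), not the authors' code.

```python
import numpy as np

HEAD_LOCS = np.arange(-24, 25, 6)  # 9 speaker locations (deg, head-centered)
COMMON = np.arange(-12, 13, 6)     # the 5 locations tested in both frames

def tuning_slice(rates, fix, frame):
    """Mean rates at the 5 common locations, defined in the given frame.

    rates: length-9 array of mean firing rate per head-centered speaker.
    An eye-centered location e seen from fixation fix corresponds to
    head-centered location e + fix, so the eye-centered slice is the
    head-centered slice shifted by the fixation position."""
    locs = COMMON if frame == "head" else COMMON + fix
    idx = [int(np.flatnonzero(HEAD_LOCS == loc)[0]) for loc in locs]
    return rates[idx]

def frame_corr(rates_by_fix, frame):
    """Average of corr(left vs. center) and corr(right vs. center).

    rates_by_fix: dict mapping fixation (-12, 0, 12) to a length-9 array."""
    center = tuning_slice(rates_by_fix[0], 0, frame)
    cc = [np.corrcoef(tuning_slice(rates_by_fix[f], f, frame), center)[0, 1]
          for f in (-12, 12)]
    return float(np.mean(cc))
```

Comparing frame_corr(..., "head") against frame_corr(..., "eye") then yields the pair of coefficients analyzed in the Results.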

The value of the correlation coefficient can range from −1 to 1. If the correlation coefficient equals −1, it indicates that the response functions were perfectly anticorrelated with one another. If the correlation coefficient equals 0, it indicates that the response functions at the different fixation positions are not related. If the correlation coefficient equals 1, it indicates that the response functions were perfectly aligned in the reference frame used for the calculation (e.g., head-centered reference frame).

Figure 2 illustrates schematically how this metric would correspond to different kinds of reference frames. If a neuron's response field aligns well in an eye-centered reference frame (e.g., Fig. 2a,b), the eye-centered correlation coefficient will be higher than the head-centered correlation coefficient (Fig. 2b). Conversely, if its response fields align well in a head-centered reference frame (e.g., Fig. 2c), its head-centered correlation coefficient will be higher than its eye-centered correlation coefficient. Partially shifting response fields and complex interactions in which the neuron simply seems to have a “new” response field at each tested eye position, unrelated to its response fields at other eye positions, would produce similar head- and eye-centered correlation coefficients and thus would both be categorized as “hybrid” in this analysis. Because this correlation analysis is invariant to changes in gain, such as eye position modulations, our reference frame analyses are not confounded by such eye position modulations.

Figure 2.
Schematic of various possible reference frame representations depicting neuronal response functions for 3 eye positions. (a) Response function for a neuron encoding in a pure eye-centered reference frame, in which the neuronal response depends solely ...

We calculated the variance of this metric using a bootstrap analysis (100 iterations of 80% of data for each target location/eye position combination). This bootstrap analysis allowed us to estimate the variance of this measure from which we defined a 95% confidence area (±1.96 × SD) centered on the mean of the bootstrap distribution.
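A sketch of this resampling procedure, under the stated parameters (100 iterations, 80% of the trials per condition, drawn without replacement); the data structure and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(stat_fn, trials_by_cond, n_iter=100, frac=0.8):
    """95% confidence area (mean +/- 1.96 * SD of the bootstrap
    distribution) for any statistic of the per-condition mean rates.

    trials_by_cond: dict mapping a (target location, fixation) condition
    to an array of single-trial firing rates; stat_fn maps a dict of
    {condition: mean rate} to the statistic of interest (e.g., a head-
    or eye-centered correlation coefficient)."""
    vals = []
    for _ in range(n_iter):
        means = {cond: rng.choice(trials,
                                  size=max(1, int(frac * len(trials))),
                                  replace=False).mean()
                 for cond, trials in trials_by_cond.items()}
        vals.append(stat_fn(means))
    vals = np.asarray(vals)
    return vals.mean() - 1.96 * vals.std(), vals.mean() + 1.96 * vals.std()
```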

Results

Temporal Response Profile

In general, parietal neurons began responding to the target at the time of target onset. This activity either 1) remained sustained until the time of the saccade or 2) diminished and increased again as the saccade approached. Figure 3 illustrates the temporal profile of the discharge patterns of the population of neurons with elevated activity at the time of the saccade. This figure shows a population perievent time histogram (PETH) that is aligned on target onset and another that is aligned on saccade onset. These PETHs were constructed from the activity of neurons showing statistically significant perisaccadic activity (by ANOVA, see Materials and Methods) to either visual (n = 121; Fig. 3a) or auditory targets (n = 61; Fig. 3b). This population response was generated by identifying, for each neuron, the combination of target location and eye position that generated the highest firing rate, normalizing each neuron's PETH, and then averaging together all of the individual PETHs.
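A compact sketch of this normalization-and-averaging step, assuming each neuron's PETHs are stacked in a hypothetical array indexed by condition:

```python
import numpy as np

def population_peth(peths):
    """peths: array (n_neurons, n_conditions, n_bins), one PETH per
    target-location x eye-position combination for each neuron.
    Keep each neuron's best condition (highest mean rate), normalize
    that PETH to its own peak, and average across neurons."""
    best_cond = peths.mean(axis=2).argmax(axis=1)        # (n_neurons,)
    best = peths[np.arange(peths.shape[0]), best_cond]   # (n_neurons, n_bins)
    best = best / np.maximum(best.max(axis=1, keepdims=True), 1e-12)
    return best.mean(axis=0)
```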

Figure 3.
Population PETH of visual (a) and auditory (b) activity, synchronized on the target onset and saccade onset (dashed lines). Only neurons with statistically significant “perisaccadic” activity (by ANOVA, see Materials and Methods) to either ...

Two individual example neurons that illustrate these 2 temporal patterns are shown in Figures 4 and 5. The neuron in Figure 4 had a distinct saccade-related burst that began slightly before the movement and reached its peak shortly after saccade onset. This neuron also had a smaller burst time locked to the onset of the visual target (but not the auditory target). The neuron in Figure 5 had a more sustained pattern of activation for visual targets that began with target onset and lasted until the saccade. Unlike the neuron shown in Figure 4, this neuron did not have a burst of activity associated with the saccade. Also, the neuron shown in Figure 5 was relatively unresponsive during auditory trials.

Figure 4.
The visual and auditory responses of an example neuron, synchronized on target onset and saccade onset. The neuron presented demonstrates transient responses to visual stimuli during both the sensory and saccadic periods, while only responding robustly ...
Figure 5.
The visual and auditory responses of an example neuron, synchronized on target onset and saccade onset. These data involved trials with the central fixation position. The presented neuron had a sustained response to visual stimuli (response began upon ...

These 2 neurons exemplify 2 different ends of a continuum of temporal response profiles rather than representing 2 discrete categories. We did not find any evidence that neurons with larger perisaccadic bursts were more likely to use one kind of reference frame or another (data not shown). Accordingly, the results presented in the succeeding sections derive from the entire population of all neurons that showed spatial sensitivity (by ANOVA) to visual or auditory target location during the time window 150 ms prior to 100 ms after saccade onset, regardless of the neuron's specific temporal response profile.

Reference Frame

Our chief aim in this study was to test the reference frame of the visual and auditory signals present in intraparietal neurons during a perisaccadic time window as compared with a time window synchronized to the onset of the target. We tested a neuron's reference frame by comparing the correlation between the target or perisaccadic spatial response functions when they were measured at different initial fixation positions with respect to head-centered or eye-centered coordinates.

Figure 6 illustrates the perisaccadic response functions of the 2 neurons shown previously in Figures 4 and 5. Panel 6a shows the visual response functions of the neuron in Figure 4. The left graph shows the perisaccadic activity as a function of the head-centered location of the visual target, and the right panel shows the same activity realigned as a function of the eye-centered location of the target. The 3 response functions are better aligned when plotted as a function of eye-centered target location, suggesting that this neuron's visual responses encode the location of a visual target in a predominantly eye-centered frame of reference.

Figure 6.
Reference frame of activity for several example neurons. Panels (a) and (b) show the visual and auditory responses during the perisaccadic period for the same neuron shown in Figure 4, and panel (c) shows the visual responses of the neuron shown in Figure ...

The auditory response functions of this neuron are also predominantly eye centered (Fig. 6b) because the peaks of the responses are better aligned when target location is plotted as a function of eye-centered target location (right-hand graph) than head-centered target location. There is still a considerable difference in the responses across the 3 different initial fixation positions (i.e., the 3 different traces are not superimposed), but because the peaks align in an eye-centered reference frame, the correlation coefficient in eye-centered coordinates is considerably higher than it is in a head-centered reference frame.

An example neuron lacking a clear reference frame is depicted in Figure 6c, which shows the response functions of the neuron illustrated in Figure 5. As can be seen, the visual response peaks for 2 of the 3 initial fixation positions match slightly better when plotted as a function of the head-centered versus eye-centered location of the target. But, overall, there is no greater consistency in the response functions when plotted in one reference frame versus the other reference frame. (As mentioned previously, this neuron did not respond to auditory stimuli.)

We quantitatively evaluated reference frame by calculating a correlation coefficient between each neuron's response functions for different eye positions when plotted in head- versus eye-centered coordinates. If the response functions align better in one reference frame than the other, then the correlation coefficient for the better reference frame will be higher than the correlation coefficient in the other reference frame, even if there is a difference in the magnitude of the responses at different eye positions as is the case for the neuron in Figure 6b (see also Fig. 2). We conducted this analysis for only those neurons that had statistically significant spatial sensitivity (see Materials and Methods). The population results for both the target and perisaccadic periods are shown in Figure 7, and the perisaccadic results for the individual example cells as compared with the population are shown in the insets of Figure 6. These graphs show the head-centered correlation coefficient (y-axis) versus the eye-centered correlation coefficient (x-axis). Neurons whose response functions align better in head-centered coordinates lie in the upper quadrant, whereas more eye-centered neurons lie in the lower quadrant. The error bars indicate 95% CIs; neurons whose CIs do not include the line of slope = 1 were classified as predominantly head centered (green crosses) or eye centered (red crosses), respectively. The results for the target period were previously presented in Mullette-Gillman et al. (2005).
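One simple reading of this classification rule, sketched in Python: a neuron is hybrid unless the bootstrap confidence area for the difference between its two coefficients excludes the slope = 1 line (i.e., excludes zero). This is our paraphrase of the criterion, not the authors' code.

```python
import numpy as np

def classify_frame(boot_head_cc, boot_eye_cc):
    """boot_head_cc, boot_eye_cc: bootstrap samples of the head- and
    eye-centered correlation coefficients for one neuron."""
    d = np.asarray(boot_head_cc) - np.asarray(boot_eye_cc)
    lo, hi = d.mean() - 1.96 * d.std(), d.mean() + 1.96 * d.std()
    if lo > 0:
        return "head-centered"
    if hi < 0:
        return "eye-centered"
    return "hybrid"
```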

Figure 7.
Comparison of the reference frame during the target (a, b) and perisaccadic periods (c, d) of the population of neurons for visual (a, c) and auditory (b, d) trials. The ordinate and abscissa of each graph illustrate the degree of alignment in the response ...

The key observation from Figure 7 is that the pattern of target period correlation coefficients and the pattern of perisaccadic correlation coefficients are similar. For both the target period (Fig. 7a,b) and the perisaccadic period (Fig. 7c,d), the activity of most intraparietal neurons cannot be classified as being either head centered or eye centered but as hybrid-response patterns reflecting contributions of both reference frames (gray crosses; gray area on pie charts).

Figure 8 quantifies this pattern through a rerepresentation of the data shown in Figure 7. For each neuron, the head- and eye-centered correlation coefficients from the data shown in Figure 7 were converted to an angle with respect to the origin and rotated 45°. As a result, data points that lie along the line of slope = 1 in Figure 7 have an angle of 0°. Data points that lie above the line of slope = 1 have positive angles and those that lie below it have negative angles. (NB: data beyond ±135° are not shown as these reflect negative correlation coefficients in both head- and eye-centered coordinates—a finding that could be due to the presence of some noise in the response patterns.)
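The angle conversion amounts to the following; we use the quadrant-aware arctan2 in place of the caption's tan⁻¹ so that points with both coefficients negative fall outside ±135° (up to wrapping), as noted above.

```python
import numpy as np

def reference_frame_angle(head_cc, eye_cc):
    """Angle (deg) of the point (eye_cc, head_cc) relative to the origin,
    rotated so the slope = 1 line maps to 0 deg. Positive angles indicate
    better head-centered alignment; negative angles indicate better
    eye-centered alignment."""
    return np.degrees(np.arctan2(head_cc, eye_cc)) - 45.0
```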

Figure 8.
Comparison of target versus perisaccadic reference frame, for visual (a) and auditory trials (b). Reference frame angle was computed by taking tan⁻¹(head-centered correlation coefficient/eye-centered correlation coefficient) and subtracting 45° ...

The results for the visual trials are shown in Figure 8a and the results for the auditory trials are shown in Figure 8b. For visual trials, both the target and perisaccadic periods are slightly biased toward an eye-centered reference frame (mean angles of −15.2° and −16.2°, respectively); a t-test indicates that these 2 distributions are not significantly different (P > 0.05). For the auditory trials, the distribution tilts from being slightly biased in favor of a head-centered reference frame during the target period (mean angle of 11.7°) to a bias toward an eye-centered reference frame during the perisaccadic period (mean angle −24.9°). This change is small (the modes of both distributions remain quite close to the middle, 0°) but statistically significant (P < 0.05). When we compared the visual and auditory reference frame “angles,” we found a slight but statistically significant (P < 0.05) difference during the target period. There was no significant difference during the perisaccade period (P > 0.05), indicating an improvement in the similarity of visual and auditory coordinates during this period. On the whole, these trends were small (even when significant) and do not overshadow the major point that the reference frame of most cells is squarely between head- and eye-centered coordinates, for both modalities and both response epochs.

Several additional analyses concerning reference frame are presented in the Supplementary Material.

Location of Recording Sites

Our recording sites included both the lateral and medial banks of the intraparietal sulcus (Fig. 9). These 2 banks are thought to be functionally distinct areas (e.g., Snyder et al. 1997, 2000b; Cohen and Andersen 2000). In experiments involving comparisons between saccade- and reach-related activity, area LIP appears to have somewhat greater saccade- than reach-related activity, whereas the opposite pattern has been observed in MIP (Snyder et al. 1997, 2000b). However, note that both LIP and MIP do contain neurons with saccade-related activity, and microstimulation in both the lateral and a portion of the medial banks of the intraparietal sulcus can elicit saccades (Thier and Andersen 1996, 1998). It is mainly in the comparison between saccade- and reach-related activities that a difference has been demonstrated (Snyder et al. 1997). Indeed, in our experiment that used only a saccade task, we found responsive neurons throughout the range of recording locations spanning both LIP and MIP. Because our monkeys did not perform a reach task, we cannot comment on any dissociation in saccade versus reach activity in LIP and MIP.

Figure 9.
Location of recording sites and relationship to reference frame. (a) Approximate locations of coronal MRI slices for both monkeys B and C. (b, c) Coronal MRI slices (1 mm apart) showing penetration locations in monkey B and C, respectively, and maximum ...

In monkey B, a subset of the penetrations was limited to the lateral bank (the 1–2 most lateral locations in the 3 most anterior panels in Fig. 9b), whereas the remaining penetrations likely included a mixture of neurons from both banks. To test whether there was any systematic relationship between visual and/or auditory spatial sensitivity and recording location, we first divided these penetrations into 2 categories: those that were limited to LIP and those that could have included either LIP or MIP. Table 2 shows the results for both target and perisaccadic activity. The proportion of neurons showing sensitivity to visual or auditory target location did not differ as a function of recording location for any of these categories (χ2; P > 0.05). Indeed, for 3 of the 4 subpopulations (target modality × response period), the trend went in the direction opposite to what would be expected if LIP were the only responsive region in our task: The proportion of neurons with significant sensitivity to target location was actually greater in the penetrations that included both LIP and MIP than in the penetrations limited to LIP.

Table 2
Sensitivity to target location as a function of penetration location in monkey B

In monkey C, the recording cylinder was located more anteriorly, where the intraparietal sulcus is situated at an angle. In this monkey, penetration trajectories crossed the intraparietal sulcus from the medial bank to the lateral bank (Fig. 9c). If neurons responsive in a saccade task tend to be more concentrated in LIP, then the proportion of responsive neurons should have increased with increasing recording depth. Figure 9e shows that this expected pattern was not evident in our data set for any of the target modalities or response periods.

Consistent with our previous study on the reference frame for the target period (Mullette-Gillman et al. 2005), we found no evidence that the reference frame during the perisaccadic period varied with recording location in either monkey (Fig. 9d,f; only visual responses shown).

On the whole, our criteria for identifying LIP/MIP as well as our anatomical and physiological findings (other than those relating to reference frame) are similar to those of many previous studies (Andersen et al. 1990; Platt and Glimcher 1998; Eskandar and Assad 1999; Grunewald et al. 1999; Linden et al. 1999; Powell and Goldberg 2000; Shadlen and Newsome 2001).

Discussion

The conventional view holds that the intraparietal cortex represents spatial information in an eye position sensitive but, nevertheless, predominantly eye-centered reference frame. Numerous studies have described the coding of information in these or closely-related terms (e.g., Andersen and Mountcastle 1983; Andersen et al. 1985, 1990; Duhamel et al. 1992; Colby et al. 1995; Batista et al. 1999; Constantin et al. 2007). It was, therefore, surprising when in our previous recording study involving visual and auditory sensory-related activity, we found a continuum of reference frames ranging from eye to head centered, with most neurons encoding spatial information in an intermediate or hybrid coordinate frame (Mullette-Gillman et al. 2005). This was the case even though our correlation analysis, which quantified the reference frame of a neuron, was largely invariant to any eye position gain modulation. In this study, we reinvestigated the issue to determine whether perhaps motor-related activity might be more predominantly eye centered. Such activity occurring at the time of the movement might be more reflective of intraparietal cortex's “true” coding of information because it might be a more accurate portrait of the activity patterns that are “read out” to contribute to the generation of behavior.

We found that, on average, neurons in the banks of the intraparietal sulcus predominantly employ a hybrid reference frame during a period of time around the saccadic eye movement. The only transition in reference frame in comparison to the sensory period was a subtle improvement in the correspondence between visual and auditory signals: Auditory signals shifted their coordinates to become slightly more similar to the coordinates of visual signals during the perisaccadic period (i.e., there was a small increase in the number of cells for which the eye-centered reference frame produced better alignment in the response functions, with a corresponding decrease in head-centered cells). But, during both time periods, both visual and auditory signals were predominantly hybrid. This finding confirms our prior results and appears to be at odds with numerous other studies in intraparietal cortex. (Buneo et al. (2008) concur that the reference frame among intraparietal neurons is stable across different epochs of the trial, although their conclusions differ from ours regarding what that reference frame is.)

Could the preponderance of hybrid coding be due to lack of statistical power in our analysis method? There are several reasons why we do not think this is the case. First, although noise or variability in the responses would tend to make responses appear hybrid, we only included cells in the analysis if they met statistical criteria for being sensitive to target location in at least one reference frame. Second, many cells also did meet a statistical test for being eye centered—the bootstrap analysis—but almost as many cells met the same criteria for being head centered. If the representation was truly eye centered but noisy and the hybrid cells were merely due to that noise, then we would have expected to find very few head-centered cells, and this was not the case.

We believe that there are methodological explanations for the differences between our characterization of intraparietal cortex and that of previous recording studies. Two kinds of previous studies are of particular interest: those that have investigated eye position sensitivity (also known as eye position gain fields) (e.g., Andersen and Mountcastle 1983; Andersen et al. 1985, 1990; Batista et al. 1999) and those that have investigated the response properties to remembered visual stimuli when the eyes move from one location to another (e.g., Duhamel et al. 1992; Colby et al. 1995, 2005; Heiser et al. 2005; Heiser and Colby 2006; Berman et al. 2007). The latter, referred to here as remapping studies, will be considered first.

Remapping studies have tested whether signals encoding the memory of a briefly flashed visual target are updated to reflect the new retinal location of that remembered stimulus when the eyes move. Given our finding that the majority of parietal neurons do not have an eye-centered representation, it does not seem possible that these neurons are updating locations purely in eye-centered coordinates. However, viewed a different way, our findings do support the underlying principle at issue in these studies: namely, the basic thesis that intraparietal neurons have response fields that are not strictly anchored to a single location on the retina but are updated in some fashion as the eyes move. The chief difference is that in our study, we find that a sizeable proportion of neurons appear to use such an updating mechanism in ways not anticipated in these remapping experiments. For the majority of neurons, the updating mechanism “moves” the response field to a location that is neither consistently head centered nor consistently eye centered. Whether hybrid or head-centered cells have been included in the samples of previous remapping studies, and whether they would have met statistical criteria for being categorized as updating or remapping, is not certain but will be an interesting subject for future investigation.

For the gain field studies, the most likely explanation rests in how the response fields have been evaluated at different eye positions. These studies have generally sampled the response fields using one or more of several paradigms illustrated in Figure 10. In one paradigm, the location of the response field is first assessed at one fixation position using a range of stimulus locations (Fig. 10a) (e.g., Andersen and Mountcastle 1983; Andersen et al. 1990). Then, the best location from that set is chosen for further study when the eyes move. Stimuli at that location “defined with respect to the eyes” are presented while the animal fixates a novel fixation position (Fig. 10b,c). Because the eyes have moved, this fixed retinal location is now at a new location with respect to the head. If a neuron has an eye-centered but eye position gain–modulated response field (Fig. 10b), then the response to that fixed stimulus will be different at the new eye position (Fig. 10d). However, the same will be true if a neuron has a head-centered response field (Fig. 10c) because the fixed retinal location is now at a new location with respect to the head-centered response field and a different magnitude of response will occur. In short, this method of sampling cannot distinguish between these 2 types of spatial encoding.
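A toy simulation makes the confound explicit. Below, a hypothetical 1-D Gaussian response field is made either eye centered with an eye position gain or head centered; probing one fixed retinal location at a new fixation position, as in Figure 10a–d, changes the response in both cases, so the measurement cannot tell the two apart. All parameters are invented for illustration.

```python
import numpy as np

def response(target_h, fix, frame, center=10.0, width=10.0, gain_per_deg=0.01):
    """Response to a target at head-centered location target_h (deg) while
    fixating fix (deg). 'eye': field anchored to target_h - fix, with an
    eye position gain; 'head': field anchored to target_h, no gain."""
    loc = target_h - fix if frame == "eye" else target_h
    gain = 1.0 + gain_per_deg * fix if frame == "eye" else 1.0
    return gain * np.exp(-0.5 * ((loc - center) / width) ** 2)

# Retest the best target (retinal location +10 deg) after a 12-deg eye movement.
for frame in ("eye", "head"):
    r_old = response(target_h=10.0, fix=0.0, frame=frame)
    r_new = response(target_h=22.0, fix=12.0, frame=frame)  # same retinal location
    print(frame, round(r_old, 2), "->", round(r_new, 2))  # eye: 1.0 -> 1.12; head: 1.0 -> 0.49
```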

Figure 10.
Schematic of how different methods of sampling the response fields of intraparietal neurons can impact the results. Each column of plots depicts an experimental paradigm with fixation (+), target locations (•), and the neuronal response field ...

The second paradigm is to use a “slice” of stimulus locations that crosses the response field (Fig. 10e–h). Ostensibly, this pattern of sampling is more similar to what we have used here, but there is an important difference: In previous studies, the dimension of the receptive field sampling and the dimension of the change in fixation position have often been orthogonal to each other (Andersen et al. 1985; Batista et al. 1999). For example, the schematics in Figure 10e–g show the situation in which the fixation positions vary in the horizontal dimension but the response field sampling is along a vertical slice. At the first fixation position (Fig. 10e), the target at the center of the vertical slice of locations will elicit the largest response. When the eyes move to a different fixation position (Fig. 10f,g), the slice of sampled locations is shifted in head-centered space but remains the same in eye-centered space. If the response field is head centered (Fig. 10g,h), the center target will still elicit the largest response and (in this particular example) that response will be larger than it was for the original fixation position (Fig. 10h). Again, this pattern is identical to that produced by an eye-centered response field with an eye position gain modulation (Fig. 10f). Thus, the slice of sampled locations might shift to be better centered within a head-centered response field at one eye position versus another, changing the magnitude of the best response but not changing which (eye-centered) target elicits that best response.

Some studies have employed a circular array of targets as in Figure 10i and shifted the entire circular array so as to maintain the same retinal locations at the new fixation position (Fig. 10j,k) (e.g., Andersen et al. 1990). The response field is quantified by the direction of the target evoking the strongest response (Fig. 10l). It has been assumed that if the neuron's best target direction is unchanged across different fixation positions, then it must have an eye-centered response field. However, a neuron with a head-centered response field might also have a stable best target direction across different fixation positions, depending on the relationship between the head-centered response field and the 2 fixation positions (Fig. 10k). In the example shown here, if the neuron has a head-centered field located to the right, the rightward target in the circular array will elicit the best response at both of the sampled fixation positions (Fig. 10i–l).

In contrast, in our paradigm, we sampled a slice through the response field and varied fixation position along the same dimension (Fig. 10m–p). This method of sampling means that an overlapping set of locations in both potential reference frames is tested at all fixation positions, and this method can distinguish eye-centered from head-centered response fields, as can be seen by the different response patterns predicted for these different spatial representations in Figure 10p. To further eliminate bias, we limited our analyses to the set of target locations that existed in both reference frames.

An additional issue affects 2 other prior studies that purportedly demonstrated an eye-centered code for reach-related activity in the intraparietal cortex (Batista et al. 1999; Cohen and Andersen 2000). These 2 studies investigated the reference frame of visual or auditory reach–related activity in a paradigm in which both eye and limb position varied as shown in Figure 11. Using this design, these studies compared a limb-or-head (or body)–centered reference frame with an eye-or-head–centered reference frame. Both studies reported better response field alignment in the eye-or-head–centered frame of reference than the limb-or-head–centered frame of reference. This analysis shows that eye position is more important than limb position in determining the responses of parietal neurons to a set of target locations that are fixed with respect to the head, but it does not show whether neurons have predominantly eye-centered response fields as opposed to head-centered response fields.

Figure 11.
Experimental design for several related studies of reaching-related activity in the parietal cortex (Batista et al. 1999; Cohen and Andersen 2000). These studies have used a grid of target locations either 4 or 5 targets wide by 3 targets high. Monkeys ...

Batista et al. (1999) also showed a comparison of limb-or-head–centered coordinates with eye-centered coordinates, but this comparison suffered from the sampling problems described above: the target locations were situated along a rectangular slice that was 2 targets wide horizontally but 3 targets high vertically, orthogonal to the direction in which eye position varied (horizontal). Thus, the analysis would have been at least affected, and potentially dominated, by a nonmeaningful correlation along the orthogonal dimension. In addition, the fact that there were more target locations in head-centered coordinates than in eye-centered coordinates could also have affected the results (see also Pesaran et al. 2006; Buneo et al. 2008).

Thus, our studies appear to be the first recording experiments in LIP and MIP to provide quantitative evidence and population analyses on reference frame that did not substantially privilege the eye-centered reference frame over a head-centered reference frame in the sampling, data analysis, or interpretation. The only potential source of bias is that we prescreened neural responses using 2 sets of eye-centered locations and only one set of head-centered locations before conducting the main experiment. The effect of this bias would have been to increase the number of eye-centered neurons included in our sample. Because our main finding is that eye-centered neurons constitute only a minority of LIP neurons, this bias works against our overall conclusions. Our results are in fundamental disagreement with the commonly held view that the intraparietal cortex uses an eye-centered frame of reference but in agreement with a study using similar methodology that reported head-centered visual responses in the neighboring ventral intraparietal area (VIP) (Duhamel et al. 1997).

It is important to emphasize what our findings call into question and what they do not. We do not question the actual results reported in previous studies, given that the difference between our results and those of the prior studies can be accounted for by methodological differences. However, we raise concerns about these studies’ conclusions, namely, that the findings indicate the presence of an eye-centered eye position gain–modulated representation of space.

Are there any neurons in the intraparietal cortex that do have an eye position gain–modulated but eye-centered response field as has been previously claimed? If they exist, how prevalent are they? At present, we are agnostic on this point because we have not developed a statistical test to identify such neurons.

At first blush, our results would seem to be harder to reconcile with microstimulation experiments that have shown that electrical stimulation in LIP produces a saccade of a consistent eye-centered vector regardless of initial eye position (Constantin et al. 2007). (Two other stimulation studies have found evidence for eye position sensitivity in the saccades evoked from some sites in LIP and MIP (Thier and Andersen 1996, 1998) in head-restrained animals. Eye position sensitivity in head-restrained animals is difficult to interpret as the immobility of the head could contribute to the eye position dependence. Thus, the Constantin et al. (2007) study, in which the heads were free to move, is a more definitive account of the effects of microstimulation in intraparietal cortex.) This apparent discrepancy might shed light on how parietal signals are read out to contribute to saccades. One potential explanation lies in the continuum of responses that we observed: Perhaps, it is primarily the neurons that exhibit more eye-centered response patterns that send axons to oculomotor structures and contribute to the programming of saccades. That a given brain area might have different read outs in different situations (e.g., Groh 1997) is another possibility.

Other possible explanations arise from how microstimulation might interact with the ongoing activity of stimulated neurons. Specifically, microstimulation might tend to reduce or eliminate the eye position effects in LIP. Strong microstimulation (high frequency, high current) might serve to “clamp” the firing rate of the activated neurons at a rate dictated solely by the stimulation pattern and not by any of the factors that would otherwise influence the neuron. Thus, microstimulation might effectively “remove” the influence of eye position.

The converse pattern might also be able to account for the results. Stimulation pulses might add additional action potentials to those already being fired by the neurons in the vicinity of the electrode. Thus, action potentials related to eye position would be combined with action potentials triggered by electrical stimulation. The read out algorithm might take into account the presence of eye position signals in extracting a signal of target location with respect to the eyes (Batista et al. 2008), thus producing eye-centered saccades from an input signal that encodes stimulus location in a hybrid reference frame.

Evidence for neural responses that can be thought of as reflecting a mixture of different reference frames has become increasingly prevalent in recent years. Such signals have been identified in the auditory pathway (Groh et al. 2001; Werner-Reiss et al. 2003; Fu et al. 2004; Zwiers et al. 2004), the visual pathway (Lal and Friedlander 1989, 1990a, 1990b; Weyand and Malpeli 1993; Bremmer et al. 1997; Guo and Li 1997; Nakamura et al. 1999; Trotter and Celebrini 1999; Bremmer 2000; Tolias et al. 2001; DeSouza et al. 2002; Sharma et al. 2003; Fetsch et al. 2007), and the oculomotor pathway (Jay and Sparks 1984, 1987; Van Opstal et al. 1995; Campos et al. 2006) as well as parietal cortex (Andersen and Mountcastle 1983; Andersen et al. 1985, 1990; Stricanne et al. 1996; Cohen and Andersen 2000; Schlack et al. 2005; Chang and Snyder 2007; see also Batista et al. 1999) and cingulate cortex (Dean and Platt 2006). (The studies cited here either expressly investigated reference frame with at least partially mixed results or provided evidence for interactions between responses to sensory stimuli and eye position.) At present, it is unclear how a hybrid representation might be computationally advantageous. On the face of it, hybrid representations would seem disadvantageous because the activity of any individual neuron employing such a code is ambiguous—the responses depend on both the head- and eye-centered location of a target, and thus, reading out such a signal to determine the spatial location of the target requires more information than the discharge pattern of that individual neuron.

One possible advantage of hybrid reference frames is that they might resemble the motor command. Moving the eyes to the target requires both head- and eye-centered information because the muscle force profile depends on both the head-centered location of the target and the eye-centered location of the target (Robinson 1970; Robinson and Keller 1972; Van Gisbergen et al. 1981; Sylvestre and Cullen 1999). (Strictly speaking, the pattern of muscle force is related to a combination of eye position and eye velocity. For saccades, the velocity profile depends on the amplitude of the saccade or the eye-centered location of the target. The desired eye-in-head position at the end of the saccade is equivalent to the head-centered location of the target.) It is reasonable to hypothesize that hybrid signals earlier in the pathway such as in intraparietal cortex may relate to the performance of this action, although it is unclear at present precisely how to reconcile the observed eye-centered effects of microstimulation in area LIP with this possibility.

The question of why the brain uses hybrid reference frames is not unique to the oculomotor system but extends to other sensorimotor systems as well. Indeed, arm movements in premotor and primary motor cortices can best be described as being encoded in a hybrid reference frame (Wu and Hatsopoulos 2006, 2007; Batista et al. 2007, 2008) (see also Pesaran et al. 2006). Taken together, these studies suggest that neural activity patterns may only rarely, and perhaps never, be defined solely by sensory properties (e.g., the location of sound relative to the head), physics (e.g., gravity), or mechanics (e.g., joint angle) in a single unique reference frame. Further work clarifying the specific details of how visual and auditory signals proceed from LIP/MIP to motor effectors for saccades or other behavioral responses will help shed light on this important question, and further computational work incorporating the read out algorithm will be needed to clarify just how this process unfolds.

Funding

Alfred P. Sloan Foundation (to J.M.G.); McKnight Endowment Fund for Neuroscience (to J.M.G.); Whitehall Foundation (to Y.E.C. and J.M.G.); John Merck Scholars Program (to J.M.G.); Office of Naval Research Young Investigator Program (to J.M.G.); EJLB Foundation (to J.M.G.); National Institutes of Health (NIH) (NS 17778 to Y.E.C. and J.M.G.); NIH (NS50942 to J.M.G.); National Science Foundation (0415634 to J.M.G.); NIH (EY016478 to J.M.G.); NIH B/START and Shannon Awards (to Y.E.C.); The Nelson A. Rockefeller Center at Dartmouth (to J.M.G.); National Eye Institute (EY016478); National Institute of Neurological Disorders and Stroke (NS50942 and NS17778 to J.M.G.).

Supplementary Material

Supplementary material can be found at: http://www.cercor.oxfordjournals.org/.


Acknowledgments

We wish to thank Abigail Underhill, Hany Farid, Gordon Gifford, Paul Glimcher, Ryan Metzger, Joe Moran, Kristin Kelly Porter, Peter Tse, and Uri Werner-Reiss for their insights and many helpful comments and ideas throughout this project. Conflict of Interest: None declared.

References

  • Andersen RA, Bracewell RM, Barash S, Gnadt JW, Fogassi L. Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. J Neurosci. 1990;10:1176–1196. [PubMed]
  • Andersen RA, Essick GK, Siegel RM. Encoding of spatial location by posterior parietal neurons. Science. 1985;230:456–458. [PubMed]
  • Andersen RA, Gnadt JW. Posterior parietal cortex. Rev Oculomot Res. 1989;3:315–335. [PubMed]
  • Andersen RA, Mountcastle VB. The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J Neurosci. 1983;3:532–548. [PubMed]
  • Batista AP, Buneo CA, Snyder LH, Andersen RA. Reach plans in eye-centered coordinates. Science. 1999;285:257–260. [PubMed]
  • Batista AP, Santhanam G, Yu BM, Ryu SI, Afshar A, Shenoy KV. Reference frames for reach planning in macaque dorsal premotor cortex. J Neurophysiol. 2007;98:966–983. [PubMed]
  • Batista AP, Yu BM, Santhanam G, Ryu SI, Afshar A, Shenoy KV. Cortical neural prosthesis performance improves when eye position is monitored. IEEE Trans Neural Syst Rehabil Eng. 2008;16:24–31. [PubMed]
  • Berman RA, Heiser LM, Dunn CA, Saunders RC, Colby CL. Dynamic circuitry for updating spatial representations. III. From neurons to behavior. J Neurophysiol. 2007;98:105–121. [PMC free article] [PubMed]
  • Bremmer F. Eye position effects in macaque area V4. Neuroreport. 2000;11:1277–1283. [PubMed]
  • Bremmer F, Ilg UJ, Thiele A, Distler C, Hoffmann KP. Eye position effects in monkey cortex. I. Visual and pursuit-related activity in extrastriate areas MT and MST. J Neurophysiol. 1997;77:944–961. [PubMed]
  • Buneo CA, Batista AP, Jarvis MR, Andersen RA. Time-invariant reference frames for parietal reach activity. Exp Brain Res. 2008;188:77–89. [PubMed]
  • Campos M, Cherian A, Segraves MA. Effects of eye position upon activity of neurons in macaque superior colliculus. J Neurophysiol. 2006;95:505–526. [PubMed]
  • Chang S, Snyder L. Diverse frames of reference in the parietal reach region (PRR). San Diego (CA): Society for Neuroscience; 2007.
  • Cohen YE, Andersen RA. Reaches to sounds encoded in an eye-centered reference frame. Neuron. 2000;27:647–652. [PubMed]
  • Colby CL, Berman RA, Heiser LM, Saunders RC. Corollary discharge and spatial updating: when the brain is split, is space still unified? Prog Brain Res. 2005;149:187–205. [PubMed]
  • Colby CL, Duhamel JR, Goldberg ME. Oculocentric spatial representation in parietal cortex. Cereb Cortex. 1995;5:470–481. [PubMed]
  • Colby CL, Goldberg ME. Space and attention in parietal cortex. Annu Rev Neurosci. 1999;22:319–349. [PubMed]
  • Constantin AG, Wang H, Martinez-Trujillo JC, Crawford JD. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex. J Neurophysiol. 2007;98:696–709. [PubMed]
  • Dean HL, Platt ML. Allocentric spatial referencing of neuronal activity in macaque posterior cingulate cortex. J Neurosci. 2006;26:1117–1127. [PubMed]
  • DeSouza JF, Dukelow SP, Vilis T. Eye position signals modulate early dorsal and ventral visual areas. Cereb Cortex. 2002;12:991–997. [PubMed]
  • Duhamel JR, Bremmer F, BenHamed S, Graf W. Spatial invariance of visual receptive fields in parietal cortex neurons. Nature. 1997;389:845–848. [PubMed]
  • Duhamel JR, Colby CL, Goldberg ME. The updating of the representation of visual space in parietal cortex by intended eye movements. Science. 1992;255:90–92. [PubMed]
  • Eskandar EN, Assad JA. Dissociation of visual, motor and predictive signals in parietal cortex during visual guidance. Nat Neurosci. 1999;2:88–93. [PubMed]
  • Fetsch CR, Wang S, Gu Y, Deangelis GC, Angelaki DE. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area. J Neurosci. 2007;27:700–712. [PMC free article] [PubMed]
  • Fu KM, Shah AS, O'Connell MN, McGinnis T, Eckholdt H, Lakatos P, Smiley J, Schroeder CE. Timing and laminar profile of eye-position effects on auditory responses in primate auditory cortex. J Neurophysiol. 2004;92:3522–3531. [PubMed]
  • Groh JM. A model for transforming signals from a place code to a rate code. Soc Neurosci Abstr. 1997;23:1560.
  • Groh JM, Trause AS, Underhill AM, Clark KR, Inati S. Eye position influences auditory responses in primate inferior colliculus. Neuron. 2001;29:509–518. [PubMed]
  • Grunewald A, Linden JF, Andersen RA. Responses to auditory stimuli in macaque lateral intraparietal area. I. Effects of training. J Neurophysiol. 1999;82:330–342. [PubMed]
  • Guo K, Li CY. Eye position-dependent activation of neurones in striate cortex of macaque. Neuroreport. 1997;8:1405–1409. [PubMed]
  • Heiser LM, Berman RA, Saunders RC, Colby CL. Dynamic circuitry for updating spatial representations. II. Physiological evidence for interhemispheric transfer in area LIP of the split-brain macaque. J Neurophysiol. 2005;94:3249–3258. [PubMed]
  • Heiser LM, Colby CL. Spatial updating in area LIP is independent of saccade direction. J Neurophysiol. 2006;95:2751–2767. [PubMed]
  • Jay MF, Sparks DL. Auditory receptive fields in primate superior colliculus shift with changes in eye position. Nature. 1984;309:345–347. [PubMed]
  • Jay MF, Sparks DL. Sensorimotor integration in the primate superior colliculus. II. Coordinates of auditory signals. J Neurophysiol. 1987;57:35–55. [PubMed]
  • Lal R, Friedlander MJ. Gating of retinal transmission by afferent eye position and movement signals. Science. 1989;243:93–96. [PubMed]
  • Lal R, Friedlander MJ. Effect of passive eye movement on retinogeniculate transmission in the cat. J Neurophysiol. 1990a;63:523–538. [PubMed]
  • Lal R, Friedlander MJ. Effect of passive eye position changes on retinogeniculate transmission in the cat. J Neurophysiol. 1990b;63:502–522. [PubMed]
  • Linden JF, Grunewald A, Andersen RA. Responses to auditory stimuli in macaque lateral intraparietal area. II. Behavioral modulation. J Neurophysiol. 1999;82:343–358. [PubMed]
  • Mullette-Gillman OA, Cohen YE, Groh JM. Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. J Neurophysiol. 2005;94:2331–2352. [PubMed]
  • Nakamura K, Chung HH, Graziano MS, Gross CG. Dynamic representation of eye position in the parieto-occipital sulcus. J Neurophysiol. 1999;81:2374–2385. [PubMed]
  • Pesaran B, Nelson MJ, Andersen RA. Dorsal premotor neurons encode the relative position of the hand, eye, and goal during reach planning. Neuron. 2006;51:125–134. [PubMed]
  • Platt ML, Glimcher PW. Response fields of intraparietal neurons quantified with multiple saccadic targets. Exp Brain Res. 1998;121:65–75. [PubMed]
  • Powell KD, Goldberg ME. Response of neurons in the lateral intraparietal area to a distractor flashed during the delay period of a memory-guided saccade. J Neurophysiol. 2000;84:301–310. [PubMed]
  • Robinson DA. Oculomotor unit behavior in the monkey. J Neurophysiol. 1970;33:393–403. [PubMed]
  • Robinson DA, Keller EL. The behavior of eye movement motoneurons in the alert monkey. Bibl Ophthalmol. 1972;82:7–16. [PubMed]
  • Schlack A, Sterbing-D'Angelo SJ, Hartung K, Hoffmann KP, Bremmer F. Multisensory space representations in the macaque ventral intraparietal area. J Neurosci. 2005;25:4616–4625. [PubMed]
  • Shadlen MN, Newsome WT. Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol. 2001;86:1916–1936. [PubMed]
  • Sharma J, Dragoi V, Tenenbaum JB, Miller EK, Sur M. V1 neurons signal acquisition of an internal representation of stimulus location. Science. 2003;300:1758–1763. [PubMed]
  • Snyder LH. Frame-up. Focus on “eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus.” J Neurophysiol. 2005;94:2259–2260. [PubMed]
  • Snyder LH, Batista AP, Andersen RA. Coding of intention in the posterior parietal cortex. Nature. 1997;386:167–170. [PubMed]
  • Snyder LH, Batista AP, Andersen RA. Intention-related activity in the posterior parietal cortex: a review. Vision Res. 2000a;40:1433–1441. [PubMed]
  • Snyder LH, Batista AP, Andersen RA. Saccade-related activity in the parietal reach region. J Neurophysiol. 2000b;83:1099–1102. [PubMed]
  • Stricanne B, Andersen RA, Mazzoni P. Eye-centered, head-centered, and intermediate coding of remembered sound locations in area LIP. J Neurophysiol. 1996;76:2071–2076. [PubMed]
  • Sylvestre PA, Cullen KE. Quantitative analysis of abducens neuron discharge dynamics during saccadic and slow eye movements. J Neurophysiol. 1999;82:2612–2632. [PubMed]
  • Thier P, Andersen RA. Electrical microstimulation suggests two different forms of representation of head-centered space in the intraparietal sulcus of rhesus monkeys. Proc Natl Acad Sci USA. 1996;93:4962–4967. [PubMed]
  • Thier P, Andersen RA. Electrical microstimulation distinguishes distinct saccade-related areas in the posterior parietal cortex. J Neurophysiol. 1998;80:1713–1735. [PubMed]
  • Tolias AS, Moore T, Smirnakis SM, Tehovnik EJ, Siapas AG, Schiller PH. Eye movements modulate visual receptive fields of V4 neurons. Neuron. 2001;29:757–767. [PubMed]
  • Trotter Y, Celebrini S. Gaze direction controls response gain in primary visual-cortex neurons. Nature. 1999;398:239–242. [PubMed]
  • Van Gisbergen JA, Robinson DA, Gielen S. A quantitative analysis of generation of saccadic eye movements by burst neurons. J Neurophysiol. 1981;45:417–442. [PubMed]
  • Van Opstal AJ, Hepp K, Suzuki Y, Henn V. Influence of eye position on activity in monkey superior colliculus. J Neurophysiol. 1995;74:1593–1610. [PubMed]
  • Werner-Reiss U, Kelly KA, Trause AS, Underhill AM, Groh JM. Eye position affects activity in primary auditory cortex of primates. Curr Biol. 2003;13:554–562. [PubMed]
  • Weyand T, Malpeli J. Responses of neurons in primary visual cortex are modulated by eye position. J Neurophysiol. 1993;69:2258–2260. [PubMed]
  • Wu W, Hatsopoulos N. Evidence against a single coordinate system representation in the motor cortex. Exp Brain Res. 2006;175:197–210. [PubMed]
  • Wu W, Hatsopoulos NG. Coordinate system representations of movement direction in the premotor cortex. Exp Brain Res. 2007;176:652–657. [PubMed]
  • Zwiers MP, Versnel H, Van Opstal AJ. Involvement of monkey inferior colliculus in spatial hearing. J Neurosci. 2004;24:4145–4156. [PubMed]
