Perceptual decisions that are used to select particular actions can appear to be formed in an intentional framework, in which sensory evidence is converted directly into a plan to act. However, because the relationship between perceptual decision-making and action selection has been tested primarily under conditions in which the two could not be dissociated, it is not known whether this intentional framework plays a general role in forming perceptual decisions or only reflects certain task conditions. To dissociate decision and motor processing in the brain, we recorded from individual neurons in the lateral intraparietal area (LIP) of monkeys performing a task that included a flexible association between a decision about the direction of random-dot motion and the direction of the appropriate eye-movement response. We targeted neurons that responded selectively in anticipation of a particular eye-movement response. We found that these neurons encoded the perceptual decision in a manner that was distinct from how they encoded the associated response. These decision-related signals were evident regardless of whether the appropriate decision-response association was indicated before, during, or after decision formation. The results suggest that perceptual decision-making and action selection are different brain processes that only appear to be inseparable under particular behavioral contexts.
A perceptual decision is a deliberative process that converts sensory information into a categorical judgment. Our understanding of how and where in the brain this process is implemented has benefited from a focus on motor intention: when a decision is used to select a particular action, brain regions that contribute to selecting that action also represent the associated decision process (Gold and Shadlen, 2007). However, the implications of these findings remain unclear. One view is that these findings represent a form of embodiment, which casts decision-making and other aspects of higher brain function primarily in behavioral terms (Clark, 1998; Cisek, 2006; O'Regan and Noe, 2001). Alternatively, these findings might be specific to certain task designs, in which perceptual decisions are explicitly linked to real or potential motor plans. Our goal was to distinguish between these alternatives and clarify the relationship between perceptual decision-making and action selection in the brain.
We trained monkeys to decide the direction of random-dot motion and indicate their decision with an eye movement to a visual response target. When the targets are located at predictable spatial locations, neurons that encode the choice of a particular response target in several brain regions, including the lateral intraparietal area (LIP), the superior colliculus (SC), and the frontal eye field (FEF), also encode the process of converting incoming motion evidence into that choice (Roitman and Shadlen, 2002; Kim and Shadlen, 1999; Shadlen and Newsome, 2001; Horwitz and Newsome, 1999). This decision-related activity, particularly in area LIP, is consistent with the idea of a “priority map” in which different forms of evidence, including diverse sensory cues or cognitive variables like value expectation, are interpreted in terms of the behavioral relevance of a given spatial location (Platt and Glimcher, 1999; Sugrue et al., 2004; Roitman and Shadlen, 2002; Yang and Shadlen, 2007; Bisley and Goldberg, 2010).
Other results suggest that LIP might play a role in perceptual decision-making that extends beyond this spatial framework. Certain LIP neurons can exhibit selectivity for non-spatial features of visual stimuli, including color, shape, and motion direction (Sereno and Maunsell, 1998; Fanini and Assad, 2009; Freedman and Assad, 2006; Toth and Assad, 2002). This kind of selectivity does not require an overt saccade, can extend to stimuli placed outside of the neuron’s response field (RF), and can reflect the subject’s perceptual report (Freedman and Assad, 2009; Williams et al., 2003). Accordingly, LIP’s role in decision-making might not necessarily be tied to a given neuron’s role in saccadic or spatial processing but rather its selectivity for a particular visual feature.
Given that these spatial and non-spatial forms of selectivity co-exist and can have overlapping functions in terms of sensory processing, a key unresolved question is whether and how their relative contributions to perceptual decision-making differ under different behavioral conditions. Does the brain typically interpret sensory evidence in terms of motor plans, with the plans themselves becoming more abstract (e.g., less tied to a specific spatial location) when necessary? Or does the brain typically form perceptual decisions and plan movements separately and only appear to link the two under certain conditions? Here we support the latter interpretation by showing that individual LIP neurons encode a visual perceptual decision in a manner that is distinct from how they encode the subsequent oculomotor response.
All training, surgical, and experimental procedures were carried out in accordance with the US National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the University of Pennsylvania Institutional Animal Care and Use Committee. We used two rhesus monkeys, one male (At) and one female (Av). Both monkeys had been trained extensively on a pro-saccade version of the direction-discrimination task, in which the two choice targets were placed at known locations along the axis of motion (Connolly et al., 2009), before being trained on the colored-target version of the task used in this study.
The colored-target task required the monkeys to decide the direction of random-dot motion and indicate their decision with an eye movement to one of two equiluminant targets of different colors: red for rightward motion, green for leftward motion. The motion stimulus, described in detail elsewhere (Gold et al., 2008), was presented in a 5°-diameter circular aperture centered on the fixation point for 800 ms. The percentage of coherently moving dots (99.9, 25.6, or 6.4%) and motion direction (one of two possible directions, separated by 180°) were interleaved randomly from trial to trial. One target was placed in the given neuron’s RF at a distance of 9° from the fixation point, the other diametrically opposite the fixation point at the same eccentricity. The targets were initially shown in a neutral color (blue). We used three versions of the task that differed in terms of when the color of the targets changed from neutral to red/green: task 1, 200 ms before motion onset; task 2, 400 ms after motion onset; and task 3, 300 ms after motion offset. During motion viewing, the monkey maintained fixation within ±2° (there was no systematic relationship between small, horizontal eye movements made within this window and the direction of motion on correct trials across tasks for either monkey; Wilcoxon test for H0: median difference in eye velocity on trials with rightward versus leftward motion, p=0.55 for At, 0.11 for Av). After fixation-point offset, the monkey was rewarded for making a saccadic eye movement within 800 ms to foveate the target of the appropriate color. The assignment of red and green to the two target locations was randomized on each trial. In each session, the monkey performed each of the three tasks (Fig. 1a) in blocks.
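The three task variants differed only in when the target-color change occurred relative to motion viewing; a minimal sketch of this timing (in Python, purely illustrative and not a reconstruction of the actual experiment-control code; all names are assumptions) is:

```python
# Illustrative timing summary for the three task variants (times in ms,
# relative to motion onset). Only the target-color change differs across tasks.
MOTION_DURATION_MS = 800

TARGET_COLOR_CHANGE_MS = {
    "task1": -200,                      # colors shown 200 ms before motion onset
    "task2": 400,                       # colors shown during motion viewing
    "task3": MOTION_DURATION_MS + 300,  # colors shown 300 ms after motion offset
}

COHERENCES_PCT = (99.9, 25.6, 6.4)      # randomly interleaved with the two directions
```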
Each monkey was surgically implanted with an eye coil, head-holding device, and recording cylinder. Area LIP was targeted using stereotaxic coordinates and magnetic resonance imaging (Fig. 1b) (Kalwani et al., 2008). A sterile guide tube inserted through a plastic grid (Crist Instruments) was used to guide a glass-coated tungsten electrode to the dural surface. The electrode was advanced using a NAN microdrive (Plexon). Spike waveforms were stored and sorted offline (Plexon). We searched for LIP neurons using a memory-saccade task and selected neurons with spatially selective activity during the delay period (Roitman and Shadlen, 2002). This spatial tuning defined the neuron’s RF, which we used to place one of the two choice targets on the discrimination task.
We fit behavioral data describing Fright, the fraction of rightward (red) choices, as a function of SCOH, signed motion coherence (negative coherence for leftward motion, positive coherence for rightward motion), to a logistic function of the form:

Fright = λ + (1 − 2λ) / (1 + exp(−(β0 + β1·SCOH)))   (Eq. 1)
where λ, β0, and β1 are fit parameters. λ is the lapse rate, corresponding to the fraction of incorrect choices at the highest motion strength. β0 is a measure of choice bias, with positive (negative) values implying a tendency to choose the red (green) target. β1 reflects perceptual sensitivity, with higher values implying higher sensitivity. Fit parameters and their uncertainty (s.e.m.) were determined using maximum-likelihood methods (Watson, 1979; Meeker and Escobar, 1995). Threshold (the motion strength corresponding to d’=1, or 76.02% correct for an unbiased observer) was computed as 1.151/β1.
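A minimal sketch of this fitting procedure in Python (not the authors' code; it assumes choices are coded 1 for rightward/red and 0 for leftward/green, and substitutes a generic numerical optimizer for the cited maximum-likelihood methods):

```python
import numpy as np
from scipy.optimize import minimize

def p_right(scoh, lam, b0, b1):
    """Eq. 1: probability of a rightward (red) choice as a function of signed coherence."""
    return lam + (1.0 - 2.0 * lam) / (1.0 + np.exp(-(b0 + b1 * scoh)))

def neg_log_likelihood(params, scoh, choice):
    lam, b0, b1 = params
    p = np.clip(p_right(scoh, lam, b0, b1), 1e-6, 1.0 - 1e-6)
    return -np.sum(choice * np.log(p) + (1 - choice) * np.log(1 - p))

def fit_psychometric(scoh, choice):
    """Maximum-likelihood fit of lapse rate, bias, and sensitivity; returns threshold too."""
    res = minimize(neg_log_likelihood, x0=[0.05, 0.0, 0.1], args=(scoh, choice),
                   bounds=[(0.0, 0.5), (-5.0, 5.0), (1e-6, 10.0)])
    lam, b0, b1 = res.x
    threshold = 1.151 / b1  # coherence at d' = 1 (76.02% correct for an unbiased observer)
    return lam, b0, b1, threshold
```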
Neuronal selectivity for target color, motion direction, and saccadic choice was quantified using both a multiple ANOVA and an ROC-based index that describes the ability of an ideal observer to predict the value of the given variable based solely on the neural responses (Parker and Newsome, 1998). Both were computed in 200-ms bins, offset by 50 ms. Peak selectivity was measured 200–900 ms after motion onset for direction selectivity, 100–300 ms following the target-color change for color selectivity, and from 100 ms before until 100 ms after fixation-point offset for choice selectivity. We found no qualitative difference in the distributions of selectivity indices for the two monkeys and therefore combined data for all neural analyses.
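The ROC-based index is the area under the ROC curve for discriminating two conditions from single-trial spike counts; a minimal sketch (not the authors' code, with illustrative inputs) is:

```python
import numpy as np

def roc_index(counts_pref, counts_null):
    """Area under the ROC curve: the probability that a randomly chosen trial from
    counts_pref has a higher spike count than one from counts_null (ties count as 0.5).
    Values > 0.5 indicate larger responses in the first condition."""
    a = np.asarray(counts_pref, dtype=float)[:, None]
    b = np.asarray(counts_null, dtype=float)[None, :]
    return np.mean(a > b) + 0.5 * np.mean(a == b)

# Example (illustrative): index for rightward vs. leftward motion in one 200-ms bin
# direction_index = roc_index(spikes_rightward_trials, spikes_leftward_trials)
```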
We trained two monkeys to decide the direction of coherent motion in a random-dot stimulus and indicate their decision with an eye movement to a target of a particular color (red for rightward motion, green for leftward motion). In a given session, the two targets always appeared at known locations, but the color shown at each location was not predictable until the colored targets appeared. To prevent the monkeys from using previously formed associations between motion direction and target location (Connolly et al., 2009), the targets were typically placed roughly perpendicular to the horizontal axis of motion (see Fig. 3d). We used three versions of the task that differed in terms of when the colored targets appeared – either before (task 1), during (task 2), or after (task 3) motion viewing (Fig. 1a). This design allowed us to control the time when the decision was formed relative to when the decision was associated with a specific eye-movement response. We examined how these manipulations affected the representation of sensory, decision, and motor activity in area LIP.
Both monkeys used the color, not location, of the targets to govern their choices (Fig. 2). For all three tasks, the target of the appropriate color was chosen on 72% of trials by monkey At, 69% by monkey Av (o’s in Fig. 2). By comparison, in the 56 of 102 behavioral sessions in which the targets were not exactly perpendicular to the axis of motion (see Fig. 3d), the target in the direction of motion was chosen at chance levels (49% of trials for monkey At, 50% for monkey Av; x’s in Fig. 2). Moreover, performance depended systematically on the strength of the motion stimulus. For high-coherence stimuli, error (“lapse”) rates were 6–18% for the three tasks and two monkeys, indicating that for easily perceptible stimuli the monkeys performed well above chance but not perfectly on these difficult tasks. Performance degraded at lower coherences, but without systematic choice biases (best-fitting logistic functions from Eq. 1, parameterized by terms describing the choice bias, β0, and coherence dependence, β1, are shown in Fig. 2).
Each monkey performed somewhat similarly on the three tasks, suggesting that their strategies did not differ substantially when the oculomotor mapping was indicated before, during, or after decision formation (despite quantitative differences in the best-fitting parameters of the task-specific fits shown in Fig. 2, which were applied to data pooled across all sessions, likelihood-ratio tests comparing session-by-session fits to data from the three tasks considered separately versus together yielded p<0.01 – implying differences across tasks – for only 15 of 52 sessions for monkey At and 4 of 27 sessions for Av). For task 2, in which the colored targets appeared during motion viewing, changing the time at which the colored targets appeared also had minimal effect on performance (fits for trials in which the targets appeared either 200, 400, or 600 ms after motion onset differed in 1 of 52 sessions for At and 0 of 19 sessions for Av, likelihood-ratio test, p<0.01; Fig. 2). These results imply that the monkeys did not restrict their processing of the motion stimulus to the period before or after the targets appeared. Thus, the three tasks seemed to require similar perceptual decision-making processes that differed primarily in terms of when the appropriate action could be selected. We therefore tested how this difference in the timing of the signal indicating the sensory-motor mapping affected the representation of the decision process in area LIP.
We recorded from 84 individual LIP neurons (n=51 from At, 33 from Av; Fig. 1b) while the monkeys performed the tasks. We selected neurons with spatially selective responses during the delay period of a memory-saccade task (initially measured qualitatively on-line, then later quantified off-line; Fig. 3), as in previous studies of decision-related activity in LIP (Roitman and Shadlen, 2002; Shadlen and Newsome, 2001). We used the memory-period selectivity to define a response field (RF) in which we subsequently placed one of the colored choice targets on the direction-discrimination task (Tin, as opposed to Tout). We typically searched for neurons with RFs located below fixation (monkey Av; neurons found 4,000–8,500 µm below the cortical surface along 2 separate electrode trajectories, as shown in Fig. 1b) or above fixation (monkey At; neurons found 4,000–8,500 µm below the cortical surface along 3 separate electrode trajectories), consistent with the task geometry. Of the 84 neurons we found, 71 (84.5%) had responses that were modulated between motion viewing and the saccadic response on the discrimination task, as described below.
Individual LIP neurons were selective for different combinations of motion direction, target color, and saccadic choice. For example, the neuron shown in Fig. 4a tended to respond more strongly when the target in the neuron’s RF changed from neutral to red, as opposed to green, and for Tin versus Tout choices, which is consistent with the definition of the RF from the memory-saccade task. In contrast, the neuron shown in Fig. 4b tended to respond more strongly to leftward versus rightward motion during motion viewing and the subsequent delay period, and then became selective for Tin versus Tout choices around the time of the saccadic response. This neuron’s direction-selective responses were evident when the colored targets appeared before (task 1), during (task 2), or after (task 3) motion viewing and regardless of the direction of the subsequent saccadic choice.
The population of recorded LIP neurons exhibited selectivity for motion direction, target color, and saccadic choice, each of which evolved differently as a function of time for the three tasks. We quantified these different forms of selectivity using a multiple ANOVA applied to time-binned spike-count data from individual trials, with motion direction, target color, and their interaction as factors; the interaction corresponds to selectivity for saccadic choice because, on correct trials, rightward motion with the red target in the neuron’s RF implied a Tin choice, whereas leftward motion with the red target in the RF implied a Tout choice (Fig. 5). For all three tasks, selectivity for motion direction appeared soon after motion onset, peaked mid-way through motion viewing, then declined steadily through the end of motion viewing and the delay period preceding the saccadic choice (24.5% of all responses from individual neurons considered separately for all time bins and tasks shown in Fig. 5a were selective for motion direction; of these, 65.1% were selective for rightward, 34.9% for leftward motion). In contrast, selectivity for target color appeared just after the targets changed color and then declined over the remainder of the trial (34.8% of all responses in Fig. 5a, of which 81.1% were selective for red, 18.9% for green). Selectivity for saccadic choice also appeared after the target-color change that indicated the sensory-motor mapping, but tended to increase over the course of the trial, until the choice was made (34.1% of all responses in Fig. 5a, of which 92.2% were selective for Tin, 7.8% for Tout choices).
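A minimal sketch of such a per-bin analysis in Python (using statsmodels; not the authors' code, and the column names are assumptions):

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols

def anova_for_bin(df_bin):
    """Two-way ANOVA on spike counts from one 200-ms bin.
    df_bin: pandas DataFrame with one row per trial and columns
    'spikes' (count), 'direction' ('right'/'left'), 'color' (RF target: 'red'/'green').
    The direction-by-color interaction corresponds to selectivity for saccadic choice."""
    model = ols("spikes ~ C(direction) * C(color)", data=df_bin).fit()
    return sm.stats.anova_lm(model, typ=2)
```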
We further quantified these forms of selectivity using an ROC-based index that describes the ability of an ideal observer to distinguish the given task parameter using only spike-rate data from individual neurons (Parker and Newsome, 1998; Hanley and McNeil, 1982). We computed this index for each neuron with respect to motion direction (a value >0.5 implies larger responses for rightward versus leftward motion, whereas a value <0.5 implies larger responses for leftward versus rightward motion), target color (>0.5 for red, <0.5 for green), and saccadic choice (>0.5 for Tin, <0.5 for Tout).
Individual LIP neurons exhibited combinations of selectivity for the three parameters during each of the three tasks (Fig. 5b). For task 1, 23 of the 71 recorded neurons showed significant selectivity for all three parameters (H0: index=0.5, p<0.05, measured around the time of peak selectivity as shown in Fig. 5a), 30 showed selectivity for two of the three parameters (3 for motion direction and target color, 5 for motion direction and saccadic choice, and 22 for target color and saccadic choice), and 9 showed selectivity for just one of the three parameters (1 for motion direction, 4 for target color, and 4 for saccadic choice). For task 2, 18 neurons showed significant selectivity for all three parameters, 28 showed selectivity for two of the three parameters (5 for motion direction and target color, 4 for motion direction and saccadic choice, and 19 for target color and saccadic choice), and 21 showed selectivity for just one of the three parameters (3 for motion direction, 5 for target color, and 13 for saccadic choice). For task 3, 12 neurons showed significant selectivity for all three parameters, 22 showed selectivity for two of the three parameters (3 for motion direction and target color, 7 for motion direction and saccadic choice, and 12 for target color and saccadic choice), and 26 showed selectivity for just one of the three parameters (3 for motion direction, 10 for target color, and 13 for saccadic choice). Thus, individual LIP neurons exhibited a range of response properties, including selectivity for different combinations of key task variables.
To better interpret the relationship between motion and saccade selectivity, we computed the value of the selectivity index for motion direction separately for trials in which the red or green target was in the neuron’s RF. If the index had matching values when either the red or green target was shown in the RF, then the responses were selective for motion direction independent of the saccadic choice. Conversely, if the index corresponded to opposite direction selectivities for the different target colors, then the responses were selective for saccadic choice.
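A minimal sketch of this comparison (not the authors' code; it reuses the roc_index helper sketched above, and the variable names are assumptions):

```python
def direction_index_by_rf_color(spikes, direction, rf_color, which_color):
    """ROC index for rightward vs. leftward motion, restricted to trials in which
    the target of the given color was in the neuron's RF.
    All inputs are numpy arrays of equal length (one entry per trial)."""
    sel = rf_color == which_color
    return roc_index(spikes[sel & (direction == "right")],
                     spikes[sel & (direction == "left")])

# idx_red = direction_index_by_rf_color(spikes, direction, rf_color, "red")
# idx_green = direction_index_by_rf_color(spikes, direction, rf_color, "green")
# Similar idx_red and idx_green imply selectivity for motion direction per se;
# values on opposite sides of 0.5 imply selectivity for the saccadic choice (Tin vs. Tout).
```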
For all three tasks, the population of recorded LIP neurons tended to exhibit selectivity for motion direction that was largely independent of saccadic choice during motion viewing, but then selectivity for saccadic choice that was largely independent of motion direction around the time of the saccade. During motion viewing, the population of selectivity indices included values that were both greater and less than 0.5, implying selectivity for both directions (the median [5th, 95th percentile] index values across tasks were 0.59 [0.39, 0.83] and 0.55 [0.33, 0.71] when the red or green target was in the RF, respectively). Moreover, these values were positively correlated when comparing trials in which either a red or green target was shown in the given neuron’s RF (Fig. 5c). In contrast, around the time of the saccadic choice, the same neurons tended to be selective for rightward motion when the red target was in the RF but leftward motion when the green target was in the RF (index values = 0.78 [0.47, 0.97] and 0.20 [0.05, 0.42] when the red or green target was in the RF, respectively), which is equivalent to selectivity for Tin choices (Fig. 5d). These findings were similar across the three tasks, with a strong, positive correlation between the value of the selectivity index measured on one task versus another (Spearman’s rho had values between 0.51 and 0.87 for each comparison, H0: rho=0, p<0.001 in all cases).
There was also no clear relationship between a given neuron’s selectivity for motion direction and the location of that neuron’s RF. Of the 71 recorded neurons, 37 had RFs that were not located directly along the vertical meridian (Fig. 3d; all but one of these were located slightly to the left). For task 1, 12 of these 37 neurons had significant direction selectivity during motion viewing (H0: selectivity index=0.5, p<0.05), of which 6 preferred motion in the same direction (relative to the vertical meridian) as the neuron’s RF and 6 preferred the opposite direction. For task 2, 11 of these neurons had significant direction selectivity, of which 4 preferred motion in the same direction as the neuron’s RF and 7 preferred the opposite direction. For task 3, 15 of these neurons had significant direction selectivity, of which 3 preferred motion in the same direction as the neuron’s RF and 12 preferred the opposite direction. Thus, selectivity for the spatial location of a saccade target could not account for the direction preferences we measured in the context of the colored-target task.
Moreover, the timing of selectivity for motion direction, unlike the timing of selectivity for saccadic choice, did not depend on the time at which the colored targets were shown (Fig. 6). Selectivity for motion direction tended to appear ~200 ms after the onset of the motion stimulus for all tasks. In contrast, selectivity for saccadic choice tended to occur, on average, after selectivity for motion direction was established (paired Wilcoxon test for H0: median difference in selectivity onset=0, p<0.001) and after the target color change. Thus, the appearance of the colored targets affected the onset of choice-selective responses but not motion direction-selective responses.
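One way to estimate and compare such onsets (a minimal sketch under assumptions, not the authors' exact procedure) is to take the first time bin in which the selectivity index reliably differs from 0.5 and then apply the paired Wilcoxon test across neurons:

```python
import numpy as np
from scipy.stats import wilcoxon

def selectivity_onset(bin_times, p_values, alpha=0.05):
    """Time of the first bin whose selectivity index differs significantly from 0.5,
    or NaN if no bin reaches significance."""
    bin_times = np.asarray(bin_times)
    sig = np.asarray(p_values) < alpha
    return bin_times[np.argmax(sig)] if sig.any() else np.nan

def compare_onsets(onsets_direction, onsets_choice):
    """Paired Wilcoxon test for H0: median difference in onset times = 0 across neurons."""
    return wilcoxon(onsets_direction, onsets_choice)
```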
We further analyzed these patterns of selectivity in LIP with respect to two key features of perceptual decision-making: first, selectivity for not just the categorical judgment but also the sensory evidence used to arrive at that judgment; and second, selectivity on correct versus error trials, to relate the responses more directly to the perceptual report (Roitman and Shadlen, 2002; Shadlen and Newsome, 2001).
For the colored-target tasks, the strength of the sensory evidence was reflected in neuronal selectivity for motion direction, but not for target color or saccadic choice (Fig. 7). This coherence dependence was computed using an ROC-based index similar to that in Fig. 5b,c, but encoded for each neuron with respect to its preferred value, computed around the time of peak selectivity (see Methods), for motion direction, target color, or saccadic choice. Therefore, increasing values of this index above 0.5 imply increasingly selective responses of the neuron for the preferred versus anti-preferred value of the given property. The neural responses were increasingly selective for motion direction as a function of increasing coherence, starting early in motion viewing and lasting into the delay period preceding the saccadic response, regardless of whether the targets changed color before, during, or after motion viewing. In contrast, there was no systematic coherence dependence with respect to the neurons’ selectivity for saccadic choice or target color. These results are consistent with the idea that LIP activity represents the process of converting motion information into a categorical direction judgment that, in turn, instructs the saccadic choice.
The time course of LIP selectivity also differed for motion direction and saccadic choice. As noted above, even for task 1, when the sensory-motor mapping was specified in advance, selectivity for motion direction tended to be established before selectivity for saccadic choice (Fig. 6a). Once established, the temporal dynamics of these different forms of selectivity differed considerably. After motion onset and the target-color change, selectivity for saccadic choice tended to build up slowly, reaching a peak around the time of the saccade (Fig. 7c). These temporal dynamics are reminiscent of those described in LIP for a reaction-time version of the pro-saccade task, in which the monkey initiated the saccadic response as soon as it formed the decision (Roitman and Shadlen, 2002). In contrast, selectivity for motion direction tended to build up quickly, starting soon after motion onset, then reaching a peak after ~500 ms of motion viewing and declining into the delay period (Fig. 7a). The relatively brief rising phase of this selectivity is reminiscent of the temporal dynamics in LIP on a pro-saccade version of the task in which the stimulus was presented for a fixed duration, as in our study (Roitman and Shadlen, 2002; Shadlen and Newsome, 2001). Together, these results imply that different selection processes represented in LIP can have different temporal dynamics, which might be difficult to distinguish when perceptual and oculomotor decisions have a fixed relationship, as in the pro-saccade task.
A comparison of responses on correct and error trials further supports the idea that direction-selective responses in LIP were related to the monkeys’ perceptual judgments about motion direction and not simply the stimulus itself. For all three tasks, individual neurons tended to have similar selectivity on correct and error trials for either target color or saccadic choice, implying that the errors did not arise from mis-encoding of either variable (Fig. 8d,e). In contrast, selectivity for motion direction on correct versus error trials tended to be negatively correlated for low-coherence (6.4%) stimuli for all three tasks and for middle-coherence (25.6%) stimuli for task 1, but slightly positively correlated (for task 1) or uncorrelated (for tasks 2 and 3) for high-coherence (99.9%) stimuli (Fig. 8a–c). These results are consistent with the idea that the errors arose from two different sources. The first is an inappropriate direction-color mapping. Assuming that these mapping errors are the primary source of the non-zero lapse rates, it follows that motion direction is encoded in a similar manner on correct and error trials with high-coherence stimuli (Fig. 8c). The second source of error is perceptual processing, which is expected to be more prevalent for weaker stimuli. Accordingly, the negative correlation in selectivity for lower coherences implies that these neurons encode the perceived, not actual, direction of motion (Fig. 8a,b).
Previous studies showed that in monkeys trained to indicate a decision about the direction of random-dot motion with a saccadic eye movement to a visual target at a predictable location in the same direction, neurons in LIP encode the process of converting incoming visual information into the saccadic choice (Roitman and Shadlen, 2002; Shadlen and Newsome, 2001). However, because that task design explicitly linked the perceptual decision with a specific oculomotor response, it was impossible to dissociate the decision about the direction of motion from the selection of the appropriate action. To overcome this limitation, we used a task in which the association between the direction decision and the saccadic choice was based on the color, not location, of the visual target. We identified a neural correlate of the decision process in LIP, which was present regardless of whether the appropriate decision-response association was indicated before, during, or after the decision was formed. This activity, which included not just selectivity for the given stimulus feature but also sensitivity to the input, timing, and outcome of the decision process, was found in the same neurons that subsequently encoded the saccadic response. These results imply that LIP can play multiple roles in perceptual and saccadic processing.
We do not know the limits of these roles. One possibility is that the decision-related activity represents purely perceptual processing and is thus independent of potential or actual actions that follow. This idea might be further tested using tasks in which the decision is formed before the monkey is informed whether a response is needed at all, or which modality (say, eye or arm movements) to use to indicate the response. However, a challenge with such designs is that it can be difficult to rule out the possibility that the unused response was nonetheless planned. Another possibility is that our results in LIP reflected particular aspects of the task design, such as the fact that we always showed a visual target in the given neuron’s RF or that we always required a roughly vertical eye-movement response. This idea implies that LIP’s role in perceptual decision-making, and its relationship to saccade planning, depends on the task context, including not just the spatial configuration and sensory-motor association but also other factors known to be encoded in LIP, like reward expectation (Platt and Glimcher, 1999; Sugrue et al., 2004; Dorris and Glimcher, 2004). Further studies are needed to characterize how all of these spatial and non-spatial factors affect the representation of perceptual decision-making across the population of neurons in LIP.
Nevertheless, either interpretation implies a flexible relationship between perceptual decision-making and spatial processing in LIP. In particular, our results seem inconsistent with the idea that a given neuron represents a perceptual decision only insofar as the decision is used to direct attention or intention towards or away from that neuron’s RF. Because selectivity for motion direction did not correspond to selectivity for target color or saccadic choice, motion-driven responses were not predictive of a particular target color or choice to a given spatial location. Moreover, a previous study using a version of the colored-target task similar to task 3 found no evidence for spatially organized saccade plans that corresponded to a particular direction decision (Gold and Shadlen, 2003). Thus, even if the direction-selective responses we found in LIP represent a sort of temporary plan to generate a particular eye movement or focus of attention either towards or away from a given target (Snyder et al., 2000; Barash et al., 1991; Zhang and Barash, 2000; Gnadt and Andersen, 1988; Colby and Goldberg, 1999), this plan is not organized with respect to the same spatial map defined by the neurons’ RFs measured on the memory-saccade task.
Experience likely played an important role in establishing and shaping these flexible, task-relevant responses in LIP (Freedman and Assad, 2006; Law and Gold, 2008; Law and Gold, 2009). The monkeys used in this study were previously trained on a pro-saccade version of the direction-discrimination task (Connolly et al., 2009). That task used only red targets, which might help to explain the preponderance of red-selective neurons we found in this study. Moreover, training on that task gives rise to responses that encode the strength and direction of the moving visual stimulus. These responses are found in the same subpopulation of neurons that we sampled: those with spatially selective activity during the delay period of the memory-saccade task (Shadlen and Newsome, 2001; Roitman and Shadlen, 2002; Law and Gold, 2008). However, when measured in the context of the pro-saccade task, these responses are strongly spatial, reflecting both the direction decision and the impending oculomotor response (Gold and Shadlen, 2000; Gold and Shadlen, 2003). In contrast, after training on the colored-target task, we found that the same subpopulation of LIP neurons can encode the direction decision and oculomotor plan separately. Together, these results suggest that experience plays an ongoing role in shaping LIP response properties to be appropriate for the task at hand (Freedman and Assad, 2006; Law and Gold, 2008).
We also do not know whether the decision-related signals we measured originated in LIP or were computed elsewhere and sent as copies to LIP. In principle, these signals could arise from numerous brain regions that provide direct or indirect input to LIP and are thought to be involved in decision-making, including in the prefrontal cortex and basal ganglia (Kim and Shadlen, 1999; Heekeren et al., 2004; Balleine et al., 2007). However, none of these brain regions has been examined using the kind of task we present here. Another possibility is the SC, which has been shown to include a small subset of neurons with direction-selective activity that is not strongly tied to a given saccadic response (Horwitz et al., 2004). However, those results were obtained using a task in which the spatial configuration of the choice targets always included a component in the direction of motion, leaving open the possibility that the neural activity was selective for that spatial component of the saccadic response and not the perceptual decision.
Conversely, LIP receives direct and indirect input from the middle temporal area (MT) of extrastriate visual cortex that could be used directly to form the direction decision (Blatt et al., 1990). On a reaction-time (RT) version of the pro-saccade task, electrical microstimulation in LIP affects RTs in a manner consistent with a causal role in the decision process (Hanks et al., 2006). It would be interesting to design an RT version of the colored-target task to more effectively analyze the time course of the perceptual decision and to test for similar causality when the decision is not explicitly linked to a specific eye-movement response.
We thank M. Shadlen and M. Nassar for helpful comments on this manuscript and J. Zweigle for expert technical assistance. Supported by the McKnight Endowment Fund for Neuroscience, the Burroughs Wellcome Fund, NIH R01-EY015260 and R03-MH087798.