Annu Rev Psychol. Author manuscript; available in PMC 2011 January 1. PMCID: PMC2849803

Cognitive Neural Prosthetics


The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions.

Keywords: decision making, planning, intention, posterior parietal cortex, brain-machine interface, efference copy, learning, sensorimotor transformation


The number of patients suffering from some form of paralysis in the United States alone has been estimated to be from 1.7 million (U.S. Dept. Health Human Serv. 1995) to 5.6 million (Christopher & Dana Reeve Found. 2009). Paralysis can result from spinal cord lesion and other traumatic accidents, peripheral neuropathies, amyotrophic lateral sclerosis, multiple sclerosis, and stroke. Another 1.4 million patients have motor disabilities due to limb amputation (U.S. Dept. Health Human Serv. 1995). A majority of these patients still have sufficiently intact cortex to plan movements, but they are unable to execute them. Thus they are candidates for assistance using cortical neural prosthetics.

Figure 1 shows the concept of cortical neural prosthetics generally, and cognitive neural prosthetics (CNPs) more specifically. In this particular case, the patient is shown to have a spinal cord lesion, but a similar logic applies to other forms of paralysis or to amputation. The patient can still plan movements but cannot execute them. Recordings can be made from microelectrode arrays in cortex. The implants not only record the activity of populations of nerve cells but also transmit these signals wirelessly to external assistive devices. Implants can be placed in a variety of areas, and they record the intent or other cognitive variables of the subject. Decoding algorithms interpret the meaning of the recorded signals. These algorithms can be incorporated into hardware in the implant or in the external devices. The decoded neural signals are further transformed to provide control signals to operate assistive devices. In the example in Figure 1, these devices can include robotic limbs, functional electrical stimulation of otherwise paralyzed limbs for reanimation, wheelchair navigation, Internet access, email, telephone and other forms of computer-assisted communication, and the control of the patient's environment including television, temperature control, and calls for assistance. Elements of a cortical neural prosthetic, including the electrodes, decoding algorithms, and associated electronics, are often referred to collectively as a brain-machine interface (BMI).

Figure 1
Schematic representation of a cognitive neural prosthetic. In this example, the patient has a lesion of the spinal cord, represented by the red X on the brain drawing on the left. The patient can still see the goal of a movement and can plan the movement, ...

In research applications, healthy monkeys are used to test cortical prosthetics. Typically, the animals have a permanently implanted array of electrodes, similar to the case for human patients. The animals control an output device with their thoughts and can do this without eliciting movements. This process is often referred to as a brain-control task. It is also called a closed-loop task since the animals receive feedback about their performance, for instance, the movement of a cursor on a computer screen controlled by their neural activity.

Many studies have involved extracting motor execution signals from motor cortex (Carmena et al. 2003, Fetz 1969, Serruya et al. 2002, Taylor et al. 2002). It was often observed that the monkeys did not need to actually move the limb to bring a cursor under brain control. In terms of the current topic, this would constitute cognitive control, in which brain signals not directly related to executing a movement can nonetheless be harnessed for the task. This control can be derived from motor imagery, planning, attention, decision making, or executive control, to name just a few of the cognitive signals that are potentially useful for neuroprosthetics. The distinction is not the brain location of the recording but rather the type of signal that is being extracted (Andersen et al. 2004a). That said, some brain areas will no doubt be better sources of signals for particular neuroprosthetic applications. The specialization of different cortical areas is an advantage for CNPs. For example, for mute patients, speech can potentially be decoded directly from speech areas rather than by using a letter board and controlling a cursor from motor cortex. CNPs can also take advantage of parallel decoding, in which implants are placed in multiple cortical areas and different signals are decoded simultaneously. A prime example of this parallel decoding is the application of CNPs to complex, multi-effector movements, discussed below.

Science as a Guide for Cognitive Neural Prosthetics

One central element of neuroprosthetics is engineering. Advanced statistical and signal-processing techniques are commonly used to optimize decoding algorithms as well as develop algorithms that are adaptive. Such approaches are essential to CNPs. However, CNPs also benefit from the additional component of a scientific understanding of the brain processes being performed by the region(s) of recording. This understanding extends to functional neuroanatomy and network/circuit properties that can guide the selection of recording sites and the design and implementation of decoding algorithms. The following sections highlight examples that match cortical function, and in some cases decoding algorithms, to particular cognitive neuroprosthetic applications.

Intended Movement Goals

Neural prosthetic applications have often used trajectory signals to bring a cursor or a robotic hand to a goal (Carmena et al. 2003, Serruya et al. 2002, Taylor et al. 2002). This approach, especially for cursor control, is similar to using a mouse to drag a cursor to a location on a computer screen. However, many applications would benefit from being able to rapidly indicate a series of goals. One example is the rapid selection of letters on a letter board for communication. Another is providing a sequence of movements to a robotic limb, allowing more fluid programming of a string of movements.

The primary motor cortex (M1) contains some goal information, but such information is more strongly represented in premotor and posterior parietal cortex (Hatsopoulos et al. 2004, Snyder et al. 1997) (Figure 2). This goal activity reflects the animal's intent to make a movement to the goal.

Figure 2
The representation of intended goals in the parietal reach region (PRR). The top plot shows the delayed goal-directed reach task. After the brief presentation of a cue stimulus (green circle), the monkey plans a reach to the cued location but delays the ...

Gnadt & Andersen (1988) first demonstrated intended eye movement signals in the lateral intraparietal area (LIP) of the posterior parietal cortex (PPC). Subsequent studies showed that neurons in the parietal reach region (PRR) of PPC encode the intent to make reach movements (Snyder et al. 1997). A variety of experiments have demonstrated intention-related activity that cannot be attributed to spatial attention (Cui & Andersen 2007, Gail & Andersen 2006, Quiroga et al. 2006, Scherberger & Andersen 2007, Scherberger et al. 2005, Snyder et al. 1998). These movement plans can be formed and then cancelled without executing a movement; therefore, they do not reflect motor execution but rather the higher-level plan to move (Andersen & Buneo 2002, Bracewell et al. 1996, Snyder et al. 1998). The high-level, abstract nature of the intention signals is also evident from the finding that intended reach activity in PRR is coded in visual rather than limb coordinates (Batista et al. 1999).

Putative homologues of LIP and PRR have been identified in humans using functional magnetic resonance imaging (fMRI) (Astafiev et al. 2003, Connolly et al. 2003, Filimon et al. 2009). Interestingly, electrical stimulation of the posterior parietal cortex in human patients evoked the conscious intention to move various body parts even though no movements resulted from the stimulation (Desmurget et al. 2009). Thus, beyond the intention-related activity demonstrated in monkey PPC neurons, this study shows that the conscious awareness of intention also arises with increased PPC activity.

Musallam et al. (2004) demonstrated decoding of four or eight goal locations from array recordings in the PRR of PPC and the dorsal premotor cortex (PMd) of frontal cortex (Figure 3A). To emphasize the cognitive nature of the signal, they decoded the persistent activity that results when monkeys plan a movement to a briefly cued location in space while withholding execution of the movement. This goal signal is decoded in the dark, with no stimulus present and no movement being executed; it is endogenously generated and represents the movement thought, or intent, of the animal. This intent can be decoded very rapidly: a 100 ms time segment was nearly as accurate for decoding as a 900 ms segment. Subsequent studies in PMd showed that three goals could be decoded in rapid succession (Santhanam et al. 2006).
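
The decoding step can be illustrated with a toy maximum-likelihood classifier on delay-period spike counts. Everything below is invented for illustration: the tuning matrix, the Poisson spiking assumption, and the window lengths are stand-ins, not the study's actual data or algorithm. It does, however, show why a short spike-count window can suffice when tuning across goals is well separated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tuning: mean delay-period firing rate (Hz) of each unit for
# each of 4 goal locations; real rates would be fit from training trials.
tuning = np.array([
    [25.0,  8.0,  5.0,  9.0],   # unit 1 prefers goal 0
    [ 6.0, 22.0,  7.0,  5.0],   # unit 2 prefers goal 1
    [ 4.0,  6.0, 24.0,  8.0],   # unit 3 prefers goal 2
    [ 7.0,  5.0,  6.0, 21.0],   # unit 4 prefers goal 3
])

def decode_goal(spike_counts, window_s):
    """Maximum-likelihood goal decode assuming Poisson spiking.

    Log-likelihood of goal g: sum_i [n_i * log(lam_ig) - lam_ig], where
    lam_ig = rate_ig * window.  Returns the argmax goal index.
    """
    lam = tuning * window_s                       # expected counts per goal
    loglik = spike_counts @ np.log(lam) - lam.sum(axis=0)
    return int(np.argmax(loglik))

# Simulate one planning trial toward goal 2 and decode a 0.9 s window.
true_goal, window = 2, 0.9
counts = rng.poisson(tuning[:, true_goal] * window)
print(decode_goal(counts, window))
```

Scaling the window only rescales the expected counts, so with well-separated tuning even a 100 ms window usually identifies the same goal, mirroring the rapid decoding reported above.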

Figure 3
(A) Schematic of areas in the cortex where cognitive signals can be recorded for neural prosthetic applications. For reaches, these areas include the parietal reach region (PRR) and dorsal premotor cortex (PMd); for saccades, lateral intraparietal (LIP) ...

Many natural movements are highly coordinated movement sequences rather than the single reaches discussed above. Frontal areas encode the parts of sequences, including the directions and order of movements (Averbeck et al. 2006, Fujii & Graybiel 2003, Histed & Miller 2006, Lu & Ashe 2005, Mushiake et al. 2006, Ninokura et al. 2003, Ohbayashi et al. 2003, Tanji & Shima 1994). An early study of PRR found that only the next movement of a sequence is encoded, but the task was complex and involved the canceling of old plans and the formation of new ones (Batista & Andersen 2001). In a more direct test of sequential planning in PRR, it was found that the area represents simultaneously and in parallel the first and second goals in a sequence of two movements (Baldauf et al. 2008). This dual representation was present regardless of whether the two movements were made rapidly or slowly. A nearest-neighbor decoding algorithm applied to the data revealed that the sequence of planned movements could be decoded. This decoding was done during a delay period in which there was no stimulus or movement, again emphasizing the cognitive nature of the signals encoding the sequence. Thus PRR activity encodes a sequence of movements simultaneously and in parallel, a feature that could be exploited to provide more fluid operation of output devices for prosthetic applications.
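
A minimal sketch of nearest-neighbor decoding of a two-goal sequence from simulated delay-period activity. The unit count, tuning weights, trial counts, and noise level are all invented here; the only point carried over from the study is the scheme itself: population activity superposing both goals, classified by Euclidean distance to per-sequence templates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 20 units whose delay-period rates superpose
# tuning to the first and second goals of a planned two-reach sequence.
n_units, n_goals = 20, 4
w1 = rng.normal(0, 1, (n_units, n_goals))   # tuning to first goal
w2 = rng.normal(0, 1, (n_units, n_goals))   # tuning to second goal

def trial(g1, g2, noise=0.3):
    """Simulated population vector for planning the sequence (g1, g2)."""
    return w1[:, g1] + w2[:, g2] + rng.normal(0, noise, n_units)

# Training templates: mean activity for each of the 16 possible sequences.
templates = {(a, b): np.mean([trial(a, b) for _ in range(20)], axis=0)
             for a in range(n_goals) for b in range(n_goals)}

def decode_sequence(x):
    """Nearest-neighbor decode: the (g1, g2) template closest to x."""
    return min(templates, key=lambda k: np.linalg.norm(x - templates[k]))

correct = sum(decode_sequence(trial(a, b)) == (a, b)
              for a in range(n_goals) for b in range(n_goals))
print(f"{correct}/16 test sequences decoded correctly")
```

Because both goals contribute additively to the same population vector, a single classification recovers the whole planned sequence at once, consistent with the parallel representation described above.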

Motor Imagery and Mirror Neurons

Motor imagery is another potential basis of neural prosthetic control. The fact that primary motor cortex cells can be trained to respond without evoking a movement suggests that movements can be imagined even from an area very close to the final motor output (Fetz 2007, Hochberg et al. 2006). Noninvasive fMRI studies have provided a picture of the circuits involved in imagined movement. Motor imagery activates a subset of the areas that are also active during real movements, particularly premotor areas in the frontal lobe and areas in the posterior parietal cortex (Decety 1996, Gerardin et al. 2000, Glidden et al. 2005, Stephan et al. 1995), and this activation can be as large as that seen for real movements [with the exception of motor cortex, which shows much less activation for imagined than for real movements (Glidden et al. 2005)].

Another potential source of motor imagery for prosthetic control is the mirror neuron. Mirror neurons respond when a monkey makes a movement and also when the monkey observes the experimenter making the same movement (di Pellegrino et al. 1992, Fogassi et al. 2005, Gallese et al. 1996, Tkach et al. 2007). It has been proposed that mirror neurons form the basis of action understanding (Fogassi et al. 2005). It is possible that mirror neurons may also be activated during internally generated, imagined movements. If this is the case, and these cells also exist in human cortex, they may provide a source of control of complex and meaningful movements for prosthetics applications.

Expected Value and Decision Making

Many of the cortical areas that can produce control signals are also involved in action selection. These areas represent the expected value or utility of an action. Decision making is based on choosing the alternative with the highest value. In monkey experiments, the value is generally appetitive and includes the type, amount, and probability of reward. Many areas in the parietal and frontal cortex which represent movement plans also represent the expected value of the planned action (Barraclough et al. 2004, Campos et al. 2005, Hikosaka & Watanabe 2000, Kobayashi et al. 2002, Leon & Shadlen 1999, Matsumoto et al. 2003, Platt & Glimcher 1999, Schultz 2000, Shidara & Richmond 2002, Sugrue et al. 2004, Tremblay & Schultz 2000).

Expected value can be decoded from PRR recordings in both delayed reach and brain-control tasks (Musallam et al. 2004). In the latter, the monkey plans a movement but does not execute it and instead uses the planning activity to move a cursor to a goal on a computer screen. During a session, one reward variable (type, size, or probability) changed from trial to trial. The cue size indicated on each trial whether the animal would receive the preferred or less-preferred reward for successful completion of the trial. The cue size was varied across sessions so that a large cue represented a more desirable or less desirable reward on different days. In general, the anticipation of a preferred reward led to a larger response and improved spatial tuning. The increase in activity was unlikely to be due to increased attention given that no increase in activity was seen when the non-preferred reward was aversive (saline solution). Overall, the cells carried more information for preferred reward expectation. Parallel decoding showed that expected reward and spatial location could be decoded simultaneously. Moreover, since the cells carried more information about spatial location when higher reward was expected, the decoding performance for target location was better in high-reward trials.

A practical advantage of the reward expectation decoding results is that they provide insight into the preferences and potentially the mood of the patient. The first thing a doctor asks a patient is, “How are you feeling?” On a more general level, this study was the first to show that a very high level cognitive signal, expected value, could be decoded in brain-control trials. These results open the door to decoding many complex cognitive signals including speech, attention, executive control, and emotion for prosthetics applications.

Forward Models

Numerous studies support the idea that the brain constructs internal forward and inverse models to control movement (Atkeson 1989, Jordan & Rumelhart 1992, Kawato et al. 1987, Wolpert et al. 1995). The forward model predicts the sensory consequences of a movement by incorporating recent motor commands into a model of the movement dynamics, thereby predicting the upcoming state of the effector (e.g., the limb). The inverse model produces the motor commands necessary to achieve the desired movement.

Efference copy, and by extension a forward model, can be used to cancel the sensory effects of one's own movements (Andersen et al. 1987, Bradley et al. 1996, Claxton 1975, Crowell et al. 1998, Diedrichsen et al. 2005, Duhamel et al. 1992, Haarmeier et al. 2001, Roy & Cullen 2004, Royden et al. 1992, Shadmehr & Krakauer 2008, Weiskrantz et al. 1971). Another important feature of forward models is that they remove the delays that are present between movements and the resulting sensory feedback. When sensory feedback alone is used to correct movements online, these delays would normally lead to overcompensation and instability. For example, the execution of a goal-directed arm movement will result in visual signals that take approximately 90 ms (Raiguel et al. 1999) and somatosensory signals that take 20 to 40 ms (Allison et al. 1991) to reach sensorimotor cortex. Subsequent processing delays for sensorimotor integration, motor command generation, and execution result in delays of more than 100 ms for somatosensory control (Flanders & Cordo 1989) and over 200 ms for visuomotor control (Georgopoulos et al. 1981, Miall et al. 1993). However, by monitoring the movement commands through an efference copy of the command, the current state of the arm can be estimated internally well in advance of the late-arriving sensory information.
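
A toy discrete-time example of the delay problem. Assuming a hypothetical 1-D arm driven by acceleration commands at 10 ms steps, integrating an efference copy of the commands reproduces the current state, while sensory feedback only reports a state from 100 ms earlier:

```python
# Sketch of delay compensation by a forward model: a 1-D arm with 10 ms
# time steps and a 100 ms sensory delay (all parameters illustrative).
dt, delay_steps = 0.01, 10

def simulate(commands):
    """Integrate acceleration commands; return the position trace."""
    pos, vel, trace = 0.0, 0.0, []
    for u in commands:
        vel += u * dt
        pos += vel * dt
        trace.append(pos)
    return trace

commands = [1.0] * 50               # constant acceleration for 0.5 s
trace = simulate(commands)

t = 49                              # current time step
sensory_estimate = trace[t - delay_steps]          # delayed feedback
forward_estimate = simulate(commands[:t + 1])[-1]  # efference-copy prediction
true_position = trace[t]

print(sensory_estimate, forward_estimate, true_position)
```

The forward estimate matches the true present state exactly (the model here is perfect by construction), whereas the sensory estimate lags behind it; correcting movements from the lagged value alone is what produces the overcompensation described above.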

Kalman Filter

The forward model can also be incorporated into an observer framework (Goodwin & Sin 1984, Miall & Wolpert 1996) (see Figure 4). The forward model derives an estimate of the upcoming or current state of the limb. Sensory events arriving later are integrated with the forward model to update and refine the estimate. This combination of the forward model and sensory feedback is called the "observer," and for linear systems with additive and Gaussian noise, the optimal observer is known as a Kalman filter (Kalman 1960).
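
A minimal observer sketch, assuming linear 1-D dynamics with additive Gaussian noise; all matrices below are illustrative, not fit to physiological data. The predict step is the forward model driven by the efference copy of the command, and the update step folds in the late-arriving sensory measurement:

```python
import numpy as np

rng = np.random.default_rng(2)

# State x = [position, velocity]; known command u (acceleration);
# noisy position-only measurements (all values illustrative).
dt = 0.01
A = np.array([[1, dt], [0, 1]])       # forward-model dynamics
B = np.array([0.5 * dt**2, dt])       # effect of the command
H = np.array([[1.0, 0.0]])            # sensors report position only
Q = 1e-6 * np.eye(2)                  # process noise covariance
R = np.array([[1e-2]])                # measurement noise covariance

x_true = np.zeros(2)
x_hat, P = np.zeros(2), np.eye(2)

for _ in range(200):
    u = 1.0                                        # motor command
    # True dynamics and noisy sensory feedback.
    x_true = A @ x_true + B * u
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    # Predict: forward model driven by the efference copy.
    x_hat = A @ x_hat + B * u
    P = A @ P @ A.T + Q
    # Update: refine the estimate with the sensory measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("true state:", x_true, "estimate:", x_hat)
```

The Kalman gain K weights the measurement against the model prediction according to their noise covariances, which is exactly the sense in which this observer is optimal for the linear-Gaussian case named in the text.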

Figure 4
Sensorimotor integration for reach planning and online control. Rounded boxes denote pertinent sensorimotor variables, and computational processes are contained in the rectangular boxes. Prior to a reach, an intended trajectory is formulated as a function ...

Studies in humans have suggested that the observer may, at least in part, be located in the PPC. Lesions of the PPC produce optic ataxia in which patients have difficulty in locating and reaching to targets (Balint 1909, Perenin & Vighetto 1998, Rondot et al. 1977), in making corrective movements (Grea et al. 2002, Pisella et al. 2000), and in maintaining an estimate of the internal state of the arm (Wolpert et al. 1998). Transcranial magnetic stimulation (TMS) over the PPC interferes with the correction of trajectories or adaptation to novel force-fields (Della-Maggiore et al. 2004, Desmurget et al. 1999). PPC lesions disrupt the mental simulation of movement (Sirigu et al. 1996), and a disruption is not seen with lesions of M1 (Sirigu et al. 1995), cerebellum (Kagerer et al. 1998), or basal ganglia (Dominey et al. 1995). The movement durations are generally underestimated during simulation by these patients, suggesting a disruption of the forward model.

Neurophysiological recording experiments also suggest that a forward model may include PPC. This part of cortex receives massive feedback projections from motor structures and input from visual and somatosensory cortex and thus is ideally situated for integrating efference copy and sensory signals (Andersen et al. 1985a, 1990; Goldman-Rakic 1998; Johnson et al. 1996; Jones & Powell 1970). In an experiment designed to investigate forward models in PPC, monkeys were trained in a joystick task to move a cursor to targets either directly or around obstacles (Mulliken et al. 2008b). The obstacles afforded a more curved trajectory (Figure 5A). Neurons in the PPC encoded not only the static goal of the movement endpoint but also the dynamic heading angle of the moving cursor. The timing of the dynamic component was centered on zero lag (Figure 5B). Thus it was too late to represent the motor command and too early to be derived from sensory input. Rather, it appears to represent the current direction of cursor movement, consistent with a forward estimate. This direction was approximately linear in space-time, indicating that it encodes a mostly linear instantaneous trajectory. Similar dynamics have recently been observed between hand kinematics and neural activity in area 5 (Archambault et al. 2009).

Figure 5
(A) Example trajectories made for direct and obstacle versions of the joystick task. (B) Distribution of optimal lag times (OLTs) for posterior parietal cortex (PPC) population tuned to the dynamic movement angle. Many neurons' OLTs were consistent with ...

A subsequent experiment showed that this forward estimate could be harnessed for neuroprosthetic applications (Mulliken et al. 2008a). The trajectories of a cursor could be decoded in joystick and brain-control tasks (Figure 5C and 5D). In the latter, the neural activity moved the cursor on the computer screen in real time. A goal-based Kalman filter was also applied for decoding, which used both the forward estimate and the goal component of neural activity. This decoding method was superior to other methods that did not use a combination of the goal and trajectory information.

Multi-Effector Movements

So far, neural prosthetics applications have focused on single effector movements, for instance a robotic limb or a cursor. However, natural movements often involve several body parts, especially for bimanual operations, hand-eye coordination, and reach-to-grasp.

Cells in parietal and premotor areas show response specificity for effectors (Figure 3B). For instance, in the PPC there are cells specific for reaching, eye movements, and grasp (Andersen et al. 1985a, 1990; Andersen & Buneo 2002; Sakata et al. 1997; Snyder et al. 1997). These cells tend to be clustered into cortical areas—saccade selectivity in lateral intraparietal (LIP) area, reach in PRR, and grasp in the anterior intraparietal (AIP) area. A similar clustering of specificity has been shown in the frontal lobe, with saccades for the frontal eye fields (FEFs) (Bizzi 1967, Bruce & Goldberg 1985, Bruce et al. 1985), reach for PMd (Wise 1985), and grasp for the ventral premotor cortex (PMv) (Rizzolatti et al. 1994). To date, closed-loop brain control for reach has been shown in PRR and PMd (Carmena et al. 2003, Mulliken et al. 2008a, Musallam et al. 2004, Santhanam et al. 2006) and online decoding for grasp in AIP and PMv (Townsend et al. 2007), but not for eye movements from FEF or LIP.

Reach-to-Grasp

The most natural extension of brain-control reach is reach-to-grasp. One method of approaching this problem would be to record from the limb area and hand area of M1 (Velliste et al. 2008). However, an alternative would be to record from reach (PRR/PMd) and grasp areas (AIP/PMv) in parietal and premotor cortex, where the movements are more abstractly represented (Baumann et al. 2009). For instance, cells in AIP and PMv represent the shapes of objects and the hand shape needed to grasp them (Baumann et al. 2009, Rizzolatti et al. 1994, Sakata et al. 1997). Thus, single cells can indicate the configuration of the hand and would not require a large number of cells for the different digits (as would perhaps be the case for M1 recordings).

Bimanual Movements

There are very few investigations of the neural mechanisms for bimanual movement. Most early studies considered M1 to be involved only in the control of the contralateral limb. However, experiments in which monkeys made bimanual movements showed that a significant number of M1 cells responded to ipsilateral movements, although less strongly than to contralateral movements (Donchin et al. 1998). When comparing bimanual movements to single-limb movements, most M1 cells showed significant differences in activity, indicating that bimanual interactions are extremely common. These effects could not be accounted for by postural differences between the single- and two-limb tasks. The interactions were often quite complex and included facilitation, suppression, and even changes in preferred direction tuning (Donchin et al. 1998, Rokni et al. 2003). The supplementary motor area (SMA) contains a large number of bimanual-responding neurons and bimanual interactions (Donchin et al. 1998). In PRR, a recent study found a continuum of representations of the limb, from pure contralateral representation to bimanual representation (Chang et al. 2008). These studies indicate a high degree of coordination between the limbs in parietal-frontal circuits and open the possibility of controlling two limbs effectively in bimanual operations.

Hand-Eye Coordination

Recordings from eye movement areas may be used to improve the decoding of reaches. Combining recordings from eye and reach areas exploits the fact that eye and hand movements are coordinated: we typically look where we reach. Using eye position information recorded from an external eye tracker or estimated from neural activity, decoding of reach targets can be improved (Batista et al. 2008). Similarly, activity in parietal and frontal areas indicates the focus of attention. Attention is automatically attracted to the target of a reach (Baldauf et al. 2006, Deubel et al. 1998) and could also be used to facilitate decoding.

Common Coordinate Frames

Cells in LIP and PRR encode visual targets mostly in eye coordinates (Andersen et al. 1985b, Batista et al. 1999). That is, they signal the location of a target with respect to the eyes, and if the gaze direction changes, the location in space for which the cells are sensitive shifts with the gaze. The spatial locations of sounds are initially extracted with respect to the head. However, when auditory stimuli are the targets of saccades or reaches, the encoding of the saccade targets in LIP and reach targets in PRR are often represented in eye coordinates (Cohen & Andersen 2000, 2002; Stricanne et al. 1996). Common coordinate frames between these areas may facilitate decoding during hand-eye coordination.

Brain-control trials from PRR have been performed with the eyes fixating straight ahead to control for the eye-centered encoding of stimuli. However, when the eyes are free to move, the accuracy of spatial decoding is about the same as when the eyes are fixed (Musallam et al. 2004). This curious observation raises several possibilities: compensations may be made through updating or gain fields (Andersen et al. 1985b, Duhamel et al. 1992, Gnadt & Andersen 1988), the decoding algorithms may extract the regularities of hand-eye coordination, or PRR may change coordinate frames depending on the constraints of the task.

Relative Coordinate Frames

PMd uses a different coordinate frame for encoding reach targets than PRR does. Whereas PRR uses predominantly an eye-centered coordinate frame (Batista et al. 1999, Cisek & Kalaska 2002, Pesaran et al. 2006b), PMd encodes simultaneously the target with respect to the eye (eye-centered), the target with respect to the hand (hand-centered), and the hand with respect to the eye (hand-in-eye) (Pesaran et al. 2006b). Rather than encoding the three variables in absolute spatial coordinates, it represents all three with respect to one another. This relative coding may be tailored to coordinating different body parts independent of their particular locations in the workspace. Area 5 may use a similar relative coordinate frame since it encodes reach targets with respect to both the hand and the eye (Buneo et al. 2002), although hand-in-eye coding has yet to be tested there.

This relative coordinate frame encoding has potential advantages for neuroprosthetic applications. It defines a “work space,” as mentioned above, which can be used for multi-effector movement tasks. Since the three relative frames are in extrinsic coordinates, it also allows inversions/transformations between coordinate frames (Pesaran et al. 2006b). Relative codes can reduce the accumulation of errors that may result from maintaining absolute encodings of spatial location (Csorba & Durrant-Whyte 1997, Dissanayake et al. 2001, Newman 1999, Olfati & Murray 2002).
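
The relative code can be illustrated with simple vector arithmetic in a toy 2-D workspace (the positions below are invented for illustration). The three relative vectors close a loop, so any one can be reconstructed from the other two, and all three are invariant to translating the whole scene:

```python
import numpy as np

# Toy 2-D extrinsic positions (illustrative only).
eye = np.array([0.0, 0.0])
hand = np.array([10.0, -5.0])
target = np.array([25.0, 15.0])

# The three relative vectors described for PMd.
target_wrt_eye = target - eye      # target-in-eye
target_wrt_hand = target - hand    # target-in-hand
hand_wrt_eye = hand - eye          # hand-in-eye

# Loop closure: target-in-eye = hand-in-eye + target-in-hand, so any one
# vector can be recovered by inverting/combining the other two.
reconstructed = hand_wrt_eye + target_wrt_hand

# Workspace invariance: translating the whole scene leaves the relative
# code unchanged, which is why it avoids absolute-position error buildup.
offset = np.array([100.0, 50.0])
shifted_target_wrt_hand = (target + offset) - (hand + offset)

print(reconstructed, shifted_target_wrt_hand)
```

The translation invariance in the last step is the sense in which the relative code defines a "work space" independent of absolute locations.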

Sensorimotor Context and Rules

Sensorimotor context determines movement goals. For instance, one may wish to reach to the location of a cookie, but reach away from the location of a bee. Most neural prosthetics research has used straightforward goal-directed movements toward a stimulus.

One method for studying context is the antimovement task (Figure 6A). The animal is cued to either move toward or away from a target (Boussaoud et al. 1993, Crammond & Kalaska 1994, di Pellegrino & Wise 1993, Everling et al. 1999, Gail & Andersen 2006, Georgopoulos et al. 1989, Gottlieb & Goldberg 1999, Kalaska 1996, Schlag-Rey et al. 1997, Zhang & Barash 2000). This task has typically been used for saccades and reaches to dissociate sensory signals from movement signals. For antimovement trials, if a neuron only codes the stimulus location, it is considered sensory; if it only encodes the movement direction, it is considered movement-related; and if it codes both, it is considered sensorimotor.

Figure 6
(A) Flow diagram of rule-based pro-/antimovement task. (B) Decoding of the task rule in PRR. The prediction of the task rule (pro/anti) was significantly above chance (50%) prior to the start of the cue period, indicating explicit task rule representations ...

The pro-/antimovement task can be structured as a sensorimotor transformation with two opposing stimulus-response mappings. The executive function for applying the abstract rule for transformation may reside in the prefrontal cortex, premotor cortex, and basal ganglia (Boettiger & D'Esposito 2005, Nixon et al. 2004, Pasupathy & Miller 2005, Petrides 1982, Toni & Passingham 1999, Wallis et al. 2001, White & Wise 1999), although rule-based activity has also been reported in the PPC (Grol et al. 2006, Stoet & Snyder 2004). This rule can then act on the sensorimotor transformation process in PPC, premotor, and motor areas. We present here a recent example from PRR, since in this case neural decoding techniques were used as part of the analysis and shed light on how a CNP could determine the abstract rule and the appropriate stimulus-response mapping (Gail & Andersen 2006).

Recordings from PRR used a pro-/antireach task (Figure 6A) with three advantages: (a) four different directions for pro- and antimovements were used so the spatial tuning of the cells for both rules could be determined; (b) briefly flashed targets were used, with delays interposed before the "go" signal to highlight cognitive-related activity; and (c) the task rule to be applied was provided at the beginning of each trial, prior to the presentation of the target cue. This last feature is important for examining whether the rule can be decoded, since often the features of the target dictate the rule, confounding rule-based activity in time with other variables such as a sensory response to the target and the possible cancelling of an automatic plan toward the target on antimovement trials. Using this paradigm, it was found that the number of cells tuned only to the target was statistically insignificant, the number tuned only to the movement direction predominated (45%), and a small number were sensorimotor (7%). The sensorimotor cells were initially tuned to the stimulus location and then, during the delay period, became tuned to the direction of the planned movement. The fact that most cells were tuned only for movement direction rules out spatial attention as a contributing factor for those neurons.

Decode performance for task rule and direction (2 × 4 conditions; 12.5% chance) rose steeply after the brief (200 ms) presentation of the target cue, reaching a 90% peak within 150 ms, and remained high during the variable delay (1–1.5 s) before the “go” signal. A transconditional decode revealed that the cue location was only weakly coded, through the sensorimotor cells, during the brief cue presentation; PRR represented the movement plan from the end of the cue throughout the delay period (Figure 6C). These dynamics indicate that PRR immediately transforms the sensory representation into a movement representation, without any residual memory of the location of the sensory signal. Interestingly, the rule could be predicted above chance shortly before the presentation of the target, indicating that it was explicitly represented in PRR before the target cue appeared (Figure 6B). In a separate neural network modeling study based on this experiment, it was found that the context information could be integrated with the sensory target location through a classic gain field mechanism (Brozovic et al. 2007). It was suggested that this context modulation may result from top-down information originating from the frontal or parietal lobe.
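As an illustration of the kind of decode described above, the following toy Python example classifies trials among the eight rule × direction conditions by maximum likelihood, assuming Poisson spike counts. The cell count, firing rates, and trial numbers are invented for the sketch and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 40 cells, 8 conditions (2 rules x 4 directions), chance = 12.5%.
n_cells, n_cond = 40, 8
tuning = rng.uniform(2.0, 20.0, size=(n_cond, n_cells))  # mean spike count per cell per condition

def decode_ml(counts, tuning):
    """Maximum-likelihood decode: pick the rule-direction condition whose mean
    rates best explain the observed spike counts, assuming Poisson firing."""
    loglik = counts @ np.log(tuning.T) - tuning.sum(axis=1)  # up to a counts-only constant
    return int(np.argmax(loglik))

# Simulate 50 trials per condition and decode each one back.
trials_per_cond = 50
correct = 0
for cond in range(n_cond):
    for _ in range(trials_per_cond):
        counts = rng.poisson(tuning[cond])
        correct += int(decode_ml(counts, tuning) == cond)
accuracy = correct / (n_cond * trials_per_cond)
print(f"decode accuracy: {accuracy:.2f} (chance = {1 / n_cond:.3f})")
```

With well-separated synthetic tuning curves the decode is far above the 12.5% chance level, which is the qualitative behavior reported for the PRR population.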

Recent experiments show that rule-based sensorimotor transformations can also be extended to brain-control experiments (E.J. Hwang and R.A. Andersen, personal observation). Using cell recordings from PRR, and without making any overt reaches, the monkeys were able to move a cursor on a computer screen in the direction opposite to a cue. In a second experiment, the monkeys were trained to associate an arrow presented at the straight-ahead position on a computer screen with brain-control cursor movements in the direction the arrow was pointing (Hwang & Andersen 2008).

In the above reach tasks and brain-control tasks, the rules are applied to sensorimotor transformations. However, the fact that rules and their effect on neural transformations can both be decoded from the same population of cells in PPC suggests that other types of executive functions can be decoded in other brain areas. Executive rules that lie outside sensorimotor transformation include categorization, direction of spatial attention, and the formation of abstract concepts and thoughts.


Typically, neural prosthetic applications have relied on the spiking activity of neurons as a signal source. Information from spikes is very precise, and spikes are the fundamental signaling units of the brain. Another signal of interest, particularly from the viewpoint of CNPs, is the local field potential (LFP) (Andersen et al. 2004b, Pesaran et al. 2006a). The LFP is determined by a number of factors, including the geometry and alignment of its sources, and thus can vary from region to region with changes in local architecture. The LFP also derives from multiple sources, including synaptic potentials and action potentials, and is summed over a volume of tissue containing hundreds or thousands of cells. Still, there are features of LFPs that make them generally useful for prosthetic applications. They are often tuned, for instance, to the direction of planned reaches (Scherberger et al. 2005), and thus can provide additional information to improve decoding when used in combination with spikes. They also have a larger “listening sphere” than single cells. Electrode array implants generally have fixed geometries, so the sampling of cells is hit or miss: many electrodes will not be near neurons and will not yield recordings. Typically, lower-impedance electrodes are used to increase yield; this approach enlarges the listening sphere but also lowers the signal-to-noise ratio and makes single-cell isolation difficult. LFPs, on the other hand, sample from a large listening sphere, so the yield is much higher. Over time, the reliability of recording spikes often declines, whereas the LFP signal remains largely unaffected. The basis of this decrease in performance is not completely clear, but it may include long-term encapsulation of the electrodes by glial scarring, which would be expected to have a larger effect on the more local spike signals than on LFPs.

From the viewpoint of CNPs, LFPs make two primary contributions. The first is that cognitive state is actually easier to decode from LFPs than from spikes. Recordings made from single electrodes in LIP during a memory saccade task showed that, on a single-trial basis, the direction of a planned saccade could be decoded equally well from single-cell spike activity and from the LFP (Pesaran et al. 2002). On the other hand, the time of transition from planning to executing a saccade could be decoded from the LFP but not from the spike recording. Interestingly, direction was decoded from the higher-frequency (30–100 Hz) part of the LFP spectrum and the cognitive state transition from the lower-frequency (0–20 Hz) part. A similar result was found when decoding from a population of PRR recording sites obtained on different days (Scherberger et al. 2005). In this task, the monkeys made saccades or reaches on different trials, so there was a variety of cognitive states: baseline (fixating only), reach planning, saccade planning, reaching, and saccading. Whereas state could be decoded from both LFPs and spikes, many more recording sites were needed with spikes than with LFPs to reach similar performance. Conversely, although direction could be decoded from both spikes and LFPs, the spike decodes were slightly better.
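The spectral division of labor reported here, with movement direction carried by the higher (30–100 Hz) band and cognitive state by the lower (0–20 Hz) band, suggests band power as a simple decoding feature. The sketch below, with an invented threshold and a plain periodogram rather than the spectral estimators used in these studies, classifies a planning-versus-execution state from the ratio of low- to high-band LFP power.

```python
import numpy as np

def band_power(lfp, fs, lo, hi):
    """Mean periodogram power of an LFP snippet within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(lfp), 1.0 / fs)
    power = np.abs(np.fft.rfft(lfp)) ** 2 / len(lfp)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def classify_state(lfp, fs, threshold=1.0):
    """Toy planning-vs-execution classifier: the transition to execution shows up
    as more low-frequency (0-20 Hz) and less high-frequency (30-100 Hz) power."""
    ratio = band_power(lfp, fs, 0.0, 20.0) / band_power(lfp, fs, 30.0, 100.0)
    return "execute" if ratio > threshold else "plan"
```

On a synthetic snippet dominated by gamma-band power this returns "plan", and on one dominated by low frequencies it returns "execute", mirroring the spectral signature of the state transition described above.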

The second contribution of LFPs to CNPs is that they may provide complementary information due to some differences in source and as a result allow a broader view of overall network activity. In cortex, the largest class of neurons is the pyramidal (output) cell. Also, single-cell recording is biased toward larger cells, again the pyramidal neurons. Thus, spiking activity tends to represent the output of a cortical region (Figure 7). The LFP is not simply a sum of averaged spike activity but rather reflects, to a considerable degree, the mean synaptic activity that derives from inputs and intracortical processing (Buzsaki & Draguhn 2004, Logothetis et al. 2001). Thus, a component of LFP activity in the PPC may derive from inputs into the area (Figure 7). An efference copy from motor areas to PPC may explain why the LFP in PPC is so sensitive to the transition from planning to executing, since the PPC output is not thought to contribute to the execution of movements (Andersen & Buneo 2002). Recently, monkeys have been trained to generate a self-paced “go” LFP signal in PPC for brain-control experiments (Hwang & Andersen 2007). The animals were trained to generate the signature LFP signal associated with the state change from planning to executing actual reaches, an increase in lower frequencies and decrease in higher frequencies, without making any movements. The feasibility of a hybrid BMI system was also demonstrated in brain-control experiments in which the direction was decoded from the PRR spike activity while the state was decoded from the LFP recorded at the same site (Hwang & Andersen 2007). As mentioned above, the direction can be decoded from both spikes and LFPs in PRR. Additional brain-control experiments showed that using spikes and LFPs together increased decode performance compared to using either alone, demonstrating that combining information from the two sources can lead to better decodes (Hwang & Andersen 2009).
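A simple way to see why combining spikes and LFPs can improve decoding is to treat them as two noisy, conditionally independent evidence sources and sum their log-likelihoods. The simulation below is purely illustrative (Gaussian features, invented dimensions and noise levels), not the decoder of Hwang & Andersen (2009).

```python
import numpy as np

rng = np.random.default_rng(1)
n_dir = 4

# Hypothetical feature means: 6 spike-derived and 6 LFP-derived features per direction.
mu_spk = rng.normal(0.0, 1.0, (n_dir, 6))
mu_lfp = rng.normal(0.0, 1.0, (n_dir, 6))

def loglik(x, mu, sigma=1.5):
    """Gaussian log-likelihood of feature vector x under each direction's mean."""
    return -((x - mu) ** 2).sum(axis=1) / (2.0 * sigma ** 2)

def run(trials=2000, use=("spk", "lfp")):
    """Decode direction from spike features, LFP features, or both combined."""
    correct = 0
    for _ in range(trials):
        d = rng.integers(n_dir)
        x_spk = mu_spk[d] + rng.normal(0.0, 1.5, 6)   # noisy spike features
        x_lfp = mu_lfp[d] + rng.normal(0.0, 1.5, 6)   # noisy LFP features
        lp = np.zeros(n_dir)
        if "spk" in use:
            lp += loglik(x_spk, mu_spk)
        if "lfp" in use:
            lp += loglik(x_lfp, mu_lfp)
        correct += int(np.argmax(lp) == d)
    return correct / trials

acc_spk = run(use=("spk",))
acc_lfp = run(use=("lfp",))
acc_both = run()
```

In this toy setting the combined decode matches or exceeds either source alone, the same qualitative effect reported in the brain-control experiments.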

Figure 7
Example of recording sources of spikes and local field potentials. Large pyramidal neurons project out of cortex and represent the predominant source of recorded spikes due to their larger size and number. The local field potentials reflect to a large ...


Most applications using brain-control tasks have shown a high degree of learning. This learning can be very rapid, over a few minutes to hours of training (Fetz 1969, Jarosiewicz et al. 2008, Moritz et al. 2008), over a period of days (Mulliken et al. 2008a), or even over a period of weeks (Carmena et al. 2003, Musallam et al. 2004, Taylor et al. 2002). This learning extends to LFPs (Hwang & Andersen 2007). Learning effects have been seen in many brain areas including the motor cortex, premotor cortex, and parietal cortex. To our knowledge, it is currently not understood which areas may show more plasticity or be better for learning particular categories of tasks.


A number of classes of decoding algorithms have been applied in brain-control experiments. Interestingly, many of these algorithms have parallels with brain function and may be successful, in part, because of these similarities. Bayesian decoding, which computes the most likely intended movement given the observed neural activity (Bokil et al. 2006, Gao et al. 2002, Scherberger et al. 2005, Shenoy et al. 2003), is one example: recent modeling studies have suggested that cortical areas represent probability distributions and may use Bayesian inference for decision making (Beck et al. 2008). Population vector decoding has also been used in brain-control experiments (Taylor et al. 2002, Velliste et al. 2008); this algorithm was originally developed to explain how the direction of reaches is represented by populations of neurons in motor cortex (Georgopoulos et al. 1986). As mentioned above, Kalman filter decoding works well in PPC, and this cortical region has properties of state estimation that can be modeled effectively with Kalman filters (Mulliken et al. 2008a, Wu et al. 2004). Transitions between cognitive states produce changes in brain activity, reflected particularly in the LFP spectrum but also in spike firing rates; state-space models such as finite state machines and Markov models have been successful in decoding these state changes and may mirror the states, and transitions between states, present in brain activity (Shenoy et al. 2003). Decoding has also been improved by taking into account correlations between spike trains (Abbott & Dayan 1999, Averbeck et al. 2006, Brown et al. 2004, Nirenberg & Latham 2003) and the temporal regularities in responses (Musallam et al. 2004). Decoding methods that account for correlations and dynamics may exploit underlying temporal coding strategies used by the brain.
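For example, population vector decoding (Georgopoulos et al. 1986) can be sketched in a few lines. The cosine-tuned synthetic cells below are illustrative rather than recorded data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 100
pref = rng.uniform(0.0, 2.0 * np.pi, n_cells)  # each cell's preferred direction

def firing_rates(theta, baseline=10.0, mod=8.0):
    """Cosine-tuned mean rates (Georgopoulos-style): peak at the preferred direction."""
    return baseline + mod * np.cos(theta - pref)

def population_vector(rates, baseline=10.0):
    """Each cell 'votes' along its preferred direction, weighted by how far its
    rate deviates from baseline; the vector sum estimates movement direction."""
    w = rates - baseline
    return np.arctan2((w * np.sin(pref)).sum(), (w * np.cos(pref)).sum())

theta = np.deg2rad(135.0)                  # true movement direction
rates = rng.poisson(firing_rates(theta))   # noisy single-trial spike counts
est = np.rad2deg(population_vector(rates)) % 360.0
```

Despite Poisson noise in every cell, the vector sum over a hundred cells recovers the direction to within a few degrees, which is why the algorithm works well for continuous cursor and arm control.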


The applications of CNPs are wide ranging and rely on decoding signals that are neither motor execution commands nor sensory responses but instead reflect internal cognitive states. Although several specific examples are given in this review, in principle all cognitive states can likely be decoded given the appropriate recording technologies, placement of electrodes, and decoding algorithms. Spike activity is a major signal source, but LFPs are also particularly useful for determining cognitive state. Extensions of signal analysis may include measuring spike-field coherence between cortical areas, which may provide additional insight into the processing of cognitive functions, and their decoding, across cortical circuits (see Future Issues, below). CNPs may also be extended to patients' volitional control of brain stimulation, which can be applied to movement disorders, depression, epilepsy, and other brain diseases that may benefit from neural stimulation. Clearly, the future is bright for the application of CNPs to assisting patients with brain disorders. An added benefit is that CNP research will continue to uncover the neural basis of cognitive functions, both through the basic research that forms the foundation of CNPs and through the insights afforded by their operation and performance.


SUMMARY POINTS
  1. Cognitive neural prosthetics tap into brain signals that are neither motor execution commands nor sensory signals, but rather represent higher brain functions such as intention, multi-effector and sequential movement planning, attention, decision making, executive control, emotions, learning, and speech. Scientific understanding of the functional organization of cortex helps to guide the placement of electrodes and the choices of decoding algorithms.
  2. Memory-guided movements are often used in examining cognitive processes that might be applicable to cognitive neural prosthetic applications. These tasks have delay components in which there is no sensory stimulus or movement, and persistent neural activity during this period represents the cognitive process under study.
  3. Single goals and even sequences of intended goals can be decoded from prefrontal and parietal regions. Advantages of this approach over conventional trajectory decoding are speed (typically it takes a second or more to arrive at a goal, whereas goal endpoint decoding can be achieved in one-tenth the time) and the ability to plan ahead. Such goal decoding is ideally suited for rapid applications such as “typing” using letter boards.
  4. Cognitive neural prosthetics can make use of motor imagery that appears in motor cortex and generates even greater activity, judged from fMRI experiments, in other frontal and parietal areas related to movement planning.
  5. Decision variables related to reward expectation, including the amount, type, and probability of reward, can be decoded from parietal cortex. These signals may be useful for cognitive neural prosthetic applications by registering the preferences and mood of the patients.
  6. Forward models, used to predict the current state of a movement and derived from efference copy signals, can be harnessed for producing trajectories for CNPs. Interestingly, these signals can be internally generated without any movement actually occurring.
  7. The representation of different effectors in different cortical areas allows for the decoding of complex movements such as reach-to-grasp and bimanual movements. Common coordinate frames in some of these cortical areas, and the use of relative coordinate frames, can facilitate the use of multi-effector operations using CNPs.
  8. CNPs can be used to decode executive functions. This application has been demonstrated for determining rules for sensorimotor tasks but in principle can also be applied to executive functions of categorization, directing spatial attention, and the formation of abstract concepts and thoughts.
  9. Local field potentials (LFPs) can be used in addition to spike activity to enhance CNP applications. They are superior to spikes for decoding cognitive state in parietal cortex. Also, spikes and LFPs reflect to a certain degree different sources, with spikes more indicative of cortical outputs and LFPs indicative of inputs and intracortical processing. Thus, combined use of these signals provides a larger functional view of the network in which a cortical area is embedded.
  10. Certain decoding algorithms may model underlying brain processes and thus be particularly useful for CNP applications.
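Point 6 above notes that forward models predict the current state of a movement from efference copy signals. This idea can be sketched as the two steps of a Kalman filter, under assumed linear hand dynamics; the matrices and noise values here are illustrative, not taken from the cited studies.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity dynamics
B = np.array([[0.0], [dt]])             # how the motor command changes velocity
H = np.array([[1.0, 0.0]])              # sensory feedback reports position only
Q = np.eye(2) * 1e-4                    # process noise covariance
R = np.array([[1e-2]])                  # sensory noise covariance

def predict(x, P, u):
    """Forward model: advance the state estimate using the efference copy u,
    without waiting for (delayed) sensory feedback."""
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    return x, P

def update(x, P, z):
    """Fold a sensory measurement z into the estimate when it arrives."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The predict step alone can keep producing state estimates between (or without) sensory updates, which is the property that lets internally generated trajectories be decoded even when no movement occurs.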


FUTURE ISSUES
  1. A future challenge is to extend CNPs to multiple cortical areas. Recording from multiple cortical areas allows measures of LFP-LFP and spike-LFP coherences between them (Pesaran et al. 2008). These measures, particularly the spike-LFP measures, may indicate changes in communication between areas and may provide additional insights into cognitive functions and refinement of cognitive decoding algorithms.
  2. Another advance would be to bring therapies using brain stimulation under volitional control of the patient. For instance, deep-brain stimulation for movement disorders such as Parkinson's disease can be controlled manually by patients, although this can be a bit cumbersome. A more direct approach would be to bring the stimulation under cognitive control, in which a patient's decoded intentions could be used to control stimulation. Such an approach could be extended to other future uses of brain stimulation, such as the control of severe depression and obsessive-compulsive disorders.
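The spike-field coherence mentioned in the first point above can be computed, in its simplest form, as the magnitude-squared coherence between a binned spike train and a simultaneously recorded LFP. Below is a minimal Welch-style sketch, not the multitaper estimator used by Pesaran et al. (2008).

```python
import numpy as np

def spike_field_coherence(spikes, lfp, fs, nperseg=256):
    """Magnitude-squared coherence between a binned spike train and an LFP,
    from Welch-averaged cross- and auto-spectra over non-overlapping segments."""
    spikes = spikes - spikes.mean()
    lfp = lfp - lfp.mean()
    win = np.hanning(nperseg)
    nseg = len(lfp) // nperseg
    Sxx = np.zeros(nperseg // 2 + 1)
    Syy = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(nseg):
        X = np.fft.rfft(win * spikes[k * nperseg:(k + 1) * nperseg])
        Y = np.fft.rfft(win * lfp[k * nperseg:(k + 1) * nperseg])
        Sxx += np.abs(X) ** 2
        Syy += np.abs(Y) ** 2
        Sxy += X * np.conj(Y)
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)
```

Applied to a spike train whose firing probability is modulated by a 20 Hz LFP rhythm, the coherence shows a clear peak near 20 Hz and stays near the estimator's bias floor elsewhere.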


We thank the National Institutes of Health, the Defense Advanced Research Projects Agency, the Boswell Foundation, and the McKnight Foundation for supporting this research. We also thank Drs. Eb Fetz and Michael Campos for discussions during the preparation of this review. We thank Dr. Viktor Shcherbatyuk, Tessa Yao, Kelsie Pejsa, and Nicole Sammons for technical and editorial assistance.


Cognitive neural prosthetics (CNPs)
instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal
Decoding algorithms
computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines
Brain-machine interface (BMI)
a device that records neural activity, decodes these signals, and uses the decoded signals for operating machines
Brain-control task
a task in which the subject uses only neural signals to control an external device
Cognitive signals
neural activities related to high-level cognitive function (e.g., intention, planning, decision making, executive function, thoughts, concepts, and speech)
M1
primary motor cortex
LIP
lateral intraparietal area
PPC
posterior parietal cortex
PRR
parietal reach region
PMd
dorsal premotor cortex
Forward model
a prediction of the consequences of a movement by processing the efference copy signal of motor commands
Sensorimotor transformation
cortical processing and cortical areas that are involved in transforming sensory inputs to motor outputs for sensory-guided movements
AIP
anterior intraparietal area
FEF
frontal eye fields
PMv
ventral premotor cortex
Executive function
a cognitive system that manages other brain processes
Local field potential (LFP)
the recorded sum of electrical activity from hundreds to thousands of neurons around the tip of a microelectrode



The authors are not aware of any biases that might be perceived as affecting the objectivity of this review.


  • Abbott LF, Dayan P. The effect of correlated variability on the accuracy of a population code. Neural Comput. 1999;11:91–101. [PubMed]
  • Allison T, McCarthy G, Wood CC, Jones SJ. Potentials evoked in human and monkey cerebral cortex by stimulation of the median nerve. Brain. 1991;114:2465–503. [PubMed]
  • Andersen RA, Asanuma C, Cowan WM. Callosal and prefrontal associational projecting cell populations in area 7A of the macaque monkey: a study using retrogradely transported fluorescent dyes. J. Comp. Neurol. 1985a;232:443–55. [PubMed]
  • Andersen RA, Asanuma C, Essick C, Siegel RM. Corticocortical connection of anatomically and physiologically defined subdivisions within the inferior parietal lobe. J. Comp. Neurol. 1990;296:65–113. [PubMed]
  • Andersen RA, Budrick JW, Musallam S, Pesaran B, Cham JG. Cognitive neural prosthetics. Trends Cogn. Sci. 2004a;8:486–93. [PubMed]
  • Andersen RA, Buneo CA. Intentional maps in posterior parietal cortex. Annu. Rev. Neurosci. 2002;25:189–220. [PubMed]
  • Andersen RA, Essick GK, Siegel RM. The encoding of spatial location by posterior parietal neurons. Science. 1985b;230:456–58. [PubMed]
  • Andersen RA, Essick GK, Siegel RM. Neurons of area 7 activated by both visual stimuli and oculomotor behavior. Exp. Brain Res. 1987;67:316–22. [PubMed]
  • Andersen RA, Musallam S, Pesaran B. Selecting the signals for a brain-machine interface. Curr. Opin. Neurobiol. 2004b;14:1–7. [PubMed]
  • Archambault PS, Caminiti R, Battaglia-Mayer A. Cortical mechanisms for online control of hand movement trajectory: the role of the posterior parietal cortex. Cereb. Cortex. 2009 doi:10.1093/cercor/bhp058. [PubMed]
  • Astafiev SV, Shulman GL, Stanley CM, Snyder AZ, Van Essen DC, Corbetta M. Functional organization of human intraparietal and frontal cortex for attending, looking, and pointing. J. Neurosci. 2003;23:4689–99. [PubMed]
  • Atkeson CG. Learning arm kinematics and dynamics. Annu. Rev. Neurosci. 1989;12:157–83. [PubMed]
  • Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat. Rev. Neurosci. 2006;7:358–66. [PubMed]
  • Baldauf D, Cui H, Andersen RA. The posterior parietal cortex encodes in parallel both goals for double-reach sequences. J. Neurosci. 2008;28:10081–99. [PMC free article] [PubMed]
  • Baldauf D, Wolf M, Deubel H. Deployment of visual attention before sequences of goal-directed hand movements. Vision Res. 2006;46:4355–74. [PubMed]
  • Balint R. Seelenlähmung des “Schauens,” optische Ataxie, räumliche Störung der Aufmerksamkeit. Monatsschr. Psychiatr. Neurol. 1909;25:51–81.
  • Barraclough DJ, Conroy ML, Lee D. Prefrontal cortex and decision making in a mixed-strategy game. Nat. Neurosci. 2004;7:404–10. [PubMed]
  • Batista AP, Andersen RA. The parietal reach region codes the next planned movement in a sequential reach task. J. Neurophysiol. 2001;85:539–44. [PubMed]
  • Batista AP, Buneo CA, Snyder LH, Andersen RA. Reach plans in eye-centered coordinates. Science. 1999;285:257–60. [PubMed]
  • Batista AP, Yu BM, Santhanam G, Ryu SI, Afshar A, Shenoy KV. Cortical neural prosthesis performance improves when eye position is monitored. IEEE Trans. Neural. Syst. Rehabil. Eng. 2008;16:24–31. [PubMed]
  • Baumann MA, Fluet MC, Scherberger H. Context-specific grasp movement representation in the macaque anterior intraparietal area. J. Neurosci. 2009;29:6436–48. [PubMed]
  • Beck JM, Ma WJ, Kiani R, Hanks T, Churchland AK, et al. Probabilistic population codes for Bayesian decision making. Neuron. 2008;60:1142–52. [PMC free article] [PubMed]
  • Bizzi E. Discharge of frontal eye field neurons during eye movements in unanesthetized monkeys. Science. 1967;157:1588–90. [PubMed]
  • Boettiger CA, D'Esposito M. Frontal networks for learning and executing arbitrary stimulus-response associations. J. Neurosci. 2005;25:2723–32. [PubMed]
  • Bokil H, Pesaran B, Andersen RA, Mitra PP. A framework for detection and classification of events in neural activity. IEEE Trans. Biomed. Eng. 2006;53:1678–87. [PubMed]
  • Boussaoud D, Barth TM, Wise SP. Effects of gaze on apparent visual responses of frontal cortex neurons. Exp. Brain Res. 1993;93:423–34. [PubMed]
  • Bracewell RM, Mazzoni P, Barash S, Andersen RA. Motor intention activity in the macaque's lateral intraparietal area. II. Changes of motor plan. J. Neurophysiol. 1996;76:1457–64. [PubMed]
  • Bradley DC, Maxwell M, Andersen RA, Banks MS, Shenoy KV. Mechanisms of heading perception in primate visual cortex. Science. 1996;273:1544–47. [PubMed]
  • Brown E, Kass RE, Mitra PP. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat. Neurosci. 2004;7:456–61. [PubMed]
  • Brozovic M, Gail A, Andersen RA. Gain mechanisms for contextually guided visuomotor transformations. J. Neurosci. 2007;27:10588–96. [PubMed]
  • Bruce CJ, Goldberg ME. Primate frontal eye fields: I. Single neurons discharging before saccades. J. Neurophysiol. 1985;53:603–35. [PubMed]
  • Bruce CJ, Goldberg ME, Bushnell MC, Stanton GB. Primate frontal eye fields. II. Physiological and anatomical correlates of electrically evoked eye movements. J. Neurophysiol. 1985;54:714–34. [PubMed]
  • Buneo CA, Jarvis MR, Batista AP, Andersen RA. Direct visuomotor transformations for reaching. Nature. 2002;416:632–36. [PubMed]
  • Buzsaki G, Draguhn A. Neuronal oscillations in cortical networks. Science. 2004;304:1926–29. [PubMed]
  • Campos M, Breznen B, Bernheim K, Andersen RA. The supplementary motor area encodes reward expectancy in eye movement tasks. J. Neurophysiol. 2005;94:1325–35. [PubMed]
  • Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, et al. Learning to control a brain–machine interface for reaching and grasping by primates. PLoS Biol. 2003;1(2):e42. [PMC free article] [PubMed]
  • Chang SWC, Dickinson AR, Snyder LH. Limb-specific representation for reaching in the posterior parietal cortex. J. Neurosci. 2008;28:6128–40. [PMC free article] [PubMed]
  • Christopher & Dana Reeve Found . Paralysis and Spinal Cord Injury in the United States. Christopher & Dana Reeve Found; Short Hills, NJ: 2009. One Degree of Separation.
  • Cisek P, Kalaska JF. Modest gaze-related discharge modulation in monkey dorsal premotor cortex during a reaching task performed with free fixation. J. Neurophysiol. 2002;88:1064–72. [PubMed]
  • Claxton G. Why can't we tickle ourselves? Percept. Mot. Skills. 1975;41:335–38. [PubMed]
  • Cohen YE, Andersen RA. Eye position modulates reach activity to sounds. Neuron. 2000;27:647–52. [PubMed]
  • Cohen YE, Andersen RA. A common reference frame for movement plans in the posterior parietal cortex. Nat. Rev. Neurosci. 2002;3:553–62. [PubMed]
  • Connolly JD, Andersen RA, Goodale MA. fMRI evidence for a “parietal reach region” in the human brain. Exp. Brain Res. 2003;153:140–45. [PubMed]
  • Crammond DJ, Kalaska JF. Modulation of preparatory neuronal-activity in dorsal premotor cortex due to stimulus-response compatibility. J. Neurophysiol. 1994;71:1281–84. [PubMed]
  • Crowell JA, Banks MS, Shenoy KV, Andersen RA. Visual self-motion perception during head turns. Nat. Neurosci. 1998;1:732–37. [PubMed]
  • Csorba M, Durrant-Whyte HF. A new approach to map building using relative position estimates; Presented at Soc. Photo-Optical Instrumentation Engineers Conf.; Orlando, FL. 1997.
  • Cui H, Andersen RA. Posterior parietal cortex encodes autonomously selected motor plans. Neuron. 2007;56:552–59. [PMC free article] [PubMed]
  • Decety J. Do imagined and executed actions share the same neural substrate? Brain Res. Cogn. Brain Res. 1996;3:87–93. [PubMed]
  • Stimulation of the posterior parietal cortex interferes with arm trajectory adjustments during the learning of new dynamics. J. Neurosci. 2004;24:9971–76. [PubMed]
  • Role of the posterior parietal cortex in updating reaching movements to a visual target. Nat. Neurosci. 1999;2:563–67. [PubMed]
  • Desmurget M, Grafton S. Forward modeling allows feedback control for fast reaching movements. Trends Cogn. Sci. 2000;4:423–31. [PubMed]
  • Desmurget M, Reilly KT, Richard N, Szathmari A, Mottolese C, Sirigu A. Movement intention after parietal cortex stimulation in humans. Science. 2009;324:811–13. [PubMed]
  • Deubel H, Schneider WX, Paprotta I. Selective dorsal and ventral processing: evidence for a common attentional mechanism in reaching and perception. Visual Cogn. 1998;5:81–107.
  • Diedrichsen J, Hashambhoy Y, Rane T, Shadmehr R. Neural correlates of reach errors. J. Neurosci. 2005;25:9919–31. [PMC free article] [PubMed]
  • di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G. Understanding motor events: a neurophysiological study. Exp. Brain Res. 1992;91:176–80. [PubMed]
  • di Pellegrino G, Wise SP. Visuospatial versus visuomotor activity in the premotor and prefrontal cortex of a primate. J. Neurosci. 1993;13:1227–43. [PubMed]
  • Dissanayake MWMG, Newman P, Clark S, Durrant-Whyte HF, Csorba M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Rob. Autom. 2001;17:229–41.
  • Dominey P, Decety J, Broussolle E, Chazot G, Jeannerod M. Motor imagery of a lateralized sequential task is asymmetrically slowed in hemi-Parkinson's patients. Neuropsychologia. 1995;33:727–41. [PubMed]
  • Donchin O, Gribova A, Steinberg O, Berman H, Vaadia E. Primary motor cortex is involved in bimanual coordination. Nature. 1998;395:274–78. [PubMed]
  • Duhamel JR, Colby CL, Goldberg ME. The updating of the representation of visual space in parietal cortex by intended eye movements. Science. 1992;255:90–92. [PubMed]
  • Everling S, Dorris MC, Klein RM, Munoz DP. Role of primate superior colliculus in preparation and execution of anti-saccades and prosaccades. J. Neurosci. 1999;19:2740–54. [PubMed]
  • Fetz EE. Operant conditioning of cortical unit activity. Science. 1969;163:955–58. [PubMed]
  • Fetz EE. Volitional control of neural activity: implications for brain-computer interfaces. J. Physiol. 2007;579:571–79. [PubMed]
  • Filimon F, Nelson JD, Huang RS, Sereno MI. Multiple parietal reach regions in humans: cortical representations for visual and proprioceptive feedback during on-line reaching. J. Neurosci. 2009;29:2961–71. [PMC free article] [PubMed]
  • Flanders M, Cordo PJ. Kinesthetic and visual control of a bimanual task: specification of direction and amplitude. J. Neurosci. 1989;9:447–53. [PubMed]
  • Fogassi L, Ferrari PF, Gesierich B, Rozzi S, Chersi F, Rizzolatti G. Parietal lobe: from action organization to intention understanding. Science. 2005;308:662–67. [PubMed]
  • Fujii N, Graybiel AM. Representation of action sequence boundaries by macaque prefrontal cortical neurons. Science. 2003;301:1246–49. [PubMed]
  • Gail A, Andersen RA. Neural dynamics in monkey parietal reach region reflect context-specific sensorimotor transformations. J. Neurosci. 2006;26:9376–84. [PubMed]
  • Gallese V, Fadiga L, Fogassi L, Rizzolatti G. Action recognition in the premotor cortex. Brain. 1996;119:593–609. [PubMed]
  • Gao Y, Black MJ, Bienenstock E, Shoham S, Donoghue JP. Probabilistic inference of hand motion from neural activity in motor cortex. In: Dietterich TG, Becker S, Ghahramani Z, editors. Advances in Neural Information Processing Systems. Vol. 14. MIT Press; Cambridge, MA: 2002. pp. 213–20.
  • Georgopoulos AP, Kalaska JF, Massey JT. Spatial trajectories and reaction-times of aimed movements: effects of practice, uncertainty, and change in target location. J. Neurophysiol. 1981;46:725–43. [PubMed]
  • Georgopoulos AP, Lurito JT, Petrides M, Schwartz AB, Massey JT. Mental rotation of the neuronal population vector. Science. 1989;243:234–36. [PubMed]
  • Georgopoulos AP, Schwartz A, Kettner RE. Neuronal population coding of movement direction. Science. 1986;233:1416–19. [PubMed]
  • Gerardin E, Sirigu A, Lehericy S, Poline JB, Gaymard B, et al. Partially overlapping neural networks for real and imagined hand movements. Cereb. Cortex. 2000;10:1093–104. [PubMed]
  • Glidden HK, Rizzuto DS, Andersen RA. Localizing neuroprosthetic implant targets with fMRI: premotor, supplementary motor, and parietal regions; Presented at Ann. Meet. Soc. Neurosci; Washington, DC. 2005.
  • Gnadt JW, Andersen RA. Memory-related motor planning activity in posterior parietal cortex of macaque. Exp. Brain Res. 1988;70:216–20. [PubMed]
  • Goldman-Rakic PS. Topography of cognition: parallel distributed networks in primate association cortex. Annu. Rev. Neurosci. 1988;11:137–56. [PubMed]
  • Goodwin GC, Sin KS. Adaptive Filtering Prediction and Control. Prentice-Hall; Englewood Cliffs, NJ: 1984.
  • Gottlieb J, Goldberg ME. Activity of neurons in the lateral intraparietal area of the monkey during an antisaccade task. Nat. Neurosci. 1999;2:906–12. [PubMed]
  • Grea H, Pisella L, Rosetti Y, Desmurget M, Tilikete C, Grafton S. A lesion of the posterior parietal cortex disrupts on-line adjustments during aiming movements. Neuropsychologia. 2002;40:2471–80. [PubMed]
  • Grol MJ, de Lange FP, Verstraten FAJ, Passingham RE, Toni I. Cerebral changes during performance of overlearned arbitrary visuomotor associations. J. Neurosci. 2006;26:117–25. [PubMed]
  • Haarmeier T, Bunjes F, Lindner A, Berret E, Thier P. Optimizing visual motion perception during eye movements. Neuron. 2001;32:527–35. [PubMed]
  • Hatsopoulos N, Joshi J, O'Leary JG. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. J. Neurophysiol. 2004;92:1165–74. [PubMed]
  • Hikosaka K, Watanabe M. Delay activity of orbital and lateral prefrontal neurons of the monkey varying with different rewards. Cereb. Cortex. 2000;10:263–71. [PubMed]
  • Histed MH, Miller EK. Microstimulation of frontal cortex can reorder a remembered spatial sequence. PLoS Biol. 2006;4:e134. [PMC free article] [PubMed]
  • Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442:164–71. [PubMed]
  • Hwang EJ, Andersen RA. Decoding a “go” signal using the local field potential in the parietal reach region. Presented at Annu. Meet. BMES, Los Angeles, CA. 2007
  • Hwang EJ, Andersen RA. The parietal reach region represents the spatial goal in symbolically instructed reaches. Presented at Annu. Meet. Soc. Neurosci., Washington, DC. 2008.
  • Hwang EJ, Andersen RA. Complementary multi-site LFPs and spikes for reach target location decoding in parietal region. Presented at Annu. Meet. Soc. Neurosci., Chicago, IL. 2009.
  • Jarosiewicz B, Chase SM, Fraser GW, Velliste M, Kass RE, Schwartz AB. Functional network reorganization during learning in a brain-computer interface paradigm. Proc. Natl. Acad. Sci. USA. 2008;105:19486–91. [PubMed]
  • Johnson PB, Ferraina S, Bianchi L, Caminiti R. Cortical networks for visual reaching: physiological and anatomical organization of frontal and parietal lobe arm regions. Cereb. Cortex. 1996;6:102–19. [PubMed]
  • Jones EG, Powell TP. An anatomical study of converging sensory pathways within the cerebral cortex of the monkey. Brain. 1970;93:793–820. [PubMed]
  • Jordan MI, Rumelhart DE. Forward models: supervised learning with a distal teacher. Cogn. Sci. 1992;16:307–54.
  • Kagerer F, Bracha V, Wunderlich DA, Stelmach GE, Bloedel JR. Ataxia reflected in the simulated movements of patients with cerebellar lesions. Exp. Brain Res. 1998;121:125–34. [PubMed]
  • Kalaska JF. Parietal cortex area 5 and visuomotor behavior. Can. J. Physiol. Pharmacol. 1996;74:483–98. [PubMed]
  • Kalman RE. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960;82:35–45.
  • Kawato M, Furukawa K, Suzuki R. A hierarchical neural-network model for control and learning of voluntary movement. Biol. Cybern. 1987;57:169–85. [PubMed]
  • Kobayashi S, Lauwereyns J, Koizumi M, Sakagami M, Hikosaka O. Influence of reward expectation on visuospatial processing in macaque lateral prefrontal cortex. J. Neurophysiol. 2002;87:1488–98. [PubMed]
  • Leon MI, Shadlen MN. Effect of expected reward magnitude on the response of neurons in the dorso-lateral prefrontal cortex of the macaque. Neuron. 1999;24:415–25. [PubMed]
  • Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A. Neurophysiological investigation of the basis of the fMRI signal. Nature. 2001;412:150–57. [PubMed]
  • Lu X, Ashe J. Anticipatory activity in primary motor cortex codes memorized movement sequences. Neuron. 2005;45:967–73. [PubMed]
  • Matsumoto K, Suzuki W, Tanaka K. Neuronal correlates of goal-based motor selection in the prefrontal cortex. Science. 2003;301:229–32. [PubMed]
  • Miall RC, Weir DJ, Wolpert DM, Stein JF. Is the cerebellum a Smith predictor? J. Motor Behav. 1993;25:203–16. [PubMed]
  • Miall RC, Wolpert DM. Forward models for physiological motor control. Neural Netw. 1996;9:1265–79. [PubMed]
  • Moritz CT, Perlmutter SI, Fetz EE. Direct control of paralysed muscles by cortical neurons. Nature. 2008;456:639–42. [PMC free article] [PubMed]
  • Mulliken GH, Musallam S, Andersen RA. Decoding trajectories from posterior parietal cortex ensembles. J. Neurosci. 2008a;28:12913–26. [PMC free article] [PubMed]
  • Mulliken GH, Musallam S, Andersen RA. Forward estimation of movement state in posterior parietal cortex. Proc. Natl. Acad. Sci. USA. 2008b;105:8170–77. [PubMed]
  • Musallam S, Corneil BD, Greger B, Scherberger H, Andersen RA. Cognitive control signals for neural prosthetics. Science. 2004;305:258–62. [PubMed]
  • Mushiake H, Saito N, Sakamoto K, Itoyama Y, Tanji J. Activity in the lateral prefrontal cortex reflects multiple steps of future events in action plans. Neuron. 2006;50:631–41. [PubMed]
  • Newman PM. On the Solution to the Simultaneous Localization and Map Building Problem. PhD thesis. Univ. Sydney; Sydney, Austral.: 1999.
  • Ninokura Y, Mushiake H, Tanji J. Representation of the temporal order of objects in the primate lateral prefrontal cortex. J. Neurophysiol. 2003;89:2869–73. [PubMed]
  • Nirenberg S, Latham PE. Decoding neuronal spike trains: How important are correlations? Proc. Natl. Acad. Sci. USA. 2003;100:7348–53. [PubMed]
  • Nixon PD, McDonald KR, Gough PM, Alexander IH, Passingham RE. Cortico-basal ganglia pathways are essential for the recall of well-established visuomotor associations. Eur. J. Neurosci. 2004;20:3165–78. [PubMed]
  • Ohbayashi M, Ohki K, Miyashita Y. Conversion of working memory to motor sequence in the monkey premotor cortex. Science. 2003;301:233–36. [PubMed]
  • Olfati-Saber R, Murray RM. Distributed cooperative control of multiple vehicle formations using structural potential functions. Proc. 15th IFAC World Congress, Barcelona, Spain. 2002.
  • Pasupathy A, Miller EK. Different time courses for learning-related activity in the prefrontal cortex and striatum. Nature. 2005;433:873–76. [PubMed]
  • Perenin MT, Vighetto A. Optic ataxia: a specific disruption in visuomotor mechanisms. I. Different aspects of the deficit in reaching for objects. Brain. 1988;111:643–74. [PubMed]
  • Pesaran B, Musallam S, Andersen RA. Cognitive neural prosthetics. Curr. Biol. 2006a;16:77–80. [PubMed]
  • Pesaran B, Nelson MJ, Andersen RA. Dorsal premotor neurons encode the relative position of the hand, eye and goal during reach planning. Neuron. 2006b;51:125–34. [PMC free article] [PubMed]
  • Pesaran B, Nelson MJ, Andersen RA. Free choice activates a decision circuit between frontal and parietal cortex. Nature. 2008;453:406–9. [PMC free article] [PubMed]
  • Pesaran B, Pezaris JS, Sahani M, Mitra PP, Andersen RA. Temporal structure in neuronal activity during working memory in macaque parietal cortex. Nat. Neurosci. 2002;5:805–11. [PubMed]
  • Petrides M. Motor conditional associative-learning after selective prefrontal lesions in the monkey. Behav. Brain Res. 1982;5:407–13. [PubMed]
  • Pisella L, Grea H, Tilikete C, Vighetto A, Desmurget M. An “automatic pilot” for the hand in human posterior parietal cortex: toward reinterpreting optic ataxia. Nat. Neurosci. 2000;3:729–36. [PubMed]
  • Platt ML, Glimcher PW. Neural correlates of decision variables in parietal cortex. Nature. 1999;400:233–38. [PubMed]
  • Quian Quiroga R, Snyder LH, Batista AP, Cui H, Andersen RA. Movement intention is better predicted than attention in the posterior parietal cortex. J. Neurosci. 2006;26:3615–20. [PubMed]
  • Raiguel SE, Xiao DK, Marcar VL, Orban GA. Response latency of macaque area MT/V5 neurons and its relationship to stimulus parameters. J. Neurophysiol. 1999;82:1944–56. [PubMed]
  • Rizzolatti G, Riggio L, Sheliga B. Space and selective attention. In: Umilta C, Moscovitch M, editors. Attention and Performance. MIT Press; Cambridge, MA: 1994. pp. 231–65.
  • Rokni U, Steinberg O, Vaadia E, Sompolinsky H. Cortical representation of bimanual movements. J. Neurosci. 2003;23:11577–86. [PubMed]
  • Rondot P, de Recondo J, Ribadeau Dumas JL. Visuomotor ataxia. Brain. 1977;100:355–76. [PubMed]
  • Roy JE, Cullen KE. Dissociating self-generated from passively applied head motion: neural mechanisms in the vestibular nuclei. J. Neurosci. 2004;24:2102–11. [PubMed]
  • Royden CS, Banks MS, Crowell JA. The perception of heading during eye movements. Nature. 1992;360:583–85. [PubMed]
  • Sakata H, Taira M, Kusunoki M, Murata A, Tanaka Y. The TINS Lecture. The parietal association cortex in depth perception and visual control of hand action. Trends Neurosci. 1997;20:350–57. [PubMed]
  • Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV. A high-performance brain-computer interface. Nature. 2006;442:195–98. [PubMed]
  • Scherberger H, Andersen RA. Target selection signals for arm reaching in the posterior parietal cortex. J. Neurosci. 2007;27:2001–12. [PubMed]
  • Scherberger H, Jarvis MR, Andersen RA. Cortical local field potential encodes movement intentions in the posterior parietal cortex. Neuron. 2005;46:347–54. [PubMed]
  • Schlag-Rey M, Amador N, Sanchez H, Schlag J. Antisaccade performance predicted by neuronal activity in the supplementary eye field. Nature. 1997;390:398–401. [PubMed]
  • Schultz W. Multiple reward signals in the brain. Nat. Rev. Neurosci. 2000;1:199–207. [PubMed]
  • Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP. Instant neural control of a movement signal. Nature. 2002;416:141–42. [PubMed]
  • Shadmehr R, Krakauer JW. A computational neuroanatomy for motor control. Exp. Brain Res. 2008;185:359–81. [PMC free article] [PubMed]
  • Shenoy KV, Meeker D, Cao SY, Kureshi SA, Pesaran B, et al. Neural prosthetic control signals from plan activity. Neuroreport. 2003;14:591–96. [PubMed]
  • Shidara M, Richmond BJ. Anterior cingulate: single neuronal signals related to degree of reward expectancy. Science. 2002;296:1709–11. [PubMed]
  • Sirigu A, Cohen L, Duhamel JR, Pillon B, Dubois B, Agid Y. Congruent unilateral impairments for real and imagined hand movements. Neuroreport. 1995;6:997–1001. [PubMed]
  • Sirigu A, Duhamel JR, Cohen L, Pillon B, Dubois B, Agid Y. The mental representation of hand movements after parietal cortex damage. Science. 1996;273:1564–68. [PubMed]
  • Snyder LH, Batista AP, Andersen RA. Coding of intention in the posterior parietal cortex. Nature. 1997;386:167–70. [PubMed]
  • Snyder LH, Batista AP, Andersen RA. Change in motor plan, without a change in the spatial locus of attention, modulates activity in posterior parietal cortex. J. Neurophysiol. 1998;79:2814–19. [PubMed]
  • Stephan KM, Fink GR, Passingham RE, Silbersweig D, Ceballos-Baumann AO, et al. Functional anatomy of the mental representation of upper extremity movements in healthy subjects. J. Neurophysiol. 1995;73:373–86. [PubMed]
  • Stoet G, Snyder LH. Single neurons in posterior parietal cortex of monkeys encode cognitive set. Neuron. 2004;42:1003–12. [PubMed]
  • Stricanne B, Andersen RA, Mazzoni P. Eye-centered, head-centered, and intermediate coding of remembered sound locations in area LIP. J. Neurophysiol. 1996;76:2071–76. [PubMed]
  • Sugrue LP, Corrado GS, Newsome WT. Matching behavior and the representation of value in the parietal cortex. Science. 2004;304:1782–87. [PubMed]
  • Tanji J, Shima K. Role for supplementary motor area cells in planning several movements ahead. Nature. 1994;371:413–16. [PubMed]
  • Taylor DM, Tillery SIH, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science. 2002;296:1829–32. [PubMed]
  • Tkach D, Reimer J, Hatsopoulos NG. Congruent activity during action and action observation in motor cortex. J. Neurosci. 2007;27:13241–50. [PubMed]
  • Toni I, Passingham RE. Prefrontal-basal ganglia pathways are involved in the learning of arbitrary visuomotor associations: a PET study. Exp. Brain Res. 1999;127:19–32. [PubMed]
  • Townsend BR, Lehmann SJ, Subasi E, Scherberger H. Decoding hand grasping from primate premotor and parietal cortex. Presented at Annu. Meet. Soc. Neurosci., San Diego, CA. 2007.
  • Tremblay L, Schultz W. Reward-related neuronal activity during go-nogo task performance in primate orbitofrontal cortex. J. Neurophysiol. 2000;83:1864–76. [PubMed]
  • U.S. Dept. Health Human Serv. Current estimates from the National Health Interview Survey, 1994. Vital Health Stat. 1995;193(Pt 1):1–260. [PubMed]
  • Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB. Cortical control of a prosthetic arm for self-feeding. Nature. 2008;453:1098–101. [PubMed]
  • Wallis JD, Anderson KC, Miller EK. Single neurons in prefrontal cortex encode abstract rules. Nature. 2001;411:953–56. [PubMed]
  • Weiskrantz L, Elliott J, Darlington C. Preliminary observations on tickling oneself. Nature. 1971;230:598. [PubMed]
  • White IM, Wise SP. Rule-dependent neuronal activity in the prefrontal cortex. Exp. Brain Res. 1999;126:315–35. [PubMed]
  • Wise SP. The primate premotor cortex: past, present, and preparatory. Annu. Rev. Neurosci. 1985;8:1–19. [PubMed]
  • Wolpert DM, Ghahramani Z, Jordan MI. Are arm trajectories planned in kinematic or dynamic coordinates? An adaptation study. Exp. Brain Res. 1995;103:460–70. [PubMed]
  • Wolpert DM, Goodbody SJ, Husain M. Maintaining internal representations: the role of the human superior parietal lobe. Nat. Neurosci. 1998;1:529–33. [PubMed]
  • Wu W, Black MJ, Mumford D, Gao Y, Bienenstock E, Donoghue JP. Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Trans. Biomed. Eng. 2004;51:933–42. [PubMed]
  • Zhang M, Barash S. Neuronal switching of sensorimotor transformations for antisaccades. Nature. 2000;408:971–75. [PubMed]
  • Berger TW, Ahuja A, Courellis SH, Deadwyler SA, Erinjippurath G, et al. Restoring lost cognitive function. IEEE Eng. Med. Biol. Mag. 2005;24(5):30–44. [PubMed]
  • Donoghue JP. Bridging the brain to the world: a perspective on neural interface systems. Neuron. 2008;60(3):511–21. [PubMed]
  • Fagg AH, Hatsopoulos NG, de Lafuente V, Moxon KA, Nemati S, et al. Biomimetic brain machine interfaces for the control of movement. J. Neurosci. 2007;27(44):11842–46. [PMC free article] [PubMed]
  • Fetz EE. Volitional control of neural activity: implications for brain-computer interfaces. J. Physiol. 2007;579(3):571–79. [PubMed]
  • Kennedy PR, Bakay RAE, Moore MM, Adams K, Goldwaithe J. Direct control of a computer from the human central nervous system. IEEE Trans. Rehabil. Eng. 2000;8(2):198–202. [PubMed]
  • Lebedev MA, Nicolelis MAL. Brain-machine interfaces: past, present and future. Trends Neurosci. 2006;29(9):536–46. [PubMed]
  • Schwartz AB, Cui XT, Weber DJ, Moran DW. Brain-controlled interfaces: movement restoration with neural prosthetics. Neuron. 2006;52(1):205–20. [PubMed]