The study participants, referred to as S3 and T2 (a 58-year-old woman and a 65-year-old man, respectively), were each tetraplegic and anarthric as a result of a brainstem stroke. Both were enrolled in the BrainGate2 pilot clinical trial (see Methods). Neural signals were recorded using a 4 × 4 mm, 96-channel microelectrode array implanted in the dominant MI hand area (for S3, in November 2005, 5.3 years before the beginning of this study; for T2, in June 2011, 5 months before this study). Participants performed sessions on a near-weekly basis to carry out point-and-click actions of a computer cursor using decoded MI ensemble spiking signals7. Across four sessions in her sixth year post-implant (trial days 1952–1975), S3 used these neural signals to perform reach and grasp movements with either of two differently purposed right-handed robot arms. The DLR Light-Weight Robot III (German Aerospace Center, Oberpfaffenhofen, Germany; Fig. 1, left)10 is designed to be an assistive device that can reproduce complex arm and hand actions. The DEKA Arm System (DEKA Research and Development Corp., Manchester, NH; Fig. 1, right) is a prototype advanced upper-limb replacement for people with arm amputation11. T2 controlled the DEKA prosthetic limb on one session day (day 166). Both robots were operated under continuous user-driven neuronal ensemble control of arm endpoint (hand) velocity in 3D space; a simultaneously decoded neural state executed a hand action. S3 had used the DLR robot on multiple occasions over the prior year for algorithm development and interface testing, but she had no exposure to the DEKA arm before the sessions reported here. T2 had participated in three DEKA arm sessions for similar development and testing before the session reported here but had no other robotic arm experience.
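The control scheme just described, with continuous endpoint-velocity decoding running alongside a simultaneous discrete hand-state decode, can be summarized in a brief sketch. The following is a minimal illustration under assumed interfaces; the function names and the 100-ms bin width are hypothetical, not the study's actual software.

```python
# Minimal sketch of the shared control scheme described above: each bin of
# neural data yields (i) a continuous 3D endpoint-velocity command and (ii)
# a discrete hand-state decision. All names and the 100-ms bin width are
# illustrative assumptions, not the study's implementation.
def control_loop(neural_stream, velocity_decoder, state_classifier, robot):
    for spike_counts in neural_stream:            # one count vector per 100-ms bin
        vx, vy, vz = velocity_decoder.update(spike_counts)
        robot.set_endpoint_velocity(vx, vy, vz)   # continuous 3D endpoint control
        if state_classifier.predict(spike_counts) == "grasp":
            robot.trigger_hand_action()           # simultaneously decoded hand state
```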
Figure 1 Experimental setup and performance metrics. (a) Diagram showing an overhead view of the participant’s location at the table (grey rectangle), from which the targets (purple spheres) were elevated by a motor. The robotic arm was positioned to the right.
To decode movement intentions from neural activity, electrical potentials from each of the 96 channels were filtered to reveal extracellular action potentials (i.e., ‘unit’ activity). Unit threshold crossings (see Methods) were used to calibrate decoders that generated velocity and hand-state commands. Signals for reach were decoded using a Kalman filter12 to continuously update an estimate of the participant’s intended hand velocity. The Kalman filter was initialized during a single “open-loop” filter calibration block (<4 min) in which the participants were asked to imagine controlling the robotic arm as they watched it undergo a series of regular, pre-programmed movements while the accompanying neural activity was recorded. This open-loop filter was then iteratively updated during four to eight “closed-loop” calibration blocks in which the participant actively controlled the robot under visual feedback, with gradually decreasing levels of computer-imposed error attenuation (see Methods). To discriminate an intended hand state, a linear discriminant classifier was built on signals from the same recorded units while the participant imagined squeezing his or her hand8. On average, the decoder calibration procedure lasted ~31 minutes (range 20–48 minutes, exclusive of time between blocks).
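As a concrete illustration of the two decoding stages described above, the sketch below pairs a standard Kalman filter update for intended 3D velocity with a linear discriminant hand-state classifier. It is a minimal reconstruction from the description in the text, not the study's code; the matrix shapes, parameter values, synthetic calibration data, and the scikit-learn classifier choice are all assumptions.

```python
# Illustrative reconstruction (not the authors' code) of the two decoders
# described above. Matrices, shapes, and synthetic data are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class VelocityKalmanFilter:
    """Maps binned threshold-crossing counts to intended 3D hand velocity."""
    def __init__(self, A, W, H, Q):
        self.A, self.W = A, W    # velocity dynamics model and its noise covariance
        self.H, self.Q = H, Q    # neural observation (tuning) model and its noise
        self.x = np.zeros(A.shape[0])   # state: intended velocity (vx, vy, vz)
        self.P = np.eye(A.shape[0])     # state covariance

    def update(self, z):
        """One bin of spike counts z (n_units,) -> updated velocity estimate."""
        x_pred = self.A @ self.x                        # predict
        P_pred = self.A @ self.P @ self.A.T + self.W
        S = self.H @ P_pred @ self.H.T + self.Q         # innovation covariance
        K = P_pred @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = x_pred + K @ (z - self.H @ x_pred)     # correct with observation
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x

# Demo with synthetic data: 96 channels, 3D velocity state.
rng = np.random.default_rng(0)
kf = VelocityKalmanFilter(A=0.95 * np.eye(3), W=0.01 * np.eye(3),
                          H=rng.standard_normal((96, 3)), Q=np.eye(96))
velocity = kf.update(rng.poisson(5, size=96).astype(float))

# Hand-state classifier, calibrated on activity recorded while the participant
# imagined squeezing the hand (the labels here are synthetic placeholders).
grasp_classifier = LinearDiscriminantAnalysis()
grasp_classifier.fit(rng.poisson(5, size=(40, 96)),
                     np.repeat(["rest", "squeeze"], 20))
```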
After decoder calibration, we assessed whether each participant could use the robotic arm to reach for and grasp 6-cm-diameter foam ball targets, presented in 3D space one at a time by motorized levers (Fig. 1 and Supplementary Fig. 1b). Because the hand aperture was not much larger than the target (only 1.3× larger for DLR and 1.8× larger for DEKA) and hand orientation was not under user control, grasping a target required the participant to maneuver the arm within a narrow range of approach angles with the hand open while avoiding the target support rod below. Targets were mounted on flexible supports, so brushing them with the robotic arm displaced them. Together, these factors increased task difficulty beyond simple point-to-point movements and frequently required complex curved paths or corrective actions (Supplementary Movies 1–3). Trials were judged successful or unsuccessful by two independent visual inspections of video data (see Methods). A successful “touch” trial occurred when the participant contacted the target with the hand; a successful “grasp” trial occurred when the participant closed the hand while any part of the target, or the top of its supporting cone, was within the volume enclosed by the hand.
In the 3D reach-and-grasp task, S3 performed 158 trials across four sessions and T2 performed 45 trials in a single session (Table 1). S3 touched the target within the allotted time in 48.8% of the DLR trials and 69.2% of the DEKA trials, and T2 touched the target within the allotted time in 95.6% of trials (Supplementary Movies 1–3 and Supplementary Fig. 2). Of the successful touches, S3 grasped the target 43.6% (DLR) and 66.7% (DEKA) of the time, while T2 grasped the target 65.1% of the time. Across all trials, S3 grasped the target 21.3% (DLR) and 46.2% (DEKA) of the time, and T2 grasped the target 62.2% of the time. In all sessions from both participants, performance was significantly higher than expected by chance alone (Supplementary Fig. 3). For S3, times to touch were approximately the same for both robotic arms (Fig. 1, blue bars; median 6.2 ± 5.4 sec) and were comparable to times for T2 (6.1 ± 5.5 sec). Times for combined reach and grasp were similar for both participants (S3, 9.4 ± 6.2 sec; T2, 9.5 ± 5.5 sec), although times were about twice as long in the first DLR session.
Table 1 Summary of neurally-controlled robotic arm target acquisition trials
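The chance comparison itself (Supplementary Fig. 3) is not reproduced here, but a generic version of such a check can be sketched. Purely for illustration, assuming a 5% rate of acquiring a target by chance, T2's 28 grasps in 45 trials could be tested with a one-sided binomial test; the study's actual chance analysis may have differed.

```python
# Generic sketch of a chance comparison; the 5% chance rate is an assumed
# placeholder, not the study's estimate (see Supplementary Fig. 3).
from scipy.stats import binomtest

result = binomtest(k=28, n=45, p=0.05, alternative="greater")
print(f"P(>= 28 successes under 5% chance) = {result.pvalue:.3g}")
```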
To explore the utility of NISs for facilitating activities of daily living for people with paralysis, we also assessed how well S3 could control the DLR arm as an assistive device. We asked her to reach for and pick up a bottle of coffee, drink from it through a straw, and place it back on the table. For this task, we restricted velocity control to the 2D tabletop plane and used the simultaneously decoded grasp state as a sequentially activated trigger for one of four different hand actions, depending on the phase of the task and the position of the hand (see Methods). Because the 7.2-cm bottle diameter was 90% of the DLR hand aperture, grasping the bottle required even greater alignment precision than grasping the targets in the 3D task described above. Once triggered by the state switch, robust finger positioning and grasping of the object were achieved by automated joint impedance control. We familiarized the participant with the task for approximately 14 minutes, during which we adjusted the robot hand grip force and the participant learned the physical space in which the state decode and directional commands would be effective in moving the bottle close enough to drink from the straw. After this period, the participant successfully grasped the bottle, brought it to her mouth, drank coffee from it through the straw, and replaced the bottle on the table on 4 of 6 attempts over the next 8.5 minutes (Fig. 2, Supplementary Fig. 4 and Supplementary Movie 4). The two unsuccessful attempts (the second and fifth in the sequence) were aborted to prevent the arm from pushing the bottle off the table, because the hand aperture was not properly aligned with the bottle. This was the first time since her stroke, more than 14 years earlier, that the participant had been able to bring any drinking vessel to her mouth and drink from it solely of her own volition.
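The sequential state-trigger logic used in the drinking task can be illustrated with a small state machine. This is a hypothetical reconstruction from the description above: the four action names, the gating tests, and the interface are assumptions, not the study's implementation.

```python
# Hypothetical sketch of a sequentially activated grasp trigger: each decoded
# grasp state advances through one of four hand actions, gated by task phase
# and hand position. Action names and gating rules are illustrative only.
GRASP_ACTIONS = ("close_hand", "tilt_to_drink", "untilt", "open_hand")

class SequentialGraspTrigger:
    def __init__(self):
        self.phase = 0  # index of the next action in the sequence

    def on_decoded_grasp(self, hand_near_bottle, hand_near_mouth):
        """Called whenever the classifier decodes the grasp state."""
        action = GRASP_ACTIONS[self.phase]
        # Only allow each action where it makes sense in the workspace.
        if action in ("close_hand", "open_hand") and not hand_near_bottle:
            return None   # ignore trigger: hand is not at the bottle
        if action in ("tilt_to_drink", "untilt") and not hand_near_mouth:
            return None   # ignore trigger: hand is not at the mouth
        self.phase = (self.phase + 1) % len(GRASP_ACTIONS)
        return action
```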
Figure 2 Participant S3 drinking from a bottle using the DLR robotic arm. (a) Four sequential images from the first successful trial showing participant S3 using the robotic arm to grasp the bottle, bring it towards her mouth, and drink coffee from it through a straw.
The use of NISs to restore functional movement will become practical only if chronically implanted sensors function for many years. It is thus notable that S3’s reach and grasp control was achieved using signals from an intracortical array implanted over 5 years earlier. This result, supported by multiple demonstrations of successful chronic recording in animals13–15, suggests that the goal of creating long-term intracortical interfaces is feasible. At the time of this study, S3 had lower recorded spike amplitudes and fewer channels contributing signals to the filter than during her first years of recording. Nevertheless, the units included in the Kalman filters were sufficiently directionally tuned and modulated to allow neural control of reach and grasp (Fig. 3 and Supplementary Figs. 5 and 6). S3 sometimes experiences stereotypic limb flexion; these movements did not appear to contribute in any way to her multidimensional reach and grasp control, and the neural signals used for this control exhibited the waveform shapes and timing characteristics of unit spiking (Fig. 3 and Supplementary Fig. 7). Furthermore, T2 produced no consistent volitional movement during task performance, which further substantiates the intracortical origin of his neural control.
Figure 3 Examples of neural signals from three sessions and two participants: a 3D reach and grasp session from S3 (a–c) and T2 (d–f), and the 2D drinking session from S3 (g–i). (a,d,g) Average waveforms (thick black lines) ± 2 s.d.
We have shown that two people with no functional arm control as a result of brainstem stroke used the neuronal ensemble activity generated by intended arm and hand movements to make point-to-point reaches and grasps with a robotic arm across a natural human arm workspace. Moreover, S3 used these neurally-driven commands to perform an everyday task. These findings extend our previous demonstrations of point-and-click neural control by people with tetraplegia7,16 and show that neural spiking activity recorded from a small MI intracortical array contains sufficient information to allow people with longstanding tetraplegia to perform even more complex manual skills. This result suggests the feasibility of using cortically-driven commands to restore lost arm function to people with paralysis. In addition, we have demonstrated considerably more complex robotic control than previously achieved by able-bodied non-human primates (NHPs)9,17,18. Both participants operated human-scale arms in a 3D target task that required curved trajectories and precise alignments over a volume 1.4 to 7.7 times greater than has been used with NHPs. The drinking task, while only 2D + state control, required both careful positioning and correctly timed hand-state commands to accomplish the series of actions necessary to retrieve the bottle, drink from it, and return it to the table.
Both participants performed these multidimensional actions after longstanding paralysis. For S3, signals were adequate to achieve control 14 years and 11 months after her stroke, showing that MI neuronal ensemble activity remains functionally engaged despite subcortical damage to descending motor pathways. Future clinical research will be needed to establish whether more signals19–22, signals from additional or other areas2,23–25, better decoders, explicit participant training, or other advances (see Supplementary Materials) will provide more complex, flexible, independent, and natural control. In addition to the robotic assistive device shown here, MI signals might also be used by people with paralysis to reanimate paralyzed muscles through functional electrical stimulation (FES)27–29, or by people with limb loss to control prosthetic limbs. Whether MI signals are suitable for people with limb loss to control an advanced prosthetic arm (such as the device shown here) remains to be tested and compared with other control strategies11,26. Although further developments might enable people with tetraplegia to achieve rapid, dexterous actions under neural control, at present, for people who have no or limited volitional movement of their own arm, even the basic reach and grasp actions demonstrated here could be substantially liberating, restoring the ability to eat and drink independently.