Functional electrical stimulation (FES), the coordinated electrical activation of multiple muscles, has been used to restore arm and hand function in people with paralysis. User interfaces for such systems typically derive commands from mechanically unrelated parts of the body with retained volitional control, and are unnatural and unable to simultaneously command the various joints of the arm. Neural interface systems, based on spiking intracortical signals recorded from the arm area of motor cortex, have shown the ability to control computer cursors, robotic arms and individual muscles in intact non-human primates. Such neural interface systems may thus offer a more natural source of commands for restoring dexterous movements via FES. However, the ability to use decoded neural signals to control the complex mechanical dynamics of a reanimated human limb, rather than the kinematics of a computer mouse, has not been demonstrated. This study demonstrates the ability of an individual with long-standing tetraplegia to use cortical neuron recordings to command the real-time movements of a simulated dynamic arm. This virtual arm replicates the dynamics associated with arm mass and muscle contractile properties, as well as those of an FES feedback controller that converts user commands into the required muscle activation patterns. An individual with long-standing tetraplegia was thus able to control a virtual, two-joint, dynamic arm in real time using commands derived from an existing human intracortical interface technology. These results show the feasibility of combining such an intracortical interface with existing FES systems to provide a high-performance, natural system for restoring arm and hand function in individuals with extensive paralysis.
Functional electrical stimulation (FES) of the neuro-motor system is a rehabilitation technology for the restoration of function that has been implemented in more than 300 people with paralysis. In particular, FES activation of paralysed upper extremity muscles has been used to achieve useful hand opening and closing [2, 3] and shoulder and elbow function [4–6] in individuals with partial arm paralysis due to C5-C6 spinal cord injury, and to achieve elbow extension in the paretic upper extremity following stroke.
A major obstacle to extending the benefits of such a neuroprosthesis to individuals with complete arm paralysis is providing a straightforward, intuitive interface through which the user can command desired arm movements. Existing control strategies for neuroprostheses are often based on measuring the activity in or the motion produced by muscles with retained voluntary control (such as the neck or face), or on other surrogate commands such as sip-and-puff, eye-tracking, or head-orientation devices [9, 10]. The devices needed to implement these approaches must be consistently and accurately applied to the user on a daily basis by a caregiver. Furthermore, the non-intuitive mapping between the retained voluntary actions and the desired FES-restored actions places a high cognitive burden on the user. This limits control to a serially-commanded subset of the numerous joint motions required for most functional motions. An interface capable of extracting command signals that are naturally related to the intended movement could potentially provide control for many different joint motions simultaneously and, when used with an appropriate FES system, could effectively bypass the neural lesion by directly activating the intact but disconnected peripheral motor system. Such a system would facilitate the deployment of much more sophisticated neuroprostheses that provide the user with significantly enhanced functional abilities.
A promising approach for providing natural, simultaneous control of multiple joints is based upon recordings obtained from the motor cortex. Work in human and non-human primates has shown that movement intent signals for multidimensional movements are present in the motor cortex and accessible through intracortical multielectrode recordings [12,13], as are higher-level signals related to movement goals [14,15]. Brain control of robots and of electrically stimulated wrist flexion and extension motions has also been shown in monkeys [17, 18]. Recent work in pilot clinical trials involving four people with tetraplegia [19,20] has also demonstrated the ability to record and decode information related to movement intent from the human motor cortex. In participants with tetraplegia, motor cortical recordings have provided continuous control of a computer cursor for the execution of functional tasks such as drawing simple figures, playing video games, opening and closing prosthetic hands, making rudimentary movements with a robotic arm, and using point-and-click interfaces.
However, controlling a limb reanimated by FES presents a number of additional, significant challenges. The neuroprosthesis and its associated user command interface must take into account the dynamics and nonlinearities introduced by the mass of the arm segments, muscle contractile dynamics, inter-joint interactions resulting from multi-articular muscles, and the redundant nature of the muscle set. Our long-term approach is to combine a neural interface system that extracts abstract movement commands from the motor cortex of an individual with tetraplegia with a peripheral FES system that then calculates and applies the muscle stimulation patterns needed to achieve these movements. This controller reduces the cognitive burden on the user and can compensate for changes in muscle forces and external loads. Other groups are investigating controlling muscle activations directly from cortical recordings, but the number of muscle activations that must be controlled to obtain coordinated movements is high. Furthermore, the individuals with paralysis who will eventually use such systems lack other neural control mechanisms, such as reflexes and proprioceptive feedback pathways, that are critical to the control of arm movement.
In this study we used a simulated dynamic arm that models the dynamic and nonlinear properties of a real paralysed arm that has been reanimated with FES. The ability of an individual with tetraplegia to use signals extracted from a small ensemble of neurons in the arm area of motor cortex (MI arm area) to accurately command the real-time movements of the virtual arm was assessed. The dynamic arm simulator (DAS) was specifically designed to allow the execution of “user-in-the-loop” experiments incorporating both simulated sensor-based feedback of arm movements and user visual feedback into the control scheme. The combination of a fully implanted FES system together with a fully implanted cortical device does not yet exist, and the combination of a fully implanted FES system and a percutaneous cortical device currently poses an unjustifiable infection risk to the participant. The use of surface FES for this purpose would not provide the kind of control needed to evaluate the performance of the BCI and would confound the results. In addition, the use of FES is not permitted under the current investigational device exemption. These experiments allowed us to evaluate the “closed-loop” behaviour of an actual intracortical neural interface system and a realistic, but simulated, FES system. The goal of this study was thus to evaluate the feasibility of using this technology for restoring upper limb function in humans with tetraplegia, prior to actually implementing an implanted FES system and cortical device.
Clinical trial sessions of the BrainGate Neural Interface System were conducted with a participant (designated as S3) with tetraplegia (i.e., paralysis of both arms and both legs). The participant (S3) had longstanding (>11 years) tetraplegia and anarthria secondary to a brainstem stroke. S3 is currently implanted with the BrainGate system and is active in this ongoing (BrainGate2) pilot clinical trial. Participant S3 is a 56-year-old woman who had thrombosis of the basilar artery and extensive pontine infarction 9 years prior to BrainGate trial recruitment (and 11 years prior to the studies reported here). She is unable to speak and has no functional movement of her arms or legs. Slow (several seconds) elbow flexor spasms of the arms over a limited range of motion occur intermittently. Voluntary bilateral (but not unilateral or independent) flexion/extension about the elbow can be commanded over a limited range of motion. Somatosensory function of the arms and legs is intact. She is right-hand dominant, and the intracortical array was placed in the left precentral gyrus in the region of the arm representation [19, 22]. The research is conducted under an Investigational Device Exemption (IDE) and approval from the local Institutional Review Board§.
To simulate FES control of muscles for arm movement, we created a real-time, computational model of the upper limb as part of a “virtual” dynamic arm simulator. The upper extremity model used in this study was based on a five degree-of-freedom, real-time model that has been previously described, but was modified to reduce the number of degrees of freedom (DOF). The current model is a 2-DOF model with six muscles that allows control of the shoulder and elbow joints in the horizontal plane. Control of a 2-DOF system with realistic arm dynamics provided an appropriate step up for the user from a strictly kinematic 2-DOF system, without presenting her with a completely unfamiliar task. Control of higher degrees of freedom with dynamics will follow in future reports. This model is appropriate to represent control of reaching movements along a desk or table top where the arm’s weight is supported against gravity by a typical wheelchair-mounted mobile arm support. A feedback controller that would be realistic for implementation in an actual FES system was also included in the simulation. This controller drove the desired movements of the simulated arm by automatically calculating the muscle activations needed (a) to perform the commanded movements and (b) to correct for arm trajectory errors arising from external perturbations or from changes in muscle performance due to fatigue. This controller accounted for the non-linear aspects of skeletal mass dynamics and muscle contractile properties, and distributed the required forces across redundant mono- and bi-articular muscles. The model and controller have been further described elsewhere.
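As a rough illustration of the kind of skeletal dynamics such a simulator must integrate at each tick, the sketch below advances a planar two-link arm by one semi-implicit Euler step. The simplified mass distribution (point masses at the segment tips) and all parameter values are hypothetical, not those of the published model; gravity is absent because the task is confined to the horizontal plane.

```python
import numpy as np

# Hypothetical segment parameters (illustrative only, not the study's values).
M1, M2 = 2.0, 1.5        # upper-arm, forearm mass (kg)
L1, L2 = 0.30, 0.33      # segment lengths (m)

def step(q, dq, tau, dt=0.001):
    """One semi-implicit Euler step of planar two-link dynamics
    (point masses at segment tips; no gravity in the horizontal plane)."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    # Configuration-dependent inertia matrix M(q).
    m11 = (M1 + M2) * L1**2 + M2 * L2**2 + 2 * M2 * L1 * L2 * c2
    m12 = M2 * L2**2 + M2 * L1 * L2 * c2
    m22 = M2 * L2**2
    Minv = np.linalg.inv(np.array([[m11, m12], [m12, m22]]))
    # Coriolis/centripetal torques coupling the two joints.
    h = M2 * L1 * L2 * s2
    C = np.array([-h * dq[1] * (2 * dq[0] + dq[1]), h * dq[0]**2])
    ddq = Minv @ (tau - C)
    dq = dq + dt * ddq
    q = q + dt * dq
    return q, dq
```

In the actual simulator the joint torques `tau` are produced by six muscle models driven by the FES controller, rather than applied directly.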
Figure 1 shows a schematic overview of the experimental setup. Discriminated action potentials (spikes) recorded from MI in participant S3 were decoded into endpoint velocity commands and then passed to the FES controller, which calculated the muscle activations necessary to achieve the desired movement. The muscle activations were then passed to the real-time arm model, which computed the limb movements that would result from those activations. Finally, the simulated arm movements were visually displayed to the user via a computer monitor placed in front of the participant. A “click” signal, a binary state that was also decoded from the cortical signals, was generated by the participant to indicate selection of a target upon arrival at the goal position.
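The data flow just described can be caricatured as one control tick repeated in a loop. Every stage below is a deliberately crude stand-in (a linear decode, a proportional controller, a point-mass arm); the actual decoder, FES controller, and musculoskeletal model are far richer, and all names and values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 40)) * 0.01       # toy decode weights: 40 units -> 2D velocity

def decode(spike_counts):
    """Stand-in for the neural decoder: linear map to an endpoint-velocity command."""
    return W @ spike_counts

def fes_controller(vel_cmd, endpoint_vel, kp=5.0):
    """Stand-in for the FES controller: proportional tracking of the commanded
    velocity. The real stage distributes force across six redundant muscles."""
    return kp * (vel_cmd - endpoint_vel)  # net endpoint force

def arm_step(pos, vel, force, mass=2.0, dt=0.01):
    """Stand-in for the arm model: a point mass integrated one step."""
    vel = vel + dt * force / mass
    pos = pos + dt * vel
    return pos, vel

pos, vel = np.zeros(2), np.zeros(2)
for _ in range(200):                       # ~2 s of simulated closed loop
    counts = rng.poisson(3.0, size=40)     # binned spikes for one tick
    pos, vel = arm_step(pos, vel, fes_controller(decode(counts), vel))
# `pos` is what would be drawn on screen, closing the visual feedback loop.
```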
Movement commands decoded from neural spiking signals obtained from a multielectrode array implanted in the arm area of primary motor cortex (MI) [19,20] were used to command the planar reaching movements of the virtual arm. Training of the Kalman filter decoder was accomplished in the first several minutes of each experimental session by asking the participant to imagine moving her arm to mimic a set of different motions that were visually presented using an animation of arm movement on the computer screen. This phase was “open loop”; i.e. the animated arm was not under her control, but was simply displaying a predetermined tracking movement at two speeds. The displayed movements, together with simultaneously obtained neural recordings, were used to train the Kalman filter-based decoder, as described in Kim et al. The number of units sorted and used for decoder training and subsequent closed-loop trials varied from day to day (34, 46, and 38 units in the three sessions respectively), and thus the Kalman filter required retraining at the start of each session.
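A minimal version of the least-squares fitting and predict/update cycle of such a velocity Kalman filter decoder might look as follows. This follows the standard formulation (state = intended 2D velocity, observations = binned unit counts), not the exact implementation used in the trial; regularisation and binning details are omitted.

```python
import numpy as np

def train_kalman(X, Z):
    """Fit Kalman decoder parameters by least squares.
    X: states x time (e.g. 2 x T cued velocities), Z: units x time counts."""
    X0, X1 = X[:, :-1], X[:, 1:]
    A = X1 @ X0.T @ np.linalg.inv(X0 @ X0.T)             # state transition
    W = (X1 - A @ X0) @ (X1 - A @ X0).T / (X.shape[1] - 1)  # state noise cov
    H = Z @ X.T @ np.linalg.inv(X @ X.T)                 # observation model
    Q = (Z - H @ X) @ (Z - H @ X).T / X.shape[1]         # observation noise cov
    return A, W, H, Q

def kalman_step(x, P, z, A, W, H, Q):
    """One predict/update cycle; returns the decoded velocity and covariance."""
    x, P = A @ x, A @ P @ A.T + W                        # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)         # Kalman gain
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P
```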
Following training, the participant’s ability to control the dynamic virtual arm was assessed and compared with her ability to control the arm without dynamics. The participant was seated in a wheelchair and viewed a 0.45m (diagonal) monitor from a distance of approximately 0.5–0.6m that displayed a computer-generated arm animation that moved only in the horizontal plane. The camera viewpoint was set to look down at the arm from above, so the arm appeared to move along the plane of the screen (see example in figure 1). This task was similar (except for the inclusion of arm and muscle dynamics) to her previous use of the BrainGate implant in controlling a 2D kinematic cursor on the screen, at which she was well-practiced. A 2D velocity command was decoded from the available units in real time and multiplied by a gain factor before being passed to the FES controller. This gain factor determined the maximum possible movement speed and was set at the highest value for which the participant felt she had adequate control. Once a suitable value had been identified, all trials were carried out with that gain. In each trial, lasting up to 9 minutes, targets of varying sizes appeared at pseudorandom locations on the screen and the user was instructed to move the extended virtual index finger towards the target and to then execute a finger click while over the target within the allowed time (45s). The movement was quantified by measuring the path efficiency (shortest possible straight-line endpoint path length divided by path length of actual participant-commanded path, expressed as a percentage), average speed (actual distance moved divided by the time taken), and throughput (also known as information transfer rate, and given by the index of difficulty divided by the movement time).
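The path-efficiency and throughput measures can be computed directly from their definitions; a minimal sketch follows. The Shannon formulation of the index of difficulty, log2(D/W + 1), is assumed here, since the exact formulation of the cited method is not reproduced in this text.

```python
import numpy as np

def path_efficiency(path):
    """Straight-line endpoint distance divided by the travelled path
    length, expressed as a percentage (100% = perfectly straight)."""
    path = np.asarray(path, dtype=float)
    travelled = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    straight = np.linalg.norm(path[-1] - path[0])
    return 100.0 * straight / travelled

def throughput(distance, target_width, movement_time):
    """Fitts-style throughput in bit/s, using the Shannon index of
    difficulty ID = log2(D/W + 1) divided by the movement time."""
    return np.log2(distance / target_width + 1.0) / movement_time
```

Average speed is simply the travelled path length divided by the trial duration.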
Using the virtual FES system, the participant was able to produce smooth, stable point-to-point movements, and by means of a state decoder she could flex the animated index finger to indicate a selection.
The performance of the participant in controlling the virtual arm was measured in each of three sessions separated by 8 and 23 days respectively (day 1049, day 1057 and day 1080 post-implant). Note that the participant had also tried the simulator during three previous sessions that were used to determine various parameters of the decoder and controller. The first reported data point was thus the fourth time the participant had used the dynamic arm simulator. The duration of each session including setup, decoder training and task completion was typically 3–4 hours.
A 120s segment of recorded data is shown in Figure 2. Cortical spike data recorded by the 96-electrode BrainGate system (shown as spike occurrences in Figure 2, panel A) were decoded into velocity commands (panel B) and then transformed by the FES controller into muscle activations (panel C) that ultimately led to simulated arm movements with the goal of moving the tip of the index finger to the targets (panel D). The FES controller calculated the muscle activations required to achieve the desired trajectory based on the decoded commands and feedback regarding the current state of the limbs. Muscle activation levels and the modulation of these activations were relatively small in several of the muscles (e.g., BR and BL) because there was no external load on the limb and no external disturbances as it moved in the horizontal plane, and because the user-commanded movements were slow. The activations were, however, not zero even for the stationary limb because the FES controller uses low-level muscle co-contraction to improve the stability of the arm and because the arm model included passive elastic properties of muscles that had to be countered by active muscle force. A video of the same segment of data shown in Figure 2 is available in the Supplementary Material at http://iopscience.iop.org/1741-2552/.
In each session, two conditions were assessed: NO DAS (where the participant controlled the movement of the endpoint of the arm with no arm dynamics included, essentially equivalent to 2D cursor control), and DAS (where the participant controlled the movements of the Dynamic Arm Simulator, which included arm dynamics and musculoskeletal nonlinearities). In each test session the participant was able to reach targets of various sizes at any reachable location on the screen and to then select them by generating a separate “click” (a decoded state change related to imagined hand grasp). Performance varied across the three days, with the percent success for acquiring the target (i.e., moving the cursor within the target and then generating a “click”) ranging from 21% to 97%. The success rate for reaching the target while controlling arm dynamics, however, remained above 70% for two out of the three days (Table 1). No significant differences were found (Kruskal-Wallis test for non-normally distributed data) between the DAS and the NO-DAS cases for each of the sessions in terms of path efficiency (p=0.34, 0.69, 0.11) and throughput (p=0.056, 0.27, 0.48) (Figure 3).
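For reference, the Kruskal-Wallis H statistic underlying these comparisons can be computed from pooled ranks as sketched below (tie handling is omitted for brevity; in practice a library routine such as scipy.stats.kruskal, which also returns the p-value, would be used).

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic from pooled ranks.
    H = 12 / (N(N+1)) * sum(n_i * rbar_i^2) - 3(N+1).
    Tie correction omitted for brevity."""
    data = np.concatenate(groups)
    ranks = np.empty(len(data))
    ranks[np.argsort(data)] = np.arange(1, len(data) + 1)  # rank the pooled data
    N = len(data)
    h, start = 0.0, 0
    for g in groups:                     # mean rank of each group, weighted by size
        r = ranks[start:start + len(g)]
        h += len(g) * r.mean() ** 2
        start += len(g)
    return 12.0 / (N * (N + 1)) * h - 3.0 * (N + 1)
```

For two perfectly interleaved samples such as [1, 3, 5] and [2, 4, 6], H is small (3/7), consistent with no group difference.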
Our participant was able to control the complex dynamics of the virtual arm immediately after training the Kalman filter decoder in a manner comparable to previously demonstrated non-dynamic cursor control tasks. This was achieved even though the Kalman filter decoder was characterised using the standard decoding approach, i.e. using data generated with no arm dynamics present. This means that the combination of the FES controller and appropriate user commands was able to compensate for the musculoskeletal dynamics and nonlinearities of the arm to the point where these had no measurable impact on the quality of the commanded movements.
Movement speeds were generally slower when using the FES dynamic simulation (DAS) compared to non-dynamic cursor control (NO DAS), but this resulted primarily from a decision to train the controller for accuracy at the expense of speed by using slow movements typical for FES systems, rather than from an inability of the user to generate more rapid commands. Since the NO DAS condition has no inertia and does not rely on muscle activations to generate movement, its speeds are not limited in quite the same way. There is still a gain factor to be chosen between the decoded command and the cursor velocity, however, and the same interaction between accuracy and speed would be expected. The decision was made to simulate a lower-speed interface that would most likely be of benefit to the more impaired user.
No significant differences between the DAS and the NO DAS conditions were seen for information throughput, suggesting that the addition of arm dynamics did not affect the information transfer rate of the BCI. The average throughput of less than 0.2 bit/s that we observed is significantly lower than that reported for comparable nonhuman primate studies (e.g., 1.1–3.7 bit/s depending on calculation method). It may be tempting to think that a human participant, who understands the objectives of the brain-computer interface and is able to act on precise instructions, would outperform nonhuman primate participants in similar tasks. However, this has not been demonstrated in BrainGate participants to date, and the information transfer rate in this study was indirectly limited because of the slow speed of movement expected in an FES system, rather than by the performance of the BCI. The more relevant comparison, therefore, is between NO DAS (which represents a well-practiced, baseline task for this participant) and the DAS (the unique condition that we examined in this study), where no difference was observed.
No in-session differences were seen in path efficiency measures between the DAS and the NO DAS conditions. The inertial properties of the simulated limb and the delay associated with muscle dynamics will tend to act as a low-pass filter for the movement command and may in fact smooth the movement compared to the kinematic task.
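The low-pass analogy can be made concrete with a first-order filter acting on the command signal; the time constant below is purely illustrative and is not derived from the simulator's muscle or inertial parameters.

```python
import numpy as np

def low_pass(cmd, tau=0.1, dt=0.01):
    """First-order low-pass filter: a rough analogy for how limb inertia
    and muscle activation dynamics smooth a noisy velocity command."""
    a = dt / (tau + dt)                   # discrete smoothing coefficient
    out = np.empty(len(cmd), dtype=float)
    y = float(cmd[0])
    for i, c in enumerate(cmd):
        y = y + a * (c - y)               # exponential approach to the input
        out[i] = y
    return out
```

A jittery decoded command passed through such a filter yields a smoother trajectory, which is one plausible reason path efficiency did not degrade under the DAS condition.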
Our study demonstrates that a person with brainstem stroke who has not used her arms in a functional manner for more than a decade can readily achieve a usable level of control over a non-linear, dynamic system using commands extracted from an intracortical recording system. The participant demonstrated control not only of 2D end-point positioning of the arm but also of a discrete state change (i.e., the click), simulating the execution of a functional task involving finger flexion at the target location. The participant had previous experience in the control of kinematic tasks such as cursor positioning with the BrainGate system, but had never used it to control a system with complex non-linear dynamics. The effects of training were not comprehensively evaluated in this study, but given her experience with the kinematic task and the similarity of performance with and without the added dynamics, it seems unlikely that additional training would further improve performance.
These results suggest that an individual with a simple FES system that includes joint motion feedback (implemented using implanted or body-worn motion sensors) and whose arm is supported against gravity (i.e. by a mobile arm support) would be able to perform functional reaching movements to targets and to then use the state command to elicit grasp actions. This represents a significant advance towards realising the goal of commanding multi-joint movements of an FES-reanimated arm in a coordinated manner using natural commands extracted from brain recordings. For this goal to be fully realised, fully implanted and wireless BCI devices with adequate performance will need to reach the human implant standards that have already been achieved by FES systems. Several such development efforts are already ongoing [26, 27], and suitable devices will hopefully become available in the next five years.
This work was funded by the United States National Institutes of Health (NICHD, NCMRR, with additional support from NINDS and NIDCD). Also funded in part by the Rehabilitation Research and Development Service, Department of Veterans Affairs, Providence, RI (LRH, JDS, JPD); the MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke (LRH); the Doris Duke Charitable Foundation (LRH). A pilot clinical study of the BrainGate Neural Interface System was initiated by Cyberkinetics Neurotechnology Systems, Inc. under a Food and Drug Administration (FDA) Investigational Device Exemption (IDE) and with Institutional Review Board (IRB) approvals; the studies began in May 2004. The BrainGate/BrainGate2 pilot clinical trials are now directed by Massachusetts General Hospital. CAUTION: Investigational Device. Limited by Federal Law to Investigational Use. We thank participant S3 for her dedication to this research, and Katherine Centrella for technical assistance.
§(CAUTION: Investigational Device. Limited by U.S. Federal Law To Investigational Use.)