IEEE Int Conf Rehabil Robot. Author manuscript; available in PMC 2010 May 7.
PMCID: PMC2865680
NIHMSID: NIHMS184439

Usage of the ACT3D Robot in a Brain Machine Interface for Hand Opening and Closing in Stroke Survivors

Abstract

At six months after brain injury, about 65% of stroke survivors have been shown to be unable to incorporate the affected hand into activities of daily living (ADL). Combining a reliable brain-machine interface (BMI) with neural electrical stimulation (NES) is a possible solution for the restoration of hand function in severely impaired hemiparetic stroke survivors. However, discoordination, i.e. the abnormal coupling between adjacent joints, is expected to reduce the performance of BMI algorithms. In this study, we tested whether active support from the ACT3D robot can improve the performance of two BMI algorithms in separating the subject's intention to open or close the impaired hand during reach. An improvement in recognition rate was obtained in 4 chronic hemiparetic stroke subjects when support from the robot was available. Further analysis of one subject suggests that this improvement is related to quantitative changes in cortical activity. These results suggest that the ACT3D robot can be used to train severely impaired stroke subjects to use a BMI-controlled NES device.

I. Introduction

The ACT3D robot is a 3D force-controlled robot equipped with a six-degree-of-freedom load cell and an instrumented gimbal at the end effector. The robot can increase or decrease the apparent weight of the tested arm in a well-controlled manner while allowing the subject's upper limb to move freely in 3D space, and it has been used successfully to increase the reaching workspace of the upper limb in severely impaired chronic stroke survivors [1, 2]. However, hand function in most of these subjects remains impaired, so the use of the paretic arm during activities of daily living (ADL) is still very limited. The combination of a brain-machine interface (BMI) with neural electrical stimulation (NES) is a possible solution for the restoration of hand function in severely impaired stroke survivors. Such a BMI should reliably detect the subject's intention to perform simple hand tasks, such as hand opening and closing. Although BMI algorithms for detecting hand opening and closing already exist, none of them target stroke subjects; instead, they usually target locked-in patients and typically depend on teaching the subject to keep unrelated mental imagery or movements in mind. Most stroke subjects, in contrast, still have some volitional control of hand muscles, but this volitional control results in abnormal muscle co-activation patterns that parallel the discoordination of the upper limb following stroke. Therefore, detection of intention in stroke survivors is believed to be more difficult than in healthy subjects or in locked-in patients.

More than 50% of stroke patients are left with a residual motor deficit, termed discoordination, reflected especially as the loss of independent joint control associated with abnormal muscle co-activation patterns in the paretic limb. Quantitative investigation of these stereotypic movement patterns has revealed an abnormal linkage between the activation of shoulder abduction (SABD) and elbow flexion (EF) [3-6] that closely parallels the abnormal muscle co-activation patterns reported previously [7]. Recently, results from our lab have shown that, when the paretic arm is supported, a stroke subject can significantly decrease the coupling between SABD and EF because of the reduced need for activation of the shoulder abductors [1, 2]. This robot-induced decoupling between shoulder and elbow in the paretic arm of stroke subjects may also improve the performance of BMI algorithms. The underlying mechanism is postulated to be related to changes in cortical activity induced by the use of the ACT3D robot.

Changes in cortical activity after stroke have been assessed with microelectrode stimulation techniques in animal models and with different functional neuroimaging modalities, such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and transcranial magnetic stimulation (TMS). Cortical activity can also be reconstructed from electroencephalography (EEG) and/or magnetoencephalography (MEG). In this study, we chose EEG to image the cortical activity related to the reaching and hand-movement tasks, because its millisecond-level time resolution allows us to study changes in cortical activity during the different phases of the movement task. Furthermore, simultaneous measurement of EEG during the performance of reaching and hand movements is more robust than with the other imaging modalities.

This study is a first attempt to explore the possible use of the ACT3D robot in the restoration of hand function in severely impaired chronic stroke subjects. Cortical imaging based on EEG measurements was combined with robot-generated haptic environments to understand the underlying mechanisms associated with hand impairment following stroke. The results of this study provide evidence that the ACT3D robot can be an effective tool in the study of neurorehabilitation of hand function in chronic stroke survivors.

II. Method

A. Subjects

We tested the performance of two BMI algorithms on the impaired arm of four chronic stroke subjects (see Table 1 for subject information). All subjects provided written consent prior to participation. The study was approved by the Institutional Review Board of Northwestern University and complied with the principles of the Declaration of Helsinki.

TABLE 1
Stroke Subject Information

B. Experimental Setup and Protocol

Participants sat in a Biodex chair that completely supported the trunk. The trunk was restrained to the back of the chair with straps crossing the chest and abdomen to prevent trunk and pelvis motion during the experiment. The ACT3D robot was fixed in position relative to the chair, and the subject's forearm was strapped into a forearm-hand orthosis attached to the end effector of the robot. Initially, the tested arm was positioned with a 45° shoulder flexion angle, 75° shoulder abduction angle, and 90° elbow flexion angle (see figure 1). This position is referred to as the 'home position' in the rest of the paper.

Fig 1
Experimental setup. The tested arm is required to move from the 'home position' (red dot) to the 'target position' (blue dot) at a comfortable speed while either opening or closing the hand.

At the start of the experiment, the subject's limb lengths were measured and entered into the software in order to scale the OpenGL-rendered graphical representation of the limb (an avatar). The 'target position' was then set as far as the tip of the hand could reach based on the measured segment lengths (i.e., the reduction of workspace due to discoordination was not taken into account), corresponding to a configuration of 95° shoulder-flexion angle, 95° shoulder-abduction angle, and 0° elbow-flexion angle (see figure 1).
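To make the target-placement step concrete, the sketch below computes a fingertip location from measured segment lengths and joint angles under a deliberately simplified top-down (planar) arm model. The function name, the angle conventions, and the example segment lengths are illustrative assumptions, not the conventions actually used by the ACT3D software.

```python
import numpy as np

def planar_fingertip(shoulder_xy, upper_arm_len, forearm_hand_len,
                     horizontal_flexion_deg, elbow_flexion_deg):
    """Top-down (planar) two-link forward kinematics.

    Simplification for illustration only: with the arm supported near the
    horizontal plane, the fingertip location is approximated from the
    horizontal shoulder-flexion angle and the elbow-flexion angle. The
    angle conventions used here are assumptions, not the paper's.
    """
    phi = np.deg2rad(horizontal_flexion_deg)       # upper-arm azimuth
    # 0 deg elbow flexion = forearm fully extended (collinear with upper arm)
    psi = phi - np.deg2rad(elbow_flexion_deg)      # forearm azimuth
    elbow = shoulder_xy + upper_arm_len * np.array([np.cos(phi), np.sin(phi)])
    tip = elbow + forearm_hand_len * np.array([np.cos(psi), np.sin(psi)])
    return tip

# Example: target set with the elbow fully extended (0 deg flexion), so the
# target lies one full arm length from the shoulder along the reach direction.
# Segment lengths (in meters) are placeholders.
target = planar_fingertip(np.zeros(2), 0.30, 0.45, 95.0, 0.0)
```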

At the beginning of a trial, the subject was instructed to move the paretic arm into the home position, which was presented as a small red sphere on the screen, and to stay there for 3 seconds to provide baseline EEG signals for analysis. (These 3 seconds are called the 'preparing phase' in the rest of the paper.) During this phase, the target position appeared as a blue or green sphere, indicating a hand-opening or hand-closing trial, respectively. Subjects were instructed to relax on the haptic table in the home position while the home target was visible. The home target then disappeared, indicating that the subject should reach at a comfortable speed for the target while opening or closing the hand within 2 seconds. During the reaching phase the subject was required to maintain a position above the table, which was monitored through the haptic feedback from the robot throughout the trial; auditory feedback was provided if the subject touched the haptic table.

The movement tasks were performed under two conditions: a gravity condition, in which the subject performed the reaching task while actively supporting the weight of the paretic limb, and a non-gravity condition, in which the robot provided full support of the arm weight. In some cases, especially while supporting the weight of the paretic arm, subjects were unable to reach the target because of the expression of the abnormal flexion synergy; in these cases they were instructed to perform the task to the best of their ability. After the target position was reached, or after the subject had spent more than 2 seconds attempting to reach it, the target changed to yellow. (The 1 to 2 second window from home-target disappearance to target color change is called the 'reaching phase' throughout the rest of the paper.) Upon seeing the yellow target, the subject was instructed to concentrate on either opening or closing the hand while keeping the arm above the table for 1 to 2 seconds. (This 1 to 2 second window is referred to as the 'holding phase'.) The sequence of the whole movement task is shown in figure 2.

Fig. 2
The sequence of movement tasks.

The avatar representing the tested limb was displayed continuously on a computer screen in front of the subject to provide visual feedback of the arm configuration and target locations during the preparing and reaching phases; during the holding phase, the avatar was fixed at the target position. Figure 3 shows an example of the feedback screen with the target displayed. A dark gray graphical table was also shown to provide a visual ground. Similarly, the haptically rendered virtual table was present throughout the experiment to provide a point of reference for the subject during the movement and a resting surface during the preparing phase.

Fig 3
An example of the subject feedback screen.

In both conditions, a 5 V TTL pulse was generated by the robot concurrently with the disappearance of the home target; this pulse was used as a time marker to segment the EEG signals. For each target, four blocks of 30 trials (a total of 120 trials) were performed. To minimize fatigue, rest periods of 10–20 seconds between trials and 10–20 minutes between blocks were used. The hand-opening and hand-closing tasks were each performed under the two conditions (gravity and non-gravity) on two different days. During data collection, subjects were instructed to keep their gaze on the small red dot showing the tip of the tested hand in order to avoid eye blinks.

C. Data collection

We simultaneously collected force/moment signals, 2-channel EMG signals from the wrist extensor and flexor of the tested arm, and 160-channel EEG signals (Biosemi Active II, Amsterdam, The Netherlands) during each session. The EEG electrodes were mounted on a stretchable fabric cap based on the 10/20 system. Force/moment data were sampled at 500 Hz, and all other data were sampled at 1000 Hz.

D. Data analysis

1) EEG data preprocessing for detecting the subject's intention to open or close the hand

The EEG signals were first visually inspected in order to remove trials with artifacts, such as those resulting from eye blinks or sweating. Subsequently, a finite-difference surface Laplacian transformation [8] was applied to each EEG channel to reduce the spatial smearing caused by volume conduction and to increase the SNR [9]. This step required the elimination of the outermost electrodes, reducing the total channel count from 160 to 136 on the scalp. The TTL signal was used to align and segment the remaining EEG. Segmented signals (−1 s to 1 s, with 0 indicating the rising edge of the TTL signal) were then baseline corrected (baseline of −2.5 s to −2 s), re-referenced to the common average, and downsampled to 256 Hz. Further processing was conducted in MATLAB.
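The following Python sketch illustrates the preprocessing chain described above: a finite-difference (Hjorth-style) surface Laplacian, TTL-aligned segmentation, baseline correction, common-average re-referencing, and downsampling. The authors performed these steps in MATLAB; the array shapes, the neighbor map for the Laplacian, and the choice of resampler here are assumptions made only for illustration.

```python
import numpy as np
from scipy.signal import resample_poly

def hjorth_laplacian(eeg, neighbors):
    """Finite-difference (Hjorth-style) surface Laplacian: each channel minus
    the mean of its neighbors. `neighbors` maps channel index -> list of
    neighboring channel indices (a montage-dependent assumption; the paper
    uses the method of [8] and discards the outermost electrodes)."""
    lap = np.array(eeg, dtype=float, copy=True)
    for ch, nbrs in neighbors.items():
        lap[ch] = eeg[ch] - eeg[nbrs].mean(axis=0)
    return lap

def preprocess_trial(eeg, fs, ttl_sample, epoch=(-1.0, 1.0),
                     baseline=(-2.5, -2.0), fs_out=256):
    """Segment one trial of continuous EEG (channels x samples) around the
    TTL marker, baseline-correct, re-reference to the common average, and
    downsample. Window values follow the paper; the rest is a sketch."""
    e0, e1 = (ttl_sample + int(t * fs) for t in epoch)
    b0, b1 = (ttl_sample + int(t * fs) for t in baseline)
    seg = eeg[:, e0:e1] - eeg[:, b0:b1].mean(axis=1, keepdims=True)
    seg = seg - seg.mean(axis=0, keepdims=True)   # common-average reference
    return resample_poly(seg, int(fs_out), int(fs), axis=1)
```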

In the MATLAB environment, a modified time-frequency synthesized spatial patterns (TFSP) BMI algorithm [10] was used to detect the subject's intention to open or close the tested hand. The modifications applied in this study were that 1) the weight for each time-frequency grid was set to either 1 or 0 depending on the recognition rate achieved on the training set, and 2) in addition to the original nearest-neighbor (NN) classifier, a support vector classifier (SVC) was tested separately for its ability to detect intention [11].
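As a rough illustration of the binary time-frequency grid weighting, the sketch below scores each grid cell with a classifier trained on that cell alone and keeps only cells whose cross-validated accuracy exceeds a threshold. It uses simple per-channel band power as the grid feature and scikit-learn's 1-NN and SVC classifiers; it does not reproduce the spatial-pattern synthesis of the published TFSP algorithm [10], and the threshold and feature choices are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def tf_grid_features(epochs, fs, nperseg=64):
    """Log band power on a coarse time-frequency grid, per channel.

    epochs: (n_trials, n_channels, n_samples). This is a stand-in for the
    TFSP feature stage; the published algorithm synthesizes spatial patterns
    per grid, which is not reproduced here.
    """
    f, t, S = spectrogram(epochs, fs=fs, nperseg=nperseg, axis=-1)
    return np.log(S + 1e-12)   # (n_trials, n_channels, n_freqs, n_times)

def select_grids(X, y, clf, threshold=0.6, cv=5):
    """Binary (0/1) weight per time-frequency grid: keep a grid only if a
    classifier trained on that grid alone beats `threshold` accuracy."""
    n_trials, n_ch, n_f, n_t = X.shape
    keep = np.zeros((n_f, n_t), dtype=bool)
    for i in range(n_f):
        for j in range(n_t):
            acc = cross_val_score(clf, X[:, :, i, j], y, cv=cv).mean()
            keep[i, j] = acc > threshold
    return keep

# Usage sketch: epochs (trials x channels x samples), labels 0=open, 1=close.
# clf = KNeighborsClassifier(n_neighbors=1)    # or SVC(kernel='linear')
# X = tf_grid_features(epochs, fs=256)
# keep = select_grids(X, labels, clf)
# Xsel = X[:, :, keep].reshape(len(labels), -1)
# score = cross_val_score(clf, Xsel, labels, cv=5).mean()
```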

2) EEG data preprocessing for quantifying the movement-related cortical activity

The EEG signals were first visually inspected to remove trials containing artifacts. Afterwards, the earlier onset of the two EMG signals was used to align and segment the remaining EEG. Segmented signals (−1 s to 1 s, with 0 indicating the onset of the EMG signals) were then baseline corrected (−2.5 s to −2 s), ensemble averaged, and downsampled to 256 Hz. Further processing was conducted using CURRY V5.0 (Neuroscan, El Paso, TX).
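A minimal sketch of this second pipeline is shown below: EMG onset detection with a simple threshold rule, EMG-aligned segmentation of the EEG, baseline correction, and ensemble averaging. The paper does not state its onset criterion, so the threshold rule and smoothing window used here are assumptions.

```python
import numpy as np

def emg_onset_sample(emg, fs, baseline_s=0.5, k=3.0):
    """First sample where the rectified, smoothed EMG exceeds the baseline
    mean plus k standard deviations. The specific onset rule is an
    assumption; the paper only states that the earlier of the two EMG
    onsets was used for alignment."""
    win = int(0.05 * fs)
    env = np.convolve(np.abs(emg), np.ones(win) / win, mode='same')
    base = env[: int(baseline_s * fs)]
    idx = np.nonzero(env > base.mean() + k * base.std())[0]
    return int(idx[0]) if idx.size else None

def ensemble_average(eeg_trials, onsets, fs, epoch=(-1.0, 1.0),
                     baseline=(-2.5, -2.0)):
    """Align each trial's EEG (channels x samples) to its EMG onset,
    baseline-correct, and average across trials."""
    epochs = []
    for eeg, on in zip(eeg_trials, onsets):
        e0, e1 = (on + int(t * fs) for t in epoch)
        b0, b1 = (on + int(t * fs) for t in baseline)
        seg = eeg[:, e0:e1] - eeg[:, b0:b1].mean(axis=1, keepdims=True)
        epochs.append(seg)
    return np.mean(epochs, axis=0)

# Usage sketch: the onset used per trial is the earlier of the two EMG
# channels, e.g. min(emg_onset_sample(ext, fs), emg_onset_sample(flex, fs)).
```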

In the CURRY software environment, the ensemble-averaged EEG signals were sent through a multi-stage processing procedure: 1) re-referencing to the common average, 2) low-pass filtering (9th-order Butterworth filter) with a cutoff frequency of 50 Hz, 3) estimation of the signal-to-noise ratios (SNRs), and 4) co-registration of the EEG electrode positions with the reconstructed skin surface (based on the subject's MRI). Subsequently, the cortical current distribution in two windows, −50 ms to 0 ms and 0 ms to 25 ms (with 0 representing the onset of the EMGs), was computed using the LORETA method (Lp = 1) based on a subject-specific boundary element model of the head, with the regularization parameter set to 1/SNR. This inverse method was chosen because its performance is superior to that of the other inverse methods available in CURRY, as demonstrated using both simulated EEG data and real cortical sensory evoked potentials [12].
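For intuition about how the regularization parameter enters the inverse step, the sketch below builds a generic Tikhonov-regularized minimum-norm inverse operator with the regularization weight set to 1/SNR. This is not CURRY's LORETA implementation (which additionally imposes a spatial-smoothness constraint and, here, an Lp = 1 norm); the lead-field matrix and noise covariance are assumed inputs.

```python
import numpy as np

def regularized_inverse(L, noise_cov=None, lam=0.1):
    """Tikhonov-regularized minimum-norm inverse operator.

    L:   lead-field / gain matrix (n_electrodes x n_sources), assumed given.
    lam: regularization weight; following the paper's convention it can be
         set to 1/SNR. This is a generic minimum-norm sketch, not CURRY's
         LORETA (which adds a spatial-smoothness constraint with Lp = 1).
    """
    n = L.shape[0]
    C = noise_cov if noise_cov is not None else np.eye(n)
    # J = L^T (L L^T + lam * C)^{-1} V  for scalp potentials V
    return L.T @ np.linalg.inv(L @ L.T + lam * C)

# Usage sketch:
# K = regularized_inverse(leadfield, lam=1.0 / snr)
# J = K @ avg_eeg          # estimated source currents: (n_sources, n_samples)
```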

Although the inverse calculation was performed over the whole cortex, only activity in the sensorimotor cortices (SMCs) was further analyzed, using four quantitative indices: 1) the maximum cortical current strength (MCCS), 2) the active cortical area ratio (ACAR), 3) the center of gravity (CoG) of the cortical currents, and 4) the overlapping active area (OAA).

The MCCS is the peak current magnitude within the SMCs. The ACAR is defined as the ratio between the active cortical area and the total area of the SMCs, where the active cortical area is the area with a significantly larger current strength at the 95% confidence level. This index (0–1) reflects the spatial extent of cortical activity related to the generation of SABD or EF torques. The OAA between shoulder- and elbow-related activity in the SMCs is defined as:

\[
\mathrm{OAA}_{\mathrm{SLD/ELB}} = \frac{\sum_{i \in I} C_{s_i}^{\mathrm{Norm}} \times C_{e_i}^{\mathrm{Norm}} \times a_i}{\sum_{i \in I} a_i}, \tag{1}
\]

where $C_{s_i}^{\mathrm{Norm}}$ and $C_{e_i}^{\mathrm{Norm}}$ are the normalized current strengths at the $i$th location in the SMCs while generating SABD and EF torques, respectively, and $a_i$ is the area of the $i$th location. Because the current strengths are normalized, this index reflects the overlap of the active areas related to shoulder and elbow activity rather than the effect of absolute current strength.
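The four indices can be computed directly from the reconstructed current distribution; a sketch under stated assumptions follows. The 95%-confidence criterion for "active" sources is replaced by a precomputed threshold, and the normalization in Eq. (1) is assumed to be by each map's maximum, which the paper does not specify.

```python
import numpy as np

def cortical_indices(cur_a, cur_b, areas, coords, active_threshold):
    """Quantitative indices over sensorimotor-cortex (SMC) source locations.

    cur_a, cur_b:     current strengths per source for two activity maps
                      (e.g. shoulder- vs elbow-related activity).
    areas:            element areas a_i.
    coords:           (n_sources, 3) source positions, for the CoG.
    active_threshold: stand-in for the paper's 95%-confidence criterion.
    """
    mccs = cur_a.max()                                          # MCCS
    active = cur_a > active_threshold
    acar = areas[active].sum() / areas.sum()                    # ACAR in [0, 1]
    cog = (coords * cur_a[:, None]).sum(axis=0) / cur_a.sum()   # CoG

    # Eq. (1): normalize each map to its own maximum (an assumption about
    # the normalization), then compute the strength-weighted area overlap.
    cs, ce = cur_a / cur_a.max(), cur_b / cur_b.max()
    oaa = np.sum(cs * ce * areas) / np.sum(areas)
    return mccs, acar, cog, oaa
```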

III. Results

Mean recognition rates for the two modified TFSP BMI algorithms (i.e., using the NN or the SVC classifier) under the two conditions (gravity and non-gravity) in the 4 stroke subjects, for opening or closing the hand while reaching, are listed in Table 2. A two-way ANOVA (factors: BMI algorithm and condition) showed a non-significant increase in recognition rate (P = 0.16) when the ACT3D supported the full arm weight in these 4 subjects. Regression of the mean difference in recognition rate between the two conditions (averaged over the two methods) against the Fugl-Meyer (FM) hand score showed a trend (P = 0.17) suggesting that stroke subjects with lower FM hand scores benefit more from use of the ACT3D robot.
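For readers who want to reproduce this style of analysis, the sketch below runs a two-way ANOVA (algorithm x condition) on recognition rates and a regression of the gravity-to-non-gravity improvement against the FM hand score, using statsmodels and SciPy. All numbers in it are placeholders, not the values reported in Tables 1 and 2.

```python
import pandas as pd
from scipy.stats import linregress
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table of recognition rates: one row per
# subject x algorithm (NN / SVC) x condition. Values are placeholders.
df = pd.DataFrame({
    "subject":   ["S1", "S1", "S1", "S1", "S2", "S2", "S2", "S2"],
    "algorithm": ["NN", "NN", "SVC", "SVC"] * 2,
    "condition": ["gravity", "nongravity"] * 4,
    "rate":      [0.62, 0.68, 0.64, 0.71, 0.60, 0.73, 0.63, 0.75],
})

# Two-way ANOVA: recognition rate ~ algorithm + condition
model = ols("rate ~ C(algorithm) + C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Regression of the gravity-to-non-gravity improvement against the
# Fugl-Meyer (FM) hand score (again, placeholder values).
fm_hand = [2, 1, 4, 3]
improvement = [0.05, 0.12, 0.02, 0.04]
print(linregress(fm_hand, improvement))
```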

Table 2
Accuracy of the two BMI algorithms under gravity and non-gravity conditions. (Results for the two conditions are listed in each cell in the format gravity/non-gravity.)

In subject S2, who showed the largest increase in recognition rate when the ACT3D supported his paretic arm, we performed an inverse calculation in order to reconstruct the cortical activity at the end of the reaching phase and the beginning of the holding phase (see figure 2). Results for the hand-opening task are shown in figure 4. Quantitative analyses of the inverse results for both the hand-opening and hand-closing tasks are listed in Table 3.

Fig 4
Inverse calculation results in stroke subject S2 when performing tasks of reaching (left column, (a) and (c)) and then opening of his hand (right column, (b) and (d)) using his impaired arm under non-gravity (upper row) and gravity conditions (lower row). ...
Table 3
The quantitative results of inverse calculation in subject S2.

IV. Discussion

Using two different BMI algorithms, we observed a non-significant trend showing that use of the ACT3D robot can improve recognition rates in 4 stroke subjects. Additionally, our results suggest that more severely impaired stroke subjects may benefit more from use of the robot, possibly because more severe coupling between the shoulder and elbow is expressed in subjects with lower FM scores. Our group also has quantitative data demonstrating that using the ACT3D robot to compensate for arm weight, and thus reducing the shoulder abductor drive required for reaching, helps stroke subjects decouple the shoulder and elbow [1, 2]. This decoupling may also reduce the abnormal finger/wrist flexion during reach when the arm weight is supported by the ACT3D robot. The robot-induced decoupling could be associated with changes in cortical activity, which is indirectly suggested by the increase in recognition rate obtained with the aforementioned BMI algorithms.

We further investigated the cortical activity under the gravity and non-gravity conditions in one stroke subject. The results of this investigation provide more direct evidence of robot-induced changes in the sensorimotor cortices. The following observations were made:

First, with regard to cortical activity strength, we observed a higher maximum strength under the gravity condition over the whole SMC region. In the hand area of the motor cortex, however, cortical activity was lower in the gravity condition than in the non-gravity condition (for hand opening, the strengths in the hand area were 0.09 μA/mm² versus 0.15 μA/mm² in the gravity and non-gravity conditions, respectively). This reflects the different distributions of cortical activity in the two conditions: in the gravity condition, activity is observed throughout the whole SMC area, whereas in the non-gravity condition it remains concentrated in the hand area of the SMC.

Second, with regard to the ACAR, we observed a smaller active area during the non-gravity condition than during the gravity condition for both the hand-opening and hand-closing tasks. This result suggests that when a stroke subject has to lift the paretic arm, the total active cortical area may increase.

Third, with regard to the shift in CoG, we made two comparisons: the CoGs within the same time window between the two gravity conditions, and the changes in CoG between the two time windows for each condition. In the first comparison, we found more medial CoGs under the gravity condition in both windows, suggesting that cortical activity was scattered across both hemispheres rather than concentrated in the contralateral hand area. In the second comparison, we observed a larger shift in the x-direction (i.e., medial/lateral) toward the more lateral M1 under the non-gravity condition than under the gravity condition. In the first window (−50 to 0 ms), the subject was still approaching the target, although cortical activity related to the hand-opening/closing movement may also have been present; in the second window (0 to 25 ms), the subject concentrated more on the hand task. When comparing the inverse results obtained in the first window to those in the second, a shift of cortical activity from a more medial to a more lateral position is expected in a normal brain, because more hand activity is expected during the second window. Differences between the two conditions for the hand-closing task are not as clear as those for the hand-opening task. The shifts in CoG suggest that use of the ACT3D robot allows the remaining cortical resources to be focused more on the hand task instead of being diverted to driving other muscles.

Finally, with regard to the OAA, we observed a higher overlap between the active areas in the first and second time windows during the gravity condition than during the non-gravity condition (0.14 vs. 0.09) when the subject was performing the hand-opening task. Again, differences between the two conditions for hand closing are not as clear as those for hand opening, which may result from the fact that closing of the hand is part of the flexion synergy during reach following stroke. We believe that the increase in overlapped area under the gravity condition corresponds to the higher shoulder activity required to counteract the force of gravity.

In short, we found that use of the ACT3D robot can improve the performance of BMI algorithms. This improvement is believed to be related to changes in cortical activity induced by the robot. Our results suggest that the ACT3D robot can be used for the neurorehabilitation of hand function in severely impaired stroke subjects.

Acknowledgments

This work was supported in part by Grant R01 (5R01HD39343-02) from the NIH and an SDG grant (0435348Z) from the American Heart Association.

Contributor Information

Jun Yao, Department of Physical Therapy and Human Movement Sciences, Northwestern University, Chicago, IL 60611 USA (phone: 312-503-4430; fax: 312-908-0741).

Clay Sheaff, Department of Biomedical Engineering, Northwestern University, Chicago, IL 60611 USA.

Julius P. A. Dewald, Departments of Physical Therapy and Human Movement Sciences, Biomedical Engineering and Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL 60611 USA.

References

1. Beer RF, Dewald JP, Dawson ML, Rymer WZ. Target-dependent differences between free and constrained arm movements in chronic hemiparesis. Exp Brain Res. 2004;156:458–70.
2. Sukal TM, Ellis MD, Dewald JP. Application of a haptic interface for investigating movement patterns following stroke. Presented at IEEE EMBC; Shanghai, China; 2005.
3. Beer RF, Given JD, Dewald JPA. Task-dependent weakness at the elbow in patients with hemiparesis. Archives of Physical Medicine and Rehabilitation. 1999;80:766–772.
4. Bourbonnais D. Abnormal spatial patterns of elbow muscle activation in hemiparetic human subjects. Brain. 1989;112:85–102.
5. Dewald J, Beer R. Evidence for abnormal joint torque patterns in the paretic upper limb of subjects with hemiparesis. Muscle & Nerve. 2001;24:273–283.
6. Dewald J, Sheshadri V, Dawson M, Beer R. Upper limb discoordination in hemiparetic stroke: implications for neurorehabilitation. Topics in Stroke Rehabilitation. 2001;8:1–12.
7. Dewald JP, Pope SP, Given JD, Buchanan TS, Rymer WZ. Abnormal muscle coactivation patterns during isometric torque generation at the elbow and shoulder in hemiparetic subjects. Brain. 1995;118:495–510.
8. Pascual-Marqui RD, Biscay-Lirio R. Spatial resolution of neuronal generators based on EEG and MEG measurements. Int J Neurosci. 1993;68:93–105.
9. Hjorth B. An on-line transformation of EEG scalp potentials into orthogonal source derivations. Electroencephalogr Clin Neurophysiol. 1975;39:526–30.
10. Deng J, Yao J, Dewald JP. Classification of the intention to generate a shoulder versus elbow torque by means of a time-frequency synthesized spatial patterns BCI algorithm. J Neural Eng. 2005;2:131–8.
11. Zhou J, Yao J, Deng J, Dewald J. EEG-based discrimination of elbow/shoulder torques using brain computer interface algorithms: implications for rehabilitation. Conf Proc IEEE Eng Med Biol Soc. 2005:4134–7.
12. Yao J, Dewald J. Evaluation of different cortical source localization methods using simulated and experimental EEG data. Neuroimage. 2005;25:369–382.