Biomed Eng Online. 2010; 9: 64.
Published online 2010 October 28. doi:  10.1186/1475-925X-9-64
PMCID: PMC2987782

Decoding hand movement velocity from electroencephalogram signals during a drawing task

Abstract

Background

Decoding neural activities associated with limb movements is key to motor prosthesis control. So far, most studies in this area have been based on invasive approaches. Nevertheless, a few researchers have decoded kinematic parameters of single-hand movements in non-invasive ways, such as magnetoencephalogram (MEG) and electroencephalogram (EEG). These EEG studies employed center-out reaching tasks. Whether hand velocity can be decoded from EEG recorded during a self-routed drawing task remains unclear.

Methods

Here we collected whole-scalp EEG data from five subjects during a sequential 4-directional drawing task and employed spatial filtering algorithms to extract amplitude and power features of EEG in multiple frequency bands. From these features, we reconstructed hand movement velocity using a Kalman filter and a smoothing algorithm.

Results

The average Pearson correlation coefficients between the measured and decoded velocities are 0.37 for the horizontal dimension and 0.24 for the vertical dimension. The channels over motor, posterior parietal and occipital areas are most involved in decoding hand velocity. By comparing the decoding performance of features from different frequency bands, we found that not only slow potentials in the 0.1-4 Hz band but also oscillatory rhythms in the 24-28 Hz band may carry information about hand velocity.

Conclusions

These results provide further support for the neural control of motor prostheses based on EEG signals and appropriate decoding methods.

Background

A brain-computer interface (BCI) is a system that translates brain signals reflecting user intentions into commands that drive external devices [1,2]. In the past decades, various BCI systems have been developed for the rehabilitation and medical care of disabled patients [3-7]. Among them, researchers have shown particular interest in neuromotor prostheses, which move an artificial limb using the brain signals that would control the equivalent movement of a corresponding body part such as an arm or a hand [2]. To date, most progress in these BCI systems has been based on invasive approaches using neuronal firing patterns [4,8,9], local field potentials (LFPs) [10,11] or electrocorticogram (ECoG) [12-14]. These intracranial signals possess the advantages of low noise, high topographical resolution and broad bandwidth.

However, for applications in humans, invasive approaches are seriously limited by questions about the safety and durability of implanted channels [15]. Some recent studies have demonstrated that brain signals recorded non-invasively also carry significant information about detailed limb movements. For instance, from magnetoencephalogram (MEG) signals, hand movement directions have been decoded in a discrete center-out reaching task [16]; hand positions have been decoded during continuous joystick movements [17]; and hand velocities have been decoded during a discrete center-out drawing task [18], target-to-target joystick movements [19] and continuous trackball movements [20]. It has been reported that low frequency band (≤3 Hz or 2-5 Hz) MEG over motor-related areas is critically involved in representing limb movement direction and speed [16,20]. Moreover, long-distance coupling between the primary motor cortex and multiple brain areas in the low frequency band has been found during a continuous visuomotor task [20], and the neural mechanisms of speed and tau in pointing hand movements have been revealed from MEG (tau is defined as the ratio of the current distance-to-goal gap over the current instantaneous speed towards the goal) [19].

Compared with MEG, electroencephalogram (EEG) has a lower signal-to-noise ratio and spatial resolution, and it was generally thought that EEG could not provide sufficient information to reconstruct limb movements. However, EEG is easily available and more suitable for an ambulatory prosthetic system [17,21]. Therefore, a few researchers have extended this line of exploration to EEG signals. For example, hand movement directions have been inferred from EEG recorded during a center-out joystick operation in which the subjects were constrained to small finger and wrist movements [16]. Another study predicted the reaching target from EEG recorded during multi-joint center-out movements [22]. Later, a movement delay paradigm was designed to investigate brain activities in the human posterior parietal cortex (PPC) during the planning of intended movements [23]. Recently, the positions, velocities and accelerations of hand movement were modestly decoded during a 3-D center-out reaching task [24,25]. As far as we know, most of these EEG studies employed a center-out movement task consisting of pre-specified point-to-point movements: the starting and end points were fixed, and the length of each movement was well constrained.

In our study, we designed a 2-D drawing task in which the subjects were required to move a pen at their own pace along a zigzag route in each trial (refer to Figure 1). This zigzag route was determined online by the subjects themselves. Specifically, the task can be regarded as sequential point-to-point movements: at each point the subjects selected one of four directions, i.e., up, down, left or right. Moreover, the number and positions of these points, and the distance between two sequential points, were up to the subjects (not pre-specified). Thus the starting point, the end point and the length of each point-to-point movement were less restricted than in the center-out task. During the experiment, multi-channel EEG activities from the whole scalp were recorded. Then, independent component analysis (ICA) [26] was used to remove the effects of electrooculogram (EOG) and electromyogram (EMG) activities. After that, discriminative spatial pattern (DSP) filtering [27] and common spatial pattern (CSP) filtering [28] were employed to extract amplitude and power features from the retained independent components (ICs) in multiple frequency bands. Then Kalman filtering and a smoothing algorithm [29] were applied to decode hand movement velocity from these features. Furthermore, we investigated the scalp areas most involved in the decoding and evaluated the decoding performance of each frequency band.

Figure 1
Drawing task paradigm. The example of movement trajectories (blue dotted lines) performed by a subject. Movement directions are displayed as the red arrows. The starting point is represented as green circle 1. It was randomly initialized by the laptop. ...

Methods

Subjects and Recording System

Five right-handed healthy male subjects participated voluntarily in this study. Among them, Subject 1 had been well trained in BCI experiments based on hand motor imagery, while the other subjects had little or no previous experience with BCI experiments. The five subjects were instructed to move a pen (using their right wrist only, with the left hand relaxed on the lap) on the touch screen of a laptop in front of them. The pen tracks denoting the trajectories of hand movements were recorded by the laptop at a sampling rate of 64 Hz. At the same time, a 40-channel EEG cap (Compumedics LT37) was used to collect EEG signals from the subjects, and a portable amplifier (NeuroScan NuAmps) amplified the analog EEG signals and digitized them at a sampling rate of 250 Hz. The laptop received the EEG data from the amplifier through a USB port and sent synchronous stimulus codes to the amplifier through a parallel port.

Experimental Paradigm

Our experiment contained 60 trials. Each trial started with a fixation cross shown on the touch screen for 2 seconds. After that, a graphical user interface (GUI) was displayed: a 7 cm × 7 cm square in which a green ball denoting the starting point was randomly initialized. Then, in the next 40-50 seconds, subjects were asked to touch the green ball with the pen and move it at their own pace to arbitrary points in 4 directions (up, down, left and right). An example of the task is shown in Figure 1. The pen track of each trial thus corresponded to a sequence of directional hand movements, and the subjects themselves chose the number of point-to-point movements in each trial. After the drawing time slot, the GUI disappeared and the trial ended. The rest period between trials was randomized between 8 s and 10 s to prevent subjects from getting used to the timing of the transition from rest to drawing. During the rest periods, the laptop periodically told subjects which directions were under-represented, in order to balance the data. More detailed parameters of this experiment are listed in Table 1.

Table 1
The detailed parameters of the drawing task

EOG and EMG removal

During the drawing task, the recorded EEG signals were contaminated by various artifacts such as EOG and EMG [30]. These artifacts may confound the EEG decoding of hand movements [18]. As an example, we collected the EOG of Subject 3 and provide an off-line analysis in Appendix A1; the off-line analysis of EOG and the decoding of hand velocity for Subject 3 were based on the same dataset. To remove EOG and EMG, we employed ICA, a process that detects and isolates independent components (ICs) of signals consisting of mixed sources. For each subject, 30 ICs were decomposed from the EEG signals using the EEGLAB software [31], and about 12 ICs regarded as EOG/EMG were removed according to the following heuristics: (i) eye movements should project mainly to frontal sites with a low-pass time course; (ii) eye blinks should project to frontal sites and have large punctate activations; (iii) temporal muscle activities should project to temporal sites with a spectral peak above 20 Hz [32]. An example of EOG and EMG removal is also given in Appendix A1.
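This step was performed with the EEGLAB toolbox in MATLAB. As a rough illustration only, the following Python sketch (using scikit-learn's FastICA, which is an assumption rather than the authors' tool) shows the generic reconstruct-without-artifact-ICs operation; the artifact indices would come from the heuristics listed above.

```python
# Minimal sketch of ICA-based artifact removal (the paper used EEGLAB in MATLAB;
# this only illustrates the same reconstruct-without-artifact-ICs step).
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_ics(eeg, n_components=30, artifact_ics=()):
    """eeg: (n_channels, n_samples) array; artifact_ics: indices chosen by the
    frontal/temporal/spectral heuristics described above (hypothetical here)."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg.T)             # (n_samples, n_components)
    sources[:, list(artifact_ics)] = 0.0           # zero out EOG/EMG components
    cleaned = sources @ ica.mixing_.T + ica.mean_  # back-project to channel space
    return cleaned.T                               # (n_channels, n_samples)
```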

Feature extraction

Since the direction was approximately fixed (up, down, left or right) in each point-to-point movement, the hand velocities are closely related to the movement directions. For example, when a subject performs a movement to the right, the absolute hand velocity in the y-dimension is small and the hand velocity in the x-dimension is large. This suggests that brain components discriminative of different movement directions are helpful for reconstructing the hand velocity profiles. Therefore, the supervised spatial filtering methods CSP and DSP were employed here to extract discriminative brain components. Specifically, after EOG and EMG were removed, a filter bank was applied to filter the retained ICs into multiple bands (0.1-4 Hz, 4-8 Hz, 8-12 Hz, ..., 36-40 Hz). DSP was then used to extract the amplitude features of slow potentials in the 0.1-4 Hz band of the ICs, and CSP was applied to extract the power features of oscillatory rhythms from the other bands. The details of the DSP and CSP methods can be found in Appendix A2.
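As a minimal sketch of the ten-band filter bank applied to the retained ICs, the following Python snippet uses a Butterworth band-pass design; the filter order and the use of zero-phase filtering are assumptions, since the paper specifies only the band edges and the 250 Hz sampling rate.

```python
# Sketch of the ten-band filter bank applied to the retained ICs.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0
BANDS = [(0.1, 4.0)] + [(f, f + 4.0) for f in range(4, 40, 4)]  # ten bands

def filter_bank(ics):
    """ics: (n_ics, n_samples) retained independent components.
    Returns a list of band-filtered arrays, one per frequency band."""
    out = []
    for low, high in BANDS:
        b, a = butter(4, [low, high], btype="bandpass", fs=FS)  # order 4 is an assumption
        out.append(filtfilt(b, a, ics, axis=1))                 # zero-phase filtering
    return out
```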

In the DSP and CSP training procedure, we cut the hand movement trajectories into segments with a sliding window (1 s wide with 0.5 s overlap) to obtain the movement directions in the drawing task. Each segment was expected to exhibit only one movement direction; in practice, however, the trajectories of some segments were not straight lines or did not extend far enough in one direction, and the ICs of these segments were not used for DSP or CSP training. Note that DSP and CSP were originally proposed for binary classification problems. For our 4-direction hand movements, DSP and CSP need to be extended to a multiclass paradigm; in this study, they were computed between each pair of directions [33], and the number of pairs was $C_4^2 = 6$.
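A minimal sketch of this segmentation and direction labeling is given below. The minimum-displacement and straightness thresholds are assumptions; the paper states only that ambiguous segments were excluded from DSP/CSP training.

```python
# Sketch of cutting the pen trajectory into 1 s windows (0.5 s step) and
# labeling each window by its dominant movement direction.
import numpy as np

PEN_FS = 64          # pen sampling rate (Hz)
WIN = PEN_FS         # 1 s window
STEP = PEN_FS // 2   # 0.5 s step, i.e. 0.5 s overlap

def label_segments(xy, min_disp=0.5, straightness=0.8):
    """xy: (n_samples, 2) pen positions. Returns a list of (start_index, label),
    label in {'right','left','up','down'}; ambiguous windows are skipped.
    Both thresholds are hypothetical values."""
    labels = []
    for s in range(0, len(xy) - WIN + 1, STEP):
        seg = xy[s:s + WIN]
        d = seg[-1] - seg[0]                                        # net displacement
        path = np.sum(np.linalg.norm(np.diff(seg, axis=0), axis=1))  # path length
        if path == 0 or np.linalg.norm(d) < min_disp:
            continue                                  # too little movement
        if np.linalg.norm(d) / path < straightness:
            continue                                  # not straight enough
        if abs(d[0]) >= abs(d[1]):
            labels.append((s, "right" if d[0] > 0 else "left"))
        else:
            labels.append((s, "up" if d[1] > 0 else "down"))
    return labels
```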

After CSP/DSP filter training was completed, the 2 most discriminative DSP filters and the 4 most discriminative CSP filters were obtained for each pair of directions (see Appendix A2). They were then used to filter the multi-band ICs into time series. In each frequency band, the combination of ICA and DSP/CSP can be formulated as:

$$\xi_i = W_i^T (U X_i) \qquad (1)$$

where $X_i \in \mathbb{R}^{C \times T}$ is the recorded EEG signal in the $i$th frequency band, $i = 1,2,\ldots,10$, $C$ is the number of channels, $T$ is the number of sample points covering the entire time period of an experiment, $U \in \mathbb{R}^{m \times C}$ is the 'unmixing' matrix of ICA, $m$ is the number of retained ICs, $W_i \in \mathbb{R}^{m \times l_i}$ is the filtering matrix of DSP or CSP in the $i$th frequency band, $l_i$ is the number of selected filters ($l_1 = 12$, $l_2 = l_3 = \ldots = l_{10} = 24$), and $\xi_i \in \mathbb{R}^{l_i \times T}$ is the filtered data.

Finally, we extracted the features from the filtered data $\xi_i$ every 200 ms without overlap, i.e., $\xi_i = [\psi_{i,1}, \psi_{i,2}, \ldots, \psi_{i,N}]$, where $N$ is the number of 200 ms bins. Within each 200 ms bin, the average amplitudes of the 0.1-4 Hz signals were calculated as $z_{1,j}(q) = \mathrm{mean}(\psi_{1,j}^q)$, where $\psi_{1,j}^q$ is the $q$th row of $\psi_{1,j}$, $j = 1,2,\ldots,N$, $q = 1,2,\ldots,12$. The variances of the signals in the other frequency bands were computed, normalized and log-transformed as $z_{i,j}(p) = \log\{\mathrm{var}(\psi_{i,j}^p) / \sum_{p=1}^{24} \mathrm{var}(\psi_{i,j}^p)\}$, where $\psi_{i,j}^p$ is the $p$th row of $\psi_{i,j}$, $i = 2,3,\ldots,10$, $p = 1,2,\ldots,24$. Before decoding, these features were normalized to zero mean and unit variance. They are denoted as $z_j = [z_{1,j}, z_{2,j}, \ldots, z_{10,j}]$, $z \in \mathbb{R}^{D \times N}$, where $D = 228$ is the feature dimension. Moreover, in this paper, the x-velocity and y-velocity of the hand movement were measured as the displacements of the pen track in the horizontal and vertical dimensions within each 200 ms bin, respectively.
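A minimal Python sketch of this feature-extraction step, under the assumption that the band-filtered ICs $U X_i$ and the trained spatial filters $W_i$ are already available; the binning and normalization follow the description above.

```python
# Sketch of equation (1) followed by the 200 ms feature binning: mean amplitude
# for the 0.1-4 Hz DSP components, normalized log-variance for the CSP
# components of the higher bands, then z-scoring of each feature.
import numpy as np

def extract_features(band_ics, W_list, fs=250, bin_s=0.2):
    """band_ics: list of 10 arrays, each (n_ics, T) of band-filtered ICs (U X_i);
    W_list: list of 10 spatial-filter matrices W_i of shape (n_ics, l_i).
    Returns features z of shape (D, N)."""
    step = int(fs * bin_s)
    feats = []
    for i, (ic, W) in enumerate(zip(band_ics, W_list)):
        xi = W.T @ ic                                   # equation (1)
        n_bins = xi.shape[1] // step
        bins = xi[:, :n_bins * step].reshape(xi.shape[0], n_bins, step)
        if i == 0:                                      # 0.1-4 Hz: mean amplitude
            f = bins.mean(axis=2)
        else:                                           # >4 Hz: normalized log-variance
            v = bins.var(axis=2)
            f = np.log(v / v.sum(axis=0, keepdims=True))
        feats.append(f)
    z = np.vstack(feats)                                # (D, N), D = 12 + 9*24 = 228
    z = (z - z.mean(axis=1, keepdims=True)) / z.std(axis=1, keepdims=True)
    return z
```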

Decoding Algorithms

The decoding algorithm presented in this paper consists of a standard Kalman filter and a smoother. The Kalman filter is a real-time processing algorithm in which the state estimate is updated immediately after a new observation becomes available. The smoother, in contrast, optimally combines the Kalman filter with a reverse-time information filter, yielding a minimum-variance estimate based on past, present and future information [34].

(1) Kalman filter

The Kalman filter assumes a discrete filtering model [29], with the following system and observation equations:

$$v_{j+1} = A_j v_j + n_j \qquad (2)$$

$$z_j = H_j v_j + q_j \qquad (3)$$

In this paper, the state vector is denoted by $v_j = [v_{x,j}, v_{y,j}]^T$, with $v_{x,j}$ and $v_{y,j}$ representing the horizontal and vertical velocities respectively at time step $j$; $A_j \in \mathbb{R}^{2 \times 2}$ is the state transition matrix, and $n_j \sim N(0, N_j)$ is the noise term, where $N_j \in \mathbb{R}^{2 \times 2}$. The observation vector $z_j \in \mathbb{R}^D$ is made up of the extracted features, $H_j \in \mathbb{R}^{D \times 2}$ is the observation matrix, and $q_j \sim N(0, Q_j)$ is the observation noise term, where $Q_j \in \mathbb{R}^{D \times D}$, $D = 228$, $j = 1,2,\ldots,M_k$, and $M_k$ is the number of time steps in the $k$th trial. Here $A_j$, $H_j$, $N_j$ and $Q_j$ are simplified as constant matrices. The matrices $A$ and $H$ can be obtained from the training data by least squares estimation:

$$A = \arg\min_{A} \sum_{k \in Tr} \sum_{j=1}^{M_k - 1} \left\| v_{k,j+1} - A v_{k,j} \right\|^2, \qquad H = \arg\min_{H} \sum_{k \in Tr} \sum_{j=1}^{M_k} \left\| z_{k,j} - H v_{k,j} \right\|^2$$

where $Tr$ is the set of training trials. With the estimated $A$ and $H$, the noise covariance matrices $N$ and $Q$ can be obtained from equations (2) and (3). The prediction and update equations of the Kalman filter for testing can be written as follows [29]:

Prediction:
$$\hat{v}_j^- = A \hat{v}_{j-1}, \qquad P_j^- = A P_{j-1} A^T + N$$

Update:
$$S_j = H P_j^- H^T + Q, \qquad K_j = P_j^- H^T S_j^{-1}, \qquad \hat{v}_j = \hat{v}_j^- + K_j (z_j - H \hat{v}_j^-), \qquad P_j = P_j^- - K_j S_j K_j^T$$

where $\hat{v}_j^-$ and $P_j^-$ are the predicted mean and covariance of the state before seeing $z_j$; $S_j$ is the prediction covariance of the observation; $\hat{v}_j$ and $P_j$ are the estimated mean and covariance of the state after seeing $z_j$; and $K_j$ is the filter gain.
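A minimal Python sketch of this decoding stage, assuming per-trial training velocities and features are available: A and H are fit by least squares, N and Q from the model residuals, and the standard predict/update recursion is then run on the test features. The initialization of the state (v0, P0) is an assumption not specified in the paper.

```python
# Sketch of Kalman model fitting and filtering for velocity decoding.
import numpy as np

def fit_kalman(V_list, Z_list):
    """V_list: per-trial (M_k, 2) velocities; Z_list: per-trial (M_k, D) features."""
    V0 = np.vstack([v[:-1] for v in V_list]); V1 = np.vstack([v[1:] for v in V_list])
    V  = np.vstack(V_list);                   Z  = np.vstack(Z_list)
    A = np.linalg.lstsq(V0, V1, rcond=None)[0].T   # v_{j+1} ~ A v_j
    H = np.linalg.lstsq(V,  Z,  rcond=None)[0].T   # z_j ~ H v_j
    N = np.cov((V1 - V0 @ A.T).T)                  # state noise covariance
    Q = np.cov((Z  - V  @ H.T).T)                  # observation noise covariance
    return A, H, N, Q

def kalman_filter(Z, A, H, N, Q, v0=None, P0=None):
    """Z: (M, D) test features. Returns filtered means (M, 2) and covariances (M, 2, 2)."""
    v = np.zeros(2) if v0 is None else v0          # assumed initial state
    P = np.eye(2) if P0 is None else P0
    vs, Ps = [], []
    for z in Z:
        v, P = A @ v, A @ P @ A.T + N              # prediction
        S = H @ P @ H.T + Q                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # filter gain
        v = v + K @ (z - H @ v)                    # update mean
        P = P - K @ S @ K.T                        # update covariance
        vs.append(v); Ps.append(P)
    return np.asarray(vs), np.asarray(Ps)
```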

(2) Smoother

The smoother estimates are calculated from the Kalman filter results by the following backward recursions [34]:

$$C_j = P_j A^T [A P_j A^T + N]^{-1}$$

$$\hat{v}_j^s = \hat{v}_j + C_j [\hat{v}_{j+1}^s - A \hat{v}_j]$$

$$P_j^s = P_j + C_j [P_{j+1}^s - A P_j A^T - N] C_j^T$$

where $C_j$ is the smoother gain; $\hat{v}_j$ and $P_j$ are the filter estimates of the state mean and covariance; $\hat{v}_j^s$ and $P_j^s$ are the smoother estimates of the state mean and covariance. The recursions proceed backward from the last time step.
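A minimal sketch of these fixed-interval smoother recursions, assuming the filtered means and covariances from the forward pass (as returned by the Kalman filter sketch above) have been stored.

```python
# Sketch of the smoother recursions run backward over the stored filter estimates.
import numpy as np

def rts_smoother(v_filt, P_filt, A, N):
    """v_filt: (M, 2) filtered means; P_filt: (M, 2, 2) filtered covariances."""
    M = len(v_filt)
    v_s = v_filt.copy()
    P_s = P_filt.copy()
    for j in range(M - 2, -1, -1):                    # backward from the last step
        P_pred = A @ P_filt[j] @ A.T + N              # A P_j A^T + N
        C = P_filt[j] @ A.T @ np.linalg.inv(P_pred)   # smoother gain C_j
        v_s[j] = v_filt[j] + C @ (v_s[j + 1] - A @ v_filt[j])
        P_s[j] = P_filt[j] + C @ (P_s[j + 1] - P_pred) @ C.T
    return v_s, P_s
```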

Results

To study the fidelity of the drawing movement decoding and the characteristics of the associated EEG signals, we report the accuracy of the hand velocity decoding, identify the scalp areas most involved in the decoding, and present the frequency bands that carried information about hand velocity. Five-fold cross-validation was employed in the evaluation: each subject's data were divided into 5 parts, of which 4 were used for training and the remaining part for testing. This procedure was repeated 5 times, each time with a different part as the test set. The results of these evaluations are described below.

Decoding accuracy of drawing movement

Table 2 shows three performance indices used to assess the decoding accuracy: (i) the Pearson correlation coefficient (r-value), abbreviated as CC, between the measured and the decoded hand velocities; (ii) the p-value for testing the null hypothesis that the measured and decoded hand velocities are uncorrelated (Student's t-test); (iii) the signal-to-noise ratio (SNR), defined as $\mathrm{SNR} = 10 \log_{10} [E(v^2) / E((v - \hat{v})^2)]$, where $v$ denotes the measured hand velocity and $\hat{v}$ the decoded hand velocity.
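A minimal sketch of these three indices for one movement dimension, assuming the measured and decoded velocity traces are aligned 1-D arrays.

```python
# Sketch of the three performance indices: Pearson CC, its p-value, and the SNR
# defined as 10*log10( E[v^2] / E[(v - v_hat)^2] ).
import numpy as np
from scipy.stats import pearsonr

def decoding_metrics(v, v_hat):
    """v, v_hat: 1-D arrays of measured and decoded velocity (one dimension)."""
    cc, p = pearsonr(v, v_hat)                        # correlation and p-value
    snr = 10.0 * np.log10(np.mean(v ** 2) / np.mean((v - v_hat) ** 2))
    return cc, p, snr
```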

Table 2
Decoding performance of hand velocity using ICA-cleaned EEG

From Table 2 we can see that, except for the result of Subject 1 in the y-dimension, the small p-values indicate that the CCs are significant. On average, the modest CCs and SNRs demonstrate that it is possible to infer information about hand velocities in the drawing task from EEG. For most subjects, the hand velocities in the horizontal dimension, x, were better decoded than those in the vertical dimension, y. A similar disparity between movement dimensions in MEG decoding has been discussed in [35]: because the subjects were asked to draw on a vertical touch screen, gravitational force may affect the drawing action and degrade the decoding in the y-dimension [35]. Although we only present the results for one parameter setting (1 s segment length for CSP/DSP filter training and 200 ms step size for Kalman smoother decoding), we found that these parameters could be chosen over a wide range. For instance, other settings (segment lengths of 0.5 s and 2 s for CSP/DSP filter training; decoding step sizes of 100 ms and 300 ms) yielded comparable results, which are not included in this paper due to limited space.

Some examples of measured and decoded hand velocities in the x-dimension and y-dimension are displayed in Figure 2. In the y-dimension, the decoded velocities hardly reflect the trends of the measured ones, while in the x-dimension the decoded velocities generally match the measured ones better. Meanwhile, the measured velocities roughly consist of sequential bell shapes, each indicating a relatively straight trajectory made by a subject in a certain direction. Note that most bell shapes are irregular, which may be due to two factors: (i) variable friction between the pen and the touch screen; and (ii) visually guided point-to-point movements are not executed in a purely feed-forward manner [19].

Figure 2
Decoding examples. Examples of smoothed and standardized measured (blue) and decoded (red) hand velocities. The left column is for x-dimension, and the right column is for simultaneous y-dimension. Each row contains data for one subject. The Pearson correlation ...

Scalp areas most involved for hand velocity decoding

Note that the brain components were generated by applying ICA and CSP/DSP to EEG signals. We rewrite equation (1) as

$$\xi_i = B_i X_i$$

where $B_i = W_i^T U$, $B_i \in \mathbb{R}^{l_i \times C}$, $l_i$ is the number of selected filters in the $i$th frequency band, $i = 1,2,\ldots,10$, and $C$ is the number of channels. Each row of $B_i$ gives a weight vector over channels for constructing one brain component. In the Kalman model for velocity decoding, the observation vector consists of the features extracted from these brain components, so $B_i$ partly reflects the importance of the channels for velocity decoding. To investigate which channels were more involved in the velocity decoding in the $i$th frequency band, we averaged the rows of $B_i$ as follows:

$$I_i = \frac{1}{l_i} \sum_{q=1}^{l_i} |B_i^q|$$

where $B_i^q$ is the $q$th row of $B_i$, $B_i^q, I_i \in \mathbb{R}^C$, and $|\cdot|$ is the element-wise absolute value operator. Figure 3 shows the scalp topographies of $I_1$ and $\sum_{i=2}^{10} I_i / 9$, corresponding to the frequency bands 0.1-4 Hz and 4-40 Hz respectively.
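A minimal sketch of this channel-weight computation, assuming the ICA unmixing matrix and the trained per-band spatial filters are available.

```python
# Sketch of the channel weights: B_i = W_i^T U combines the ICA unmixing matrix
# and the spatial filters, and I_i averages the absolute rows of B_i.
import numpy as np

def channel_weights(U, W_list):
    """U: (m, C) ICA unmixing matrix; W_list: spatial filters W_i of shape (m, l_i).
    Returns I_low (0.1-4 Hz weights) and I_high (mean over the 4-40 Hz bands)."""
    I = []
    for W in W_list:
        B = W.T @ U                              # (l_i, C) channel weights per filter
        I.append(np.mean(np.abs(B), axis=0))     # I_i, one weight per channel
    I_low = I[0]
    I_high = np.mean(I[1:], axis=0)              # average over the nine 4-40 Hz bands
    return I_low, I_high
```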

Figure 3
Scalp topographies of channel weights according to the feature extraction for velocity decoding. (A) This figure shows the averaged scalp topographies of channel weights across five subjects in 0.1-4 Hz (left) and 4-40 Hz (right), respectively; (B) This ...

Figure 3(A) presents the average scalp topographies across the 5 subjects. Generally, the contralateral and ipsilateral channels over motor, posterior parietal and occipital areas have greater weights, with a clear contralateral dominance. Specifically, for the amplitude features in the low frequency band (0.1-4 Hz), the channels over premotor, posterior parietal and occipital areas have greater weights; for the power features in 4-40 Hz, the channels over posterior parietal and occipital areas have greater weights. These findings suggest a widespread involvement of brain areas in representing hand kinematics during the drawing task. The results are approximately in accordance with the following studies: Wang et al. demonstrated that intended movement directions can be predicted from EEG recorded over posterior parietal areas [23]; Bradberry et al. showed that the sensorimotor area is important for hand velocity decoding [24]; and Vaillancourt et al. showed that the parietal and premotor cortices are associated with visuomotor processes [36].

Figure 3(B) displays the scalp topographies separately for each subject. On the whole, the channels over motor, posterior parietal and occipital areas have greater weights in both the 0.1-4 Hz and 4-40 Hz bands for all subjects, although the weights of these areas are subject-dependent. As an exception, for Subject 4 the channels over the prefrontal area also have greater weights, which may have been caused by residual artifacts.

Decoding performance of different frequency bands

To explore which frequency bands carry information about hand velocity, we studied the decoding performance of each band; the results are shown in Figure 4. The frequency distribution for decoding is highly subject-dependent. For example, for Subject 1 the CC value of the low frequency band (0.1-4 Hz) is significantly inferior to those of the other frequency bands in the x-dimension (p < 0.05, paired left-tailed Student's t-test), whereas for Subject 3 the CC value of the low frequency band is significantly superior to those of the other frequency bands in the x-dimension (p < 0.05, paired right-tailed Student's t-test). Moreover, the CC values for Subject 1 are essentially zero in the y-dimension for all frequency bands and about 0.5 in the x-dimension above 8 Hz. This may be explained as follows: Subject 1 had been well trained for cursor control in a BCI system through left- and right-hand movement imagery, so his voluntary power modulation of 8-40 Hz rhythms had been reinforced; the drawing task, performed with the right hand, may have activated this power modulation in the x-dimension, masking the information about hand movement in the y-dimension. For Subject 2, the poor CC values of the frequency bands above 4 Hz indicate that, for certain people, information about limb kinematics may not be inferable from EEG above 4 Hz; the study of Waldert et al. provided similar results [16]. Regarding the average across all subjects, there is no significant difference between the CC values of the low frequency band (0.1-4 Hz) and those of the other frequency bands in the x-dimension (p > 0.40, paired two-tailed Student's t-test); however, the CC values of the 24-28 Hz band are significantly higher than those of the low frequency band in the y-dimension (p < 0.05, paired right-tailed Student's t-test). These findings imply that, besides the slow potentials from 0.1 Hz to 4 Hz, the oscillatory rhythms from 24 Hz to 28 Hz may also carry notable information about hand movement velocity.

Figure 4
Decoding performance of different bands. By using the features from different frequency bands respectively, we show the mean and SEM of the Pearson correlation coefficients (CCs) between measured and decoded hand velocities across cross-validation folds ...

Comparison on decoding performance with ICA-cleaned data and non-ICA-cleaned data

Here we list the decoding performance (CC) with non-ICA-cleaned data in Table 3. Comparing the CCs in Table 2 and Table 3, we find that the non-ICA-cleaned data result in remarkably higher decoding accuracies in both the x-dimension and the y-dimension (p < 0.05, paired right-tailed Student's t-test). This indicates that the components removed by ICA could offer a considerable contribution to hand velocity decoding: although most of these components are EOG and EMG (see Appendix A1), the removed components may also contain EEG signals that carry information about hand velocity to some degree.

Table 3
Decoding performance of hand velocity using non-ICA-cleaned EEG

Comparison on decoding performance of linear filter, Kalman filter and Kalman smoother

Many decoding algorithms have been used to reconstruct hand velocities, such as the linear filter in the study of Bradberry et al. [24] and the Kalman filter in the research of Wu et al. [37]. As discussed in [37], compared with the linear filter, the Kalman filter possesses the advantages of a clear probabilistic foundation and a model of the temporal hand kinematics. Building on the work of Wu et al. [37], here we employed the smoothing method to integrate not only past and present information but also future information of hand velocities into the Kalman model. The average decoding performance across the five subjects for the linear filter, Kalman filter and Kalman smoother with different lag times is shown in Figure 5. Paired Student's t-tests were employed to compare the decoding performance of the three methods; the results are listed in Table 4. From Figure 5 and Table 4, we find that with different lag times the CCs and SNRs of the Kalman smoother are significantly better than those of the linear filter and the Kalman filter (p < 0.05, right-tailed), except in the y-dimension where the SNRs of the Kalman smoother are not significantly superior to those of the Kalman filter (p > 0.05, right-tailed). Since the Kalman smoother used in this paper is an off-line algorithm, we plan to modify it and extend this work to an online system in the future.

Figure 5
Comparison on decoding performance of linear filter, Kalman filter and Kalman smoother. This figure shows the mean (bar) with SEM (error bar) of CC (the first row) and SNR (the second row) across the 5 subjects with different lag time using linear filter, ...
Table 4
Comparison on decoding performance of Kalman smoother and the other methods

Discussion

Comparison with other related studies

In this paper, the average CC across the five subjects over the x-dimension and y-dimension is 0.30. In the most closely related work, hand velocity was reconstructed from EEG during a 3-D center-out reaching task, and a very close CC (0.29) was obtained [24]. In addition, MEG signals also reflect the activities of large neuronal populations; from MEG, hand velocities were predicted during a 2-D center-out drawing task, and a higher CC (0.4) was obtained without EOG or EMG removal [18]. Therefore, the decoding accuracy of our work is within the range of those achieved in the studies mentioned above. Moreover, we compare the experimental paradigm in this paper with that in [24] as follows:

(i) In [24], the center-out task is a 3-D reaching movement, in which the subject moved his hand from a fixed starting point (center) to one of 8 stationary targets, and then moved his hand back to the center. In this paper, the task is a 2-D self-routed drawing movement, in which the subject was required to move a pen at his own pace along a zigzag route in each trial. This task can be regarded as sequential point-to-point movements: at each point the subject selected one of four directions, and the number and positions of these points, as well as the distance between two sequential points, were up to the subject. Therefore, compared to [24], the starting point, the end point and the length of each point-to-point movement in our experiments were less constrained, and the subjects could perform the movements with higher variability. It has been reported in [24] that the variabilities of movement time and movement length are negatively correlated with the accuracy of hand velocity decoding. From this viewpoint, the hand velocity of our drawing movement could be harder to decode than that of the center-out movement task.

(ii) In [24], subjects were asked to perform multi-joint movements of the upper limb. In our work, the subjects were instructed to make movements only with their hands and wrists, while keeping their shoulders and arms at rest. We studied hand movements not only because of the interesting work on hand movement direction decoding [16], but also because the hand is relatively far from the EEG cap, which reduces EMG contamination of the EEG signals. Since our drawing task requires eye-hand coordination, EOG and EMG may confound the EEG decoding; we therefore employed ICA to remove EOG and EMG artifacts.

Decoding hand kinematics in different frequency bands

Which frequency band of the neural signal carries the most information about limb kinematics is an important issue in existing studies. For example, Ball et al. summarized the decoding accuracies of arm movement direction for different ECoG frequency bands, and indicated that the highest decoding accuracy can be obtained from slow movement-related potentials (MRPs) (<2 Hz) [38]. Jerbi et al. reported notable phase locking between 2-5 Hz MEG oscillatory activity in the contralateral primary motor cortex and time-varying hand speed [20]. Regarding EEG recordings, Waldert et al. discovered that low frequency band (≤3 Hz) EEG from sensors located over the motor-related area has a close relationship with movement directions [16]. In addition, it is well known that the planning and execution of movement leads to significant power modulation of 8-30 Hz EEG, i.e., event-related synchronization/desynchronization (ERS/ERD) [39,40]. Such characteristic changes in EEG rhythms have been used to classify brain states related to the planning/imagery of different types of limb movement [41]. Recently, Han et al. reported that EEG activities in the alpha (8-12 Hz) and beta (18-28 Hz) frequency bands were correlated with the speed of imagined clenching [42]. In our study, we have shown that hand velocity can be represented by the MRPs in the 0.1-4 Hz band and the ERD/ERS in the 24-28 Hz band. Furthermore, we analyzed the relevance of the decoding results from different frequency bands (see Appendix A3), and found that the decoding results of MRPs from the low frequency band (0.1-4 Hz) are only weakly correlated with those of the oscillatory rhythms from the higher frequency bands (4-40 Hz). This indicates that the potential shifts in the low frequency band and the power modulations in the higher frequency bands reflect different aspects of brain activity related to hand movement velocity. In addition, from the scalp map in Figure 3(A), we find that in the low frequency band the channels over the motor, posterior parietal and occipital areas have greater weights, demonstrating that the features in the low frequency band capture a movement-related neural signature. This finding is in accordance with the ECoG study of Schalk et al., which also focused on decoding kinematic parameters of hand movement [14].

Conclusions

Decoding limb kinematics from brain signals in non-invasive ways may enable safe and convenient control of motor prostheses. In this paper, we demonstrated that EEG signals can be used to decode hand velocity during a sequential drawing task. The scalp areas over the motor cortex, posterior parietal cortex and occipital areas were most involved in the decoding. Furthermore, we showed that not only slow potentials in the 0.1-4 Hz band, but also oscillatory rhythms in the 24-28 Hz band, may carry information about hand velocity.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JL participated in the design of the study, carried out the experiment and data analysis, and drafted the manuscript. YL conceived of the study, and participated in its design and coordination. JL, YL and ZG read and approved the final manuscript.

Appendix

A1. EOG and EMG removal based on ICA

In our study, we recorded Subject 3's EOG activity with a bipolar sensor montage: sensors were attached superior and inferior to the orbital fossa of the right eye for vertical eye movements, and to the external canthi for horizontal eye movements. First, we computed the Pearson correlation coefficient (CC) and p-value (Student's t-test for the no-correlation hypothesis) between the EOG signal and the measured hand velocity. The results are listed in Table 5. The correlation between the horizontal EOG activity and the horizontal hand velocity is significant (p < 0.001).

Table 5
Correlation between EOG activity and hand velocity

Next, we removed the EOG and EMG artifacts using the ICA method. ICA removes artifacts from EEG recordings by eliminating the contributions of artifact sources to the scalp sensors. Using the data from Subject 3, we provide the regularized scalp maps of all the ICs in Figure 6.

Figure 6
Regularized Scalp maps of all the independent components (ICs). This figure shows the scalp maps of all the ICs based on the data of Subject 3.

From Figure 6, we can see that the projection strengths of IC5, IC6 and IC14 were concentrated on Fp1 or Fp2; these ICs were removed as eye movement artifacts [31]. To demonstrate the validity of ICA for EOG removal in our study, we computed the CCs between the independent components (ICs) and the recorded EOG activities. The results are shown in Figure 7, where we can observe that, except for IC5, IC6 and IC14, none of the components is strongly correlated with the EOG activities.

Figure 7
Correlation coefficients between EOG activities and independent components (ICs). This figure shows the correlation coefficients (CCs) between the ICs and EOG in horizontal and vertical direction respectively.

On the other hand, Figure 6 shows that the projection strengths of IC10 and IC29 are concentrated on the temporal sites. Their power spectra, shown in Figure 8, demonstrate high power at frequencies above 20 Hz; IC10 and IC29 were therefore removed as EMG artifacts [31]. In our study, some ICs partially exhibited the characteristics of EOG/EMG, such as IC1, IC7, IC13, IC15, IC21, IC22, IC25, IC26 and IC27; these were also removed.

Figure 8
Power spectra of EMG independent components. This figure shows the power spectra of IC10 (A) and IC29 (B). The corresponding scalp maps are shown in Figure 6.

A2. Details of DSP and CSP algorithms

Both DSP and CSP are linear projection methods [27,28]. They share the same data model, $Y = W^T X$, where $Y \in \mathbb{R}^{C \times T}$ denotes the source components, $W \in \mathbb{R}^{C \times C}$ is the projection matrix and $X \in \mathbb{R}^{C \times T}$ represents the EEG segment, with $C$ denoting the number of channels and $T$ the number of samples in the time interval of interest.

However, the goals of DSP and CSP are different. For DSP, $W$ is sought for the purpose of extracting the amplitude of a slow non-oscillatory source. It projects EEG segments onto the linear subspace in which the between-class separation is maximized while the within-class separation is minimized. The projection vector achieving the largest ratio of between-class to within-class separation is defined as the most discriminative filter. Let $S_b$ and $S_w$ denote the between-class and within-class scatter matrices of the EEG segments, respectively:

$$S_b = \sum_{j=1}^{K} n_j (M_j - M)(M_j - M)^T \qquad (A1)$$

$$S_w = \sum_{j=1}^{K} \sum_{i=1}^{n_j} (X_j(i) - M_j)(X_j(i) - M_j)^T \qquad (A2)$$

where $X_j(i)$ represents the $i$th EEG segment of class $j$, $K$ is the number of classes, $n_j$ is the number of EEG segments of class $j$, $M_j$ is the average of the EEG segments of class $j$, and $M$ is the average of all EEG segments. The objective function of DSP can then be written as [27]:

$$\max J_{DSP}(W) = \frac{|W^T S_b W|}{|W^T S_w W|} \qquad (A3)$$

(A3) has the form of a Rayleigh quotient, so the solution can be obtained by solving the following generalized eigenvalue problem:

$$S_b w_q = \gamma_q S_w w_q \qquad (A4)$$

where $q = 1,2,\ldots,C$, $\gamma_q$ is an eigenvalue and $w_q$ is the corresponding eigenvector. Assuming the eigenvalues are sorted in descending order, only the few eigenvectors $W^* = [w_1, \ldots, w_d]$ associated with the largest eigenvalues are chosen as the most discriminative spatial filters, where $d \ll C$. Each EEG segment is then projected as $Y^* = W^{*T} X$, $Y^* \in \mathbb{R}^{d \times T}$. To obtain the amplitude features of slow potential shifts, we calculate the mean of $Y^*$ as $f_{DSP}^r = \mathrm{mean}(y_r^*)$, where $r = 1, \ldots, d$ and $y_r^*$ is the $r$th row of $Y^*$. In our work, $d = 2$.
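A minimal Python sketch of DSP training and the amplitude-feature computation, assuming labeled 0.1-4 Hz segments are available and that the within-class scatter matrix is well conditioned (no regularization is described in the paper).

```python
# Sketch of DSP: solve the generalized eigenvalue problem S_b w = gamma S_w w
# and keep the d eigenvectors with the largest eigenvalues.
import numpy as np
from scipy.linalg import eigh

def train_dsp(segments, labels, d=2):
    """segments: list of (C, T) low-band segments; labels: class id per segment."""
    classes = sorted(set(labels))
    M = np.mean(segments, axis=0)                             # grand mean (C, T)
    Sb = np.zeros((M.shape[0], M.shape[0]))
    Sw = np.zeros_like(Sb)
    for c in classes:
        Xc = [x for x, y in zip(segments, labels) if y == c]
        Mc = np.mean(Xc, axis=0)                              # class mean
        Sb += len(Xc) * (Mc - M) @ (Mc - M).T                 # between-class scatter
        for x in Xc:
            Sw += (x - Mc) @ (x - Mc).T                       # within-class scatter
    vals, vecs = eigh(Sb, Sw)                                 # ascending eigenvalues
    return vecs[:, -d:]                                       # d most discriminative filters

def dsp_features(W_star, X):
    """Mean amplitude of each projected row, as in f_DSP above."""
    return (W_star.T @ X).mean(axis=1)
```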

For CSP, $W$ is optimized to obtain the band power of an oscillatory source. It maps EEG segments onto the linear subspace in which the variance of one class is maximized while the variance of the other class is minimized. The projection vectors achieving the largest and smallest ratios of the variances of the two classes are defined as the most discriminative filters. Let $R$ denote the normalized covariance matrix of an EEG segment, i.e., $R = XX^T / \mathrm{trace}(XX^T)$; the objective function of CSP can then be formulated as [28]:

$$\max J_{CSP}(W) = \frac{|W^T R_1 W|}{|W^T R_2 W|} \qquad (A5)$$

where $R_1$ and $R_2$ represent the averages of the covariance matrices of the EEG segments within class 1 and class 2, respectively. Like (A3), (A5) has the form of a Rayleigh quotient, and the solution can be obtained by solving the generalized eigenvalue problem:

$$R_1 w_q = \beta_q R_2 w_q \qquad (A6)$$

where $q = 1,2,\ldots,C$, $\beta_q$ is an eigenvalue and $w_q$ is the corresponding eigenvector. Assuming the eigenvalues are sorted in descending order, the eigenvectors associated with the largest and smallest $m$ eigenvalues are chosen as the most discriminative spatial filters, i.e., $W^* = [w_1, \ldots, w_m, w_{C-m+1}, \ldots, w_C]$, where $m \ll C$. Each EEG segment is then projected as $Y^* = W^{*T} X$, $Y^* \in \mathbb{R}^{2m \times T}$. To extract the power features, we computed the variance of each row of $Y^*$, normalized it and applied a logarithm transformation: $f_{CSP}^r = \log\{\mathrm{var}(y_r^*) / \sum_{r=1}^{2m} \mathrm{var}(y_r^*)\}$. In this paper, $m = 2$. The logarithm transformation is performed to normalize the distribution of the elements of $f_{CSP}^r$.
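A minimal Python sketch of pairwise CSP training and the log-variance features, under the same assumption that the averaged covariance matrices are well conditioned.

```python
# Sketch of CSP: solve R_1 w = beta R_2 w and keep the m eigenvectors at each
# end of the spectrum; features are the normalized log-variances.
import numpy as np
from scipy.linalg import eigh

def train_csp(segments1, segments2, m=2):
    """segments1/2: lists of (C, T) band-filtered segments for the two classes."""
    def avg_cov(segs):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in segs], axis=0)
    R1, R2 = avg_cov(segments1), avg_cov(segments2)
    vals, vecs = eigh(R1, R2)                      # generalized problem, ascending order
    return np.hstack([vecs[:, -m:], vecs[:, :m]])  # largest and smallest m eigenvectors

def csp_features(W_star, X):
    """Normalized log-variance of each projected row, as in f_CSP above."""
    Y = W_star.T @ X
    v = Y.var(axis=1)
    return np.log(v / v.sum())
```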

A3. Relevance of decoding results from different frequency bands

The absolute correlation coefficient matrices of the decoded hand velocities from different frequency bands are shown in Figure 9. Figure 9(A) illustrates the average of the matrices over the 5 subjects: the decoding result from the low frequency band (0.1-4 Hz) is only weakly correlated with those from the frequency bands above 4 Hz in both the x-dimension and the y-dimension (|CC| < 0.05). Considering the patterns of individual subjects, we obtain similar results. Figures 9(B)-(F) show the matrices for the five subjects respectively; for all 5 subjects, the decoding result from the low frequency band (0.1-4 Hz) is not significantly correlated with those from the frequency bands above 4 Hz in either dimension (|CC| < 0.07, p > 0.05 for testing the hypothesis of no correlation).
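A minimal sketch of this band-relevance analysis, assuming the decoded velocity traces obtained from each band separately have been stacked into one array per movement dimension.

```python
# Sketch of the absolute correlation matrix between the per-band decoded velocities.
import numpy as np

def band_correlation_matrix(decoded_by_band):
    """decoded_by_band: (10, N) array, one decoded velocity trace per frequency band
    (for one movement dimension). Returns the 10 x 10 matrix of |CC| values."""
    return np.abs(np.corrcoef(decoded_by_band))
```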

Figure 9
The absolute correlation coefficient matrices of decoded hand velocities from different frequency bands.

Acknowledgements

This work was supported by National Natural Science Foundation of China under Grants 60825306, Guangdong Natural Science Foundation under Grants 9251064101000012 and Fundamental Research Funds for the Central Universities, SCUT under Grants 2009ZZ0055 and 2009ZZ0059.

References

1. Dornhege G, Millan J, Hinterberger T, McFarland DJ, Müller KR. Toward brain-computer interfacing. Cambridge, MA: MIT Press; 2007.
2. Waldert S, Pistohl T, Braun C, Ball T, Aertsen A, Mehring C. A review on directional information in neural signals for brain-machine interfaces. J Physiol (Paris). 2009;103:244–254. doi:10.1016/j.jphysparis.2009.08.007.
3. Wolpaw JR, McFarland DJ. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc Natl Acad Sci USA. 2004;101:17849–17854. doi:10.1073/pnas.0403504101.
4. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442:164–171. doi:10.1038/nature04970.
5. Mason SG, Bashashati A, Fatourechi M, Navarro KF, Birch GE. A comprehensive survey of brain interface technology designs. Ann Biomed Eng. 2007;35:137–169. doi:10.1007/s10439-006-9170-0.
6. Zhang H, Guan C, Wang C. Asynchronous P300-based brain-computer interfaces: a computational approach with statistical models. IEEE Trans Biomed Eng. 2008;55:1754–1763. doi:10.1109/TBME.2008.919128.
7. Blakely T, Miller KJ, Zanos SP, Rao RP, Ojemann JG. Robust, long-term control of an electrocorticographic brain-computer interface with fixed parameters. Neurosurg Focus. 2009;27:E13. doi:10.3171/2009.4.FOCUS0977.
8. Taylor DM, Tillery SI, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science. 2002;296:1829–1832. doi:10.1126/science.1070291.
9. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB. Cortical control of a prosthetic arm for self-feeding. Nature. 2008;453:1098–1101. doi:10.1038/nature06996.
10. Mehring C, Rickert J, Vaadia E, Cardosa DO, Aertsen A, Rotter S. Inference of hand movements from local field potentials in monkey motor cortex. Nat Neurosci. 2003;6:1253–1254. doi:10.1038/nn1158.
11. Rickert J, Oliveira SC, Vaadia E, Aertsen A, Rotter S, Mehring C. Encoding of movement direction in different frequency ranges of motor cortical local field potentials. J Neurosci. 2005;25:8815–8824. doi:10.1523/JNEUROSCI.0816-05.2005.
12. Leuthardt EC, Schalk G, Wolpaw JR, Ojemann JG, Moran DW. A brain computer interface using electrocorticographic signals in humans. J Neural Eng. 2004;1:63–71. doi:10.1088/1741-2560/1/2/001.
13. Pistohl T, Ball T, Schulze-Bonhage A, Aertsen A, Mehring C. Prediction of arm movement trajectories from ECoG-recordings in humans. J Neurosci Methods. 2008;167:105–115. doi:10.1016/j.jneumeth.2007.10.001.
14. Schalk G, Kubánek J, Miller KJ, Anderson NR, Leuthardt EC, Ojemann JG, Limbrick D, Moran DW, Gerhardt LA, Wolpaw JR. Decoding two-dimensional movement trajectories using electrocorticographic signals in humans. J Neural Eng. 2007;4:264–275. doi:10.1088/1741-2560/4/3/012.
15. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002;113:767–791. doi:10.1016/S1388-2457(02)00057-3.
16. Waldert S, Preissl H, Demandt E, Braun C, Birbaumer N, Aertsen A, Mehring C. Hand movement direction decoded from MEG and EEG. J Neurosci. 2008;28:1000–1008. doi:10.1523/JNEUROSCI.5171-07.2008.
17. Georgopoulos AP, Langheim FJ, Leuthold AC, Merkle AN. Magnetoencephalographic signals predict movement trajectory in space. Exp Brain Res. 2005;167:132–135. doi:10.1007/s00221-005-0028-8.
18. Bradberry TJ, Rong F, Contreras-Vidal JL. Decoding center-out hand velocity from MEG signals during visuomotor adaptation. NeuroImage. 2009;47:1691–1700. doi:10.1016/j.neuroimage.2009.06.023.
19. Tan HR, Leuthold AC, Lee DN, Lynch JK, Georgopoulos AP. Neural mechanisms of movement speed and tau as revealed by magnetoencephalography. Exp Brain Res. 2009;195:541–552. doi:10.1007/s00221-009-1822-5.
20. Jerbi K, Lachaux JP, N'Diaye K, Pantazis D, Leahy RM, Garnero L, Baillet S. Coherent neural representation of hand speed in humans revealed by MEG imaging. Proc Natl Acad Sci USA. 2007;104:7676–7681. doi:10.1073/pnas.0609632104.
21. Stefan R, Hermann S. On the opposition of EEG and MEG. Clin Neurophysiol. 2007;118:1658–1659. doi:10.1016/j.clinph.2007.04.021.
22. Hammon PS, Makeig S, Poizner H, Todorov E, de Sa VR. Predicting reaching targets from human EEG. IEEE Signal Process Mag. 2008;25:69–77. doi:10.1109/MSP.2008.4408443.
23. Wang Y, Makeig S. Predicting intended movement direction using EEG from human posterior parietal cortex. Conf Proc HCI (16). 2009. pp. 437–446.
24. Bradberry TJ, Gentili RJ, Contreras-Vidal JL. Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. J Neurosci. 2010;30:3432–3437. doi:10.1523/JNEUROSCI.6107-09.2010.
25. Bradberry TJ, Gentili RJ, Contreras-Vidal JL. Decoding three-dimensional hand kinematics from electroencephalographic signals. Conf Proc IEEE EMBS. 2009. pp. 5010–5013.
26. Kachenoura A, Albera L, Senhadji L, Comon P. ICA: a potential tool for BCI systems. IEEE Signal Process Mag. 2008;25:57–68. doi:10.1109/MSP.2008.4408442.
27. Liao X, Yao DZ, Wu D, Li CY. Combining spatial filters for the classification of single-trial EEG in a finger movement task. IEEE Trans Biomed Eng. 2007;54:821–831. doi:10.1109/TBME.2006.889206.
28. Blankertz B, Tomioka R, Lemm S, Kawanabe M, Müller KR. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Process Mag. 2008;25:41–56. doi:10.1109/MSP.2008.4408441.
29. Bar-Shalom Y, Li XR, Kirubarajan T. Estimation with applications to tracking and navigation: theory, algorithms and software. New York: Wiley; 2001.
30. Fatourechi M, Bashashati A, Ward RK, Birch GE. EMG and EOG artefacts in brain computer interface systems: a survey. Clin Neurophysiol. 2006;118:480–494. doi:10.1016/j.clinph.2006.10.019.
31. Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics. J Neurosci Methods. 2004;134:9–21. doi:10.1016/j.jneumeth.2003.10.009.
32. Jung TP, Makeig S, Humphries C, Lee TW, McKeown MJ, Iragui V, Sejnowski TJ. Removing electroencephalographic artifacts by blind source separation. Psychophysiology. 2000;37:163–178. doi:10.1017/S0048577200980259.
33. Sadeghian EB, Moradi MH. Continuous detection of motor imagery in a four-class asynchronous BCI. Conf Proc IEEE Eng Med Biol Soc. 2007. pp. 3241–3244.
34. Tarvainen MP, Georgiadis SD, Ranta-Aho PO, Karjalainen PA. Time-varying analysis of heart rate variability signals with a Kalman smoother algorithm. Physiol Meas. 2006;27:225–239. doi:10.1088/0967-3334/27/3/002.
35. Bradberry TJ, Contreras-Vidal JL, Rong F. Decoding hand and cursor kinematics from magnetoencephalographic signals during tool use. Conf Proc IEEE Eng Med Biol Soc. 2008. pp. 5306–5309.
36. Vaillancourt DE, Mayka MA, Corcos DM. Intermittent visuomotor processing in the human cerebellum, parietal cortex and premotor cortex. J Neurophysiol. 2006;95:922–931. doi:10.1152/jn.00718.2005.
37. Wu W, Black MJ, Gao Y, Bienenstock E, Serruya M, Shaikhouni A, Donoghue JP. Neural decoding of cursor motion using a Kalman filter. In: Advances in Neural Information Processing Systems 15. Cambridge, MA: MIT Press; 2003. pp. 133–140.
38. Ball T, Schulze-Bonhage A, Aertsen A, Mehring C. Differential representation of arm movement direction in relation to cortical anatomy and function. J Neural Eng. 2009;6:016006. doi:10.1088/1741-2560/6/1/016006.
39. Pfurtscheller G, Lopes da Silva FH. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol. 1999;110:1842–1857. doi:10.1016/S1388-2457(99)00141-8.
40. Pineda JA, Allison BZ, Vankov A. The effects of self-movement, observation, and imagination on mu rhythms and readiness potentials (RPs): toward a brain-computer interface (BCI). IEEE Trans Rehabil Eng. 2000;8:219–222. doi:10.1109/86.847822.
41. Townsend G, Graimann B, Pfurtscheller G. Continuous EEG classification during motor imagery - simulation of an asynchronous BCI. IEEE Trans Neural Syst Rehabil Eng. 2004;12:258–265. doi:10.1109/TNSRE.2004.827220.
42. Han Y, Christopher P, Bin H. Relationship between speed and EEG activity during imagined and executed hand movements. J Neural Eng. 2010;7:026001. doi:10.1088/1741-2560/7/2/026001.
