J Zhejiang Univ Sci B. 2010 April; 11(4): 298–306.
PMCID: PMC2852547

Neural decoding based on probabilistic neural network*


Brain-machine interfaces (BMIs) have been developed for their potential to restore function in severe body paralysis. The technology has been used to realize direct control of prosthetic devices, such as robot arms, computer cursors, and paralyzed muscles, and a variety of neural decoding algorithms have been designed to explore the relationship between neural activity and limb movement. In this paper, two novel neural decoding methods based on the probabilistic neural network (PNN) were introduced and tested in rats: the PNN decoder and the modified PNN (MPNN) decoder. In the experiment, rats were trained to obtain water by pressing a lever over a pressure threshold. A microelectrode array was implanted in the motor cortex to record neural activity, and pressure was synchronously recorded by a pressure sensor. After training, pressure values were estimated from the neural signals by the PNN and MPNN decoders, and their performance was evaluated by the correlation coefficient (CC) and mean square error (MSE). The results show that the MPNN decoder, with a CC of 0.8657 and an MSE of 0.2563, outperformed the traditionally used Wiener filter (WF) and Kalman filter (KF) decoders. It was also observed that the discretization level did not systematically affect MPNN performance, indicating that the MPNN decoder can handle different tasks in a BMI system, including detection of movement states and estimation of continuous kinematic parameters.

Keywords: Brain-machine interfaces (BMI), Neural decoding, Probabilistic neural network (PNN), Microelectrode array

1. Introduction

The technology of invasive brain-machine interfaces (BMIs) has been developed over recent decades for its potential to treat neurological disabilities (Lebedev and Nicolelis, 2006; Feng et al., 2007; Ye et al., 2008). The main task of a BMI system is to decode cortical neural activity into commands for directly brain-controlled prosthetic devices. It has been shown that neural ensemble activity in the motor cortex is distributed rather than centralized, and the development of multielectrode array techniques for population-level neural recording has enabled research into distributed coding in the motor cortex (Nicolelis, 2003).

Decoding the movement of human and animal forelimbs from neural spiking activity has been the central task in the BMI domain. As early as the 1970s, Schmidt et al. (1978) showed that monkeys could learn to freely control the firing rate of a single neuron in the primary motor cortex in a closed-loop system. In the 1980s, Georgopoulos et al. (1988; 1989) found a relation between the movement direction of the upper limb and single-neuron firing patterns in the primary motor cortex of rhesus monkeys. These studies opened the door to BMI research. In 2000, an open-loop BMI system for direct neural control of a robot arm in the owl monkey (Aotus) was developed by Nicolelis’ group (Wessberg et al., 2000). Subsequently, the same group predicted the velocity and force of a rhesus monkey’s upper limb from neural ensemble activity in its motor cortex, and the predicted signal was likewise used to control a robot arm (Lebedev et al., 2005). In 2006, BMI technology was first applied in humans by Hochberg et al. (2006): a 96-microelectrode array was implanted in the primary motor cortex of a paralyzed patient to record neural spiking signals, and decoding algorithms were developed for direct neural control of a cursor. Velliste et al. (2008) decoded neural signals from a rhesus monkey’s motor cortex to control a prosthetic device with four degrees of freedom. As motor-related BMI technology matures, it may be used to restore motor function in paralyzed patients and greatly improve their quality of life.

Recently, a great number of neural decoding algorithms have been developed, including the population vector (PV) algorithm, the Wiener filter (WF), and Bayesian filters (BF). The PV algorithm has proven to be a successful neural decoding method, in which each neuron’s activity is characterized by its preferred direction and firing rate. PV is an optimal solution when the tuning functions are linear and the preferred directions are uniformly distributed (Georgopoulos et al., 1988; 1989; Zhang et al., 1998; Taylor et al., 2002). WF, which uses the least-squares rule, is applied in BMI systems when the preferred directions are not uniformly distributed (Wessberg et al., 2000; Serruya et al., 2002; Carmena et al., 2003; Hatsopoulos et al., 2004; Gage et al., 2005; Hochberg et al., 2006). Note that WF is the more general method, and PV is a special case of WF; both assume that firing rates are linearly related to the underlying movement parameters. BF approaches usually assume that the kinematic decoding procedure follows a first-order Markov model, whose states are the kinematic parameters and whose measurements are the neural spiking activity. The Kalman filter (KF), one of these approaches, adopts the linear assumption and the least-squares rule, providing an efficient recursive implementation of the Bayesian rule when the likelihood and prior models are linear and Gaussian (Wu et al., 2002; 2003; 2004; 2006). As a more general method than KF, the particle filter (PF) combines Monte Carlo sampling with Bayesian inference for kinematic state estimation and imposes no linear or Gaussian assumptions (Brockwell et al., 2004; Shoham et al., 2005). Although PF is more powerful than the other methods, its heavy computational cost makes it difficult to implement. Spike train analysis has also been described as a point process based on linear and nonlinear models (Brown et al., 1998; Smith and Brown, 2003; Eden et al., 2004; Wang et al., 2009).
In this paper, the probabilistic neural network (PNN) was used for the first time to decode neural spiking activity. PNN imposes no linearity constraint on the relation between neural firing and kinematic parameters, and it requires far less computation than PF (Specht, 1990). In addition, as a classification method, the PNN decoder can vary its number of decision attributes: a small number suffices when we only want to detect the occurrence of events, while a larger number should be used when precise decoding is required. By modulating the number of decision attributes, the method is computationally more manageable and controllable than PF; and as a nonlinear method, it outperformed KF and WF in our experiment.

In the experiment, a 2×8-microelectrode array was implanted in the motor cortex of rats to record neural spiking activity, and a water reward was given when the rat pressed a lever with its forelimb. Spikes were detected and sorted, and the pressure signal of the lever was recorded by a pressure sensor. We then used PNN as a decoding algorithm to translate the rat’s neural firing into pressure values, in two steps. In the training process, the structure of the PNN was built from training data containing both binned neural spiking data and the synchronously recorded pressure signal. In the testing process, the pressure value was estimated from the trained PNN structure and the current neural activity.

In the present study, two types of PNN decoders were defined. One uses the current neural activity as the input of the PNN, similar to the WF. The other has an input vector containing both the current neural activity and the previously estimated pressure value. The latter is a modified PNN (MPNN) with an auto-regressive element in the spirit of a hidden Markov model, though not equivalent to one, since no distribution is kept as a state model. Furthermore, two commonly used decoding methods, WF and KF, were implemented and their performance compared with the PNN decoders. The correlation coefficient (CC) and mean square error (MSE) were used to evaluate decoder performance. The results show that the MPNN decoder performed best, with a higher CC and lower MSE than the others.

2. Materials and methods

2.1. Probabilistic neural network

PNN is a typical nonlinear classifier that uses the minimum Bayesian risk criterion (Specht, 1990). In this paper, we discretized the pressure value into several levels as the decision attributes of the PNN. The number of discretization levels can be changed for different demands: only 2–5 levels are needed to judge whether the rat has pressed the lever, whereas for more precise decoding the pressure value should be discretized into many more levels, e.g., 10–100 levels in the current study. Compared with the traditional back-propagation neural network, PNN takes less time to train and does not need retraining when training data are added or removed. Moreover, it always obtains the Bayes-optimal solution when the training data are sufficient, regardless of the complexity of the classification problem.

In our case, multidimensional data X = [x_1, x_2, …, x_q]^T are to be classified into N (N≥2) classes C_1, C_2, …, C_i, …, C_N corresponding to levels of the pressure value. The Bayes decision is made by the rule: vector X belongs to C_i when H_i L_i f_i(X) > H_j L_j f_j(X) for all j = 1, 2, …, N and j ≠ i, where H_i denotes the prior probability that X belongs to C_i, L_i is the loss factor when a mistake is made for class C_i, and f_i(X), a spherical Gaussian radial basis function, is a Parzen probability density estimator:

$$f_i(X) = \frac{1}{(2\pi)^{q/2}\sigma^q M_i}\sum_{k=1}^{M_i}\exp\left[-\frac{(X-X_{ik})^{\rm T}(X-X_{ik})}{2\sigma^2}\right], \quad (1)$$

where (·)^T denotes matrix transpose, q is the dimension of X, M_i is the number of training vectors in class C_i, X_ik is the kth training vector in class C_i, and σ is the smoothing factor (bandwidth). When both training and test vectors are normalized to unit length, (X − X_ik)^T(X − X_ik) = 2(1 − X^T X_ik), and Eq. (1) becomes

$$f_i(X) = \frac{1}{(2\pi)^{q/2}\sigma^q M_i}\sum_{k=1}^{M_i}\exp\left[\frac{X^{\rm T}X_{ik}-1}{\sigma^2}\right]. \quad (2)$$

Substituting H_i = M_i/M_total into the Bayes decision rule (H_i L_i f_i(X) > H_j L_j f_j(X) for j ≠ i), we obtain a simplified classification rule from Eq. (2): X belongs to C_i when

$$L_i\sum_{k=1}^{M_i}\exp\left[\frac{X^{\rm T}X_{ik}-1}{\sigma^2}\right] > L_j\sum_{k=1}^{M_j}\exp\left[\frac{X^{\rm T}X_{jk}-1}{\sigma^2}\right], \quad \forall j\neq i. \quad (3)$$

The PNN is a simple parallel four-layer structure: input, pattern, summation, and decision layers (Fig. 1). The input layer receives and normalizes the input vector; each unit in the pattern layer represents a training vector with response function exp[(X^T X_ik − 1)/σ²]. The summation layer sums the responses of each class’s patterns and multiplies by the loss factor. The decision layer selects the largest summation as the classification result. As prior knowledge grows, the PNN can be expanded horizontally and its classification capacity increased.

Fig. 1

Probabilistic neural network structure
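The four-layer computation above can be sketched in a few lines. The following is a minimal illustration (in Python rather than the Matlab used in the study), assuming unit-normalized input and training vectors and the summed-response rule of Eq. (3); the function name and interface are our own:

```python
import numpy as np

def pnn_classify(X, train_vectors, train_labels, sigma=0.1, loss=None):
    """Classify one normalized input vector X with a probabilistic neural
    network, using the normalized-response rule exp[(X^T Xik - 1)/sigma^2]."""
    classes = np.unique(train_labels)
    if loss is None:
        loss = {c: 1.0 for c in classes}          # equal loss factors L_i
    scores = {}
    for c in classes:
        Xc = train_vectors[train_labels == c]      # training vectors of class C_i
        # pattern layer: one response per training vector
        responses = np.exp((Xc @ X - 1.0) / sigma**2)
        # summation layer: sum pattern responses, weighted by the loss factor
        scores[c] = loss[c] * responses.sum()
    # decision layer: pick the class with the largest summed response
    return max(scores, key=scores.get)
```

Because H_i = M_i/M_total cancels the 1/M_i normalization of Eq. (2), summing (rather than averaging) the pattern responses implements the simplified rule directly.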

2.2. Experiment environment and data processing

Approval for all experimental procedures was obtained from the Animal Care Committee at Zhejiang University, China, which follows the guide for the care and use of laboratory animals (Ministry of Health of the People’s Republic of China, 2000). Methods for surgery and multineuron recording have been published (Paxinos et al., 1980). Sprague-Dawley (SD) rats (approximately 280 g, provided by Zhejiang Academy of Medical Sciences, China) were trained to perform a conditional operant task: when a rat pressed the lever with its forelimb and the pressure value exceeded a threshold, the animal was rewarded with a drop of water. All rats were trained to achieve a success rate of >75% before surgery. After that, a 2×8-channel chronic microwire electrode array (35 μm in diameter, 300 μm between rows, and 200 μm within rows; California Fine Wire, CA, USA) was implanted into the forepaw region of the rat’s forelimb primary motor cortex. The electrode tips were positioned in layer V, at an approximate depth of 1.1–1.8 mm beneath the pia; layer V contains cell bodies that project directly to the spinal cord to activate peripheral motor neurons. To decrease common-mode noise, one of the 16 electrodes was used as a reference electrode, and the 16-channel neural analog signals were recorded at a 30-kHz sampling rate, filtered, and amplified using the Cerebus 128™ system (Cyberkinetics Inc., USA). The pressure signal was synchronously recorded by a pressure sensor at 500 Hz during the experiment.

Neural spiking activities were extracted from the 15 neural signal channels (excluding the reference electrode) by a threshold method. Spikes from each channel were then classified into 1–3 types by the typically used principal components analysis and k-means clustering, each type representing one neuron physiologically. A total of 22–58 neurons were found across the 15 channels per rat. To compute the spiking frequency of each neuron, the number of spikes in each time bin (Δt=100 ms) was counted. Meanwhile, the recorded pressure signal was reduced to the same bin size (Δt=100 ms) by averaging the pressure values within each bin. Discretization was then applied to transfer the pressure value into the discrete decision attributes of the PNN:

$$c(n) = \left\lceil \frac{p(n)}{p_{\max}}\,N \right\rceil, \quad (4)$$

where p(n) denotes the pressure value in the nth bin, p_max is the maximum recorded pressure, N is the number of discretization levels, and c(n) is the resulting discrete level.
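The discretization step can be sketched as a uniform quantizer over [0, p_max]. This is a minimal Python illustration under that assumption; the function name and the clipping of zero pressure to the lowest level are our own choices:

```python
import numpy as np

def discretize_pressure(p, n_levels, p_max=None):
    """Map binned pressure values to integer levels 1..n_levels.
    A uniform quantizer over [0, p_max] is assumed here."""
    p = np.asarray(p, dtype=float)
    if p_max is None:
        p_max = p.max()                          # scale by the recorded maximum
    levels = np.ceil(p / p_max * n_levels).astype(int)
    return np.clip(levels, 1, n_levels)          # keep zero pressure at level 1
```

With, say, 10 levels, a half-maximum pressure maps to level 5 and the maximum to level 10.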

The PNN approach was then used to estimate the pressure value from the neural ensemble spike firing, turning the neural decoding problem into a classification problem. Two kinds of PNN decoders were defined, with different input vectors given by Eqs. (5) and (6):

$$X(n) = [f_1(n), f_2(n), \ldots, f_J(n)]^{\rm T}, \quad (5)$$

$$X(n) = [f_1(n), f_2(n), \ldots, f_J(n), \hat{p}(n-1)]^{\rm T}, \quad (6)$$

where f_j(n) denotes the firing rate of the jth neuron (j = 1, 2, …, J) in the nth bin, and p̂(n−1) denotes the previously estimated pressure. The decoder with the input vector of Eq. (5) was named the PNN decoder, and the decoder with the input vector of Eq. (6) the MPNN decoder. The decoding process of the PNN decoder was as follows: first, the structure of the PNN was trained on a segment of training data whose decision attributes were the discretized p(n); then, in the testing step, p̂(n) was estimated by classifying X(n) with the trained PNN structure. The process of the MPNN decoder was similar, except that in the training process the p̂(n−1) in Eq. (6) was replaced by the actually recorded pressure value p(n−1).
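The sequential MPNN decoding loop can be sketched as follows. This is a minimal Python illustration, not the study’s Matlab code; `pnn_predict`, `mpnn_decode`, and the unit-normalization step are our own assumptions, and the training vectors `Xtr` are assumed to be built as in Eq. (6) with the recorded p(n−1) and normalized the same way:

```python
import numpy as np

def pnn_predict(X, Xtr, ytr, sigma):
    """Bayes decision with the PNN summed-response rule (equal losses)."""
    scores = {c: np.exp((Xtr[ytr == c] @ X - 1.0) / sigma**2).sum()
              for c in np.unique(ytr)}
    return max(scores, key=scores.get)

def mpnn_decode(firing_rates, Xtr, ytr, sigma=0.3, p0=0):
    """Sequentially estimate discretized pressure from binned firing rates.
    Each input vector is [f_1(n), ..., f_J(n), p_hat(n-1)], as in Eq. (6)."""
    estimates, p_prev = [], p0
    for f in firing_rates:                      # one row of firing rates per bin
        x = np.append(f, p_prev)                # append previous estimate
        x = x / (np.linalg.norm(x) + 1e-12)     # PNN assumes normalized vectors
        p_prev = pnn_predict(x, Xtr, ytr, sigma)
        estimates.append(p_prev)
    return np.array(estimates)
```

Feeding the previous estimate back into the input is what gives the MPNN its auto-regressive smoothing effect.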

WF and KF were also implemented for comparison with the PNN methods. The WF decoder was trained by the least-squares rule to obtain a linear transform matrix and estimated the pressure value from the input vector of Eq. (5), following the same two-step process as the PNN decoder. The KF decoder separates the neural decoding problem into two models (Wu et al., 2003): a generative model relating the neural activity to the movement, and a system model evolving the movement over bins. The details of these methods are not shown here.
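As a sketch of the WF comparison decoder, the least-squares fit of a linear map (with a bias term) from firing-rate vectors to pressure can be written as below. This is a minimal Python illustration under our own naming, not the study’s implementation:

```python
import numpy as np

def train_wiener(F, p):
    """Least-squares fit of a linear map from firing-rate vectors (rows of F,
    with a constant column appended) to the recorded pressure train p."""
    A = np.hstack([F, np.ones((F.shape[0], 1))])   # append bias column
    w, *_ = np.linalg.lstsq(A, p, rcond=None)      # least-squares rule
    return w

def wiener_decode(F, w):
    """Apply the trained linear weights to new firing-rate vectors."""
    A = np.hstack([F, np.ones((F.shape[0], 1))])
    return A @ w
```

Unlike the PNN decoders, the output here is continuous and needs no discretization, but the linear assumption limits how well abrupt pressure changes are captured.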

The pressure values estimated by the decoders were compared with the sensor-recorded pressure values by CC and MSE, computed as Eqs. (7) and (8):

$$\mathrm{CC} = \frac{\sum_n \left(p(n)-\bar{p}\right)\left(\hat{p}(n)-\bar{\hat{p}}\right)}{\sqrt{\sum_n \left(p(n)-\bar{p}\right)^2 \sum_n \left(\hat{p}(n)-\bar{\hat{p}}\right)^2}}, \quad (7)$$

$$\mathrm{MSE} = \frac{1}{N_{\rm bin}} \sum_n \left(p(n)-\hat{p}(n)\right)^2, \quad (8)$$

where p(n) and p̂(n) denote the real and estimated pressure trains, p̄ and \bar{p̂} their means, and N_bin the number of bins.
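Both metrics are standard and can be computed in a few lines; the following Python sketch (function name our own) uses the sample correlation for Eq. (7) and the mean squared difference for Eq. (8):

```python
import numpy as np

def evaluate(p_true, p_est):
    """Correlation coefficient (Eq. 7) and mean square error (Eq. 8)
    between the recorded and estimated pressure trains."""
    p_true = np.asarray(p_true, dtype=float)
    p_est = np.asarray(p_est, dtype=float)
    cc = np.corrcoef(p_true, p_est)[0, 1]     # off-diagonal entry is CC
    mse = np.mean((p_true - p_est) ** 2)
    return cc, mse
```

A perfect reconstruction gives CC = 1 and MSE = 0; note that CC is invariant to scale and offset, which is why the two metrics are reported together.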

3. Results

During task training, rats were kept on water restriction, and water was obtained only as reward for pressing the lever, 0.05 ml each time. Each rat pressed the lever 200–300 times a day, receiving about 10–15 ml of water. Three rats (S9-03, S9-04, S9-12) were implanted with 2×8-microelectrode arrays. After recovering from surgery for one week, all three rats could obtain water by pressing the lever repeatedly. The pressing events recurred with a stable period of 3–5 s each (1 s for pressing the lever and 2–4 s for drinking water). The signal processing procedures preparing for neural decoding of S9-03 are shown in Fig. 2. After a second-order Butterworth band-pass filter (750–7000 Hz), the neural signal was recorded at 30 kHz (Fig. 2a); two channels (5 and 13) are shown. By spike detection (threshold crossing) and sorting (principal components analysis and k-means clustering), spikes were extracted and classified (Fig. 2b) from the two channels in Fig. 2a. Fig. 2c shows the spiking raster of four neurons on the same timeline as Fig. 2a. The pressure value was recorded synchronously with the neural signal by the pressure sensor in the lever, at a sampling rate of 500 Hz (Fig. 2d). The spike firing rate in each bin was then computed, and the pressure value was averaged and discretized per bin. Fig. 2e shows the result for S9-03, where the color of the grid indicates the firing rate of each neuron. Tens of neurons were extracted from each rat: 31 for S9-03, 22 for S9-04, and 58 for S9-12. Neurons with only a small amount of spike firing were excluded.

Fig. 2

Simultaneously recorded neural signal and pressure signal

In the current study, two types of neural decoders based on PNN were used, the PNN decoder and the MPNN decoder, corresponding to the two input vectors of Eqs. (5) and (6); the bin size was 100 ms and the discretization level was 100. Examples of the decoders’ outputs are shown in Figs. 3a and 3b, where the dotted line indicates the pressure signal recorded by the pressure sensor and the solid line the decoding output. The MPNN decoder generated a smoother waveform than the PNN decoder (Fig. 3a). There were cleavages in the peaks (yellow line) of the PNN decoder output (Fig. 3b), which resembled high-frequency noise. The CC of the MPNN decoder (0.8657) was larger than that of the PNN decoder (0.7061), and the MSE of the MPNN decoder (0.2563) was smaller than that of the PNN decoder (0.5866). Both metrics indicated that adding p̂(n−1) to the input vector improves PNN decoding performance.

Fig. 3

Examples of pressure decoding results (rat S9-03)

In addition, we compared these two decoders with two other commonly used neural decoding algorithms, WF and KF, all with a bin size of 100 ms. The output of WF was similar to that in other movement-decoding BMI systems: it fitted the real pressure value accurately but contained much high-frequency noise, with CC=0.7778 and MSE=0.2487 (Fig. 3d). KF performed more smoothly than WF and estimated the pressing events (the peaks in Fig. 3c) better, with CC=0.7973 and MSE=0.4047, but the KF output between the peaks was more disordered than that of WF. This is because the pressing events in our experiment occurred seemingly at random and the pressure value changed abruptly, unlike in some other BMI systems using KF, which usually decode kinematic parameters that change gradually. These linear methods did not perform as well as the MPNN decoder in terms of CC and MSE. However, both had a larger CC and smaller MSE than the PNN decoder, since the cleavages in the pressure peaks of the PNN output weighed heavily in the computation of CC and MSE. The results for the other two rats, S9-04 and S9-12, are shown in Table 1.

Table 1

CC and MSE of neural decoders*

In the data processing of S9-03, 200 s of data were used to train the decoders; training took 0.003 s for WF, 0.0075 s for KF, 0.04 s for PNN, and 0.04 s for MPNN. In the testing process, for each 100-ms bin, WF took less than 5×10−7 s, KF less than 2.5×10−4 s, PNN less than 1.2×10−3 s, and MPNN less than 7×10−3 s. Although MPNN took the longest among these algorithms, its training and testing times were acceptable and easily met real-time requirements. Algorithms were tested in Matlab 7.8 (Mathworks Inc., USA) on a computer with an Intel Core 2 Duo CPU at 2.33 GHz.

To find out the effect of the discretization level on the MPNN decoder, we tested its output over a range of discretization levels, from 10 to 150, for all three rats. Although the CC values oscillated between discretization levels, there was no fixed relation between the discretization level and the decoding performance (Fig. 4a). The outputs of the MPNN decoder at four discretization levels are shown in Fig. 4b. At level 3, there were only three possible output values, and the result resembled a square wave (green line, Fig. 4b). When the discretization level increased to 10, the results gradually became smoother, and at level 100 the MPNN output looked like a continuous line.

Fig. 4

Effect of discretization level on MPNN decoder

It can be concluded that a small number of discretization levels suffices for merely detecting the lever-pressing event, which decreases computational cost, while a larger number of levels is demanded when more precise and smoother decoding output is required. The effect of bin size on MPNN was also tested. For rat S9-03, when the bin size ranged from 50 to 150 ms (50, 75, 100, 125, and 150 ms), the CC values were 0.6228, 0.7617, 0.8657, 0.7512, and 0.8114, respectively, generally improving with bin size and peaking at 100 ms.

4. Discussion

In the present study, we introduced the PNN and MPNN decoders to decode neural ensemble activity around pressing events. The results show that the MPNN decoder performed better than the traditional WF and KF decoders, while adding little computational complexity.

The MPNN decoder, whose input vector includes the previously estimated pressure value p̂(n−1), produced smoother output and higher CC values than the PNN decoder. Although the PNN decoder is also nonlinear under the Bayesian criterion, it provides no estimate of uncertainty and has difficulty capturing complex temporal movement; we therefore found high-frequency noise in the PNN decoder output (Fig. 3b). Moreover, the MPNN decoder was better than KF, whose output between pressing events was disordered. KF is one realization of the Bayesian filter under linear and Gaussian assumptions, and its system model evolves the state to produce a smoother result (Wu et al., 2003). However, the pressing events in our experiment appeared to occur at random and the pressure value changed abruptly, which cannot be described by any linear system model. Under the least-squares rule, the training of the evolution matrix was therefore compromised by the abruptly changing pressure values, resulting in disorder between pressing events.

Although the various decoding models showed different levels of performance, the overall results were not greatly affected, perhaps because we evaluated the performance of the regression estimate by CC rather than by classification accuracy. The results also indicate that the bin size affected MPNN performance. Neural ensemble firing is likely a non-stationary random process, and a larger bin size usually diminishes the randomness of neural firing, so enlarging the bin improved performance. On the other hand, note that the duration of a pressing event was only about 1 s: when the bin size was oversized, the waveform of the decoder output was distorted and the CC decreased.

In this study, we solved a regression problem with a classification method by discretizing the pressure signal into levels. First proposed by Specht (1990), the PNN method has been successfully applied to machine learning, artificial intelligence, automatic control, and many other fields. Compared with the multi-layer feedforward network, PNN rests on a simpler mathematical principle and is easy to implement. The method proposed in this paper could serve as a general neural decoding approach in various BMI systems: for regression tasks, decoding continuous movement states or trajectories with the MPNN decoder; for pattern recognition tasks, judging finger or limb movements with the PNN decoder. Although the general regression neural network has already been put forward for regression tasks, the PNN-based decoder was used in the current study both to detect pressing events and to find precise pressure values, to meet different requirements.

5. Conclusion

We presented novel neural decoding methods based on the probabilistic neural network: the PNN and MPNN decoders. Comparing them with two commonly used decoding methods, WF and KF, we found that MPNN performed best, with a higher CC and lower MSE. By changing the discretization level, the MPNN decoder can handle different problems in a BMI system, such as movement state detection and estimation of continuous kinematic parameters. In summary, the MPNN decoder is a viable and effective method for neural decoding in BMI systems.


We thank Prof. Bao-ming LI (Institute of Neurobiology, Fudan University, China) and Dr. Mei-fang MA and Mr. Yi LI (members of the project team, Zhejiang University) for their thought-provoking instructions on instruments and animal experimentation.


*Project supported by the National Natural Science Foundation of China (Nos. 30800287 and 60703038) and the Natural Science Foundation of Zhejiang Province, China (No. Y2090707)


1. Brockwell AE, Rojas AL, Kass RE. Recursive bayesian decoding of motor cortical signals by particle filtering. J Neurophysiol. 2004;91(4):1899–1907. doi: 10.1152/jn.00438.2003. [PubMed] [Cross Ref]
2. Brown EN, Frank LM, Tang D, Quirk MC, Wilson MA. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. J Neurosci. 1998;18(18):7411–7425. [PubMed]
3. Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol. 2003;1(2):E42. doi: 10.1371/journal.pbio.0000042. [PMC free article] [PubMed] [Cross Ref]
4. Eden UT, Frank LM, Barbieri R, Solo V, Brown EN. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Comput. 2004;16(5):971–998. doi: 10.1162/089976604773135069. [PubMed] [Cross Ref]
5. Feng ZY, Chen WD, Ye XS. A remote control training system for rat navigation in complicated environment. J Zhejiang Univ-Sci A. 2007;8(2):323–330. doi: 10.1631/jzus.2007.A0323. [Cross Ref]
6. Gage GJ, Ludwig KA, Otto KJ, Ionides EL, Kipke DR. Naive coadaptive cortical control. J Neural Eng. 2005;2(2):52–63. doi: 10.1088/1741-2560/2/2/006. [PubMed] [Cross Ref]
7. Georgopoulos AP, Kettner RE, Schwartz AB. Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population. J Neurosci. 1988;8(8):2928–2937. [PubMed]
8. Georgopoulos AP, Lurito JT, Petrides M. Mental rotation of the neuronal population vector. Science. 1989;243(4888):234–236. doi: 10.1126/science.2911737. [PubMed] [Cross Ref]
9. Hatsopoulos N, Joshi J, O’Leary JG. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. J Neurophysiol. 2004;92(2):1165–1174. doi: 10.1152/jn.01245.2003. [PubMed] [Cross Ref]
10. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442(7099):164–171. doi: 10.1038/nature04970. [PubMed] [Cross Ref]
11. Lebedev MA, Nicolelis MA. Brain-machine interfaces: past, present and future. Trends Neurosci. 2006;29(9):536–546. doi: 10.1016/j.tins.2006.07.004. [PubMed] [Cross Ref]
12. Lebedev MA, Carmena JM, O’Doherty JE. Cortical ensemble adaptation to represent velocity of an artificial actuator controlled by a brain-machine interface. J Neurosci. 2005;25(19):4681–4693. doi: 10.1523/JNEUROSCI.4088-04.2005. [PubMed] [Cross Ref]
13. Ministry of Health of the People’s Republic of China. Ministry of Health of the People’s Republic of China; 2000. Guide for Animal Experiment Technology; pp. 9–20.
14. Nicolelis MA. Brain-machine interfaces to restore motor function and probe neural circuits. Nature Rev Neurosci. 2003;4(5):417–422. doi: 10.1038/nrn1105. [PubMed] [Cross Ref]
15. Paxinos G, Watson CR, Emson PC. AChE-stained horizontal sections of the rat brain in stereotaxic coordinates. J Neurosci Methods. 1980;3(2):129–149. doi: 10.1016/0165-0270(80)90021-7. [PubMed] [Cross Ref]
16. Schmidt EM, McIntosh JS, Durelli L. Fine control of operantly conditioned firing patterns of cortical neurons. Exp Neurol. 1978;61(2):349–369. doi: 10.1016/0014-4886(78)90252-2. [PubMed] [Cross Ref]
17. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP. Instant neural control of a movement signal. Nature. 2002;416(6877):141–142. doi: 10.1038/416141a. [PubMed] [Cross Ref]
18. Shoham S, Paninski LM, Fellows MR, Hatsopoulos NG, Donoghue JP. Statistical encoding model for a primary motor cortical brain-machine interface. IEEE Trans Biomed Eng. 2005;52(7):1312–1322. doi: 10.1109/TBME.2005.847542. [PubMed] [Cross Ref]
19. Smith AC, Brown EN. Estimating a state-space model from point process observations. Neural Comput. 2003;15(5):965–991. doi: 10.1162/089976603765202622. [PubMed] [Cross Ref]
20. Specht DF. Probabilistic neural networks. Neural Netw. 1990;3(1):109–118. doi: 10.1016/0893-6080(90)90049-Q. [Cross Ref]
21. Taylor DM, Tillery SI, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science. 2002;296(5574):1829–1832. doi: 10.1126/science.1070291. [PubMed] [Cross Ref]
22. Velliste M, Perel S, Spalding MC. Cortical control of a prosthetic arm for self-feeding. Nature. 2008;453(7198):1098–1101. doi: 10.1038/nature06996. [PubMed] [Cross Ref]
23. Wang Y, Paiva ARC, Príncipe JC, Sanchez JC. Sequential monte carlo point-process estimation of kinematics from neural spiking activity for brain-machine interfaces. Neural Comput. 2009;21(10):2894–2930. doi: 10.1162/neco.2009.01-08-699. [PubMed] [Cross Ref]
24. Wessberg J, Stambaugh CR, Kralik JD, Beck PD, Laubach M, Chapin JK, Kim J, Biggs SJ, Srinivasan MA, Nicolelis MAL. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature. 2000;408(6810):361–365. doi: 10.1038/35042582. [PubMed] [Cross Ref]
25. Wu W, Black MJ, Gao Y. Inferring Hand Motion from Multi-Cell Recordings in Motor Cortex Using a Kalman Filter; Workshop on Motor Control in Humans and Robots: On the Interplay of Real Brains and Artificial Devices; Edinburgh, Scotland (UK): 2002. pp. 66–73.
26. Wu W, Black MJ, Gao Y, et al. Neural Decoding of Cursor Motion Using a Kalman Filter. In: Advances in Neural Information Processing Systems 15. Cambridge, MA: MIT Press; 2003. pp. 1–8.
27. Wu W, Black MJ, Mumford D, Gao Y, Bienenstock E. Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Trans Biomed Eng. 2004;51(6):933–942. doi: 10.1109/TBME.2004.826666. [PubMed] [Cross Ref]
28. Wu W, Gao Y, Bienenstock E, Donoghue JP, Black MJ. Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Comput. 2006;18(1):80–118. doi: 10.1162/089976606774841585. [PubMed] [Cross Ref]
29. Ye XS, Wang P, Liu J. A portable telemetry system for brain stimulation and neuronal activity recording in freely behaving small animals. J Neurosci Methods. 2008;174(2):186–193. doi: 10.1016/j.jneumeth.2008.07.002. [PubMed] [Cross Ref]
30. Zhang K, Ginzburg I, McNaughton BL. Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells. J Neurophysiol. 1998;79(2):1017–1044. [PubMed]
