Article sections

- Abstract
- 1. Cortically-controlled brain-machine interfaces
- 2. Kalman-filter-decoder algorithm
- 3. Mapping onto spiking neural networks
- 4. Spiking neural network decoder
- 5. Off-line open-loop implementation
- 6. Online closed-loop performance
- 7. Conclusions and future work
- References

J Neural Eng. Author manuscript; available in PMC 2014 June 1.

Published online 2013 April 10. doi: 10.1088/1741-2560/10/3/036008

PMCID: PMC3674827

NIHMSID: NIHMS468258

Julie Dethier,^{1,}^{*}^{‡} Paul Nuyujukian,^{1,}^{2,}^{*} Stephen I. Ryu,^{3,}^{4} Krishna V. Shenoy,^{1,}^{2,}^{5} and Kwabena Boahen^{1}



Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof-of-concept demonstrations have shown encouraging results, but barriers to clinical translation remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex.

One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs).

Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques.

These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms across monkeys and tasks, suggesting that an SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.

Neural motor prostheses aim to help disabled individuals use computer cursors or robotic limbs by extracting neural signals from the brain and decoding them into useful control signals (figure 1). Several proof-of-concept demonstrations have shown encouraging results [1–10]. Recently, a cortically-controlled motor prosthesis has been reported that achieves quick, accurate, and robust computer cursor movements by decoding neuronal activity from 96-channel microelectrode arrays in monkeys [11–13]. This prosthesis, like other designs (e.g., [14]), uses a Kalman-filter-based decoder, a filter ubiquitous in statistical signal processing. The Kalman filter and its variants have demonstrated the highest levels of brain-machine interface (BMI) performance in both humans [14] and monkeys [13]. Although successes with non-linear decoder types, such as the unscented Kalman filter [15] and the population vector algorithm [16], have been reported, this paper focuses on the linear Kalman filter because of its ease of implementation in steady state and our previous experience with the ReFIT-KF training algorithm [13].

System diagram of a BMI. Electronic neural signals are measured from motor regions of the brain, converted to spike trains by detecting threshold-crossings, and translated (decoded) into control signals for a computer or prosthetic arm.

These decoding systems can be implemented in one of three general reference designs (figure 2). Each design makes a tradeoff among available space for electronics, power budget (both acceptable dissipation and battery capacity), risk of infection, and cosmetic appearance.

The first design implants only the recording electrodes under the skull, placing the electronics outside the skin (see figure 2 (A)). Examples of this approach are the Neurochip [17, 18] and the Hermes systems [19–23]. This design does not face stringent power constraints: the electronics are not in proximity to the brain and make little contact with the skin, so there is little risk of heating tissue. Systems in this class draw 20–200mW and run on battery power for hours to weeks.

The second design differs from the first in that the electronics are fully implanted beneath the skin, though not in direct contact with the brain [24] (see figure 2 (B)). Here again there is little risk of heating the brain, but the electronics may heat the cranium. Since the system is implanted, space constraints limit the footprint of the electronics and the battery capacity; inductive power is a viable alternative power source. Closing the skin and eliminating a chronic wound margin decreases the risk of infection.

The third design is fully implanted under the skull [25, 26] (see figure 2 (C)). It further decreases infection risk relative to under-skin implantation because the dura is completely closed after the device is implanted, minimizing cerebrospinal fluid leak and maintaining the blood-brain barrier. This design has the tightest space constraints while being the most cosmetically favorable of the three. Power would be delivered inductively. Because the electronics are so close to brain tissue, heat dissipation is of greatest concern.

The need to minimize power dissipation while providing a high level of computational ability makes the neuromorphic approach a potentially enabling technology for the second and third reference designs.

Cross-section of three reference designs for intracortical neural prostheses. (A) Externally head-mounted. The electrode array is the only intracortical device; the electronics are externally mounted, fixed against the cranium.

The lack of low-power electronic circuitry to run decoding algorithms is an obstacle to the successful clinical translation of implantable cortical motor prostheses. A design constraint set by the American Association of Medical Instrumentation for implanted medical devices specifies that a device should not increase the temperature of tissue chronically by more than 1°C. This translates to a power dissipation limit of 40mW/cm^{2} [27]. For the 4 × 4mm^{2} electrode array commonly used in neural prostheses applications in a 6 × 6mm^{2} hermetically sealed package, this places a power dissipation limit of about 10mW for an implanted design [28]. A modern x86 processor consumes approximately 1.8mW [29] just to perform 2D Kalman filtering on a 96-channel array. Thus this approach will not meet the demands of recording from higher electrode densities and controlling more degrees of freedom, which require substantially more computer-intensive decode/control algorithms.
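As a sanity check, the chronic-heating limit applied over the quoted package area can be worked out directly. This is a minimal sketch of the arithmetic only; the ~10mW figure quoted in the text presumably leaves margin below the computed ceiling:

```python
# Back-of-envelope check of the implant power budget quoted above: the
# 40 mW/cm^2 chronic-heating limit applied over the face of the
# 6 x 6 mm^2 hermetically sealed package.
limit_mw_per_cm2 = 40.0
package_area_cm2 = 0.6 * 0.6      # 6 mm x 6 mm = 0.36 cm^2
budget_mw = limit_mw_per_cm2 * package_area_cm2   # 14.4 mW ceiling
```

The computed 14.4mW ceiling is consistent with the ~10mW budget cited from [28] once safety margin is included.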

In the first reference design (figure 2 (A)), all signal processing occurs outside the head, enabling the 10mW power budget to be allocated entirely to signal preprocessing (amplifying, filtering, and digitizing), data preconditioning (syncing, scrambling, and coding), and wireless transmission. For 96 signals sampled at 31.25kHz and digitized at 10 bits (30Mb/s), a 120nm-CMOS FPGA preconditioner consumes 14mW (scaled from [21]) and a 65nm-CMOS UWB transmitter consumes 0.35mW (8.5 pJ/b) [30]. Assuming a custom preconditioner implementation consumes two orders of magnitude less power, the preconditioning and transmission power for 96 channels can be reduced to 0.49mW. Extrapolating this number yields 5mW for the 1000 channels thought to be needed for fast and robust control of a six degree-of-freedom robotic arm [31]. This figure can be reduced from 50% to 0.005% of the power budget if decoding is performed locally before wireless transmission: the data rate drops from 30Mb/s to 3Kb/s—six signals sampled at 50Hz and digitized at 10 bits—with a proportionate reduction in power. This is a viable approach only if the neural signals can be decoded using far less power than it takes to transmit them in raw form^{§}.
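The data-rate reduction claimed above follows directly from the stated sampling parameters (a quick arithmetic check, not code from the paper):

```python
# Data-rate arithmetic from the paragraph above: transmitting raw
# digitized samples vs transmitting only the decoded kinematic signals.
raw_bps = 96 * 31_250 * 10        # 96 channels x 31.25 kHz x 10 bits = 30 Mb/s
decoded_bps = 6 * 50 * 10         # 6 signals  x 50 Hz     x 10 bits = 3 kb/s
reduction = raw_bps // decoded_bps  # 10,000-fold reduction
```

The 10,000-fold rate reduction matches the drop from 5mW (50% of budget) to 0.5µW (0.005% of budget) for transmission.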

The required power constraints could be met with an innovative ultra-low power technique: the *neuromorphic* approach. This approach follows the brain’s organizing principles and uses large-scale integrated systems containing microelectronic analog circuits to *morph* neural systems into silicon chips [34, 35]. It combines the best features of analog and digital circuit design—efficiency and programmability—offering potentially greater robustness than either of them [34, 35]. With as little as 50nW per silicon neuron when spiking at 100Hz [36], these neuromorphic circuits may yield tremendous power savings over conventional digital solutions because they use an analog approach based on physical operations to perform mathematical computations.

Before designing and fabricating such a dedicated neuromorphic decoding chip, we explored the feasibility of translating decoding algorithms into a spiking neural network (SNN) in software. Recent studies have highlighted the utility of neural networks for decoding in both off-line [37] and online [38] settings. We previously achieved encouraging off-line (open-loop) SNN performance, comparable to that of the traditional floating point implementation [29]. We also realized simulation algorithm enhancements that enable real-time execution of a 2000-neuron SNN on x86 hardware and reported preliminary closed-loop results obtained with a single monkey performing a single task [39].

In this study, we extended our preliminary closed-loop tests of the SNN Kalman-filter-based decoder from one to two monkeys and evaluated its performance on multiple tasks. We first describe the linear filter used in our system, the Kalman filter presented in [11–13], and how it functions as a decoder. Next, we outline the Neural Engineering Framework (NEF), the method we use to map this algorithm onto an SNN, together with the optimizations that enabled the software-based SNN to operate in a closed-loop experimental setting. An analysis of the performance achieved in the closed-loop tests demonstrates the validity of this approach and concludes this paper.

In the 1960s, Rudolf E. Kálmán described a new type of linear filter, which would later be called the *Kalman filter* [40]. This filter tracks the state of a dynamical system through time using a weighted average of two types of information: a value predicted from the state’s dynamics and a value measured from external observations, both subject to noise and other inaccuracies. More weight is given to the value with the lower uncertainty, as determined by the *Kalman gain*, **K**, which is computed as follows.

The system is modeled by the following set of equations:

$${\mathit{x}}_{t}={\mathbf{A}\mathit{x}}_{t-1}+{\mathit{w}}_{t}$$

(1)

$${\mathit{y}}_{t}={\mathbf{C}\mathit{x}}_{t}+{\mathit{q}}_{t}$$

(2)

where **A** is the state matrix modeling the output state’s dynamics, **C** is the observation matrix, and *w** _{t}* and *q** _{t}* are the process and measurement noise terms, modeled as zero-mean Gaussians with covariance matrices **W** and **Q**, respectively. Combining the prediction and measurement-update steps at steady state yields a single linear update:

$${\mathit{x}}_{t}=(\mathbf{I}-\mathbf{KC}){\mathbf{A}\mathit{x}}_{t-1}+{\mathbf{K}\mathit{y}}_{t}={\mathbf{M}}_{x}{\mathit{x}}_{t-1}+{\mathbf{M}}_{y}{\mathit{y}}_{t}$$

(3)

where **K** = (**I** + **WC**^{T}**Q**^{−1}**C**)^{−1}**WC**^{T}**Q**^{−1} is the steady-state formulation of the Kalman gain. **M*** _{x}* and **M*** _{y}* are the resulting steady-state update matrices, applied to the previous state estimate and to the current measurement, respectively.
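For concreteness, the steady-state matrices can be obtained by iterating the standard discrete Riccati recursion until the gain converges. The sketch below uses illustrative stand-in matrices, not the fitted model parameters from training data:

```python
import numpy as np

# Sketch of computing the steady-state matrices M_x and M_y of (3).
# A, C, W, and Q are toy stand-ins for the fitted model parameters.
A = np.array([[1.0, 0.05],
              [0.0, 0.9]])                 # toy state dynamics
C = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 2.0]])                 # toy observation matrix
W = 0.1 * np.eye(2)                        # process-noise covariance
Q = 0.5 * np.eye(3)                        # measurement-noise covariance

# Iterate the discrete Riccati recursion until the a-priori error
# covariance P (and hence the Kalman gain K) stops changing.
P = W.copy()
for _ in range(1000):
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)
    P = A @ (P - K @ C @ P) @ A.T + W
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)   # gain at converged P

# Steady-state update of (3): x_t = M_x x_{t-1} + M_y y_t
M_x = (np.eye(2) - K @ C) @ A              # applied to previous state
M_y = K                                    # applied to current measurement
```

The converged gain satisfies the push-through identity **K** = (**I** + **PC**^{T}**Q**^{−1}**C**)^{−1}**PC**^{T}**Q**^{−1}, matching the closed-form expression above with **P** the converged covariance.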

For neural prosthetic applications [13], the system’s state vector, *x** _{t}*, was the kinematic state of the cursor:
${\mathit{x}}_{t}=[{\mathit{pos}}_{t}^{\text{x}},{\mathit{pos}}_{t}^{\text{y}},{\mathit{pos}}_{t}^{\text{z}},{\mathit{vel}}_{t}^{\text{x}},{\mathit{vel}}_{t}^{\text{y}},{\mathit{vel}}_{t}^{\text{z}}]$, and the measurement vector, *y** _{t}*, was the vector of neural spike rates (threshold-crossing counts per time bin) observed on the recorded channels.

The model parameters (**A**, **C**, **W** and **Q**) were fit by correlating recorded neural signals and measured arm kinematics, obtained during training trials. Arm measurements were captured by a Polaris Optical Measurement System (Northern Digital Inc), with a passive reflective bead tied to the tip of the monkey’s finger. The measurements were sampled at 60Hz and were plotted directly to the screen. For our application, we assumed that neural signals recorded under arm control were similar to neural signals recorded under brain control, and that the observed arm kinematics matched the desired neural cursor kinematics. Therefore, the parameters could be fit from observed arm kinematics (figure 3).

Neural and kinematic measurements (monkey J, 2011-04-16, 16 continuous trials) used to fit the Kalman-filter model. (A) The 192 channel recordings fed as input to fit the Kalman-filter matrices (grayscale indicates the number of threshold crossings).

For neuroprosthetic applications, the Kalman filter converges to its steady-state matrices, **M*** _{x}* and **M*** _{y}*, after a brief initial transient, so the steady-state formulation in (3) was used throughout.

To map the steady-state version of this linear neural decoder onto an SNN, we used the NEF: a formal methodology for mapping control-theory algorithms onto SNNs.

The SNNs employed in the NEF are composed of highly heterogeneous spiking neurons, each characterized by a nonlinear multi-dimensional vector-to-spike-rate function, *a _{i}*(**x**(*t*)) for neuron *i*. The framework rests on three principles (representation, transformation, and dynamics), described in turn below.

Neural representation is defined by nonlinear encoding of a stimulus, **x**(*t*), into spike trains, combined with linear decoding of those spike trains back into an estimate, $\widehat{\mathit{x}}(t)$, of the stimulus.

The nonlinear encoding process is exemplified by the *neuron tuning curve* (figure 4, *Representation*), which captures the overall encoding from a multi-dimensional stimulus, **x**(*t*), to a single neuron’s firing rate, *a _{i}*(**x**(*t*)):

$${a}_{i}(\mathit{x}(t))=G({J}_{i}(\mathit{x}(t))),$$

(4)

where *G*() is the nonlinear function describing the firing rate’s dependence on the current’s value. In the case of the leaky integrate-and-fire neuron model (LIF) that we used for this application, this function *G*() is given by:

$$G({J}_{i}(\mathit{x}(t)))={\left[{\tau}^{\text{ref}}-{\tau}^{\text{RC}}ln\phantom{\rule{0.16667em}{0ex}}(1-{J}_{\text{th}}/{J}_{i}(\mathit{x}(t)))\right]}^{-1}$$

(5)

where *J _{i}* is the current entering the soma of the cell, *τ*^{ref} is the neuron’s refractory period, *τ*^{RC} is its membrane time constant, and *J*_{th} is the threshold current below which the neuron does not fire.
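Equation (5) is straightforward to implement. Below is a minimal sketch with illustrative time constants and threshold; the network’s actual parameter values are those in table 1:

```python
import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02, J_th=1.0):
    """Steady-state LIF firing rate, eq. (5); zero below threshold.

    The time constants and threshold here are illustrative defaults,
    not the values used in the paper's network (see its table 1).
    """
    J = np.atleast_1d(np.asarray(J, dtype=float))
    rate = np.zeros_like(J)
    above = J > J_th                      # log argument must stay positive
    rate[above] = 1.0 / (tau_ref - tau_rc * np.log(1.0 - J_th / J[above]))
    return rate
```

For very large input currents the rate saturates at 1/*τ*^{ref} (500Hz with these defaults), as the log term vanishes.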

NEF’s three principles. Top row represents the control-theory level and lower rows the neural level. **Representation.** Encoded signal, spike raster, and decoded signal for a population of 200 leaky integrate-and-fire neurons.

The conversion from a multi-dimensional stimulus, **x**(*t*), to the current entering neuron *i*’s soma, *J _{i}*(**x**(*t*)), is performed by the linear encoding:

$${J}_{i}(\mathit{x}(t))={\alpha}_{i}\langle {\stackrel{\sim}{\phi}}_{i}^{\mathit{x}}\xb7\mathit{x}(t)\rangle +{J}_{i}^{\text{bias}},$$

(6)

where *α _{i}* is a gain or conversion factor, ${\stackrel{\sim}{\phi}}_{i}^{\mathit{x}}$ is the neuron’s *preferred direction vector* in the stimulus space, and ${J}_{i}^{\text{bias}}$ is a bias current that accounts for background activity. In the one-dimensional case, the preferred direction vector reduces to a scalar, either 1 or −1, resulting in a positive or negative slope, respectively (i.e., ON neurons that increase their firing rate as the value of the stimulus variable increases and OFF neurons that do the opposite).
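A minimal numerical illustration of the one-dimensional case of (6), with hypothetical gain and bias values:

```python
import numpy as np

# One-dimensional encoding of eq. (6): ON and OFF neurons differ only
# in the sign of the preferred direction. Gain and bias are illustrative.
alpha, J_bias = 1.5, 1.2
x = np.linspace(-1.0, 1.0, 101)       # stimulus values

J_on = alpha * (+1.0) * x + J_bias    # ON neuron: current rises with x
J_off = alpha * (-1.0) * x + J_bias   # OFF neuron: current falls with x
```

Passing these currents through the rate function of (5) would produce the mirrored ON/OFF tuning curves shown in figure 4.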

The linear decoding process converts spike trains back into a relevant quantity in the stimulus space. This process is characterized by the synapses’ spike response, *h*(*t*) (i.e., post-synaptic current waveform), and the decoding weights,
${\phi}_{i}^{\mathit{x}}$, which are found by a least-squares method [42], described next.

A single noise term, *η*, amalgamates all sources of noise, as they all have the effect of introducing uncertainty into any signal sent by the transmitting neuron. With this noise term, the transmitted firing rate can be written as *a _{i}*(**x**(*t*)) + *η _{i}*. The decoding weights are found by minimizing the mean square error between the stimulus and its decoded estimate:

$$E=\frac{1}{2}{\langle {[\mathit{x}(t)-\widehat{\mathit{x}}(t)]}^{2}\rangle}_{\mathit{x},\eta ,t}$$

(7)

$$=\frac{1}{2}{\langle {\left[\mathit{x}(t)-\sum _{i}({a}_{i}(\mathit{x}(t))+{\eta}_{i})\phantom{\rule{0.16667em}{0ex}}{\phi}_{i}^{\mathit{x}}\right]}^{2}\rangle}_{\mathit{x},\eta ,t}$$

(8)

where ⟨·⟩_{*x*,*η*,*t*} denotes averaging over the range of **x**, the noise *η*, and time *t*. Assuming the noise on each neuron is independent and of equal variance, this expression simplifies to:

$$E=\frac{1}{2}{\langle {\left[\mathit{x}(t)-\sum _{i}{a}_{i}(\mathit{x}(t)){\phi}_{i}^{\mathit{x}}\right]}^{2}\rangle}_{\mathit{x},t}+{\sigma}^{2}\sum _{i}{({\phi}_{i}^{\mathit{x}})}^{2},$$

(9)

where *σ*^{2} is the noise variance. This expression is minimized by choosing the decoding weights such that:

$${\phi}_{i}^{\mathit{x}}=\sum _{j}^{N}{\mathrm{\Gamma}}_{ij}^{-1}{\mathrm{\Upsilon}}_{j},$$

(10)

with ${\mathrm{\Gamma}}_{ij}={\langle {a}_{i}(\mathit{x}){a}_{j}(\mathit{x})\rangle}_{\mathit{x}}+{\sigma}^{2}{\delta}_{ij}$ and ${\mathrm{\Upsilon}}_{j}={\langle {a}_{j}(\mathit{x})\mathit{x}\rangle}_{\mathit{x}}$, where *δ _{ij}* is the Kronecker delta.
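The sketch below computes these decoding weights for a one-dimensional stimulus. A toy rectified-linear population stands in for the LIF tuning curves, and all parameter distributions are illustrative:

```python
import numpy as np

# Least-squares decoders of eq. (10) for a 1-D stimulus; the noise
# variance sigma^2 regularizes Gamma exactly as in the text.
rng = np.random.default_rng(1)
N, sigma2 = 100, 0.1                      # neurons, noise variance
x = np.linspace(-1.0, 1.0, 201)           # sample the stimulus range

gain = rng.uniform(0.5, 2.0, N)           # illustrative distributions
intercept = rng.uniform(-0.9, 0.9, N)
sign = rng.choice([-1.0, 1.0], N)         # ON/OFF preferred directions
rates = np.maximum(0.0, gain[:, None] * (sign[:, None] * x - intercept[:, None]))

Gamma = rates @ rates.T / x.size + sigma2 * np.eye(N)  # <a_i a_j> + sigma^2 I
Upsilon = rates @ x / x.size                           # <a_j x>
phi = np.linalg.solve(Gamma, Upsilon)                  # eq. (10)

x_hat = phi @ rates                       # linear decode of the stimulus
```

With a hundred heterogeneous neurons the linear decode tracks the stimulus closely; the *σ*^{2} term keeps the weights small so that spiking noise is not amplified.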

Neural transformation is a special case of neural representation performed by using alternate decoding weights in the decoding operation. The transformation, *f*(**x**(*t*)), rather than the stimulus itself, is decoded from the spike trains: substituting *f*(**x**) for **x** in Υ* _{j}* of (10) yields the transformation decoding weights ${\phi}_{i}^{f(\mathit{x})}$.

Neural dynamics brings the first two principles together and adds the time dimension to the circuit. This principle aims at reuniting the control-theory and neural levels by modifying the matrices to render the system *neurally plausible*, thereby permitting the synapses’ spike response, *h*(*t*), to capture the system’s dynamics.

For example, in control theory, an integrator is written $\stackrel{.}{\mathit{x}}(t)=\mathbf{A}\mathit{x}(t)+\mathbf{B}\mathit{u}(t)$ with **A** = 0 and **B** = **I**. At the neural level, integration is replaced by convolution with the synapses’ spike response, *h*(*t*), and the dynamics are captured by the neurally plausible matrices **A′** = *τ***A** + **I** and **B′** = *τ***B**, where *τ* is the synaptic time constant.

To implement the Kalman filter in an SNN by applying the NEF, we used the three principles described in the previous section (figure 5). To render the system neurally plausible as explained in section 3.3, we started from a continuous time (CT) system in the control-theory space, and we therefore converted (3) from discrete time to CT:

$$\stackrel{.}{\mathit{x}}(t)={\mathbf{M}}_{x}^{\text{CT}}\mathit{x}(t)+{\mathbf{M}}_{y}^{\text{CT}}\mathit{y}(t)$$

(11)

where
${\mathbf{M}}_{x}^{\text{CT}}=({\mathbf{M}}_{x}-\mathbf{I})/\mathrm{\Delta}t$ and
${\mathbf{M}}_{y}^{\text{CT}}={\mathbf{M}}_{y}/\mathrm{\Delta}t$ are the CT Kalman matrices and Δ*t* is the discrete time step (50ms).

SNN implementation of a Kalman-filter-based decoder with populations *b*_{k}(*t*) and *a*_{j} (*t*) representing *y*(*t*) and *x*(*t*). Feedforward and recurrent weights, *ω*_{jk} and *ω*_{ji}, were determined by **B′** and **A′**, respectively.

From (11), by applying the dynamics’ principle, we replaced integration with convolution by the synapse’s spike response and the CT matrices with neurally plausible ones, which yielded:

$$\mathit{x}(t)=h(t)\ast ({\mathbf{A}}^{\prime}\mathit{x}(t)+{\mathbf{B}}^{\prime}\mathit{y}(t)),$$

(12)

where ${\mathbf{A}}^{\prime}=\tau {\mathbf{M}}_{x}^{\text{CT}}+\mathbf{I}=\tau ({\mathbf{M}}_{x}-\mathbf{I})/\mathrm{\Delta}t+\mathbf{I}$ and ${\mathbf{B}}^{\prime}=\tau {\mathbf{M}}_{y}^{\text{CT}}=\tau {\mathbf{M}}_{y}/\mathrm{\Delta}t$.
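The conversion chain from the discrete-time matrices to the neurally plausible ones can be sketched as follows, with placeholder matrices and an illustrative synaptic time constant *τ* (the paper’s actual value is in its table 1):

```python
import numpy as np

# Mapping the steady-state update matrices to neurally plausible ones,
# following (11)-(12): M_x, M_y -> M_x^CT, M_y^CT -> A', B'.
# M_x, M_y, and tau are illustrative placeholders; dt = 50 ms is the
# discrete time step stated in the text.
dt, tau = 0.05, 0.1
M_x = np.array([[1.0, 0.04],
                [0.0, 0.85]])            # placeholder steady-state matrix
M_y = 0.02 * np.ones((2, 3))             # placeholder measurement matrix

M_x_ct = (M_x - np.eye(2)) / dt          # continuous-time dynamics
M_y_ct = M_y / dt

A_prime = tau * M_x_ct + np.eye(2)       # recurrent (neurally plausible)
B_prime = tau * M_y_ct                   # feedforward (neurally plausible)
```

Note that when *τ* = Δ*t* the recurrent matrix reduces to **A′** = **M*** _{x}*, so the synaptic filtering simply replays the discrete update.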

The *j*^{th} neuron’s input current (see (6)) was computed from the system’s current state, **x**(*t*), which was itself estimated by linearly decoding the populations’ spiking activity:

$${J}_{j}(\mathit{x}(t))={\alpha}_{j}\langle {\stackrel{\sim}{\phi}}_{j}^{\mathit{x}}\xb7\mathit{x}(t)\rangle +{J}_{j}^{\text{bias}}$$

(13)

$$={\alpha}_{j}\langle {\stackrel{\sim}{\phi}}_{j}^{\mathit{x}}\xb7h(t)\ast ({\mathbf{A}}^{\prime}\widehat{\mathit{x}}(t)+{\mathbf{B}}^{\prime}\widehat{\mathit{y}}(t))\rangle +{J}_{j}^{\text{bias}}$$

(14)

$$={\alpha}_{j}\langle {\stackrel{\sim}{\phi}}_{j}^{\mathit{x}}\xb7h(t)\ast \left({\mathbf{A}}^{\prime}\sum _{i}{a}_{i}(t){\phi}_{i}^{\mathit{x}}+{\mathbf{B}}^{\prime}\sum _{k}{b}_{k}(t){\phi}_{k}^{\mathit{y}}\right)\rangle +{J}_{j}^{\text{bias}}$$

(15)

This last equation can be written in a neural network form (figure 5):

$${J}_{j}(\mathit{x}(t))=h(t)\ast \left(\sum _{i}{\omega}_{ji}{a}_{i}(t)+\sum _{k}{\omega}_{jk}{b}_{k}(t)\right)+{J}_{j}^{\text{bias}}$$

(16)

where ${\omega}_{ji}={\alpha}_{j}\langle {\stackrel{\sim}{\phi}}_{j}^{\mathit{x}}\xb7{\mathbf{A}}^{\prime}{\phi}_{i}^{\mathit{x}}\rangle $ and ${\omega}_{jk}={\alpha}_{j}\langle {\stackrel{\sim}{\phi}}_{j}^{\mathit{x}}\xb7{\mathbf{B}}^{\prime}{\phi}_{k}^{\mathit{y}}\rangle $ are the recurrent and feedforward weights, respectively.

A software SNN implementation involves two distinct steps [39]: network creation and real-time execution. The network does not need to be created in real time and therefore has no computational time constraints. However, executing the network has to be implemented efficiently for successful deployment in closed-loop experimental settings. To speed up the simulation, we exploited the NEF mapping between the high-dimensional neural space and the low-dimensional control space. This mapping updated neuron interactions circuitously, using the decoding weights
${\phi}_{j}^{\mathit{x}}$, dynamics matrix **A′**, and preferred direction vectors
${\stackrel{\sim}{\phi}}_{j}^{\mathit{x}}$ from (15), rather than directly using the recurrent and feedforward weights in (16). The circuitous approach yielded an almost 50-fold speedup [39], enabling real-time execution of a 2000-neuron network.
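The equivalence underlying this speedup can be illustrated directly: decoding to the low-dimensional state, applying the dynamics, and re-encoding produces the same input currents as multiplying by the full recurrent weight matrix, at O(*Nd*) rather than O(*N*²) cost per step. Encoders, decoders, and sizes below are illustrative, and gains and bias currents are omitted for clarity:

```python
import numpy as np

# Two equivalent ways to compute the recurrent drive of eq. (16).
rng = np.random.default_rng(2)
N, d = 2000, 2
E = rng.choice([-1.0, 1.0], (N, d))      # preferred directions (encoders)
D = rng.standard_normal((d, N)) / N      # decoding weights (illustrative)
A_prime = rng.standard_normal((d, d))    # neurally plausible dynamics
a = rng.random(N)                        # instantaneous firing rates

direct = (E @ A_prime @ D) @ a           # full N x N weight-matrix route
circuitous = E @ (A_prime @ (D @ a))     # low-dimensional route, O(Nd)
```

Because matrix multiplication is associative, both routes yield the same currents; only the operation count differs, which is the source of the ~50-fold speedup reported in [39].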

Other improvements to the basic SNN consisted of using two one-dimensional integrators instead of a single three-dimensional one, feeding the constant 1 into the two integrators continuously rather than obtaining it internally through integration, and connecting the 192 neural measurements directly to the recurrent pool of neurons, without using the *b _{k}*(*t*) populations shown in figure 5.

Various parameters needed to be set for this network. They are recapitulated in table 1. Neural spike rates were computed in 50ms time-bins and, therefore, the sum
${\tau}_{j}^{\text{RC}}+{\tau}_{j}^{\text{ref}}+{\tau}_{j}^{\text{PSC}}$ had to be smaller than 50ms, which was indeed the case. It was important for heterogeneity to be included [42, 44]. Therefore, neural parameters were randomly selected from a wide distribution. Specifically, the preferred direction vectors,
${\stackrel{\sim}{\phi}}_{j}^{\mathit{x}}$, were randomly assigned to −1 and 1. The maximum firing rate, max *G*(*J _{j}*(**x**(*t*))), and the other neuron parameters were likewise drawn randomly from the wide distributions recapitulated in table 1.

Noise was not explicitly added. It arose naturally from the fluctuations produced by representing a scalar through the filtering of spike trains, which has been shown to have effects similar to Gaussian noise [42]. For the purpose of computing the linear decoding weights (i.e., Γ), we modeled the resulting noise as Gaussian with a variance of 0.1 [42].

We first performed an off-line open-loop validation of our SNN decoder [29] by using a previously recorded BMI experiment that utilized a standard Kalman filter (SKF) with floating point computations. An adult male rhesus macaque (Monkey L) was trained to perform a point-to-point arm movement task in a 3D experimental apparatus for a juice reward [13]. All animal procedures and experiments were approved by the Stanford Institutional Animal Care and Use Committee. A 96-electrode silicon array (Blackrock Microsystems) was implanted in the dorsal pre-motor (PMd) and motor (M1) cortex areas responsible for hand movement, as guided by visual cortical landmarks. Array recordings (−4.5× RMS threshold crossing applied to each channel) yielded tuned activity for the direction of arm movements. For both monkeys (Monkeys L and J) the electrode array used in these experiments spanned approximately 4–6 mm of anterior-posterior distance on the pre-central gyrus associated with primary motor cortex (M1). Electrical stimulation and/or manual arm palpation further localized the area to the upper shoulder region/muscles (monkey J) and forearm (monkey L). It should be kept in mind that the border between M1 and PMd is not sharp neurophysiologically, and it is possible that the anterior aspect of either array could be within PMd. In addition, for monkey J, we also recorded simultaneously from a second array, identical to the first but implanted 1–2 mm anterior to it, and thus nominally in PMd (see [46], supplementary figure 5, for an intraoperative photo of the arrays). For the purposes of the current experiments the distinction between these two areas is not of primary importance since both areas have robust movement-related activity and modulation.

As detailed in [13], a Kalman filter was fit by correlating the observed hand kinematics with the simultaneously measured neural signals (figure 3). The resulting model was used online to control an on-screen cursor in real time. This model and 500 of these trials (L20100308) served as the standard against which the SNN implementation’s performance was compared. Starting with the steady-state Kalman-filter matrices derived from this experiment, we built an SNN using the NEF methodology and simulated it in *Nengo*, a freely available software package, using the parameter values listed in table 1.

To highlight solely the computational differences between traditional floating point calculations and an SNN, the steady-state Kalman filter was run in both the SKF and SNN experimental blocks. This approach avoided any discrepancies in convergence of the Kalman filter that may have arisen from mathematical approximations by the SNN, leaving the deviation from floating point across time steps as the only source of computational variability. It also greatly simplified the computational demands of the SNN implementation, enabling more efficient computations per neuron in the network. This approach required precomputing the steady-state Kalman-filter matrices, **M*** _{x}* and **M*** _{y}*, before each experimental session.

The SNN performed better as we increased the number of neurons (figure 6). For 20 000 neurons, the x- and y-velocity decoded from its two 10 000-neuron populations matched the standard decoder’s prediction to within 3% (RMS error normalized by maximum velocity) [29]. There was a tradeoff between accuracy and network size: decreasing the network to 2000 neurons increased the error to 6%, and a further decrease to 200 neurons led to an error of 21% [29]. Even with a 6% RMS error at 2000 neurons, we believed the network might provide sufficient accuracy for closed-loop experiments.

These off-line results encouraged us to test our SNN decoder in an online closed-loop setting. Despite the error, we suspected that the monkeys would actively compensate for any noticeable cursor deviation. Two adult male rhesus macaques (Monkeys L and J) were trained to perform a point-to-point arm movement task in a 3D experimental apparatus for a juice reward using the ReFIT-KF training protocol, as detailed in the methods section of [13] (figure 7 (A)). Unlike Monkey L, who had only one array, Monkey J had two 96-electrode silicon arrays (Blackrock Microsystems) implanted, one in PMd and the other in M1. The Kalman filter was built as described in prior work [13]. The resulting models were used in a closed-loop system to control an on-screen cursor in real time (figure 7 (A), Decoder block) and once again served as the baseline against which the SNN’s performance was compared.

Experimental setup and tasks. (A) Data are recorded from silicon electrode arrays implanted in motor regions of cortex of monkeys performing a center-out-and-back or pinball task for juice rewards to one of eight targets with a 500ms hold time. (B) Center-out-and-back task. (C) Pinball task.

A 2000-neuron SNN decoder was built using the simulation algorithm enhancements mentioned earlier and simulated on an xPC Target (Mathworks) x86 system (Dell T3400, Intel Core 2 Duo E8600, 3.33GHz). It ran in closed-loop, replacing the SKF as the decoder block in figure 7 (A). Real-time execution constraints with our hardware limited the network size to no more than 2000 neurons.

Once the Kalman filter was trained and the SNN was built, we tested the two Kalman filter implementations (SKF and SNN) against each other. Each test was composed of 200 trials of target acquisition on a center-out-and-back task. The target alternated between the center of the workspace and one of eight peripheral locations chosen at random (see figure 7 (B)). A successful trial was one in which the monkey navigated the cursor to the target and held it within the 4 cm square acquisition region for 500 ms during the allotted 3 seconds. Once a block of 200 trials was completed with one implementation, the decoder was switched to the other implementation and another block was collected. This ABA block switching continued until the monkey was satiated, enabling an accurate comparison of just the relative difference between implementations. The ABA-block experimentation was repeated for at least three days with each monkey.

Success rates were higher than 94% on all blocks for the SNN implementation (94.9% and 99.6% for Monkeys L and J, respectively) and higher than 98% for the SKF (98.0% and 99.7% for Monkeys L and J, respectively). Thousands of trials were collected and analyzed (5235 with Monkey L and 5187 with Monkey J). These reflect only center-out trials, not those returning to the center from the periphery; the latter were excluded because the monkey anticipated the return to the center after navigating out to the periphery and thus initiated movement earlier than when the target location was unknown. About half of the trials, 2615 (2484), were performed with the SNN for Monkey L (Monkey J) and 2518 (2599) with the SKF. The average time to acquire the target was moderately slower for the SNN for both monkeys: 1067ms vs. 830ms for Monkey L and 871ms vs. 708ms for Monkey J. Around 100 trials under hand control were used as a baseline comparison for each monkey.

Although the speed of both implementations was comparable, as evidenced by the traces lying nearly on top of each other up until the first acquire time, the SNN had more difficulty settling into the target (figure 9). Whereas the time at which the target was first successfully acquired (average indicated by the trace becoming thicker) was comparable for both BMI implementations, the time at which the target was last successfully acquired (average indicated by the trace’s cutoff) occurred later for the SNN. That is, the monkey spent more time wandering in and out of the acquisition region before finally staying inside it for the required hold time. This longer *dial-in time* (indicated by the length of the thick trace) suggests that the SNN provides less precise control when attempting to stop.

SNN (red) performance compared to SKF (blue); hand-control trials are shown for reference (yellow). The SNN achieves similar results to the SKF implementation. Plot of distance to target vs. time after target onset for different control modalities.

Towards the end of the day’s experiments, the SNN decoder would occasionally drive the cursor to the edge of the workspace and the monkey would lose interest in the task. The cursor was then reset to the center of the screen so the monkey would continue the block. This happened infrequently and counted against the monkey’s success rate on center-out trials. The off-line results shown earlier (see figure 6) suggest that this difference in usability, as well as the difference in performance, is a result of the network’s neuron count, which was limited by the real-time execution capacity of the x86 hardware. These performance issues could be mitigated by using more neurons, as would be the case in a neuromorphic chip. Nevertheless, even with only 2000 neurons, the success rate and acquire times of the SNN decoder were comparable to those of the SKF decoder.

Another important measure when evaluating decoders is generalization and stability. To test these, we introduced a new pinball task under the SNN decoder. In this task, targets appeared randomly in a 16 cm square workspace, without any systematic structure to target placement (see figure 7 (C)). The monkey received a reward by navigating to the target and holding the cursor within the 4 cm acquisition region for 500 ms. Successfully performing this task highlights the SNN decoder’s generalization. There was no block structure in this set of experiments: the monkeys were started on the task and allowed to run continuously, without interruption, until satiated, which additionally probes the decoder’s stability over an extended period of time.

Both monkeys sustained performance at around 40 targets per minute for over an hour on the pinball task (figure 10). On the day tested, monkey L lost interest in the task after approximately 61 minutes and monkey J after approximately 85 minutes. The sustained performance of the SNN decoder across both monkeys on this more general task demonstrates that the system remains robust over long, uninterrupted stretches of use. This performance is comparable to the hit rate and session duration achieved under hand control [38].

The SNN decoder's performance was comparable to that of an SKF. The 2000-neuron network achieved success rates higher than 94% on all blocks, but took moderately longer to hold the cursor still over targets. Performance was sustained for at least an hour on a generalized acquisition task. As the Kalman filter and its variants are the state of the art in cortically-controlled motor prostheses [11–14], these simulations increase confidence that similar levels of performance can be attained with a neuromorphic chip, which can potentially overcome the power constraints set by clinical applications. A neuromorphic chip could implement a 10 000-neuron network while dissipating a fraction of a milliwatt, likely increasing performance relative to the simulated SNN shown here.
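For readers unfamiliar with the SKF baseline: a steady-state Kalman filter replaces the per-step covariance and gain updates of the standard filter with a single precomputed gain, so each decode tick is just two matrix-vector products. A generic sketch of that decode loop, assuming the gain K has already been computed off-line (the matrices here are placeholders, not the fitted models from the experiments):

```python
import numpy as np

def skf_decode(A, C, K, spikes, x0):
    """Steady-state Kalman filter decode loop (sketch).

    A      : state-transition model over the kinematic state
    C      : observation model mapping kinematic state to firing rates
    K      : precomputed steady-state Kalman gain (constant, so no
             per-step covariance update is needed)
    spikes : (T, n_units) array of binned spike counts
    x0     : initial kinematic state, e.g. [px, py, vx, vy]
    """
    x = np.asarray(x0, dtype=float)
    # Fold the constant gain into the dynamics: x <- (I - KC)A x + K y.
    M1 = (np.eye(len(x)) - K @ C) @ A
    states = []
    for y in spikes:
        x = M1 @ x + K @ y
        states.append(x.copy())
    return np.array(states)
```

Precomputing M1 and K is what makes the filter cheap enough for real-time use, and it is this fixed linear update that the NEF maps onto a spiking network.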

This demonstration is an important proof of concept that highlights the feasibility of mapping existing control-theory algorithms onto SNNs for BMI applications. For BMIs to gain widespread clinical deployment, they must be packaged in low-power, completely implantable, wireless systems. The computational demands of BMI decoding are high and thus pose a problem for low-power applications. Implementing the SNN decoder presented here on neuromorphic chips may be a possible answer, performing these complex and demanding computations at a fraction of the power draw of conventional processors. Translating the SNN from software to ultra-low-power neuromorphic chips is the next step in the development of a fully-implantable neuromorphic chip for cortical prosthetic applications. Currently, we are exploring this mapping with Neurogrid, a hardware platform with sixteen programmable neuromorphic chips that can simulate up to a million spiking neurons in real time [35].
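The core idea behind mapping a control algorithm onto an SNN with the NEF is to represent each filter variable in the activity of a neural population and recover it with linear decoders fit by least squares. A highly simplified sketch of that decoding step, using rectified-linear tuning curves in place of the NEF's spiking LIF neurons (all gains, biases, and the neuron count here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                # neurons in the population
x = np.linspace(-1, 1, 101)            # values the population represents

# Random encoders with rectified-linear tuning curves, a rate-based
# simplification of the LIF responses used in the NEF.
e = rng.choice([-1.0, 1.0], n)         # preferred directions
gain = rng.uniform(50, 100, n)
bias = rng.uniform(-50, 50, n)
rates = np.maximum(0.0, gain * e[None, :] * x[:, None] + bias[None, :])

# NEF decoding step: least-squares linear decoders d with rates @ d ~= x.
d, *_ = np.linalg.lstsq(rates, x, rcond=None)
xhat = rates @ d
```

The same least-squares machinery yields decoders for functions of x, which is how the filter's matrix updates are embedded in the network's recurrent connections.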

The authors thank Joline M Fan and Jonathan C Kao for assistance in collecting monkey data; Mackenzie Mazariegos, John Aguayo, Clare Sherman, and Erica Morgan for veterinary care; Hua Gao for help with figure 2; and Kimberly Chin, Sandra Eisensee, Evelyn Castaneda, and Beverly Davis for administrative support. We also thank Chris Eliasmith and Terry Stewart for valuable help with Nengo.

This work was supported in part by the BAEF and F.R.S.-FNRS (J. Dethier), the Stanford NIH Medical Scientist Training Program (MSTP) and a Soros Fellowship (P. Nuyujukian), the DARPA Revolutionizing Prosthetics program (N66001-06-C-8005, K. V. Shenoy), two NIH Director's Pioneer Awards (DP1-OD006409, K. V. Shenoy; DP1-OD000965, K. Boahen), and an NIH/NINDS Transformative Research Award (R01NS076460, K. Boahen).

1. Taylor DM, Helms Tillery SI, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science. 2002;296(5574):1829–32. [PubMed]

2. Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MAL. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology. 2003;1(2):193–208. [PMC free article] [PubMed]

3. Musallam S, Corneil BD, Greger B, Scherberger H, Andersen RA. Cognitive control signals for neural prosthetics. Science. 2004;305(5681):258–62. [PubMed]

4. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442(7099):164–71. [PubMed]

5. Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV. A high-performance brain-computer interface. Nature. 2006;442(7099):195–8. [PubMed]

6. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB. Cortical control of a prosthetic arm for self-feeding. Nature. 2008;453(7198):1098–101. [PubMed]

7. Ganguly K, Dimitrov DF, Wallis JD, Carmena JM. Reversible large-scale modification of cortical networks during neuroprosthetic control. Nat Neurosci. 2011;14(5):662–7. [PMC free article] [PubMed]

8. O’Doherty JE, Lebedev MA, Ifft PJ, Zhuang KZ, Shokur S, Bleuler H, Nicolelis MAL. Active tactile exploration using a brain-machine-brain interface. Nature. 2011;479(7372):228–31. [PMC free article] [PubMed]

9. Ethier C, Oby ER, Bauman MJ, Miller LE. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature. 2012;485(7398):368–71. [PMC free article] [PubMed]

10. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, Haddadin S, Liu J, Cash SS, van der Smagt P, Donoghue JP. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 2012;485(7398):372–5. [PMC free article] [PubMed]

11. Nuyujukian P, Gilja V, Chestek CA, Cunningham JP, Fan JM, Yu BM, Ryu SI, Shenoy KV. Program No. 20.7. Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience; 2010. Generalization and robustness of a continuous cortically-controlled prosthesis enabled by feedback control design. Online.

12. Gilja V, Chestek CA, Diester I, Henderson JM, Deisseroth K, Shenoy KV. Challenges and opportunities for next-generation intra-cortically based neural prostheses. IEEE Trans Biomed Eng. 2011;58(7):1891–9. [PMC free article] [PubMed]

13. Gilja V, Nuyujukian P, Chestek CA, Cunningham JP, Yu BM, Fan JM, Kao JC, Ryu SI, Shenoy KV. A high-performance neural prosthesis enabled by control algorithm design. Nat Neurosci. 2012;15(12):1752–7. [PMC free article] [PubMed]

14. Kim S, Simeral JD, Hochberg LR, Donoghue JP, Black MJ. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. J Neural Eng. 2008;5(4):455–76. [PMC free article] [PubMed]

15. Li Z, O’Doherty JE, Hanson TL, Lebedev MA, Henriquez CS, Nicolelis MAL. Unscented Kalman filter for brain-machine interfaces. PLoS One. 2009;4(7):e6243. [PMC free article] [PubMed]

16. Chase SM, Kass RE, Schwartz AB. Behavioral and neural correlates of visuomotor adaptation observed through a brain-computer interface in primary motor cortex. J Neurophysiol. 2012;108(2):624–44. [PubMed]

17. Jackson A, Mavoori J, Fetz EE. Long-term motor cortex plasticity induced by an electronic neural implant. Nature. 2006;444(7115):56–60. [PubMed]

18. Zanos S, Richardson AG, Shupe L, Miles FP, Fetz EE. The neurochip-2: An autonomous head-fixed computer for recording and stimulating in freely behaving monkeys. IEEE Trans Neural Syst Rehabil Eng. 2011;19(4):427–35. [PMC free article] [PubMed]

19. Santhanam G, Linderman MD, Gilja V, Afshar A, Ryu SI, Meng TH, Shenoy KV. HermesB: A continuous neural recording system for freely behaving primates. IEEE Trans Biomed Eng. 2007;54(11):2037–50. [PubMed]

20. Chestek CA, Gilja V, Nuyujukian P, Kier R, Solzbacher F, Ryu SI, Harrison RR, Shenoy KV. HermesC: Low-power wireless neural recording system for freely moving primates. IEEE Trans Neural Syst Rehabil Eng. 2009;17(4):330–8. [PubMed]

21. Miranda H, Gilja V, Chestek CA, Shenoy KV, Meng TH. HermesD: A high-rate long-range wireless transmission system for simultaneous multichannel neural recording applications. IEEE Trans Biomed Circuits and Syst. 2010;4(3):181–91. [PubMed]

22. Gao H, Walker RM, Nuyujukian P, Makinwa KAA, Shenoy KV, Murmann B, Meng TH. HermesE: A 96-channel full data rate direct neural interface in 0.13 µm CMOS. IEEE J Solid-St Circ. 2012;47(4):1043–55.

23. Gilja V, Chestek CA, Nuyujukian P, Foster JD, Shenoy KV. Autonomous head-mounted electrophysiology systems for freely-behaving primates. Curr Opin Neurobiol. 2010;20(5):676–86. [PMC free article] [PubMed]

24. Borton DA, Aceros MYJ, Nurmikko A. An implantable wireless neural interface for recording cortical circuit dynamics in moving primates. J Neural Eng. 2013;10(2):026010. [PMC free article] [PubMed]

25. Harrison RR, Watkins PT, Kier RJ, Lovejoy RO, Black DJ, Normann R, Solzbacher F. Low-power integrated circuit for a wireless 100-electrode neural recording system. IEEE J Solid-St Circ. 2007;42(1):123–33.

26. Harrison RR. The design of integrated circuits to observe brain activity. Proc IEEE. 2008;96(7):1203–16.

27. Wolf PD. Thermal considerations for the design of an implanted cortical brain-machine interface (BMI). In: Reichert WM, editor. Indwelling Neural Implants: Strategies for Contending with the In Vivo Environment. chapter 3. CRC Press; 2008. [PubMed]

28. Kim S, Tathireddy P, Normann RA, Solzbacher F. Thermal impact of an active 3-d microelectrode array implanted in the brain. IEEE Trans Neural Syst Rehabil Eng. 2007;15(4):493–501. [PubMed]

29. Dethier J, Gilja V, Nuyujukian P, Elassaad SA, Shenoy KV, Boahen KA. Neural Engineering (NER), 2011 5th International IEEE/EMBS Conference on. Apr, 2011. Spiking neural network decoder for brain-machine interfaces; pp. 396–9. [PMC free article] [PubMed]

30. Miranda H, Meng TH. Custom Integrated Circuits Conference (CICC), 2010 IEEE. Sep, 2010. A programmable pulse uwb transmitter with 34% energy efficiency for multichannel neuro-recording systems; pp. 1–4.

31. Laubach M, Wessberg J, Nicolelis MAL. Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task. Nature. 2000;405(6786):567–71. [PubMed]

32. Zumsteg ZS, Kemere C, O’Driscoll S, Santhanam G, Ahmed RE, Shenoy KV, Meng TH. Power feasibility of implantable digital spike sorting circuits for neural prosthetic systems. IEEE Trans Neural Syst Rehabil Eng. 2005;13(3):272–9. [PubMed]

33. Harrison RR, Charles C. A low-power low-noise CMOS amplifier for neural recording applications. IEEE J Solid-St Circ. 2003;38(6):958–65.

34. Boahen KA. Neuromorphic microchips. Sci Am. 2005;292(5):56–63. [PubMed]

35. Silver R, Boahen KA, Grillner S, Kopell N, Olsen KL. Neurotech for neuroscience: unifying concepts, organizing principles, and emerging tools. J Neurosci. 2007;27(44):11807–19. [PMC free article] [PubMed]

36. Arthur JV, Boahen KA. Silicon-neuron design: A dynamical systems approach. IEEE Trans Circuits Syst I, Reg Papers. 2011;58(5):1034–43. [PMC free article] [PubMed]

37. Fang H, Wang Y, He J. Spiking neural networks for cortical neuronal spike train decoding. Neural Comput. 2010;22(4):1060–85. [PubMed]

38. Sussillo D, Nuyujukian P, Fan JM, Kao JC, Stavisky SD, Ryu SI, Shenoy KV. A recurrent neural network for closed-loop intracortical brain–machine interface decoders. J Neural Eng. 2012;9(2):026027. [PMC free article] [PubMed]

39. Dethier J, Nuyujukian P, Eliasmith C, Stewart TC, Elasaad SA, Shenoy KV, Boahen K. A brain-machine interface operating with a real-time spiking neural network control algorithm. Adv Neural Inf Process Syst (Granada) 2011 Dec;24:2213–21. [PMC free article] [PubMed]

40. Kalman RE. A new approach to linear filtering and prediction problems. Trans ASME, J Basic Eng. 1960;82:35–45.

41. Malik WQ, Truccolo W, Brown EN, Hochberg LR. Efficient decoding with steady-state Kalman filter in neural interface systems. IEEE Trans Neural Syst Rehabil Eng. 2011;19(1):25–34. [PMC free article] [PubMed]

42. Eliasmith C, Anderson CH. Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge: MIT Press; 2003.

43. Eliasmith C. A unified approach to building and controlling spiking attractor networks. Neural Comput. 2005;17(6):1276–314. [PubMed]

44. Singh R, Eliasmith C. Higher-dimensional neurons explain the tuning and dynamics of working memory cells. J Neurosci. 2006;26(14):3667–78. [PubMed]

45. Eliasmith C. How to build a brain: from function to implementation. Synthese. 2007;159(3):373–388.

46. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487(7405):51–6. [PMC free article] [PubMed]
