BMIs have evolved from one degree-of-freedom systems 3 to many-degree-of-freedom robotic arms 4 and muscle stimulators 5 that enact complex limb movements, such as reaching 6–8 and grasping 9. However, somatosensory feedback, which is essential for dexterous control 10–12, remains underdeveloped in BMIs. With the exception of a few studies combining BMIs with tactile stimuli applied to the body 13, existing systems rely almost exclusively on visual feedback. Prosthetic sensation has been studied in the context of sensory substitution 14 and targeted reinnervation 15; however, these approaches have limited application range and channel capacity. To provide a proof-of-concept method for sensorizing neuroprostheses, we implemented a BMBI that extracts movement commands from the motor areas of the brain while delivering ICMS feedback in somatosensory areas 1,2,16 to evoke discriminable percepts 17–20. This idea received support from our pilot study 16, in which a monkey responded to ICMS cues with the movements of a BMI-controlled cursor. However, in this previous demonstration the ICMS cue did not provide feedback of object–actuator interactions.
The BMBI developed here enabled active tactile exploration 21 during BMI control (). Two monkeys (M and N) received multielectrode implants in M1 and S1 (). They explored virtual objects with either a computer cursor or a virtual image of an arm (Supplementary Fig. 1a,b). In hand control (HC), the monkeys moved a joystick with their left hands to position the actuator. They searched through a set of virtual objects, selected one with a particular artificial texture (AT) conveyed by ICMS, and held the actuator over that object to obtain reward (; Supplementary Fig. 1c,d). During brain control (BC), the joystick was disconnected and the actuator was controlled by the activity of right-hemisphere M1 neurons 9,22,23. The behavioural tasks varied in the number of objects on the screen, the ATs employed, and the actuator type (), and were more difficult than previously reported BMI tasks because of the presence of multiple objects in the workspace, a prolonged object-selection period, and the necessity of interpreting ICMS feedback.
The brain-machine-brain interface
ICMS was delivered through two pairs of microwires to the hand representation area of S1 in monkey M () and through one pair of microwires to the leg representation in monkey N. Each AT consisted of a high-frequency pulse train presented in packets at a lower secondary frequency (; Supplementary Fig. 2a). The rewarded AT (RAT) consisted of 200 Hz pulse trains delivered in 10 Hz packets. The comparison ATs were represented by 400 Hz pulse trains delivered in 5 Hz packets (unrewarded artificial texture, UAT) or by an absence of ICMS (null artificial texture, NAT).
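The packet structure of these ATs can be sketched in a few lines of Python. This is an illustrative sketch only: the 50% packet duty cycle and the helper name `at_pulse_times` are our assumptions, since the text specifies only the pulse-train and packet frequencies.

```python
def at_pulse_times(train_hz, packet_hz, duration_s, duty=0.5):
    """Pulse onset times (s) for a pulse train delivered in packets.

    The 50% packet duty cycle is an illustrative assumption; only the
    pulse-train and packet frequencies come from the described ATs.
    """
    n_packets = int(round(duration_s * packet_hz))
    pulses_per_packet = int(round(train_hz * duty / packet_hz))
    times = []
    for i in range(n_packets):
        start = i / packet_hz          # packet onset
        for j in range(pulses_per_packet):
            times.append(start + j / train_hz)  # pulses within the packet
    return times

rat = at_pulse_times(200, 10, 1.0)  # RAT: 200 Hz trains in 10 Hz packets
uat = at_pulse_times(400, 5, 1.0)   # UAT: 400 Hz trains in 5 Hz packets
```

Under this assumed duty cycle, the two textures differ in both temporal pattern and total pulse count per second, which is what makes them discriminable candidates.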
The major challenge solved here was the real-time coupling of ICMS feedback with the BMI decoder. As ICMS artefacts masked neuronal activity for 5–10 ms after each pulse (), we multiplexed neuronal recordings and ICMS with a 20 Hz clock rate (Supplementary Fig. 2a). The interleaved intervals proved adequate for online motor control and artificial sensation, a result that was not clear a priori, since S1 stimulation could have affected M1 processing through the connections between these areas.
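The recording budget imposed by this multiplexing scheme can be sketched numerically. The 25 ms stimulation share of each cycle below is an assumed, illustrative split; the text gives only the 20 Hz clock (50 ms cycles) and the 5–10 ms artefact duration.

```python
CYCLE_MS = 50      # 20 Hz multiplexing clock (from the text)
STIM_MS = 25       # stimulation share of each cycle: an assumed, illustrative split
ARTEFACT_MS = 10   # worst-case artefact after the last pulse (5-10 ms in the text)

def usable_recording_ms(cycle_ms=CYCLE_MS, stim_ms=STIM_MS, artefact_ms=ARTEFACT_MS):
    """Artefact-free recording time per multiplexing cycle, assuming the
    artefact of the final ICMS pulse extends into the recording interval."""
    return max(0, cycle_ms - stim_ms - artefact_ms)
```

Under these assumptions, only part of each 50 ms cycle yields clean neuronal data, which is why it was not obvious in advance that the interleaved intervals would suffice for online decoding.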
BMBI performance improved with training. In Task I (), monkey M surpassed chance performance after nine sessions and monkey N after four (P < 0.001, one-sided binomial test). Improvement continued with the more difficult tasks (Tasks II–V) (; Supplementary Fig. 3a). In particular, the time spent exploring unrewarded ATs decreased (; Supplementary Fig. 3b). Additionally, performance improved within daily experimental sessions (). Psychometric analysis of RAT stimulation amplitudes indicated that at least 8 nC per ICMS waveform phase (100 μs-wide current pulses of 80 μA) was needed for AT discrimination (P < 0.001, one-sided binomial test). Performance was at chance level for catch trials (in Task II), in which ICMS was not delivered (P = 0.90, one-sided binomial test).
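The 8 nC threshold follows directly from Q = I × t: a 100 μs phase at 80 μA carries 80 μA × 100 μs = 8,000 pC = 8 nC. A minimal check (the helper name is ours, for illustration):

```python
def charge_per_phase_nc(current_ua, phase_us):
    """Charge per ICMS waveform phase, Q = I * t, in nanocoulombs.

    1 uA * 1 us = 1 pC, so dividing by 1,000 converts to nC.
    """
    return current_ua * phase_us / 1000.0

threshold_nc = charge_per_phase_nc(80, 100)  # 80 uA pulses, 100 us phases
```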
The statistics of object exploration intervals (the total time spent over a particular object on a given trial) indicated that the monkeys uniquely discriminated each AT type (; ) and interpreted ICMS within hundreds of milliseconds, a timescale comparable to the discrimination of peripheral tactile stimuli 24,25. Early in Task I, exploration intervals were equal for RAT and NAT (P > 0.5, Wilcoxon signed-rank test) and, with training, became longer for RAT and shorter for NAT (Tasks I–II) and UAT (Tasks III–V). During HC, the mean interval was longest for RAT (monkey M: 1,396 ± 21 ms; monkey N: 1,165 ± 15 ms; mean ± s.e.m.), shortest for NAT (304 ± 8 ms; 300 ± 10 ms) and intermediate for UAT (452 ± 13 ms; 402 ± 14 ms) (P < 0.01, ANOVA). During BC, the intervals spent exploring NAT (498 ± 15 ms; 587 ± 25 ms) and UAT (685 ± 20 ms; 764 ± 32 ms) were longer than during HC, but still shorter than those for RAT (1,420 ± 28 ms; 1,398 ± 55 ms) (P < 0.01, ANOVA).
Statistics of object exploration
Additional hallmarks of active exploration were seen in the conditional probabilities of selecting different ATs (). During HC trials, the monkeys stayed over the first encountered AT (arrows that loop back to the same AT in ) with high probability if it was RAT (P = 0.70 for monkey M and 0.76 for monkey N), but with low probability if it was UAT (0.05 and 0.01) or NAT (0.0 and 0.0) (, left). After examining the second AT, the monkeys could identify the correct AT either by apprehending it directly or through a process of elimination. This follows from the increase in the probability of moving to RAT from NAT or UAT from chance to approximately 0.7, and the decrease in the probability of revisiting UAT or NAT to approximately 0.2 (, right). Similar effects were observed for BC (, red text).
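Conditional selection probabilities of this kind can be tabulated from per-trial sequences of visited ATs. The sketch below, with made-up toy trials (not the monkeys' actual data), shows one way to estimate P(next AT | current AT); a repeated label models staying over the same object, mirroring the loop-back arrows described above.

```python
from collections import Counter

def transition_probs(visit_sequences):
    """Estimate P(next AT | current AT) from per-trial AT visit sequences."""
    pair_counts, from_counts = Counter(), Counter()
    for seq in visit_sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive visits within a trial
            pair_counts[(a, b)] += 1
            from_counts[a] += 1
    return {pair: n / from_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy trials, invented for illustration only:
trials = [["NAT", "RAT", "RAT"], ["UAT", "RAT", "RAT"], ["NAT", "UAT", "RAT"]]
probs = transition_probs(trials)
```

Because the probabilities are normalized per origin AT, the outgoing probabilities from each texture sum to one, so staying and leaving can be compared directly.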
BC started in Task IV. During BC with hand movements (BCWH), the monkeys continued to hold the joystick although it was disconnected 16,22. During brain control without hand movements (BCWOH) 9,22, the joystick was removed. In monkey M, with more than 200 recorded neurons, performance was less accurate during BCWH (73.75 ± 3.00%, mean ± s.e.m.) than during HC (91.48 ± 1.20%). In monkey N, with 50 recorded neurons, performance dropped further (50.37 ± 3.74% versus 91.45 ± 1.91%), but still significantly exceeded the 33% chance level. M1 neurons exhibited directionally tuned modulations (Supplementary Figs 5 and 6) that were retained across different interfering ICMS patterns both during HC (Supplementary Fig. 4a,b) and BC ().
M1 modulations during active control versus passive observation
In BCWOH, the task requirements were eased: the object-selection period was reduced to 300–500 ms and the monkeys were allowed to overstay at an incorrect object. Performance for monkey M, measured as the number of rewards per minute, steadily improved from 1.021 ± 0.007 to 2.962 ± 0.005 (mean ± s.e.m.) (). Similar improvements were observed for HC and BCWH (inset in ). The average frequency of actuator displacements, calculated from power spectra, was correlated with the improvement in performance during BCWOH (R2 = 0.16 for the X coordinate and R2 = 0.26 for the Y coordinate, P < 0.001, F-test), indicating that the monkey modulated its brain activity to scan the targets faster. This behaviour was not random, as the exploration interval for NAT (3,620 ± 350 ms, mean ± s.e.m.) was significantly shorter (P < 0.02, Wilcoxon rank-sum test) than that for UAT (4,270 ± 310 ms). The exploration of RAT (2,255 ± 94 ms) was the shortest, owing to the reduced selection period. For monkey N, BCWOH performance (2.084 ± 0.085 rewards per minute) did not change within sessions, and the differences in exploration intervals were not significant.
In agreement with others 26–30, we observed that M1 neurons represented the movements of the actuator even when it was passively observed by the monkey (Supplementary Fig. 7). Actuator movements (Task V) replayed for the monkeys could be reconstructed from M1 activity, using a separately trained decoder (), with accuracy similar to that of reconstructions made for HC (). This M1 representation of the passively viewed actuator is consistent with our suggestion that a neuroprosthetic limb might become incorporated into brain circuitry 1.
Our BMBI demonstrated direct bidirectional communication between a primate brain and an external actuator. As both the afferent and efferent channels bypassed the subject’s body, we propose that BMBIs can effectively liberate a brain from the physical constraints of the body. Accordingly, future BMBIs may not be limited to limb prostheses, but may include devices designed for reciprocal communication between and among neural structures and with a variety of external actuators.