MRI compatible robots are emerging as useful tools for image guided interventions. Shared control between a user and an MRI compatible robot makes the instrument more intuitive to use, especially during the setup phases of interventions. We present an MRI compatible, hands-on cooperative system using the Innomotion robotic arm. An economical MRI compatible user input sensor was developed and its functionality was tested under typical application conditions. Performance improvement in phantom tasks shows the promise of adding a hands-on interface to MRI compatible robots.
Magnetic Resonance Imaging (MRI) can provide real-time visualization of anatomic structures of the beating heart and of major blood vessels with circulating blood, making it an ideal imaging modality for beating heart interventions. MRI is also an ideal modality for guiding and monitoring interventions on soft tissues, due to its excellent visualization of soft tissue, its sub-structure and the surrounding tissues. Robots can be helpful tools for MRI guided interventions. Contemporary research on medical MRI compatible robots has focused on percutaneous biopsy, drug injection and radiotherapy seed implantation. Improving precision and accuracy, while maintaining compatibility and safety with MRI, are the prime concerns for these robotic systems.
Planning a minimally invasive intervention of soft tissue is not a trivial task. The organs move around and thus the preplanned motion based on pre-operative images alone is not sufficient. Most of the contemporary systems make use of intra-operative images and a graphical user interface to update the planned motions. In this work, we introduce a hands-on cooperative interface in addition to existing GUIs for MRI compatible robots. We believe multiple user interfaces would improve efficacy of a surgical assistance robot. In other words, a surgeon should be able to have hands-on control of the robot and also take advantage of its precise positioning function under image guidance.
A key requirement for hands-on control of a robot is a user input device such as a master teleoperator, a joystick or a force sensor. Master teleoperation is undoubtedly highly ergonomic, but it is not always cost effective, and its complexity is further increased by MRI compatibility requirements. Recently, MRI compatible robots have been developed as haptic devices for neuroscience and diagnostics. These are hydraulically or pneumatically driven, with non-compatible parts placed outside the MRI room. They are fully compatible with MR imaging; however, their cost and complexity increase with the degrees of freedom. Other alternatives for user input, such as optical joysticks and force sensors, have been developed since the 1980s, and recently their MRI compatible counterparts have become available.
Referring to a 6-DoF user input sensor in the context of industrial robots, Hirzinger wrote, “It is difficult to understand why it took years until robot manufacturers realized that intuitive 6-DoF robot control with such a device is highly efficient in programming phase”. We believe an analogous argument applies to phases such as the initial setup of a robot for image guided interventions. Non-MRI compatible robots, such as the Johns Hopkins Steady-Hand surgical robot and the systems developed by Davies' group, allow one to mount similar devices at the robot end-effector and guide the robot in the most natural way, by reacting to user forces. However, deploying this technique in MRI robot systems has been prohibitive due to the cost and complexity of comparable MRI compatible sensors. This work introduces a proof-of-concept, economical user input sensor with a wide input range (approximately 0-100 N, 0-5 Nm) and a very low dead band. We demonstrate the feasibility of using such a sensor in an MRI environment along with a commercially available MRI compatible robot. Initial comparisons with the existing interface show promising improvement in performing phantom tasks.
In the remainder of this paper, we first present the clinical example, followed by the system components. We then present validation of the user input sensor under conditions that are not only applicable to our motivating problem, but also cover other clinically applicable use cases. This is followed by experimental results comparing hands-on control with our current setup. Finally, we present conclusions and future work.
Transapical aortic valve replacement provides a direct and short access path to the native valve. As seen in figure 1, the surgeon has to reach inside the MRI scanner, advance the delivery device, manipulate the catheter through the delivery device, and inflate the balloon to deploy the valve under MRI guidance. Current manual instrumentation is hard to manipulate precisely, and requires coordinated efforts between the surgeon and the assistant while the heart and lungs are moving.
In our previous work we developed an image-guided robotic system (Figure 1(b)) to assist in beating heart transapical valve placement. The robotic system comprises a 5-DoF robotic tool holder and a 3-DoF image guided valve delivery module. Before the prosthetic valve delivery process can be started, the surgeon must perform the preparatory procedure of placing the trocar into the apex of the heart. Thereafter, the surgeon loads the delivery device with the prosthetic valve and inserts the delivery device into the trocar using the robotic holder. The robotic holder is manipulated to adjust the orientation of the delivery device. Finally, the robotic delivery device is manipulated under image guidance for final placement of the prosthetic valve. For introducing the delivery device into the trocar using the robotic holder, we found imaging to be unnecessary and impractical, because of the large motion and the variability in localization of the trocar. Typically, the trocar port is about 10-15 cm from the imaging center, thus requiring a very large image acquisition volume. Breathing motion of up to 20 mm also causes localization and registration errors. Moreover, adjustments to the entry point may be required after a preliminary scan of the delivery device is acquired. As mentioned earlier, an image guided interface for the robotic holder is unnecessary, and sometimes impractical, during procedure setup. Further, registration, which is a prerequisite for image guidance, adds procedural time.
In our case, we would like to directly manipulate the robotic holder to the trocar, insert the delivery device, and then use imaging to guide the delivery device. The robotic holder acts like a laparoscopic tool holder similar to the LARS or the AESOP, where the tool is an enhanced dexterous delivery device/module. In this work, we address some of the needs of developing a hands-on cooperative interface for an MRI compatible robot that serves as our robotic holder.
To address our clinical motivation, we developed a proof-of-concept system that extends some contemporary technologies and presents a new application. A picture of the system, along with a sketch showing its intended use, is shown in figure 2. The system comprises a commercially available MRI compatible robot, a custom MRI compatible user input device, and algorithms to control the robot. This section briefly describes these components, while the details of the MRI compatible user input device are presented separately in section IV.
Our work employs a modified Innomotion robotic arm (Innomedic, Herxheim, Germany). This robotic arm was originally developed for precise needle insertion in MRI-guided therapy of spinal diseases. It has 5 DoFs with a remote center of motion (RCM) structure to place and orient an interventional tool, and its configuration fits into a standard closed-bore MRI scanner.
The actuators of the Innomotion robot are controlled by an off-the-shelf controller (Motion Engineering Inc/Danaher Motion, Santa Barbara, CA). A PIV (proportional position loop, integral and proportional velocity) controller runs on the proprietary DSP based hardware. We have made a few modifications to the standard PIV controller to facilitate hands-on control with pneumatic actuators; we describe these in this section and present the test results.
The robot controller is located outside the MRI room and communicates with the robot via optical and pneumatic lines. The low-level controller communicates with the high-level controller through the computer's PCI interface. For point-to-point position moves, the controller switches between gain sets that are optimized for different phases, such as acceleration, deceleration and constant-velocity motion. There is a separate set of gains for fine control, used when the required actuator motion is below a certain threshold. The switch between gain sets is made on the DSP controller. However, we found that none of these values were suitable for a hands-on mode based on user input, which requires continuous update of the actuators. Hence, we added an additional set of gains optimized for this requirement. Here, we present the performance results for these gains, which were tuned empirically, guided by intuition. The important difference between the hands-on control mode and the other modes is that the latter prefer zero or low steady-state errors. The hands-on mode, in contrast, has a user in the control loop who can tolerate a fair amount of steady-state error by adjusting his/her input to achieve the desired result. This mode demands a fast and smooth response without oscillations during the transient as well as the steady state. It is straightforward to obtain such a response on traditional motor driven robots; for a pneumatic robot, however, we had to compromise between smoothness, speed and error.
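The gain-switching logic described above can be sketched in a few lines. This is only an illustration: the gain values, the fine-motion threshold, and the set names are hypothetical, not those of the actual DSP controller.

```python
# Sketch of PIV gain scheduling for one pneumatically actuated axis.
# All numeric values and names below are illustrative assumptions.

GAIN_SETS = {
    "point_to_point": {"Kp": 8.0, "Ki": 0.5, "Kv": 1.2},  # fast, may oscillate
    "fine":           {"Kp": 3.0, "Ki": 0.8, "Kv": 0.6},  # small moves
    "hands_on":       {"Kp": 2.0, "Ki": 0.1, "Kv": 1.5},  # smooth, error-tolerant
}

FINE_MOTION_THRESHOLD = 0.5  # mm; below this, switch to the fine-control gains


def select_gains(mode, commanded_move_mm):
    """Pick a PIV gain set based on control mode and size of the commanded move."""
    if mode == "hands_on":
        # Hands-on mode always uses its dedicated gain set, since the
        # actuators receive a continuous stream of updates.
        return GAIN_SETS["hands_on"]
    if abs(commanded_move_mm) < FINE_MOTION_THRESHOLD:
        return GAIN_SETS["fine"]
    return GAIN_SETS["point_to_point"]
```

In the real system this selection happens on the DSP; the sketch only shows why a third gain set is needed: neither of the point-to-point sets is selected when the input is a continuous user command.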
To achieve this behavior, we applied a sine wave as user input (velocity) with varying amplitudes. We iteratively adjusted the PIV gains and the maximum amplitude of user input that would produce reasonable error, until we saw no further improvement. In figure 3, the fast oscillatory response for the point-to-point gains is shown along with the smoother, reasonably fast response for the hands-on mode. Figure 4(a) shows a typical profile for a user input with an amplitude of 3.5 mm/s, together with the actual Cartesian velocity measured using the robot encoders. The error is defined as the difference between the user input and the actual Cartesian velocity at that instant. The average error is computed by averaging this error over the samples collected during 2 cycles of user input. Figure 4(b) shows this value for different amplitudes of user input. Based on this figure, our current system can tolerate a maximum user input speed of 3.6 mm/s and 3.6 deg/s if the error is to be kept below 0.5 mm/s and 0.5 deg/s.
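The error metric above is simple to reproduce in code. In this sketch the 3.5 mm/s amplitude and the 2-cycle averaging window come from the text; the sampling rate, sine frequency, and the stand-in "measured" signal are assumptions for illustration.

```python
import math


def average_tracking_error(user_input, measured):
    """Mean absolute difference between commanded and measured Cartesian velocity."""
    assert len(user_input) == len(measured)
    return sum(abs(u - m) for u, m in zip(user_input, measured)) / len(user_input)


# Example: a 3.5 mm/s sine-wave command over 2 cycles.
# Sampling rate (100 Hz) and frequency (0.5 Hz) are assumed values.
fs, f, amp, cycles = 100.0, 0.5, 3.5, 2
n = int(cycles * fs / f)
t = [i / fs for i in range(n)]
cmd = [amp * math.sin(2 * math.pi * f * ti) for ti in t]
meas = [0.9 * c for c in cmd]  # stand-in for the encoder-derived velocity
err = average_tracking_error(cmd, meas)
```

In the actual experiment `meas` would come from differentiated robot encoder readings; sweeping `amp` and plotting `err` reproduces the style of curve shown in figure 4(b).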
Depending on how the inertias and friction forces compare with the forces applied by the environment to the robot, robotic devices can be classified into two categories: impedance-type and admittance-type. More importantly, it is the control scheme that distinguishes the two types. Impedance-type robots act as a force/torque source, where the controller outputs a force based on desired input positions (and their derivatives). In admittance-type robots, the controller outputs a desired position (and its derivatives) based on user inputs such as a force measurement or joystick motion. The Innomotion robot is practically “non-back-drivable”, i.e., significant effort is required to overcome its internal friction when maneuvering it. Thus, it is better suited for an admittance control scheme such as those explored with the LARS and the Johns Hopkins University Steady-Hand Robot. In those works, force sensors could be used to obtain user input; however, MRI compatibility poses certain challenges in obtaining a 6-DoF user input, which we address in section IV. In this section, we briefly introduce the hands-on controller and the constraints imposed by a pneumatic robot. The outline of our hands-on controller is as follows:
Numerical integration and rate control laws such as these are known to be “non-conservative” and may result in positional errors. However, this is not a concern here, since a human-in-the-loop can easily account for any positional error by applying additional input to reach the desired goal. The desirable behavior is that the robot continues to follow the user input as best it can, even in the event of reaching certain limits, such as joint travel or velocity limits.
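The scaling, integration, and saturation steps of such an admittance-type rate controller can be sketched for a single axis as follows. The 3.6 mm/s velocity cap comes from the tuning experiments above; the input gain and joint travel limits are assumed values, and the real controller runs on the DSP hardware rather than in Python.

```python
def clamp(x, lo, hi):
    """Saturate x to the interval [lo, hi]."""
    return max(lo, min(hi, x))


class AdmittanceRateController:
    """Map a user-input (force-like) reading to an incremental position command.

    gain:         mm/s commanded per unit of sensor reading (assumed value)
    v_max:        velocity saturation in mm/s (3.6 mm/s from the tuning tests)
    q_min, q_max: joint travel limits in mm (assumed values)
    """

    def __init__(self, gain=0.05, v_max=3.6, q_min=0.0, q_max=100.0):
        self.gain, self.v_max = gain, v_max
        self.q_min, self.q_max = q_min, q_max

    def step(self, q, f_user, dt):
        # 1) Scale the user input to a desired velocity, then saturate it.
        v = clamp(self.gain * f_user, -self.v_max, self.v_max)
        # 2) Numerically integrate to a new position command; clamp to the
        #    joint limits so the robot keeps following the input as well as
        #    it can when a limit is reached, rather than faulting out.
        return clamp(q + v * dt, self.q_min, self.q_max)
```

Because the human closes the loop, any error accumulated by the numerical integration is corrected by the user adjusting his/her input, which is why the non-conservative nature of the rate control law is acceptable here.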
Designing a system that operates inside or close to the bore of a high field 1.5-3 T MRI scanner is of significant complexity, since standard materials, sensors and actuators cannot be employed. The level of MRI compatibility required depends on the intended usage of the device. The highest level allows simultaneous, safe and compromise-free use of both the imager and the device. Often, simultaneous use of the imager and the device is not necessary, which can reduce the complexity of the device. Both cases, however, require similar handling of metals and electronics; the difference is that the latter case does not impose stringent requirements on the magnitude of EM radiation under which the device must operate. In this section, we briefly describe the key issues of MRI compatibility, followed by our proof-of-concept, economical, MRI compatible user input sensor that can act as a substitute for a force sensor, and finally its compatibility results.
Off-the-shelf force sensors have strain gauges mounted on a metal substrate (typically ferromagnetic steel), which must be avoided because it causes image artifacts and can become a dangerous projectile. Non-ferromagnetic metals such as aluminum, brass and titanium, or high strength plastic and composite materials, are permissible, but they must be carefully selected to be either nonconductive or free of loops and of carefully chosen lengths, to avoid eddy currents and resonance. Two technologies that have been shown to be MRI compatible are optical sensing and hydraulics/pneumatics. Though our force/torque ranges are similar to those addressed by Gassert et al., it is difficult to design a 6-DoF version compact enough to act as a suitable user interface. Further, due to friction losses and compressibility, there is a significant dead zone in the sensed output. The advantage of their technique, however, is that it allows use of the device during imaging. On the other hand, Tada and Kanade's approach can easily be extended to additional DoFs, but uses some electronics that may not function during imaging. Since in our application the device is not intended to be used during imaging, this is not an issue.
For our proof-of-concept, economical user input sensor, we modified a commercially available 6-DoF user input sensor to be MRI compatible. The commercial product, called the “3D SpaceNavigator” (3dconnexion, Fremont, CA), is based on technology developed by DLR (German Aerospace Research Establishment) and uses a proprietary advanced 6-DoF optical sensor. The optical signal is converted to an electrical signal by a signal PC board, which then connects to a USB interface board. In its original form, the sensor is not MRI compatible due to the presence of ferromagnetic material, coils and transformers. Replacing some ferromagnetic parts with non-ferromagnetic counterparts was mandatory for safety. Since the device is powered through the USB, it is not entirely possible to avoid coils and loops on the USB interface board. To avoid interference, we placed the USB interface board inside a copper shielded box, which was connected to a laptop computer outside the room via a shielded USB cable. The USB interface box could be placed either inside or outside the room, but no closer than the 5 Gauss line. The USB interface board was connected to the sensor electronics with a 4 m shielded wire. The sensor electronics were also placed in a shielded box, while the “handle” part of the sensor was covered by an EMI/RFI shielding fabric (Leader Tech, Tampa, FL). Elastic bands were attached to the fabric to hold it in place without affecting the motion of the sensor itself. All shields were made of either aluminum or copper, to avoid effects of the static magnetic field, and were grounded to the shielding of the room. Further, a quick-detach mechanism was added to the sensor (figure 5) to allow it to be detached from the robot arm when the imager is being used.
To evaluate the interference between the MRI and the sensor, baseline MR images were captured using a 16 cm cylindrical MRI quality assurance phantom. This was followed by another set of MR images with the robot placed inside the bore, to emulate the clinical situation described in section II. The test was then repeated with the sensor placed at distances of 1.0 m, 0.5 m and 0.25 m from the bore, along its axis. We should reiterate that for ergonomic use of the sensor and robot combination, the user should be about 50 cm away from the MRI bore.
A standard body coil was used with a steady-state free precession (SSFP) sequence having the following acquisition parameters: TR = 436.4 ms, TE = 1.67 ms, echo spacing = 3.2 ms, imaging flip angle = 45 deg, slice thickness = 4.5 mm, field of view (FOV) = 340×283 mm, and matrix = 192×129. This protocol is similar to the one we use in MRI scanning for the cardiac intervention.
To measure the rise in background image noise, the SNR is calculated as SNR = m_test/σ_ref, where m_test is the mean value of a 20 × 20 pixel test window and σ_ref is the standard deviation of a 20 × 20 pixel reference window at the lower right of the image. Figure 6 shows the SNR computed for the case when only the phantom was imaged (Condition 1), followed by the robot without the sensor (Condition 2), and the robot and sensor located at different distances from the bore (Conditions 3a-3c). Though the absolute values depend on factors such as the placement and size of the test and reference windows, the trend in these graphs remains the same. The point to note is that the reduction in SNR caused by the introduction of our proof-of-concept sensor is about the same as that caused by the introduction of the commercially available robot. In other words, the reduction in SNR from Condition 1 to Condition 2 is a little more than the reduction from Condition 2 to Condition 3a. Figure 7 shows the percentage difference between the images under Conditions 2 and 3 and the baseline image obtained in Condition 1. In all four cases, the maximum percentage difference of any pixel does not exceed 4%, which is well below the maximum acceptable loss of 10% for this medical application.
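The SNR measure defined above is straightforward to compute from a magnitude image. The sketch below uses NumPy; the window locations and the synthetic test image are assumptions, chosen only to exercise the formula.

```python
import numpy as np


def image_snr(img, test_origin, ref_origin, win=20):
    """SNR = mean of a test window / std-dev of a background reference window."""
    tr, tc = test_origin
    rr, rc = ref_origin
    m_test = img[tr:tr + win, tc:tc + win].mean()
    sigma_ref = img[rr:rr + win, rc:rc + win].std()
    return m_test / sigma_ref


# Example on synthetic data sized like the 192x129 acquisition matrix:
# a bright "phantom" region over a noisy background, with the reference
# window placed near the lower-right corner as in the text.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, size=(129, 192))  # background noise
img[40:90, 60:130] += 200.0                   # stand-in phantom signal
snr = image_snr(img, test_origin=(50, 80), ref_origin=(105, 168))
```

Repeating this computation on the images from Conditions 1, 2 and 3a-3c yields the bar values plotted in figure 6; only the relative trend between conditions matters, since the absolute value depends on the window placement.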
To establish the effect of the MRI on the sensor, sensor data was acquired with the sensor placed at different distances along the axis of the bore. A constant arbitrary load was applied, which resulted in a constant signal. The sensor SNR was calculated as the ratio of the mean of the signal to the standard deviation of the acquired signal. We did not observe any noticeable difference in the sensor SNR when the sensor was brought to within 50 cm of the bore.
The primary goals of these experiments were to test the system functionality and to assess the qualitative ease of use of the sensor. The secondary goal was to evaluate the hands-on sensor interface with respect to current and alternative interfaces.
The experimental design presented in this section is based on the clinical example of section II. This clinical scenario is best emulated by the quintessential peg-in-hole task. Our peg was a cylinder 12.5 mm in diameter and 140 mm long, which mimicked the dimensions of several of our clinical tools. To design our experiments in line with Fitts' law, we used two different hole diameters, viz., 13.5 mm and 16.5 mm, for fine and coarse positioning accuracy, respectively. Further, the larger hole had a larger bevel at the entry point. The hole was placed at a known position with respect to the robot, and the starting configuration of the robot was chosen at random for each trial. The starting position was drawn from a uniform random distribution over a spherical annular region of radii to mm, and the orientations were picked in the interval of ±15 - ±20 deg. In our opinion, these are clinically relevant distances, because the robot can be easily and quickly positioned to within these regions using the available passive adjustments.
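The randomized starting configurations can be generated as sketched below. The annulus radii are left as parameters because their specific values did not survive extraction; the ±15 to ±20 deg orientation range comes from the text, and the two-angle orientation offset is an assumption for illustration.

```python
import math
import random


def random_start_pose(r_min, r_max, ang_min=15.0, ang_max=20.0):
    """Sample a start position uniformly over a spherical annulus of radii
    r_min..r_max (mm) and orientation offsets of +/-(ang_min..ang_max) deg,
    in the style of the peg-in-hole trials."""
    # Uniformly distributed direction on the unit sphere.
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    direction = (s * math.cos(phi), s * math.sin(phi), z)
    # Radius distributed uniformly over the annulus *volume*:
    # invert the CDF of r^2 dr, i.e. r = (u*(R^3 - r0^3) + r0^3)^(1/3).
    u = random.random()
    r = (u * (r_max ** 3 - r_min ** 3) + r_min ** 3) ** (1.0 / 3.0)
    position = tuple(r * d for d in direction)
    # Orientation offsets with random sign and magnitude in [ang_min, ang_max].
    angles = tuple(random.choice((-1, 1)) * random.uniform(ang_min, ang_max)
                   for _ in range(2))
    return position, angles
```

Sampling the radius via the cube-root inverse CDF, rather than uniformly in r, is what makes the positions uniform over the annular volume rather than clustered near the inner shell.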
To evaluate the secondary goal, we considered the alternative approach of voice activation as used in the AESOP. In an office environment, typical voice recognition rates are around 70%-80%, and the error rates increase as the noise level rises. Unfortunately, the MRI room is an extremely noisy environment, so this option was not explored further. In the current interface, which we also use for other devices including the scanner, the surgeon communicates with a person outside the MRI room via a microphone, and the person at the console then enters the appropriate commands. Figure 8 shows a capture from a closed-circuit television (CCTV) screen. The CCTV video quality is adequate for monitoring unforeseen complications, but not suitable for robot control. We developed a simplified GUI that resembles the command set of the AESOP: the surgeon announces one of “left”, “right”, “up”, “down”, “front” or “back” to move in the appropriate direction, and the person at the console presses the corresponding button. Figure 9 shows a screen shot of the user interface. Further, since we have 5 DoFs, we added buttons corresponding to “rotate” followed by “left”, “right”, “front” or “back”. The buttons to increase/decrease speed change the current set speed by 0.5 mm/s. In our experimental setup, the person at the console could not directly see the robot, to replicate the clinical setting, and was picked at random from the pool of users to avoid bias.
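The console GUI's translation command set can be sketched as a mapping from button presses to Cartesian velocity commands. The 0.5 mm/s speed step comes from the text; the axis convention, the default speed, and the button names for the speed controls are assumptions.

```python
# Map direction buttons to unit translation directions (axis convention assumed).
DIRECTIONS = {
    "left":  (-1, 0, 0), "right": (1, 0, 0),
    "up":    (0, 0, 1),  "down":  (0, 0, -1),
    "front": (0, 1, 0),  "back":  (0, -1, 0),
}
SPEED_STEP = 0.5  # mm/s change per press of the speed buttons (from the text)


class ConsoleInterface:
    """Minimal model of the simplified GUI driven by the person at the console."""

    def __init__(self, speed=1.0):  # default speed is an assumption
        self.speed = speed

    def press(self, button):
        """Return the Cartesian velocity command resulting from a button press."""
        if button == "faster":
            self.speed += SPEED_STEP
            return (0.0, 0.0, 0.0)
        if button == "slower":
            self.speed = max(0.0, self.speed - SPEED_STEP)
            return (0.0, 0.0, 0.0)
        return tuple(self.speed * c for c in DIRECTIONS[button])
```

The contrast with the hands-on interface is visible even in this sketch: each press yields motion along a single axis at a fixed speed, whereas the hands-on sensor commands all translational and rotational DoFs continuously and simultaneously.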
Each user was asked to perform the task as quickly as possible on the “start” cue of the person at the console, and to indicate completion verbally by announcing “done”. Comparisons of the recorded task times are shown in figure 10. As seen, the hands-on approach is more efficient at both levels of accuracy. This can be attributed to the ability to perform complex continuous motions involving multiple translational and rotational degrees of freedom at one time. Further, when using the GUI interface, the user announcing the commands has to contemplate his/her next moves. As expected, the time required increases with the difficulty of the task, though this difference is not significant compared with the difference between the two approaches.
We have developed a prototype robotic surgical system that can serve as a hands-on cooperative tool holder in an MRI environment. The MRI compatible user input sensor used in our application can be extended to other systems that require a user input inside an MRI room. We capitalize on a commercially available, economical optical sensor to create an MRI compatible user input sensor with a wide dynamic range and low dead band. Our experiments suggest that it is feasible to use this sensor up to 50 cm from the bore for hands-on control of the robot. This, incidentally, is also the approximate ergonomic limit for manipulating objects while standing upright close to the bore.
There is a basic dilemma in choosing and conducting a fair interface comparison: one interface might be good with respect to one aspect of performance, such as speed, while another could be more suitable regarding another aspect. However, it is fair to say that hands-on cooperative control of robots is intuitive and efficient for localization and fine placement. On the other hand, it would be impractical to use this interface when the robotic holder is inside the magnet bore; in that case, an image-guided interface would be ideal for planning and manipulating the robotic holder, or just the end-effector/delivery device.
We believe that in the engineering of robots for medical applications, a detailed analysis of the functions of the entire system, that is, the robot, interfaces and application taken as a single entity, is arguably more important than the individual performance of the subsystems (robot, surgeon, interfaces and application, separately). Thus, combining more than one interface, such as image-guided, console-guided or hands-on, depending on the application, might yield higher performance from the entire system. Our immediate future work is to evaluate the complete system in a phantom study and in a large animal study that mimics the clinical scenario.
This research was supported by the Intramural Research Program of the National Institutes of Health (NIH), Clinical Center, and the National Heart, Lung, and Blood Institute.
Ankur Kapoor, Clinical Center, National Institutes of Health, Bethesda, MD, USA.
Brad Wood, Clinical Center, National Institutes of Health, Bethesda, MD, USA.
Dumitru Mazilu, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA.
Keith A. Horvath, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA.
Ming Li, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA.