IEEE Int Conf Robot Autom. Author manuscript; available in PMC Mar 16, 2011.
Published in final edited form as:
IEEE Int Conf Robot Autom. Jul 6, 2009; 2009: 2681–2686.
doi: 10.1109/ROBOT.2009.5152541
PMCID: PMC3059312
NIHMSID: NIHMS119386
MRI-compatible Hands-on Cooperative Control of a Pneumatically Actuated Robot
Ankur Kapoor, Brad Wood, Dumitru Mazilu, Keith A. Horvath, and Ming Li
Ankur Kapoor, Clinical Center, National Institutes of Health, Bethesda, MD, USA;
Ankur Kapoor: kapooran@mail.nih.gov; Brad Wood: bwood@mail.nih.gov; Dumitru Mazilu: mazilud@mail.nih.gov; Keith A. Horvath: horvathka@mail.nih.gov; Ming Li: lim2@mail.nih.gov
Abstract
MRI compatible robots are emerging as useful tools for image guided interventions. Shared control between a user and an MRI compatible robot makes the instrument more intuitive, especially during the setup phases of interventions. We present an MRI compatible, hands-on cooperative system based on the Innomotion robotic arm. An economical MRI compatible user input sensor was developed and its functionality was tested under typical application conditions. Performance improvement in phantom tasks shows the promise of adding a hands-on interface to MRI compatible robots.
I. Introduction

Magnetic Resonance Imaging (MRI) can provide real-time visualization of anatomic structures of the beating heart and of major blood vessels with circulating blood, making it an ideal imaging modality for beating-heart interventions [1]. MRI is also an ideal modality for guiding and monitoring soft-tissue interventions because of its excellent visualization of soft tissue, its substructure and the surrounding anatomy. Robots can be helpful tools for MRI guided interventions. Contemporary research on MRI compatible medical robots has focused on percutaneous biopsy, drug injection and radiotherapy seed implantation [2]–[4]. Improving precision and accuracy, while maintaining MRI compatibility and safety, are the prime concerns for these robotic systems.
Planning a minimally invasive soft-tissue intervention is not a trivial task. The organs move, so preplanned motion based on pre-operative images alone is not sufficient. Most contemporary systems use intra-operative images and a graphical user interface (GUI) to update the planned motions. In this work, we introduce a hands-on cooperative interface that complements the existing GUIs of MRI compatible robots. We believe multiple user interfaces would improve the efficacy of a surgical assistance robot; in other words, a surgeon should be able to take hands-on control of the robot and also take advantage of its precise positioning under image guidance.
A key requirement for hands-on control of a robot is a user input device such as a master teleoperator, joystick or force sensor. Master teleoperation is undoubtedly highly ergonomic, but it is not always cost effective, and MRI compatibility requirements further increase its complexity. Recently, MRI-compatible robots [5], [6] have been developed as haptic devices for neuroscience and diagnostics. These are hydraulically or pneumatically driven, with non-compatible parts placed outside the MRI room. They are fully compatible with MR imaging; however, their cost and complexity increase with the degrees of freedom. Alternative user input devices such as optical joysticks and force sensors have been developed since the 1980s, and their MRI compatible counterparts [7]–[9] have appeared recently.
Referring to a 6-DoF user input sensor in the context of industrial robots, Hirzinger wrote [10], “It is difficult to understand why it took years until robot manufacturers realized that intuitive 6-DoF robot control with such a device is highly efficient in programming phase”. We believe an analogous argument applies to phases such as the initial setup of a robot for image guided interventions. Non-MRI compatible robots, such as the Johns Hopkins Steady-Hand surgical robot [11] and the systems developed by Davies' group [12], allow one to mount similar devices at the robot end-effector and guide the robot in the most natural way, by reacting to user forces. However, deploying this technique in MRI robot systems has been prohibitive due to the cost and complexity of comparable MRI compatible sensors. This work introduces a proof-of-concept, economical user input sensor with a wide input range (approximately 0-100 N, 0-5 Nm) and a very low dead band. We demonstrate the feasibility of using such a sensor in an MRI environment along with a commercially available MRI compatible robot. Initial comparisons with our existing interface show promising improvement in performing phantom tasks.
In the remainder of this paper, we first present the clinical example, followed by the system components. We then present validation of the user input sensor under conditions that not only apply to our motivating problem but also cover other clinically relevant use cases. This is followed by experimental results comparing hands-on control with our current setup. Finally, we present conclusions and future work.
II. Clinical Motivation

Transapical aortic valve replacement [1] provides direct and short access to the native valve. As seen in figure 1, the surgeon has to reach inside the MRI scanner, advance the delivery device, manipulate the catheter through the delivery device, and inflate the balloon to deploy the valve under MRI guidance. The current manual instrumentation is hard to manipulate precisely and requires coordinated effort between the surgeon and the assistant while the heart and lungs are moving.
Fig. 1. (a) A typical procedure inside the MRI bore (b) Setup with the Innomotion robot and delivery module.
In our previous work [13] we developed an image-guided robotic system (figure 1(b)) to assist in beating-heart transapical valve placement. The robotic system comprises a 5-DoF robotic tool holder and a 3-DoF image guided valve delivery module. Before the prosthetic valve delivery process can start, the surgeon must perform the preparatory procedure of placing the trocar into the apex of the heart. Thereafter, the surgeon loads the delivery device with the prosthetic valve and inserts the delivery device into the trocar using the robotic holder. The robotic holder is manipulated to adjust the orientation of the delivery device. Finally, the robotic delivery device is manipulated under image guidance for final placement of the prosthetic valve.

For introducing the delivery device into the trocar using the robotic holder, we found that imaging was unnecessary and impractical because of the large motion involved and the variability in localizing the trocar. Typically, the trocar port is about 10-15 cm from the imaging center, which would require a very large image acquisition volume. Breathing motion of up to 20 mm also causes localization and registration errors. Moreover, adjustments to the entry point may be required after a preliminary scan of the delivery device is acquired. As mentioned earlier, an image guided interface for the robotic holder is unnecessary and sometimes impractical during procedure setup. Further, registration, which is a prerequisite for image guidance, adds procedural time.
In our case, we would like to directly manipulate the robotic holder to the trocar, insert the delivery device and then use imaging to guide the delivery device. The robotic holder acts like a laparoscopic tool holder, similar to the LARS or the AESOP [14], where the tool is an enhanced dexterous delivery device/module. In this work, we address some of the needs of developing a hands-on cooperative interface for an MRI compatible robot that serves as our robotic holder.
III. System Description

To address our clinical motivation, we developed a proof-of-concept system that extends some contemporary technologies and presents a new application. A picture of the system, along with a sketch showing the intended use, is shown in figure 2. The system comprises a commercially available MRI compatible robot, a custom MRI compatible user input device and algorithms to control the robot. This section briefly describes the components, while the details of the MRI compatible user input device are presented separately in section IV.
Fig. 2. System setup with the Innomotion and the “user input sensor”.
Our work employs a modified Innomotion robotic arm (Innomedic, Herxheim, Germany). This robotic arm was originally developed for precise needle insertion in MRI-guided therapy of spinal diseases [15]. It has 5 DoFs with a remote center of motion (RCM) structure to place and orient an interventional tool, and its configuration fits into a standard closed-bore MRI scanner.
A. Low-level PIV Control on a Pneumatic Robot
The actuators of the Innomotion robot are controlled by an off-the-shelf controller (Motion Engineering Inc/Danaher Motion, Santa Barbara, CA). A PIV (proportional position loop, integral and proportional velocity loop) controller runs on the proprietary DSP based hardware. We have made a few modifications to the standard PIV controller to facilitate hands-on control with pneumatic actuators. We describe these in this section and present the test results.
The robot controller is located outside the MRI room and communicates with the robot via optical and pneumatic lines. The low-level controller communicates with the high-level controller through the computer's PCI interface. For point-to-point position moves, the controller switches between gain sets that are optimized for different phases, such as acceleration, deceleration and constant-velocity motion. A separate set of gains is used for fine control when the required actuator motion is below a certain threshold; the switch between gain sets is made on the DSP controller. However, we found that none of these values were suitable for a hands-on mode based on user input, which requires continuous updates of the actuators. Hence, we added an additional set of gains optimized for this requirement. Here, we present the performance results for gains that were tuned empirically, guided by our intuition.

The important difference between hands-on control mode and the other modes is that the latter prefer zero or low steady-state error. Hands-on mode, in contrast, has a user in the control loop who can tolerate a fair amount of steady-state error by adjusting his/her input to achieve the desired result. This mode demands a fast and smooth response without oscillations during both the transient and the steady state. It is straightforward to obtain such a response on traditional motor driven robots; for a pneumatic robot, however, we had to compromise between smoothness, speed and error.
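As a rough illustration of this gain-scheduling structure, the Python sketch below models the switch between per-mode gain sets. The gain values, field names and the apply_gains callback are hypothetical placeholders, not the values or interfaces used on the DSP controller.

from dataclasses import dataclass

@dataclass(frozen=True)
class PIVGains:
    kp_pos: float  # position-loop proportional gain
    ki_pos: float  # position-loop integral gain
    kp_vel: float  # velocity-loop proportional gain

# Purely illustrative numbers; real values are tuned per axis on the DSP.
GAIN_SETS = {
    "point_to_point": PIVGains(kp_pos=8.0, ki_pos=0.5, kp_vel=1.2),
    "fine":           PIVGains(kp_pos=4.0, ki_pos=1.0, kp_vel=0.8),
    "hands_on":       PIVGains(kp_pos=2.0, ki_pos=0.1, kp_vel=1.5),  # softer, smoother tracking
}

def switch_mode(mode, apply_gains):
    """Load the gain set for the requested mode via a controller callback."""
    apply_gains(GAIN_SETS[mode])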
To achieve this behavior, we applied a sine wave as the user (velocity) input, with varying amplitudes. We iteratively adjusted the PIV gains and the maximum amplitude of user input that would produce reasonable error, until we saw no further improvement. Figure 3 shows the fast oscillatory response for the point-to-point gains alongside the smoother, reasonably fast response for hands-on mode. Figure 4(a) shows a typical profile for a user input with an amplitude of 3.5 mm/s and the actual Cartesian velocity measured using the robot encoders. The error is defined as the difference between the user input and the actual Cartesian velocity at each instant, and the average error is computed over the samples collected during 2 cycles of user input. Figure 4(b) shows this value for different amplitudes of user input. Based on this figure, our current system can tolerate a maximum user input speed of 3.6 mm/s and 3.6 deg/s if the error is to be kept below 0.5 mm/s and 0.5 deg/s.
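A minimal sketch of the tuning metric just described, assuming numpy: it averages the absolute error between a sine-wave velocity command and the measured Cartesian velocity over two input cycles. The 0.2 Hz input frequency and the synthetic "measured" trace are illustrative stand-ins for encoder data.

import numpy as np

def avg_velocity_error(t, v_user, v_actual, freq_hz):
    """Mean |commanded - measured| velocity error over the first 2 input cycles."""
    mask = t <= 2.0 / freq_hz          # keep exactly two cycles of the input
    return np.abs(v_user[mask] - v_actual[mask]).mean()

# Illustrative traces: 3.5 mm/s amplitude at an assumed 0.2 Hz, sampled at 100 Hz.
t = np.arange(0.0, 10.0, 0.01)
v_cmd = 3.5 * np.sin(2.0 * np.pi * 0.2 * t)
v_meas = v_cmd + np.random.normal(0.0, 0.3, t.size)   # stand-in for encoder-derived velocity
print(f"average error: {avg_velocity_error(t, v_cmd, v_meas, 0.2):.2f} mm/s")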
Fig. 3. Position error in the PIV controller for one of the robot axes under two different modes. (a) Gains optimized for point-to-point moves (b) Gains for hands-on mode.
Fig. 4. (a) User input with amplitude of 3.5 mm/s and corresponding actual Cartesian velocity (b) Average Cartesian velocity error over time (2 cycles) with respect to amplitude of user input.
B. High-Level Admittance Control on a Pneumatic Robot
Depending on how the inertial and friction forces compare with the forces applied to the robot by the environment, robotic devices can be classified into two categories: impedance-type and admittance-type. More importantly, it is the control scheme that distinguishes the two. Impedance-type robots act as a force/torque source: the controller outputs a force based on desired input positions (and their derivatives). In admittance-type robots, the controller outputs a desired position (and its derivatives) based on user inputs such as a force measurement or joystick motion. The Innomotion robot is practically “non-back-drivable”, i.e., significant effort is required to overcome internal friction when maneuvering it. Thus, it is better suited to an admittance control scheme such as those explored with the LARS and the Johns Hopkins University Steady-Hand robot [11]. In those works, force sensors could be used to obtain user input; however, MRI compatibility poses certain challenges in obtaining a 6-DoF user input, which we address in section IV. In this section, we briefly introduce the hands-on controller and the constraints imposed by a pneumatic robot. The outline of our hands-on controller is as follows (a code sketch of one iteration follows the list):
  1) Switch the low-level PIV gains to the set optimized for hands-on mode (Fig. 3, graph (b)).
  2) Obtain the incremental motion desired by the user, that is, Δxd = Kc f, where Δxd is the desired incremental motion, f ∈ R6 is the user input and Kc is a scaling matrix.
  3) Compute the current joint state, q, from the current actuator values, a, and the incremental motion in the actuators, Δa.
  4) Solve a constrained least squares (LS) problem for the optimal incremental joint motion, Δq*, in the high-level controller. The objective function describes the desired outcome, and the constraints account for joint limits and, importantly, velocity limits:

    Δq* = arg min_Δq ‖Wc (J Δq − Δxd)‖² + ‖Wq Δq‖²,  subject to qL ≤ q + Δq ≤ qU and |Δq| ≤ q̇U Δt    (1)

    The incremental joint motion, Δq, is the variable of the LS problem. The matrix J is the Jacobian of the robot; qL and qU are the lower and upper bounds of the joint variables; q̇U is the upper bound on the joint velocities, obtained as described in the earlier section; and Δt is the small time interval of the high-level control loop. Without any constraints, the above problem, which is implemented in the high-level block, is equivalent to resolved rate control. Wc and Wq are diagonal weight matrices (cf. [16]).
  5) Numerically integrate the incremental joint motion to arrive at a new set of joint positions. We assume that for each iteration of the loop the incremental motions are sufficiently small, so that Δx/Δt = J Δq/Δt is a good approximation to the instantaneous kinematics.
  6) Compute the desired actuator values, ad, and the desired actuator velocities, ȧd = Δad/Δt, from q and Δq*. These are the new set points for the low-level controller.
  7) Repeat steps 2)-6) while in hands-on mode. On exit, switch the PIV gains back to the point-to-point values (Fig. 3, graph (a)).
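The sketch referenced above illustrates steps 2)-6) in Python with numpy and scipy. The names are ours, the Jacobian is passed in as a callback, and step 6 is reduced to returning new joint targets rather than actuator set points, so this is a simplified instance of the constrained LS in (1), not the actual controller code.

import numpy as np
from scipy.optimize import lsq_linear

def hands_on_step(f, q, jacobian, Kc, Wc, Wq, qL, qU, qdotU, dt):
    """One high-level iteration: user input f -> bounded incremental joint motion."""
    dx_d = Kc @ f                                  # step 2: desired incremental motion
    J = jacobian(q)                                # step 3 (joint state assumed known here)

    # Step 4: minimize ||Wc (J dq - dx_d)||^2 + ||Wq dq||^2 as one stacked LS problem
    A = np.vstack([Wc @ J, Wq])
    b = np.concatenate([Wc @ dx_d, np.zeros(q.size)])

    # Joint-limit and velocity-limit constraints from (1), as box bounds on dq
    lower = np.maximum(qL - q, -qdotU * dt)
    upper = np.minimum(qU - q,  qdotU * dt)

    dq = lsq_linear(A, b, bounds=(lower, upper)).x
    q_new = q + dq                                 # step 5: numerical integration
    return q_new, dq / dt                          # step 6: new position/velocity set points

Because the bounds clip Δq rather than abort the motion, the robot keeps following the user input as closely as the limits allow, which is the behavior described below.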
Numerical integration and rate control laws such as these are known to be “non-conservative” and may result in positional errors. However, this is not a concern here, since the human in the loop can easily account for any positional error by applying additional input to reach the desired goal. The desirable behavior is that the robot continues to follow the user input as best it can, even when certain limits, such as joint position or velocity extremes, are reached.
IV. MRI Compatible User Input Sensor

Designing a system that operates inside or close to the bore of a high-field 1.5-3 T MRI scanner is of significant complexity, since standard materials, sensors and actuators cannot be employed. The level of MRI compatibility required depends on the intended usage of the device. The highest level allows simultaneous, safe and compromise-free use of both the imager and the device. Often, simultaneous use of the imager and the device is not necessary, which can reduce the complexity of the device. Both cases require similar handling of metals and electronics; the difference is that the latter case does not impose stringent requirements on the magnitude of EM radiation under which the device must operate. In this section, we briefly describe the key issues of MRI compatibility, followed by our proof-of-concept, economical, MRI compatible user input sensor, which can act as a substitute for a force sensor, and finally its compatibility results.
A. Component Description
Off-the-shelf force sensors have strain gauges mounted on a metal substrate (typically ferromagnetic steel), which must be avoided because it causes image artifacts and can become a dangerous projectile. Non-ferromagnetic metals such as aluminum, brass and titanium, or high strength plastic and composite materials, are permissible, but must be carefully selected to be either nonconductive or free of loops and of carefully chosen lengths to avoid eddy currents and resonance. Two technologies that have been shown to be MRI compatible are optical sensing (cf. [7], [8] and references therein) and hydraulics/pneumatics (cf. [5]). Though our force/torque ranges are similar to those of Gassert et al. [6], it is difficult to design a 6-DoF version of their sensor compact enough to act as a suitable user interface. Further, due to friction losses and compressibility, there is a significant dead zone in the sensed output. The advantage of their technique, however, is that it allows use of the device during imaging. On the other hand, Tada and Kanade's approach [9] can easily be extended to additional DoFs, but uses some electronics that may not function during imaging. Since in our application the device is not intended to be used during imaging, this is not an issue.
For our proof-of-concept, economical user input sensor, we modified a commercially available 6-DoF user input sensor to be MRI compatible. The commercial product, called “3D SpaceNavigator” (3dconnexion, Fremont, CA), is based on technology developed by DLR (German Aerospace Research Establishment) and uses a proprietary advanced 6-DoF optical sensor [10], [17]. The optical signal is converted to an electrical signal by a signal PC board, which in turn connects to a USB interface board. In its original form, the sensor is not MRI compatible due to the presence of ferromagnetic material, coils and transformers. Replacing some ferromagnetic parts with non-ferromagnetic ones was mandatory for safety. Since the device is powered through the USB, it is not entirely possible to avoid coils and loops on the USB interface board. To avoid interference, we placed the USB interface board inside a copper shielded box, which was connected to a laptop computer outside the room via a shielded USB cable. The USB interface box could be placed either inside or outside the room, but no closer than the 5 Gauss line. The USB interface board was connected to the sensor electronics with a 4 m shielded wire. The sensor electronics was also placed in a shielded box, and the “handle” part of the sensor was covered by an EMI/RFI shielding fabric (Leader Tech, Tampa, FL). Elastic bands attached to the fabric keep it in place without affecting the motion of the sensor itself. All shields were made of aluminum or copper, to avoid effects of the static magnetic field, and grounded to the shielding of the room. Further, a quick-detach mechanism was added to the sensor (figure 5) to allow it to be removed from the robot arm while the imager is in use.
Fig. 5. (a) The quick-detach button, shown with an arrow (b) Pulling outwards while holding the button detaches the sensor.
B. Compatibility Tests
To evaluate the interference between the MRI scanner and the sensor, baseline MR images were captured using a 16 cm cylindrical MRI quality assurance phantom. This was followed by another set of MR images with the robot placed inside the bore, to emulate the clinical situation described in section II. The test was then repeated with the sensor placed at distances of 1.0 m, 0.5 m and 0.25 m, respectively, from the bore entrance, along the axis of the bore. We reiterate that for ergonomic use of the sensor and robot combination, the user should be about 50 cm from the MRI bore.
A standard body coil was used with a steady-state free precession (SSFP) sequence having the following acquisition parameters: TR = 436.4 ms, TE = 1.67 ms, echo spacing = 3.2 ms, imaging flip angle = 45 deg, slice thickness = 4.5 mm, field of view (FOV) = 340×283 mm, and matrix = 192×129. This protocol is similar to the one we use in MRI scanning for the cardiac intervention.
To measure the rise in background image noise, the SNR is calculated as SNR = mtest/σref, where mtest is the mean value of a 20 × 20 pixel test window and σref is the standard deviation of a 20 × 20 pixel reference window at the lower right of the image [2], [9]. Figure 6 shows the SNR computed for the case when only the phantom was imaged (Condition 1), followed by the robot without the sensor (Condition 2), and the robot and sensor with the sensor located at different distances from the bore (Conditions 3a-3c). Though the absolute value depends on factors such as the placement and size of the test and reference windows, the trend in these graphs remains the same. The point to note is that the reduction in SNR caused by the introduction of our proof-of-concept sensor is about the same as that caused by the introduction of the commercially available robot; in other words, the reduction in SNR from Condition 1 to Condition 2 is slightly more than the reduction from Condition 2 to Condition 3a. Figure 7 shows the percentage difference between the images under Conditions 2 and 3 and the baseline image obtained under Condition 1. In all four cases, the maximum percentage difference of any pixel does not exceed 4%, which is well below the maximum acceptable loss of 10% for this medical application [2], [9].
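A minimal sketch of this SNR measure, assuming the image is a 2-D numpy array and that window positions are given as (column, row) corners; the 20 × 20 window size follows the text, while the function name and the sample coordinates are illustrative.

import numpy as np

def image_snr(img, test_corner, ref_corner, w=20):
    """SNR = mean(test window) / std(reference window), with w x w pixel windows."""
    tx, ty = test_corner
    rx, ry = ref_corner
    m_test = img[ty:ty + w, tx:tx + w].mean()
    s_ref = img[ry:ry + w, rx:rx + w].std()
    return m_test / s_ref

# Illustrative use: reference window near the lower-right corner, as in the paper.
img = np.random.rand(256, 256) * 100.0     # stand-in for an acquired MR image
print(image_snr(img, test_corner=(118, 118), ref_corner=(230, 230)))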
Fig. 6. SNR of the MR image under different conditions. (a) Condition 1: phantom only (b) Condition 2: with robot only (c) Condition 3a: with robot inside the bore and sensor at 1.0 m (d) Condition 3b: sensor moved to 0.5 m (e) Condition 3c: sensor moved to 0.25 m.
Fig. 7. Mean percentage difference between images under different conditions and the baseline image (phantom only). (a) Condition 2: with robot only (b) Condition 3a: with robot inside the bore and sensor at 1.0 m (c) Condition 3b: sensor moved to 0.5 m (d) Condition 3c: sensor moved to 0.25 m.
To establish the effect of the MRI scanner on the sensor, sensor data was acquired with the sensor placed at different distances along the axis of the bore. A constant arbitrary load was applied, resulting in a constant signal. The sensor SNR was calculated as the ratio of the mean of the acquired signal to its standard deviation. We did not observe any noticeable difference in the sensor SNR when the sensor was brought up to 50 cm from the bore.
V. Experiments

The primary goals of these experiments were to test the system functionality and to assess the qualitative ease of use of the sensor. The secondary goal was to evaluate the hands-on sensor interface with respect to current and alternative interfaces.
A. Experiment Design
The experimental design we present in this section is based on the clinical example presented earlier in section II. This clinical scenario is best emulated by the quintessential peg-in-hole task. Our peg was a cylinder 12.5 mm in diameter and 140 mm long, which mimicked the dimensions of several of our clinical tools. To design our experiments in line with Fitts' law, we used two different hole diameters, viz., 13.5 mm and 16.5 mm, for fine and coarse positioning accuracy, respectively; the larger hole also had a larger bevel at the entry point. The hole was placed at a known position with respect to the robot, and the starting configuration of the robot was chosen at random for each trial. The starting position was drawn from a uniform random distribution over a spherical annular region of radii equation M2 to equation M3 mm, and the orientations were picked in the interval of ±15 to ±20 deg. In our opinion, these are clinically relevant distances, because the robot can easily and quickly be positioned within these regions using the available passive adjustments.
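The sketch below shows one way to draw such start configurations: positions uniform in volume over a spherical annulus, plus per-axis orientation offsets. The radii r_in and r_out are hypothetical placeholders, since the actual values did not survive extraction in our source, and reading “±15 to ±20 deg” as a per-axis magnitude band with random sign is our interpretation.

import numpy as np

rng = np.random.default_rng(0)

def sample_start_pose(r_in, r_out, ang_lo=15.0, ang_hi=20.0):
    """Random start: position uniform in a spherical annulus, orientation offsets in deg."""
    # Uniform-in-volume radius: sample r^3 uniformly between the two shell radii
    r = rng.uniform(r_in**3, r_out**3) ** (1.0 / 3.0)
    d = rng.normal(size=3)
    position = r * d / np.linalg.norm(d)           # uniform direction on the unit sphere
    # Orientation offset per rotational axis: random sign times a magnitude in [lo, hi]
    angles = rng.choice([-1.0, 1.0], size=2) * rng.uniform(ang_lo, ang_hi, size=2)
    return position, angles

pos, ang = sample_start_pose(r_in=50.0, r_out=100.0)   # r_in/r_out in mm, hypothetical
print(pos, ang)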
To address the secondary goal, we considered the alternative approach of voice activation, as used in the AESOP. In an office environment, typical voice recognition rates are around 70%-80% [18], and the error rates increase with noise level. Unfortunately, the MRI room is an extremely noisy environment, so this option was not explored further. In our current interface, which we also use for other devices including the scanner, the surgeon communicates with a person outside the MRI room via a microphone, and the person at the console enters the appropriate commands. Figure 8 shows a capture from a closed-circuit television (CCTV) screen. The CCTV video quality is adequate for monitoring unforeseen complications, but not suitable for robot control. We developed a simplified GUI that resembles the commands accepted by the AESOP: the surgeon can announce one of “left”, “right”, “up”, “down”, “front” or “back” to move in the appropriate direction, and the person at the console presses the corresponding button. Figure 9 shows a screenshot of the user interface. Further, since we have 5 DoFs, we added buttons corresponding to “rotate” followed by “left”, “right”, “front” or “back”. Buttons to increase/decrease speed change the current set speed by 0.5 mm/s. In our experimental setup, the person at the console could not directly see the robot, in order to replicate the clinical setting, and was picked at random from the pool of users to avoid bias.
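As an illustration of the command model of this AESOP-like GUI, the sketch below maps button presses to Cartesian jog velocities. Only the command vocabulary and the 0.5 mm/s speed step come from the text; the axis assignments and the on_button helper are hypothetical.

import numpy as np

# Hypothetical axis assignment for the six translation commands
JOG_DIRECTIONS = {
    "left":  np.array([-1.0, 0.0, 0.0]), "right": np.array([1.0, 0.0, 0.0]),
    "front": np.array([0.0, 1.0, 0.0]),  "back":  np.array([0.0, -1.0, 0.0]),
    "up":    np.array([0.0, 0.0, 1.0]),  "down":  np.array([0.0, 0.0, -1.0]),
}
SPEED_STEP = 0.5  # mm/s change per press of the increase/decrease speed buttons

def on_button(command, speed):
    """Return the Cartesian velocity commanded by one directional button press."""
    return speed * JOG_DIRECTIONS[command]

print(on_button("left", speed=2.0))   # e.g., jog left at 2.0 mm/s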
Fig. 8. Screen capture of the closed-circuit TV screen in the MRI console room.
Fig. 9. Screenshot of the AESOP-like GUI.
B. Results
Each user was asked to perform the task as quickly as possible on the “start” cue of the person at the console and to indicate completion verbally by announcing “done”. Comparisons of the recorded task times are shown in figure 10. As seen, the hands-on approach is more efficient at both levels of accuracy. This can be attributed to the ability to perform complex continuous motions involving multiple translational and rotational degrees of freedom at the same time. Further, when using the GUI interface, the user announcing the commands has to contemplate his/her next move. As expected, the time required increases with the difficulty of the task, though this difference is small compared with the difference between the two approaches.
Fig. 10. Histogram of the time required to complete the peg-in-hole task (a,b) hands-on mode with hole sizes of 16.5 mm and 13.5 mm, respectively (c,d) AESOP-like interface with hole sizes of 16.5 mm and 13.5 mm, respectively. The solid curves are the fitted Gaussian distributions.
VI. Conclusions and Future Work

We have developed a prototype robotic surgical system that can serve as a hands-on cooperative tool holder in an MRI environment. The MRI compatible user input sensor used in our application can be extended to other systems that require user input inside an MRI room. We capitalized on a commercially available, economical optical sensor to create an MRI compatible user input sensor with a wide dynamic range and low dead band. Our experiments suggest that it is feasible to use this sensor up to 50 cm from the bore for hands-on control of the robot. This, incidentally, is also the approximate ergonomic limit for manipulating objects while standing upright close to the bore.
There is a basic dilemma in choosing and conducting a fair interface comparison: one interface might be good with respect to one aspect of performance, such as speed, while another could be more suitable in another respect. However, it is fair to say that hands-on cooperative control of robots is intuitive and efficient for localization and fine placement. On the other hand, it would be impractical to use this interface when the robotic holder is inside the magnet bore; in that case, an image-guided interface would be ideal for planning and manipulating the robotic holder or just the end-effector/delivery device.
We believe that, in the engineering of robots for medical applications, a detailed analysis of the functions of the entire system, that is, robot, interfaces and application taken as a single entity, is arguably more important than the individual performance of the subsystems (robot, surgeon, interfaces and application considered separately). Thus, combining more than one interface, such as image-guided, console-guided or hands-on, depending on the application, might yield higher performance from the entire system. Our immediate future work is to evaluate the complete system in a phantom study and in a large animal study to mimic the clinical scenario.
Acknowledgments
This research was supported by the Intramural Research Program of the National Institutes of Health (NIH), Clinical Center, and the National Heart, Lung, and Blood Institute.
Contributor Information
Ankur Kapoor, Clinical Center, National Institutes of Health, Bethesda, MD, USA.
Brad Wood, Clinical Center, National Institutes of Health, Bethesda, MD, USA.
Dumitru Mazilu, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA.
Keith A. Horvath, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA.
Ming Li, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA.
References

1. McVeigh ER, et al. Real-time Interactive MRI-Guided Cardiac Surgery: Aortic Valve Replacement Using a Direct Apical Approach. Magn Reson Med. 2006;56:958–964.
2. Chinzei K, et al. MR Compatible Surgical Assist Robot: System Integration and Preliminary Feasibility Study. MICCAI. 2000:921–930.
3. Stoianovici D, et al. MRI Stealth Robot for Prostate Interventions. Minim Invasive Ther Allied Technol. 2007;16(4):241–248.
4. Fischer GS, et al. Pneumatically Operated MRI-Compatible Needle Placement Robot for Prostate Interventions. Proc IEEE ICRA. 2008:2489–2495.
5. Yu N, et al. An fMRI Compatible Haptic Interface with Pneumatic Actuation. Intl Conf Rehab Robot. 2007:714–720.
6. Gassert R, et al. MRI/fMRI-Compatible Robotic System With Force Feedback for Interaction With Human Motion. IEEE/ASME Trans Mechatron. 2006;11(2):216–224.
7. Harja J, et al. Magnetic Resonance Imaging-Compatible, Three-Degrees-of-Freedom Joystick for Surgical Robot. Int J Med Robot. 2007;3(4):365–371.
8. Takahashi N, et al. An Optical 6-Axis Force Sensor for Brain Function Analysis Using fMRI. Proc IEEE Sensors. 2003 Oct;1:253–258.
9. Tada M, Kanade T. Design of an MR-Compatible Three-Axis Force Sensor. Proc IEEE/RSJ IROS. 2005 Aug:3505–3510.
10. Hirzinger G, et al. Advances in Robotics: The DLR Experience. Intl J Robot Res. 1999;18(11):1064–1087.
11. Taylor RH, et al. A Steady-Hand Robotic System for Microsurgical Augmentation. Intl J Robot Res. 1999;18(12):1201–1210.
12. Davies BL, et al. Active Compliance in Robotic Surgery - the Use of Force Control as a Dynamic Constraint. Proc Inst Mech Eng H. 1997;211(4):285–292.
13. Li M, Mazilu D, Horvath KA. Robotic System for Transapical Aortic Valve Replacement with MRI Guidance. MICCAI. 2008.
14. Sackier JM, Wang Y. Robotically Assisted Laparoscopic Surgery. From Concept to Development. Surg Endosc. 1994 Jan;8(1):63–66.
15. Melzer A, et al. Innomotion for Percutaneous Image-Guided Interventions: Principles and Evaluation of this MR- and CT-Compatible Robotic System. IEEE Eng Med Biol Mag. 2008;27(3):66–73.
16. Kapoor A, Li M, Taylor RH. Constrained Control for Surgical Assistant Robots. Proc IEEE ICRA. 2006:231–236.
17. Heindl J, Hirzinger G. Device for Programming Movements of a Robot. U.S. Patent No. 4589810. 1986.
18. Ishi C, et al. A Robust Speech Recognition System for Communication Robots in Noisy Environments. IEEE Trans Robot. 2008 Jun;24(3):759–763.