Lasers Surg Med. Author manuscript; available in PMC 2011 March 1.
PMCID: PMC3040371
NIHMSID: NIHMS269502

Semiautomated Intraocular Laser Surgery using Handheld Instruments

Abstract

Background and Objective

In laser retinal photocoagulation, hundreds of dot-like burns are applied. We introduce a robot-assisted technique to enhance the accuracy and reduce the tedium of the procedure.

Materials and Methods

Laser burn locations are overlaid on preoperative retinal images using common patterns such as grids. A stereo camera/monitor setup registers and displays the planned burn locations overlaid on real-time video. Using an active handheld micromanipulator, a 7×7 grid of burns spaced 650 μm apart is applied to both paper slides and porcine retina in vitro using 30 ms laser pulses at 532 nm. Two scenarios were tested: unaided, in which the micromanipulator is inert and the laser fires at a fixed frequency, and aided, in which the micromanipulator actively targets burn locations and the laser fires automatically upon target acquisition. Error is defined as the distance from the center of the observed burn mark to the preoperatively selected target location.

Results

An experienced retinal surgeon performed trials with and without robotic assistance, on both paper slides and porcine retina in vitro. In the paper slide experiments at an unaided laser repeat rate of 0.5 Hz, error was 125±62 μm with robotic assistance and 149±76 μm without (p < 0.005), and trial duration was 70±8 s with robotic assistance and 97±7 s without (p < 0.005). At a repeat rate of 1.0 Hz, error was 129±69 μm with robotic assistance and 166±91 μm without (p < 0.005), and trial duration was 26±4 s with robotic assistance and 47±1 s without (p < 0.005). At a repeat rate of 2.0 Hz on porcine retinal tissue, error was 123±69 μm with robotic assistance and 203±104 μm without (p < 0.005).

Conclusion

Robotic assistance can increase the accuracy of laser photocoagulation while reducing the duration of the operation.

Keywords: Retinal photocoagulation, robotics, visual servoing, micromanipulation

I. Introduction

Laser photocoagulation of the retina is a common adjunct in pars plana vitrectomy surgery for diseases such as diabetic retinopathy (1), retinal detachment (2), macular edema (3,4), branch vein occlusion (5), and intraocular foreign body removal (6). Once the anatomical goals of clearing the media and re-apposing the retina to the pigment epithelium have been achieved, three common laser procedures are used. Panretinal photocoagulation lowers the production of VEGF by ischemic retinal cells and decreases the potential for regrowth of neovascular tissue. Peripheral panretinal photocoagulation in retinal detachment seals retinal breaks and stimulates fibrous metaplasia, which prevents retinal breaks and retinotomies from leading to re-detachment of the retina. Focal laser patterns are used to surround retinal breaks created iatrogenically to drain subretinal fluid, to treat accidental retinal breaks that occur while scar tissue is being peeled, and to surround traumatic retinal breaks prior to removing intraocular foreign bodies.

Accuracy is important for optimal clinical results, as inadvertent photocoagulation of a retinal vein can cause occlusion of the vein, possibly leading to vitreous hemorrhage (7). Laser applications are frequently placed within one millimeter of the foveola, requiring careful avoidance of unintended targets such as the optic nerve and the fovea to prevent permanent central vision loss (8). Furthermore, accidental coagulation of a macular venule or arteriole can cause foveal ischemia or intraretinal fibrosis (9). In previously photocoagulated retinas, for best results, previous burn locations should also be avoided (10). Research efforts to improve the positioning of burns have yielded automated approaches (11), but system ergonomics and complexity have prevented clinical adoption (12).

A recently introduced approach by Blumenkranz et al. uses a system of mirrors mounted on a two-axis galvanometric scanner attached to a slit lamp, deflecting the laser beam to apply patterns of up to 50 pulses rapidly at a single command from the foot pedal (12). Patterns demonstrated by this system include circular arcs, lines, and rectangular and circular grids. The semi-automatic application of laser spots has the potential for significant reduction of treatment time, although the system could not always avoid previous laser burns when applying a new pattern (10). This approach has been commercialized as the Pascal Photocoagulator (10,13). However, it is designed for office use rather than for the operating room.

To extend the benefits of semi-automated systems to the realm of intraocular surgical application, we present the initial phases in the development of an assisted intraocular laser system that increases the speed and accuracy of the placement of laser burns. The system avoids accidental burns in dangerous areas such as the macula and optic nerve, and blocks firing of the laser when the distance between the target tissue and the laser is too small or too great. The robotic platform of this system for laser photocoagulation is Micron, a fully handheld active micromanipulator that has been reported previously (14,15). Micron uses frequency-multiplexed optical tracking to sense its own motion in six degrees of freedom (6DOF) and to control a 3DOF parallel manipulator built into the handle (16). By deflecting the tool tip or end-effector in real time, Micron can compensate for undesired motion such as physiological hand tremor (17), or actively guide the tip toward a known target (15), such as a desired laser burn location.

II. Materials and Methods

Our robot-assisted laser photocoagulation system is built around the handheld micromanipulator Micron, pictured in Figure 1, with the positions of target burn locations specified on a preoperative image of the retina. Knowing its location in space relative to the target burn locations, Micron can rapidly point the tip toward a target and fire the laser while compensating for the user’s tremor. As the operator moves the tip of Micron within range of a series of targets, Micron sequentially “snaps” to each target and fires a laser pulse.

Experiments were conducted under a board-approved protocol on paper slides and on porcine retinal tissue affixed to felt pads. Two sets of trials were performed on the paper slides at 0.5 Hz and 1.0 Hz, respectively, as these represent commonly used laser repeat rates. Another set of trials was performed in vitro on porcine retina at 2.0 Hz to determine the effectiveness of the system at a higher rate than normal.

A. System Setup

The Micron-assisted laser photocoagulation setup is shown in Figure 2. The operator uses a Zeiss® OPMI® 1 microscope to view the workspace under 25x magnification. Two Flea®2 cameras (Point Grey Research, Richmond, B.C., Canada) are mounted to the eyepiece, acquiring 30-Hz stereo video of the roughly 8×6-mm workspace at a resolution of 800×600. The real-time video is displayed to the operator on a 22″ 3D computer display (Trimon ZM-M220W, Zalman Tech Co., Ltd., Seoul, Korea). The computer display enables the system to easily overlay extra information on the real-time video, such as target locations or depth cues, to aid the operator.

Micron, pictured in Figure 1, is a fully handheld micromanipulator that self-actuates via Thunder® TH-10R piezoelectric actuators (Face International Corp., Norfolk, Virginia, USA) located between the handle grip and the tip of the instrument. Arranged in a 3-pointed star configuration, the piezoelectric actuators can flex in and out individually to give Micron a 3DOF range of movement that is approximately 0.5 mm axially and 1.8 mm transversely. The star-shaped manipulator has a maximum radius of 25 mm and a minimum radius of 14 mm. The handle has a diameter of 13 mm. The mass of the instrument is 30 g.

A custom optical tracking instrumentation system called ASAP uses two Position-Sensitive Detectors (PSDs) to detect four pulsed LEDs mounted inside small spherical diffusers near the intraocular shaft of the instrument. Accurate 6DOF position and orientation information for the tool tip and handle is obtained at 2 kHz.

An Iridex® Iriderm Diolite 532 Laser is attached to Micron, and the optical fiber from an Iridex® 20-gauge standard straight EndoProbe handpiece runs through the shaft of the instrument to the end-effector. When fired, the laser optic setup yields a laser spot size that is 200–400 μm in diameter.

B. Specifying Burn Locations

Planning software to preoperatively specify the location of each burn was written in LabVIEW® (National Instruments Corp., Austin, Texas, USA) (18). A retinal image captured through the microscope (or other imaging device, such as the binocular indirect ophthalmomicroscope (BIOM) (19)) at any time before the start of photocoagulation is used as the background upon which the desired pattern can be planned. The operator selects predefined patterns such as rectangular grids, circles, arcs, and ovals to place on the retinal image. A complete set of burn locations can be specified by moving, scaling, and stretching these primitive shapes. The whole pattern of target burn locations L, consisting of individual targets l = [x_l, y_l, 1]^T, can be loaded into the control software that runs during the operation. For the remainder of this paper, it is assumed that all 2D positions in the image are measured in units of pixels and are defined in homogeneous coordinates [x, y, 1]^T such that x ∈ ℝ⁺, y ∈ ℝ⁺.
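As a concrete illustration, a rectangular grid primitive like the one used in the experiments can be generated as a set of homogeneous 2D targets. This is a hypothetical sketch, not the authors' LabVIEW planning code; the center point and pixel spacing are illustrative values.

```python
import numpy as np

def grid_targets(center, spacing, rows=7, cols=7):
    """Build a rows x cols grid of homogeneous 2D burn targets
    l = [x, y, 1]^T centered on `center` (pixel coordinates).
    Sketch only; center and spacing are illustrative."""
    cx, cy = center
    xs = (np.arange(cols) - (cols - 1) / 2.0) * spacing + cx
    ys = (np.arange(rows) - (rows - 1) / 2.0) * spacing + cy
    # Row-major ordering: one row of the grid at a time.
    targets = [np.array([x, y, 1.0]) for y in ys for x in xs]
    return np.array(targets)  # shape (rows * cols, 3)

# 7x7 pattern; 20 px spacing stands in for the 650 um grid pitch.
L0 = grid_targets(center=(400.0, 300.0), spacing=20.0)
```

Scaling, stretching, or translating the primitive then amounts to multiplying each target by a 3×3 transform in the same homogeneous convention.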

C. Tracking of Burn Locations and Tip Positions

Registration of the preoperative image to the real-time video aligns the burn locations to the current frame, compensating for any movement of the eye. A number of possible algorithms can be used (20–22), but a straightforward feature-based approach of aligning interest points between the current frame of the video and the preoperative image works well for feature-rich images. Interest points are detected using Speeded Up Robust Features (SURF) (23) and aligned with a planar homography that is estimated using the standard computer-vision RANSAC algorithm (24).

More formally, the N_t interest points found in the image at time t are defined by their 2D positions p_i^t ∈ P^t, ∀i ∈ [1, N_t]. P^0 denotes the interest points in the preoperative image at t = 0. Interest points can be matched based on the similarity of SURF descriptors calculated at each interest point. The cumulative motion beginning from t = 0 is desired, so the K candidate matches of interest points between the preoperative image and the current frame at time t are defined as a set of pairs m_k^t = ⟨p_i^0, p_j^t⟩ = ⟨m_{k,0}^t, m_{k,1}^t⟩ ∈ M^t, i ∈ [1, N_0], j ∈ [1, N_t], k ∈ [1, K]. Assuming the interest points lie on a plane and undergo general motion, a 3×3 homography H^t can be used to describe the motion of the interest points from t = 0 to the current time t. RANSAC estimates H^t from the I inliers r_i^t ∈ R^t, where r_i^t = m_j^t | r_{i,1}^t ≡ H^t r_{i,0}^t, i ∈ [1, I], j ∈ [1, K], I ≤ K.

Burn locations l^t ∈ L^t are calculated by transforming the original locations in the preoperative image l^0 ∈ L^0 with the most recently calculated homography H^t such that l^t ≡ H^t l^0, ∀l^t ∈ L^t. Homographies H_L^t and H_R^t are calculated for the left and right images, respectively, and can be applied to the burn locations in L^t to obtain the target points in the left and right images, l_L^t ∈ L_L^t and l_R^t ∈ L_R^t. Thus, assuming an approximately planar scene, registration can be maintained throughout the procedure, keeping burn locations consistent with their initial placement on the preoperative image.
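Applying the estimated homography to the planned targets amounts to a matrix product followed by renormalization of the homogeneous coordinates. A minimal sketch, in which the translation-only H is purely illustrative:

```python
import numpy as np

def transform_targets(H, L0):
    """Map homogeneous 2D targets l^0 (rows of L0, shape (N, 3)) into
    the current frame: l^t = H l^0, then renormalize so the last
    coordinate is 1.  H is the 3x3 homography estimated by RANSAC."""
    Lt = (H @ L0.T).T
    return Lt / Lt[:, 2:3]

# Illustrative homography: pure translation by (5, -3) pixels.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
L0 = np.array([[100.0, 200.0, 1.0]])
Lt = transform_targets(H, L0)  # -> [[105., 197., 1.]]
```

The same routine covers the left and right images by passing H_L^t or H_R^t, respectively.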

In addition to registering the target burn locations, the vision system also tracks the 2D position of the color-coded tip of Micron, p_L^t and p_R^t, in real time in the left and right video views, respectively. A color-based algorithm (25) tracks the centroid of a blue patch painted on the end of the tip of Micron. A red finder beam generated by the laser while in treatment mode provides an aiming guide to the operator between firings of the laser. The aiming beam is important because it shows the location of the burn before the laser is activated, allowing for continuous targeting. The vision system also tracks the centroid of the aiming beam in both left and right views, yielding the 2D positions a_L^t and a_R^t.

Finally, the vision system is responsible for removing noisy images. During the execution of the procedure, video frames captured while the laser is firing are automatically detected and removed to avoid low contrast images and blooming effects caused by the intensity of the laser. The popular Intel® OpenCV library is used to implement the computer vision techniques.

D. Coordinate System Calibration

Only the vision system can sense the targets, the aiming beam, and the relative error between them. However, control occurs in the 3D space of ASAP by setting a 3D goal position and using a PID controller to reach that goal. This separation of sensing and control necessitates transformations between the image and ASAP coordinate systems. These are calculated by measuring the only variable observable in both coordinate systems: the tip position of Micron, which is measured in three places — the left image (p_L^t), the right image (p_R^t), and 3D space (P^t = [X, Y, Z, 1]^T, with X, Y, Z ∈ ℝ). Measurements of p_L^t and p_R^t are obtained from the color trackers, while P^t is measured by the PSD optical trackers in ASAP sensing the pulsed LEDs on the shaft of Micron. For notational convenience, the explicit time index is omitted from subsequent equations, with the assumption that all quantities refer to the current time step t unless otherwise noted.

The coordinate system transformation takes the form p_c = M_c P, ∀c ∈ {L, R}, where P is the homogeneous 3D coordinate of a point in space, p_c is the homogeneous 2D coordinate of P imaged in camera c ∈ {L, R}, and M_c is a 3×4 camera projection matrix. A preoperative 30 s calibration is used to measure the 3D tip position P and the corresponding 2D tip positions p_L and p_R, which are used to estimate the transformations M_L and M_R using the Direct Linear Transform (26).
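The Direct Linear Transform step can be sketched as follows. This is the textbook DLT formulation for the pinhole model p_c = M_c P, not the authors' implementation: each 3D–2D correspondence contributes two linear constraints, and the stacked system's null vector gives M_c up to scale.

```python
import numpy as np

def estimate_projection_dlt(P3, p2):
    """Estimate the 3x4 camera matrix M such that p ~ M P from N >= 6
    non-coplanar correspondences.
    P3: (N, 4) homogeneous 3D points; p2: (N, 3) homogeneous 2D points."""
    rows = []
    for P, p in zip(P3, p2):
        x, y = p[0] / p[2], p[1] / p[2]
        # Each correspondence yields two rows of the constraint matrix.
        rows.append(np.concatenate([P, np.zeros(4), -x * P]))
        rows.append(np.concatenate([np.zeros(4), P, -y * P]))
    A = np.array(rows)
    # The singular vector of the smallest singular value is M (up to scale).
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

In practice the correspondences would come from the 30 s calibration sweep of the tip through the workspace; the estimate is later refined online.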

Because this estimation is performed with precision in the micrometer range, even very slight shifts or movements in the camera or PSD arrangement can introduce significant errors into the transformations, as can nonlinearities of the optical tracking sensors caused by axial rotations of the tool. To rectify these errors, Micron employs an online recursive least squares approach that minimizes the error of the transformation estimate for each frame during the entire operation, refining and adapting the coordinate system transformations over time for camera c:

M_c ← M_c + η (p_c − M_c P) P^T
(1)

where (p_c − M_c P) is the error of the transformation and η governs how closely the online calibration algorithm adapts the transformation to the measured data. This adaptive calibration is important for maintaining an accurate transformation between coordinate systems throughout the procedure.
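Equation (1) is a single rank-one correction per frame, and can be sketched in a few lines. The value of η below is an illustrative placeholder, not a value reported in the paper; it must be small enough relative to ‖P‖² for the update to contract the error.

```python
import numpy as np

def update_projection(M, p, P, eta=1e-4):
    """One online correction step (Eq. 1): M <- M + eta (p - M P) P^T.
    M: current 3x4 projection estimate; p: observed homogeneous 2D tip
    position (3,); P: homogeneous 3D tip position (4,).
    eta (illustrative) trades adaptation speed against noise sensitivity."""
    residual = p - M @ P              # transformation error for this frame
    return M + eta * np.outer(residual, P)
```

Applied at every camera frame, this gradient-style step lets the transformations drift along with slow shifts of the camera/PSD arrangement.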

Using these transformations, we define two mappings between the coordinate spaces: image projection φ(P, M_c) → p_c and stereo triangulation Φ(p_L, p_R, M_L, M_R) → P. The first mapping, φ, projects a point P in 3D space, defined in the coordinate system of ASAP, to a 2D point p_c on the image seen by camera c. The second mapping, Φ, involves solving for the intersection of two rays in space: it triangulates a pair of 2D points p_L and p_R, seen by the left and right cameras, to find the most likely 3D point P that corresponds to the observations p_L = φ(P, M_L) and p_R = φ(P, M_R).
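The triangulation Φ can be realized with standard linear (DLT-style) triangulation: each camera observation contributes two linear constraints on P, and the best-fit point is the null vector of the stacked system. A sketch under that assumption, not the authors' exact solver:

```python
import numpy as np

def triangulate(pL, pR, ML, MR):
    """Linear triangulation for the mapping Phi: find the 3D point P
    whose projections through ML and MR best match the homogeneous 2D
    observations pL and pR."""
    def constraints(p, M):
        x, y = p[0] / p[2], p[1] / p[2]
        # x * (M row 3) - (M row 1) = 0, and similarly for y.
        return [x * M[2] - M[0], y * M[2] - M[1]]
    A = np.array(constraints(pL, ML) + constraints(pR, MR))
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1]
    return P / P[3]  # homogeneous 3D point [X, Y, Z, 1]
```

With noisy observations the two rays do not exactly intersect; the SVD solution returns the algebraic least-squares compromise.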

E. Control Mechanism

An earlier approach (15) measured the error directly from the images as the relative distance between the finder beam (where the laser is currently aiming) and the closest aligned target i (where the laser should be aiming), transforming the error into the 3D ASAP coordinate system:

e = Φ(l_L^i, l_R^i, M_L, M_R) − Φ(a_L, a_R, M_L, M_R)
(2)

A control signal can be generated directly by using the error as the input to a PID controller. Since both the aiming beam and the target lie on a locally planar surface, this control signal drives the aiming beam to the target without affecting the distance of the instrument from the surface. Once the target location has been acquired by the aiming beam, the laser can be fired. While this approach yields very accurate results, control signals can only be updated at the camera capture rate of 30 Hz, resulting in slow convergence and laser burn rates below 0.25 Hz.
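A per-axis PID loop of the kind described might look like the following sketch; the gains and the 30 Hz update interval are illustrative placeholders, not tuned values from the paper.

```python
class PID:
    """Minimal per-axis PID controller driving the aiming beam toward
    the target (one instance per axis of the 3D error e).
    Gains are illustrative placeholders, not values from the paper."""
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, dt=1.0 / 30.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev = None

    def step(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Because the error in Eq. (2) is only re-measured when a new camera frame arrives, this loop is clocked at 30 Hz, which is exactly the bottleneck the next approach removes.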

Instead of waiting for a new camera frame to directly measure the error between the target and the aiming beam, an alternative method is to reconstruct a model of the targets in the 3D ASAP coordinate system and use a model of Micron to update control signals between camera frames. Since the 3D tip position and pose are measured by ASAP at 2 kHz, control signals to point the tip toward the target can be generated much faster, leading to quicker convergence and a higher overall burn rate. Since the tip of Micron generally moves more rapidly than the retinal surface, the 3D model of burn targets can be updated at 30 Hz and still remain largely valid between camera frames. However, as this approach depends on good models of the targets and of Micron, burn accuracy is expected to be somewhat lower than with the approach of (15).

F. Surface Reconstruction

Developing a model of the targets can be accomplished by reconstructing the surface of the retina in 3D. Noting that burn locations are applied on the surface of the retina, the 3D surface can be reconstructed using dense stereo algorithms operating on the left and right images with each new set of frames (27). However, assuming a non-deformable surface, a more practical approach is to use structured light (28), since Micron is already equipped with a red aiming beam. Because of the simplicity of the retinal surface being reconstructed (i.e., no discontinuities, no large specular reflections, no opacity, no occlusions), complex coding of the structured light is unnecessary. By sweeping Micron back and forth above the surface and observing the intersection of the aiming beam with the surface in both images, the 3D structure of the surface S can be calculated by triangulating each pair of aiming beam observations a_L and a_R to a 3D point P_s belonging to the surface S:

P_s = Φ(a_L, a_R, M_L, M_R) | P_s ∈ S
(3)

While one could use any model for the surface S (plane, quadric, splines, etc.), our system uses a planar representation, fitting the observed 3D points P_s ∈ S to a plane with a least-squares algorithm. The planar assumption works for our experiments in vitro; however, a higher-order model such as a quadratic surface fit might be better suited for testing in vivo, in which the retina adheres to the curved shape of the eye. Only an initial 5–15 s calibration routine is necessary to collect enough correspondences from the structured light to reconstruct the planar surface. If desired, further refinement of the surface at time t can be calculated iteratively during the procedure to yield a time-varying surface S^t. Once the surface has been reconstructed, the burn locations from the left and right images, L_L and L_R, are projected onto the reconstructed surface S to get the 3D burn locations:

l_3D = proj(Φ(l_L, l_R, M_L, M_R), S) ∈ L_3D, ∀ l_L ∈ L_L, l_R ∈ L_R
(4)

where, in the planar case, the projection maps the 3D burn location to the plane. The purpose of this projection is to reduce noise in 3D target calculations, since the Z-component of the reconstructed point is subject to the most uncertainty.
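The least-squares plane fit and the projection in Eq. (4) can be sketched as follows; this is an assumed SVD-based formulation, not the authors' code.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points (shape (N, 3)).
    Returns a unit normal n and offset d such that n . x = d for
    points x on the plane; n is the direction of least variance."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]
    return n, n @ centroid

def project_to_plane(X, n, d):
    """Orthogonal projection of a 3D point X onto the plane n . x = d,
    suppressing the noisy Z-component of the triangulated point."""
    return X - (n @ X - d) * n
```

The same `fit_plane` routine, rerun on newly collected aiming-beam points, would yield the time-varying surface S^t mentioned above.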

G. Control System

Mimicking the standard surgical procedure of burning target locations in sequence, the controller selects the nearest target within range, deflects the tip to aim at it, fires the laser while locking the aiming solution, and only moves on to the next nearest target location after the current burn is completed. If the next nearest target is not within range or all preoperatively specified burns have been applied, the Micron tip goes to its neutral position. As the operator executes a fly-over maneuver with the instrument, Micron can quickly flick out as it goes by to burn passing targets. With this semiautomated method, the operator performs the gross motion by sweeping Micron over clusters of targets and lets the control system handle the exact positioning of the tip and timing of the laser activation.

The controller uses the tip position P, instrument rotation R, and targets L3D, all of which are known in the 3D ASAP space. The tip position and instrument rotation are sensed by ASAP at 2 kHz, while the 3D reconstructed targets are updated from the cameras at 30 Hz. This disparity in update rates is acceptable because the targets move slowly, if at all, during the procedure.

Selection of the current target is done by choosing the yet-untreated target that requires the least amount of movement from Micron’s neutral position. The neutral position is defined by all actuators being at the zero position, thereby allowing for the greatest movement in any arbitrary direction; functionally, this state is equivalent to Micron in the off state. Since the arrangement of actuators defines a pivot point near the base of the handle, conceptually this optimization of least movement finds the target location that requires the minimum rotation of the tool about the pivot.

Micron uses a PID controller to reach specified 3D goal positions, so a goal position that causes the aiming beam to hit the target must be calculated. This calculation forms the core of the control system and is highly dependent on accurate estimated transformations and target models. While any goal position on the line connecting the pivot point with the target would result in the laser aiming at the target, the best 3D goal position for Micron is the one that maximizes the available range of motion, to accommodate tremor and the gross sweeping movements of the operator.

Since the workspace of Micron is shaped like a disc centered at the neutral position with its thickness tapering off towards the edges, the maximum transverse motion can be achieved on the plane normal to the shaft of the instrument and intersecting the neutral position. Because the axial range of motion is small and actuation in this direction greatly reduces available transverse movement, the controller leaves the job of depth stabilization to the operator. With this maximal maneuverability constraint, goal positions are restricted to this neutral plane NP, yielding a single goal position PG for any combination of tip position P, instrument rotation R, and target location l3D. The 3D goal PG is calculated at 1 kHz by intersecting the ray connecting the pivot point and the selected 3D target with the neutral plane:

P_G = intersect(NP, ray(P_V, l_3D))
(5)

The pivot point is defined by the tip position offset by the shaft length vector D, rotated by the pose R of the instrument in space: P_V = P − R⁻¹D. When the PID controller is used to drive the tip directly to the goal point P_G, the system exhibits underdamped behavior with a tendency to overshoot the goal. Instead, a 30 ms minimum-jerk trajectory is planned to the goal position, which results in much better tracking and significantly less overshoot. Once the goal position is reached, the laser is activated. The goal position is adjusted at 1 kHz to reject tremor and other motion until the laser has finished firing, at which point the tip returns to the neutral position and another untreated target is selected. The procedure terminates when all targets have been treated. See Figure 3 for a graphical representation of the control system.
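The 30 ms minimum-jerk move to the goal can be generated with the standard quintic 10-15-6 blend, which has zero velocity and acceleration at both endpoints; this is the generic formulation assumed to match the trajectory described, not code from the system.

```python
def min_jerk(start, goal, t, T=0.030):
    """Minimum-jerk position at time t for a move of duration T (30 ms
    in the paper).  Uses the standard quintic blend
    s(t) = 10 s^3 - 15 s^4 + 6 s^5 with s = t / T, which gives zero
    velocity and acceleration at both endpoints."""
    s = min(max(t / T, 0.0), 1.0)       # clamp to [0, 1]
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return start + (goal - start) * blend
```

Sampling this profile at the 1 kHz control rate gives about 30 intermediate setpoints per move, which is what tames the overshoot of the raw PID step response.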

Small errors in the model of Micron or in the calibration can cause large errors during 3D reconstruction, so an additional 5–15 s calibration procedure is executed after the surface reconstruction. During this calibration, the discrepancy between the calculated intersection of the aiming beam with the reconstructed surface and the observed location of the aiming beam is recorded. The control system then shifts the surface by the measured mean error to better align the reconstructed surface with the observations. This correction usually remains valid for the remainder of the procedure, provided the operator does not significantly rotate Micron.

Several safety measures are designed to prevent firing errors and misplaced burns. The first safety check limits the maximum distance the Micron tip can move when deflecting to aim at a target. A conservative threshold is used to allow enough range of motion to accommodate the user’s tremor or gross hand movements during targeting and burning. The second safety check allows activation of the laser only after targeting is complete and if enough reserve manipulator range is available. In the case where Micron is close to the limits of its movement, it aborts and returns to the neutral position to avoid being unable to continue targeting the burn location while the laser is activated. The third check ensures proper Z-distance between the tip and target before firing the laser, because if the laser is too close or too far away, the tissue will burn unevenly or not at all. If at any point Micron decides the target is unreachable, the tip is returned to the neutral position and Micron begins the target selection process again. A block diagram of the entire system can be seen in Figure 4.

H. Paper Slide Procedure

For ease of initial development and testing, experiments were performed on pictures of a retina printed on a high-resolution color printer. The yellow/orange tones of the ink served as a good absorptive material for the laser, yielding distinct, solid black burns. The paper slide was placed under a formed face mask with the eyes hollowed out to give a more realistic operating environment; the Micron shaft was inserted through the eye slot during the procedure to place laser burns on the paper. The surgeon’s hand rested on a block next to the face mask; Micron was not braced against anything.

A 7×7 grid with the preoperative dot locations placed approximately 650 μm apart was selected as a good test pattern. Two different cases were tested during the experiment: unaided and aided. During the unaided case, the laser fires at a fixed frequency when the pedal is depressed. In this scenario, Micron is switched off, and it is the operator’s responsibility to move the tip of the instrument to accurately place the burns in time with the laser repeat rate.

During the aided case, Micron is turned on and actively helps the user. The operator is responsible only for the gross movement, while Micron handles the precise targeting and firing of the laser. As with the unaided case, for safety reasons, the laser fires only when the pedal is depressed; however, the laser firing mechanism is under software control instead of being pulsed at a fixed frequency. This allows Micron to operate as fast or as slow as the operator feels comfortable moving the instrument.

During the paper trials, the individual laser pulses were set for a duration of 45 ms and a power of 3.0 W. The laser repeat rate for the unaided case was set to 0.5 Hz in the first set and 1.0 Hz in the second set of experiments. During the 0.5 Hz set, the surgeon was instructed to go slowly and steadily during the aided case. In the 1.0 Hz set, the surgeon was asked to proceed as fast as he felt comfortable during the aided task.

I. Porcine Tissue Procedure

Tests with porcine retina in vitro were performed by the surgeon in a similar fashion to the paper slide procedure. Excised pig eyes were refrigerated until used and dissected immediately before the experiment. The retina was lifted out, placed on black felt, and smoothed out to form an even thickness. The prepared retina was mounted under the microscope (without the face model for simplicity). Preparation and mounting were done immediately before experimentation, to avoid drying of the tissue. The same 7×7 grid with 650 μm spacing was used with both unaided and aided scenarios. Laser pulses were 2.2 W for 27 ms, causing 200–300 μm diameter burns that appear on the retina as milky white spots. Because the surgeon was more familiar with the procedure by this time and the aided cases were averaging near 2.0 Hz for the paper slide trials, we fixed the laser repeat rate at 2.0 Hz for a more equitable mean error comparison during the porcine experiments.

J. Effects of Visualization and Tool Ergonomics

Two potentially significant differences exist between the setup used to perform the paper slide and porcine tissue experiments and the setup a surgeon typically uses: the visualization of the workspace and the ergonomics of the tool. We devised two experiments to test each of these factors to determine the potential impact on accuracy of burn placement.

While the computer monitor can display informative overlays, such as the targets in the video stream, the quality is not as high as viewing the workspace directly with the microscope. Specifically, the cameras capture images with a lower dynamic range and at lower resolution than the human eye. Additionally, a 60-ms lag is introduced in order to capture, process, and display the real-time video. Of particular interest is how these factors affect performance. To investigate this question, we executed an additional test similar to those mentioned earlier with a 7×7 grid, except that the grid targets were printed directly onto the paper slides and the surgeon used the normal microscope eyepieces instead of the cameras and video. Only the unaided trials with an inert Micron could be run under such conditions. The surgeon executed one trial at the 0.5 Hz repeat rate, and one at the 1.0 Hz repeat rate.

The other potential factor we investigated was the ergonomics of the tool. The IRIDEX EndoProbe standard straight 20-gauge handpiece typically used in practice is significantly lighter and thinner than Micron. To determine if the ergonomics of Micron were significantly impacting performance, we ran the same two experiments again at the 0.5 and 1.0 Hz repeat rates with the IRIDEX EndoProbe instead of an inert Micron.

K. Experimental Protocol

Experiments were performed by a vitreoretinal surgeon with more than 20 years of experience. For each set of experiments, eight trials were performed sequentially by the surgeon during a one-hour period, alternating between unaided and aided. Roughly one minute of rest was provided between the interleaved unaided and aided trials. Video of each trial was recorded, along with the target locations on the preoperative image. From this information, the mean error and duration for each trial were measured. Error was measured by taking the final frame of the video sequence and hand-marking the centroid of each burn. Individual error was calculated as the distance between the centroid of the burn and the target. Nearest-neighbor matching was used to match burns to targets. To avoid spurious matches, errors greater than the spacing between the target burn locations were discarded and noted as missed targets. Thus mean error was calculated only on the subset of targets where the burn location was in the neighborhood of the target. Statistical significance was determined by calculating p-values assuming a two-tailed test.
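The nearest-neighbor matching and miss-discarding rule can be sketched as follows; this is an illustrative reconstruction of the described metric (distances in micrometers), not the authors' analysis code.

```python
import numpy as np

def mean_burn_error(burns, targets, spacing=650.0):
    """Match each hand-marked burn centroid to its nearest target,
    discard matches farther than the grid spacing (counted as missed
    targets), and average the remaining distances.
    burns, targets: (N, 2) and (M, 2) arrays of 2D positions in um."""
    errors, misses = [], 0
    for b in burns:
        d = np.linalg.norm(targets - b, axis=1).min()
        if d > spacing:
            misses += 1          # spurious match: farther than grid pitch
        else:
            errors.append(d)
    return (np.mean(errors) if errors else float("nan")), misses
```

Mean error is thus computed only over the subset of burns that landed within one grid spacing of some target, as described above.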

III. Results

Figures 5 and 6 present sample trials of the aided and unaided cases for both the paper slide and porcine retina experiments. Mean error and mean duration for all sets of trials are shown in Figure 7 and Figure 8. P-values for the mean error were less than 0.005 for all three sets of experiments (p = 0.0011, 0.000014, and 2.2 × 10^-16 for the 0.5, 1.0, and 2.0 Hz trials, respectively). P-values for the trial durations were less than 0.005 only for the 0.5 and 1.0 Hz sets of experiments (p = 0.0019, 0.000050, and 0.0636, respectively). Table 1 lists the mean errors for the three sets of experiments, along with the overall reduction in mean error between the unaided and aided cases.
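A two-tailed p-value for a difference in mean error between aided and unaided trials can be approximated as follows. This sketch uses a Welch statistic with a normal approximation; the paper states only that a two-tailed test was used, so the exact test implementation is an assumption here.

```python
import math

def two_tailed_p(sample_a, sample_b):
    """Two-tailed p-value for a difference in means, using the Welch
    statistic with a normal approximation (reasonable for the dozens
    of per-burn errors in each trial set)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # P(|Z| >= |z|) for a standard normal Z.
    return math.erfc(abs(z) / math.sqrt(2))
```

For smaller samples, a t-distribution (e.g. scipy.stats.ttest_ind with equal_var=False) would be preferable to the normal approximation.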

Table 1
Reduction in overall mean error

In the investigation of the effects of different visualization systems, the mean error with the microscope view, when compared with the cameras and computer display view, was 58% lower at the 0.5 Hz repeat rate and 60% lower at the 1.0 Hz repeat rate. When running the same experiments with the normal probe instead of the inert Micron, the surgeon achieved error reductions of 72% and 68% in the 0.5 and 1.0 Hz cases, respectively.

IV. Discussion

The results demonstrate the general feasibility of using an active handheld surgical device to decrease spot placement error and procedure duration in intraocular laser retinal photocoagulation. In both the paper slide and porcine retina trials, Micron significantly reduced the positioning error. The mean error for the unaided cases shows an interesting but expected trend. At lower repeat rates, the surgeon has enough time to move the instrument and accurately place laser burns. At higher repeat rates, the mean error increases as the pace exceeds the surgeon’s ability to precisely control burn placement. The position error for the aided cases is generally consistent across experiments, indicating that Micron can maintain precise targeting even during fast gross movements of the instrument. Mean duration was consistently shorter for the aided cases. Excepting the 0.5 Hz case, in which the surgeon was instructed to execute the procedure slowly and methodically, mean duration was similar for all aided cases. Although the present demonstration is limited to simplified conditions in vitro, the experimental results show that a surgeon using the current prototype of Micron can perform the photocoagulation procedure with an accuracy improvement of 15% or more compared to the control case, and up to four times faster, depending on the laser repeat rate selected.

Furthermore, this approach has three attractive side effects. First, because targets are specified a priori and Micron incorporates safety features to prevent firing at non-target locations, accidental extraneous movements do not risk laser burns being applied inadvertently to dangerous areas, a risk that is present when the laser is in repeat mode and the surgeon moves unintentionally. Second, the surgeon is relieved of explicitly targeting burn locations and of avoiding critical areas such as the fovea or vasculature. Using our system, the surgeon is responsible only for the gross movements of the instrument, while the robotic assistance performs the precise targeting and firing. This level of automation reduces the surgeon’s cognitive workload, which is generally regarded as a worthwhile goal (29). Third, Micron’s pose information, combined with the 3D reconstruction of the retinal surface, enables a safety mechanism that fires the laser only when the tip-to-retina distance is within a pre-specified range. This prevents ineffectual burns when the tip is too far from the surface and “too hot” burns when the probe moves too close to the retina for the specified laser duration and power. Avoiding burns that are “too hot” lowers the risk of unintended choroidal and retinal neovascularization as a late complication of the laser surgery (30).
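The firing gate described above can be sketched as a simple predicate. All names and threshold values below are illustrative assumptions; the paper states only that the tip-to-retina distance must lie within a pre-specified range, and that the laser fires only at planned target locations.

```python
import math

# Illustrative values; the paper does not specify these numbers.
MIN_STANDOFF_MM = 2.0      # closer risks "too hot" burns
MAX_STANDOFF_MM = 6.0      # farther yields ineffectual burns
CAPTURE_RADIUS_UM = 100.0  # tip must be over a planned target to fire

def fire_permitted(tip_xy_um, planned_targets_um, tip_to_retina_mm):
    """Gate the laser: permit firing only when the tip is within the
    capture radius of some planned target AND the standoff distance
    from the retinal surface is inside the safe range."""
    if not (MIN_STANDOFF_MM <= tip_to_retina_mm <= MAX_STANDOFF_MM):
        return False
    return any(
        math.hypot(tip_xy_um[0] - tx, tip_xy_um[1] - ty) <= CAPTURE_RADIUS_UM
        for tx, ty in planned_targets_um)
```

In such a scheme, fired targets would also be removed from the planned set so each location receives exactly one burn.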

The present study is a proof of concept; substantial further work is necessary to bring the technique to clinical feasibility. The comparisons dealing with visualization and instrument ergonomics show that both affect performance. Viewing the workspace via cameras and a computer monitor degraded performance compared to viewing directly through the stereo operating microscope. How much of this degradation could be eliminated by training is not known. In any case, a system that injects graphics directly into the optics of the microscope would likely alleviate the problem. As for ergonomic factors, the 11% decrease in unaided accuracy with Micron compared with the IRIDEX EndoProbe confirms that the size and shape of the existing prototype, although small enough to be usable, nonetheless measurably degrade performance. To minimize this effect, a smaller and lighter instrument is presently under development. Future research must also include techniques for proper control in the presence of the fulcrum imposed by the entry point at the sclerotomy, and suitable calibration for accurate tracking and control of Micron when viewed through the cornea and lens of the eye. Because preoperatively selecting burn locations can be tedious, a semi-automatic method that suggests burn locations learned from previous surgeons’ patterns on the retina could further expedite the operation. For extended procedures, the capability to re-image and re-register views of the eye in order to plan or re-plan burn patterns intraoperatively could also be beneficial. Future evaluation in vivo in animal models will undoubtedly clarify the need for additional refinements.

Acknowledgments

This work was supported in part by the American Society for Laser Medicine and Surgery, by the National Institutes of Health (grant nos. R21 EY016359 and R01 EB007969), by a Graduate Student Research Fellowship from the National Science Foundation, and by the ARCS Foundation.

The authors would like to thank Cristina Robles Valdivieso, Joydeep Biswas, Sandrine Voros, and Gregory D. Hager for their help and advice. Special thanks to Greg Halstead and Rick Hurst of Iridex Corp. for assistance with the laser.

References

1. Shah CP. A randomized trial comparing intravitreal triamcinolone acetonide and focal/grid photocoagulation for diabetic macular edema. Evidence-Based Ophthalmology. 2009;10(1):29. [PMC free article] [PubMed]
2. Gupta B, Elagouz M, McHugh D, Chong V, Sivaprasad S. Micropulse diode laser photocoagulation for central serous chorio-retinopathy. Clinical & Experimental Ophthalmology. 2009;37(8):801–805. [PubMed]
3. Bandello F, Lanzetta P, Menchini U. When and how to do a grid laser for diabetic macular edema. Doc Ophthalmologica. 1999;97:415–419. [PubMed]
4. Scott IU, Ip MS, VanVeldhuisen PC, Oden NL, Blodi BA, Fisher M, Chan CK, Gonzalez VH, Singerman LJ, Tolentino M. A randomized trial comparing the efficacy and safety of intravitreal triamcinolone with observation to treat vision loss associated with macular edema secondary to central retinal vein occlusion: the Standard Care vs Corticosteroid for Retinal Vein Occlusion (SCORE) study report 6. Archives of Ophthalmology. 2009;127(9):1115–1128. [PMC free article] [PubMed]
5. Parodi MB, Bandello F. Branch retinal vein occlusion: classification and treatment. Ophthalmologica. 2009;223(5):298–305. [PubMed]
6. Ahmadieh H, Sajjadi H, Azarmina M, Soheilian M, Baharivand N. Surgical management of intraretinal foreign bodies. Retina. 1994;14(5):397–403. [PubMed]
7. Infeld DA, O’Shea JG. Diabetic retinopathy. Postgrad Med J. 1998;74(869):129–133. [PMC free article] [PubMed]
8. Frank RN. Retinal laser photocoagulation: benefits and risks. Vision Research. 1980;20(12):1073–1081. [PubMed]
9. Leaver P, Williams C. Argon laser photocoagulation in the treatment of central serous retinopathy. The British Journal of Ophthalmology. 1979;63(10):674. [PMC free article] [PubMed]
10. Sanghvi C, McLauchlan R, Delgado C, Young L, Charles SJ, Marcellino G, Stanga PE. Initial experience with the Pascal photocoagulator: a pilot study of 75 procedures. British Journal of Ophthalmology. 2008;92(8):1061. [PMC free article] [PubMed]
11. Markow MS, Yang Y, Welch AJ, Rylander HG, III, Weinberg WS. An automated laser system for eye surgery. IEEE Eng Med Biol Mag. 1989;8:24–29. [PubMed]
12. Blumenkranz MS, Yellachich D, Andersen DE, Wiltberger MW, Mordaunt D, Marcellino GR, Palanker D. Semiautomated pattern scanning laser for retinal photocoagulation. Retina. 2006;26(3):370–376. [PubMed]
13. Modi D, Chiranand P, Akduman L. Efficacy of patterned scan laser in treatment of macular edema and retinal neovascularization. Clin Ophthalmol. 2009;3:465–470. [PMC free article] [PubMed]
14. Riviere CN, Ang WT, Khosla PK. Toward active tremor canceling in handheld microsurgical instruments. IEEE Trans Robot Autom. 2003;19(5):793–800.
15. Becker B, Voros S, MacLachlan R, Hager G, Riviere C. Active guidance of a handheld micromanipulator using visual servoing. Proc IEEE Int Conf Robot Autom; Kobe, Japan. 2009. pp. 339–344. [PMC free article] [PubMed]
16. MacLachlan RA, Riviere CN. High-speed microscale optical tracking using digital frequency-domain multiplexing. IEEE Trans Instrum Meas. 2009;58(6):1991–2001. [PMC free article] [PubMed]
17. Riviere CN, Gangloff J, de Mathelin M. Robotic compensation of biological motion to enhance surgical accuracy. Proc of the IEEE. 2006;94(9):1705–1716.
18. Becker B, Voros S, MacLachlan R, Hager G, Riviere C. Active guidance of a handheld micromanipulator using visual servoing. Proc IEEE Int Conf Robot Autom; Kobe, Japan. 2009. pp. 339–344. [PMC free article] [PubMed]
19. Spitznas M. A binocular indirect ophthalmomicroscope (BIOM) for non-contact wide-angle vitreous surgery. Graefe’s Archive for Clinical and Experimental Ophthalmology. 1987;225(1):13–15. [PubMed]
20. Zitova B, Flusser J. Image registration methods: a survey. Image and Vision Computing. 2003;21(11):977–1000.
21. Maintz JBA, Viergever MA. A survey of medical image registration. Medical image analysis. 1998;2(1):1–36. [PubMed]
22. Hajnal JV, Hawkes DJ, Hill DLG. Medical image registration. CRC; 2001.
23. Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding. 2008;110(3):346–359.
24. Fischler MA, Bolles RC. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM. 1981;24(6):381–395.
25. Yang M-H, Ahuja N. Gaussian mixture model for human skin color and its applications in image and video databases. Proc SPIE. 1998;3656:458–466.
26. Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. Cambridge, UK: Cambridge University Press; 2003.
27. Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision. 2002;47(1):7–42.
28. Batlle J, Mouaddib E, Salvi J. Recent progress in coded structured light as a technique to solve the correspondence problem: a survey. Pattern Recognition. 1998;31(7):977.
29. Carswell C, Clarke D, Seales WB. Assessing mental workload during laparoscopic surgery. Surgical Innovation. 2005;12(1):80–90. [PubMed]
30. Montezuma SR, Vavvas D, Miller JW. Review of the ocular angiogenesis animal models. Semin Ophthalmol. 2009;24(2):52–61. [PubMed]