Adaptive optics (AO) describes a set of tools to correct or control aberrations in any optical system. In the eye, AO allows for precise control of the ocular aberrations. If used to correct aberrations over a large pupil, for example, cellular-level resolution in retinal images can be achieved. AO systems have been demonstrated for advanced ophthalmoscopy as well as for testing and/or improving vision. In fact, AO can be integrated into any ophthalmic instrument where the optics of the eye are involved, with a scope of applications ranging from phoropters to optical coherence tomography systems. In this paper, I discuss the applications and advantages of using AO in a specific system, the adaptive optics scanning laser ophthalmoscope, or AOSLO. Since the Borish award was, in part, given to me because of this effort, I felt it appropriate to select this as the topic for this paper. Furthermore, users of the AOSLO continue to appreciate the benefits of the technology, some of which were not anticipated at the time of development, so it is time to revisit this topic and summarize them in a single paper.
Scanning laser ophthalmoscopy was invented about three decades ago by Robert Webb 1. Its basic principles of operation are the same as those of the scanning laser microscope, which was invented in 1955 by Marvin Minsky 2, the only difference being that, in the ophthalmoscope, the eye's optics serve as the objective and the retina is always the sample. The SLO was a major development in ophthalmoscopy in the 20th century, and SLOs now form the basis for many commercially available ophthalmoscopes (e.g., Optos PLC, Dunfermline, Scotland, UK; Heidelberg Engineering, Heidelberg, Germany; Carl Zeiss Meditec, Dublin, CA).
As in a scanning laser microscope, the image in an SLO is generated over time by recording the scattered light from a focused spot as it is raster scanned across the region to be imaged. As such, it does not collect an image using film or a CCD array. Rather, the intensity of each pixel is recorded using a single light-sensitive detector, and the location of each pixel is inferred from the outputs of the scanning mirrors. Typically, this information is combined by a computer or frame grabber, which renders the final image.
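The pixel-assembly step described above can be sketched as follows; the function and array names are illustrative assumptions, not the actual frame-grabber interface:

```python
import numpy as np

def render_frame(samples, h_pos, v_pos, height, width):
    """Assemble an SLO frame from a stream of single-detector samples.

    samples       : intensities recorded by the single light-sensitive detector
    h_pos, v_pos  : per-sample pixel coordinates inferred from the scanner outputs
    height, width : dimensions of the rendered frame
    """
    frame = np.zeros((height, width))
    frame[v_pos, h_pos] = samples  # place each intensity at its inferred location
    return frame

# Toy example: a 4 x 4 region raster scanned row by row
h = np.tile(np.arange(4), 4)
v = np.repeat(np.arange(4), 4)
img = render_frame(np.arange(16, dtype=float), h, v, 4, 4)
```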
Manufactured optical systems do not come close to matching the performance of the eye and visual system. But in terms of optical quality, the eye operates at a fraction of its potential. Irregularities in the corneal and lens surfaces, as well as misalignments and relative tilts between the components, generate aberrations that cannot be corrected by conventional methods. As in any ophthalmoscope, these imperfections impose limits on the highest resolution that can be achieved in an SLO. Most conventional SLO applications are not seriously limited by aberrations, however, as their resolution demands have traditionally not been very high. But with improved resolution, the scope of applications for the SLO can be expanded greatly, as I hope to demonstrate in this paper.
The first attempt to use AO in an SLO was made by Dreher and colleagues in 1989 3. At that time a key component of an AO system, the wavefront sensor, was not used, and the AO system was capable only of correcting defocus and astigmatism. As such, the concept was laid out, but the improvements in image quality were modest. A short time later, out of the same lab, Liang et al. demonstrated for the first time a Shack-Hartmann wavefront sensor for the eye 4. With such a device, which could measure the eye's aberrations quickly and accurately, all the pieces were finally in place. In 1996, David Williams at the University of Rochester assembled a team including Junzhong Liang and Donald Miller (a PhD student at the time), who built the first AO ophthalmoscope capable of correcting higher-order aberrations 5. The system employed a conventional imaging modality, using a flash lamp to illuminate the retina and a science-grade CCD camera to record the image.
Extensive descriptions can be found in the literature on how an AO system works in general 6, or specifically for vision applications 7,8. In brief, an AO system employs a wavefront sensor to measure the eye's aberrations and a wavefront corrector to compensate for them. In most working instruments, the wavefront sensor is a Shack-Hartmann sensor and the wavefront corrector is a deformable mirror.
A schematic showing how AO is implemented in the AOSLO is shown in Figure 1. But before explaining the actual system, it is worth first discussing how resolution is mediated in the AOSLO. In an SLO, a light beam is focused to illuminate a small region of the retina. The light that scatters from the illuminated region passes back out of the eye, through the system, and onto the detector, yielding the intensity value for that location, or pixel. So, it follows that the resolution is governed, in part, by the size of the illuminated spot on the retina.
As shown in Figure 1, there is a long train of elements between the eye and the light delivery and detection arms of the system. The light beam entering the system reflects off the deformable mirror (DM) and two scanning mirrors prior to entering the eye. The returning light reflects off the same scanning mirrors and the DM prior to reaching the detector. By virtue of the reversibility of light, this returning light is 'descanned' prior to its arrival at the detector so that, even though the beam may be scanning across the retina, it is rendered stationary by the time it reaches the detector. This feature allows for placement of a key component, the confocal pinhole, in front of the detector in a plane that is conjugate to the focused spot on the retina. The confocality of the SLO allows for optical sectioning. The basic concept of optical sectioning is illustrated in Figure 2, which shows how only the light returning from the plane of focus can pass through the pinhole and reach the detector. Most of the light from other layers is blocked.
As mentioned at the beginning of this section, resolution is achieved, in part, by making the illuminated spot small. The lateral and axial resolution is improved further by making the confocal pinhole small, but that is only realizable if the light from the retina that is re-imaged onto the confocal pinhole plane is compact; otherwise very little light will make it to the detector. In fact, it is the double-pass point spread function that reaches the confocal pinhole 9, and the more compact it is, the better the axial and lateral resolution. So, AO is used in both directions: to make the focused light on the retina more compact, and to re-image the returning light back onto the confocal pinhole. The resolution of the confocal SLO is given by:

PSF(x,y) = PSFin(x,y) × [PSFout(x,y) ⊗ D(x,y)],
where PSFin and PSFout are the point spread functions in and out of the eye, respectively, D(x,y) represents the confocal pinhole aperture, and ⊗ denotes convolution. If the pinhole is tiny, then the PSF is simply the product of the ingoing and outgoing PSFs, making it narrower than the single-pass diffraction-limited PSF! When the confocal pinhole size is optimized, the lateral and axial resolutions for a 6 mm pupil and 600 nm light are 1.9 and 33 microns, respectively. With larger pupils or shorter wavelengths, the resolution improves further.
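A quick numerical check, using Gaussian stand-ins for the ingoing and outgoing PSFs (an assumption for illustration only), shows why the tiny-pinhole product is narrower: the product of two identical Gaussian profiles has a full width at half maximum smaller by a factor of the square root of two.

```python
import numpy as np

def fwhm(profile, x):
    """Full width at half maximum of a sampled 1-D profile."""
    above = x[profile >= profile.max() / 2.0]
    return above.max() - above.min()

# Gaussian approximation to the diffraction-limited single-pass PSF
# (the absolute width is arbitrary; only the ratio matters here)
x = np.linspace(-5.0, 5.0, 100001)
psf = np.exp(-x**2 / 2.0)

w_single = fwhm(psf, x)          # single-pass width
w_confocal = fwhm(psf * psf, x)  # tiny pinhole: PSFin x PSFout
# w_confocal equals w_single / sqrt(2): the confocal PSF is narrower
```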
The scanning/descanning feature of the SLO allows for a unique implementation of wavefront sensing. Figure 1 shows how the returning light is divided between the detector and the wavefront sensor. The beam is stationary at this point, so although the beam scans an extended field, the wavefront sensor sees the light as though it is coming from a single direction. As such, any wavefront sensor exposure that is longer than one frame period will average the wavefront over the field. This averaging does not degrade the fidelity of the wavefront measurement (the wave aberration is nearly isoplanatic over a typical imaged field), and it offers the following advantages:
The principle of reversibility of light means that the single wavefront measurement can be used to correct the aberration in both directions: to sharpen the focused spot on the retina, as well as to sharpen the image of that focused spot on the confocal pinhole in the return path.
Since our original 2002 paper10, many of the stated advantages of the AOSLO imaging modality have been realized by our lab as well as others. The remainder of this paper will demonstrate these advantages through a series of examples.
The obvious and direct outcome of using AO in an ophthalmoscope is retinal images and videos with high lateral resolution. This benefit is common to all ophthalmoscopic imaging modalities. There are limits on the light exposure that can be used to record a single image, so the highest signal-to-noise images are often obtained by averaging multiple images from the same location. But averaging multiple frames in the AOSLO is not straightforward. The scanning nature of the system means that each pixel and each line is obtained in sequence. At 30 frames per second, each frame takes about 30 msec to acquire. During that time, the eye will move by significant amounts in unpredictable directions. As a result, each frame is distorted by the unique eye motions that transpired during its acquisition, not unlike the distortions seen when a page is moved while being scanned or photocopied. With some effort, these distortions can be corrected, after which multiple frames can be added 11,12. Figure 3 shows a sequence of images of a photoreceptor mosaic that illustrates the motion correction procedure as well as the final product (also see Supplemental Digital Content 1, which shows the raw AOSLO movie with typical fixational eye movements used to generate the images in figure 3, and Supplemental Digital Content 2, which shows a stabilized version of the same movie). The contrast of AOSLO images is high for two reasons. First, the illuminated spot on the retina is small. Second, the confocal aperture blocks out-of-focus light from other layers that would otherwise reduce the contrast of the image. The upcoming section titled Confocal Sectioning describes other benefits of the confocal pinhole.
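The registration-then-averaging step can be sketched as below. This is a minimal whole-frame translation model for brevity; the published methods 11,12 register narrow strips within each frame to undo the intra-frame distortions.

```python
import numpy as np

def register_and_average(frames, ref):
    """Align each frame to a reference by FFT cross-correlation, then average.

    Only a rigid, whole-frame shift is corrected here; strip-wise
    registration is needed to remove true within-frame distortion.
    """
    h, w = ref.shape
    accum = np.zeros_like(ref, dtype=float)
    Fref = np.conj(np.fft.fft2(ref))
    for f in frames:
        # circular cross-correlation peaks at the frame's displacement
        xcorr = np.fft.ifft2(np.fft.fft2(f) * Fref).real
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        dy = dy - h if dy > h // 2 else dy  # wrap lags to signed shifts
        dx = dx - w if dx > w // 2 else dx
        accum += np.roll(f, (-dy, -dx), axis=(0, 1))  # undo the shift
    return accum / len(frames)
```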
The stabilized video itself offers additional benefits. As can be seen from the example movie, blood flow is visible in the AO-corrected retinal video. With well-stabilized movies, we can extract the parts of the retina where the motion is highest, revealing the capillary network 13. This is done by computing the variance of each pixel in a processed version of the stabilized video. Static features like photoreceptors vary little, while pixels whose scattering is modulated by flowing blood cells vary a great deal. The resultant motion image reveals the entire network of retinal capillaries. These images rival some of the best fluorescein angiographic measures and are obtained noninvasively.
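A minimal sketch of this motion-contrast computation, assuming the video has already been stabilized into a (frames, rows, cols) array:

```python
import numpy as np

def motion_contrast(stabilized, eps=1e-6):
    """Per-pixel temporal variance of a stabilized AOSLO video.

    Pixels over static structure (e.g. photoreceptors) vary little from
    frame to frame; pixels over perfused capillaries fluctuate as blood
    cells pass, so high-variance pixels trace out the capillary network.
    """
    stack = np.asarray(stabilized, dtype=float)
    var = stack.var(axis=0)          # variance over time at each pixel
    return var / (var.max() + eps)   # normalize to [0, 1] for display
```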
The confocal pinhole is typically aligned to be conjugate to the focused spot on the retina (i.e., an object at the focused spot will be imaged at the confocal pinhole and vice versa). Moreover, the conjugacy is maintained even if the focused spot is adjusted axially on the retina, provided that the focus is adjusted by an element (or elements) in the double-pass part of the optical path (see Figure 1). In the AOSLO, such a change in focus can be made by the deformable mirror. These focal changes allow one to image different layers of the retina, as shown in figure 4 (also see Supplemental Digital Content 3, which is a movie showing the entire through-focus video sequence that was used to generate the images in figure 4).
A byproduct of solving the problem of image distortion caused by eye motion is that the eye can be tracked with an accuracy and frequency that are unrivaled by the best eye tracking systems available today. In a sense, the distortions that appear in each image are a chart record of the motion that occurred during its acquisition. Owing to the high resolution of the image, the correction can be very accurate, yielding local eye position estimates that are a fraction of the size of a cone photoreceptor. The frequency of the tracking is limited only by the number of eye position estimates that are made within each frame. In our lab, we frequently use 30 estimates per frame to give eye traces at close to 1 kHz.
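The strip-wise estimation can be sketched as follows; the function name and the simple zero-padded correlation are illustrative assumptions, not the lab's actual implementation. Each strip of a frame is located within a reference image, and each located strip contributes one timestamped eye-position sample, so 30 strips per frame at 30 frames per second yields samples at 900 Hz.

```python
import numpy as np

def eye_trace(frame, ref, strips_per_frame=30, fps=30.0):
    """Estimate within-frame eye motion from horizontal strip offsets.

    Each strip is located in the reference frame by FFT cross-correlation,
    giving one (dx, dy) position estimate per strip, sampled at
    strips_per_frame * fps estimates per second.
    """
    h, w = ref.shape
    strip_h = h // strips_per_frame
    Fref = np.conj(np.fft.fft2(ref - ref.mean()))
    times, offsets = [], []
    for i in range(strips_per_frame):
        rows = slice(i * strip_h, (i + 1) * strip_h)
        padded = np.zeros_like(ref, dtype=float)
        padded[rows] = frame[rows] - frame[rows].mean()
        xcorr = np.fft.ifft2(np.fft.fft2(padded) * Fref).real
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        dy = dy - h if dy > h // 2 else dy  # wrap lags to signed shifts
        dx = dx - w if dx > w // 2 else dx
        times.append(i / (strips_per_frame * fps))
        offsets.append((dx, dy))
    return np.array(times), np.array(offsets)
```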
However, the full recovery of eye motion from AOSLO videos is still a work in progress, as the correction for the distortion in the reference frame as well as eye torsion remain issues that need to be dealt with fully. Nevertheless, figure 5 shows an estimate obtained from a 20-second AOSLO video (Supplemental Digital Content 4 shows the original movie from which the eye motion traces were computed).
The possibility of modulating the scanning laser in an SLO to project an image directly onto the retina has been appreciated since the time of its invention 1. In addition, the exact position of the modulation could be encoded directly into the video, since in many cases it was the imaging laser itself that was modulated to generate the image. Implementing this feature in the AOSLO, therefore, was not new, but the scope of its applications was expanded considerably by the ability to deliver AO-corrected stimuli over small regions of the retina. Such stimuli can be localized on the scale of cone photoreceptors 14. The following sections describe applications involving laser modulation and stimulus delivery.
The AOSLO can project aberration-corrected beams directly onto the retina. As such, it can be used to test improvement in visual acuity with aberration correction. We completed a study in which we compared AO-corrected acuity in emmetropes and low-to-moderate myopes and found that myopes do not perform as well after AO correction 15. Was the deficit because the cone spacing in myopes is larger, imposing a retinal limit to vision 16? At that time, we were unable to measure the cone spacing in each subject, since the task was done at the foveal center. However, with improvements to our adaptive optics control system, we are now able to resolve cones very close to the foveal center. In a recent study examining cone spacing and axial length, we reported that there was no correlation between axial length and cone spacing near the foveal center 17. In fact, axial myopes are more likely to have more cones sampling the retinal image than emmetropes. So, we are left to conclude that the myope's diminished acuity for AO-corrected images must be post-receptoral.
The nature of the stimulus that can be delivered is limited only by the technology in the AOSLO system. In our system, we can deliver animations, gray-scale images, and even stabilized images (more on that later). The animated stimulus has been used for experiments on fixation tracking and apparent motion detection. In the former experiment, we asked what part of the retina was used to place the image of moving objects that were being tracked, as opposed to stationary objects that were being fixated. The moving and the stationary fixation targets were generated by modulating the raster-scanning laser. Surprisingly, the loci of each were different, and neither was necessarily located at the point of maximum cone density 18. In the latter experiment, we studied the eye's ability to correctly judge apparent motion (the movement of a fixed object to a new location between frames, as in a motion picture) in the presence of continuous eye movements. Our interest in the eye's ability to judge motion stems from the fact that the motion of objects on the retina is the difference between the motion of the eye's line of sight and the actual motion of the object in the world. In some cases, eye motion might cause objects moving in a certain direction in the world to move in an inconsistent direction on the retina. Using the stimulus delivery feature of the AOSLO, we presented a series of frames to our subjects in which a stimulus pattern was shifted between two frames in an upward or downward direction within the raster. We found that, when given a frame of reference, the visual system manages to judge correctly how objects move in the world, even when eye movements had generated a confounding retinal motion. When the frame of reference is removed (i.e., the eye is looking at a moving object in the absence of any other visual cues), the eye still retains some of this ability, suggesting that the visual system uses some non-visual cues to know which way its eyes are pointing 19.
Like a scanning laser microscope, the AOSLO can be equipped with multiple sources 20. This feature is used for a multitude of applications, which will be described below.
In this application, an infrared channel is used to image the retina while a visible-light channel is used to stimulate a patch of the same field. When the photoreceptors are stimulated, many changes will occur: photopigment molecules will be photoisomerized, ions will be transported in and out of the photoreceptor cells and other retinal neurons, and choroidal and retinal blood flow will be redirected. Any or all of these changes may alter the scattering of infrared light. The location, time course, and magnitude of the scattering changes in response to visible-light stimulation will encode details of these changes, much like the ERG encodes the electrical changes in the retina. As such, monitoring these changes holds promise as an effective noninvasive way to measure retinal function. To date, several groups have made these recordings without AO modalities 21–23. Performing measures of this type with the AOSLO allows for good optical sectioning and high lateral resolution to localize the changes, but much more work has to be done to validate intrinsic signals as a clinical or basic research tool 24.
Although it is not a strict requirement, the use of multiple imaging wavelengths has facilitated some very innovative fluorescence imaging in the retina of human and animal eyes. In cases where the fluorescence signal is very weak (e.g., autofluorescence from retinal pigment epithelium cells), a simultaneous infrared reflectance video is crucial to provide eye motion correction for registration of multiple frames. In fact, simultaneous acquisition of a high signal-to-noise reflectance video allows for collecting and integrating any weak retinal signal, whether it is autofluorescence, phase contrast, two-photon, or any of the multitude of imaging techniques in the microscopist's arsenal that generate useful but weak signals. To date, the only fluorescence imaging results have come from the University of Rochester. Figure 7 shows two results, one from fluorescent agents injected into ganglion cells 25,26 and a second where the autofluorescence of lipofuscin was used to reveal the cellular structure of the RPE cells 27.
In collaboration with a group at Montana State University, we have sped up our eye tracking algorithms so that they can operate in real time. Moreover, by combining real-time eye tracking with multiwavelength operation and stimulus delivery, we have used the real-time tracking information from an infrared video to place a visible-light stimulus at a targeted retinal location. This technology has the potential to enable careful microperimetry on the scale of single cones, or could even be used to target therapeutic laser delivery to the retina. While no laboratory has yet demonstrated either of the aforementioned applications, we describe two unique applications below.
The retina is tiled with three classes of cones, sensitive to long (L), medium (M), and short (S) wavelengths. As such, we would expect the sensitivity of the retina to a small spot of a specific wavelength to vary according to the cone type that it is stimulating. In a pilot experiment, we performed local sensitivity tests on a region of a retina whose cones had been previously characterized 28. The wavelength of the imaging light was 840 nm, which provided a dim red background (~40 Td), and the stimulating wavelength was 680 nm. We measured the sensitivity of a small patch of retina comprising about 200 cones. As expected, the variations in sensitivity correlated with the three cone types on the retina. However, the correlation was weaker than expected because it proved difficult to control the transverse chromatic aberration (TCA) over the course of the experiment (TCA changes as the beam location in the pupil changes 20).
Under the right imaging conditions, the ability to monitor the transverse chromatic aberration between the imaging and the stimulus beams is much improved. In an experiment done in collaboration with Lawrence Sincich and Jonathan Horton at UC San Francisco, we tracked and stimulated individual cones while recording the activity of associated neurons in the lateral geniculate nucleus (LGN) of a macaque. Although neuroscientists have been measuring from single cells for decades, this represents the first time that single photoreceptors have been optically stimulated at the input end. Prior to this demonstration, control of the stimulation in these experiments had always been hampered by eye motion or optical blur, and most often both. Our experiments revealed that the receptive field centers of LGN neurons close to the foveal center are composed of multiple cones 29. Figure 8 shows the result of one of the measurements (see Supplemental Digital Content 5 for a stabilized video from the receptive field shown in figure 8). Future experiments promise to reveal the trichromatic inputs to LGN receptive field centers in the fovea.
The AOSLO has many potential applications, but there are drawbacks as well. Some of the challenges may be overcome as new technology becomes available, while others are more fundamental.
Since a point source is imaged onto the retina, the light will have some coherence, even if it comes from a low-coherence source. The coherence of the source gives rise to speckle and interference artifacts. These will only be overcome with broader-band light sources or by employing techniques to vary the phase between adjacent cones. Overcoming the interference from the light source is one aspect of ongoing research in our lab.
The SLO relies on detecting the magnitude of scattered or fluorescent light to determine intensity. Many of the sources of retinal scatter and fluorescence are very weak, so the signal-to-noise ratios for many retinal features are often too low for visualization. Interference-based detection methods, such as those used in OCT systems, have proven to be vastly more sensitive and reveal some of the dimmest retinal structures. Fortunately, interference-based detection techniques can be implemented in SLO systems, and some early results have been demonstrated 30.
AOSLO imaging is inefficient in the sense that the data are collected serially. Images are constructed pixel by pixel, and optical section by optical section. Spectral domain OCT systems, by comparison, acquire an entire axial scan at once, so a volume is generated in a single raster scan. In defense of the AOSLO, high-frequency scanners and fast detectors make the pixel rates very high, but there is much room for improvement. The AO line-scanning ophthalmoscope represents an example of how one can make the system much more efficient, with moderate compromises in resolution 31.
This paper is intended to show the advantages and applications of adaptive optics scanning laser ophthalmoscopy for basic and clinical science. As imaging and adaptive optics technology advances, systems will become more robust and more common, and new advances and discoveries will be inevitable.
SUPPLEMENTARY DIGITAL CONTENT 1. Movie 1 shows the raw AOSLO movie with typical fixational eye movements used to generate the images in figure 3. (mov)
SUPPLEMENTARY DIGITAL CONTENT 2. Movie 2 shows a stabilized version of the same movie. (mov)
SUPPLEMENTARY DIGITAL CONTENT 3. The movie shows the entire through focus video sequence that was used to generate the images in figure 4. (mov)
SUPPLEMENTARY DIGITAL CONTENT 4. The traces and power spectrum in figure 5 were generated from this AOSLO movie. (mov)
Thanks to all those who contributed directly to the work presented in this paper: David Arathorn, Dan Gray, Kate Grieve, Jonathan Horton, Girish Kumar, Kaccie Li, Jessica Morgan, Ethan Rossi, Lawrence Sincich, Scott Stevenson, Johnny Tam, Curt Vogel, David Williams, and Qiang Yang. The work was supported by NIH EY14375, NIH EY13299, and the NSF Science and Technology Center for Adaptive Optics, managed by UC Santa Cruz under cooperative agreement No. AST-9876783.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
The author holds a patent on the technology described in this paper. The patent is US Patent 7,118,216, titled “Method and Apparatus of Using Adaptive Optics in a Scanning Laser Ophthalmoscope”.