To report an image segmentation algorithm that was developed to provide quantitative thickness measurement of 6 retinal layers in optical coherence tomography (OCT) images.
Prospective cross-sectional study.
Imaging was performed with time and spectral domain OCT instruments in 15 and 10 normal healthy subjects, respectively. A dedicated software algorithm was developed for boundary detection based on a 2-D edge detection scheme, enhancing edges along the retinal depth while suppressing speckle noise. Automated boundary detection and quantitative thickness measurements derived by the algorithm were compared with measurements obtained from boundaries manually marked by 3 observers. Thickness profiles for 6 retinal layers were generated in normal subjects.
The algorithm identified 7 boundaries and measured thickness of 6 retinal layers: nerve fiber layer (NFL), inner plexiform layer and ganglion cell layer (IPL+GCL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer and photoreceptor inner segments (ONL+PIS), and photoreceptor outer segments (POS). The root mean squared error (RMSE) between the manual and automatic boundary detection ranged between 4 and 9 microns. The mean absolute values of differences between automated and manual thickness measurements were between 3 and 4 microns, comparable to inter-observer differences. Inner retinal thickness profiles demonstrated minimum thickness at the fovea, corresponding to normal anatomy. The OPL and ONL+PIS thickness profiles displayed a minimum and maximum thickness at the fovea, respectively. The POS thickness profile was relatively constant along the scan through the fovea.
The application of this image segmentation technique is promising for investigating thickness changes of retinal layers due to disease progression and therapeutic intervention.
Optical coherence tomography (OCT) is an imaging modality that provides cross-sectional images of the retina. The commercially available time domain OCT instrument provides software only for measurement of total retinal thickness, thereby offering limited information on the thickness of individual retinal layers. Thickness measurement of retinal layers is important as it provides useful information for detecting pathological changes and diagnosing retinal diseases.
Recently, various approaches have been described for retinal thickness measurements from OCT images. Ishikawa et al. described an approach to segment and extract thickness of retinal layers from Stratus and ultrahigh-resolution OCT images.1, 2 However, thickness measurements of only 4 layer segments were derived. Koozekanani et al. described a method for extracting the upper and lower retinal boundaries on OCT images with the use of a Markov model.3 While this algorithm has been shown to be robust, it is limited to detection of only 2 boundaries. Recently, a method based on an extension of the Markov model was reported for analysis of optic nerve head geometry.4 Determination of retinal nerve fiber layer thickness based on a deformable spline algorithm has also been reported.5 This algorithm was developed for tracking the nerve fiber layer in OCT movies and may be inadequate for detecting multiple layers with varying contrast and noise.
Several factors reduce the sharpness of retinal layer boundaries in OCT images and prove challenging for continuous edge detection and accurate retinal layer identification. We have developed a new algorithm to automatically detect retinal layer boundaries and measure the thickness of 6 layers within the retinal tissue. Our method has advantages over previously reported methods because it utilizes a customized filter for edge enhancement and a gray level mapping technique that overcomes uneven tissue reflectivity and intersubject variations. In this paper, we report the application of this method to time- and spectral domain OCT images, comparison of automated and manual segmentation, and a normal baseline for thickness of retinal layers.
Approval for use of human subjects was obtained from the Institutional Review Board of the University of Illinois at Chicago. The study was explained to the subjects, and informed consent was obtained according to the tenets of the Declaration of Helsinki. OCT imaging (StratusOCT) was performed in one eye of 15 normal subjects, 10 females and 5 males, 6 left and 9 right eyes. The subjects’ ages ranged between 42 and 78 years, with an average of 57 ± 11 years (mean ± SD). OCT imaging (RTVue OCT) was performed in one eye of 11 normal subjects, 10 females and 1 male, 6 left and 5 right eyes. The subjects’ ages ranged between 45 and 75 years, with an average of 56 ± 7 years (mean ± SD). Three images were obtained in each eye of 10 subjects. The subjects underwent ophthalmoscopic examination, and no abnormalities were identified.
Time-domain (TD) OCT imaging was performed using a Stratus OCT commercial instrument (Carl Zeiss Meditec, Dublin, California). Six radial OCT scans, each 6 mm in length, were acquired. Each radial OCT scan was 1024 pixels (2 mm) in depth and 512 pixels (6 mm) in length and thus had a depth resolution of 2 microns/pixel and spatial resolution of 12 microns/pixel. The vertical (Scan 1 of 6) and horizontal (Scan 4 of 6) OCT scans were analyzed. The raw grayscale OCT images were exported for analysis. An example of an OCT image is shown in Fig 1. Each cell layer within the retinal tissue displays different reflectance properties and produces peaks on a line profile across the retinal depth.
Spectral-domain (SD) OCT imaging was performed using a RTVue100 OCT commercial instrument (Optovue, Fremont, CA). Thirty-four raster OCT scans (MM5 protocol) were acquired and exported in TIF format. The vertical scan (Scan 17) obtained in 10 subjects was analyzed for deriving thickness profiles. Fourteen OCT scans in the foveal and parafoveal areas of one subject were selected for comparing automated and manual boundary detection. The 3 horizontal scans (Scan 3, 6, 9 of 34) and 3 vertical scans (Scan 14, 17, 20 of 34) were 640 pixels (2 mm) in depth and 669 pixels (5 mm) in length. The 4 horizontal scans (Scan 23, 25, 26, 28 of 34) and 4 vertical scans (Scan 29, 31, 32, 34 of 34) were 640 pixels (2 mm) in depth and 401 pixels (3 mm) in length. The scans had a depth resolution of 3 microns/pixel and spatial resolution of 7.5 microns/pixel.
A dedicated software program was developed in Matlab (The MathWorks Inc., Natick, Massachusetts) for automated image analysis, without manual processing. The image analysis algorithm6 segmented the OCT image by the following steps: 1) alignment of A-scans; 2) gray-level mapping; 3) directional filtering; 4) edge detection; and 5) model-based decision making.
Common reference points in each A-scan were identified using a two-pass edge detection algorithm,6 and A-scans were shifted so that each reference point was at the same level as in the neighboring column. At each pass, the images were smoothed using a Gaussian filter, the derivative of the image in the vertical direction (depth) was calculated, single-pixel edges were obtained using non-maximum suppression, and broken edges were eliminated by hysteresis thresholding, similar to Canny edge detection.7 Gaussian filtering with a larger support provided the two most prominent edges, corresponding to the inner limiting membrane (ILM) and retinal pigment epithelium (RPE). The image was aligned according to the RPE boundary by shifting each A-scan vertically. Due to the heavy blurring used to suppress noise, these locations were approximate. Therefore, a second pass was performed with a Gaussian filter with a smaller support for more accurate alignment. Only edges that coincided with the first pass were chosen, and A-scans were shifted to their final position. The blurring operations were only used to locate the RPE boundary initially and did not change the pixel intensity values. The image shown in Fig 1, Top was aligned using this algorithm and the output is shown in Fig 1, Bottom. The algorithm then aligned the A-scans according to a finer boundary, namely, the junction between the photoreceptor inner and outer segments (IS/OS). Alignment of A-scans was more critical for TD OCT images, which are more susceptible to eye motion artifacts because of their slower image acquisition, than for SD OCT images (Fig 4, Top Row). This alignment step may be eliminated for SD OCT images, since it did not significantly improve them.
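The alignment step can be illustrated with a simplified, single-pass sketch in Python (numpy/scipy); the function name, parameters, and synthetic image below are illustrative assumptions, not the authors' Matlab implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def align_ascans(img, sigma=4):
    """Shift each A-scan (column) vertically so the brightest reflector,
    a stand-in for the RPE, sits on a common row."""
    # Smooth along depth, then take the brightest row per column.
    smoothed = gaussian_filter1d(img.astype(float), sigma=sigma, axis=0)
    peaks = smoothed.argmax(axis=0)
    target = int(np.median(peaks))          # common reference row
    aligned = np.empty_like(img, dtype=float)
    for c in range(img.shape[1]):
        aligned[:, c] = np.roll(img[:, c], target - peaks[c])
    return aligned

# Synthetic B-scan: a bright band that drifts downward across columns,
# mimicking eye-motion misalignment between A-scans.
h, w = 100, 50
img = np.zeros((h, w))
band_rows = 40 + np.arange(w) // 5
img[band_rows, np.arange(w)] = 1.0
aligned = align_ascans(img)
```

After alignment, the bright band lies on a single row in every column; the paper's second, finer pass would then refine these shifts with a smaller Gaussian support.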
In OCT images, the region between the RPE boundary and ILM contains primarily 3 stratified brightness levels and 6 layers. The transitions between these layers produce edges with different strengths. For example, a typical transition from nerve fiber layer (NFL) to inner plexiform layer and ganglion cell layer (IPL+GCL) represents a change in an approximate gray level value from 1800 to 1100 (12-bit data), whereas a transition from outer plexiform layer (OPL) to outer nuclear layer (ONL) translates to a change in approximate gray level value from 1200 to 900. The second transition produces a weaker edge. To strengthen the weak edges, an adaptive gray-level mapping using two mapping functions was performed.6 The parameters that defined the mapping were determined adaptively for each OCT image using the expectation maximization (EM) algorithm.8 The two mapping functions, G1 and G2, obtained for the image in Fig. 1 are plotted in Fig. 2, Top Right. The boundaries between NFL and IPL+GCL, IS/OS interface and RPE layer were determined with G1 (Fig. 2, second row, left), while the remaining boundaries were determined with G2 (Fig 2, third row, left). The IS/OS interface appears flat because of the initial alignment step.
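As a rough illustration of how EM can pick the parameters of such a mapping, the following Python sketch fits a two-component 1-D Gaussian mixture with plain EM and uses a linear stretch between the fitted bands; the function names and the simple stretch are assumptions, not the authors' actual mapping functions G1 and G2:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Fit a 2-component 1-D Gaussian mixture with plain EM, a toy
    stand-in for the EM step that picks the mapping parameters."""
    mu = np.percentile(x, [25, 75]).astype(float)
    sd = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        d = (x[:, None] - mu) / sd
        p = pi * np.exp(-0.5 * d ** 2) / (sd * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return mu, sd, pi

def stretch_between(img, lo, hi):
    """Linearly map [lo, hi] onto [0, 1], clipping outside values;
    an illustrative mapping, not the authors' G1/G2."""
    return np.clip((np.asarray(img, float) - lo) / (hi - lo), 0.0, 1.0)

# Two gray-level populations like the OPL/ONL example in the text.
rng = np.random.default_rng(0)
levels = np.concatenate([rng.normal(900, 40, 2000),
                         rng.normal(1200, 40, 2000)])
mu, sd, pi = em_two_gaussians(levels)
```

Stretching between the two fitted means expands exactly the gray-level band that carries the weak OPL/ONL edge, which is the intent of the adaptive mapping described above.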
To overcome edge blurring, a 2-D filter with a wedge-shaped pass band was employed. The design of the directional filter banks was based on our previously published method.9 This filter suppressed the speckle noise that produces high-frequency variance in the horizontal direction, while preserving the detail in the vertical direction, which contained the boundary information. The filter preserved edges lying within ±45° of the horizontal axis, which emphasizes the importance of aligning the columns so that boundaries had a slope close to 0°. The frequency response of the filter is shown in Fig 2, Top, left. Images following gray-level mapping in Fig 2, left panels, were processed by the directional filtering algorithm and are displayed in Fig 2, right panels, respectively.
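A minimal frequency-domain sketch of such a wedge-shaped pass band follows, assuming a square image and a hard ±45° wedge about the vertical-frequency axis; the published filter-bank design9 is more sophisticated than this illustration:

```python
import numpy as np

def wedge_filter(img):
    """Keep spectral energy in the wedge |fy| >= |fx| (near-horizontal
    image structure, i.e. retinal boundaries) and zero out the rest,
    where horizontal speckle variation lives."""
    F = np.fft.fftshift(np.fft.fft2(np.asarray(img, float)))
    h, w = img.shape
    fy = np.arange(h) - h // 2
    fx = np.arange(w) - w // 2
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    mask = np.abs(FY) >= np.abs(FX)      # ±45° wedge about the fy axis
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

# Horizontal stripes (variation along depth) pass the filter; vertical
# stripes (speckle-like variation along the lateral axis) are removed.
stripes_lateral = np.tile(np.sin(2 * np.pi * np.arange(64) / 8), (64, 1))
stripes_depth = stripes_lateral.T
removed = wedge_filter(stripes_lateral)
kept = wedge_filter(stripes_depth)
```

This is why the earlier column alignment matters: the wedge only passes boundaries whose slope stays within ±45° of horizontal.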
An edge detection kernel based on the first derivative of a Gaussian in the vertical direction was used to obtain candidate contours for boundaries.6 The edge detection kernel was applied twice. First, the boundaries between each pair of adjacent bright and dark regions, with bright on the anterior side, were extracted. On the second pass, boundaries between each pair of adjacent bright and dark regions, with dark on the anterior side, were detected. The peak values were marked as edges, using non-maximum suppression and hysteresis thresholding.7 At the output of this step, the major significant contour segments and their polarities were marked. The output consisted of contour segments with gaps. The contour segments derived by edge detection were overlaid on the original image. Fig 3, Top, left displays the NFL boundary, and Figs 3, second row, left, and 3, top right, display bright-to-dark and dark-to-bright transitions, respectively.
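The polarity-aware response can be sketched with a first-derivative-of-Gaussian filter along the depth axis; non-maximum suppression and hysteresis thresholding are omitted from this sketch, and the function name and synthetic data are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def vertical_edge_response(img, sigma=2.0):
    """First derivative of a Gaussian along the depth axis (axis 0).
    Negative response: bright region above a darker one (first pass);
    positive response: dark above bright (second pass)."""
    return gaussian_filter1d(np.asarray(img, float), sigma=sigma,
                             axis=0, order=1)

# Synthetic B-scan: a bright layer over a dimmer layer, with the
# transition at row 50.
img = np.full((100, 20), 1.0)
img[50:] = 0.2
resp = vertical_edge_response(img)
```

Thresholding the negative and positive responses separately yields the two edge maps the paper's two passes produce.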
The classification and labeling of edges and the creation of continuous contours were performed in this step. The marking of the ILM and RPE boundaries in the first step was found to be robust. These boundaries were therefore used as reference points. A model was used that contained the approximate location of each boundary, the polarity of the edges that formed the boundary, and the relative location of boundaries with respect to each other. For any column in the image (along the retinal depth), starting from the top (anterior), we expected to encounter the boundaries between the NFL, IPL+GCL, INL, OPL, and ONL, which correspond to edges with the polarities (+1, +1, −1, +1), respectively. The set of contours obtained at the edge detection step that conformed to this particular order was labeled using this model. For a given column in the image, if the polarities of the contours did not match this order, they were not included in the selected contours. This selection was automatic. This method worked well for strong edges, but occasionally (13.3% of TD OCT and 6.6% of SD OCT images) some scattered light on the image was also marked as a boundary pixel. These false positives appeared as discontinuities along the boundary and were removed using a simple median filter. In the final step, the gaps along the boundary due to insufficient edge clues were filled. At this point, the model was fine-tuned using edge clues that were already labeled, and the gaps along the boundary were filled in according to the updated model. After RPE boundary detection, the image was aligned again according to the RPE boundary to maintain the curvature of the IS/OS interface. The connected contours are marked on the image in Fig 3, Bottom, which is the final output of the algorithm.
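The polarity-sequence check can be sketched as a simple subsequence match; this toy function and its greedy strategy are illustrative assumptions, with only the model tuple taken from the text:

```python
def match_polarity_model(polarities, model=(+1, +1, -1, +1)):
    """Greedily pick, in top-to-bottom order, the detected edges whose
    polarities realize the expected model sequence for one image column.
    Returns the picked edge indices, or None if the column cannot be
    matched (such columns are left for the gap-filling step)."""
    picked, m = [], 0
    for i, p in enumerate(polarities):
        if m < len(model) and p == model[m]:
            picked.append(i)
            m += 1
    return picked if m == len(model) else None

# Edge polarities found in one column, anterior to posterior:
good = match_polarity_model([+1, -1, +1, -1, +1])
bad = match_polarity_model([+1, -1, -1])
```

Columns whose edges cannot be ordered this way are skipped, which is how spurious edges from scattered light fail to enter the selected contours.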
On each OCT image, 7 boundaries were detected automatically by the segmentation algorithm. The same images were evaluated by 3 observers (MB, RZ and NB) by manually drawing the boundaries. ImageJ software (NIH, Bethesda, MD) was used to open the images, draw the boundaries, and then save the images. A Matlab program was written to compare images with automatically and manually drawn boundaries. The average root mean squared error (RMSE) values between the manually and automatically marked boundaries were calculated. Thickness profiles for each of the 6 retinal layers were calculated by first subtracting edge locations along the vertical direction (retinal depth) and then averaging thickness values along the horizontal direction at 200-micron intervals. Intrasubject variability (reproducibility) was determined as the standard deviation of 3 measurements in each subject, averaged over all subjects. Intersubject variability was determined as the mean standard deviation of thickness measurements across all subjects.
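The two comparison metrics can be sketched as follows, using the Stratus pixel scales quoted earlier (2 microns/pixel in depth, 12 microns/pixel laterally); the function names and the example traces are illustrative:

```python
import numpy as np

def boundary_rmse(auto_rows, manual_rows, microns_per_pixel=2.0):
    """RMSE, in microns, between two boundary traces given as the row
    (depth) index of the boundary in each image column."""
    d = np.asarray(auto_rows, float) - np.asarray(manual_rows, float)
    return float(np.sqrt(np.mean((d * microns_per_pixel) ** 2)))

def thickness_profile(upper, lower, depth_um_per_px=2.0,
                      lateral_um_per_px=12.0, bin_um=200.0):
    """Layer thickness between two boundary traces, averaged laterally
    over 200-micron bins (Stratus pixel scales by default)."""
    thick = (np.asarray(lower, float) - np.asarray(upper, float)) * depth_um_per_px
    cols = int(round(bin_um / lateral_um_per_px))   # columns per bin
    n = len(thick) // cols * cols                   # drop the remainder
    return thick[:n].reshape(-1, cols).mean(axis=1)

# A constant 2-pixel offset between traces gives a 4-micron RMSE, and a
# constant 25-pixel layer gives a flat 50-micron profile.
rmse = boundary_rmse([10, 10, 10, 10], [12, 12, 12, 12])
profile = thickness_profile(np.zeros(34), np.full(34, 25.0))
```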
The algorithm used the fact that the inner and outer plexiform layers appeared brighter than the inner and outer nuclear layers in the OCT image. In addition, the NFL, RPE and IS/OS appeared brighter than the rest of the layers. Overall, the algorithm identified the following 6 layers on the retinal image: nerve fiber layer (NFL), inner plexiform layer and ganglion cell layer (IPL+GCL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer and photoreceptor inner segments (ONL+PIS), and photoreceptor outer segments (POS), as marked on the sample OCT scan shown in Fig 3, Bottom. The POS layer was bounded by the IS/OS and RPE inner boundaries. The processing steps and layer segmentation for SD OCT images are depicted in Fig 4.
Seven boundaries were detected automatically by the segmentation algorithm in 15 TD OCT and 14 SD OCT images. The average RMSE between the manual and automatically marked boundaries on TD OCT images are presented in Table 1. The RMSE for detection of boundaries ranged between 3.5 and 9.9 microns (TD OCT). The average RMSE between the manual and automatically marked boundaries on SD OCT images are presented in Table 2. The RMSE for detection of boundaries ranged between 4.3 and 8.8 microns (SD OCT). For both the TD and SD OCT images, the least RMSE values were measured for detection of the ILM and IS/OS interfaces. The interface between the OPL and ONL had the largest RMSE value.
Thickness profiles of 6 retinal layers were derived based on automated and manual segmentation of 15 TD OCT images (vertical scans). All 6 retinal layers were identified on each scan acquired in each of the 15 eyes. The thickness profiles derived from the automated algorithm (solid line) and manual segmentation, average data from 3 observers (symbols), are shown in Fig 5. The automated and manual thickness profiles appeared similar. NFL thickness showed an increase superior and inferior to the fovea, as expected. The IPL+GCL demonstrated a foveal depression, corresponding to normal anatomy. The mean absolute values of differences between the automated and manual thickness measurements were calculated. Combining measurements from 6 layers resulted in differences of 3.7 ± 2.0 microns (Auto-MB), 4.2 ± 1.7 microns (Auto-NB), and 3.0 ± 1.1 microns (Auto-RZ). The inter-observer differences for 6 layers were 3.4 ± 1.7 microns (MB-NB), 4.4 ± 2.1 microns (MB-RZ), and 3.8 ± 2.6 microns (NB-RZ). A comparison of thickness measurements for each layer by automated and manual (average measurements by 3 observers) segmentation is shown in Table 3. Mean and standard deviation of thickness measurements by automated segmentation closely matched measurements obtained by manual segmentation. The absolute thickness difference ranged between 0 and 5 microns.
Thickness profiles of 6 retinal layers measured along the 6-mm TD OCT vertical scan, averaged over 15 subjects, are shown in Figure 6. The thickness profiles of the inner retinal layers (NFL, IPL+GCL, INL) demonstrated a minimum thickness at the fovea. The maximum thickness of the NFL was 48 microns along the 6-mm scan. The thickness of the IPL+GCL ranged between 13 and 102 microns. The thickness of the INL ranged between 7 and 40 microns. The OPL thickness profile displayed a minimum thickness at the fovea and ranged between 16 and 54 microns along the 6-mm scan. The ONL+PIS thickness ranged between 55 and 132 microns and the profile displayed a maximum thickness at the fovea. The OPL was thicker and the ONL+PIS layer was thinner superior to the fovea as compared to a similar location inferior to the fovea. The POS thickness profile was relatively constant along the 6-mm scan through the fovea, ranging between 34 and 45 microns. Thickness profiles of 6 retinal layers measured along the 5-mm SD OCT vertical scan, averaged over 10 subjects, are shown in Figure 7. Thickness profiles obtained from SD OCT images displayed similar characteristics to those observed from analysis of TD OCT images. The mean intrasubject thickness measurement variability of the NFL, IPL+GCL, INL, OPL, ONL+PIS, and POS was 3, 6, 4, 5, 4, and 2 microns, respectively. The mean intersubject thickness measurement variability of the NFL, IPL+GCL, INL, OPL, ONL+PIS, and POS was 5, 10, 4, 6, 8, and 4 microns, respectively.
The ability to measure thickness of retinal layers has potential value for early detection of pathologies and disease diagnosis. In this paper, a new segmentation algorithm for automated detection of retinal layer boundaries and measurement of the thickness of 6 layers within the retinal tissue was presented. The results of automated segmentation corresponded well with normal retinal anatomy and with measurements obtained using manual segmentation by 3 observers. Thickness profiles for 6 retinal layers were established in normal healthy subjects at 200-micron intervals along a 6-mm vertical distance on the retina.
Retinal layer boundary detection by our automated algorithm was similar to manual outlining of the boundaries by human observers. The calculated RMSE was between 3 and 10 microns for detection of 7 boundaries on both TD and SD OCT images. Thickness profiles derived by the automated algorithm were similar to profiles calculated from manual boundary detection. On average, the difference between automated and manual segmentation was ≤ 4.2 microns, and almost identical to the difference between manual measurements by 3 observers (≤4.4 microns).
Thickness profiles of the inner retinal layers (NFL, IPL+GCL, INL) displayed a minimum at the center of the fovea, corresponding to normal anatomy. At the foveal center, thickness profiles of the OPL and ONL+PIS displayed a minimum and maximum, respectively. Along the vertical scan, an asymmetry was observed in these layers. The OPL was thicker and the ONL+PIS was thinner superior to the fovea as compared to a similar location inferior to the fovea. The thickness asymmetry was observed on original TD and SD OCT vertical scans, even before image processing (Fig 1A), but not on horizontal scans. The observed asymmetry in reflectivity superior and inferior to the fovea was more prominent in TD OCT scans. The observed thickness asymmetry may be attributed to image acquisition artifacts or anatomy. Since most histological studies are based on temporal-nasal sectioning of human retinas, a literature search yielded no reported information with regard to anatomical variations along the superior-inferior axis. Additional studies are needed to investigate the origin of the observed thickness asymmetry.
Previously reported edge detection algorithms were implemented on each image column individually to avoid complications due to misalignment of A-scans.3 In contrast, we utilized the correlation between adjacent A-scans, which increases the effectiveness of edge detection. In previous studies, image noise was reduced by Gaussian filtering10 and variants of median filtering.2, 3 While these methods are effective in suppressing noise, they also tend to reduce edge sharpness. To overcome edge blurring, we employed a 2-D filter with a wedge-shaped pass band. This filter suppressed the speckle noise that produces high-frequency variance in the horizontal direction, while preserving the detail in the vertical direction, which contained the boundary information. Additionally, previous studies performed border detection initially on each A-scan individually, which limits the information from adjacent scans. Histogram equalization performed on each A-scan can compensate for uneven tissue reflectivity but may also cause instability by amplifying noise; our algorithm instead used an adaptive gray-level mapping. Inhomogeneous retinal pathologies may cause local changes in reflectance from retinal layers, and thereby limit boundary detection. The influence of these changes on our segmentation algorithm and thickness measurements needs further investigation. Future studies are needed to establish the utility of this algorithm for images obtained in patients with retinal pathologies. An important application of this technique is the detection and monitoring of early and uniform changes in thickness of retinal layers. In more advanced disease, the normal stratification of the retinal layers may be lost. The influence of these changes on detection of retinal layer boundaries needs further investigation. In some instances, it may be necessary to measure thickness of combined layers.
Since our algorithm determines thickness of 6 retinal layers, sums of layer thicknesses had to be calculated to compare results with previously reported techniques that measure thickness of up to 4 layers. The inner retina thickness measurements reported previously10 were 170 ± 39 microns, consistent with measurements (161 ± 33 microns) obtained by our algorithm as the sum thickness of NFL+IPL+GCL+INL+OPL. The outer retinal thickness measurements reported previously10 were 78 ± 10 microns, similar to thickness measurements of 77 ± 11 microns derived for ONL+PIS by our algorithm. Thickness measurements of the NFL and inner retina complex (IRC) were reported to be 28.4 ± 4.7 and 90.7 ± 4.2 microns.2 These values are consistent with our thickness measurements of the NFL (31 ± 6 microns) and IPL+GCL+INL (98 ± 19 microns). However, the previously reported2 thickness measurement of the OPL (51.0 ± 4.2 microns), based on low-resolution TD images (128 A-scans), is larger than the thickness measurement of the OPL (32 ± 8 microns) in the current study based on high-resolution TD images (512 A-scans). This discrepancy may be attributed to the difference in OCT image resolution. Outer retina complex (ORC) thickness has been reported to be 93.8 ± 7.3 microns2 and 91.1 ± 7.9 microns,1 which is lower than the thickness of ONL+PIS+POS, measured along the 6-mm scan to be in the range of 91 to 177 microns (114 ± 16 microns) by our algorithm. From histological studies,11 POS has been measured to be 20 – 30 microns, comparable to measurements of 37 ± 4 microns (TD OCT) and 35 ± 4 microns (SD OCT) in our study. The observed difference in the ORC thickness derived by our algorithm and a previously reported algorithm2 may be due to a dissimilarity in localization of the IS/OS interface and RPE. Additionally, in the current study results were obtained from a single OCT image, whereas in the previous study thickness was averaged over 6 radial OCT images and incorporated interpolated point values.
In summary, our algorithm has advantages over previously reported methods because it utilizes a customized filter to enhance edges along the retinal depth while suppressing speckle noise and a gray level mapping technique to overcome uneven tissue reflectivity and variance across subjects. The application of this technique is promising for investigating thickness changes of retinal layers due to disease.
a. Funding / Support (including none): Department of Veterans Affairs, Washington, DC, NIH grants EY14275 and EY01792, Bethesda, Maryland; and an unrestricted departmental grant from Research to Prevent Blindness, Inc., New York, New York.
b. Financial Disclosures (including none): None
c. Contributions to Authors in each of these areas: Design of the study (MS,AB,RA); Conduct of the study (MS,AB,RA); collection, management, analysis, and interpretation of the data (NB,MB,RZ,AB,RA,MS); preparation, review, or approval of the manuscript (MS,NB,MB,AB,RA).
d. Statement about Conformity with Author Information: The study was approved by University of Illinois at Chicago IRB and proper informed consent for participation in the research and HIPAA compliance were obtained.
e. Other Acknowledgments: None.
Ahmet M Bagci, MS, received his BS degree in 1999 and MS degree in 2001 in Electrical Engineering from Bilkent University. Currently he is a doctoral candidate at the Electrical and Computer Engineering Department at the University of Illinois at Chicago. His area of research is signal and image processing, development of algorithms and methods for segmentation of medical images.