IEEE Trans Biomed Eng. Author manuscript; available in PMC 2013 July 29.
PMCID: PMC3725742

Automated Segmentation of Intraretinal Cystoid Fluid in Optical Coherence Tomography


Cystoid macular edema (CME) is observed in a variety of ocular disorders and is strongly associated with vision loss. Optical coherence tomography (OCT) provides excellent visualization of cystoid fluid, and can assist clinicians in monitoring the progression of CME. Quantitative tools for assessing CME may lead to better metrics for choosing treatment protocols. To address this need, this paper presents a fully automated retinal cyst segmentation technique for OCT image stacks acquired from a commercial scanner. The proposed method includes a computationally fast bilateral filter for speckle denoising while maintaining CME boundaries. The proposed technique was evaluated in images from 16 patients with vitreoretinal disease and three controls. The average sensitivity and specificity for the classification of cystoid regions in CME patients were found to be 91% and 96%, respectively, and the retinal volume occupied by cystoid fluid obtained by the algorithm was found to be accurate within a mean and median volume fraction of 1.9% and 0.8%, respectively.

Keywords: Biomedical imaging, computer-aided diagnosis, macular edema, optical coherence tomography (OCT)

I. Introduction

CYSTOID macular edema (CME) is a pathological consequence of several ocular disorders including diabetic retinopathy, retinal vein occlusion, ocular inflammation, and age-related macular degeneration [1], [2]. Diabetic retinopathy and age-related macular degeneration are leading causes of irreversible blindness in the U.S. [3]. The number of people experiencing vision loss is predicted to double over the next 30 years. The presence of CME in these conditions is often associated with loss of visual acuity. New and improved methods are needed for the identification and characterization of CME to enhance prevention and inform treatment options for vision loss [4], [5].

Optical coherence tomography (OCT) depth resolves optical reflections from internal structures in biological tissues by using noninvasive, low-coherence light [6]. OCT is widely employed for the assessment of macular diseases [7] and has enabled detailed characterizations of CME [8]–[10]. As shown in Fig. 1, OCT is highly effective for visualizing CME because the cystoid fluid has less optical scattering than the surrounding retinal tissues. Typical methods for OCT-based assessment in disorders associated with CME involve the measurement of foveal thickness because of its strong anticorrelation with visual acuity [8]–[10]. However, a recent study describes CME without macular thickening in several retinal disorders, indicating that the two do not always co-occur [4]. Measurements of macular thickness can also be more error prone in the presence of subretinal fluid [11], [12].

Fig. 1
B-mode OCT image of human retina exhibiting CME. (Top panel) Original color image acquired from the Cirrus OCT. Retinal layers have been indicated as follows: RPE, OPL, ONL, ganglion cell layer, inner nuclear layer, inner plexiform layer, and NFL. (Bottom ...

Structurally, CME comprises a contiguous fluid-filled space containing columns of tissue; these spaces may falsely appear as separated cysts when viewed by OCT [13]. Recent findings suggest that retinal tissue volume can be a better predictor of visual acuity than central macular thickness in CME patients [13]. As such, we expect that the volume occupied by cystoid fluid in the retina may be a useful diagnostic metric. Automated methods for segmentation of the intraretinal cystoid fluid are necessary to efficiently assess an entire 3-D OCT image stack and to estimate the total cyst volume. In this paper, we describe an automated algorithm to segment fluid-filled regions within a 3-D image stack acquired from a Cirrus HD-OCT Model 4000 system (Carl Zeiss Meditec, Dublin, CA), and subsequently to compute the cystoid fractional volume, using only the information available via standard features on the OCT system.

Before segmentation in OCT images can be successfully achieved, denoising must be performed to mitigate the effects of speckle. Speckle occurs in both OCT and ultrasonic imaging, and arises from the random interference of waves reflected from subresolution variances within the object [14], [15]. Maintaining edge-like features in the image after speckle denoising is particularly important in segmentation applications. Specifically, in retinal image segmentation applications, OCT speckle denoising has been performed by various methods including a spatially adaptive wavelet filter [16], anisotropic diffusion filters [17]–[19], Bayesian least-squares estimation [20], and a combination of bilateral and median filtering [21], the latter of which is employed in this study. Importantly, we have adapted a new bilateral filter algorithm reported in [22] for OCT speckle denoising that provides a significant speed advantage over standard filters, enabling rapid processing of the OCT image stack.

Numerous methods for segmentation of retinal layers in OCT are available [17], [18], [21], [23], [24], including user-friendly software applications [25]. To our knowledge, however, there are currently no methods reported for the automated segmentation of the cystoid fluid volume in CME. This paper presents a fully automated process that identifies regions of cystoid fluid within the 3-D retinal stack, while eliminating false positives (FP) from regions of interest (ROIs) that lack the characteristics of the intraretinal fluid spaces.

II. Method

The Cirrus HD-OCT Model 4000 (Carl Zeiss Meditec) was used to acquire the OCT images with software version 5.2. OCT images were acquired from 16 patients with vitreoretinal disorders and evidence of intraretinal cysts, and three patients without intraretinal cysts. Images were acquired at the Kittner Eye Center, University of North Carolina at Chapel Hill, Chapel Hill, and were anonymized to comply with Health Insurance Portability and Accountability Act privacy standards. OCT image stacks comprised a 6 mm × 6 mm × 2 mm data cube with a voxel size of 15 μm × 47 μm × 7.4 μm in x × y × z, respectively. They were stored and analyzed in x–z (B-mode) frames of 405 × 270 pixels. We analyzed four full datasets (one CME and three control), which extended over 128 frames in y, and 15 partial datasets (all CME), which extended over 8–20 frames centered over the macula in y. The partial stacks were used because manual evaluation of the algorithm's accuracy was time consuming; since intraretinal cysts appeared in only a distinct subset of frames, small subsets allowed us to rapidly assess a larger number of patients.

In overview, our method involves the following steps in sequence: conversion to grayscale, retinal layer segmentation, median filtering, signal-to-noise ratio (SNR) balancing, bilateral filtering, thresholding, boundary tracing, and rejection of FPs. The entire method is implemented in MATLAB R2010a (MathWorks, Inc.). It is fully automated and runs as a single function, with the only user-defined input being the image stack files obtained directly from the Cirrus OCT. Each step is described in sequence below.

A. Color Mapping and Retinal Layer Segmentation

Initially, images obtained from the Cirrus OCT system are in a 24-bit color bitmap format, and contain a white segmentation line for the nerve fiber layer (NFL) and a black segmentation line for the retinal pigment epithelium (RPE) layer. We use these segmentation lines to define the upper and lower bounds, respectively, of the retina (retinal ROI) in which we segment the cystoid fluid.

First, to condition the images for analysis, we map the color bitmap to a grayscale image according to the National Television System Committee standard using the MATLAB function “rgb2gray.” While this function is not the true inverse of the proprietary color mapping used in the Cirrus OCT, it importantly maintains the relevant contrast between tissue and cystoid fluid regions (see Fig. 1). This step allows the algorithm to be implemented on other OCT imaging devices without the need for proprietary software. We note that software to directly obtain the gray level values and segmentation lines of data obtained from the Cirrus is available under a contractual agreement with Zeiss. Next, we identify the locations of the Cirrus NFL and RPE lines using the fact that they are each two rows thick with values of 255 and 0, respectively. An initial top-to-bottom search in the leftmost column identifies row positions for each line, and each adjacent column is then searched within the immediately neighboring rows. In instances where the NFL and RPE lines are noncontiguous, the algorithm instead interpolates them by an intensity density method, in which the average within a 5 × 5 pixel window is thresholded by empirically determined values of >35 for the white NFL curve and <22 for the black RPE curve. The highest row that satisfies these threshold conditions in each column is recorded, and each line is then interpolated by a fifth-order polynomial. These lines are then used to define the retinal ROI in each B-mode image (see Fig. 2).
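As an illustration of these two steps, the NTSC grayscale mapping and a simplified per-column search for the segmentation lines can be sketched in Python with NumPy (a sketch, not the authors' MATLAB implementation; the `tol` tolerance and the column-independent search are simplifications of the neighbor-constrained search described above):

```python
import numpy as np

def rgb_to_gray_ntsc(rgb):
    """NTSC luma weighting, as used by MATLAB's rgb2gray."""
    return rgb[..., 0] * 0.2989 + rgb[..., 1] * 0.5870 + rgb[..., 2] * 0.1140

def find_line_rows(gray, value, tol=0.5):
    """For each column, return the first (topmost) row whose pixel lies
    within `tol` of `value` (255 for the white NFL line, 0 for the black
    RPE line), or -1 if no such row exists."""
    rows = np.full(gray.shape[1], -1, dtype=int)
    for col in range(gray.shape[1]):
        hits = np.where(np.abs(gray[:, col] - value) <= tol)[0]
        if hits.size:
            rows[col] = hits[0]
    return rows
```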

Fig. 2
OCT image in Fig. 1 after identification of the RPE and NFL segmentation lines (shown in red), which define the lower and upper bounds of the retinal ROI, respectively.

B. Median Filtering and Signal-to-Noise Balancing

To suppress shot noise, we then apply median filtering (MATLAB function “medfilt2”) over 3 × 3 pixels in x × z. We then balance the apparent SNR of each retinal scan. This is performed because the SNR of OCT images varies from patient to patient, and adjusting the SNR ensures consistent segmentation of cystoid fluid. The apparent noise N in an image stack is taken as the mean pixel value within a 27 × 40 window in the upper left portion of the image. The signal S is taken as the mean pixel value within a 27 × 40 window located 54 pixels in from the right side of the image, beginning at the rows immediately below the bottommost row of the interpolated NFL curve. The values for N and S are averaged across the stack in y. The image data are then SNR balanced using the equation If = (I0 - N)/(S - N), where I0 is the initial pixel value and If is the final pixel value, which is stored as a floating point value between 0 and 1 (see Fig. 3, top panel).
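The SNR balancing step reduces to a linear rescaling per frame; a minimal Python sketch, in which the window locations are passed in as slice pairs (an assumed interface) rather than the fixed 27 × 40 windows described above:

```python
import numpy as np

def snr_balance(image, noise_win, signal_win):
    """Linearly rescale a B-mode frame so that the mean of a background
    (noise) window maps to 0 and the mean of a tissue (signal) window
    maps to 1: If = (I0 - N)/(S - N). Windows are (row_slice, col_slice)
    pairs; their placement is the caller's responsibility."""
    n = image[noise_win].mean()
    s = image[signal_win].mean()
    return (image - n) / (s - n)
```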

Fig. 3
(Top panel) Median-filtered and SNR-balanced OCT image. (Bottom panel) Bilaterally filtered OCT image.

C. Bilateral Filtering

Bilateral filtering preserves edges while smoothing image data by weighting neighboring pixels based both on distance and on similarity in pixel intensity. However, computing a bilateral filter in its Gaussian functional form is computationally expensive and impractical for OCT image stack analysis. Here, we employ a fast bilateral filtering method described in [22] that extends the 2-D image to a 3-D space and strategically downsamples to speed up the filter without adversely influencing the quality of the results. The photometric spread and the geometric spread were, respectively, σp = (intensity range/10) = (1/10) = 0.1 and σg = (width/16) = (270/16) ≈ 17 pixels. The computation time per B-mode image by a traditional bilateral filter (MATLAB function “bfilter2”) is ~4 s, while the computation time per image by the method in [22] is only ~0.4 s. We note that this per-image processing time is competitive with CPU times reported for other speckle denoising methods [19], [20], with the caveat that an exact comparison can only be drawn using the same image data on the same processor. As shown in the bottom panel of Fig. 3, the bilateral filter is effective at denoising the speckle while maintaining crisp edges between cystoid fluid and retinal tissue.
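For reference, the weighting can be made concrete with a brute-force bilateral filter in NumPy; this is the slow textbook form, shown only to illustrate the roles of the two spreads, not the fast approximation of [22] that the method actually uses:

```python
import numpy as np

def bilateral_filter(img, sigma_g, sigma_p, radius=None):
    """Brute-force bilateral filter. sigma_g: geometric spread in pixels;
    sigma_p: photometric spread in intensity units. Each output pixel is a
    weighted mean of its neighborhood, with weights that fall off with both
    spatial distance and intensity difference."""
    if radius is None:
        radius = int(2 * sigma_g)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    # Precompute the spatial (geometric) Gaussian kernel.
    ax = np.arange(-radius, radius + 1)
    gx, gy = np.meshgrid(ax, ax)
    spatial = np.exp(-(gx**2 + gy**2) / (2 * sigma_g**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Photometric weight: penalize pixels dissimilar in intensity.
            photometric = np.exp(-(patch - img[i, j])**2 / (2 * sigma_p**2))
            weights = spatial * photometric
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

With a small photometric spread, pixels across a sharp intensity step receive negligible weight, which is why the cyst/tissue boundaries survive the smoothing.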

D. Thresholding and Boundary Tracing

Potential cystoid ROIs were then defined as contiguous pixel regions within the retinal ROI that fell below an empirical value of 31, as shown in the top panel of Fig. 4. We chose this threshold to be very sensitive but of low specificity, with the plan to reject FPs in the following step. To enable this next step, we traced the thresholded pixel boundaries using a Moore-neighbor tracing algorithm modified by Jacob's stopping criterion [26], [27] in each 2-D B-mode image, as provided by the “bwboundaries” function in MATLAB. This defines a discrete number of contiguous regions (the cystoid ROIs) in each B-mode image.
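This step can be sketched in Python; here connected-component labelling (SciPy's `ndimage.label`) stands in for MATLAB's Moore-neighbor boundary tracing, since both yield the same discrete set of contiguous candidate regions:

```python
import numpy as np
from scipy import ndimage

def cyst_candidates(frame, threshold=31):
    """Threshold a B-mode frame and label contiguous dark regions as
    candidate cystoid ROIs. The default threshold of 31 is the paper's
    empirical value; its intensity scale is assumed to match that of the
    input frame."""
    mask = frame < threshold
    labels, n_regions = ndimage.label(mask)  # 4-connectivity by default
    return labels, n_regions
```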

Fig. 4
Thresholded OCT images. (Top panel) Before cyst discrimination. (Bottom panel) After cyst discrimination.

E. Rejection of FPs

While the process described previously was tailored to catch as many cystoid ROIs as possible, we found that, in practice, it also identified a number of FP CME regions. To improve the specificity, we employed two criteria to reject FP ROIs. First, cystoid ROIs had to have a traced area of at least 7 pixels. As shown by comparing the top and bottom panels of Fig. 4, this tended to remove FPs within the outer plexiform layer (OPL), where the optical scattering signal was lower. Second, we found that true positives (TPs) had a reasonably uniform pixel intensity distribution, and we therefore rejected regions whose pixel values exhibited a standard deviation greater than an empirically determined value of 45. This tended to reject FPs from blood vessels, which cast a shadowing artifact into the layer immediately below. However, we found that in stacks of low SNR this criterion also caused rejection of some TPs. For this reason, the pixel uniformity metric was made switchable: if the SNR was ≥22, the metric was applied; otherwise, the data were processed without it.
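The two rejection criteria and the SNR switch can be sketched as follows (threshold values are taken from the text; the per-region loop over a label image is an implementation assumption):

```python
import numpy as np

def reject_false_positives(frame, labels, n_regions, snr,
                           min_area=7, max_std=45.0, snr_gate=22.0):
    """Keep only candidate ROIs that (1) cover at least `min_area` pixels
    and (2) have a pixel-intensity standard deviation <= `max_std`; the
    uniformity test is skipped when the stack SNR is below `snr_gate`,
    mirroring the switch described in the text."""
    keep = np.zeros_like(labels, dtype=bool)
    for region in range(1, n_regions + 1):
        pixels = frame[labels == region]
        if pixels.size < min_area:
            continue  # too small: likely an OPL false positive
        if snr >= snr_gate and pixels.std() > max_std:
            continue  # nonuniform: likely a vessel shadow
        keep[labels == region] = True
    return keep
```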

Fig. 5 displays the final cystoid ROIs overlaid on the original retinal image in Fig. 2.

Fig. 5
Overlay of an original grayscale OCT image with the final cystoid ROIs displayed as white.

III. Experimental Results and Discussion

To assess the performance of this automated method, it was applied to all 19 OCT image stacks, and the results were manually evaluated to determine the sensitivity and specificity with which it segments cystoid ROIs. The evaluation was performed under the supervision of a board-certified ophthalmologist specializing in vitreoretinal disorders. A custom GUI was implemented in MATLAB to enable easy recognition and tabulation of both FPs (noncystoid regions incorrectly assigned as cystoid ROIs) and false negatives (FNs) (cystoid regions that were not identified by the algorithm). The number of TPs was then defined as the difference between the total number of cystoid ROIs and the number of FPs. While true negatives (TNs) are conventionally defined as noncysts correctly identified as noncysts, within the context of this study we defined them as candidate cysts produced by the thresholding step (see Section II-D) that were discarded by the rejection protocols (see Section II-E). The sensitivity and specificity were then calculated according to sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP).
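The two figures of merit follow directly from the tabulated counts; as a trivial Python helper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Figures of merit as defined in the text:
    sensitivity = TP/(TP + FN), specificity = TN/(TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```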

The computational time needed to perform this method is minimal. We find that a full (noncontrol) stack of 128 frames is processed in 2.6 min on a 32-bit PC with 3 GB of RAM and a 2-GHz processor. The step that consumes the most time is the bilateral filtering (44%). Further reduction in computation time might be obtained by implementing the algorithm in a compiled programming language.

We then tabulated the total volume occupied by cystoid ROIs identified by the algorithm, the volume occupied by the sum of the TP and FN cystoid ROIs (the total actual volume), and the total retinal ROI volume. The fractional volumes of cystoid ROIs within the retina (both according to the algorithm and the true values) were then computed and are displayed in Table I. As shown, the average sensitivity and specificity are 91% and 96%, respectively, and in all but one case both were ≥86% for the patients presenting with CME. In the one exception (the second-to-last dataset), we found poor sensitivity (74%) due to poor contrast in that image set. We also note that the third dataset (a control) exhibited an unusually large volume error of 12%; this was due to the image set having excessively poor SNR, resulting in the outer plexiform and outer nuclear layers (ONLs) being falsely identified as CME.

Results of CME Segmentation in 16 Patients With Vitreoretinal Disease and Three Controls Using the Methods Described

On average, the cystoid fractional volume in the CME patients was 10% by the algorithm and 12% by manual inspection. (We note that, since partial stacks were used in 15 of the 16 CME cases, the fractional volumes may overestimate those of the entire stacks.) The slight (2%) underestimation by the algorithm is due to FNs. An example of this can be seen by comparing Figs. 1 and 5, where the cystoid ROI second from the right is rejected by the algorithm. This occurs because, despite the efficacy of the filters, erosion of the perimeters of cystoid ROIs is still possible, and the intensity of the inner area may be brightened just enough that the region does not survive the threshold procedure. Generally, these FNs occur for small regions, which have less impact on the total volume. FPs, on the other hand, tend to occur in regions of low signal arising from structures such as blood vessels; however, our pixel uniformity criterion tends to reject a large portion of these, as shown in Fig. 6. Others have reported discrimination of blood vessels based upon shadow artifacts [21], which may be one way of improving this method in future work.

Fig. 6
Example of FPs that are rejected by the algorithm based on size and pixel uniformity. (Top panel) OCT image. (Middle panel) Associated cystoid ROIs before discrimination. (Bottom panel) Cystoid ROIs after discrimination. Small cyst ROIs tend to lie in ...

This study has focused on OCT images obtained from a Cirrus 4000 OCT system. Because the method includes SNR balancing, we expect that retinal images from other OCT systems with similar SNR and resolution can be segmented using the same settings and threshold value as in this study. More advanced and research-grade systems may require some modifications as follows: 1) higher spatial sampling per resolution volume would require a proportionally higher geometric spread in the bilateral filter; 2) higher dynamic range may require a higher photometric spread in the bilateral filter; 3) higher SNR may dictate a lower threshold value; and 4) higher spatial sampling would dictate a proportionally larger number of cystoid ROI pixels for the rejection of FPs. Currently, the accuracy of the segmentation algorithm is limited primarily by poor SNR, which should improve with newer systems that offer higher SNR.

IV. Conclusion

This paper presented a method to segment and quantify the total volume occupied by CME from OCT image stacks, in order to provide a metric that can be evaluated as a potential diagnostic for visual acuity. While the average sensitivity, defined by counting individual cystoid ROIs, was 91%, we found that missed cystoid ROIs were typically small and did not contribute much error to the total volume estimation. The average fractional volume of CME in our sample set of 16 CME patients was 10%, and our average error in fractional volume was 1.9% when comparing against results by manual inspection. Importantly, the median difference between the cystoid fractional volume by the algorithm and by manual inspection was only 0.8%. This suggests that, in most patients (excepting a few outliers in our study), this algorithm represents an accurate method of total cystoid volume assessment.

To establish this method as an effective tool, further validating studies are needed to determine whether the specificity, sensitivity, and reproducibility are sufficient using a broader patient base. In particular, it is important to verify that this technique has the ability to distinguish intraretinal cysts from other features such as subretinal fluid or an epiretinal membrane draped over an irregular inner retinal surface. While the 2.6 min processing time is sufficient, further reduction in the processing time may allow for more extensive studies. Another important feature would be incorporation of a graphical interface to compare intraretinal cysts over multiple time points. Currently, this method is fully automated and operates on images directly obtained from a Cirrus HD-OCT system with only standard features. As such, we expect that this method can be broadly employed and will provide a new and accurate metric for clinical analysis.


The authors would like to thank S. Moyer and D. Mills in the Department of Ophthalmology, University of North Carolina (UNC) at Chapel Hill, Chapel Hill, for aid in obtaining OCT images, and Prof. M. Niethammer and Prof. R. Taylor in the Department of Computer Science, UNC, for helpful discussions on visualization.



Gary R. Wilkins was born in Salisbury, NC, in 1988. He received the B.S. degree in mathematics from the University of North Carolina (UNC) at Chapel Hill, Chapel Hill, in 2010.

In 2011, he joined the Neuroscience Lab of Dr. E. Anton of UNC’s Neuroscience Center, Department of Cell and Molecular Biology, University of North Carolina at Chapel Hill, Chapel Hill, where he is currently involved in the study of neuronal migration and computational neuroscience. His current research interests include applied mathematics, physics, computer science, and biological imaging.


Odette M. Houghton received the B.Sc. (Hons.) degree in biochemistry from the University of Bristol, Bristol, U.K., the B.S. degree in biology from the University of South Carolina, Spartanburg, and the M.D. degree from the Medical University of South Carolina, Charleston. She completed a residency in ophthalmology at the University of Michigan, Ann Arbor, and received a vitreoretinal surgery fellowship from the Kresge Eye Institute, Wayne State University, Detroit, MI.

In 2006, she joined the University of North Carolina at Chapel Hill, Chapel Hill, where she is currently an Assistant Professor of ophthalmology and specializes in surgery and diseases of the vitreous and retina.


Amy L. Oldenburg was born in Berwyn, IL, in 1974. She received the Diploma from the Illinois Mathematics and Science Academy, Aurora, in 1991, the B.S. (Hons.) degree in applied physics from the California Institute of Technology, Pasadena, in 1995, and the M.S. and Ph.D. degrees in physics from the University of Illinois at Urbana-Champaign, Urbana, in 1997 and 2001, respectively.

During her doctoral research, she was involved in the study of alkali metal vapor dynamics with pulsed femtosecond lasers. Her Postdoctoral research at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, included developing novel molecular contrast imaging techniques and associated nanoparticles for use with optical coherence tomography. In 2008, she joined the faculty at the University of North Carolina at Chapel Hill, Chapel Hill, where she is currently an Assistant Professor of Physics and Astronomy, has an affiliation with the Department of Biomedical Engineering, and is the optical imaging modality leader at the Biomedical Research Imaging Center. Her current research interests include the study of tissue viscoelastic properties using novel optical and acoustic coherence imaging techniques and nanoprobes.

Contributor Information

Gary R. Wilkins, Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA (ude.cnu.da@yragw).

Odette M. Houghton, Department of Ophthalmology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA (ude.cnu.dem@nothguoh_ettedo).

Amy L. Oldenburg, Department of Physics and Astronomy, the Department of Biomedical Engineering, and the Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA.


[1] Rotsos TG, Moschos MM. Cystoid macular edema. Clin. Ophthalmol. 2008 Dec;vol. 2(4):919–930. [PMC free article] [PubMed]
[2] The Eye Diseases Prevalence Research Group. The prevalence of diabetic retinopathy among adults in the United States. Arch. Ophthalmol. 2004 Apr;vol. 122:552–563. [PubMed]
[3] Jampol LM, Tielsch J. Race, macular degeneration, and the macular photocoagulation study. Arch. Ophthalmol. 1992 Dec;vol. 110(12):1699–1700. [PubMed]
[4] Jun JJ, Duker JS, Baumal CR, McCabe F, Reichel E, Rogers AH, Seddon JM, Tolentino FI. Cystoid macular edema without macular thickening: A retrospective optical coherence tomographic study. Retina—The J. Retinal Vitreous Diseases. 2010 Jun;vol. 30(6):917–923. [PubMed]
[5] Antonetti DA, Barber AJ, Bronson SK, Freeman WM, Gardner TW, Jefferson LS, Kester M, Kimball SR, Krady JK, LaNoue KF, Norbury CC, Quinn PG, Sandirasegarane L, Simpson IA. Diabetic retinopathy: Seeing beyond glucose-induced microvascular disease. Diabetes. 2006 Sep;vol. 55(9):2401–2411. [PubMed]
[6] Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, Hee MR, Flotte T, Gregory K, Puliafito CA, Fujimoto JG. Optical coherence tomography. Science. 1991 Nov 22;vol. 254(5035):1178–1181. [PubMed]
[7] Puliafito CA, Hee MR, Lin CP, Reichel E, Schuman JS, Duker JS, Izatt JA, Swanson EA, Fujimoto JG. Imaging of macular diseases with optical coherence tomography. Ophthalmology. 1995 Feb;vol. 102(2):217–229. [PubMed]
[8] Hee MR, Puliafito CA, Duker JS, Reichel E, Coker JG, Wilkins JR, Schuman JS, Swanson EA, Fujimoto JG. Topography of diabetic macular edema with optical coherence tomography. Ophthalmology. 1998 Feb;vol. 105(2):360–370. [PMC free article] [PubMed]
[9] Otani T, Kishi S, Maruyama Y. Patterns of diabetic macular edema with optical coherence tomography. Amer. J. Ophthalmol. 1999 Jun;vol. 127(6):688–693. [PubMed]
[10] Catier A, Tadayoni R, Paques M, Erginay A, Haouchine B, Gaudric A, Massin P. Characterization of macular edema from various etiologies by optical coherence tomography. Amer. J. Ophthalmol. 2005 Aug;vol. 140(2):200–206. [PubMed]
[11] Sadda SR, Wu Z, Walsh AC, Richine L, Dougall J, Cortez R, LaBree LD. Errors in retinal thickness measurements obtained by optical coherence tomography. Ophthalmology. 2006;vol. 113(2):285–293. [PubMed]
[12] Keane PA, Mand PS, Liakopoulos S, Walsh AC, Sadda SR. Accuracy of retinal thickness measurements obtained with Cirrus optical coherence tomography. The Brit. J. Ophthalmology. 2009 Nov;vol. 93(11):1461–1467. [PubMed]
[13] Pelosini L, Hull CC, Boyce JF, McHugh D, Stanford MR, Marshall J. Optical coherence tomography may be used to predict visual acuity in patients with macular edema. Investigative Ophthalmol. Visual Sci. 2011 Apr;vol. 52(5):2741–2748. [PubMed]
[14] Burckhardt CB. Speckle in ultrasound B-mode scans. IEEE Trans. Sonics Ultrason. 1978 Jan;vol. 25(1):1–6.
[15] Schmitt JM, Xiang SH, Yung KM. Speckle in optical coherence tomography. J. Biomed. Opt. 1999 Jan;vol. 4(1):95–105. [PubMed]
[16] Adler DC, Ko TH, Fujimoto JG. Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter. Opt. Lett. 2004 Dec 15;vol. 29(24):2878–2880. [PubMed]
[17] Fernandez DC, Salinas HM, Puliafito CA. Automated detection of retinal layer structures on optical coherence tomography images. Opt. Exp. 2005 Dec 12;vol. 13(25):10200–10216. [PubMed]
[18] Garvin MK, Abramoff MD, Kardon R, Russell SR, Xiaodong W, Sonka M. Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search. IEEE Trans. Med. Imag. 2008 Oct;vol. 27(10):1495–1505. [PMC free article] [PubMed]
[19] Puvanathasan P, Bizheva K. Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images. Opt. Exp. 2009 Jan;vol. 17(2):733–746. [PubMed]
[20] Wong A, Mishra A, Bizheva K, Clausi DA. General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery. Opt. Exp. 2010 Apr;vol. 18(8):8338–8352. [PubMed]
[21] Lu S, Cheung CY, Liu J, Lim JH, Leung CK, Wong TY. Automated layer segmentation of optical coherence tomography images. IEEE Trans. Biomed. Eng. 2010 Oct;vol. 57(10):2605–2608. [PubMed]
[22] Paris S, Durand F. A fast approximation of the bilateral filter using a signal processing approach. Int. J. Comput. Vision. 2009 Jan;vol. 81(1):24–52.
[23] Garvin MK, Abramoff MD, Wu X, Russell SR, Burns TL, Sonka M. Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE Trans. Med. Imag. 2009 Sep;vol. 28(9):1436–1447. [PMC free article] [PubMed]
[24] Mishra A, Wong A, Bizheva K, Clausi DA. Intra-retinal layer segmentation in optical coherence tomography images. Opt. Exp. 2009 Dec;vol. 17(26):23719–23728. [PubMed]
[25] Debuc DC, Salinas HM, Ranganathan S, Tatrai E, Gao W, Shen M, Wang J, Somfai GM, Puliafito CA. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis. J. Biomed. Opt. 2010 Jul-Aug;vol. 15(4):046015-1–046015-12. [PubMed]
[26] Berezsky O, Bat’ko Y. Algorithm of determination of image contours of biological nature. Proc. Int. Conf. Modern Problems Radio Eng., Telecommun., Comput. Sci. 2006:642–644.
[27] Gonzalez RC, Woods RE, Eddins SL. Digital Image Processing Using MATLAB. Pearson Prentice Hall; Englewood Cliffs, NJ: 2004.