In this paper, we have presented a novel automated method for the detection of SEAD footprints in SD-OCT scans from AMD patients. The method characterizes ten automatically segmented intraretinal layers by their thickness as well as by their 3-D textural features, using a multiscale 3-D graph search approach to segment the 11 retinal surfaces defining these 10 intraretinal layers in 3-D macular OCT scans. Since SEADs can appear anywhere within, between, or under these layers, their footprints were detected by classifying vertical, cross-layer macular columns. A SEAD probability was computed for each macular column by assessing, in each layer, the local deviations from the normal appearance of the macula in the space of the most relevant features; the normal appearance of the macula was learned from thirteen normal OCT scans. The approach was trained on a set of 78 SD-OCT volumes from 23 patients in a leave-one-eye-out fashion and evaluated against a human reference standard. To define this independent standard, the manual segmentation of the SEADs in the 3-D volumes was made practical by a novel semi-automated segmentation based on a graph-cut approach. While serving our purpose of independent standard definition well, interactive SEAD delineation in 3-D remains tedious and time consuming, despite the semi-automated character of the process.
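The column classification step described above can be sketched as follows. This is a minimal illustration with hypothetical function names and a plain soft k-NN vote, not the paper's implementation: per-column features are normalized by the normal-macula model at the same location, and the SEAD probability is the fraction of nearest training columns labeled as SEAD.

```python
import numpy as np

def column_deviations(col_features, normal_mean, normal_std):
    # Deviation of one macular column's feature vector from the
    # normal-macula model at the same (x, y) location (hypothetical API).
    return (col_features - normal_mean) / (normal_std + 1e-9)

def sead_probability(dev, train_devs, train_labels, k=5):
    # Soft k-NN vote in deviation space: the fraction of the k nearest
    # training columns that were labeled as belonging to a SEAD footprint.
    dist = np.linalg.norm(train_devs - dev, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(np.asarray(train_labels)[nearest].mean())
```

Working in deviation space, rather than raw feature space, is what allows columns from different macular locations to share one classifier.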
Good layer segmentation results were obtained, and the performance of automated SEAD footprint detection based on 3-D texture and layer thickness is excellent. An area under the receiver-operating characteristic curve of 0.961 ± 0.012 (average ± standard deviation across the six testing sets) was obtained for the classification of macular columns with a 15 × 15 square base while varying the threshold on the local SEAD probability. This performance is slightly higher than that obtained for macular columns with a 10 × 10 or a 20 × 20 square base. The false positives observed on this dataset are caused by abnormal tissues other than SEADs, such as vascular growths. In normal subjects, where there are no SEADs and thus no footprint, the highest SEAD probability is usually observed at the center of the macula: as illustrated for a normal case, a low-probability SEAD-like region is typically detected between surfaces 8 and 9 at this location. However, this probability remains lower than the probability observed at the center of any SEAD in the dataset, and no SEAD footprint is therefore detected. We also observe that a higher detection performance can be achieved with a smaller number of training images if the SEAD probability in a macular column is derived from the local deviations of the relevant features from the normal appearance of the macula, rather than from the relevant features themselves. This is to be expected, because two columns from different areas of the macula can only be compared after compensating for the normal variation of their features across the macula. It implies that more training samples are required if the features are not normalized with respect to their deviations from the normal-macula characterization.
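Sweeping the probability threshold over all macular columns and integrating the resulting ROC curve is equivalent to the rank-based (Mann-Whitney) form of the AUC, which can be sketched as below. The function name is ours, and ties in the probabilities are ignored for simplicity.

```python
import numpy as np

def footprint_auc(probs, labels):
    # AUC of per-column SEAD detection, computed in closed form from the
    # ranks of the SEAD probabilities (equivalent to sweeping a threshold
    # on the probability; assumes no tied probabilities).
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    ranks = np.empty(len(probs))
    ranks[np.argsort(probs)] = np.arange(1, len(probs) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```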
A repeatability study conducted on a set of 12 pairs of scans, acquired on the same day from the same eye, reveals that the automated approach is at least as repeatable as the semi-automated definition of the human expert standard in 3-D followed by x-y projection.
Further experiments suggested that the method is quite robust across OCT scanners from different manufacturers, and certainly across scanners of the same model from the same manufacturer, even across generations (e.g., the 2-D time-domain Stratus versus the 3-D spectral-domain Zeiss Cirrus). Nevertheless, should the proposed SEAD footprint detection method be applied to another dataset obtained with different OCT acquisition properties, it would be advisable to repeat the training process, since the method relies on k-NN, an example-based classifier.
Because SEADs are fluid-filled regions, two features were expected to be highly relevant: the thickness of the layers (due to the appearance of layer swelling) and the average intensity (the reflectance of fluid or fluid-filled tissue being lower than that of "dry" macular tissue). This study reveals that several 3-D textural features compare favorably with these two features, for example the gray-level nonuniformity and the run-length nonuniformity, two run-length analysis features, as well as the angular second moment, the contrast, the inertia, and the inverse difference moment, four co-occurrence matrix analysis features. Combining layer thickness and average intensity with other features reduces the number of false positives, hence the increase of the per-column AUC after feature combination. The optimal set of features, identified by cross-validation, includes the three best features as measured independently: the average intensity, the average layer thickness, and the inertia. It also includes two features with lower individual performance: the standard deviation of high-frequency wavelet coefficients and the entropy (a co-occurrence matrix analysis feature). These features were most frequently selected instead of seemingly better features because they are less correlated with layer thickness, i.e., they bring more additional information to the combined feature set. Indeed, the normal distributions of the entropy and of the standard deviation of wavelet coefficients across the macula are much less correlated with layer thickness than that of the inertia. In fact, the normal distributions of these two features are almost invariant across the macula, which suggests that they better characterize the nature of the tissues, as opposed to the inertia, which better characterizes shape and deformations.
Both properties are desirable for detecting SEADs: 1) the appearance of a SEAD within a layer obviously modifies the optical properties of the layer's tissue (the layer consists of normal tissue plus fluid), and 2) the surrounding tissues, which are included in the macular-column vector that forms the input to the classifier, are stretched.
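The co-occurrence-matrix features discussed above can be sketched as follows. This is a simplified 2-D, single-offset version for illustration (the study computes texture in 3-D within each segmented layer), and the distinction drawn here between "contrast" and "inertia" follows one common convention; definitions vary across texture-feature sets.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Normalized gray-level co-occurrence matrix for one pixel offset.
    # img is expected to be scaled to [0, 1] before quantization.
    q = np.minimum((np.asarray(img) * levels).astype(int), levels - 1)
    P = np.zeros((levels, levels))
    for y in range(q.shape[0] - dy):
        for x in range(q.shape[1] - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def cooccurrence_features(P):
    # Classic co-occurrence features; |i-j| vs. (i-j)^2 weighting for
    # contrast vs. inertia is an assumed convention, not the paper's.
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "angular_second_moment": float((P ** 2).sum()),
        "contrast": float((np.abs(i - j) * P).sum()),
        "inertia": float(((i - j) ** 2 * P).sum()),
        "inverse_difference_moment": float((P / (1.0 + (i - j) ** 2)).sum()),
        "entropy": float(-(nz * np.log(nz)).sum()),
    }
```

A perfectly homogeneous patch yields an angular second moment and inverse difference moment of 1 and zero contrast, inertia, and entropy; fluid-filled regions disturb these statistics relative to the normal-macula model.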
One conclusion we can derive from this study is that, although SD-OCT suffers from the presence of noise, in particular laser speckle noise, it is still possible to extract useful 3-D textural information for clinical applications. It may even be possible to exploit the variance information to characterize tissues, as suggested by the inclusion of the standard deviation of high-frequency wavelet coefficients in the optimal feature set.
The presented automated method may be applicable in a clinical setting in its present state. The sum of the SEAD probabilities obtained over all macular columns could be used as an estimate of SEAD volume. Nevertheless, we plan to develop a method for determining the true SEAD volume in future studies: the automatically detected SEAD footprints will be used to initialize the currently semi-automated SEAD segmentation method, and the relevant 3-D textural indices identified in this study will be used to derive a fully automated 3-D SEAD segmentation method. Moreover, the methodology presented in this paper may be applied to the 3-D textural characterization of SD-OCT scans of the optic nerve head, for which automated layer segmentation methods were previously reported by our group, in order to improve OCT-guided assessment of the optic nerve head.
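A minimal sketch of such a probability-sum volume proxy is given below; scaling by the footprint area of one column is our assumption for illustration, and the resulting index is uncalibrated rather than a true volume.

```python
import numpy as np

def sead_volume_index(prob_map, column_area_mm2):
    # Uncalibrated SEAD volume index: the sum of the per-column SEAD
    # probabilities, scaled by the footprint area of one macular column.
    # prob_map is a 2-D array of per-column SEAD probabilities in [0, 1].
    return float(np.sum(prob_map) * column_area_mm2)
```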