Med Image Anal. Author manuscript; available in PMC Jan 1, 2013.
Coupled Parametric Model for Estimation of Visual Field Tests Based on OCT Macular Thickness Maps, and Vice Versa, in Glaucoma Care
Andy Tsai,a Joseph Caprioli,b and Lucy Shenc
aDepartment of Radiology, Children’s Hospital Boston, 300 Longwood Avenue, Boston, MA 02115
bGlaucoma Division, Jules Stein Eye Institute, 100 Stein Plaza, 2-118, Los Angeles, CA 90095
cGlaucoma Service, Massachusetts Eye and Ear Infirmary, 243 Charles Street, Boston, MA 02114
Abstract

The current standard of care for glaucoma patients consists of functional assessment of vision via visual field (VF) testing, which is sensitive but subjective, time-consuming, and often unreliable. A new imaging technology, Fourier domain optical coherence tomography (OCT), is being introduced to assess the structural characteristics of the macula. This new complementary exam is efficient, objective, and reliable. A complex, but consistent, relationship exists between the structural information provided by macular OCT and the functional information gathered by VF maps. We propose a learning-based framework with the ability to estimate the VF map based on OCT macular thickness measurements as input (and vice versa). Central to this algorithmic framework is a coupled parametric model that captures not only the individual variabilities of the OCT macular thickness measurements and the VF maps, but also their co-dependencies. This model is derived by applying principal component analysis (PCA) to a library consisting of various pairs of OCT and VF maps. The parameters of this coupled model are obtained by solving a linear least squares problem. These estimated parameters are then used, in conjunction with the eigenvectors derived from PCA, to compute the estimate. The accuracy of this coupled parametric estimation model was evaluated by performing multiple leave-one-out cross validation experiments.
Keywords: OCT, visual field testing, glaucoma, PCA
1. Introduction

Glaucoma is the leading cause of irreversible blindness, the fear of which ranks third among most people surveyed, behind only cancer and heart disease (Prevent Blindness America Survey, 2002). Early detection and regular monitoring are essential to achieving optimal care for glaucoma patients. One important tool is the visual field (VF) exam, a functional test that is part of the standard of care for the diagnosis and evaluation of glaucoma, and it is performed regularly in the management of the disease. While the VF exam is sensitive for detecting and monitoring glaucoma, it is unfortunately subjective, time-consuming, and often unreliable. To offset these limitations, Fourier domain optical coherence tomography (OCT) has recently been introduced to provide important structural information about the posterior segment of the eye. This structural evaluation by OCT has been shown to be objective, efficient, and reliable (Leung et al. 2010; Wolf-Schnurrbusch et al. 2009). Several investigators have reported a significant but complex relationship between the structural damage measured by OCT and the functional deficits gathered by VF testing in glaucoma patients (Ferreras et al. 2008; Harwerth et al. 2007; Hood et al. 2007a, 2007b; Shen et al. 2010). However, this relationship has not yet been defined well enough to be clinically useful.
The ability to estimate the results of a functional test based on structural information, and vice versa, is the topic of this paper. Such predictive ability may improve the accuracy of glaucoma detection and monitoring, as the predicted VF map (based on OCT images) may be used to confirm real defects in the subjective VF test. Furthermore, this ability may reveal a functional-structural correspondence that leads to a better understanding of glaucoma. In the following three subsections, we provide a short introduction to VF testing and OCT macular thickness maps; we briefly summarize our model and related research; and we explain how the methodology proposed here improves upon existing techniques.
1.1. Ophthalmologic Evaluation in Glaucoma
As mentioned earlier, the two complementary exams for glaucoma patients are the functional assessment of vision via VF testing and the structural assessment of the posterior segment of the eye via OCT. These two exams are discussed further below.
1.1.1. Visual Field Function Test
This exam maps the patient's central and peripheral visual function based on his or her response (e.g. pressing a button) to seeing a flashing light of varying intensities presented at predetermined locations. The most common VF test has 54 test locations within the central 24 degrees of the field of view. At the conclusion of the exam, the patient's responses are analyzed statistically and compared to a normative database to generate a detailed map (conventionally presented in units of decibels) noting the areas of VF defect. Locations labeled with negative values correspond to areas where the patient performed below the expected norm; the magnitude of these negative values, on a logarithmic decibel scale, reflects how far below the norm the patient performed at the implicated location of the VF test. Similarly, VF locations labeled with positive values correspond to locations where the patient performed above the expected norm. Each eye is tested separately, yielding two VF maps for each patient. Only the central subset of the VF test locations that overlaps with the portion of the macula captured by the OCT is utilized in our analysis. This subset of VF points, in relation to the full set of 54 test locations, is encircled in green in the second row of Figure 1. After excluding the blind spot (blue areas in Figure 1), each of these VF maps contains a total of 34 data points. Figure 1(a) is an example of a normal individual's 34-point VF map, where all the numbers hover around 0, indicating that this particular patient performed at or close to the expected norm in these regions. The two dark blue pixels along the last column correspond to the patient's blind spot. In contrast, Figure 1(b) is an example of a glaucoma patient's VF map demonstrating a superior hemi-field defect. Notice that numerous points within the superior visual field take on values well below 0, indicating a significant VF defect.
Figure 1. Two examples of VF maps: (a) from a normal individual and (b) from an individual with a superior hemi-field deficit. The first row shows the total deviation plots; the second row illustrates the central subset of the VF test points used in this study.
1.1.2. Optical Coherence Tomography (OCT)
OCT is an emerging imaging modality based on the principles of low-coherence interferometry. With spectrometer detection methods and a Fourier transform algorithm, Fourier domain OCT can perform 26,000 A-scans in the macula in one second. Each map generated consists of a 6 mm × 6 mm grid (36 mm²) centered on the fovea, with spacing of 0.25 mm to 0.5 mm within the grid and a depth resolution of 5 µm. This fast and efficient data acquisition minimizes motion artifacts while maximizing the signal-to-noise ratio (Gonzalez-Garcia et al. 2009; Wojtkowski et al. 2005). Images in the top row of Figure 2 represent printouts from EMM5 macular scans. The raw data were exported, re-centered on the fovea, and normalized by the highest thickness value in each scan to account for inherent differences in the thicknesses of the maculas across different patients (such as may be due to age, ethnicity, and length of the eye); a minimal code sketch of this normalization step is given after Figure 2. These thickness ratio maps are represented on the bottom row of Figure 2 as colored contour maps. Figure 2(a) shows the normalized macular OCT thickness map of an individual with a normal VF map as depicted in Figure 1(a). Notice the donut-shaped macula with a thickness close to a normalized value of 1, and the central area of decreased macular thickness corresponding to the fovea. In contrast, Figure 2(b) illustrates the normalized macular OCT thickness map of a patient with glaucoma; the inferior macular thinning corresponds to the superior VF loss shown in Figure 1(b).
Figure 2. Two examples of OCT thickness maps: (a) from an individual with a normal VF and (b) from a glaucoma patient with inferior macular thinning. Cross sections of full-thickness retina are shown in black boxes on the first row.
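As an illustration of the preprocessing just described, the following minimal sketch (our own illustrative code, not the authors' export pipeline) converts an exported full-thickness macular scan into a fovea-centered thickness ratio map; the fovea location, the edge-padding behavior, and the 501 × 501 output size are assumptions made for the purposes of this example.

```python
import numpy as np

def thickness_ratio_map(thickness_um, fovea_rc, out_size=501):
    """thickness_um: 2-D array of macular thickness values (micrometers).
    fovea_rc: (row, col) location of the fovea in that array (assumed known).
    Returns an out_size x out_size map centered on the fovea, scaled so that
    the thickest point of the scan equals 1."""
    half = out_size // 2
    r, c = fovea_rc
    # Pad with edge values so the window never runs off the scan, then crop a
    # square window centered on the fovea.
    padded = np.pad(thickness_um, half, mode="edge")
    window = padded[r:r + out_size, c:c + out_size]
    # Normalize by the scan's own maximum thickness to remove inter-subject
    # differences in overall macular thickness (age, ethnicity, eye length).
    return window / window.max()
```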
1.2. Related Research
One important unsupervised learning technique is principal component analysis (PCA), a classic statistical method first described by Pearson (1901). PCA can automatically learn and capture the complex patterns and relationships that characterize a training database and leverage this information for various applications (Jolliffe, 2002).
As reported by Tsai et al. (2004), this PCA-based approach can be extended to simultaneously handle multiple images of differing types within a single linear model. In that 2004 paper, PCA was applied to a sample covariance matrix consisting of example images drawn from different families of images. The derived eigenvectors capture not only the variations present within each family of images, but also the co-variations that exist among the various families. The approach we take in this paper bears some resemblance to this particular aspect of their methodology.
When two images are strongly related to one another and one of them is unknown (as in our application), an estimation-theoretic approach can be taken: the pair is incorporated as an additional entry in a training database, with the unknown image judiciously labeled as missing data. Viewed in this light, the problem is transformed into the estimation of missing data within the training database. Various PCA-based methods have been reported in the literature that can provide an estimate of these missing data, each with its own way of handling missing values. For example, a probabilistic PCA, formulated in an EM framework, was introduced in separate studies by Roweis (1998) and by Tipping and Bishop (1999); their algorithms simultaneously estimate the principal subspace and reconstruct the missing data. More recently, a PCA-based variational Bayesian subspace learning approach that can handle large-scale problems with numerous missing values was proposed by Raiko et al. (2007). Several other PCA-based approaches that can deal with missing data within the training database are presented in the book by Little and Rubin (2002). In general, however, the computational cost of all these methods is too high for this particular application, as this type of approach calls for re-calculation of the principal components with each query (rather than simply reusing the principal components calculated during the initial off-line training).
An algorithm developed by Essock et al. (2007), based on both PCA and wavelet-Fourier analysis, is particularly pertinent to this paper as it is applied to a clinical problem similar to ours. Specifically, the goal of Essock's algorithm is to provide a binary assessment of whether a patient with ocular hypertension will later develop visual field loss. To this end, wavelet-Fourier analysis is applied to a library database consisting of retinal nerve fiber layer thickness estimates from both glaucomatous and healthy individuals. This wavelet-based approach extracts feature vectors that are then stacked row by row to form a feature matrix. PCA is then applied to transform this feature matrix into a smaller number of uncorrelated variables. Next, Fisher's linear discriminant analysis is employed to calculate a discriminant function based on these uncorrelated variables. For binary classification, this discriminant function is applied to another data set (outside of the training database), consisting of patients with ocular hypertension, to predict which patients will eventually convert to glaucoma and develop visual loss and which will not.
1.3. Contributions
The main contribution of the work presented here lies in the successful adaptation of the coupling technique developed by Tsai et al. (2004) for image estimation (as opposed to image segmentation, for which it was originally formulated). In this paper, we propose a parametric linear model that utilizes this coupling technique to capture the complex structural-functional relationship that exists between the OCT thickness ratio map and its paired VF map. Central to this model is the application of PCA to a sample covariance matrix composed of various pairs of OCT and VF images to arrive at a set of eigenvectors that serve as an orthogonal basis for image representation. By design, this orthogonal basis efficiently encodes the variabilities within the library database and naturally captures the co-dependencies between the OCT and the VF images. The parameters of our model correspond to the weights applied to this set of orthogonal basis vectors, and they are estimated by solving a linear least squares problem.
2. Coupled Parametric Model

We begin the description of our model with some notation. Let Ψoct and Ψvf denote the macular OCT thickness ratio map and its paired VF map, respectively. Suppose there are n such example image pairs in our training library database, denoted by {(Ψoct^i, Ψvf^i), i = 1, …, n}. The n OCT ratio maps are aligned with one another because of the manner by which they are obtained (all centered over the fovea). In a similar fashion, the n VF maps are aligned with one another. Hence, alignment is not an issue for either the OCT or the VF images, as they are all acquired with reference to the same fixed structure.
We first compute the mean image functions Φ̄oct and Φ̄vf, representing the mean OCT ratio map and mean VF map of our library database, respectively. To extract the variability of the OCT and VF images, the mean images Φ̄oct and Φ̄vf are subtracted from each of the n OCT and VF images, respectively. This operation yields n mean-offset OCT and n mean-offset VF images, denoted by Ψ̃oct and Ψ̃vf, respectively, as shown below:
\tilde{\Psi}_{oct}^{i} = \Psi_{oct}^{i} - \bar{\Phi}_{oct}, \qquad i = 1, \ldots, n
(1)
\tilde{\Psi}_{vf}^{i} = \Psi_{vf}^{i} - \bar{\Phi}_{vf}, \qquad i = 1, \ldots, n
(2)
where
\bar{\Phi}_{oct} = \frac{1}{n} \sum_{i=1}^{n} \Psi_{oct}^{i}
(3)
\bar{\Phi}_{vf} = \frac{1}{n} \sum_{i=1}^{n} \Psi_{vf}^{i}
(4)
To prepare the training library, we form n column vectors {ψ1, …, ψn}, where each column vector ψi consists of the P samples of Ψ̃oct^i concatenated directly on top of the Q samples of Ψ̃vf^i. Therefore, each ψi consists of N = P + Q elements. Specifically, in our setup, for OCT and VF image pair i, the 501 × 501 pixel OCT image is used to generate a column of P = 251,001 lexicographically ordered pixels (the columns of the image grid are sequentially concatenated on top of one another to form one long column). The same strategy is applied to the 6 × 6 VF image to generate a column of Q = 34 lexicographically ordered pixels (after excluding the blind spot, which occupies 2 test locations in the VF). These two column vectors are concatenated on top of each other to form a large column vector ψi of dimension N = 251,035. This process is repeated n times, once for each OCT and VF image pair, to form a tall and skinny matrix of size 251,035 × n. We define this matrix S as
S = \begin{bmatrix} \psi_1 & \psi_2 & \cdots & \psi_n \end{bmatrix}
(5)
A simple schematic diagram to illustrate the formation of S is shown in Figure 3.
Figure 3. Diagram illustrating the formation of the matrix S and the column vectors ψi.
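To make the construction of S concrete, the following is a minimal Python/NumPy sketch under the conventions above (an illustration, not the authors' implementation); the helper name build_data_matrix and the array layouts are our own choices.

```python
import numpy as np

def build_data_matrix(oct_maps, vf_maps):
    """oct_maps: (n, 501, 501) array of OCT thickness ratio maps.
    vf_maps: (n, 34) array of VF values (blind-spot locations excluded).
    Returns S (N x n, Eq. 5) and the mean OCT and VF maps (Eqs. 3-4)."""
    # Lexicographic ordering: stack the columns of each OCT grid into one long vector.
    oct_vecs = np.stack([m.flatten(order="F") for m in oct_maps])   # (n, P)
    phi_oct = oct_vecs.mean(axis=0)                                 # mean OCT ratio map
    phi_vf = vf_maps.mean(axis=0)                                   # mean VF map
    # Mean-offset images (Eqs. 1-2); each row is psi_i^T = [OCT part, VF part].
    psi = np.hstack([oct_vecs - phi_oct, vf_maps - phi_vf])         # (n, N)
    return psi.T, phi_oct, phi_vf                                   # S is N x n
```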
Eigenvalue decomposition is employed to factor the sample covariance matrix (1/n)SS^T as:

\frac{1}{n} S S^{T} = X \Lambda X^{T}
(6)
where X is a tall and skinny rectangular N × n matrix whose columns represent the n principal variational modes or eigenvectors of the paired OCT and VF images, and Λ is an n × n diagonal matrix whose diagonal elements, denoted by λ1, …, λn, represent the corresponding non-zero eigenvalues. Because these eigenvectors are derived by performing principal component analysis on the matrix S, which contains not only information about the variabilities of the OCT and VF images, but also about their co-dependencies, the eigenvectors derived will naturally demonstrate strong coupling between the OCT and the VF images. This idea is central to the success of our algorithm. Further, as a direct result of concatenating the vectorized OCT image representation directly on top of the vectorized VF image representation, the N × n-dimensional matrix X can be easily partitioned into an upper and a lower submatrix. Specifically,
X = \begin{bmatrix} X_{oct} \\ X_{vf} \end{bmatrix}
(7)
where Xoct is of dimension P × n, and Xvf is of dimension Q × n. As we will show later, these submatrices Xoct and Xvf are important for image reconstruction.
From a computational standpoint, calculating the eigenvectors and eigenvalues of the N × N matrix (1/n)SS^T is simply not practical. To offset this difficulty, in standard practice, we instead compute the eigenvectors and eigenvalues of a much smaller n × n matrix W given by:

W = \frac{1}{n} S^{T} S
(8)
Thus, if d is an eigenvector of W with corresponding eigenvalue λ, then Sd is an eigenvector of (1/n)SS^T with the same eigenvalue λ (see the proof on page 59 of Leventon (2000)).
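A minimal sketch of this standard computation (cf. Leventon, 2000; Turk and Pentland, 1991), assuming S is the matrix of Equation 5; the function name coupled_pca is ours.

```python
import numpy as np

def coupled_pca(S):
    """Eigenvectors X (unit-norm columns, largest eigenvalue first) and the
    eigenvalues of (1/n) S S^T, computed via the small n x n matrix W.
    Assumes, as in the text, that all n eigenvalues are non-zero."""
    n = S.shape[1]
    W = (S.T @ S) / n                    # Eq. (8): n x n instead of N x N
    lam, D = np.linalg.eigh(W)           # eigenpairs of the small matrix
    order = np.argsort(lam)[::-1]        # sort by decreasing eigenvalue
    lam, D = lam[order], D[:, order]
    X = S @ D                            # each S d is an eigenvector of (1/n) S S^T
    X = X / np.linalg.norm(X, axis=0)    # rescale columns to unit length
    return X, lam
```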
By projecting onto these eigenvectors, a weighted sum of them can be used to generate a reconstructed image vector, which, in turn, can be unconcatenated to yield two separate vectorized image representations: one to reconstruct the OCT image, and one to reconstruct the VF image. More precisely, the first P samples of the reconstructed image vector can be rearranged (by undoing the earlier lexicographical concatenation of the grid columns) to form a 501 × 501 rectangular OCT image. Likewise, the last Q samples of the reconstructed image vector can be rearranged in a similar fashion to form a 6 × 6 rectangular VF image (after adding back in the two blind spot locations).
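A minimal sketch of this splitting step (our own helper; re-insertion of the two blind-spot locations into the 6 × 6 grid is left to the caller, since their grid indices are not restated here):

```python
import numpy as np

def split_reconstruction(vec, P=251001, oct_shape=(501, 501)):
    """Split a reconstructed N-vector into a 501 x 501 OCT image and the
    34 VF values; order='F' undoes the column-wise lexicographic stacking."""
    oct_img = vec[:P].reshape(oct_shape, order="F")
    vf_vals = vec[P:]
    return oct_img, vf_vals
```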
3. Estimation Framework

We begin this section by introducing a framework for estimating VF maps using the OCT thickness ratio maps as input. Suppose we have noisy measurements of both the OCT and the accompanying VF images. Denote the lexicographically vectorized representations of these two noisy images as Yoct and Yvf, respectively. To be precise, the dimensions of Yoct are P × 1, and the dimensions of Yvf are Q × 1. Concatenating them on top of one another forms the N × 1 dimensional measurement vector Y:
Y = \begin{bmatrix} Y_{oct} \\ Y_{vf} \end{bmatrix}
(9)
As mentioned earlier, both of these measurements are contaminated by noise. We model the OCT measurement noise ηoct as an independent and identically distributed (iid) zero-mean Gaussian process with standard deviation σoct. In a similar fashion, we model the VF measurement noise ηvf as an iid zero-mean Gaussian process with standard deviation σvf. In our learning-based coupled parametric model, the explicit dependence of these measurements Y on the eigenvectors X is given by the following measurement model:
Y = \bar{\Phi} + X \alpha + \eta
(10)
where
\bar{\Phi} = \begin{bmatrix} \bar{\Phi}_{oct} \\ \bar{\Phi}_{vf} \end{bmatrix}, \qquad \eta = \begin{bmatrix} \eta_{oct} \\ \eta_{vf} \end{bmatrix}
(11)
with
\eta_{oct} \sim \mathcal{N}\left(0, \sigma_{oct}^{2} I_{P}\right)
(12)
\eta_{vf} \sim \mathcal{N}\left(0, \sigma_{vf}^{2} I_{Q}\right)
(13)
The n × 1 parameter α determines the weighting scheme of the eigenvectors in X that best approximates the measurements Y. Using Equations 7, 9, and 11 to expand Equation 10, we obtain the following measurement model:
\begin{bmatrix} Y_{oct} \\ Y_{vf} \end{bmatrix} = \begin{bmatrix} \bar{\Phi}_{oct} \\ \bar{\Phi}_{vf} \end{bmatrix} + \begin{bmatrix} X_{oct} \\ X_{vf} \end{bmatrix} \alpha + \begin{bmatrix} \eta_{oct} \\ \eta_{vf} \end{bmatrix}
(14)
To calculate α for the above equation, we solve the following quadratic minimization problem:
\hat{\alpha} = \arg\min_{\alpha} E(\alpha)
(15)
E(\alpha) = \frac{1}{\sigma_{oct}^{2}} \left\| Y_{oct} - \bar{\Phi}_{oct} - X_{oct}\,\alpha \right\|^{2} + \frac{1}{\sigma_{vf}^{2}} \left\| Y_{vf} - \bar{\Phi}_{vf} - X_{vf}\,\alpha \right\|^{2}
(16)
Because the columns of the matrix X are linearly independent (which is, by design, as they are derived based on PCA), this minimization problem has a unique solution given by:
\hat{\alpha} = \left( \frac{1}{\sigma_{oct}^{2}} X_{oct}^{T} X_{oct} + \frac{1}{\sigma_{vf}^{2}} X_{vf}^{T} X_{vf} \right)^{-1} \left( \frac{1}{\sigma_{oct}^{2}} X_{oct}^{T} \left( Y_{oct} - \bar{\Phi}_{oct} \right) + \frac{1}{\sigma_{vf}^{2}} X_{vf}^{T} \left( Y_{vf} - \bar{\Phi}_{vf} \right) \right)
(17)
Recall that in formulating our problem earlier, we assumed that both the OCT and VF measurements are available. However, in reality, we only have the OCT measurements and not the VF measurements, as the goal is to estimate the VF measurements based on the provided OCT measurements. Hence, the standard deviation of the measurement noise σvf in Equation 17 is set to infinity, which is one way to indicate complete lack of information regarding Yvf. With σvf = ∞, Equation 17 reduces to:
\hat{\alpha}_{oct} = \left( X_{oct}^{T} X_{oct} \right)^{-1} X_{oct}^{T} \left( Y_{oct} - \bar{\Phi}_{oct} \right)
(18)
The α̂oct derived based on Equation 18 is used in conjunction with the eigenvectors Xvf to estimate Yvf. Specifically, an estimate of Yvf is given by:
\hat{Y}_{vf} = \bar{\Phi}_{vf} + X_{vf}\,\hat{\alpha}_{oct}
(19)
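The estimation step amounts to a single small least-squares solve. The following minimal sketch uses the same conventions as the earlier sketches (X partitioned as in Equation 7, mean maps as in Equations 3 and 4, and K retained eigenvectors); the function name and the option to truncate to the K leading modes (used later in Section 4.2) are our own framing.

```python
import numpy as np

def estimate_vf_from_oct(Y_oct, X_oct, X_vf, phi_oct, phi_vf, K=1):
    """Fit the mode weights to the OCT measurement alone (sigma_vf -> infinity,
    Eq. 18), then synthesize the VF estimate (Eq. 19)."""
    A = X_oct[:, :K]                                                  # P x K
    alpha_oct, *_ = np.linalg.lstsq(A, Y_oct - phi_oct, rcond=None)   # Eq. (18)
    return phi_vf + X_vf[:, :K] @ alpha_oct                           # Eq. (19)
```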
Thus far, we have shown the methodology by which we estimate the VF measurements based on OCT measurements. Importantly, this method is flexible enough to accommodate the reverse scenario, that is, estimating OCT measurements based on VF measurements, with a simple and straightforward alteration of the algorithm. Specifically, instead of setting σvf to infinity, we set σoct to infinity so that Equation 17 becomes:
\hat{\alpha}_{vf} = \left( X_{vf}^{T} X_{vf} \right)^{-1} X_{vf}^{T} \left( Y_{vf} - \bar{\Phi}_{vf} \right)
(20)
Since we are now interested in estimating Yoct, the estimate is given by:
\hat{Y}_{oct} = \bar{\Phi}_{oct} + X_{oct}\,\hat{\alpha}_{vf}
(21)
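In code, the reverse direction simply exchanges the roles of the two blocks; a sketch mirroring the helper above, under the same assumptions:

```python
import numpy as np

def estimate_oct_from_vf(Y_vf, X_oct, X_vf, phi_oct, phi_vf, K=1):
    """Fit the mode weights to the VF measurement alone (sigma_oct -> infinity,
    Eq. 20), then synthesize the OCT estimate (Eq. 21)."""
    A = X_vf[:, :K]                                                   # Q x K
    alpha_vf, *_ = np.linalg.lstsq(A, Y_vf - phi_vf, rcond=None)      # Eq. (20)
    return phi_oct + X_oct[:, :K] @ alpha_vf                          # Eq. (21)
```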
4. Experimental Results

In this section, we demonstrate the performance of our coupled parametric estimation model. We begin in Section 4.1 by displaying the VF and OCT training databases that we employ for this paper, as well as the calculated principal modes associated with these databases. In Section 4.2, we qualitatively and quantitatively assess the performance of our model by showing the results of our leave-one-out experiments in the estimation of VF images based on OCT data. Next, we switch the roles of the VF maps and the OCT thickness ratio maps by performing leave-one-out experiments in the estimation of OCT images based on VF data, the results of which are displayed in Section 4.3.
4.1. Training Databases and Their Principal Modes
A total of 14 patients with glaucomatous superior VF loss in a single eye underwent VF testing and OCT imaging of both eyes, yielding 28 pairs of VF and OCT data. Patients with a glaucomatous superior VF defect were chosen because this is, by far, the most common VF defect in glaucoma; as a result, it is easier to collect a richer database for training and testing. Of these 28 pairs, 8 were discarded by the glaucoma specialists for one of the following three reasons: the VF data were unreliable, the OCT image had poor image quality, or the VF/OCT pair did not show the expected anatomic correspondence. The 20 remaining data pairs were used in the algorithm and are displayed in Figures 4 and 5, respectively.
Figure 4. VF database.
Figure 5. OCT database.
We next describe the training phase of our coupled parametric model. Specifically, we applied the methodology described in Section 2 to the paired VF and OCT databases illustrated in Figures 4 and 5 to arrive at 20 eigenvectors that effectively capture the variations of, and the co-dependencies between, the VF maps and the macular OCT thickness ratio maps. The cumulative sum of the eigenvalues is plotted in Figure 6. Notice that the first mode captures approximately 25% of the variation, and this amount tapers toward 0 with each additional mode.
Figure 6. Cumulative distribution of the eigenvalues.
To provide some intuition about how the eigenvectors capture the variabilities and co-dependencies of the paired OCT and VF images, the first three eigenvectors are varied by different amounts in Figures 7, 8, and 9 (a minimal sketch of how such mode-variation images can be generated is given after Figure 9). Each figure demonstrates the effect of varying a particular principal mode on the appearance of the macular OCT thickness ratio map and the VF map. For example, notice that varying the first principal mode from one extreme to the other (by a multiple of √λ1 on either side of the mean) creates a large superior hemifield defect in the VF, which is associated with thinning in the lower half of the macula (Figure 7), an expected relationship based on the anatomy of the eye. Because PCA orders the modes by the amount of variance they explain, the first principal mode is the pattern that accounts for the greatest amount of the variance-covariance structure. Varying the second principal mode in the same manner (by a multiple of √λ2) creates a superonasal VF defect, which in turn is associated with predominant thinning in the peripheral inferior macula (Figure 8). The third eigenvector is predominantly associated with changes in the width and radius of the donut-shaped macula (Figure 9). As the structural-functional relationship is complex, the coupling between the VF maps and OCT images becomes more subtle and harder to identify visually with each successive eigenvector.
Figure 7. First eigenmode.
Figure 8. Second eigenmode.
Figure 9. Third eigenmode.
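For readers who wish to reproduce such mode-variation images, the following minimal sketch perturbs the coupled mean vector along a chosen mode and splits the result into its OCT and VF parts using the split_reconstruction helper sketched earlier. The ±2 standard-deviation range is an illustrative assumption, not necessarily the range used to generate Figures 7-9.

```python
import numpy as np

def mode_variation(phi_oct, phi_vf, X, lam, k=0, coeffs=(-2.0, 0.0, 2.0)):
    """Perturb the mean coupled vector along mode k by c*sqrt(lambda_k) and
    return the resulting (OCT image, VF vector) pairs for display."""
    phi = np.concatenate([phi_oct, phi_vf])            # coupled mean vector
    results = []
    for c in coeffs:
        vec = phi + c * np.sqrt(lam[k]) * X[:, k]      # move along mode k
        results.append(split_reconstruction(vec, P=phi_oct.size))
    return results
```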
4.2. Estimation of VF Maps Based on OCT Images of the Macula
To evaluate the performance of our coupled parametric model in estimating VF maps based on OCT images as described in Equations 18 and 19, we performed 20 leave-one-out experiments. The estimates from these experiments were then compared to actual measurements.
One method to determine the optimal number of eigenvectors to use in our model is to calculate the root-mean-square error (RMSE) between the measured and the estimated VF maps, over all 20 leave-one-out experiments and all Q = 34 test points, as defined by
\mathrm{RMSE}_{vf} = \sqrt{ \frac{1}{20\,Q} \sum_{i=1}^{20} \sum_{j=1}^{Q} \left( Y_{vf}^{i}(j) - \hat{Y}_{vf}^{i}(j) \right)^{2} }
(22)
while the number of eigenvectors employed is varied from 1 to 19. The idea behind this approach is to find the number of eigenvectors for which the estimated VF image best mimics the measured VF image. The results of this exercise are shown in Figure 10, which demonstrates that the lowest RMSE (5.37) is achieved by using just the largest eigenvector (a minimal code sketch of this sweep is given after Figure 10). We suspect that only the largest eigenvector is needed because the variability of our database is limited, as we restricted ourselves to patients with glaucomatous superior VF defects. Expansion of the database to include other visual field deficits is needed to test this hypothesis.
Figure 10. RMSE in VF prediction as a function of the number of eigenvectors used.
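A minimal sketch of this sweep, assembled from the helpers introduced in the earlier sketches (build_data_matrix, coupled_pca, estimate_vf_from_oct); the loop structure is our reading of the procedure described above, not the authors' code.

```python
import numpy as np

def loo_rmse_vf(oct_maps, vf_maps, max_modes=19):
    """RMSE between measured and estimated VF maps (Eq. 22), pooled over the
    leave-one-out folds, for each number of retained eigenvectors K."""
    n = oct_maps.shape[0]
    rmse = np.zeros(max_modes)
    for K in range(1, max_modes + 1):
        sq_errs = []
        for i in range(n):
            keep = np.arange(n) != i                   # hold out pair i
            S, phi_oct, phi_vf = build_data_matrix(oct_maps[keep], vf_maps[keep])
            X, _ = coupled_pca(S)
            P = phi_oct.size
            Y_oct = oct_maps[i].flatten(order="F")
            vf_hat = estimate_vf_from_oct(Y_oct, X[:P], X[P:],
                                          phi_oct, phi_vf, K=K)
            sq_errs.append(np.mean((vf_maps[i] - vf_hat) ** 2))
        rmse[K - 1] = np.sqrt(np.mean(sq_errs))
    return rmse
```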
A representative sample of the results from our 20 leave-one-out experiments, based on using the single largest eigenvector, is displayed in Figure 11. Three images are shown in a row for each patient. The first column is the input OCT data, the second column is the output estimated VF image based on our algorithm, and the third column is the measured VF map. The estimated VF image is purposely positioned next to the actual VF image to facilitate a direct qualitative comparison. Qualitatively, the VF estimation results appear reasonably accurate overall.
Figure 11. Examples of VF prediction results.
To provide a quantitative assessment of the performance of our algorithm, we generated a scatter plot of estimated versus measured VF values for each of the 34 points in each of our 20 leave-one-out experiments (using just the single largest eigenvector); the results are shown in Figure 12. A best-fit line, shown in red, is plotted through these 680 points. The slope and y-intercept of this line are 0.743 and −1.077, respectively, with the ideal slope and y-intercept being 1 and 0. The R-value for this line fit is 0.844 with a p-value < 0.0001. This R-value is considered very good in this application, especially given that our estimation results are based on relationships captured between an objective structural test and a subjective functional test.
Figure 12. Scatter plot of predicted versus measured VF.
As mentioned previously, the VF measurements are subjective and often unreliable; however, they are the established and clinically accepted measurements. One way to compare a new method of measurement against an established one is the Bland-Altman plot. To judge the two methods visually, the mean difference between them (i.e. the bias) and the 95% limits of agreement are overlaid on the Bland-Altman plot shown in Figure 13. The bias here is −0.04 dB, and the standard deviation is 5.48 dB.¹ Visually, there is relatively good agreement between the measured and the estimated values.
Figure 13. Bland-Altman plot of VF prediction.
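For completeness, a minimal sketch of the Bland-Altman summary statistics quoted above; the 95% limits of agreement follow the usual convention of bias ± 1.96 standard deviations of the differences.

```python
import numpy as np

def bland_altman_stats(measured, estimated):
    """Bias (mean of estimated minus measured) and 95% limits of agreement."""
    diff = np.asarray(estimated, dtype=float) - np.asarray(measured, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```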
4.3. Estimation of OCT Images of the Macula Based on VF Testing
In a similar fashion, we qualitatively and quantitatively evaluated the performance of our parametric model in estimating OCT thickness ratio maps based on VF maps. Specifically, we performed 20 leave-one-out experiments based on the model described by Equations 20 and 21, and then compared our estimates to actual measurements.
Again, to determine the optimal number of eigenvectors to use, we calculated the RMSE between the measured and the estimated OCT macular thickness ratio maps, over all 20 leave-one-out experiments and all P = 251,001 pixels, as defined by
\mathrm{RMSE}_{oct} = \sqrt{ \frac{1}{20\,P} \sum_{i=1}^{20} \sum_{j=1}^{P} \left( Y_{oct}^{i}(j) - \hat{Y}_{oct}^{i}(j) \right)^{2} }
(23)
while the number of eigenvectors employed is varied from 1 to 19. The results of this exercise are shown in Figure 14, which demonstrates that the lowest RMSE (0.081) is achieved by using just the single largest eigenvector. As already mentioned, we suspect that only the largest eigenvector is needed because of the limited variability of our database.
Figure 14. RMSE in OCT prediction as a function of the number of eigenvectors used.
Representative results from our 20 leave-one-out experiments, based on using the single largest eigenvector, are displayed in Figure 15. As in the prior section, three images are shown in a row for each patient to facilitate direct comparison. The first column is the input VF data, the second column is the output estimated OCT image based on our algorithm, and the third column is the measured OCT map. These results appear reasonably accurate.
Figure 15. Examples of OCT prediction results.
For a more quantitative assessment, we again generated a scatter plot of estimated versus actual OCT values for each of the 251,001 points in each of our 20 leave-one-out experiments (using only the single largest eigenvector); the result is shown in Figure 16. A line of best fit (shown in red) is plotted through the 5,020,020 points. The slope and y-intercept of this line are 0.801 and 0.127, respectively, with the ideal slope and y-intercept being 1 and 0. The R-value for this line fit is 0.893 with a p-value < 0.0001. Again, this R-value is considered very good in this application.
Figure 16. Scatter plot of predicted versus measured OCT.
One way to assess the accuracy of our estimation method is again the Bland-Altman plot. The bias and the 95% limits of agreement are overlaid on the Bland-Altman plot shown in Figure 17. The bias here is 0.0006, and the standard deviation is 0.104. Again, there is relatively good agreement between the estimated and the measured OCT data.
Figure 17. Bland-Altman plot of OCT prediction.
5. Conclusions and Future Directions

In this final section, we summarize the contributions of this paper and provide suggestions for extensions and future research directions. In the preceding sections, we presented a mathematically principled framework that is capable of providing a reasonable estimate of the VF map when given the corresponding OCT image, and vice versa. One key attraction of this technique is its versatility: it can be used to estimate a functional test based on structural information, and vice versa. Moreover, the training phase of our formulation requires no supervision and is easy to implement. In this phase, we employ PCA to decompose a covariance matrix into a series of orthogonal patterns that embody, in order of decreasing importance, not only the variability of the OCT and VF images, respectively, but also their co-dependencies. The estimation phase of our algorithm is accomplished rapidly, as it simply involves solving a linear least squares problem for a small set of parameters. The results of the leave-one-out experiments clearly demonstrate the estimation capabilities of our model and show promise for new applications.
This coupled parametric model is simple to understand and implement; it is computationally efficient; and it offers both strong unsupervised learning and estimation capabilities. This versatile model has two distinct advantages over most of the techniques previously described: first, it does not require recalculation of the eigenvectors with each query; and second, it accommodates a wider subspace than the training database itself, admitting query data that may be far from the training examples yet still close to the principal subspace.
In terms of clinical utility, this is one of the first studies to use OCT macular thickness maps to predict VF defects. Even at this early stage, the preliminary data generated by our approach can be used by clinicians for patients with superior VF defects. For example, the predicted visual field can be compared to the subjective VF test to verify which defects are indeed real. It is known that when the VF defect is significant (i.e. >−40 dB), the defect can be quite variable (even if the patient is asked to retake the test). Hence, it is not unexpected to find differences between the predicted and measured VF values. In short, this algorithm may help to incorporate Fourier domain OCT of the macula into the clinical care of glaucoma patients.
Unfortunately, our approach has a few shortcomings. First, the prediction capability is somewhat limited because the algorithm depends heavily upon a relatively small training library database, and only superior VF defects were included in the training database. The larger and more accurate the database, the better our technique is expected to perform (as is true of most training-based algorithms). To offset this weakness, we are actively recruiting more patients to enrich our library database and expand the versatility of this estimation algorithm. Second, the performance of our methodology is also limited by our use of PCA, which assumes our data set to be composed of linear combinations of basis functions. Non-linear methods such as kernel PCA (Scholkopf, 1998) may yield improved performance. Third, 2D correspondence maps between VF maps and OCT images are difficult to create because, when using PCA, we must first transform the 2D image matrices into 1D image vectors, compute the corresponding 1D eigenvectors from the sample matrix composed of a concatenation of these vectors, and then transform the 1D eigenvectors back into 2D matrices for image reconstruction. This matrix-to-vector conversion unfortunately discards some of the inherent structural information present within the original 2D images. To address this problem, we are currently investigating the use of 2D PCA (Yang, 2004). Finally, we are exploring improved mechanisms for capturing co-dependencies among different families of images.
Research Highlights
We propose a learning-based framework with the ability to generate an estimate of the visual field (VF) map given as input the corresponding optical coherence tomography (OCT) scan, and vice versa.
This model is derived by applying principal component analysis (PCA) to a library consisting of various pairs of OCT scans and VF maps, and the parameters of this coupled model are obtained by solving a linear least squares problem. These estimated parameters are used in conjunction with the eigenvectors derived from PCA to form the estimate.
The accuracy of this coupled parametric estimation model is evaluated by performing multiple leave-one-out cross validation experiments.
Acknowledgments
This work was supported in part by Department of Radiology, Children’s Hospital Boston, American Glaucoma Society, unrestricted grants from Research to Prevent Blindness and Pfizer Inc., and Harvard Catalyst/the Harvard Clinical and Translational Science Center. The authors would like to thank Optovue, Inc. for providing technical assistance with RTVue, Dennis Mock for his role in data collection and processing, and Nancy Drinan for her help in the preparation of this manuscript. Finally, the authors would like to thank the anonymous reviewers for their valuable comments and thoughtful suggestions.
Footnotes
¹ As a reminder, units of dB are used in the Bland-Altman plot because the inherent units of VF maps are dB.
References
  • Blumenthal EZ, Williams JM, Weinreb RN, Girkin CA, Berry CC, Zangwill LM. Reproducibility of nerve fiber layer thickness measurements by use of optical coherence tomography. Ophthalmology. 2000;107:2278–2282.
  • Budenz DL, Chang RT, Fredette MJ, Feuer WJ, Anderson DR. Reproducibility of peripapillary retinal nerve fiber thickness measurements with Stratus OCT in glaucomatous eyes. Ophthalmology. 2008;115(4):661–666.
  • Bulletin of the World Health Organization. 2004;82(11).
  • Carpineto P, Ciancaglini M, Zuppardi E, Falconio G, Doronzo E, Mastropasqua L. Reliability of nerve fiber layer thickness measurements using optical coherence tomography in normal and glaucomatous eyes. Ophthalmology. 2003;110:190–195.
  • Essock E, Gunvant P, Zheng Y, Garway-Heath D, Kotecha A, Spratt A. Predicting visual field loss in ocular hypertensive patients using wavelet-Fourier analysis of GDx scanning laser polarimetry. Optometry and Vision Science. 2007;84:E380–E387.
  • Ferreras A, Pablo LE, Garway-Heath DF, Paolo F, Garvia-Feijoo J. Mapping standard automated perimetry to the peripapillary retinal nerve fiber layer in glaucoma. Invest Ophthalmol Vis Sci. 2008;49:3018–3025.
  • Gonzalez-Garcia AO, Vizzeri G, Bowd C, Medeiros FA, Zangwill LM, Weinreb RN. Reproducibility of RTVue retinal nerve fiber layer thickness and optic disc measurements and agreement with Stratus optical coherence tomography measurements. American Journal of Ophthalmology. 2009;147:1067–1074, 1074.e1.
  • Harwerth RS, Vilupuru AS, Rangaswamy NV, Smith EL 3rd. The relationship between nerve fiber layer and perimetry measurements. Invest Ophthalmol Vis Sci. 2007;48:763–773.
  • Hood DC, Kardon RH. A framework for comparing structural and functional measures of glaucomatous damage. Prog Retin Eye Res. 2007;26:688–710.
  • Hood DC, Anderson SC, Wall M, Kardon RH. Structure versus function in glaucoma: an application of a linear model. Invest Ophthalmol Vis Sci. 2007;48:3662–3668.
  • Jolliffe I. Principal component analysis. Springer-Verlag; 2002.
  • Leung CK, Cheung CY, Weinreb RN, Qiu K, Liu S, Li H, Xu G, Fan N, Pang CP, Tse KK, Lam DS. Evaluation of retinal nerve fiber layer progression in glaucoma: a study on optical coherence tomography guided progression analysis. Invest Ophthalmol Vis Sci. 2010;51:217–222.
  • Leventon M. Statistical models for medical image analysis. MIT Ph.D. Thesis; 2000.
  • Little R, Rubin D. Statistical analysis with missing data. Wiley; 2002.
  • Pearson K. On lines and planes of closest fit to systems of points in space. Philosophical Magazine. 1901;2:559–572.
  • Prevent Blindness America Survey. 2002.
  • Raiko T, Ilin A, Karhunen J. Principal component analysis for large scale problems with lots of missing values. European Conference on Machine Learning; 2007. pp. 691–698.
  • Roweis S. EM algorithms for PCA and SPCA. Advances in Neural Information Processing Systems. 1998:626–632.
  • Scholkopf B, Smola A, Muller KR. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation. 1998;10:1299–1319.
  • Schuman JS, Pedut-Kloizman T, Hertzmark E, Hee MR, Wilkins JR, Coker JG, Puliafito CA, Fujimoto JG, Swanson EA. Reproducibility of nerve fiber layer thickness measurements using optical coherence tomography. Ophthalmology. 1996;103:1889–1898.
  • Shen LQ, Mock D, Sami M, Caprioli J. Regional macular thickness correlates with glaucomatous visual field loss. American Glaucoma Society Annual Meeting; March 2010; Naples, FL.
  • Sommer A, Katz J, Quigley HA, Miller NR, Robin AL, Richter RC, Witt KA. Clinically detectable nerve fiber atrophy precedes the onset of glaucomatous field loss. Arch Ophthalmology. 1991;109:77–83.
  • Tipping ME, Bishop CM. Probabilistic principal component analysis. Journal of the Royal Statistical Society. 1999;61:611–622.
  • Tsai A, Wells W, Tempany C, Grimson E, Willsky A. Mutual information in coupled multi-shape model for medical image segmentation. Medical Image Analysis. 2004;8(4):429–445.
  • Turk M, Pentland A. Eigenfaces for recognition. J. Cognitive Neuroscience. 1991;3:71–86.
  • Wojtkowski M, Srinivasan V, Fujimoto JG, Ko T, Schuman JS, Kowalczyk A, Duker JS. Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography. Ophthalmology. 2005;112:1734–1746.
  • Wolf-Schnurrbusch UE, Ceklic L, Brinkmann CK, Illiev ME, Frey M, Rothenbuehler SP, Enzmann V, Wolf S. Macular thickness measurements in healthy eyes using six different optical coherence tomography instruments. Invest Ophthalmol. 2009;50(7):3432–3437.
  • Yang J, Zhang D, Frangi AF, Jing-yu Y. Two dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Trans. PAMI. 2004;26:131–137.