We propose in this work a novel variational method for computing maps between surfaces by combining informative geometric features with regularizing forces that include inverse consistency and harmonic energy. To tackle the ambiguity in defining homologous points on smooth surfaces, we design feature functions in the data term based on the Reeb graph of the Laplace-Beltrami eigenfunctions to quantitatively describe the global geometry of elongated anatomical structures. For inverse consistency and robustness, our method simultaneously computes the forward and backward maps by iteratively solving partial differential equations (PDEs) on the surfaces. In our experiments, we successfully mapped 890 hippocampal surfaces and report statistically significant differences in atrophy rates between normal controls and patients with mild cognitive impairment (MCI) or Alzheimer's disease (AD).
Surface mapping is an important problem in medical image analysis, with applications in population studies and the creation of 3D shape prior models. While significant progress has been made with the development of various techniques [3-9], most of them focus heavily on the regularization part of the problem. The lack of informative data terms makes it challenging to resolve the ambiguity in defining homologous points on smooth surfaces. To this end, we propose in this paper a novel variational framework to compute maps directly between surfaces. By combining data terms from Laplace-Beltrami eigen-features with regularizing forces from inverse consistency and harmonic energy, our method can generate robust, high-quality maps for a class of subcortical structures with clinical significance.
Many interesting techniques have been proposed in previous work to solve various surface mapping problems. A popular approach is to first map the surface onto the sphere and then solve the registration problem in this canonical domain [5-7, 9]. By viewing the surface as a subset of R3, successful image registration techniques were applied to compute surface maps with possible landmark constraints [3, 4, 8]. Landmark curves detected by shape context features were used to guide the mapping of hippocampal surfaces. The medial model provides a compact surface representation and is also very useful for constructing maps between surfaces [11-13].
We propose in this work a variational approach to automatically compute both the forward and backward maps between two surfaces with inverse consistency. An important feature of our method is that we use the Reeb graph of the Laplace-Beltrami eigenfunctions to define intrinsic features for the data fidelity term in our energy function. These features provide quantitative descriptions of the salient geometry shared by elongated structures such as the hippocampus and significantly reduce the ambiguity in finding correspondences for such surfaces. Another interesting feature of our method is that we incorporate both inverse consistency and harmonic energy as the regularization term to improve the quality of the maps. The harmonic energy encourages smoothness of the maps, and the inverse consistency term helps reduce area distortions. In our experiments, we perform extensive tests to demonstrate the quality of the maps generated by our method. We also demonstrate the robustness of our method with its successful mapping of 890 hippocampal surfaces, and we report statistically significant results.
The rest of the paper is organized as follows. In section 2, we develop the variational framework that combines data terms of intrinsic features and regularization terms including both inverse consistency and harmonic energy. We propose the Laplace-Beltrami eigen-features for the modeling of elongated structures such as the hippocampus in section 3. Experimental results are presented in section 4 to demonstrate the quality and robustness of our mapping algorithm. Finally, conclusions and future work are discussed in section 5.
In this section, we propose a variational framework for the direct mapping of surfaces. Let (M, g) and (N, h) be two Riemannian surfaces with metric tensors g and h. To obtain inverse consistency, we compute two maps jointly in our method. We denote u1: M → N as the map from M to N, and u2: N → M as the map from N to M. We also assume there are L feature functions defined on each surface and denote ξjM and ξjN as the j-th feature function on M and N, respectively.
In our variational framework, the maps are computed as the minimizer of the following energy function:
There are three terms in the energy function, and all of them are symmetric with respect to M and N. We call the first term ED the data fidelity energy, and it is defined as:
This energy penalizes the mismatch of feature functions induced by the two maps, and weighting parameters are used to assign proper weights to the different feature functions.
The other two terms are for regularization, where EIC encourages inverse consistency and EH is the harmonic energy, which ensures smoothness of the maps. Let I denote the identity map such that I(x) = x. We define the energy EIC as:
where αIC is a regularization parameter. For inverse consistency, this energy penalizes the difference between the identity map I and the compositions of the two maps, u2 ∘ u1 and u1 ∘ u2. The harmonic energy term EH is defined as [15, 16]:
where Ju1 and Ju2 are the Jacobians of the two maps defined on the surfaces, and αH is the regularization parameter for this term.
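The three terms described in the preceding paragraphs can be written out explicitly. The following is a reconstruction consistent with the prose above (the weights η_j on the feature terms are our own notation), not necessarily the authors' exact formulas:

```latex
E(u_1, u_2) = E_D + E_{IC} + E_H, \quad \text{with}
\]
\[
E_D = \sum_{j=1}^{L} \eta_j \Big[ \int_{\mathcal{M}} \big( \xi_j^{\mathcal{N}}(u_1(x)) - \xi_j^{\mathcal{M}}(x) \big)^2 \, d\mathcal{M} + \int_{\mathcal{N}} \big( \xi_j^{\mathcal{M}}(u_2(y)) - \xi_j^{\mathcal{N}}(y) \big)^2 \, d\mathcal{N} \Big],
\]
\[
E_{IC} = \alpha_{IC} \Big[ \int_{\mathcal{M}} \| u_2 \circ u_1 - I \|^2 \, d\mathcal{M} + \int_{\mathcal{N}} \| u_1 \circ u_2 - I \|^2 \, d\mathcal{N} \Big],
\]
\[
E_H = \alpha_H \Big[ \int_{\mathcal{M}} \| J_{u_1} \|^2 \, d\mathcal{M} + \int_{\mathcal{N}} \| J_{u_2} \|^2 \, d\mathcal{N} \Big].
```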
To minimize the energy function with respect to the maps u1 and u2, we derive their gradient flows and compute the maps iteratively by solving these partial differential equations (PDEs) on the two surfaces M and N. For the energy ED, the gradient flows of u1 and u2 are:
where ∇M and ∇N denote the intrinsic gradients on the surfaces. For the energy EIC, the gradient flows are:
where u1−1 and u2−1 are the inverse maps of u1 and u2, and the determinants of their Jacobians are denoted |Ju1−1| and |Ju2−1|.
To derive the gradients of the harmonic energy, we denote (x1, x2) and (y1, y2) as the local coordinates on M and N. In these local coordinates, the maps can be written component-wise as u1 = (u11, u12) and u2 = (u21, u22). The gradient flow of u1 can then be expressed in the form of Einstein summation as:
where ΔM is the Laplace-Beltrami operator on M, (gαβ) is the inverse of the metric tensor (gαβ), and the Christoffel symbols are those of the target manifold N. Similarly, the gradient flow of u2 is:
where ΔN is the Laplace-Beltrami operator on N, (hpq) is the inverse of the metric tensor (hpq), and the Christoffel symbols are those of the target manifold M.
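For reference, the gradient flow of the harmonic energy takes the standard harmonic map heat flow form. The following is a reconstruction from the definitions above (the coordinate index labels are our own), where \(\tilde{\Gamma}\) denotes the Christoffel symbols of the target surface in each case:

```latex
\frac{\partial u_1^p}{\partial t} = \Delta_{\mathcal{M}} u_1^p + g^{\alpha\beta}\, \tilde{\Gamma}^p_{qr}\, \frac{\partial u_1^q}{\partial x^\alpha} \frac{\partial u_1^r}{\partial x^\beta},
\qquad
\frac{\partial u_2^s}{\partial t} = \Delta_{\mathcal{N}} u_2^s + h^{pq}\, \tilde{\Gamma}^s_{ab}\, \frac{\partial u_2^a}{\partial y^p} \frac{\partial u_2^b}{\partial y^q}.
```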
By combining the above results, we have the gradient descent flows of u1 and u2 to minimize the energy:
To numerically solve these two equations on M and N, we use the approach of solving PDEs on implicit surfaces [17, 18, 16] and represent M and N as signed distance functions φ and ψ, respectively. With the implicit representation, we have a very simple formulation for the gradient flow of the harmonic energy. Using the signed distance function, we can express gradient operators on surfaces in terms of conventional gradient operators in R3. For example, the intrinsic gradient ∇M f of a function f on M can be expressed as Π∇f, where ∇f is the gradient of f in R3 and Π = I − ∇φ∇φT is the projection operator onto the tangent plane. By substituting the implicit form of the gradient operators into (9), we can write the gradient descent flows of u1 and u2 as:
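The projection identity above is easy to verify numerically. The sketch below is a toy example with a unit-sphere signed distance function (all function names are ours, not from the paper): it projects the Euclidean gradient of a function onto the tangent plane of the implicit surface.

```python
import numpy as np

def sdf_sphere(x):
    # signed distance to the unit sphere (a simple stand-in for phi)
    return np.linalg.norm(x) - 1.0

def grad(fn, x, eps=1e-5):
    # central-difference gradient in R^3
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        g[i] = (fn(x + e) - fn(x - e)) / (2 * eps)
    return g

def intrinsic_grad(f, phi, x):
    # Pi * grad f, with Pi = I - n n^T and n = grad phi (unit-length for an SDF)
    n = grad(phi, x)
    n = n / np.linalg.norm(n)
    Pi = np.eye(3) - np.outer(n, n)
    return Pi @ grad(f, x)

x = np.array([0.6, 0.8, 0.0])                       # a point on the unit sphere
g_surf = intrinsic_grad(lambda p: p[0], sdf_sphere, x)
n = grad(sdf_sphere, x)
# g_surf is tangent to the surface: its dot product with the normal vanishes
```

The same projection applied to the PDE right-hand sides keeps the evolving maps consistent with the surface geometry while all computations use ordinary derivatives in R3.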
For numerical efficiency, all computations are performed on narrow bands surrounding the surfaces. More details of the computational schemes can be found in the referenced work.
To use the above variational framework, it is important to design proper features that can capture the common geometry across surfaces. This is generally a difficult problem, and the solution is application dependent. In this section, we propose two feature functions to characterize the global geometry of elongated structures with neuroanatomical significance, such as the hippocampus and putamen. Both features are invariant to scale differences and natural pose variations. For a surface M, the first feature function ξ1: M → R characterizes the tail-to-head trend of elongated structures, and the second feature ξ2: M → R describes the lateral profile of the surface. We next develop the algorithm to compute both features from the Laplace-Beltrami eigenfunctions of M.
The spectrum of the Laplace-Beltrami operator on M is discrete, and the eigenvalues can be ordered as 0 = λ0 ≤ λ1 ≤ λ2 ≤ ···. The eigenfunction corresponding to λi is denoted fi. For the first eigenvalue λ0 = 0, the eigenfunction f0 is constant, so it is not useful for describing the shape. Among the remaining eigenfunctions, the second eigenfunction f1 is particularly interesting for modeling the global characteristics of elongated subcortical structures such as the hippocampus. If we view f1 as a map from M to R, it has the following property:
Thus the function f1 can be viewed as the smoothest non-constant map from M to R in the space orthogonal to f0. To numerically compute the eigenfunction, we represent M as a triangular mesh (V, T), where V is the set of vertices and T is the set of triangles. By using the weak form of (11) and the finite element method, we can compute the eigenfunction by solving a generalized matrix eigenvalue problem:
where Q and U are matrices derived with the finite element method.
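A minimal numerical sketch of this generalized eigenvalue problem, using a standard cotangent-weight finite element discretization on a toy elongated strip mesh (our own construction; the paper's exact FEM matrices may differ):

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import eigsh

def lb_matrices(verts, tris):
    """Cotangent stiffness Q and lumped mass U for a triangle mesh,
    one common FEM discretization of the Laplace-Beltrami operator."""
    n = len(verts)
    Q = lil_matrix((n, n))
    U = lil_matrix((n, n))
    for t in tris:
        for k in range(3):
            i, j, o = t[k], t[(k + 1) % 3], t[(k + 2) % 3]
            e1, e2 = verts[i] - verts[o], verts[j] - verts[o]
            cot = np.dot(e1, e2) / np.linalg.norm(np.cross(e1, e2))
            Q[i, j] -= 0.5 * cot; Q[j, i] -= 0.5 * cot
            Q[i, i] += 0.5 * cot; Q[j, j] += 0.5 * cot
        area = 0.5 * np.linalg.norm(np.cross(verts[t[1]] - verts[t[0]],
                                             verts[t[2]] - verts[t[0]]))
        for v in t:
            U[v, v] += area / 3.0                   # lumped mass
    return csr_matrix(Q), csr_matrix(U)

# a flat elongated strip, triangulated, as a toy "elongated structure"
nx, ny = 20, 4
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
verts = np.stack([xs.ravel(), ys.ravel(), np.zeros(nx * ny)], axis=1).astype(float)
tris = []
for i in range(nx - 1):
    for j in range(ny - 1):
        a, b, c, d = i * ny + j, (i + 1) * ny + j, (i + 1) * ny + j + 1, i * ny + j + 1
        tris += [(a, b, c), (a, c, d)]

Q, U = lb_matrices(verts, tris)
vals, vecs = eigsh(Q, k=3, M=U, sigma=-1e-5)        # solve Q f = lambda U f
order = np.argsort(vals)
vals, vecs = vals[order], vecs[:, order]
f1 = vecs[:, 1]                                     # second eigenfunction
```

For an elongated shape, f1 varies monotonically along the long axis, which is precisely the tail-to-head trend exploited below.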
To show how the eigenfunction f1 can model the global shape of elongated structures, we construct its Reeb graph to obtain an explicit representation. For a function f1 defined on a manifold M, its Reeb graph is defined as the quotient space whose topology is induced by the equivalence relation x ∼ y if f1(x) = f1(y) and x, y belong to the same connected component of the level set. To numerically construct the Reeb graph, we trace a set of level contours of the eigenfunction f1 on the triangular mesh representation of M. To ensure the level contours distribute evenly over the entire surface, we use an adaptive sampling scheme developed in prior work. These level contours are used as the nodes of the Reeb graph, and the connectivity of these nodes is established according to the neighboring relations of the level contours. As an example, we show in Fig. 1(a) the Reeb graph of a hippocampus. By representing each node of the Reeb graph as the centroid of its level contour, we can see that this graph has a chain structure and provides a compact model of the essentially one-dimensional, tail-to-head trend of the hippocampus.
Based on the Reeb graph of f1, we define the first feature function ξ1. Because the eigenfunction is generically a Morse function, its Reeb graph has a tree structure for genus-zero surfaces. For the hippocampus, the Reeb graph of f1 typically has a chain structure, as shown in Fig. 1. In case branches appear in the Reeb graph, we prune the smaller branches according to the size of the associated level contours to ensure the pruned graph has a chain structure. We order the nodes on this chain by increasing value of the function f1 and denote the level contour at each node as Ci (i = 1, ···, N). Each contour is digitized into K points Ci = [Ci,1, Ci,2, ···, Ci,K]. Because these points are obtained from the vertices of the mesh, we have the following relation
where C = [C1, C2, ···, CN]T is the set of all the points on the level contours, and the matrix A represents the linear interpolation operation that generates the level contours from the mesh vertices. To quantitatively describe the tail-to-head trend of the shape, we first define the feature function ξ1 on the level contours as ξ1(Ci,k) = −1 + 2i/N for points on Ci. To define the function ξ1 on the entire mesh, we solve the following regularized linear inverse problem:
where ξ1(C) and ξ1(V) are the vectors of values of ξ1 on the level contours and the vertices of the mesh, respectively. The matrix Q is the same as in (14), and the term ξ1(V)T Qξ1(V) encourages smoothness of the function ξ1. For this least squares problem, we have the solution for ξ1(V) as
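Writing the interpolation relation as ξ1(C) = A ξ1(V), minimizing ||A ξ1(V) − ξ1(C)||² + β ξ1(V)T Q ξ1(V) has the closed-form solution ξ1(V) = (ATA + βQ)−1 AT ξ1(C). A small self-contained sketch, with a 1-D chain and a path-graph Laplacian standing in for the FEM stiffness matrix Q (all specifics here are our own toy choices):

```python
import numpy as np

# toy setup: n mesh vertices on a chain; m contour samples, each a linear
# interpolation of two neighboring vertices (so each row of A sums to 1)
n, m = 50, 30
rng = np.random.default_rng(0)
A = np.zeros((m, n))
idx = np.linspace(0, n - 2, m).astype(int)       # roughly even coverage
t = rng.uniform(0, 1, m)
A[np.arange(m), idx] = 1 - t
A[np.arange(m), idx + 1] = t

# stand-in for the stiffness matrix Q: a path-graph Laplacian (smoothness prior)
Q = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Q[0, 0] = Q[-1, -1] = 1

# target values on the contour samples: the -1 + 2i/N ramp of the text
b = -1 + 2 * (idx + t) / (n - 1)

beta = 10.0
xi1 = np.linalg.solve(A.T @ A + beta * Q, A.T @ b)   # regularized solution
```

The recovered ξ1 on the vertices closely follows the tail-to-head ramp even where no contour sample touches a vertex, because the βQ term propagates values smoothly.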
We define the second feature function ξ2, also based on the eigenfunctions of the Laplace-Beltrami operator, to characterize the lateral profile of elongated structures. For each level contour Ci, we generate a surface patch that approximates the minimal surface of Ci. This is achieved by first building a Delaunay triangulation of the points Ci,k (k = 1, ···, K) using the software Triangle and then applying Laplacian smoothing to this mesh to obtain a smooth patch interpolating the interior of the boundary Ci, as shown in Fig. 1(b). For this surface patch, we compute the second eigenfunction of its Laplace-Beltrami operator. The Reeb graph of this eigenfunction is then computed with N level contours Di, ordered by increasing function value. To each level contour Di we assign the value −1 + 2i/N to describe its lateral position on the surface. The value of the feature function ξ2 on Ci,k is defined by linear interpolation from the values of the neighboring level contours. Once we define the feature function ξ2 on the level contours, we can compute its value on the vertices of the entire mesh similarly as:
where ξ2(C) and ξ2(V) are the vectors of values of ξ2 on the level contours and the vertices, respectively.
As an illustration, we show the feature functions of a hippocampus in Fig. 2(a) and (b) with the parameters β = 10, N = 100, and K = 100, which clearly illustrates the power of the eigen-features in characterizing the relative locations of points on the surfaces.
In this section we present experimental results to demonstrate our method. We will first illustrate our algorithm on the mapping of two types of surfaces: hippocampus and putamen. After that, we apply our method to a data set of 890 hippocampal surfaces and present statistically significant results.
In the first experiment, we tested our algorithm on nine hippocampal surfaces. We chose one of them as the atlas and computed maps between this surface and the other eight surfaces. The regularization parameters in the energy function are αIC = 6 and αH = 1. To start the iterative algorithm, we use the same approach developed in prior work to automatically find an initial map. For all the surfaces, the mapping algorithm converged in fewer than 1500 iterations, and the computational process took less than 10 minutes on a PC with a 1.6 GHz CPU and 1.5 GB of memory. The mapping results between the atlas and the eight surfaces are visualized in Fig. 3 by projecting a texture pattern onto these surfaces with the correspondences established by the computed maps. While the surfaces vary quite significantly, we can see that the corresponding parts are matched correctly, which shows the robustness of our method to structural variations across the population.
To illustrate the importance of the Laplace-Beltrami eigen-features and inverse consistency, we turned off these terms separately and measured quantitatively their impact on the quality of maps. In our experiments, we computed the standard deviation of |Ju1| and |Ju2| as an indicator of the quality of maps because it measures the area distortion resulting from the maps. As shown in Fig. 4(a) and (b), smaller area distortions have been achieved for all the surfaces by incorporating both the eigen-features and inverse consistency.
In the second experiment, we tested our method on a group of 30 putamen surfaces. Choosing one of the surfaces as the atlas, we computed the maps from the other 29 surfaces to the atlas with the same parameters as in the first experiment. Using the correspondences across the 30 putamen surfaces established by the maps, we can build a shape prior model for the putamen by performing a principal component analysis (PCA). The mean shape of the putamen is shown at the top of Fig. 5(a), where a texture pattern is also generated for visualization. The eigenvalues computed from the PCA process are plotted at the bottom of Fig. 5(a), where we can see that most of the variation is captured by the first few components. As an illustration of the shape prior model, we plotted the shapes generated with the first three principal components. The texture pattern on the mean shape is also plotted on each synthesized surface in Fig. 5(b) to visualize correspondences across the shapes. From the results we can see that the shape prior model is able to generate valid shapes over a quite large range (±5σi) of coefficients for the principal components.
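The PCA shape model can be sketched as follows: aligned shapes, with vertex correspondences given by the maps, are flattened into rows of a data matrix, and the principal modes come from an SVD of the centered matrix. Everything below is a synthetic stand-in for the real putamen meshes:

```python
import numpy as np

# toy aligned "shapes": s subjects, each with n corresponding vertices in 3D,
# flattened into rows of a data matrix
rng = np.random.default_rng(1)
s, n = 30, 200
base = rng.normal(size=3 * n)                    # mean template
mode = rng.normal(size=3 * n)
mode /= np.linalg.norm(mode)                     # one dominant variation mode
coeffs = rng.normal(scale=2.0, size=s)
X = base + np.outer(coeffs, mode) + 0.01 * rng.normal(size=(s, 3 * n))

mean_shape = X.mean(axis=0)
Xc = X - mean_shape
# PCA via SVD of the centered data matrix
Uh, S, Vt = np.linalg.svd(Xc, full_matrices=False)
sigmas = S / np.sqrt(s - 1)                      # per-mode standard deviations

# synthesize a new shape along the first principal component at +3 sigma
synth = mean_shape + 3 * sigmas[0] * Vt[0]
```

Sampling the coefficients within a few standard deviations of each retained mode generates plausible new shapes, which is how the ±5σi surfaces in Fig. 5(b) are produced.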
In the third experiment, we apply our mapping algorithm to a data set of 890 left hippocampal surfaces. The hippocampi were segmented automatically from the screening and 12-month follow-up scans of 445 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Among the subjects, there are 136 normal controls (NC), 228 patients with mild cognitive impairment (MCI), and 81 patients with Alzheimer's disease (AD). The goal of this experiment is to test for differences among normal aging, MCI, and AD in terms of atrophy rates in the hippocampus.
As the first step in this experiment, we computed maps from all 890 surfaces to the atlas surface used in the first experiment, shown in the center of Fig. 3; the same parameters as in the first experiment were used for all surfaces. To compute such a large number of surface maps, we implemented our method on a grid of more than 1000 CPUs, which enabled us to compute all the maps in parallel.
To perform statistical analysis, we first generated a regular mesh representation of the atlas surface with 2000 vertices. Using the maps computed above, we projected the triangular mesh of the atlas onto the 890 surfaces. As a result, all surfaces were represented with the same triangulation, and their vertices have one-to-one correspondences. To quantify the atrophy rate of the hippocampus locally, we define a thickness measure at each vertex of the mapped surfaces as the distance from the vertex to the Reeb graph of the first Laplace-Beltrami eigenfunction, as shown in Fig. 1(a). By viewing this Reeb graph as a medial core of the hippocampus, our thickness measure is very similar to medial-model definitions, except that ours is completely intrinsic and invariant to pose variations such as translation, rotation, and reflection. For the hippocampal surfaces of a subject, the atrophy rate at each corresponding vertex then equals the change of thickness over the 12-month period as a percentage of the thickness at the screening scan.
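Our reading of this thickness measure can be sketched as the distance from each vertex to the nearest node (level-contour centroid) of the Reeb-graph chain. The cylinder below is a toy stand-in for a hippocampal surface, and all function names are ours:

```python
import numpy as np
from scipy.spatial.distance import cdist

def thickness(verts, chain_nodes):
    # distance from each surface vertex to the nearest Reeb-graph chain node
    return cdist(verts, chain_nodes).min(axis=1)

def atrophy_rate(thick_screening, thick_12mo):
    # percent thickness change over 12 months, relative to the screening scan
    return 100.0 * (thick_screening - thick_12mo) / thick_screening

# toy example: a cylinder of radius 2 around the z-axis, chain nodes on the axis
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
verts = np.stack([2 * np.cos(theta), 2 * np.sin(theta),
                  np.linspace(0, 5, 16)], axis=1)
chain = np.stack([np.zeros(6), np.zeros(6), np.linspace(0, 5, 6)], axis=1)
th = thickness(verts, chain)          # all values close to the radius 2
```

Because only pairwise distances enter, the measure is unchanged by any rigid motion or reflection applied to both the vertices and the chain, matching the invariance claimed above.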
Given the atrophy rates at corresponding vertices of all subjects, we performed two group analyses. In the first, we compared the NC group with the MCI group. At each vertex, a one-tailed t-test was applied to test the hypothesis that the MCI group has a higher atrophy rate than the NC group. The resulting p-value map is plotted on the mean shape of the NC group in Fig. 6(a) and (b), in top and bottom views. This map shows the significance level across different regions of the hippocampus. To correct for multiple comparisons, we performed 10,000 permutation tests, and an overall statistically significant p-value of 0.0063 was obtained. In the second group analysis, we compared the NC group with the AD group. Similarly, a one-tailed t-test was applied to test the hypothesis that the AD group has a higher atrophy rate than the NC group. The p-value map of this group analysis is plotted in Fig. 6(c) and (d). We also applied 10,000 permutation tests to correct for multiple comparisons, and the overall p-value is 0.001, which shows the map is statistically significant. By comparing the maps in Fig. 6(c)(d) with those in Fig. 6(a)(b), we can clearly see the expansion of the regions with higher atrophy rates from the MCI to the AD group.
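The vertex-wise analysis can be sketched as below, using SciPy's one-tailed t-test and a max-statistic permutation correction (the choice of maximum t as the correction statistic is our own; the data is synthetic, not ADNI). Note that `alternative="greater"` requires SciPy ≥ 1.6:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

# synthetic atrophy rates: subjects x vertices, with a genuinely higher
# atrophy rate at the first 50 vertices of the second group
n_vert = 200
nc  = rng.normal(0.0, 1.0, size=(136, n_vert))
mci = rng.normal(0.0, 1.0, size=(228, n_vert))
mci[:, :50] += 0.6                               # simulated disease effect

# vertex-wise one-tailed t-test: H1 = MCI atrophy rate > NC atrophy rate
t_obs, p = ttest_ind(mci, nc, axis=0, alternative="greater")

# permutation correction for multiple comparisons: null distribution of the
# maximum t statistic over vertices under random group relabeling
pooled = np.vstack([nc, mci])
max_t_null = []
for _ in range(200):                             # 10,000 in the paper; fewer here
    perm = rng.permutation(len(pooled))
    g1, g2 = pooled[perm[:136]], pooled[perm[136:]]
    t_perm, _ = ttest_ind(g2, g1, axis=0, alternative="greater")
    max_t_null.append(t_perm.max())
p_corrected = np.mean(np.array(max_t_null) >= t_obs.max())
```

The uncorrected p-values form the vertex-wise significance map, while `p_corrected` plays the role of the overall corrected p-value reported for each group comparison.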
We proposed a general framework for surface mapping with both eigen-feature-based data terms and regularizing terms for inverse consistency and smoothness. We successfully demonstrated its application in computing maps between elongated structures such as the hippocampus and putamen. For future work, we will design new eigen-features and apply the current framework to other anatomical structures. We will also perform validations to compare our method with previous techniques.