We present a matching criterion for curves and integrate it into the large deformation diffeomorphic metric mapping (LDDMM) scheme for computing an optimal transformation between two curves embedded in the Euclidean space ℝ^d. Curves are first represented as vector-valued measures, which incorporate both the location and the first-order geometric structure of the curves. A Hilbert space structure is then imposed on the measures to build a norm quantifying the closeness between two curves. We describe a discretized version of this construction, in which discrete sequences of points along the curve are represented by vector-valued functionals. This gives a convenient and practical way to define a matching functional for curves. We derive and implement the curve matching in the large deformation framework and demonstrate mapping results for curves in ℝ² and ℝ³. The behavior of the curve mapping is discussed using 2D curves. An application to shape classification is shown, and experiments with 3D curves extracted from brain cortical surfaces are presented.
Matching curves embedded in ℝ² and ℝ³ is important in the fields of medical imaging and computer vision (Zhang 1994; Besl and McKay 1992; Feldmar and Ayache 1996). In medical imaging, the most striking gross morphological features of the cerebral cortex are the sulcal fissures on the cortical surface (Welker 1990). They are selected as the basis for structural analysis of the internal surface anatomy of the brain because they separate functionally distinct regions and provide a natural topographic partition of the anatomy. Even though major sulci consistently appear in all anatomies, they exhibit variability in size and configuration (Welker 1990). A quantitative study of sulcal anatomical structures requires developing mathematical models for characterizing the shape and geometry of curves, and for accommodating the variability present across populations (Durrleman et al. 2007; Fillard et al. 2007; Thompson et al. 1996; Rettmann et al. 2002). Moreover, image boundaries, contours, and skeletons are often used to represent the shapes of objects in many applications of computer vision, such as tracking human body motion, finding correspondences between maps and terrain images, and object recognition and classification. All of these applications pose two problems: the representation of curves and the mathematical modeling of transformations.
In non-rigid registration problems, desired transformations are often constrained to be diffeomorphic—one to one (invertible) and smooth with smooth inverse so that connected sets remain connected, disjoint sets remain disjoint, smoothness of features such as curves is preserved, and coordinates are transformed consistently. Consequently, many researchers have paid great attention to such diffeomorphic transformations particularly in the field of Computational Anatomy for studying the geometric variation of human anatomy (e.g. Grenander and Miller 1998; Gee and Bajcsy 1999; Miller et al. 2002; Twining et al. 2002; Beg et al. 2005; Joshi et al. 2004; Avants and Gee 2004; Thompson and Toga 1996; Thompson et al. 2004).
Over the past several years we have been using large deformation diffeomorphic metric mapping (LDDMM) (Joshi and Miller 2000; Camion and Younes 2001; Beg et al. 2005) to map 3D volume coordinates in the brain as well as in the heart (Helm et al. 2006). This method not only provides a diffeomorphic correspondence between shapes C and S, but also defines a metric distance in a shape space. The basic diffeomorphic metric mapping approach to understanding the structure of object shapes is to place the set of objects into a metric space. The idea is to model the mapping of one object onto the other via a dynamic flow of diffeomorphisms (a time-dependent deformation) φt, t ∈ [0, 1], of the ambient space ℝ^d. Instead of working directly in the space of diffeomorphisms, we work with time-dependent vector fields, υt : ℝ^d → ℝ^d for t ∈ [0, 1], which model the infinitesimal efforts of the flow. The flow is then defined via the differential equation
∂φt/∂t = υt(φt),   (1)
with φ0 = id, and the resulting diffeomorphism is given by the flow at time t = 1, φ = φ1. In order to ensure that the resulting solution to this equation is a diffeomorphism, υt must belong to a space, V, of regular vector fields (see Trouvé 1995; Dupuis et al. 1998 for specific requirements), for t ∈ [0, 1] with ∫₀¹ ‖υt‖V dt < ∞. V is a Hilbert space with an associated kernel function kV and norm ‖·‖V. This norm models the infinitesimal cost of the flow.
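As a rough numerical illustration (ours, not the paper's implementation), the flow equation (1) can be integrated with simple forward Euler steps; the velocity field below is an arbitrary smooth example chosen only to show the mechanics:

```python
# Sketch: integrate d/dt phi_t(x) = v_t(phi_t(x)), phi_0 = id, by Euler steps.
# The time-dependent velocity field v is a hypothetical illustrative choice,
# not one produced by the LDDMM optimization.
import math

def v(t, x):
    """An arbitrary smooth time-dependent velocity field on R^2."""
    return (0.1 * math.sin(math.pi * t), 0.2 * x[0])

def flow(x0, steps=100):
    """Approximate phi_1(x0), the flow at time t = 1."""
    x = list(x0)
    dt = 1.0 / steps
    for k in range(steps):
        t = k * dt
        vx = v(t, x)
        x = [x[i] + dt * vx[i] for i in range(len(x))]
    return x

print(flow((1.0, 0.0)))
```

With a finer time grid the Euler trajectory converges to the true flow; the full algorithm in the paper uses a predictor/corrector scheme instead of plain Euler.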
Assume that for any pair of objects C and S, there exist vector fields υt such that the diffeomorphism φ1 transforms shape C to shape S, which we write φ1 · C = S. Then one can look at the optimal vector fields υ̂t satisfying this constraint and minimizing the integral of the infinitesimal costs. The metric distance between shapes, ρ(C, S), is defined as the length of the geodesic path φt · C for t ∈ [0, 1] generated from C to S in the shape space. Such a metric between C and S has the form
ρ(C, S) = inf { (∫₀¹ ‖υt‖V² dt)^{1/2} : φ1 · C = S }.   (2)
In practice, we often define an inexact matching problem: find a diffeomorphism φ̂ between two objects C and S as a minimizer of a functional balancing deformation cost against matching error. Equivalently, we have φ̂ = φ1^υ̂, where the vector fields υ̂t minimize the energy
J(υ) = ∫₀¹ ‖υt‖V² dt + γ E(φ1 · C, S),   (3)
and γ is a trade-off parameter. The first term in (3) is a regularization term that guarantees the smoothness of the deformation fields, and the second term is a matching functional that quantifies the mismatch between the objects φ1 · C and S.
In this LDDMM framework, two matching techniques are commonly used, resulting in different models for E. The first is volume image matching, which works directly with the raw 3D volume images, multi-valued vectors, and tensor matrices arising from DTI studies (Beg 2003; Beg et al. 2005; Cao et al. 2005a, 2005b), but is computationally intensive. The second is landmark matching, which requires a manual or semi-automated selection of landmarks on the images (Joshi and Miller 2000). In the case of brain images, however, landmarks are usually not isolated points, but lie along sulcal or gyral curves extracted from cortical surfaces which are themselves segmented from the 3D images. In such a situation, the landmark matching tool is commonly used (Joshi et al. 2007; Bakircioglu et al. 1999). It is imperfect, however: first, it requires selecting the same number of points on both source and target curves, and second, it specifies arbitrary correspondences between specific points on these curves, imposing artificial constraints on the registration process. To overcome this issue, new methods have been proposed, e.g. using level-set representations of the curves (Leow et al. 2004). In Glaunès et al. (2004), an extension of the LDDMM approach in which shapes are considered as unlabelled sets of landmarks was proposed. However, as geometric objects, curves or surfaces should not be treated as sequences of points, because higher-order information (tangent vectors or curvature) is discarded when reducing 1D or 2D manifolds to 0-dimensional point sets. In the case of surfaces, a representation via currents, or vector-valued measures, has recently been developed (Vaillant and Glaunès 2005). It allows both location and tangential information of the surfaces to be put into the metric. Tools presented in Glaunès et al.
(2004) and Vaillant and Glaunès (2005) actually derive from the same general framework, which defines computable metrics on any m-dimensional submanifolds of ℝ^d with m ≤ d. The case of curves (m = 1) was briefly introduced in Glaunès et al. (2006) and Qiu et al. (2007). It is the purpose of this paper to present a full mathematical explanation and derivation of this representation of curves in the LDDMM framework.
This model for curve matching is a special case of a general framework for submanifolds, which could be called measure or current matching; it uses ideas from geometric measure theory and reproducing kernels while fitting into Grenander's theory of deformable templates (Grenander and Miller 1998). Other examples have already been presented: surfaces in Vaillant and Glaunès (2005) and unlabelled point sets (which correspond to the 0-dimensional case) in Glaunès et al. (2004). In particular, the study and use of the metric distance defined in the shape space for shape analysis has received great attention in computer vision and medical imaging, as partly shown here in Sect. 5. The common approach consists in defining a Riemannian metric on the tangent space to a curve, i.e. the space of infinitesimal transformations of a curve, and extending it to the whole space. Sharon and Mumford (2006) proposed a very elegant mathematical model for closed planar curves, which however cannot be generalized. Younes (1998) proposed an elastic metric that measures the cost of deformation of the ambient space, which corresponds to the LDDMM setting described previously and considered in the present paper. Recent works (Klassen et al. 2003; Mio and Srivastava 2004; Schmidt et al. 2006) have emphasized the efficiency of the shape space model or developed its mathematical theory (Michor and Mumford 2007). However, these constructions require that a smooth transformation actually exist between any two curves in the space, which usually restricts the study to the subclass of closed, non-crossing curves. Our approach is different in the sense that we do not claim to find exact correspondences between curves, but rather find a diffeomorphic transformation that matches two curves up to an acceptable error. What we propose here is a way to compute this error, i.e. a model for evaluating the differences between shapes that remain after matching.
This allows comparison of curves that may contain noisy parts, be composed of different numbers of connected components, or require matching a closed curve to an open curve, etc. Such situations are common in practical applications, which suggests that the curve mapping presented in this paper is well suited to applications in computer vision and medical image analysis.
This paper is organized as follows. In Sect. 2.1, we start with a geometric representation of curves that captures their geometry through sensitivity to both location and tangent vectors (the first-order local geometric structure). In Sect. 2.2, we show that such a representation turns curves into elements of a vector space equipped with a computable normed distance. In Sect. 3, the "closeness" between two curves is then quantified by the squared-norm distance between their associated representations, which serves as the matching criterion for finding an optimal transformation from one curve to the other. In Sect. 3.2, we integrate this matching functional into the variational optimization problem of LDDMM and compute its gradient with respect to the momentum, a linear transformation of the velocity. Appendix B demonstrates another use of the matching functional, in the derivation of the similarity transformation (rotation, translation, and isotropic scaling). In Sect. 5, we finally demonstrate the application of this method to curves in ℝ² and ℝ³ and compare matching results and deformation fields between this method and the landmark matching method (Joshi and Miller 2000) in the LDDMM framework.
We define a norm that quantifies the similarity between two curves by first introducing a geometric representation of the curve and then inducing its norm in a Hilbert space via a differential operator.
We follow the approach presented in Vaillant and Glaunès (2005) for surfaces. However, we will not use the term "currents" in this paper, nor any notion of exterior calculus. Let C ⊂ ℝ^d be an open or closed curve. We assume that C is both piecewise C⁰ and piecewise C¹, which means that it may be composed of several disconnected components and may present a finite number of angular points. Let γC : [0, 1] → ℝ^d, s ↦ γC(s) = [x1(s), …, xd(s)] be a parametrization of this curve. At almost any point on the curve, the tangent vector is given by γ′C(s) = dγC(s)/ds. The existence of these tangent vectors along the curve gives us a natural action of the curve on vector fields w : ℝ^d → ℝ^d by the rule:
⟨µC | w⟩ = ∫₀¹ w(γC(s)) · γ′C(s) ds,   (4)
where · denotes the dot product in ℝ^d. Intuitively, ⟨µC | w⟩ is large when the direction of the vector field w(x) is close to the tangent vectors along the curve. It is close to zero when w(x) is almost perpendicular to the curve, or when w(x) takes small values (in norm) along the curve. Figure 1 demonstrates the evaluation of ⟨µC | w⟩ in several cases: when w(x) is parallel to the tangents of C (first panel), perpendicular to C (middle panel), and when the norm of w(x) is close to zero along the curve (last panel).
Hence both the location and tangential information of the curve are integrated in this expression. Another important point is that it is invariant under re-parametrization of the curve: if we introduce a new parameter t = ψ(s) with ψ increasing, it is simple to show that ⟨µC | w⟩ is unaffected. However, using t = ψ(s) with ψ decreasing gives the opposite value for ⟨µC | w⟩, which means that the representation is orientation-dependent.
Equation (4) associates to every oriented curve C a unique representative µC, which is a linear functional acting on vector fields of ℝ^d. Equivalently, µC can be understood as a vector-valued Borel measure. Linear combinations of measures are defined by the rule ⟨αµC + βµS | w⟩ = α⟨µC | w⟩ + β⟨µS | w⟩ for any α, β ∈ ℝ and curves C, S. In general, these combinations are not associated with any curve, unless |α| = |β| = 1: for example, µC − µS corresponds to the concatenation of curve C and curve S with opposite orientation.
The advantage here is to embed the curves in a vector space (of vector-valued measures) on which it is possible to use functional analysis tools to define norms and compare curves via these norms. This will be developed in Sect. 2.2.
We would like to define a norm that computes a geometrical distance between two curves. For instance, Fig. 2 shows two parallel segments C and S of length 1 in ℝ², separated by a small distance ε. We want the norm of the difference µC − µS of the corresponding vector-valued measures to be small, since the segments are close to each other.
This can be achieved if one puts constraints on the variations of the test vector fields w, so that the evaluation of the curve representatives µC and µS on w gives close results. To do so, we define a norm on vector fields of the form
‖w‖W² = ∫ |Lw(x)|² dx,   (5)
where L is a linear differential operator of sufficient order such that the following property, involving the supremum norm ‖w‖₁,∞ of w and its first derivatives, is guaranteed:
‖w‖₁,∞ ≤ cW ‖w‖W   (6)
for some fixed constant cW > 0. In this setting, test vector fields w will belong to the Hilbert space W associated with the norm ‖·‖W.
Proof For any w ∈ W,
|⟨µC | w⟩| = |∫₀¹ w(γC(s)) · γ′C(s) ds| ≤ ‖w‖₁,∞ ∫₀¹ |γ′C(s)| ds = lC ‖w‖₁,∞ ≤ cW lC ‖w‖W,
where lC is the length of C.
We can therefore compare vector-valued measures via the dual norm, defined by
‖µ‖W* = sup { ⟨µ | w⟩ : ‖w‖W ≤ 1 }.
Going back to the example of Fig. 2, the evaluation of the difference µS − µC is
⟨µS − µC | w⟩ = ∫₀¹ (w(γS(s)) · γ′S(s) − w(γC(s)) · γ′C(s)) ds.
With γS(s) = (s, 0), γC(s) = (s, ε), γ′S(s) = γ′C(s) = (1, 0), we get
⟨µS − µC | w⟩ = ∫₀¹ (w1(s, 0) − w1(s, ε)) ds,
where w1(s, 0) denotes the first coordinate of vector w(s, 0). The integrand w1(s, 0) − w1(s, ε) is bounded by ε ‖w‖1,∞, which in turn is bounded by εcW ‖w‖W from property (6). Hence the dual norm ‖µS − µC‖W* is bounded by εcW. When ε decreases, the norm ‖µC − µS‖W* goes to zero, as expected.
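This decay can be checked numerically. The sketch below (our illustration, with an arbitrary smooth test field w) approximates the pairing ⟨µS − µC | w⟩ by midpoint quadrature and shows that it shrinks roughly linearly in ε:

```python
# Numerical illustration of the parallel-segment example: the pairing
# difference over the two segments gamma_S(s) = (s, 0), gamma_C(s) = (s, eps)
# shrinks on the order of eps. The test field w1 is an arbitrary choice.
import math

def w1(x, y):
    """First coordinate of a smooth test vector field (illustrative)."""
    return math.sin(x) * math.exp(-y)

def pairing_difference(eps, n=1000):
    """Midpoint quadrature of integral of w1(s, 0) - w1(s, eps) over [0, 1]."""
    total = 0.0
    for i in range(n):
        s = (i + 0.5) / n
        total += w1(s, 0.0) - w1(s, eps)
    return total / n

for eps in (0.1, 0.01, 0.001):
    print(eps, pairing_difference(eps))
```

Dividing ε by 10 divides the pairing difference by roughly 10, consistent with the bound εcW‖w‖W.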
The explicit expression of the dual norm involves the Green's function kW(x, y) of the operator L*L, where L* is the adjoint of L. kW(x, y) is also the reproducing kernel of the space W. It is defined by the property
⟨kW(·, x)ξ, w⟩W = ξ · w(x)   for all x, ξ ∈ ℝ^d and w ∈ W,   (7)
where ⟨·, ·⟩W represents the inner product of two vector fields in the Hilbert space W. Note that since we are dealing with vector fields, kW(x, y) is a matrix operator; we need to apply it to a vector ξ ∈ ℝ^d to get a proper definition.
To fix ideas at this point, one may choose L such that L*L = (Id − α²Δ)^k for some constant α > 0 (a scale parameter) and integer k, which gives the Sobolev Hilbert space H^k. The corresponding kernel operator has an explicit expression in terms of modified Bessel functions of the second kind (McLachlan and Marsland 2007).
The norm of the vector-valued measure µC in the dual space W* is equal to
‖µC‖W* = sup { ⟨µC | w⟩ : ‖w‖W ≤ 1 } = sup { ∫₀¹ w(γC(s)) · γ′C(s) ds : ‖w‖W ≤ 1 }.
By the Cauchy–Schwarz inequality, this supremum is achieved at ‖w‖W = 1 if w is proportional to the vector field w*(x) = ∫₀¹ kW(x, γC(s)) γ′C(s) ds. Therefore, we have
‖µC‖W*² (a)= ∫₀¹ ∫₀¹ γ′C(s) · kW(γC(s), γC(t)) γ′C(t) ds dt,   (8)
where (a) follows from the reproducing property of the kernel kW stated in (7).
The expression of the dual norm indicates that the central object in computations is the kernel kW. Mathematically, it is possible to build the Hilbert space W and its dual starting from a given positive kernel (without any knowledge of the operator L), with enough regularity to satisfy property (6). Links between properties of the kernel and properties of the space in this setting can be found in Glaunès (2005). In short, kW should be twice differentiable and vanish at infinity up to order 1. If one additionally requires invariance with respect to rigid motions, one can prove that kW is defined by two scalar functions h∥, h⊥ : ℝ⁺ → ℝ such that for every x, y ∈ ℝ^d, kW(x, y)α = h∥(|x − y|²)α if α is collinear to x − y, and kW(x, y)α = h⊥(|x − y|²)α if α · (x − y) = 0. For our applications we only consider "scalar" kernels kW(x, y) = h(|x − y|²)Id, i.e. such that h∥ = h⊥ = h, and in practice we tried Gaussian and Cauchy kernels. However, it would be interesting to study the effect of using non-scalar kernels.
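For concreteness, the two scalar radial profiles mentioned above can be sketched as follows; the scale parameters are illustrative, not the values used in the experiments:

```python
# The two "scalar" radial kernel profiles h(r^2) mentioned in the text,
# so that k_W(x, y) = h(|x - y|^2) Id. Scale parameters are illustrative.
import math

def gaussian_h(r2, sigma=1.0):
    """Gaussian profile: smooth, decays very fast at infinity."""
    return math.exp(-r2 / sigma ** 2)

def cauchy_h(r2, sigma=1.0):
    """Cauchy profile: heavier tail than the Gaussian."""
    return 1.0 / (1.0 + r2 / sigma ** 2)

# Both equal 1 at r = 0 and decay with squared distance r2:
print(gaussian_h(0.0), cauchy_h(0.0))
print(gaussian_h(4.0), cauchy_h(4.0))
```

The choice of profile and scale controls how far apart two pieces of curve can be while still interacting in the dual norm.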
In the previous section, we introduced representations of curves by vector-valued measures and defined a computable norm in the space of measures to compare them. These tools give us a practical way to define a matching functional between two curves C and S in ℝ^d. Let φ : ℝ^d → ℝ^d be a deformation map which is supposed to give an approximate correspondence between C and S. The accuracy of this matching can be measured by
E(φ(C), S) = ‖µφ(C) − µS‖W*²,
which can be rewritten in terms of the kernel kW, according to (8):
E(φ(C), S) = ‖µφ(C)‖W*² − 2⟨µφ(C), µS⟩W* + ‖µS‖W*²,
each term being a double integral of kernel evaluations as in (8). The first and last terms are intrinsic energies of the two curves. Roughly, the more curved they are, the higher the energy they possess, since flat-shaped parts of the curves tend to vanish in the space of vector-valued measures. The middle term penalizes the mismatch between the tangent vectors of S and those of φ(C).
Optimal matching between curves will be defined via minimization of a functional composed of the matching term E and optionally a regularization term to guarantee smoothness of the transformations. We model deformation maps in the LDDMM framework. Certainly any other model of transformations could be directly applied at this point. For instance, the similarity transformation (rotation, translation, and uniform scale) is derived in Appendix B.
In the LDDMM setting, we define the optimal matching between two curves C and S as φ̂ = φ1^υ̂, υ̂t being the minimizer of the energy
J(υ) = ∫₀¹ ‖υt‖V² dt + γ E(φ1 · C, S),
where φ1 is defined through (1).
The first term is the regularization term, which controls the smoothness of the diffeomorphism, and the second term is the data attachment term, which quantifies the mismatch between the deformed C and S. A general result (Trouvé 1995; Dupuis et al. 1998) guarantees the existence of a solution to this minimization problem.
In Glaunès et al. (2006), such variational problems for 2D curve matching in the large deformation framework were studied, and precise forms of the optimal solutions were given. In this paper we will focus on the discrete formulation, leading to a practical matching algorithm.
First suppose that the curve C is continuous and discretized into a sequence of points x = (x1, …, xn) such that each xi belongs to the curve: for any parametrization γC we have xi = γC(si) for si ∈ [0, 1]. We do not make any assumption on the way the points xi are distributed along the curve. If C is a closed curve, then we assume xn = x1. We can associate to this sequence of points a specific measure given by vector-valued Diracs:
µx = Σ_{i=1}^{n−1} τx,i δcx,i,
where cx,i = (xi + xi+1)/2 is the center between the points xi and xi+1, and τx,i = xi+1 − xi gives an approximate tangent vector. This vector-valued measure is also defined by its pairing with smooth vector fields w:
⟨µx | w⟩ = Σ_{i=1}^{n−1} w(cx,i) · τx,i.
The important point here is that both the curves and their discrete representatives (possibly with different sampling rates) are embedded into the same space of vector-valued measures. Hence it is possible to compare them via the dual norm introduced earlier.
If C is piecewise continuous, we build a discrete representation by summing the discrete measures µx corresponding to each connected component.
The dual norm of the discrete representative µx is conveniently expressed in terms of the kernel kW:
‖µx‖W*² = Σ_{i=1}^{n−1} Σ_{j=1}^{n−1} τx,i · kW(cx,i, cx,j) τx,j.
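This double sum is straightforward to implement. A minimal sketch, with a Gaussian profile standing in for kW and an illustrative kernel width:

```python
# Dual norm of the discrete curve representative mu_x: a double sum of
# kernel terms between segment midpoints, weighted by tangent dot products.
# The Gaussian kernel and its width are illustrative choices.
import math

def k_w(p, q, sigma=0.5):
    """Scalar Gaussian kernel standing in for k_W(x, y) = h(|x - y|^2) Id."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-d2 / sigma ** 2)

def dual_norm_sq(points):
    """||mu_x||_{W*}^2 for a polyline given as a list of points."""
    n = len(points)
    centers = [[(a + b) / 2 for a, b in zip(points[i], points[i + 1])]
               for i in range(n - 1)]
    taus = [[b - a for a, b in zip(points[i], points[i + 1])]
            for i in range(n - 1)]
    total = 0.0
    for i in range(n - 1):
        for j in range(n - 1):
            dot = sum(a * b for a, b in zip(taus[i], taus[j]))
            total += k_w(centers[i], centers[j]) * dot
    return total

# A straight unit segment sampled with 11 points:
seg = [(i / 10.0, 0.0) for i in range(11)]
print(dual_norm_sq(seg))
```

For a single two-point segment of unit length the sum reduces to one kernel term, k_w(c, c) · |τ|² = 1.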
Note that this is not an approximation of the dual norm, but the exact dual norm of the discrete approximation of the curve. The following theorem shows that this discrete representative is a good approximation of the curve in the space W*.
If the length of the piece of curve connecting two consecutive points is bounded by δ, then ‖µx − µC‖W* ≤ cWδlC.
Proof For any w ∈ W,
⟨µC − µx | w⟩ = Σ_{i=1}^{n−1} ( ∫_{si}^{si+1} w(γC(s)) · γ′C(s) ds − w(cx,i) · τx,i ).
We have τx,i = ∫_{si}^{si+1} γ′C(s) ds, so we can write:
⟨µC − µx | w⟩ = Σ_{i=1}^{n−1} ∫_{si}^{si+1} (w(γC(s)) − w(cx,i)) · γ′C(s) ds.
Now |w(γC(s)) − w(cx,i)| ≤ δ ‖w‖₁,∞ ≤ δ cW ‖w‖W, so we get
|⟨µC − µx | w⟩| ≤ cW δ ‖w‖W Σ_{i=1}^{n−1} ∫_{si}^{si+1} |γ′C(s)| ds = cW δ lC ‖w‖W.
When curves C and S are discretized by point sequences x = (xi), 1 ≤ i ≤ n, and y = (yj), 1 ≤ j ≤ m, the sequence of transformed points zi = φ(xi) gives a discretization of the deformed curve φ(C). Here we will derive algorithms based on the following matching functional:
E(z, y) = ‖µz − µy‖W*²,
which is explicitly
E(z, y) = Σi Σj τz,i · kW(cz,i, cz,j) τz,j − 2 Σi Σj τz,i · kW(cz,i, cy,j) τy,j + Σi Σj τy,i · kW(cy,i, cy,j) τy,j.
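A minimal sketch of this discrete matching functional, again with a Gaussian stand-in for kW; note that the two polylines need not share a sampling:

```python
# Discrete matching functional E(z, y) = ||mu_z - mu_y||_{W*}^2, expanded
# into two self terms and a cross term of kernel double sums. The Gaussian
# kernel and its width are illustrative choices.
import math

def k_w(p, q, sigma=0.5):
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-d2 / sigma ** 2)

def measure(points):
    """Segment midpoints and approximate tangents of a polyline."""
    c = [[(a + b) / 2 for a, b in zip(points[i], points[i + 1])]
         for i in range(len(points) - 1)]
    t = [[b - a for a, b in zip(points[i], points[i + 1])]
         for i in range(len(points) - 1)]
    return c, t

def inner(cu, tu, cv, tv):
    """<mu_u, mu_v>_{W*} as a double sum over segments."""
    s = 0.0
    for i in range(len(tu)):
        for j in range(len(tv)):
            dot = sum(a * b for a, b in zip(tu[i], tv[j]))
            s += k_w(cu[i], cv[j]) * dot
    return s

def matching_energy(z, y):
    cz, tz = measure(z)
    cy, ty = measure(y)
    return inner(cz, tz, cz, tz) - 2 * inner(cz, tz, cy, ty) + inner(cy, ty, cy, ty)

# The same unit segment sampled with 11 and with 6 points:
z = [(i / 10.0, 0.0) for i in range(11)]
y = [(i / 5.0, 0.0) for i in range(6)]
print(matching_energy(z, z), matching_energy(z, y))
```

An identical pair gives zero energy, and two discretizations of the same underlying curve give a small positive value that shrinks as the sampling is refined.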
Note from the above equation that the curves C and S need not be discretized into the same number of points. This expression was also given in Qiu et al. (2007). We derive the gradient of E(z, y) with respect to z in Appendix A.
Let x = (xi), 1 ≤ i ≤ n, and y = (yj), 1 ≤ j ≤ m, be discrete samples of the curves C and S respectively, and define the trajectories xi(t) = φt(xi) for i = 1, …, n. The optimal matching between the samples x and y will be defined as a minimizer of
J(υ) = ∫₀¹ ‖υt‖V² dt + γ E(z, y),   (13)
where zi = xi(1). The second term of J depends only on the positions of the finite number of points xi(1). In such a situation, as in all point-based matching problems in the large deformation setting (Joshi and Miller 2000; Camion and Younes 2001), the following reasoning applies: starting from any arbitrary state υt of the time-dependent vector fields, let us find the optimal vector fields υ̂t that keep the trajectories unchanged and minimize the functional. Since the trajectories are fixed, in particular the end-points xi(1) are also fixed; hence the second term of J is constant for this sub-problem, and we are left to find vector fields which minimize the V-norm and satisfy, for each time t and each 1 ≤ i ≤ n, the constraint υt(xi(t)) = ẋi(t), where the xi(t) are the fixed trajectories. This is a classical interpolation problem in a Hilbert space setting, for which the solution is expressed as a linear combination of spline vector fields involving the kernel operator kV:
υt(x) = Σ_{k=1}^{n} kV(x, xk(t)) αk(t),   (14)
where the αk(t) are referred to as momentum vectors. These vectors can be computed from the trajectories xi(t) by solving the system of linear equations
ẋi(t) = Σ_{k=1}^{n} kV(xi(t), xk(t)) αk(t),   i = 1, …, n.   (15)
Now, coming back to the main variational problem, we can restrict the domain of minimization to the subspace of vector fields of the form given in (14), and choose either the trajectories xi(t) or the momenta αi(t) as minimization variables. The choice of αi(t) avoids the inversion of linear systems, since in this case (15) is seen as an ODE system in the unknowns xi(t).
The regularization term can be rewritten as a function of the variables αi(t) and xi(t). From (14) and the reproducing property of the kernel kV, we get
∫₀¹ ‖υt‖V² dt = ∫₀¹ Σ_{i=1}^{n} Σ_{j=1}^{n} αi(t) · kV(xi(t), xj(t)) αj(t) dt.
Hence the explicit expression of the functional in (13) is the following:
J(α) = ∫₀¹ Σ_{i=1}^{n} Σ_{j=1}^{n} αi(t) · kV(xi(t), xj(t)) αj(t) dt + γ E(x(1), y).   (18)
The derivation of the gradient of J with respect to αi(t) is given in Appendix D.
We used a conjugate gradient routine to perform the minimization of functional J in (18) with respect to variables αi (t). Of course any other optimization scheme could be considered at this point. The different steps required to compute the functional and its gradient for each iteration are the following:
All time-dependent variables were evaluated on a uniform grid t1 = 0, …, tT = 1, and a predictor/corrector centered Euler scheme was used to solve the systems of ODEs in (15) and (28). The complexity of each iteration is of order dTN², where N = max(n, m). To speed up computations when N is large, all convolutions by the kernels kV and kW are accelerated with a multipole method (Yang et al. 2003), which reduces the complexity to dTN log(N).
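A predictor/corrector Euler step of this kind can be sketched as follows; for simplicity the momenta are held constant in time here, whereas in the full algorithm they are time-dependent minimization variables:

```python
# Predictor/corrector (Heun-type) Euler integration of the point
# trajectories of (15). Momenta are held constant in time for simplicity;
# kernel width, points, and momenta are illustrative.
import math

def k_v(p, q, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-d2 / sigma ** 2)

def velocity(points, alphas):
    """v(x_i) = sum_k k_V(x_i, x_k) alpha_k at every current point."""
    return [[sum(k_v(p, q) * a[c] for q, a in zip(points, alphas))
             for c in range(len(p))] for p in points]

def integrate(points, alphas, steps=20):
    dt = 1.0 / steps
    x = [list(p) for p in points]
    for _ in range(steps):
        v0 = velocity(x, alphas)                      # predictor slope
        xp = [[xi + dt * vi for xi, vi in zip(px, vx)]
              for px, vx in zip(x, v0)]
        v1 = velocity(xp, alphas)                     # corrector slope
        x = [[xi + 0.5 * dt * (a + b) for xi, a, b in zip(px, va, vb)]
             for px, va, vb in zip(x, v0, v1)]
    return x

pts = [(0.0, 0.0), (1.0, 0.0)]
alphas = [(0.5, 0.0), (0.5, 0.0)]
print(integrate(pts, alphas))
```

With equal momenta, the two points translate rigidly, so their separation is preserved, which is a simple check that the scheme behaves sensibly.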
In our experiments, we first used the similarity transformation to bring the curves into the same orientation and scale, as described in Appendix B. Then, we applied the curve matching under the LDDMM setting to deform one curve to the other. The kernels kV(x, y) and kW(x, y) were radial, of the scalar form h(|x − y|²/σ²) Id, where Id is the d × d identity matrix. σV and σW represent the kernel sizes of kV and kW, respectively. They were adjusted experimentally.
Shown in Fig. 3 are two-dimensional examples where the curves represent the shapes of bones (first row), birds (second row), and hands (third row). In this experiment, we chose σV = 0.25 and σW = 0.1 and deformed the source shapes to the target shapes. The first column shows the target shape, while the remaining columns show source shapes. In each panel of the second and third rows, the blue curve represents the source shape before the curve matching and the green curve represents the deformed source shape after the curve matching. The background grid shows the deformation field. Figure 4 shows the time sequence of the deformation of the source configuration. The first panel shows the source hand in green and the target hand in red. The last panel illustrates how close the deformed source curve (green) is to the target curve (red) after the curve matching. The middle panels demonstrate the smooth deformation as the source hand moves toward the target hand over time.
One application of studying transformations is to classify objects into different shape groups. In our LDDMM framework, the velocity vectors transform one object to the other and give the shortest geodesic path connecting the two objects. The length of the geodesic path defines the metric distance given in (2) and serves as a feature for classification. For classification purposes, the metric distance should be robust against noise to achieve accurate results. In this section, we show that the curve matching provides a more accurate metric distance measurement and is more robust to noise in the matching results than the landmark matching (Joshi and Miller 2000; Allassonnière et al. 2005) when outliers are chosen as landmark points.
Example A in Fig. 5 shows boats without and with a mast as the source and target curves in panels (a, b), respectively. The curve and landmark matchings are applied to transform the source boat to the target boat, and the results are shown in panels (c, d). The matching term in (18) for the template is near zero in the region of the mast, so there is no force to drive the source boat to match the mast. In the landmark matching procedure, the peak point of the mast is chosen as a landmark, so the source boat is deformed to the target boat at the mast. However, the metric distance (1.63) in the landmark matching is significantly larger than that (0.13) in the curve matching. If a threshold is selected to classify these boats, the landmark matching would likely place them in different shape categories, while the curve matching recognizes both as boats. When classifying shapes, this problem typically arises as the result of imperfect curve extraction. The metric distance in the curve matching is less affected by such noise than in the landmark matching. A similar example is given in Fig. 5(B): one curve without a gap (source shape) and the other with a small gap (target shape). The metric distance (1.37) in the landmark matching is larger than that (0.72) in the curve matching.
To study the behaviour of our curve matching algorithm, we computed matchings between two shapes (cyan and red curves in Fig. 6), each composed of a box with a semi-circular bump. Each row in Fig. 6 shows one example of the time evolution of the optimal diffeomorphism, drawn as a green curve in each panel. The desired behaviour, in which the bump slides along the edge, occurs only when the bump locations on the two shapes are close (see the top row of Fig. 6). Otherwise, a deformation that flattens one bump and reforms the other is preferred, because its deformation cost is lower (see the bottom row of Fig. 6). Note again that our algorithm matches shapes up to a small error, measured by the metric W*. Avoiding such solutions would require putting higher-order (typically curvature) information into the metric W*, while our metric relies only on tangential information.
Computing all pairwise matchings between 17 shapes composed of a box with a semi-circle, in which the location of the bump on each curve moves progressively further away, we mapped the resulting distance matrix into a 2-dimensional coordinate system using classical multidimensional scaling (MDS) (Cox and Cox 2001). Figure 7 shows the location of each shape in the first two scaling dimensions. This confirms the previous analysis: while the shapes are correctly ordered into a sequence, they seem to follow a highly curved path, because the "flatten and reform" solutions bound the metric distances between shapes.
We demonstrate pattern classification based on the metric distances among the curves shown in Fig. 8. The shapes in Fig. 8 were chosen from hands (10), dudes (10), turtles (7), and fish (5). Special cases (hands and dudes with noise) are shown in the last two columns of rows (a, b); these are similar to the cases in Fig. 5. The reason for choosing fish and turtles is that some of the fish are outliers that resemble turtles.
Our classification approach has three steps: map every pair of curves to each other and compute the metric distances between them using (2); map the metric distances into Euclidean space via MDS (Cox and Cox 2001); and classify shapes via K-means clustering. Figure 9 shows the scatter plot of the curves in the feature space generated from MDS. The numbers (1, 2, 3, 4) give the classification results and respectively denote the categories hand, turtle, fish, and dude. Only two curves were wrongly classified as turtles: one is the hand shown in the second column from the right in row (a), and the other is the fish shown in the last column of row (d). As mentioned in the previous section, small noise (e.g. the hand and dude in the last column of rows (a, b)) does not affect the classification using the curve matching approach. The overall classification accuracy of our approach is 93.75%.
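The reported accuracy is consistent with the shape counts; a quick arithmetic check (ours):

```python
# Sanity check of the reported accuracy: 2 of the 32 curves
# (10 hands + 10 dudes + 7 turtles + 5 fish) were misclassified.
counts = {"hand": 10, "dude": 10, "turtle": 7, "fish": 5}
total = sum(counts.values())
misclassified = 2
accuracy = 100.0 * (total - misclassified) / total
print(total, accuracy)  # 32 93.75
```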
In this section, we illustrate matching results of 3D curves. All curves were extracted from brain cortical surfaces and matched to the template curve. For the purpose of visualization, we show deformed brain surfaces where the deformation fields obtained from the curve matching were interpolated.
One major application of the curve matching in medical imaging is to transform one anatomical structure to the other using anatomical curve information. For instance, curves in the brain anatomy can be sulci or gyri that are conventional boundaries of cortical regions. In this experiment, we demonstrate how to warp one cortical surface to the other using the curve matching method. Cortical surfaces shown in the following were generated from MRI image volumes. A 3D subvolume encompassing the region of interest (ROI) was defined manually in each volume. A Bayesian segmentation using the expectation-maximization algorithm to fit the compartmental statistics was used to label voxels in the subvolume as gray matter (GM), white matter (WM), or cerebrospinal fluid (CSF) (Joshi et al. 1999; Miller et al. 2000). Surfaces were then generated at the GM/WM interface using a topology-correction method and a connectivity-consistent isosurface algorithm (Han et al. 2001, 2002).
Figure 10(a) shows the target planum temporale (PT) surface, located on the superior temporal gyrus (STG) posterior to Heschl's sulcus (HS) and extending to the Sylvian fissure (SF). The structure of the PT can be determined by three principal curves (STG, HS, SF) tracked by dynamic programming (Ratnanather et al. 2003). In this experiment, we fixed one surface as the target PT surface, shown in Fig. 10(a). We extracted the three curves from twenty PT surfaces as curve objects and registered them to the corresponding curves on the target surface using our curve matching algorithm. Finally, the deformation field was interpolated to the interior region of the PTs. The columns from left to right in rows (a)–(d) of Fig. 11 respectively illustrate the source PT surface, the deformed source PT, the source PT superimposed on the target PT, and the deformed source PT with the target PT. As shown in Fig. 11, the performance for this dataset was quite good. Panel (a) in Fig. 13 shows cumulative distance distributions before and after the matching process. The cumulative distance distribution plots the percentage of vertices whose distance to the target surface is less than d, as a function of the distance d. Gray curves correspond to the source surfaces, while black curves are for the deformed source surfaces after the matching. Red curves are the mean surface distance graphs before and after the matching process, from bottom to top respectively. It is evident that more vertices move close to the target surface after the matching.
We applied the same strategy to match ten cingulate gyri (CG) to the target CG shown in panel (d) of Fig. 12. The CG is a part of the limbic cortex and has functions related to symptoms of schizophrenia and Alzheimer’s disease. Panel (b) in Fig. 10 shows the six curves defined on the target CG. Each curve is color-coded by its index. Panels (a)–(c) in Fig. 12 give examples of the source CG surfaces and their deformed versions. To validate the matched results, we also computed the cumulative distance distributions shown in panel (b) of Fig. 13. This set of CG curve mapping results has been used to detect group differences in CG thickness between healthy comparison controls and patients with schizophrenia, illustrating a successful application of the curve mapping in neuroimaging studies (Qiu et al. 2007).
The first purpose of this section is to compare the matched results of PT surfaces via the curve matching algorithm with those from the landmark matching algorithm. Twenty PTs were used in this experiment. For the PT structure, three corner points and three boundary curves are defined as point and curve objects across the population. In the landmark matching process, five equally spaced points on each boundary and the three corner points were chosen as point landmarks on each PT surface. Then, the landmark matching algorithm (Allassonnière et al. 2005; Joshi and Miller 2000) was applied to the PTs to obtain the deformation field that was used to deform the PT surfaces to the template. Since the PT structure is relatively flat and simple compared with other structures of the brain, the cumulative distance distribution between deformed source and target surfaces, as described in the previous section, can be used as a quantitative validation measure. Figure 14 shows the averages for the original source surfaces (solid line), those deformed via landmark matching (dashdot line), and those deformed via curve matching (dash line). The cumulative distance distribution from the curve matching algorithm lies above that from the landmark matching algorithm, which implies that the curve matching algorithm introduced here significantly improves matching results, in terms of the surface distance measure, over the landmark matching method.
The second aim of this section is to compare the deformation fields of CG surfaces obtained from the landmark and curve matching algorithms. Landmark matching methods have been used to map curves and surfaces by extracting a pair of labeled point sets from the curves as landmarks. However, manually labeling landmarks is labor-intensive as well as highly variable, and landmark matching methods are sensitive to the labeling; consequently, they may produce incorrect mappings. In this experiment, we give such an example to show matching results and deformation fields via both landmark (Joshi and Miller 2000; Allassonnière et al. 2005) and curve matching methods under the LDDMM framework. In the landmark matching process, seventy-two landmarks were selected from the six curves shown in panel (b) of Fig. 10, twelve landmarks per curve (Bakircioglu et al. 1998). After the landmark matching, the deformation field was interpolated for the other vertices on the CG. Figures 15 and 16 give examples of the deformation fields obtained from the landmark and curve matchings. Panels (a)–(d) in both figures respectively show the target, the source, the deformed source CG surface after the landmark matching, and the deformed source CG surface after the curve matching. Two red points are marked on the target surface in Figs. 15(a) and 16(a). Compared to the positions of these two points on the source surface, their relative positions are changed after the landmark matching process, which can be observed in panel (f) of both figures (zoomed-in views shown in triangulated mesh format). However, we do not observe such a change for the curve matching, shown in panels (g) of Figs. 15 and 16. Thus, we conclude that the curve matching gives a more appropriate deformation field because it incorporates the geometric information of the curves.
In this work we have proposed a mathematical framework in which both curves and their discrete samples are represented as vector-valued measures and embedded in a Hilbert space structure. Coupled with a model of deformations of the ambient space, we designed variational problems for curve matching that take convenient explicit formulations thanks to the kernel of the Hilbert norm. We derived the case of large deformations, given by integration of time-dependent vector fields. We also derived the case of the similitude transformation in Appendix B. We presented experiments on 2D and 3D curve mappings, validated the accuracy of the matching, and demonstrated the behaviors of the curve mapping. Results indicate that the curve matching gives good matches and proper deformation fields compared to the landmark matching algorithm (Joshi and Miller 2000). We also proposed to apply this technique to shape classification using the metric distances provided by the large deformation framework.
The work reported here was supported by grants: NIH P41 RR15241, NSF DMS 0456253, and National University of Singapore start-up grant R-397-000-058-133.
As the first step, we compute the derivative of the dissimilarity measure between two sequences of points with respect to z, given in (19):
Let 1 ≤ q ≤ n and let zq + εζq be a small perturbation of the point zq ∈ ℝd, with the other points fixed. Since cz,i = (zi + zi+1)/2 and τz,i = zi+1 − zi, we have
We consider kernels of the form kW(x, y) = h(|x − y|2), and we note hx,y,i,j = h(|cx,i − cy,j|2) and h′x,y,i,j = h′(|cx,i − cy,j|2) for any sequences of points x, y and indices i, j. We get
Hence the gradient of E with respect to zq is given by
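The discrete dissimilarity whose gradient is computed above can be sketched numerically. A minimal implementation, assuming a Gaussian kernel h(r2) = exp(−r2/σ2) (the kernel choice, σ value, function names, and toy curves are ours):

```python
import numpy as np

def vector_measure_norm2(x, y, sigma=1.0):
    """Squared RKHS norm ||mu_x - mu_y||^2 between the vector-valued
    measures of two polygonal curves x and y (arrays of shape (n, d)),
    using the Gaussian kernel h(r^2) = exp(-r^2 / sigma^2)."""
    def centers_tangents(p):
        # center points c_i = (p_i + p_{i+1})/2, tangents tau_i = p_{i+1} - p_i
        return 0.5 * (p[:-1] + p[1:]), p[1:] - p[:-1]

    def inner(cx, tx, cy, ty):
        # sum_{i,j} h(|c_{x,i} - c_{y,j}|^2) (tau_{x,i} . tau_{y,j})
        sq = ((cx[:, None, :] - cy[None, :, :]) ** 2).sum(-1)
        return (np.exp(-sq / sigma**2) * (tx @ ty.T)).sum()

    cx, tx = centers_tangents(np.asarray(x, float))
    cy, ty = centers_tangents(np.asarray(y, float))
    return inner(cx, tx, cx, tx) - 2 * inner(cx, tx, cy, ty) + inner(cy, ty, cy, ty)

curve = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5]])
shifted = curve + np.array([0.3, 0.1])
print(vector_measure_norm2(curve, curve))    # identical curves give 0
print(vector_measure_norm2(curve, shifted))  # positive for distinct curves
```

In practice the gradient derived above would be fed to the optimizer in place of numerical differentiation of this functional.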
Here we set d = 3 and look at transformations of the form φ(x) = Rx + b, where R is a similitude matrix defined by the direct product of a uniform scale with an orthogonal spatial rotation, and b is a translation vector. The algebra of quaternions is a useful mathematical tool for formulating the composition of arbitrary spatial rotations, and for establishing the correctness of algorithms founded upon such compositions. Here, we use the quaternion q = [a, s] to represent a similitude transformation, where a is a vector with three entries a1, a2, a3 describing the spatial rotation and s is a scalar. We do not restrict the length of q to 1, so that the scaling factor is also accounted for.
The group (Q, ∘) is defined as the set of quaternions Q with the law of composition ∘ : Q × Q → Q, given by
[a, s] ∘ [a′, s′] = [sa′ + s′a + a × a′, ss′ − a · a′].
The inverse of q is q−1 = q*/|q|2, where q* is the conjugate of q defined as q* = [−a, s]. According to the law of composition in (20), the similitude operation R on x ∈ ℝ3 is described as [Rx, 0] = q ∘ [x, 0] ∘ q*, where
Rx = 2s(a × x) + 2(a · x)a + (s2 − |a|2)x.
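Formula (21) can be checked numerically. A sketch (function name and values are illustrative) confirming that a unit quaternion q = [sin(θ/2)k, cos(θ/2)] produces a pure rotation by θ about the axis k:

```python
import numpy as np

def similitude_apply(a, s, x):
    """Rx = 2s(a x x) + 2(a . x)a + (s^2 - |a|^2)x for q = [a, s],
    i.e. the vector part of q o [x, 0] o q* (formula (21))."""
    a, x = np.asarray(a, float), np.asarray(x, float)
    return 2*s*np.cross(a, x) + 2*np.dot(a, x)*a + (s**2 - np.dot(a, a))*x

# Unit quaternion for a rotation by pi/4 about the z-axis.
theta = np.pi / 4
a = np.array([0.0, 0.0, np.sin(theta / 2)])
s = np.cos(theta / 2)
x = np.array([1.0, 0.0, 0.0])
print(similitude_apply(a, s, x))  # ~ [0.7071, 0.7071, 0]
```

For a non-unit quaternion the same formula scales lengths by |q|2 = s2 + |a|2, which is how the uniform scale is encoded.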
In the continuous setting, the alignment of two curves C and S is performed through minimization of the following functional:
where φ(x) = Rx + b, with R parametrized by the variables s and a via formula (21). When the curves C and S are discretized via sequences of points (xi) and (yj), we consider the functional
where zi = Rxi+ b. The derivative of this functional is computed in Appendix C.
We need to compute the gradient of a dissimilarity functional
where zi = φ(xi) and φ(x) = Rx + b, with R computed from the parameters s and a via formula (21): Rx = 2s(a × x) + 2(a · x)a + (s2 − |a|2)x. Let us first compute the gradient of J with respect to the parameter s. We have
∇sJ = Σi (2(a × xi) + 2sxi) · ∇ziE.
The gradient of J with respect to the parameter a can be computed as follows. Let aε = a + εã and take derivatives at ε = 0. We have
∂εJ|ε=0 = Σi (2s(ã × xi) + 2(ã · xi)a + 2(a · xi)ã − 2(a · ã)xi) · ∇ziE.
Since (ã × xi) · ∇ziE = (xi × ∇ziE) · ã, we obtain
∇aJ = 2Σi ((a · ∇ziE)xi + s(xi × ∇ziE) − (xi · ∇ziE)a + (a · xi)∇ziE).
The first and third terms form a double wedge product: −(xi · ∇ziE)a + (a · ∇ziE)xi = (a × xi) × ∇ziE, so finally the gradient is
∇aJ = 2Σi (s(xi × ∇ziE) + (a × xi) × ∇ziE + (a · xi)∇ziE).
The gradient of J with respect to the translation b can be computed in the same way as for s, and we get ∇bJ = Σi ∇ziE.
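The derivative formulas above can be spot-checked numerically. A sketch (with illustrative values) verifying the s-derivative ingredient d(Rx)/ds = 2(a × x) + 2sx, which follows directly from formula (21), against a finite difference:

```python
import numpy as np

def R_apply(a, s, x):
    # Rx = 2s(a x x) + 2(a . x)a + (s^2 - |a|^2)x  (formula (21))
    return 2*s*np.cross(a, x) + 2*np.dot(a, x)*a + (s**2 - np.dot(a, a))*x

# Check d(Rx)/ds = 2(a x x) + 2s x with a central difference
# (illustrative parameter values; Rx is quadratic in s, so the
# central difference is exact up to rounding).
a = np.array([0.2, -0.1, 0.4]); s = 0.9; x = np.array([1.0, 2.0, -0.5])
eps = 1e-6
numeric = (R_apply(a, s + eps, x) - R_apply(a, s - eps, x)) / (2 * eps)
analytic = 2*np.cross(a, x) + 2*s*x
print(np.allclose(numeric, analytic, atol=1e-6))
```

Chaining this through ∇ziE yields the s-gradient of J; the a- and b-gradients can be checked the same way.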
We rewrite J into a matrix form
where αt and xt are vectors of momentum and coordinates of vertices at time t. E denotes the matching functional.
The gradient of J with respect to the momentum αt, with the metric given by the matrix kV(xt, xt) at time t, is of the form (∇J)t = 2γαt + ηt, where the vector ηt is specified in the proof below and ∇x1E is given in (19).
Proof Our first goal is to compute the variations of the velocity fields υt and trajectories xt given a variation αε = α + εα̃ of the momentum.
For a variation αε = α + εα̃, computing the derivative with respect to ε in (26) yields the corresponding variation of the velocity field, υ̃t = ∂ευtε, in the form
where ∂i denotes the derivative with respect to the ith variable of kV. Similarly, the corresponding variation of the trajectory, x̃t = ∂εxtε, obtained from (15), is of the form
To obtain x̃t, a manipulation is needed. The above equation can be rewritten as
which is a linear differential equation in the variable x̃t. By the variation of constants method, we are able to write the solution x̃t as
where the operator Rst gives the solution of the homogeneous equation
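For concreteness, the variation-of-constants step can be written generically; with At and bt standing for the linear and forcing terms of the equation above (these symbols are ours, introduced only for this illustration):

```latex
\dot{\tilde{x}}_t = A_t\,\tilde{x}_t + b_t,\quad \tilde{x}_0 = 0
\;\Longrightarrow\;
\tilde{x}_t = \int_0^t R_{st}\, b_s\, ds,
\qquad\text{where } \partial_t R_{st} = A_t R_{st},\ R_{ss} = \mathrm{Id}.
```

Here Rst is the state-transition operator of the homogeneous equation, propagating a perturbation from time s to time t.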
It remains to express the variation of the energy, ∂εJ, in the form
Substituting υ̃t(x) and x̃t in ∂εJ, we have the second term as
The third term can be rewritten as
We finally obtain
The gradient ∇J ∈ L2([0, 1], (ℝd)n) at time t is given by
The vector ηt can be written as the backward integral equation
The choice of the L2 metric over the momentum αt is arbitrary. A reasonable alternative metric is given by kV(xt, xt), with the two-fold advantage of being closer to the metric inducing the space of velocity fields υ and of simplifying the formula for the gradient. With this choice, the gradient (∇J)t can be simplified as
Joan Glaunès, MAP5, CNRS UMR 8145, Université Paris Descartes, 75006 Paris, France.
Anqi Qiu, Division of Bioengineering, National University of Singapore, 7 Engineering Drive 1, Block E3A 04-15, Singapore 117574, Singapore; Email: bieqa@nus.edu.sg.
Michael I. Miller, Center for Imaging Science, Johns Hopkins University, Baltimore, MD 21218, USA.
Laurent Younes, Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD 21218, USA.