Published in final edited form as: Med Image Comput Comput Assist Interv. 2006; 9(Pt 1): 249–256.

A New Closed-Form Information Metric for Shape Analysis

Abstract

Shape matching plays a prominent role in the analysis of medical and biological structures. Recently, a unifying framework was introduced for shape matching that uses mixture models to couple both the shape representation and deformation. Essentially, shape distances were defined as geodesics induced by the Fisher-Rao metric on the manifold of mixture-model represented shapes. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the mixture model. Consequently, shape comparisons are computationally very expensive. Here, we propose a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, our new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. Discriminative capabilities of this new metric are studied by pairwise matching of corpus callosum shapes. Comparisons are conducted with the Fisher-Rao metric and the thin-plate spline bending energy.

1 Introduction

Shape analysis is a key ingredient in many medical imaging applications that seek to study the intimate relationship between the form and function of medical and biological structures. In particular, landmark-based deformable models have been widely used in quantitative studies requiring size and shape similarity comparisons. A brief, cross-cutting survey of existing work in shape analysis reveals several taxonomies. Shape deformation parameterizations range from Procrustean metrics [1] to spline-based models [2, 3], and from PCA-based modes of deformation [4] to landmark diffeomorphisms [5, 6]. Shape representations range from unstructured point-sets [7, 8] to weighted graphs [9] and include curves, surfaces and other geometric models. These advances have been instrumental in solidifying the shape analysis landscape. However, one commonality in virtually all of this previous work is the use of separate models for shape representation and deformation. In recent work, it was shown that shape representation and deformation can be unified within a probabilistic framework [10]. By using a mixture of densities to represent shapes, the authors in [10] were able to directly use the Fisher information matrix to establish an intrinsic, Riemannian metric and subsequently a shape distance measure, thus unifying both shape representation and deformation.

But are we always handcuffed to the Fisher-Rao Riemannian metric when trying to establish distances between parametric, probabilistic models? (Remember that in this context the parametric models are used to represent shapes.) The metric’s close connections to Shannon entropy and the concomitant use of Fisher information in parameter estimation have cemented it as the incumbent information measure. Its use has also been propagated by research efforts in information geometry, where one can show its proportionality to popular divergence measures such as the Kullback-Leibler divergence. However, the algebraic form of the Fisher-Rao metric tensor makes it very difficult to use when applied to multi-parameter spaces like mixture models. For instance, it is not possible to derive closed-form solutions for the metric tensor or its derivative. To address many of the computational inefficiencies that arise when using the standard information metric, the current work introduces a new Riemannian metric based on the generalized notion of a ϕ-entropy functional. We take on the challenge of improving the initial model presented in [10] by incorporating the notion of generalized information metrics as first shown by Burbea and Rao [11]. Section 2 provides a synopsis of their work on obtaining differential metrics using the ϕ-entropy functional in parametric probability spaces. In Section 3, we discuss the use of a specific ϕ-function that leads to the α-order entropy first introduced by Havrda and Charvát [12]. We show how this metric leads to closed-form solutions for the Christoffel symbols when using a Gaussian mixture model (GMM) to couple shape representation and deformation, enabling almost an order of magnitude performance increase over the Fisher-Rao based solution. Section 4 applies the new metric to compute shape distances between corpus callosum data and provides a comparative analysis with the Fisher-Rao metric and the thin-plate spline bending energy. Some closed-form solutions for the new metric tensor are provided in the Appendix.

2 Differential Metrics for Parametric Distributions

The parametric, GMM representation for 2-D shapes is given by

p(\mathbf{x} \mid \theta) = \frac{1}{2\pi\sigma^{2}K} \sum_{a=1}^{K} \exp\left\{ -\frac{\left\| \mathbf{x} - \phi_{a} \right\|^{2}}{2\sigma^{2}} \right\}
(1)

where θ is our set of landmarks, ϕ_a = [θ^{(2a−1)}, θ^{(2a)}]^T, x = [x^{(1)}, x^{(2)}]^T ∈ ℝ², and equal weight priors of 1/K are assigned to all components. (Extensions to 3D are mathematically straightforward.) Basically, a shape with K landmarks is represented as a K-component GMM in which the landmarks are the means of the components. The model p(x|θ) lives on a statistical manifold M whose local coordinates are defined by the model parameters. Following [10], we place only the Gaussian means on the manifold and assume an isotropic variance σ² for the landmarks (see examples in Figure 1).
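As a concrete illustration of (1), the following minimal Python sketch (ours, not code accompanying the paper; the landmark coordinates and σ below are arbitrary example values) evaluates the shape density for a given landmark set:

import numpy as np

def shape_density(x, landmarks, sigma):
    """Evaluate the GMM shape representation p(x | theta) of (1) at points x.

    x:         array of shape (..., 2)
    landmarks: K landmark means phi_a, array of shape (K, 2)
    sigma:     isotropic standard deviation shared by all components
    """
    x = np.asarray(x, dtype=float)
    landmarks = np.asarray(landmarks, dtype=float)
    K = landmarks.shape[0]
    sq_dist = np.sum((x[..., None, :] - landmarks) ** 2, axis=-1)   # ||x - phi_a||^2 for each a
    return np.sum(np.exp(-sq_dist / (2.0 * sigma ** 2)), axis=-1) / (2.0 * np.pi * sigma ** 2 * K)

# Example: a four-landmark square (as in Figure 1), evaluated at the origin
square = np.array([(-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5), (0.5, 0.5)])
print(shape_density(np.array([0.0, 0.0]), square, sigma=0.25))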

Fig. 1
Examples of representation model. (a) The square shape is represented using four landmarks (K = 4) given by θ = {[ϕ1 = (−0.5, 0.5)], [ϕ2 = (−0.5, −0.5)], [ϕ3 = (0.5, −0.5)], [ϕ4 = (0.5, 0.5)]} ...

Given that our representation model depends on parametric mixture models, it is important to understand how to obtain a metric on the parameter space. Rao established [13] that the Fisher information matrix satisfies the properties of a Riemannian metric on the statistical manifold M. This seminal work and the information matrix’s relationship to the Shannon entropy have entrenched it as the metric tensor of choice when trying to establish a distance metric between two parametric models. However, Burbea and Rao went on to show that the notion of distances between parametric models can be extended to a large class of generalized metrics [11]. They defined the generalized ϕ-entropy functional

H_{\phi}(p) = -\int_{\chi} \phi(p)\, dx
(2)

where χ is the measurable space (for our purposes ℝ²), and ϕ is a C²-convex function defined on ℝ⁺ ≡ [0, ∞). (For readability we will regularly replace p(x|θ) with p.) The metric on the parameter space is obtained by finding the Hessian of (2) along a direction in its tangent space. Assuming sufficient regularity properties on θ = {θ_1, …, θ_n}, the direction can be obtained by taking the total differential of p(x|θ) w.r.t. θ

dp(\theta) = \sum_{k=1}^{n} \frac{\partial p}{\partial \theta_{k}}\, d\theta_{k}.
(3)

Hence the Hessian is defined as

\Delta_{\theta} H_{\phi}(p) = -\int_{\chi} \phi''(p)\, [dp(\theta)]^{2}\, dx.
(4)

This directly leads to the following differential metric satisfying Riemannian metric properties

ds_{\phi}^{2}(\theta) = -\Delta_{\theta} H_{\phi}(p) = \sum_{i,j=1}^{n} g_{ij}^{\phi}\, d\theta_{i}\, d\theta_{j},
(5)

where

g_{ij}^{\phi} = \int_{\chi} \phi''(p) \left( \frac{\partial p}{\partial \theta_{i}} \right) \left( \frac{\partial p}{\partial \theta_{j}} \right) dx.
(6)

The metric tensor in (6) is called the ϕ-entropy matrix. We can define distances between probability densities by finding a geodesic between their parameter values as determined by (5). If one lets

\phi(p) = p \log p,
(7)

equation (2) becomes the familiar Shannon entropy and (6) yields the Fisher information matrix.
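To spell out this specialization (a standard calculation included here for completeness, not text from the original), note that ϕ″(p) = 1/p for the choice (7), so that

H_{\phi}(p) = -\int_{\chi} p \log p\, dx
\qquad\text{and}\qquad
g_{ij}^{\phi} = \int_{\chi} \frac{1}{p}\, \frac{\partial p}{\partial \theta_{i}}\, \frac{\partial p}{\partial \theta_{j}}\, dx
= \int_{\chi} p\, \frac{\partial \log p}{\partial \theta_{i}}\, \frac{\partial \log p}{\partial \theta_{j}}\, dx,

which are the Shannon entropy and the Fisher information matrix, respectively.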

This section illustrated the tight coupling between entropy selection and its corresponding differential metric. In [10], the authors chose (7) and worked with the resulting Fisher information matrix to compute geodesics between parametric GMM models, p(x|θ_1) and p(x|θ_2), which represented two shapes. However, computation of geodesics using this metric is very inefficient, as it requires numerical calculation of the integral in (6). We now discuss an alternative choice of ϕ that directly leads to a new Riemannian metric and enables us to derive closed-form solutions for (6).
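Before turning to that alternative, the following Python fragment (our illustration of the cost, not code from the paper; the landmark set, σ, grid size, and extent are arbitrary example choices) approximates a single Fisher-Rao metric entry for the GMM of (1) by brute-force quadrature over an N × N grid. Such an O(N²) pass must be repeated for every metric entry and at every point along an evolving geodesic:

import numpy as np

def gmm_and_partials(grid_x, grid_y, landmarks, sigma):
    """Density p(x|theta) of (1) on a grid, plus its partials w.r.t. each landmark coordinate."""
    K = len(landmarks)
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    norm = 1.0 / (2.0 * np.pi * sigma ** 2 * K)
    p = np.zeros_like(X)
    partials = []                                   # d p / d theta_i, coordinate order (x1, y1, x2, y2, ...)
    for (mx, my) in landmarks:
        g = norm * np.exp(-((X - mx) ** 2 + (Y - my) ** 2) / (2.0 * sigma ** 2))
        p += g
        partials.append(g * (X - mx) / sigma ** 2)  # derivative w.r.t. the x-coordinate of this landmark
        partials.append(g * (Y - my) / sigma ** 2)  # derivative w.r.t. the y-coordinate
    return p, partials

def fisher_rao_entry(i, j, landmarks, sigma, N=200, extent=3.0):
    """Numerically integrate (6) with phi''(p) = 1/p on an N x N grid -- the O(N^2) cost."""
    pts = np.array(landmarks, dtype=float)
    grid_x = np.linspace(pts[:, 0].min() - extent, pts[:, 0].max() + extent, N)
    grid_y = np.linspace(pts[:, 1].min() - extent, pts[:, 1].max() + extent, N)
    area = (grid_x[1] - grid_x[0]) * (grid_y[1] - grid_y[0])
    p, dp = gmm_and_partials(grid_x, grid_y, landmarks, sigma)
    return np.sum(dp[i] * dp[j] / np.maximum(p, 1e-300)) * area

square = [(-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5), (0.5, 0.5)]
print(fisher_rao_entry(0, 0, square, sigma=0.25))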

3 α-Order Entropy Metric for Shape Analysis

Our desire to find a computationally efficient information metric was motivated by noticing that if the integral in the metric could be reduced to just a correlation between the partials of the density w.r.t. θ_i and θ_j, i.e. ∫ (∂p/∂θ_i)(∂p/∂θ_j) dx, then the GMM would reduce to separable, one-dimensional Gaussian integrals for which closed-form solutions exist. This translates to selecting a ϕ such that ϕ″ becomes a constant in (6). In [12], Havrda and Charvát introduced the notion of an α-order entropy using the convex function

\phi(p) = (\alpha - 1)^{-1} \left( p^{\alpha} - p \right), \qquad \alpha \neq 1.
(8)

In the limit α → 1, (8) reduces to (7). We set α = 2, which results in ½ϕ″ = 1. The one-half scaling factor does not impact the metric properties. The new metric is defined as

g_{ij}^{\alpha} = \int_{\chi} \left( \frac{\partial p}{\partial \theta_{i}} \right) \left( \frac{\partial p}{\partial \theta_{j}} \right) dx
(9)

and we refer to it as the α-order entropy metric tensor.
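The short calculation behind this reduction (ours, though it follows directly from (8) and (6)) is

\phi'(p) = (\alpha - 1)^{-1} \left( \alpha p^{\alpha - 1} - 1 \right),
\qquad
\phi''(p) = \alpha\, p^{\alpha - 2},
\qquad
\alpha = 2 \;\Longrightarrow\; \tfrac{1}{2}\phi''(p) = 1,

so that, up to the immaterial factor of one-half, (6) reduces to (9).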

The new metric can be used to find the distance between two shapes represented as p(x|θ_1) and p(x|θ_2). The geodesic can be obtained by solving

g_{ki}^{\alpha}\, \ddot{\theta}^{i} + \Gamma_{k,ij}\, \dot{\theta}^{i} \dot{\theta}^{j} = 0
(10)

where the Christoffel symbol of the first kind is defined as Γ_{k,ij} = ½{∂g^α_{ik}/∂θ^j + ∂g^α_{kj}/∂θ^i − ∂g^α_{ij}/∂θ^k}, and the standard Einstein summation convention is assumed. Following [10], we use gradient descent to find a local solution to this system of second-order ODEs. However, unlike [10], we now have closed-form solutions for g^α_{ij} and ∂g^α_{kj}/∂θ^i. (We have elected to show the form of the metric tensor and its derivative for one particular combination of indices below and partially list the rest in the Appendix.) Here the θ-coordinates belong to the same landmark, as indicated by ⌈i/2⌉ = ⌈j/2⌉, and the i and j indices are equal.

\text{If } \lceil i/2 \rceil = \lceil j/2 \rceil,\ i = j: \qquad g_{ij} = \frac{1}{8\pi\sigma^{4}K^{2}}, \qquad \frac{\partial g_{ij}}{\partial \theta_{k}} = 0.
(11)

Please see the Appendix for more details regarding the metric tensor and Christoffel symbols.
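To make the closed-form evaluation concrete, the following Python sketch (our illustration, obtained by carrying out the separable Gaussian integrals in (9); it is not code from the paper) assembles the full 2K × 2K α-order metric tensor for a landmark set. Its same-landmark diagonal entries reproduce (11), its same-landmark off-diagonal entries vanish, and the remaining entries correspond to the Appendix expressions; a brute-force quadrature of (9) can serve as an independent check:

import numpy as np

def alpha_metric(landmarks, sigma):
    """Closed-form alpha-order (alpha = 2) metric tensor for the GMM shape model (1).

    Coordinates are ordered (x_1, y_1, x_2, y_2, ...), matching theta.
    Entries follow from evaluating the separable Gaussian integrals in (9).
    """
    pts = np.asarray(landmarks, dtype=float)
    K = len(pts)
    n = 2 * K
    g = np.zeros((n, n))
    c = 1.0 / (16.0 * np.pi * sigma ** 6 * K ** 2)      # common prefactor
    for i in range(n):
        for j in range(n):
            a, b = i // 2, j // 2                        # landmarks owning coordinates i and j
            ux, uy = pts[a] - pts[b]                     # differences of the landmark means
            E = np.exp(-(ux ** 2 + uy ** 2) / (4.0 * sigma ** 2))
            if i % 2 == 0 and j % 2 == 0:                # both x-coordinates
                g[i, j] = c * E * (2.0 * sigma ** 2 - ux ** 2)
            elif i % 2 == 1 and j % 2 == 1:              # both y-coordinates
                g[i, j] = c * E * (2.0 * sigma ** 2 - uy ** 2)
            else:                                        # mixed x/y pair
                g[i, j] = -c * E * ux * uy
    return g

square = [(-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5), (0.5, 0.5)]
sigma, K = 0.25, 4
g = alpha_metric(square, sigma)
print(np.isclose(g[0, 0], 1.0 / (8.0 * np.pi * sigma ** 4 * K ** 2)))   # diagonal entry of (11)
print(g[0, 1])                                                          # same-landmark off-diagonal: 0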

4 Experimental Results and Analysis

In order to get a better understanding of the geodesics produced with the Fisher-Rao metric versus the new α-order entropy metric, we first computed the geodesics for a set of simple shapes. Figure 2 illustrates geodesics obtained from matching a four-landmark square to one that has been rotated 210° clockwise. The geodesics obtained by the Fisher-Rao metric are smoothly curved, illustrating the hyperbolic nature of the manifold induced by the information matrix [14] whereas the α-order entropy metric displays sharper, abrupt variations. This is due to the fact that the Christoffel symbol values are zero for (i, j) pairs that belong to the same landmark.

Fig. 2
Fisher information versus α-order entropy metric. The dashed line is the initialization and the solid line the final geodesic. The corresponding points are numerically labeled where circular landmarks are the starting shape and square landmarks ...

For applications in medical imaging, we have evaluated both metrics on real data consisting of nine corpora callosa with 63 landmarks each, as shown in Figure 3 in the Appendix. We also compared the results to the standard thin-plate spline (TPS) bending energy and studied the particular sensitivities of each metric. The results of pairwise matching of all nine shapes are listed in Table 1. For brevity, only the best and worst matching distances are listed. In instances where the TPS distance disagrees with the other metrics, it assigns the worst values to shapes that differ largely due to “bending” transformations. On the other hand, both the Fisher-Rao and α-order entropy metrics take into account how neighboring landmarks change and reflect sensitivities to both bending and stretching. This has good potential for computational medical applications such as computer-aided diagnosis and biological growth analysis, where one is interested in measures that take into account multiple, local deformations. Though we have shown results on shapes that exhibit closed-contour topology, it is worth noting that this formulation has no inherent topological constraints on shape structure, thus enabling landmark analysis for anatomical forms with interior points or disjoint parts.

Fig. 3
Nine corpus callosum shapes used for pairwise matching, 63 landmarks per shape
Table 1
Pairwise shape distances. Columns 2 and 3 are obtained by integrating the intrinsic geodesic using the Fisher-Rao and α-order entropy metric, respectively. Column 4 is the bending energy of the TPS warp. Only the best and worst distances are shown ...

The α-order entropy metric provided substantial computational benefits over the existing method utilizing the Fisher-Rao metric. The Fisher-Rao metric requires an extra O(N²) computation of the integral over ℝ², where we have assumed an N-point discretization of the x- and y-axes. This computation must be repeated at each point along the evolving geodesic and for every pair of landmarks. The derivatives of the metric tensor, which are needed for geodesic computation, require the same O(N²) computation for every landmark triple and at each point on the evolving geodesic. Since our new ϕ-entropy metric tensor and its derivatives are in closed form, this extra O(N²) computation is not required. Note that the situation only worsens in 3D, where O(N³) computations are required for the Fisher-Rao metric (and derivatives), while our new metric (and derivatives) remain in closed form. It remains to be seen whether other closed-form information metrics can be derived that are meaningful in the shape matching context.

5 Conclusion

In this paper, we introduced a new Riemannian metric for landmark-based shape matching that combines both shape representation and deformation. This improves on the previous work of [10], where the Fisher-Rao metric was used to establish an intrinsic metric between landmark shapes. Given that our parameter space comes from Gaussian mixture models, the Fisher-Rao metric suffers serious computational inefficiencies, as it is not possible to obtain closed-form solutions for the metric tensor or the Christoffel symbols. The new α-order entropy metric, with α = 2, enables us to obtain closed-form solutions for the metric tensor and its derivatives and therefore alleviates this computational burden. Our new approach also illustrates the possibility, and the benefits, of using information metrics other than Fisher-Rao. Test results on corpus callosum shapes show the applicability of the new metric to shape matching, providing discriminability similar to that of the Fisher-Rao metric, though admittedly these results are still at the anecdotal stage since we have not performed statistical comparisons on the computed shape geodesic distances. In addition, comparison to the popular thin-plate spline bending energy illustrates the sensitivities of both information-based metrics to local deformations other than just bending. These metrics also do not suffer from topological constraints on the shape structure (thus enabling their applicability to a large class of medical imaging applications). Our future work will focus on extending this framework to incorporate diffeomorphic warping of the extrinsic space, intrinsically solving for correspondences between the shape landmarks, and extensions to 3D shape matching.

Acknowledgements

This work is partially supported by NSF IIS-0307712 and NIH R01NS046812. We acknowledge helpful conversations with Chris Small and early reviews by Tariq Bakir and Eric Spellman.

Appendix

Metric Tensor and Christoffel Symbol Components

Below we list the closed-form expressions for a few combinations of the metric tensor and its derivative.

If ⌈i/2⌉ = ⌈j/2⌉, i ≠ j:

g_{ij} = 0, \qquad \frac{\partial g_{ij}}{\partial \theta_{k}} = 0.

If ⌈i/2⌉ ≠ ⌈j/2⌉, i odd, j odd:

g_{ij} = \frac{ e^{-\frac{(\theta_{i}-\theta_{j})^{2}}{4\sigma^{2}}}\, e^{-\frac{(\theta_{i+1}-\theta_{j+1})^{2}}{4\sigma^{2}}} }{16\pi^{2}\sigma^{8}K^{2}} \left\{ \pi\sigma^{2} \left[ 2\sigma^{2} - (\theta_{i})^{2} + 2\theta_{i}\theta_{j} - (\theta_{j})^{2} \right] \right\}

for k = i, j (upper sign for k = i, lower for k = j):

\frac{\partial g_{ij}}{\partial \theta_{k}} = \mp\, \frac{ \pi\, e^{-\frac{(\theta_{i}-\theta_{j})^{2}}{4\sigma^{2}}}\, e^{-\frac{(\theta_{i+1}-\theta_{j+1})^{2}}{4\sigma^{2}}} }{32\pi^{2}\sigma^{8}K^{2}}\, (\theta_{i}-\theta_{j}) \left[ 6\sigma^{2} - (\theta_{i})^{2} + 2\theta_{i}\theta_{j} - (\theta_{j})^{2} \right]

If ⌈i/2⌉ ≠ ⌈j/2⌉, i odd, j even:

g_{ij} = \frac{ e^{-\frac{(\theta_{i}-\theta_{j-1})^{2}}{4\sigma^{2}}}\, e^{-\frac{(\theta_{i+1}-\theta_{j})^{2}}{4\sigma^{2}}} }{16\pi^{2}\sigma^{8}K^{2}} \left\{ \pi\sigma^{2} \left[ (-\theta_{i+1}+\theta_{j})(\theta_{i}-\theta_{j-1}) \right] \right\}

for k = i:

\frac{\partial g_{ij}}{\partial \theta_{k}} = \frac{ \pi\, e^{-\frac{(\theta_{i}-\theta_{j-1})^{2}}{4\sigma^{2}}}\, e^{-\frac{(\theta_{i+1}-\theta_{j})^{2}}{4\sigma^{2}}} }{32\pi^{2}\sigma^{8}K^{2}}\, (-\theta_{i+1}+\theta_{j}) \left[ 2\sigma^{2} - (\theta_{i})^{2} + 2\theta_{i}\theta_{j-1} - (\theta_{j-1})^{2} \right]
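As a quick numerical sanity check (added here for illustration, not part of the original appendix), the odd-odd expressions above can be verified by differentiating the closed-form g_ij with central finite differences; the parameter values below are arbitrary:

import numpy as np

def g_oddodd(ti, tip1, tj, tjp1, sigma, K):
    """Closed-form g_ij for x-coordinates of two different landmarks (odd-odd case)."""
    E = np.exp(-((ti - tj) ** 2 + (tip1 - tjp1) ** 2) / (4.0 * sigma ** 2))
    return E * (2.0 * sigma ** 2 - (ti - tj) ** 2) / (16.0 * np.pi * sigma ** 6 * K ** 2)

def dg_dti(ti, tip1, tj, tjp1, sigma, K):
    """Closed-form derivative of g_ij w.r.t. theta_i (the upper-sign branch above)."""
    E = np.exp(-((ti - tj) ** 2 + (tip1 - tjp1) ** 2) / (4.0 * sigma ** 2))
    return -E * (ti - tj) * (6.0 * sigma ** 2 - (ti - tj) ** 2) / (32.0 * np.pi * sigma ** 8 * K ** 2)

# Central finite-difference check of the analytic derivative
ti, tip1, tj, tjp1, sigma, K, h = 0.3, -0.2, -0.4, 0.5, 0.25, 4, 1e-6
numeric = (g_oddodd(ti + h, tip1, tj, tjp1, sigma, K)
           - g_oddodd(ti - h, tip1, tj, tjp1, sigma, K)) / (2.0 * h)
print(np.isclose(numeric, dg_dti(ti, tip1, tj, tjp1, sigma, K)))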

References

1. Small C. The statistical theory of shape. New York, NY: Springer; 1996.
2. Bookstein FL. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Patt. Anal. Mach. Intell. 1989;11:567–585.
3. Rohr K, Stiehl H, Sprengel R, Buzug T, Weese J, Kuhn M. Landmark-based elastic registration using approximating thin-plate splines. IEEE Trans. on Medical Imaging. 2001;20:526–534. [PubMed]
4. Davies R, Twining C, Cootes T, Taylor C. An information theoretic approach to statistical shape modelling. European Conference on Computer Vision (ECCV). Lecture Notes in Computer Science, LNCS 2351; Springer. 2002. pp. 3–20.
5. Camion V, Younes L. Energy Minimization Methods for Computer Vision and Pattern Recognition. New York: Springer; 2001. Geodesic interpolating splines; pp. 513–527.
6. Joshi S, Miller M. Landmark matching via large deformation diffeomorphisms. IEEE Trans. Image Processing. 2000;9:1357–1370. [PubMed]
7. Chui H, Rangarajan A. A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding. 2003;89:114–141.
8. Guo H, Rangarajan A, Joshi S, Younes L. Non-rigid registration of shapes via diffeomorphic point matching. ISBI 2004. 2004
9. Siddiqi K, Shokoufandeh A, Dickinson SJ, Zucker SW. Shock graphs and shape matching. ICCV; 1998. pp. 222–229.
10. Peter A, Rangarajan A. Shape matching using the Fisher-Rao Riemannian metric: Unifying shape representation and deformation. ISBI 2006. 2006
11. Burbea J, Rao CR. Entropy differential metric, distance and divergence measures in probability spaces: A unified approach. Journal of Multivariate Analysis. 1982;12:575–596.
12. Havrda J, Charvát F. Quantification method of classification processes: Concept of structural α-entropy. Kybernetica. 1967;3:30–35.
13. Rao C. Information and accuracy attainable in estimation of statistical parameters. Bulletin of the Calcutta Mathematical Society. 1945;37:81–91.
14. Costa S, Santos S, Strapasson J. Fisher information matrix and hyperbolic geometry. IEEE Information Theory Workshop. 2005:28–30.