Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. Author manuscript; available in PMC 2010 May 5.

Published in final edited form as:

Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2004 June 27; 1: I314–I319.

doi: 10.1109/CVPR.2004.1315048 | PMCID: PMC2864486

NIHMSID: NIHMS178029

Departments of Electrical Engineering and Diagnostic Radiology, Yale University P.O. Box 208042, New Haven CT 06520-8042, USA

Email: j.yang@yale.edu


**Abstract**

This paper presents a novel method for 3D image segmentation, where a Bayesian formulation, based on joint prior knowledge of multiple objects, along with information derived from the input image, is employed. Our method is motivated by the observation that neighboring structures have consistent locations and shapes that provide configurations and context that aid in segmentation. In contrast to the work presented earlier in [1], we define a Maximum A Posteriori (MAP) estimation model using the joint prior information of the multiple objects to realize image segmentation, which allows multiple objects with clearer boundaries to serve as reference objects providing constraints in the segmentation of difficult objects. To achieve this, multiple signed distance functions are employed as representations of the objects in the image. We introduce a representation for the joint density function of the neighboring objects, and define a joint probability distribution over the variations of objects contained in a set of training images. By estimating the MAP shapes of the objects, we formulate the joint shape prior models in terms of level set functions. We found the algorithm to be robust to noise and able to handle multidimensional data. Furthermore, it avoids the need for point correspondences during the training phase. Results and validation from various experiments on 2D/3D medical images are demonstrated.

**I. Introduction**

Image segmentation remains an important and challenging task due to poor image contrast, noise, and missing or diffuse boundaries. To address these problems, Snakes or Active Contour Models (ACM) (Kass et al. (1987)) [2] have been widely used for segmenting non-rigid objects in a wide range of applications, where an initial contour is deformed towards the boundary of the object to be detected by minimizing an energy functional. These methods may be sensitive to the starting position and may "leak" through the boundary of the object if the edge feature is not salient enough.

The incorporation of more specific prior information into deformable models has received considerable attention. Cootes et al. [3] find corresponding points across a set of training images and construct a statistical model of shape variation from the point positions. The best match of the model to the image is found by searching over the model parameters. Staib and Duncan [4] incorporate global shape information into the segmentation process by using an elliptic Fourier decomposition of the boundary and placing a Gaussian prior on the Fourier coefficients. Zeng et al. [5] develop a coupled surfaces algorithm to segment the cortex by using a thickness prior constraint. Leventon et al. [6] extend Caselles' [7] geodesic active contours by incorporating shape information into the evolution process.

Our work is also a prior-information-based approach to image segmentation. As an extension of the neighbor-constrained deformable model presented earlier in [1], our work shares the observation that neighboring structures have consistent locations and shapes that provide configurations and context that aid in segmentation. In contrast to the work presented in [1], the MAP segmentation framework that we present in this paper is based on the joint prior information of the multiple objects in the image (instead of the conditional local neighbor prior information). The objects with clearer boundaries in the image can be used as reference objects to provide constraints in the segmentation of difficult objects. Our work also shares common aspects with a number of coupled active contour models [1][5][8], where multiple level set functions are employed as the representations of the multiple objects within the image. By using this level-set-based numerical algorithm, several objects can be segmented simultaneously.

The strength of our approach is the incorporation of joint prior information of multiple objects into image segmentation to improve the segmentation results as well as reduce the complexity of the segmentation process by providing prior constraints from multiple neighboring objects. Our model is based on a MAP framework using the joint prior information of neighboring objects within the image. We introduce a representation for the joint density function of the neighbor objects and define the corresponding probability distributions. Formulating the segmentation as a MAP estimation of the shapes of the objects and modeling in terms of level set functions, we compute the associated Euler-Lagrange equations. The contours evolve while attempting to adhere to the neighbor prior information and the image gray level information.

**II. Description of the Model**

As presented in our previous work in [1], probabilistic formulations are powerful approaches to deformable models. Deformable models can be fit to the image data by finding the model shape parameters that maximize the posterior probability. Consider an image *I* that has *M* shapes of interest; a MAP framework can be used to realize image segmentation combining joint prior information of the neighboring objects and image information:

$$\begin{aligned} \widehat{S}_i &= \operatorname{arg\,max}_{S_i}\, p(S_1, S_2, \dots, S_i, \dots, S_M \mid I)\\ &= \operatorname{arg\,max}_{S_i}\, p(I \mid S_1, S_2, \dots, S_M)\, p(S_1, S_2, \dots, S_M), \qquad i = 1, 2, \dots, M \end{aligned}$$

(1)

where *S*_{1}, *S*_{2}, …, *S _{M}* are the evolving surfaces of all the shapes of interest.

*p*(*I*|*S*_{1}, *S*_{2}, …, *S _{M}*) is the probability of producing an image *I* given the surfaces *S*_{1}, *S*_{2}, …, *S _{M}*, which we model as:

$$\begin{aligned} p(I \mid S_1, S_2, \dots, S_M) = \prod_{i=1}^{M} \Bigg\{ &\prod_{(x,y,z)\,\mathrm{inside}(S_i)} \exp\!\left[-\frac{(I(x,y,z) - c_{1i})^2}{2\sigma_{1i}^2}\right]\\ \cdot\, &\prod_{(x,y,z)\,\mathrm{outside}(S_i),\,\mathrm{inside}(\Omega_i)} \exp\!\left[-\frac{(I(x,y,z) - c_{2i})^2}{2\sigma_{2i}^2}\right] \Bigg\} \end{aligned}$$

(2)

where *c*_{1i} and σ_{1i} are the average and variance of *I* inside *S*_{i}, and *c*_{2i} and σ_{2i} are the average and variance of *I* outside *S _{i}* but inside a certain domain Ω_{i} surrounding *S _{i}*.
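As an illustration, the log of the image term in equation (2) can be evaluated directly from region masks. The following is a minimal numpy sketch; the function name and the mask arguments are our own, not from the paper:

```python
import numpy as np

def region_log_likelihood(image, inside_mask, band_mask, c1, s1, c2, s2):
    """Log of the Eq. (2) image term for a single shape S_i.

    inside_mask: boolean array marking voxels inside S_i.
    band_mask:   boolean array marking voxels outside S_i but
                 inside the surrounding domain Omega_i.
    """
    ll_in = -((image[inside_mask] - c1) ** 2) / (2.0 * s1 ** 2)
    ll_out = -((image[band_mask] - c2) ** 2) / (2.0 * s2 ** 2)
    return ll_in.sum() + ll_out.sum()
```

The log form avoids underflow from multiplying many per-voxel exponentials.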

*p*(*S*_{1}, *S*_{2}, …, *S _{M}*) is the joint density function of all the shapes of interest. When the first *l* shapes have clear boundaries and can be treated as known reference objects, *S*_{1} = ξ_{1}, …, *S _{l}* = ξ_{l}, the MAP estimates of the remaining shapes become:

$$\begin{aligned} \widehat{S}_i = \operatorname{arg\,max}_{S_i} \big[ &p(I \mid S_1 = \xi_1, S_2 = \xi_2, \dots, S_l = \xi_l, S_{l+1}, \dots, S_M)\\ \cdot\, &p(S_1 = \xi_1, S_2 = \xi_2, \dots, S_l = \xi_l, S_{l+1}, \dots, S_M) \big], \qquad i = l+1, l+2, \dots, M,\quad 1 \le l < M \end{aligned}$$

(3)

To build a model for the joint prior of the neighboring objects, we choose level sets as the representation of the shapes [1][6][8], and then define the joint probability density function *p*(*S*_{1}, *S*_{2}, …, *S _{M}*) in equation (1).

Consider a training set of *n* aligned images {*I*_{1}, *I*_{2}, …, *I _{n}*}, each containing the same *M* shapes of interest. Each shape *i* in training image *j* is embedded as the zero level set of a signed distance function Ψ_{ij}, so the training set forms the matrix:

$$T = \begin{bmatrix} \Psi_{11} & \Psi_{12} & \dots & \Psi_{1n}\\ \Psi_{21} & \Psi_{22} & \dots & \Psi_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ \Psi_{M1} & \Psi_{M2} & \dots & \Psi_{Mn} \end{bmatrix}$$

(4)

Using the technique developed in [6], each of the Ψ_{ij} (*i* = 1, 2, …, *M*; *j* = 1, 2, …, *n*) is placed as a column vector with *N ^{d}* elements, where *d* is the dimensionality of the image and *N ^{d}* is its number of samples. The *M* such vectors of training image *j* are then stacked into a single column vector χ_{j} of length *MN ^{d}*.
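A signed distance embedding of a binary training shape can be built with a Euclidean distance transform. A minimal sketch, assuming scipy is available and adopting the convention (used later in the energy of equation (16)) that the function is negative inside the shape:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance function of a binary shape mask.

    Convention assumed here: negative inside the shape, positive
    outside, near zero on the boundary (up to grid resolution).
    """
    dist_inside = distance_transform_edt(mask)      # distance to background
    dist_outside = distance_transform_edt(~mask)    # distance to foreground
    return dist_outside - dist_inside

# The matrix T of Eq. (4) is then assembled by flattening
# signed_distance(mask of shape i in training image j) into columns.
```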

Following the lead of [6][8], the mean and variance of the level sets vector χ can be computed using Principal Component Analysis (PCA). The mean level sets vector, $\bar{\chi}$, is calculated using

$$\bar{\chi} = \frac{1}{n} \sum_{j=1}^{n} \chi_j$$

(5)

For each level sets vector χ_{j} in the training set we calculate its deviation from the mean, *d*χ_{j}, where

$$d\chi_j = \chi_j - \bar{\chi}, \qquad j = 1, 2, \dots, n$$

(6)

Then each such deviation is placed as a column vector in an *MN ^{d}* × *n* matrix. Applying PCA (e.g., a singular value decomposition of this deviation matrix) yields the *k* principal modes of variation *U _{k}* (an *MN ^{d}* × *k* matrix) and a *k* × *k* diagonal matrix Σ_{k} of the corresponding variances. An estimate of the joint level sets can then be written in terms of the *k* principal components and a *k*-dimensional vector of coefficients α:

$$\tilde{\chi} = \bar{\chi} + U_k \alpha$$

(7)
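The PCA steps of equations (5)-(7) can be sketched in numpy as follows; the function names, the SVD formulation, and the normalization of the variances by *n* are our choices, not prescribed by the paper:

```python
import numpy as np

def joint_shape_pca(chi, k):
    """PCA of stacked level-set vectors (Eqs. 5-7).

    chi: (M*N**d, n) matrix whose column j stacks the M level set
         functions of training image j.
    Returns the mean vector, U_k (first k principal directions),
    and Sigma_k (k x k diagonal matrix of component variances).
    """
    n = chi.shape[1]
    mean = chi.mean(axis=1, keepdims=True)            # Eq. (5)
    dev = chi - mean                                  # Eq. (6)
    # SVD of the deviation matrix gives the principal directions.
    U, s, _ = np.linalg.svd(dev, full_matrices=False)
    Uk = U[:, :k]
    Sigma_k = np.diag((s[:k] ** 2) / n)               # variances (divided by n here)
    return mean[:, 0], Uk, Sigma_k

def reconstruct(mean, Uk, alpha):
    """Eq. (7): estimate of the joint level sets from coefficients alpha."""
    return mean + Uk @ alpha
```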

To write equation (7) in the level sets form, we have:

$$\begin{bmatrix} \tilde{\Psi}_1\\ \tilde{\Psi}_2\\ \vdots\\ \tilde{\Psi}_M \end{bmatrix} = \begin{bmatrix} \bar{\Psi}_1\\ \bar{\Psi}_2\\ \vdots\\ \bar{\Psi}_M \end{bmatrix} + U_k \alpha$$

(8)

Under the assumption of a Gaussian distribution of joint level sets represented by α, the joint probability density function of neighboring objects, *p*(*S*_{1}, *S*_{2}, …, *S _{M}*), can be approximated by:

$$p\left(\alpha \right)=\frac{1}{\sqrt{{\left(2\pi \right)}^{k}\mid {\Sigma}_{k}\mid}}\mathrm{exp}[-\frac{1}{2}{\alpha}^{T}{\Sigma}_{k}^{-1}\alpha ]$$

(9)
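For numerical stability, the prior of equation (9) is usually evaluated in log form. A sketch using a log-determinant rather than the raw determinant (our choice for illustration):

```python
import numpy as np

def log_prior(alpha, Sigma_k):
    """Log of the Gaussian joint shape prior p(alpha) in Eq. (9)."""
    k = alpha.shape[0]
    _, logdet = np.linalg.slogdet(Sigma_k)            # log |Sigma_k|
    quad = alpha @ np.linalg.solve(Sigma_k, alpha)    # alpha^T Sigma_k^-1 alpha
    return -0.5 * (k * np.log(2 * np.pi) + logdet + quad)
```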

Figure 1 shows a few of the 16 MR cardiac training images used to define the level set based shape model of the endocardial boundary of the left and right ventricles. Before computing and combining the level sets of these training shapes, the curves were rigidly aligned. By using PCA of the joint level sets of the two structures, we can build a model of the joint shapes of left and right ventricles. Figure 2 illustrates zero level sets corresponding to the mean and three primary modes of variance of the distribution of the two ventricles jointly.

Figure 1. Outlines of left and right ventricles in 6 out of 16 2D MR cardiac training images, gated at a fixed point in the cardiac cycle.

We also show a 3D training set of two rigidly aligned subcortical structures, the left amygdala and left hippocampus, in Figure 3. Figure 4 shows the three primary modes of variance of the left amygdala and left hippocampus. Note that the zero level sets of the mean joint level sets and primary modes appear to be reasonable representative shapes of the classes of objects being learned. This shows that our joint prior model of multiple objects successfully incorporates the neighbor prior information, such as the relative position and shape among the objects, and unifies them under one framework.

Figure 3. Zero level sets of two post-aligned subcortical structures (left amygdala and left hippocampus) in 12 3D MR training images.

In our active contour model, we also add some regularizing terms [1]: a general smoothness Gibbs prior for the region boundaries, *p _{B}*(*S*_{1}, *S*_{2}, …, *S _{M}*), and a prior on the size of the enclosed regions, *p _{A}*(*S*_{1}, *S*_{2}, …, *S _{M}*):

$${p}_{B}({S}_{1},{S}_{2},\dots ,{S}_{M})=\prod _{i=1}^{M}{e}^{-{\mu}_{i}{\oint}_{{S}_{i}}ds}$$

(10)

$${p}_{A}({S}_{1},{S}_{2},\dots ,{S}_{M})=\prod _{i=1}^{M}{e}^{-{\nu}_{i}{A}_{i}^{c}}$$

(11)

where ${A}_{i}^{c}$ is the size of the region enclosed by shape *S _{i}*, and μ_{i} and ν_{i} are weighting parameters. Combining these regularizing priors with the joint shape prior *p*(α), the joint density function of all the shapes can be written as:

$$p(S_1, S_2, \dots, S_M) = p(\alpha) \cdot p_B(S_1, S_2, \dots, S_M) \cdot p_A(S_1, S_2, \dots, S_M)$$

(12)

Therefore, equation (1) can be approximated by:

$$\begin{array}{cc}\hfill & p({S}_{1},{S}_{2},\dots ,{S}_{M}\mid I)\hfill \\ \hfill & \propto \prod _{i=1}^{M}\{\prod _{(x,y,z)\mathit{inside}\left({S}_{i}\right)}\mathrm{exp}[-\frac{{(I(x,y,z)-{c}_{1i})}^{2}}{2{\sigma}_{1i}^{2}}]\hfill \\ \hfill & \cdot \prod _{(x,y,z)\mathit{outside}\left({S}_{i}\right),\mathit{inside}\left({\Omega}_{i}\right)}\mathrm{exp}[-\frac{{(I(x,y,z)-{c}_{2i})}^{2}}{2{\sigma}_{2i}^{2}}]\}\hfill \\ \hfill & \cdot \prod _{i=1}^{M}{e}^{-{\mu}_{i}{\oint}_{{S}_{i}}ds}\prod _{i=1}^{M}{e}^{-{\nu}_{i}{A}_{i}^{c}}\hfill \\ \hfill & \cdot \frac{1}{\sqrt{{\left(2\pi \right)}^{k}\mid {\Sigma}_{k}\mid}}\mathrm{exp}[-\frac{1}{2}{\alpha}^{T}{\Sigma}_{k}^{-1}\alpha ]\hfill \end{array}$$

(13)

Since:

$$\begin{aligned} \widehat{S}_i &= \operatorname{arg\,max}_{S_i}\, p(S_1, S_2, \dots, S_i, \dots, S_M \mid I)\\ &= \operatorname{arg\,min}_{S_i} \left[-\ln p(S_1, S_2, \dots, S_i, \dots, S_M \mid I)\right], \qquad i = 1, 2, \dots, M \end{aligned}$$

(14)

Let

$$\begin{aligned} E &= -\ln p(S_1, S_2, \dots, S_i, \dots, S_M \mid I)\\ &\propto \sum_{i=1}^{M} \Big\{ \lambda_{1i} \int_{(x,y,z)\,\mathrm{inside}(S_i)} |I(x,y,z) - c_{1i}|^2\, dx\, dy\, dz\\ &\quad + \lambda_{2i} \int_{(x,y,z)\,\mathrm{outside}(S_i),\,\mathrm{inside}(\Omega_i)} |I(x,y,z) - c_{2i}|^2\, dx\, dy\, dz \Big\}\\ &\quad + \sum_{i=1}^{M} \mu_i \oint_{S_i} ds + \sum_{i=1}^{M} \nu_i A_i^c + \omega_i\, \alpha^T \Sigma_k^{-1} \alpha \end{aligned}$$

(15)

Given the first *l* objects in the image, the MAP estimation of the other shapes of interest in equation (3), ${\widehat{S}}_{i}(i=l+1,l+2,\dots ,M)$, is also the minimizer of the above energy functional *E*. This minimization problem can be formulated and solved using the level set method and we can realize the segmentation of multiple objects simultaneously.

In the level set method, *S _{i}* is the zero level set of a higher dimensional level set function ψ_{i}, i.e., *S _{i}* = {(*x*, *y*, *z*) | ψ_{i}(*x*, *y*, *z*) = 0}; following the convention used in equation (16) below, ψ_{i} is negative inside *S _{i}* and positive outside.

For the level set formulation of our model, we replace *S _{i}* with ψ_{i} and express the energy functional *E* in terms of the regularized Heaviside function *H*_{ε} and Dirac measure δ_{ε}:

$$\begin{aligned} &E(c_{1i}, c_{2i}, \psi_i \mid i = l+1, l+2, \dots, M)\\ &= \sum_{i=l+1}^{M} \Big\{ \mu_i \int_{\Omega} \delta_{\epsilon}(\psi_i(x,y,z))\, |\nabla \psi_i(x,y,z)|\, dx\, dy\, dz\\ &\quad + \nu_i \int_{\Omega} \big(1 - H_{\epsilon}(\psi_i(x,y,z))\big)\, dx\, dy\, dz\\ &\quad + \lambda_{1i} \int_{\Omega} |I(x,y,z) - c_{1i}|^2 \big(1 - H_{\epsilon}(\psi_i(x,y,z))\big)\, dx\, dy\, dz\\ &\quad + \lambda_{2i} \int_{\Omega_i} |I(x,y,z) - c_{2i}|^2 H_{\epsilon}(\psi_i(x,y,z))\, dx\, dy\, dz \Big\}\\ &\quad + \omega_i \big[(G(\psi_1) - \bar{\psi}_1)^T, (G(\psi_2) - \bar{\psi}_2)^T, \dots, (G(\psi_M) - \bar{\psi}_M)^T\big]\\ &\qquad \cdot\, U_k \Sigma_k^{-1} U_k^T\\ &\qquad \cdot\, \big[(G(\psi_1) - \bar{\psi}_1)^T, (G(\psi_2) - \bar{\psi}_2)^T, \dots, (G(\psi_M) - \bar{\psi}_M)^T\big]^T \end{aligned}$$

(16)

where Ω denotes the image domain. *G*(·) is an operator that generates the vector representation (as shown in section II-B) of a matrix by column scanning. *g*(·) is the inverse operator of *G*(·). To compute the associated Euler-Lagrange equation for each unknown level set function ψ_{i}, we assume that the *M* − *l* difficult objects are related to the *l* easy neighbor objects independently, keep *c*_{1i} and *c*_{2i} fixed, and minimize *E* with respect to each ψ_{i} (*i* = *l* + 1, *l* + 2, …, *M*). Parameterizing the descent direction by artificial time *t* ≥ 0, the evolution equation in ψ_{i}(*t, x, y, z*) is:

$$\begin{aligned} \frac{\partial \psi_i}{\partial t} &= \delta_{\epsilon}(\psi_i) \Big[ \mu_i \cdot \mathrm{div}\Big(\frac{\nabla \psi_i}{|\nabla \psi_i|}\Big) + \nu_i + \lambda_{1i} |I - c_{1i}|^2 - \lambda_{2i} |I - c_{2i}|^2 \Big]\\ &\quad - \omega_i \cdot g\big[U_{ki} \Sigma_k^{-1} U_{ki}^T (G(\psi_i) - \bar{\psi}_i)\big]\\ &\quad - \omega_i \cdot g\Big[U_{ki} \Sigma_k^{-1} \sum_{j=1}^{l} U_{kj}^T (G(\psi_j) - \bar{\psi}_j)\Big],\\ &\qquad i = l+1, l+2, \dots, M \end{aligned}$$

(17)

where *U _{ki}* is the sub-matrix of *U _{k}* formed by the rows corresponding to object *i*.

We approximate *H*_{ε} and δ_{ε} as follows [9]: ${H}_{\epsilon}\left(z\right)={\scriptstyle \frac{1}{2}}[1+{\scriptstyle \frac{2}{\pi}}\mathrm{arctan}\left({\scriptstyle \frac{z}{\epsilon}}\right)]$, ${\delta}_{\epsilon}\left(z\right)=\frac{\epsilon}{\pi ({\epsilon}^{2}+{z}^{2})}$. *c*_{1i} and *c*_{2i} are defined by: ${c}_{1i}\left({\psi}_{i}\right)=\frac{{\int}_{\Omega}I(x,y,z)\cdot (1-H\left({\psi}_{i}(x,y,z)\right))dxdydz}{{\int}_{\Omega}(1-H\left({\psi}_{i}(x,y,z)\right))dxdydz}$, ${c}_{2i}\left({\psi}_{i}\right)=\frac{{\int}_{{\Omega}_{i}}I(x,y,z)\cdot H\left({\psi}_{i}(x,y,z)\right)dxdydz}{{\int}_{{\Omega}_{i}}H\left({\psi}_{i}(x,y,z)\right)dxdydz}$.
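These regularized functions and region statistics translate directly to numpy. A sketch, assuming ψ_{i} < 0 inside *S _{i}* and, for simplicity, computing *c*_{2i} over the whole image domain rather than restricting it to Ω_{i} as the paper does:

```python
import numpy as np

def heaviside_eps(z, eps=1.0):
    """Regularized Heaviside H_eps from [9]."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def delta_eps(z, eps=1.0):
    """Regularized Dirac measure, the derivative of H_eps."""
    return eps / (np.pi * (eps ** 2 + z ** 2))

def region_means(image, psi, eps=1.0):
    """c_1i (inside, psi < 0) and c_2i (outside) of the current contour.

    The interior is weighted by (1 - H_eps(psi)), matching Eq. (16);
    c_2i here averages over the whole domain (a simplification).
    """
    H = heaviside_eps(psi, eps)
    c1 = (image * (1.0 - H)).sum() / (1.0 - H).sum()
    c2 = (image * H).sum() / H.sum()
    return c1, c2
```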

Given the surfaces ψ_{i}(*i* = 1, 2, …*M*) at time *t*, we seek to compute the evolution steps that bring all the zero level set curves to the correct final segmentation based on the joint prior information of the objects and image information. We first set up *p*(α) from the training set using PCA. At each stage of the algorithm, we recompute the constants ${c}_{1i}\left({\psi}_{i}^{t}\right)$ and ${c}_{2i}\left({\psi}_{i}^{t}\right)$ and update ${\psi}_{i}^{t+1}$. This is repeated until convergence.
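One descent step of equation (17) for a single 2D level set might be sketched as follows. The neighbor-prior term is passed in pre-assembled so the sketch stays self-contained, and the parameter defaults are illustrative only:

```python
import numpy as np

def curvature(psi):
    """div(grad psi / |grad psi|) via central differences."""
    gy, gx = np.gradient(psi)                      # axis 0 = y, axis 1 = x
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8       # avoid division by zero
    nx, ny = gx / norm, gy / norm
    return np.gradient(ny, axis=0) + np.gradient(nx, axis=1)

def evolve_step(psi, image, c1, c2, lam=0.5, mu=0.00005 * 255 ** 2,
                nu=0.0, dt=0.1, eps=1.0, prior_force=None):
    """One gradient-descent step of Eq. (17) for one level set psi.

    prior_force, if given, is the already-assembled neighbor-prior
    term (the -omega_i * g[...] contributions) on the same grid.
    lam plays the role of lambda_1i = lambda_2i.
    """
    delta = eps / (np.pi * (eps ** 2 + psi ** 2))
    data = lam * (image - c1) ** 2 - lam * (image - c2) ** 2
    dpsi = delta * (mu * curvature(psi) + nu + data)
    if prior_force is not None:
        dpsi = dpsi + prior_force
    return psi + dt * dpsi
```

At each iteration one would recompute c_1i, c_2i from the current ψ_i and repeat until the zero level sets stop moving, as described above.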

To simplify the complexity of the segmentation system, we generally choose the parameters in our experiments as follows: λ_{1i} = λ_{2i} = λ_{i}, μ_{i} = 0.00005·255^{2}, ν_{i} = 0 [9]. This leaves us only two free parameters (ω_{i} and λ_{i}) to balance the influence of two terms, the image data term and the neighbor prior term for each object. The tradeoff between neighbor prior and image information depends on the strength of the neighbor prior and the image quality for a given application. We set these parameters empirically for particular segmentation tasks, given the general image quality and the neighbor prior information.

**III. Applications to Medical Imagery**

We have applied our model to various medical images, each containing at least two different types of shapes, at least one of which can be regarded as a reference object. None of the test images appear in the training sets. The variations captured by the principal components of the level-set-based distribution model (*U _{k}*) in this paper are based on rigid alignment of the training data.

We first consider a 2D cardiac image with two structures of interest, the left and right ventricles. The training set consists of 16 images like those in Figure 1. In the top row of Figure 5, we show the segmentation of the left and right ventricles using only image information; the curves cannot lock onto the shapes of the objects. In the bottom row of Figure 5, we show the results obtained using our model, where the right ventricle is the reference object. The curves are able to converge on the desired boundaries even though some parts of the boundaries are too blurred to be detected using only gray level information. Both segmentations converged in a couple of minutes on a 2.00 GHz Intel Xeon CPU.

Figure 5. Three steps in the segmentation of 2 shapes in a 2D cardiac MR image without (top) and with (bottom) the neighbor prior (λ_{i} = ω_{i} = 0.5, *i* = 1, 2).

We then consider a 2D MR brain image with eight subcortical structures of different intensities and with blurred boundaries. Figure 6 shows a few of the zero level sets of the post-aligned eight subcortical structures from 12 2D MR training images. Figure 7 (top) shows a few steps of the segmentation using only gray level information. Only the lower (posterior) portions of the lateral ventricles can be segmented well, since they have clearer boundaries. Figure 7 (bottom) shows the results of using our joint shape prior model, where the ventricles are the reference objects for all the other objects. Segmenting all eight subcortical structures took several minutes.

Figure 7. Segmentation of 8 sub-cortical structures (the lateral ventricles (λ_{i} = 0.8, ω_{i} = 0.2), heads of the caudate nucleus (λ_{i} = 0.3, ω_{i} = 0.7), and putamina (λ_{i} = 0.2, ω_{i} = 0.8)) in an MR brain image without prior **...**

We also test our method on 3D medical images. Figure 8 shows a few steps in the segmentation of the left and right amygdalae and hippocampi in an MR brain image. Segmenting the four structures can be very difficult without prior information since all of them have very poorly defined boundaries. Using our neighbor-constrained joint prior model, as shown in Figure 8, the four structures can be clearly segmented; we choose the amygdalae as the references since they have relatively small shape variances. Segmenting these 3D images (of size 172 × 148 × 124) took a couple of hours.

Figure 8. Initial, middle, and final steps in the segmentation of left and right amygdalae and hippocampi (λ_{i} = 0.1, ω_{i} = 0.9, *i* = 1, 2, 3, 4) in a 3D MR brain image. Three orthogonal slices (coronal, sagittal, and axial) and the 3D surfaces are **...**

To validate the segmentation results, we test our model on 12 different images for each of the above three cases; none of the test images appear in the training sets. We then compute the undirected average distance, in pixels, between the boundary of the computed segmentation *A* (*N _{A}* points) and the boundary of the manual segmentation *B* (*N _{B}* points).
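The undirected average boundary distance used for this validation can be computed as follows (a brute-force sketch; for large boundaries a KD-tree would be preferable):

```python
import numpy as np

def undirected_avg_distance(A, B):
    """Undirected average distance between two boundary point sets.

    A: (N_A, d) array and B: (N_B, d) array of boundary coordinates.
    For each point of A, take the distance to its nearest point of B,
    and vice versa; average the two directed means.
    """
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```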

We also test the robustness of our algorithm to noise. We add Gaussian noise to the MR image in Figure 7 (mean intensities of white/gray matter: 70/48), then segment it. Figure 9 shows the segmentation results with Gaussian noise of standard deviation 20 (left), 30 (middle), and 40 (right). In Figure 10, we show the segmentation errors of the lower portion of the left lateral ventricle in three cases: with no prior, with shape prior, and with joint neighbor prior. As the variance of the noise goes up, the error for no prior increases rapidly since the structure is too noisy to be detected using only gray level information. However, for the methods with shape prior and with joint neighbor prior, the errors are much lower and remain within a very small range even when the noise variance is very large. Note that our joint neighbor prior model achieves the smallest error among all the cases.

Figure 9. Initial and final steps in the segmentation of 8 sub-cortical structures in an MR brain image with Gaussian noise of σ = 20 (left), σ = 30 (middle), and σ = 40 (right).

In our work, we have focused on balancing the weights of the image data (λ_{i}) and neighbor prior (ω_{i}) with all the other parameters fixed. For all the applications presented in the paper, the weights of the image data and prior information can be varied by around 30% with the corresponding segmentation errors changing by no more than 5%. Thus, our method is not sensitive to the balance of the weights.

**IV. Conclusion**

A new model for automated segmentation of images containing multiple objects, incorporating neighbor prior information into the segmentation process, has been presented. We wanted to capture the constraining information that neighboring objects provide and use it for segmentation. We define a MAP estimation framework using the prior information provided by multiple neighboring objects to segment several objects simultaneously. We introduce a representation for the joint density function of the neighboring objects, and define joint probability distributions over the variations of the neighboring positions and shapes in a set of training images. We estimate the MAP shapes of the objects using evolving level sets based on the associated Euler-Lagrange equations. The contours evolve according to both the neighbor prior information and the image gray level information. Multiple objects in an image can be automatically detected simultaneously.

**References**

[1] Yang J, Staib L, Duncan J. Neighbor-Constrained Segmentation with 3D Deformable Models. IPMI. 2003:198–209.

[2] Kass M, Witkin A, Terzopoulos D. Snakes: Active contour models. Int'l Journal on Computer Vision. 1987;1:321–331.

[3] Cootes TF, Hill A, Taylor CJ, Haslam J. Use of active shape models for locating structures in medical images. Image and Vision Computing. 1994 July;12(6):355–365.

[4] Staib L, Duncan J. Boundary finding with parametrically deformable models. PAMI. 1992;14(11):1061–1075.

[5] Zeng X, Staib LH, Schultz RT, Duncan JS. Volumetric Layer Segmentation Using Coupled Surfaces Propagation. IEEE Conf. on Comp. Vision and Patt. Recog. 1998

[6] Leventon M, Grimson E, Faugeras O. Statistical shape influence in geodesic active contours. IEEE Conf. on Comp. Vision and Patt. Recog. 2000;1:316–323.

[7] Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int. J. Comput. Vis. 1997;22(1):61–79.

[8] Tsai A, Wells W, Tempany C, Grimson E, Willsky A. Coupled Multi-shape Model and Mutual Information for Medical Image Segmentation. IPMI. 2003:185–197.

[9] Chan T, Vese L. Active Contours Without Edges. IEEE Transactions on Image Processing. 2001;10(2):266–277.
