


PLoS One. 2017; 12(4): e0174813.

Published online 2017 April 4. doi: 10.1371/journal.pone.0174813

PMCID: PMC5380353

Dzung Pham, Editor

Center for Neuroscience and Regenerative Medicine, UNITED STATES

**Conceptualization:** FA MAG DP. **Data curation:** FA. **Formal analysis:** FA. **Funding acquisition:** FA DP. **Investigation:** FA MAG DP. **Methodology:** FA MAG DP. **Project administration:** DP. **Resources:** FA. **Software:** FA. **Supervision:** MAG DP. **Validation:** FA MAG DP. **Visualization:** FA MAG DP. **Writing – original draft:** FA. **Writing – review & editing:** FA MAG DP.

* E-mail: farhan.akram@urv.cat

Received 2016 September 21; Accepted 2017 March 15.

Copyright © 2017 Akram et al

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


This paper presents a region-based active contour method for the segmentation of intensity inhomogeneous images using an energy functional based on local and global fitted images. A square image fitted model is defined by using both local and global fitted differences. Moreover, local and global signed pressure force functions are introduced in the solution of the energy functional to stabilize the gradient descent flow. In the final gradient descent solution, the local fitted term helps extract regions with intensity inhomogeneity, whereas the global fitted term targets homogeneous regions. A Gaussian kernel is applied to regularize the contour at each step, which not only smoothes it but also avoids the computationally expensive re-initialization. Intensity inhomogeneous images contain undesired smooth intensity variations (bias field) that alter the results of intensity-based segmentation methods. The bias field is approximated with a Gaussian distribution and the bias of intensity inhomogeneous regions is corrected by dividing the original image by the approximated bias field. In this paper, a two-phase model is first derived and then extended to a four-phase model to segment brain magnetic resonance (MR) images into the desired regions of interest. Experimental results with both synthetic and real brain MR images are used for a quantitative and qualitative comparison with state-of-the-art active contour methods to show the advantages of the proposed segmentation technique in practical terms.

Image segmentation is an important stage in image processing and computer vision [1]. Intensity inhomogeneity is one of the well-known problems in image segmentation, which arises from the imperfections of the image acquisition process or due to external interferences. It manifests as a smooth intensity variation across the image that complicates the segmentation of the objects contained in it. For instance, in medical image analysis, segmentation and registration stages are highly sensitive to spurious variations of image intensity. Therefore, the complexity of intensity inhomogeneity can lead to false results and assumptions that can be critical for decision making by both doctors and radiologists. This is why numerous methods for intensity inhomogeneity correction have been proposed in the past [2].

Thus, intensity inhomogeneity correction is often a pre-processing stage necessary for achieving better image segmentation. In turn, correct segmentation makes intensity inhomogeneity correction rather trivial. Actually, both intensity inhomogeneity correction and segmentation can be viewed as two intertwined processes. In segmentation-based intensity inhomogeneity correction methods, both processes are merged such that they benefit from each other.

The techniques that aim to avoid intensity inhomogeneity in the image acquisition process are known as *prospective methods*. They are only capable of correcting intensity inhomogeneity caused by the imaging device and cannot segment the objects affected by it. On the other hand, techniques that can correct an image and also segment the objects affected by intensity inhomogeneity are called *retrospective methods*. Retrospective methods are further classified according to the image segmentation method they apply into filtering methods [3, 4], surface fitting methods [5, 6], histogram-based methods [7, 8], and active contours [9–15]. Segmentation-based methods [10–12] are the most versatile, since they unify segmentation and bias correction in a single framework, applying both in conjunction so that they benefit from each other.

Active contours are retrospective methods suitable for both image segmentation and bias correction [10–12, 16–18]. The first active contour method was proposed in [19] in order to segment an image by evolving a curve towards the boundary of an object contained in the image. An energy functional is first defined by using image statistics, curvature and gradient information. The curve is evolved by minimizing that energy functional. Active contour methods can be categorized into edge-based [19–21] and region-based [22–36] methods.

Edge-based active contour methods typically use image gradient information to define an inflating balloon force that is used in the curve evolution process [20]. However, in the case of intense noise or weak edges, edge-based active contours can hardly converge to the right contours. Therefore, edge-based methods are not suitable for such images.

Alternatively, region-based active contour methods use statistical and curvature information from the image in the formulation of the energy functional. They can be further categorized into global [22–26] and local [27–34] methods. Mumford and Shah [22] devised a global region-based active contour method by assuming image homogeneity. Thus, traditional active contour methods based on [22], such as the *active contours without edges* (ACWE) method [23], cannot segment images with intensity inhomogeneity. These methods usually compute intensity averages over the whole image. Therefore, they cannot deal with small changes between distinct regions nor segment objects with weak or blurred boundaries. On the other hand, local-based methods [27, 28] are able to distinguish small changes between the background and the foreground. Therefore, they are suitable for intensity inhomogeneous images.

A region-based active contour model able to process image information in local regions was proposed by Li et al. in [27, 28]. The major contribution of that work was the introduction of a local binary fitting (LBF) energy with a kernel function that enables the extraction of accurate local image information. Therefore, that model can be used to segment images with intensity inhomogeneity, which overcomes the limitation of piecewise constant models [23].

Fig 1(a) shows that a traditional region-based active contour method (such as ACWE) is unable to segment an image with an intensity inhomogeneous object. However, a local active contour method (such as LBF) is able to properly segment such an object, as shown in Fig 1(b). Taking Fig 1 as a reference, it can be concluded that active contour methods that use local statistical information of an image are able to produce fairly acceptable segmentation results in the context of intensity inhomogeneity.

A region-based active contour model with a variational level-set formulation for image segmentation was presented in [29]. It uses local intensity means for computing the level-set curve. Local image intensities are described by local Gaussian distributions (LGD) to guide the motion of the contour toward the object boundaries. Therefore, it can be used to segment images in the presence of intensity inhomogeneity and noise. In particular, the model is able to distinguish regions with different intensity variances.

A local active contour model for segmenting images with intensity inhomogeneity was proposed by Zhang et al. in [30]. Local image information is used to define a local image fitting (LIF) energy functional, which can be interpreted as a constraint on the differences between the fitting image [27, 28] and the original image. Furthermore, a new method is used to regularize the level-set function by using Gaussian kernel filtering after each iteration. In addition, re-initialization of the level-set curve is not required.

In [25], a region-based active contour method is formulated by using a global *signed pressure force* (SPF) function in the energy function. The global SPF function is defined by using global intensity means from the ACWE model [23]. A Gaussian kernel is used to regularize the level-set and prevent re-initialization.

Alternatively, a region-based active contour method is formulated in [31] in the context of intensity inhomogeneity by utilizing local intensity means. It uses an SPF function based on a local fitted image in its energy formulation in order to segment images with intensity inhomogeneity. A Gaussian kernel is used to smooth the level-set function after every step. Therefore, this method does not require re-initialization.

A variational level-set approach for bias correction and segmentation (VLSBCS) for images corrupted with intensity inhomogeneity was proposed by Li et al. in [9, 10]. The computed bias field is intrinsically ensured to be smooth by the data term in the variational formulation, without any additional effort to maintain the smoothness of the bias field.

A local statistical active contour model (LSACM) for image segmentation in the presence of intensity inhomogeneity was proposed by Zhang et al. in [11, 12]. The inhomogeneous objects are modelled as Gaussian distributions of different means and variances, and a moving window is used to map the original image into a new domain in which the intensity distributions of inhomogeneous objects are still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with the original signal within the window. A statistical energy functional is then defined for each local region, which combines the bias field, the level-set function, and the constant approximating the true signal of the corresponding object.

The present paper proposes a new region-based active contour method that incorporates both local and global fitted images in the energy functional. The local term helps segment objects with intensity inhomogeneity, whereas the global term accelerates the evolution of the contour over smooth homogeneous regions. The new energy functional is formulated with the assumption that the local fitted image from [30] can be divided into two different local and global components. In the gradient descent solution of the proposed energy functional, both local and global signed pressure force (SPF) functions are introduced. The local SPF function is formulated by using local image differences and deals with inhomogeneous regions. In turn, the global SPF function is formulated by using the global image difference in order to segment homogeneous regions while reducing the convergence time of the level-set curve. The replacement of the local and global image differences by their respective SPF functions makes the gradient descent solution more stable. Finally, a Gaussian kernel is used to regularize the level-set curve, which also avoids the computationally expensive re-initialization. In this paper, the proposed two-phase model is also extended to a four-phase model in order to successfully segment a brain MR image into three regions: white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) regions, which is not possible with a two-phase model. Experiments with both synthetic and real images demonstrate that the proposed method yields better segmentation results and offers bias-corrected images with more detail than state-of-the-art methods.

This paper is organized as follows. The theoretical foundations are discussed in section 2. The proposed method is described in section 3. Experimental results and comparisons are shown in section 4 using both synthetic and real brain magnetic resonance images as a practical application scope. Quantitative analysis is presented in section 5 using a public database of brain anatomical models [37]. Finally, conclusions and further research lines are given in section 6.

In the image segmentation problem, Mumford and Shah proposed an active contour method to find an optimum piecewise smooth approximation function *u* of an image *I*: Ω ⊂ **R**^{2} → **R**. The approximation *u* varies smoothly within each sub-region Ω_{i} of the image domain, and rapidly or discontinuously across the boundaries of Ω_{i}, which form a closed curve *C* ⊂ Ω along the object boundary. They proposed the following energy functional:

$$E_{MS} = \lambda \int_{\Omega} |I(x) - u(x)|^{2}\,dx + v \int_{\Omega \backslash C} |\nabla u(x)|^{2}\,dx + \mu L(C), \qquad x \in \Omega$$

(1)

where *L*(*C*) is the length of the curve *C*, and *μ*, *v* ≥ 0 are fixed parameters. The unknown contour *C* and the non-convexity of the above energy functional make it difficult to minimize. Some alternative methods were later proposed to simplify or modify the above functional, as described below.

Chan and Vese [23] proposed an active contour method based on the Mumford and Shah model [22]. Let *I*: Ω ⊂ **R**^{2} → **R** be an input image, *ϕ*: Ω → **R** a level-set function, and *C* a closed curve corresponding to the zero level set: *C* = {*x* ∈ Ω | *ϕ*(*x*) = 0}. The following energy functional is defined:

$$\begin{aligned} E_{CV} &= \lambda_{1} \int_{\Omega} |I(x) - m_{1}|^{2}\, H_{\epsilon}(\phi)\,dx + \lambda_{2} \int_{\Omega} |I(x) - m_{2}|^{2}\,(1 - H_{\epsilon}(\phi))\,dx \\ &\quad + \mu \int_{\Omega} |\nabla H_{\epsilon}(\phi)|^{2}\,dx + v \int_{\Omega} H_{\epsilon}(\phi)\,dx \end{aligned}$$

(2)

where *μ* ≥ 0, *v* ≥ 0, (*λ*_{1}, *λ*_{2})>0 are fixed parameters and *H*_{ϵ}(*ϕ*) is the regularized version of the Heaviside function:

$$H_{\epsilon}(\phi) = \frac{1}{2} + \frac{1}{\pi} \arctan\left(\frac{\phi}{\epsilon}\right)$$

(3)

Parameter *ϵ* controls the smoothness of the Heaviside function. For *ϵ* → 0, the Heaviside function approaches the ideal unit step function. Parameter *μ* scales the Euclidean length of the curve *C* in (Eq 2), which is used to regularize the contour. In turn, parameter *v* scales the area term in (Eq 2), which is used to compute the area of the region inside *C*. Constants *m*_{1} and *m*_{2} approximate the image intensities inside and outside contour *C*, respectively. By minimizing the above energy functional with respect to *ϕ* through steepest gradient descent [38], the following gradient descent flow is obtained:

$$\frac{\partial \phi}{\partial t} = \left( -\lambda_{1}\,(I(x) - m_{1})^{2} + \lambda_{2}\,(I(x) - m_{2})^{2} + \mu\,\mathrm{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right) - v \right) \delta_{\epsilon}(\phi),$$

(4)

where *δ*_{ϵ}(*ϕ*) is the regularized Dirac function:

$$\delta_{\epsilon}(\phi) = \frac{\epsilon}{\pi\,(\phi^{2} + \epsilon^{2})}$$

(5)
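The regularized Heaviside (Eq 3) and Dirac (Eq 5) functions above are simple to implement. This is a minimal numpy sketch; the function names are ours, not from the paper:

```python
import numpy as np

def heaviside_eps(phi, eps=1.0):
    """Regularized Heaviside function H_eps of Eq (3)."""
    return 0.5 + np.arctan(phi / eps) / np.pi

def dirac_eps(phi, eps=1.0):
    """Regularized Dirac function delta_eps of Eq (5); the derivative of H_eps."""
    return eps / (np.pi * (phi ** 2 + eps ** 2))

# Smaller eps -> H_eps approaches the ideal unit step,
# and delta_eps becomes a narrower, taller pulse around phi = 0.
phi = np.linspace(-5.0, 5.0, 101)
H_smooth = heaviside_eps(phi, eps=1.0)
H_sharp = heaviside_eps(phi, eps=0.05)
```

Note that δ_ϵ peaks at ϕ = 0 with height 1/(πϵ), which is why the data forces in the flows below act mainly in a band around the zero level set.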

In addition to specifying the smoothness of the Heaviside function (3), parameter *ϵ* also controls the width of the Dirac function. For *ϵ*→ 0, the Dirac function is the ideal unit impulse. By minimizing (Eq 2) with respect to *m*_{1} and *m*_{2} while keeping *ϕ* constant, *m*_{1} and *m*_{2} are defined as:

$$m_{1} = \frac{\int_{\Omega} I(x)\, H_{\epsilon}(\phi)\,dx}{\int_{\Omega} H_{\epsilon}(\phi)\,dx}, \qquad m_{2} = \frac{\int_{\Omega} I(x)\,(1 - H_{\epsilon}(\phi))\,dx}{\int_{\Omega} (1 - H_{\epsilon}(\phi))\,dx}$$

(6)

The data fitting term − *λ*_{1}(*I* − *m*_{1})^{2} + *λ*_{2}(*I* − *m*_{2})^{2} in (Eq 4) plays a key role in the curve evolution. Parameters *λ*_{1} and *λ*_{2} weight the first and the second term, respectively. In most cases, *λ*_{1} = *λ*_{2} and *v* = 0 when the image is smooth and the signal-to-noise ratio is low. Parameter *μ* is a scale factor. If it is low enough, small objects are likely to be extracted. Alternatively, if it is high, big objects can be detected [23]. Obviously, *m*_{1} and *m*_{2} in (Eq 6) are related to the global properties of the image contents inside and outside curve *C*, respectively. However, such a global image segmentation is not accurate if the image intensity inside and/or outside the curve is inhomogeneous.
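The global means of Eq (6) reduce to weighted averages over the whole image. A short numpy sketch on a toy two-region image (the disc example and function name are ours):

```python
import numpy as np

def global_means(I, phi, eps=1.0):
    """Global intensity means m1/m2 of Eq (6): averages of I weighted by the
    regularized Heaviside (inside C) and its complement (outside C)."""
    H = 0.5 + np.arctan(phi / eps) / np.pi
    m1 = (I * H).sum() / H.sum()
    m2 = (I * (1.0 - H)).sum() / (1.0 - H).sum()
    return m1, m2

# Toy homogeneous image: bright disc (1.0) on a dark background (0.0).
y, x = np.mgrid[0:64, 0:64]
r = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2)
I = (r < 15).astype(float)
phi = 15.0 - r                    # signed distance: positive inside the disc
m1, m2 = global_means(I, phi, eps=0.5)
```

On this homogeneous example m1 and m2 approach the two region intensities; with a bias field added to I, the same averages blur together, which is exactly the ACWE failure mode discussed above.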

Li et al. [27, 28] proposed the LBF model by embedding local image information in the energy functional. LBF is able to segment images with intensity inhomogeneity. The basic idea is to introduce a Gaussian kernel function to define the LBF energy functional as follows:

$$\begin{aligned} E_{LBF} &= \lambda_{1} \int_{\Omega} \int_{\Omega} K_{\sigma}(x - y)\,|I(y) - f_{1}(x)|^{2}\, H_{\epsilon}(\phi)\,dy\,dx \\ &\quad + \lambda_{2} \int_{\Omega} \int_{\Omega} K_{\sigma}(x - y)\,|I(y) - f_{2}(x)|^{2}\,(1 - H_{\epsilon}(\phi))\,dy\,dx, \end{aligned}$$

(7)

where (*λ*_{1}, *λ*_{2}) > 0 are fixed parameters, *I*: Ω ⊂ **R**^{2} → **R** is an input image, *K*_{σ} is a Gaussian kernel with standard deviation *σ*, and *f*_{1} and *f*_{2} are two smooth functions that approximate the local image intensities inside and outside curve *C*, respectively.

In LBF, the curve *C* ⊂ Ω can be represented by the zero level-set of a Lipschitz function *ϕ*: Ω → **R**. Minimizing the energy functional *E*_{LBF} with respect to *ϕ*, the gradient descent flow is defined as follows:

$$\frac{\partial \phi}{\partial t} = -(\lambda_{1} e_{1} - \lambda_{2} e_{2})\,\delta_{\epsilon}(\phi),$$

(8)

where *e*_{1} and *e*_{2} are defined as follows:

$$\begin{array}{ll}{e}_{1}\left(x\right)\hfill & =\underset{\Omega}{{\displaystyle \int}}{K}_{\sigma}\left(x-y\right)|I\left(y\right)-{f}_{1}\left(x\right){|}^{2}dy,\hfill \\ {e}_{2}\left(x\right)\hfill & =\underset{\Omega}{{\displaystyle \int}}{K}_{\sigma}\left(x-y\right)|I\left(y\right)-{f}_{2}\left(x\right){|}^{2}dy\hfill \end{array}$$

(9)

In (Eq 8), parameters *λ*_{1} and *λ*_{2} weight the two integrals over the regions inside and outside curve *C*, respectively, which are defined in (Eq 9). In most cases, *λ*_{1} = *λ*_{2}. Functions *f*_{1} and *f*_{2} are the local intensity means inside and outside curve *C*, which are computed in a local neighbourhood:

$$f_{1}(x) = \frac{K_{\sigma} * \left[ I(x)\, H_{\epsilon}(\phi) \right]}{K_{\sigma} * H_{\epsilon}(\phi)}, \qquad f_{2}(x) = \frac{K_{\sigma} * \left[ I(x)\,(1 - H_{\epsilon}(\phi)) \right]}{K_{\sigma} * (1 - H_{\epsilon}(\phi))}$$

(10)

Functions *f*_{1} and *f*_{2} in (Eq 10) represent weighted averages of image intensities in a Gaussian window inside and outside the curve, respectively. In that way, the LBF model can handle images with intensity inhomogeneity.
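The local means of Eq (10) amount to four Gaussian convolutions. A minimal sketch assuming scipy's `gaussian_filter` for the convolution with *K*_{σ}; the small constant guarding against empty regions is our addition, not part of the model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_means(I, phi, sigma=3.0, eps=1.0):
    """Local intensity means f1/f2 of Eq (10): Gaussian-weighted averages
    of I inside and outside the zero level set of phi."""
    H = 0.5 + np.arctan(phi / eps) / np.pi
    tiny = 1e-10                  # guards against division by zero
    f1 = gaussian_filter(I * H, sigma) / (gaussian_filter(H, sigma) + tiny)
    f2 = gaussian_filter(I * (1.0 - H), sigma) / (gaussian_filter(1.0 - H, sigma) + tiny)
    return f1, f2
```

Because each average is taken over a Gaussian window of width *σ* rather than the whole domain, f1 and f2 track the slowly varying bias field, which is what lets LBF handle inhomogeneous intensities.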

The standard deviation *σ* of the Gaussian kernel plays an important role in practical applications. It behaves as a scale parameter that controls the region-scalability from a small neighbourhood to the whole image domain [28]. It must be chosen properly according to the images: too small a *σ* may cause undesirable results, whereas too large a *σ* yields a high computational cost.

In order to guarantee a stable evolution of the level-set function, the distance regularized term defined in [21] is incorporated into (Eq 8). Moreover, the Euclidean length term is used to regularize the zero contour of *ϕ*. Finally, the total variational formulation is as follows:

$$\begin{aligned} \frac{\partial \phi}{\partial t} &= \gamma \left( \nabla^{2} \phi - \mathrm{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right) \right) \\ &\quad + \left( \mu\,\mathrm{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right) - (\lambda_{1} e_{1} - \lambda_{2} e_{2}) \right) \delta_{\epsilon}(\phi), \end{aligned}$$

(11)

where *γ* is a scaling parameter of the distance regularized energy penalization term that controls the energy leakage and *μ* is a scaling parameter of the Euclidean length of the curve.

In [30], a local image fitting energy functional is proposed in which the difference between the fitted image and the original image is minimized as follows:

$${E}_{LIF}=\frac{1}{2}\underset{\Omega}{{\displaystyle \int}}|I\left(x\right)-{I}_{LFI}\left(x\right){|}^{2}dx,$$

(12)

where *I*_{LFI} is the local fitted image defined as:

$$I_{LFI}(x) = f_{1}(x)\, M_{1}(\phi) + f_{2}(x)\, M_{2}(\phi)$$

(13)

where *M*_{1} = *H*_{ϵ}(*ϕ*), *M*_{2} = (1 − *H*_{ϵ}(*ϕ*)), and *f*_{1} and *f*_{2} are local intensity means in the given image (Eq 10). *H*_{ϵ}(*ϕ*) is the regularized version of the Heaviside function (3). Using the calculus of variations and steepest gradient descent [38], *E*_{LIF} in (Eq 12) is minimized with respect to *ϕ* to yield the corresponding gradient descent flow:

$$\frac{\partial \phi}{\partial t} = (I(x) - I_{LFI}(x))\,(f_{1}(x) - f_{2}(x))\,\delta_{\epsilon}(\phi),$$

(14)

where *δ*_{ϵ}(*ϕ*) is the regularized Dirac function (5).
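Putting Eqs (10), (13) and (14) together, one explicit LIF iteration can be sketched as follows. This assumes numpy/scipy; the step size, kernel widths, and the final Gaussian smoothing standing in for the regularization step of [30] are our illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lif_step(I, phi, dt=0.1, sigma=3.0, eps=1.0, reg_sigma=1.0):
    """One explicit gradient-descent step of the LIF flow (Eq 14)."""
    H = 0.5 + np.arctan(phi / eps) / np.pi          # M1 = H, M2 = 1 - H
    delta = eps / (np.pi * (phi ** 2 + eps ** 2))   # regularized Dirac (Eq 5)
    tiny = 1e-10
    f1 = gaussian_filter(I * H, sigma) / (gaussian_filter(H, sigma) + tiny)
    f2 = gaussian_filter(I * (1.0 - H), sigma) / (gaussian_filter(1.0 - H, sigma) + tiny)
    I_lfi = f1 * H + f2 * (1.0 - H)                 # local fitted image (Eq 13)
    phi = phi + dt * (I - I_lfi) * (f1 - f2) * delta
    # Gaussian smoothing of the level set replaces re-initialization [30]
    return gaussian_filter(phi, reg_sigma)
```

The closing smoothing keeps *ϕ* regular between iterations, which is how the LIF model avoids the costly re-initialization of classical level-set schemes.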

In [9, 10], a variational level-set method for the segmentation and bias correction of images corrupted with intensity inhomogeneity is formulated. In this method, the computed bias field is ensured to be smooth exclusively through the data term defined in the energy functional formulation. This method is based on an image model commonly used to describe images with intensity inhomogeneity:

$$I(x) = b(x)\, J(x) + n(x)$$

(15)

where *I*(*x*) is the input image with intensity inhomogeneity, *J*(*x*) is the image to be restored without intensity inhomogeneity, *b*(*x*) is the bias field, which represents the modulation of the restored image with the intensity inhomogeneity, and *n*(*x*) is noise. The model assumes that the restored image *J*(*x*) is approximately constant within each object in the image, i.e., *J*(*x*) ≈ *c*_{i} for *x* ∈ Ω_{i}, where ${\left\{{\Omega}_{i}\right\}}_{i=1}^{N}$ are the disjoint sub-regions of Ω.

In traditional active contour methods, the image domain Ω is assumed to be divided into *N* disjoint regions, Ω_{i}, *i* = 1, 2, …, *N*, based on the input image *I*(*x*). However, due to the intensity inhomogeneity caused by the bias field *b*(*x*), the measured intensities are not separable by using traditional intensity-based segmentation methods.

In [9, 10], a K-means clustering method based on the minimization of the following objective function is proposed:

$$E\cong \int \left(\sum _{i=1}^{N}\underset{{\Omega}_{i}}{\int}{K}_{\sigma}(x-y)|I\left(y\right)-b\left(x\right){c}_{i}{|}^{2}dy\right)dx,$$

(16)

where *b*(*x*) is the approximated bias field and *c*_{i} is the computed intensity mean in the presence of intensity inhomogeneity for each of the phases of the two-phase active contour method (*i* = 1, 2). Term *b*(*x*)*c*_{i} can be considered to be the approximation of the means *m*_{i} of the clusters corresponding to each of the phases of the two-phase active contour method: *m*_{i} ≈ *b*(*x*)*c*_{i}.

Directly minimizing the above energy functional with the partition ${\left\{{\Omega}_{i}\right\}}_{i=1}^{N}$ as a variable is not feasible. Therefore, multiple level-set functions are used to represent the partition ${\left\{{\Omega}_{i}\right\}}_{i=1}^{N}$. In the simplest case of *N* = 2, the image domain is partitioned into two regions {Ω_{1}, Ω_{2}}. These regions are separated by the zero-level contour of a function *ϕ*, that is, Ω_{1} = {*ϕ* > 0} and Ω_{2} = {*ϕ* < 0}. Using the Heaviside function *H*_{ϵ}, the energy *E* in (Eq 16) becomes:

$$E=\int \left(\sum _{i=1}^{N}\underset{\Omega}{\int}{K}_{\sigma}(x-y)|I\left(y\right)-b\left(x\right){c}_{i}{|}^{2}{M}_{i}\left(\varphi \right)dy\right)dx,$$

(17)

where *M*_{i} is the characteristic function of a given image based on the regularized Heaviside function (3). When the image domain is partitioned in two regions {Ω_{1}, Ω_{2}}, *M*_{i} is defined as *M*_{1} = *H*_{ϵ}(*ϕ*) and *M*_{2} = (1 − *H*_{ϵ}(*ϕ*)). By taking the Gateaux derivative (the first order functional derivative) [38] of the energy *E*, the following expressions for *b*(*x*) and *c*_{i} are obtained:

$$b\left(x\right)=\frac{{\displaystyle \sum _{i=1}^{2}}{K}_{\sigma}*\left[I\left(x\right){c}_{i}{M}_{i}\left(\varphi \right)\right]}{{\displaystyle \sum _{i=1}^{2}}{K}_{\sigma}*\left[{c}_{i}^{2}{M}_{i}\left(\varphi \right)\right]},$$

(18)

$${c}_{i}=\frac{\int {K}_{\sigma}*\left[I\left(x\right)b\left(x\right){M}_{i}\left(\varphi \right)\right]dx}{\int {K}_{\sigma}*\left[{b}^{2}\left(x\right){M}_{i}\left(\varphi \right)\right]dx},$$

(19)

For *N* = 4, the image is partitioned into four regions (Ω_{1}, Ω_{2}, Ω_{3}, Ω_{4}) using two level sets *ϕ*_{1} and *ϕ*_{2}. Then, *M*_{i} is defined as *M*_{1}(Φ) = *H*_{ϵ}(*ϕ*_{1})*H*_{ϵ}(*ϕ*_{2}), *M*_{2}(Φ) = *H*_{ϵ}(*ϕ*_{1}) (1 − *H*_{ϵ}(*ϕ*_{2})), *M*_{3}(Φ) = (1 − *H*_{ϵ}(*ϕ*_{1}))*H*_{ϵ}(*ϕ*_{2}) and *M*_{4}(Φ) = (1 − *H*_{ϵ}(*ϕ*_{1})) (1 − *H*_{ϵ}(*ϕ*_{2})).
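The four-phase membership functions above assign every pixel a unit of membership split across the four regions. A minimal numpy sketch (function name ours):

```python
import numpy as np

def four_phase_memberships(phi1, phi2, eps=1.0):
    """Characteristic functions M1..M4 built from two level sets with the
    regularized Heaviside of Eq (3); they sum to 1 at every pixel."""
    H1 = 0.5 + np.arctan(phi1 / eps) / np.pi
    H2 = 0.5 + np.arctan(phi2 / eps) / np.pi
    M1 = H1 * H2
    M2 = H1 * (1.0 - H2)
    M3 = (1.0 - H1) * H2
    M4 = (1.0 - H1) * (1.0 - H2)
    return M1, M2, M3, M4
```

The pointwise sum is identically 1 because M1 + M2 = H1 and M3 + M4 = 1 − H1, so two level sets suffice to encode a four-region partition.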

In order to segment and correct bias in an intensity inhomogeneous image, a local statistical active contour model is devised in [11, 12], in which the inhomogeneous objects are modelled as Gaussian distributions of different means and variances. The means of the Gaussian distributions are adaptively estimated by multiplying the bias field by the original signal within a Gaussian window. Let *I*: Ω ⊂ **R**^{2} → **R** be the input image and Φ a level-set representation that yields a closed curve *C* = {*x* ∈ Ω | Φ(*x*) = 0}; *b*(*x*) is the approximated bias field, and *c*_{i} and *σ*_{i} are the intensity means and standard deviations, respectively. Φ is a function of one level set *ϕ* for two-phase segmentation, and of two level sets (*ϕ*_{1}, *ϕ*_{2}) for four-phase segmentation. Based on the above assumptions, an energy functional is defined as:

$$E(\Phi) = \int \left( \sum_{i=1}^{N} \int_{\Omega} K_{\beta}(x, y) \left( \log(\sigma_{i}) + \frac{(I(y) - b(x) c_{i})^{2}}{2 \sigma_{i}^{2}} \right) M_{i}(\Phi)\,dy \right) dx, \qquad N = 2 \ \text{or} \ N = 4$$

(20)

where *K*_{β}(*x*, *y*) is a Gaussian kernel with standard deviation *β*. For *N* = 2, the energy functional in (Eq 20) acts as a two-phase active contour method (with one level set). The two characteristic terms are *M*_{1}(Φ) = *H*_{ϵ}(*ϕ*) and *M*_{2}(Φ) = 1 − *H*_{ϵ}(*ϕ*). In turn, for *N* = 4, the energy functional in (Eq 20) acts as a four-phase active contour method (with two level sets *ϕ*_{1} and *ϕ*_{2}). The four characteristic terms are defined as: *M*_{1}(Φ) = *H*_{ϵ}(*ϕ*_{1})*H*_{ϵ}(*ϕ*_{2}), *M*_{2}(Φ) = *H*_{ϵ}(*ϕ*_{1})(1 − *H*_{ϵ}(*ϕ*_{2})), *M*_{3}(Φ) = (1 − *H*_{ϵ}(*ϕ*_{1}))*H*_{ϵ}(*ϕ*_{2}) and *M*_{4}(Φ) = (1 − *H*_{ϵ}(*ϕ*_{1}))(1 − *H*_{ϵ}(*ϕ*_{2})). In (Eq 20), the bias field *b*(*x*), intensity means *c*_{i} and standard deviations *σ*_{i} are defined as:

$$b(x) = \frac{\sum_{i=1}^{N} K_{\beta} * \left( I\, M_{i}(\Phi(x)) \right) \cdot \frac{c_{i}}{\sigma_{i}^{2}}}{\sum_{i=1}^{N} K_{\beta} * M_{i}(\Phi(x)) \cdot \frac{c_{i}^{2}}{\sigma_{i}^{2}}}$$

(21)

$$c_{i} = \frac{\int (K_{\beta} * b)\, I(x)\, M_{i}(\phi(x))\,dx}{\int (K_{\beta} * b^{2})\, M_{i}(\phi(x))\,dx},$$

(22)

$$\sigma_{i} = \sqrt{ \frac{\int\!\int K_{\beta}(x, y)\,(I(x) - b(x) c_{i})^{2}\, M_{i}(\phi(x))\,dy\,dx}{\int\!\int K_{\beta}(x, y)\, M_{i}(\phi(x))\,dy\,dx} }$$

(23)

The generally accepted assumption on intensity inhomogeneity is that it manifests itself as a smooth spatially varying function that alters image intensities that would otherwise be constant for a same object regardless of its position in the image. In its simplest form, the model assumes that intensity inhomogeneity is multiplicative or additive, that is, the intensity inhomogeneity field multiplies or adds to the image intensities. Most frequently, the multiplicative model has been used, as it is consistent with the inhomogeneous sensitivity of the reception coil of magnetic resonance imaging devices. Therefore, a multiplicative model has been considered for the bias field estimation. Let *I*: Ω ⊂ **R**^{2} → **R** be the input image with intensity inhomogeneity, *J*(*x*) the restored image without intensity inhomogeneity, *b*(*x*) the bias field approximated with a Gaussian distribution, and *n*(*x*) additive noise (Eq 15).

*J*(*x*) is assumed to be composed of *N* piecewise constant image components. *I*(*x*) can thus be represented as:

$$I(x) = b(x) \sum_{i=1}^{N} c_{i}\, M_{i}(\phi(x)) + n(x)$$

(24)

where *c*_{i} are intensity means computed for the piecewise regions ${\left\{{\Omega}_{i}\right\}}_{i=1}^{N}$ and *M*_{i} is the characteristic function of each region.
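To make the multiplicative model concrete, the following sketch synthesizes an observed image under this model and recovers the piecewise constant signal by dividing out the bias field, as described in the abstract. The bias field is taken as known here purely for illustration; in the proposed method it is estimated during segmentation, and all numeric values below are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]

# Piecewise constant "true" image J: two regions with means c1 = 2, c2 = 1.
J = np.where((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2, 2.0, 1.0)

# Smooth multiplicative bias field b and additive noise n (Eq 15).
b = 0.75 + 0.5 * x / 63.0          # slow left-to-right intensity drift
n = 0.01 * rng.standard_normal(J.shape)
I = b * J + n                      # observed inhomogeneous image

# Bias correction: divide the observed image by the (estimated) bias field.
J_hat = I / b
```

Up to the noise term, the division recovers the piecewise constant image exactly, which is why an accurate bias estimate makes segmentation of the corrected image straightforward.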

In order to segment intensity inhomogeneous images, the following energy functional is defined:

$$E(\phi) = E_{LGFI}(\phi) + \mu L_{g}(\phi) + v A_{g}(\phi),$$

(25)

where *E*_{LGFI} (*ϕ*) is a term based on a local and global fitted image, which will be explained later in this section. *μ* ≥ 0 and *v* ≥ 0 are fixed parameters. *L*_{g}(*ϕ*) and *A*_{g}(*ϕ*) are length and area terms, respectively [21]:

$$L_{g}(\phi) = \int_{\Omega} g(I)\,\delta_{\epsilon}(\phi)\,|\nabla \phi|\,dx,$$

(26)

$$A_{g}(\phi) = \int_{\Omega} g(I)\, H_{\epsilon}(-\phi)\,dx,$$

(27)

The energy functional *A*_{g}(*ϕ*) is introduced to speed up the curve evolution. It can be viewed as the weighted area of the region ${\Omega}_{\phi}^{-} = \{(x, y)\,|\,\phi(x, y) < 0\}$ [21]. In (Eq 27), *g*(*I*) is a positive, monotonically decreasing edge indicator function ranging in [0, 1]:

$$g(I) = \frac{1}{1 + |\nabla K_{\sigma} * I|^{2}}$$

(28)
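A sketch of the edge indicator of Eq (28), assuming scipy's `gaussian_filter` for the smoothing *K*_{σ} ∗ *I* and finite differences for the gradient:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(I, sigma=1.5):
    """Edge indicator g(I) of Eq (28): near 1 in flat regions and near 0
    where the Gaussian-smoothed image has strong gradients."""
    Is = gaussian_filter(I, sigma)
    gy, gx = np.gradient(Is)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)
```

Because g ≈ 0 on strong edges, it shuts down the length and area forces there, letting the contour settle on object boundaries.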

*E*_{LGFI} is defined according to the following reformulation of (Eq 12):

$${E}_{LGFI}=\underset{\Omega}{\int}\left(I\left(x\right)-{I}_{bLFI}\left(x\right)\right)\left(I\left(x\right)-{I}_{GFI}\left(x\right)\right)dx,$$

(29)

In (Eq 29), let *I*_{bLFI}(*x*) be a two-phase bias local fitted image and *I*_{GFI}(*x*) a global fitted image, using a level set *ϕ*, which are defined as:

$$I_{bLFI}(x) = b(x)\, c_{1}\, M_{1}(\phi) + b(x)\, c_{2}\, M_{2}(\phi)$$

(30)

$$I_{GFI}(x) = m_{1}\, M_{1}(\phi) + m_{2}\, M_{2}(\phi)$$

(31)

where *c*_{1} and *c*_{2} are local intensity means and *m*_{1} and *m*_{2} are global intensity means of the given image, as defined in Eqs (19) and (6), respectively. *M*_{1} = *H*_{ϵ}(*ϕ*) and *M*_{2} = (1 − *H*_{ϵ}(*ϕ*)), where *H*_{ϵ}(*ϕ*) is the regularized Heaviside function (3). Models based only on global intensity means cannot solve intensity inhomogeneity segmentation problems. In turn, models based only on local intensity means have a very high time complexity. By using both local and global fitted images, the proposed method is able to tackle the intensity inhomogeneity problem with a reduced time complexity.

As discussed earlier, an energy functional only based on a global fitted image cannot segment images with intensity inhomogeneity since global intensity means are computed under the assumption that the input image is homogeneous. Therefore, the bias field *b*(*x*) from (Eq 18) is only introduced in the local fitted image difference.

By using calculus of variations and steepest gradient descent [38], *E*_{LGFI} in (Eq 29) is minimized with respect to *ϕ*, leading to the corresponding gradient descent flow (refer to the appendix for a detailed derivation):

$$\begin{array}{ll}\frac{\partial \varphi}{\partial t}\hfill & =(\left(I\left(x\right)-{I}_{bLFI}\left(x\right)\right)\left({m}_{1}-{m}_{2}\right)\hfill \\ \hfill & +b\left(x\right)\left(I\left(x\right)-{I}_{GFI}\left(x\right)\right)\left({c}_{1}-{c}_{2}\right)){\delta}_{\u03f5}\left(\varphi \right),\hfill \end{array}$$

(32)

where *b*(*x*) is the bias field defined in (Eq 18), and {*m*_{1}, *m*_{2}} and {*c*_{1}, *c*_{2}} are the global and local intensity means defined in Eqs (6) and (19), respectively. The above gradient descent flow is not stable around object boundaries. Moreover, it does not yield a proper segmentation when the boundaries between inhomogeneous objects and the background are indistinguishable. The terms (*I* − *I*_{bLFI}) (*m*_{1} − *m*_{2}) and (*I* − *I*_{GFI}) (*c*_{1} − *c*_{2}) can take arbitrarily large values, resulting in an unstable contour. Therefore, (*I* − *I*_{bLFI}) and (*I* − *I*_{GFI}) are replaced by local and global signed pressure force (SPF) functions, which normalize these differences to [-1, 1], yielding a smooth version of the gradient descent flow:

$$\frac{\partial \varphi}{\partial t}=\left({\lambda}_{1}{L}_{SPF}({m}_{1}-{m}_{2})+{\lambda}_{2}b{G}_{SPF}({c}_{1}-{c}_{2})\right){\delta}_{\u03f5}\left(\varphi \right),$$

(33)

where the proposed local and global SPF functions are defined as:

$$\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}{L}_{SPF}\left(I\right)=\left\{\begin{array}{cc}\frac{I\left(x\right)-{I}_{bLFI}\left(x\right)}{\phantom{\rule{0.166667em}{0ex}}\text{max}\left(|I\left(x\right)-{I}_{bLFI}\left(x\right)|\right)},\hfill & \phantom{\rule{4pt}{0ex}}I\left(x\right)\ne 0\hfill \\ 0,\hfill & \phantom{\rule{4pt}{0ex}}I\left(x\right)=0\hfill \end{array}\right.$$

(34)

$$\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}{G}_{SPF}\left(I\right)=\left\{\begin{array}{cc}\frac{I\left(x\right)-{I}_{GFI}\left(x\right)}{\phantom{\rule{0.166667em}{0ex}}\text{max}\left(|I\left(x\right)-{I}_{GFI}\left(x\right)|\right)},\hfill & \phantom{\rule{4pt}{0ex}}I\left(x\right)\ne 0\hfill \\ 0,\hfill & \phantom{\rule{4pt}{0ex}}I\left(x\right)=0\hfill \end{array}\right.$$

(35)
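The normalization of Eqs (34) and (35) can be sketched with a single helper, since both SPF functions share the same form and differ only in the fitted image passed in (here the degenerate case is guarded on the denominator rather than on *I*(*x*), a simplification):

```python
import numpy as np

def spf(image, fitted):
    """Signed pressure force: the fitted difference normalized to [-1, 1]
    by its maximum absolute value (Eqs 34 and 35)."""
    diff = image - fitted
    denom = np.max(np.abs(diff))
    if denom == 0:              # identical images: no pressure anywhere
        return np.zeros_like(diff)
    return diff / denom
```

Passing *I*_{bLFI} gives the local SPF and passing *I*_{GFI} gives the global SPF.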

By using the calculus of variations and steepest gradient descent, the solution of *E*_{g,LGFI} from (Eq 25) using Eqs (26) and (27) is:

$$\begin{array}{ll}\frac{\partial \varphi}{\partial t}\hfill & =({\lambda}_{1}{L}_{SPF}\left({m}_{1}-{m}_{2}\right)+{\lambda}_{2}b{G}_{SPF}\left({c}_{1}-{c}_{2}\right)\hfill \\ \hfill & +\mu \text{\hspace{0.17em}div}\left(g\frac{\nabla \varphi}{\left|\nabla \varphi \right|}\right)+vg){\delta}_{\u03f5}\left(\varphi \right)\hfill \end{array}$$

(36)

The two scaling parameters *λ*_{1} and *λ*_{2} in Eqs (33) and (36) are used to tune the model to different types of images.

In (Eq 29), let *I*_{bLFI}(*x*) be a four-phase bias local fitted image and *I*_{GFI}(*x*) a global fitted image, using two level sets Φ(*ϕ*_{1}, *ϕ*_{2}), which are defined as:

$${I}_{bLFI}\left(x\right)=b\left(x\right)\left({c}_{1}{M}_{1}\left(\Phi \right)+{c}_{2}{M}_{2}\left(\Phi \right)+{c}_{3}{M}_{3}\left(\Phi \right)+{c}_{4}{M}_{4}\left(\Phi \right)\right),$$

(37)

$${I}_{GFI}\left(x\right)={m}_{1}{M}_{1}\left(\Phi \right)+{m}_{2}{M}_{2}\left(\Phi \right)+{m}_{3}{M}_{3}\left(\Phi \right)+{m}_{4}{M}_{4}\left(\Phi \right),$$

(38)

where *M*_{1}(Φ) = *H*_{ϵ}(*ϕ*_{1})*H*_{ϵ}(*ϕ*_{2}), *M*_{2}(Φ) = *H*_{ϵ}(*ϕ*_{1}) (1 − *H*_{ϵ}(*ϕ*_{2})), *M*_{3}(Φ) = (1 − *H*_{ϵ}(*ϕ*_{1}))*H*_{ϵ}(*ϕ*_{2}) and *M*_{4}(Φ) = (1 − *H*_{ϵ}(*ϕ*_{1})) (1 − *H*_{ϵ}(*ϕ*_{2})), which are the characteristic terms that partition a given image into four segments. *b*(*x*) is the bias term defined for the four-phase energy functional. *c*_{1}, *c*_{2}, *c*_{3} and *c*_{4} are local intensity means from VLSBCS [9, 10] and *m*_{1}, *m*_{2}, *m*_{3} and *m*_{4} are global intensity means from the multiphase level set framework (MLSF) [24], which is a multiphase extension of the two-phase energy functional defined by Chan-Vese [23].
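The four characteristic terms can be sketched as follows (assuming the arctan-regularized Heaviside; by construction they form a partition of unity, *M*_{1} + *M*_{2} + *M*_{3} + *M*_{4} = 1):

```python
import numpy as np

def heaviside(phi, eps=1.5):
    """Regularized Heaviside H_eps (arctan form)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def memberships(phi1, phi2, eps=1.5):
    """Characteristic terms M1..M4 that split the image into four
    segments from the signs of two level sets."""
    H1, H2 = heaviside(phi1, eps), heaviside(phi2, eps)
    M1 = H1 * H2
    M2 = H1 * (1.0 - H2)
    M3 = (1.0 - H1) * H2
    M4 = (1.0 - H1) * (1.0 - H2)
    return M1, M2, M3, M4
```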

By substituting the four-phase fitted equations in (Eq 29) and applying steepest gradient descent [38], the following solutions are obtained for *ϕ*_{1} and *ϕ*_{2} (for a detailed derivation see appendix B):

$$\begin{array}{ll}\frac{\partial {\varphi}_{1}}{\partial t}\hfill & =[b\left(I-{I}_{GFI}\right)\left(\left({c}_{1}-{c}_{3}\right){H}_{\u03f5}\left({\varphi}_{2}\right)+\left({c}_{2}-{c}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{2}\right)\right)\right)\hfill \\ \hfill & +\left(I-{I}_{bLFI}\right)\left(\left({m}_{1}-{m}_{3}\right){H}_{\u03f5}\left({\varphi}_{2}\right)+\left({m}_{2}-{m}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{2}\right)\right)\right)]{\delta}_{\u03f5}\left({\varphi}_{1}\right)\hfill \end{array}$$

(39)

$$\begin{array}{ll}\frac{\partial {\varphi}_{2}}{\partial t}\hfill & =[b\left(I-{I}_{GFI}\right)\left(\left({c}_{1}-{c}_{2}\right){H}_{\u03f5}\left({\varphi}_{1}\right)+\left({c}_{3}-{c}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{1}\right)\right)\right)\hfill \\ \hfill & +\left(I-{I}_{bLFI}\right)\left(\left({m}_{1}-{m}_{2}\right){H}_{\u03f5}\left({\varphi}_{1}\right)+\left({m}_{3}-{m}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{1}\right)\right)\right)]{\delta}_{\u03f5}\left({\varphi}_{2}\right)\hfill \end{array}$$

(40)

Four-phase local and global SPF functions can be obtained by substituting *I*_{bLFI} and *I*_{GFI} from Eqs (37) and (38) in Eqs (34) and (35), respectively. By replacing the four-phase local and global fitted differences with their respective SPF functions, Eqs (39) and (40) are updated as follows:

$$\begin{array}{ll}\frac{\partial {\varphi}_{1}}{\partial t}\hfill & =[{\lambda}_{1}b{G}_{SPF}\left(\left({c}_{1}-{c}_{3}\right){H}_{\u03f5}\left({\varphi}_{2}\right)+\left({c}_{2}-{c}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{2}\right)\right)\right)\hfill \\ \hfill & +{\lambda}_{2}{L}_{SPF}\left(\left({m}_{1}-{m}_{3}\right){H}_{\u03f5}\left({\varphi}_{2}\right)+\left({m}_{2}-{m}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{2}\right)\right)\right)]{\delta}_{\u03f5}\left({\varphi}_{1}\right)\hfill \end{array}$$

(41)

$$\begin{array}{ll}\frac{\partial {\varphi}_{2}}{\partial t}\hfill & =[{\lambda}_{1}b{G}_{SPF}\left(\left({c}_{1}-{c}_{2}\right){H}_{\u03f5}\left({\varphi}_{1}\right)+\left({c}_{3}-{c}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{1}\right)\right)\right)\hfill \\ \hfill & +{\lambda}_{2}{L}_{SPF}\left(\left({m}_{1}-{m}_{2}\right){H}_{\u03f5}\left({\varphi}_{1}\right)+\left({m}_{3}-{m}_{4}\right)\left(1-{H}_{\u03f5}\left({\varphi}_{1}\right)\right)\right)]{\delta}_{\u03f5}\left({\varphi}_{2}\right)\hfill \end{array}$$

(42)

The SPF functions defined in Eqs (34) and (35) are used to normalize the local and global image differences in the range [-1, 1] inside and outside the region of interest. SPF functions have been formulated in numerous ways (e.g., [25, 26, 31, 36]), some of them incorporating global intensity means and others using local intensity means. The new SPF functions proposed in this work are based on both global and local intensity-based fitted images. Fig 2 shows the signs of an SPF function based on a global fitted image inside and outside the region of interest. In turn, Fig 3 shows the sign of an SPF function based on a local fitted image inside and outside the region of interest.

Since the local SPF function uses local intensity means, it is able to distinguish small changes at the boundaries of intensity inhomogeneous objects. Since the local intensity means are computed in a local neighbourhood, the positive and negative values are distributed close to the inner and outer boundaries of the image contours. In the rest of the image, the SPF function becomes zero. In Fig 3, white shows negative values of the local SPF function, black shows positive values of the local SPF function, and blue represents the region where the local SPF function becomes zero.

In contrast, the global SPF function is unable to segment intensity inhomogeneous regions, since global intensity means are computed all over the image. It has positive and negative values inside and outside the region of interest, and is zero at its boundary, as shown in Fig 2.

In level-set methods, it is essential to initialize the level-set function *ϕ* as a signed distance function (SDF) *ϕ*_{0}. If the initial level-set function is significantly different from the SDF, re-initialization schemes are unable to re-initialize the function to the SDF. In the proposed formulation, not only is the re-initialization procedure completely eliminated, but the level-set function *ϕ* no longer needs to be initialized as an SDF. The initial level set function *ϕ*_{0} is defined as:

$$\phantom{\rule{4pt}{0ex}}\varphi (x,t=0)=\left\{\begin{array}{cc}-\rho ,\hfill & \phantom{\rule{4pt}{0ex}}x\in {\Omega}_{0}-\partial {\Omega}_{0}\hfill \\ 0,\hfill & \phantom{\rule{4pt}{0ex}}x\in \partial {\Omega}_{0}\hfill \\ \rho ,\hfill & \phantom{\rule{4pt}{0ex}}x\in \Omega -{\Omega}_{0}\hfill \end{array}\right.$$

(43)

where *ρ* is a positive constant, Ω_{0} is the region enclosed by the initial contour, Ω is the image domain and ∂Ω_{0} is the initial contour itself. The stages of the proposed method can be summarized as:

- Solve the partial differential equation (PDE) of *ϕ* using (Eq 36).
- Regularize the level-set function *ϕ* at time *t* by applying a Gaussian kernel *G*_{χ}, i.e. *ϕ* = *G*_{χ} * *ϕ*, where *χ* is the standard deviation of the regularizing Gaussian kernel.
- Check whether the regularized level-set function is stationary. If not, iterate from the first step.

The steps described above correspond to the two-phase segmentation algorithm. However, for the four-phase algorithm, these steps are replaced with the variables corresponding to two level sets using their respective definitions and solutions.
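The steps above can be sketched as a minimal two-phase evolution loop (illustrative only: the bias field is fixed at 1, the length and area terms of (Eq 36) are omitted, only global means drive the SPF, and all parameter values are placeholders rather than the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heaviside(phi, eps=1.5):
    """Regularized Heaviside H_eps (arctan form)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def delta(phi, eps=1.5):
    """Regularized Dirac delta, the derivative of the Heaviside above."""
    return eps / (np.pi * (eps ** 2 + phi ** 2))

def spf(diff):
    """Normalize a fitted difference to [-1, 1] (Eqs 34 and 35)."""
    m = np.max(np.abs(diff))
    return diff / m if m > 0 else np.zeros_like(diff)

def evolve(image, n_iter=200, rho=1.0, dt=1.0, chi=1.0, eps=1.5):
    """Binary initialization (Eq 43), an SPF-driven update in the spirit
    of (Eq 33) reduced to its global part, and Gaussian regularization
    of phi at every step instead of re-initialization."""
    h, w = image.shape
    phi = np.full((h, w), rho)                        # +rho outside
    phi[h // 4:3 * h // 4, w // 4:3 * w // 4] = -rho  # -rho inside
    for _ in range(n_iter):
        H = heaviside(phi, eps)
        # global intensity means of the two phases
        m1 = (image * H).sum() / (H.sum() + 1e-8)
        m2 = (image * (1.0 - H)).sum() / ((1.0 - H).sum() + 1e-8)
        fitted = m1 * H + m2 * (1.0 - H)
        phi += dt * spf(image - fitted) * (m1 - m2) * delta(phi, eps)
        phi = gaussian_filter(phi, chi)   # smooth; no re-initialization
    return phi
```

The Gaussian step both smooths the contour and keeps *ϕ* well behaved, which is what removes the need for re-initialization.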

The proposed method was implemented in MATLAB and run on a 3.4 GHz Intel Core i7 with 16 GB of RAM. It was tested on both synthetic images and real brain magnetic resonance (MR) images of 250 × 250 pixels with 256 grey levels (8 bpp). The parameters used for all experiments in this section are shown in Table 1.

Fig 4 shows the result of the proposed segmentation method and the comparison with the different state-of-the-art methods that have been tested. In this experiment, we used an image with a single homogeneous object and then progressively changed its intensity distribution to a point at which it is even difficult to manually segment it, thus making the object inhomogeneous.

The first column shows the five input images with the initial contour, whereas the segmentation results are shown using LIF [30] in the second column, LBF [27, 28] in the third, LGD [29] in the fourth, VLSBCS [9, 10] in the fifth, LSACM [11, 12] in the sixth and the proposed method in the last column. Visual inspection clearly shows that both the proposed method and LBF provide the best segmentation results. LIF also yields acceptable segmentation results, although its final contour is not quite smooth along the object boundaries. The segmentation results of both LSACM and LGD show that they are not able to strictly find the object boundary. Moreover, the contour in LGD also gets stuck in the background, which is undesirable. VLSBCS properly segmented the first two images, but its segmentation results are not acceptable for the last three. In LGD, *σ* was set to 10 for all experiments in Fig 4, since with a small value of *σ* this method is unable to segment the objects in all images. For all the examples in Fig 4, the parameters of all methods were kept constant; the only change is the initial position of the level set.

Fig 5 shows the experiments conducted with synthetic images with different types of region properties. The first column shows the input images with their respective initial contours. In the first row, both the object and the background are homogeneous. In the second row, the background is homogeneous while the object is inhomogeneous. In the third row, the background is inhomogeneous while the object is homogeneous. In the fourth row, both the background and the object are inhomogeneous. In the last row, the background is homogeneous and there are different inhomogeneous regions with one of the regions having an extra inhomogeneous region within it.

The second column shows the segmentation results with LIF [30]. Notice that it is not able to accurately segment the last image. Although this method is able to segment the objects in the first four images, the results are not acceptable because the contour is not quite smooth along the boundaries of the objects. The segmentation results with LBF [27, 28] are shown in the third column. This method is only able to properly segment the first image, while the segmentation results are not acceptable for the other images. The fourth column shows the segmentation results using LGD [29]. This method is able to properly segment the first image. As for the second image, the contour is not quite smooth along the boundary of the object. Although it is able to segment the objects in the third and fourth images, some undesirable contour gets stuck in the background. Furthermore, this method is not able to segment the nested inhomogeneous object in the fifth image.

The fifth column shows the segmentation results using VLSBCS [9, 10]. This method is able to properly segment the first, second and fourth images. It can segment the object in the third image; however, some undesirable contour gets stuck in the background. Moreover, this method is not able to properly segment the fifth image. The sixth column shows the results with LSACM [11, 12]. This method is able to properly segment the third and fourth images. Although it is able to segment the objects in the first and second images, the final contour is not smooth and does not properly follow the object boundaries. Moreover, this method is unable to segment the nested inhomogeneous object in the fifth image. The last column shows the segmentation results using the proposed method, which is able to properly segment all images.

Table 2 shows a time complexity analysis in terms of CPU time and iterations. The proposed method yields the lowest time complexity for the examples shown in rows 1, 2, 4 and 5. It took 1.58 and 2.19 seconds for the examples shown in the first two rows, respectively, and 5.09 and 5.78 seconds for the examples shown in rows 4 and 5. On the other hand, VLSBCS yields the lowest time complexity for the example shown in row 3: it took 2.27 seconds while the proposed method took 4.17 seconds to obtain the final contour. Although VLSBCS yields the lowest time complexity for this example, it is unable to properly segment the object, as shown in Fig 5.

In Fig 6, the segmentation and bias correction results using the proposed method (third row) are compared with the ones computed with VLSBCS [9, 10] (first row) and LSACM [11, 12] (second row) using real brain MR images as a practical application. The first column shows the input image with the initial contour. The second column shows the final contours for each method. The third column shows the estimated bias field. Finally, the last column shows the bias-corrected image. The proposed method can segment more detailed regions than the other methods, as highlighted by the green arrows in the second column. Moreover, the bias-corrected image using the proposed method has more details than the ones computed with the other methods, as highlighted by the red circles in the fourth column. The corrected image obtained using the proposed method also has more details in the nose area.

In Fig 7, the segmentation results and bias correction using the proposed method (third row) are compared with the ones computed with VLSBCS [9, 10] (first row) and LSACM [11, 12] (second row) using real brain MR images. The first column shows the input image with the initial contour. The second column shows the final contours. The third column shows the computed bias field. Finally, the last column shows the bias-corrected image. The results show that the proposed method can segment more detailed regions than the other methods, as highlighted by the green arrows in the second column. Moreover, the bias-corrected image using the proposed method has more details than the ones computed with the other methods, as highlighted by the red circles in the fourth column.

Fig 8 shows the effect of the position of the initial level set on the segmentation for all the tested methods. The first column shows the input image with different initial level sets, whereas the segmentation results with LIF [30] are shown in the second column, with LBF [27, 28] in the third column, with LGD [29] in the fourth column, with VLSBCS [9, 10] in the fifth column, with LSACM [11, 12] in the sixth column and with the proposed method in the last column. These results show that the proposed method is not affected by the position of the initial level set and yields the segmentation with the best accuracy. In contrast, the segmentation results of all the tested state-of-the-art methods are affected by the position of the initial level set.

Table 3 shows the number of iterations and CPU time for all the tested methods when applied to the examples in Fig 8. The proposed method yields the best performance in the examples shown in rows 2 and 4. It took 2.48 and 1.42 seconds to obtain the final contours for rows 2 and 4, respectively, which is significantly lower than the time required by the other methods. In turn, LGD has the best performance for the example in row 1: LGD took 3.3 seconds while the proposed method took 5.15 seconds to obtain the final contour. Finally, LIF yields the best performance for the example shown in row 3: LIF took 3.86 seconds while the proposed method took 16.06 seconds to obtain the final contour.

Fig 9 shows the segmentation result on a synthetic image containing three different regions with intensity inhomogeneity, both without noise and after applying additive Gaussian noise. Fig 9(b) and 9(d) show that the proposed method is able to properly segment intensity inhomogeneous objects without and with noise, respectively. Fig 9(e) shows the central-row intensity profile of the input synthetic image for both the clean and noisy data along with the final contour; the resulting contour follows the object boundaries accurately.

Fig 10 shows some segmentation results by applying the proposed method to different intensity inhomogeneous noisy images. Although noise affected the crispness of edges in the input data, the proposed method is able to yield acceptable segmentation results.

The segmentation of brain MR images into disjoint regions based on white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) is a well-known problem in brain image analysis. Due to the geometric complexity of the human brain cortex, manual slice-by-slice segmentation is cumbersome and time consuming [1]. Thus, numerous methods have been devised to solve such problems, and active contours are quite popular in this context. Because of their complex intensity inhomogeneous regions, brain MR images are difficult to segment with high accuracy [39]. Therefore, they constitute a good test bench for the proposed method in a practical application and for comparing it with other state-of-the-art methods.

This section shows segmentation results for all tested two-phase active contour methods using 2D brain MR images from a public database of 20 brain anatomical models [37]. All images have 250 × 250 pixels and 8 bits per pixel.

Active contour methods behave differently for different types of images. Because the images used in this section have different characteristics compared to the ones used in results and comparison section, the parameters had to be tuned in order to obtain the best possible segmentation results. The parameters used for all experiments in this section are shown in Table 4.

In order to partition a brain MR image into WM and GM regions, the segmentation result is split into two regions based on two phases: *ϕ*> 0 and *ϕ*< 0. The WM and GM regions represent the brain region, which is the region of interest, while the regions outside the brain (e.g., skull, fat and vacuum) can be taken as irrelevant regions. Therefore, we manually extracted the brain area to segment the WM and GM regions, removing the other irrelevant regions out of the brain. Fig 11 shows the WM and GM regions computed from the two phases of the proposed method and the comparison with the ground truth (GT). In the first row of Fig 11, the first image shows the input image with the initial contour. The second image shows the final contour using the proposed method. The third image shows a manually defined brain mask. The main purpose of the brain mask is to extract the brain region in order to carefully analyse and compare the segmented WM and GM regions with their respective ground truths. The last image in the first row shows the final contour after scaling with the brain mask. Let *ϕ*(*x*, *y*) be the final computed contour shown in the second column of the first row, and *m*(*x*, *y*) the manually defined brain mask shown in the third column of the first row. The scaled final contour *ξ*(*x*, *y*), which is shown in the last column of the first row, is computed as *ξ*(*x*, *y*) = *ϕ*(*x*, *y*)*m*(*x*, *y*). In the second row of Fig 11, the first image shows the WM region computed from the positive phase (*ξ*> 0) of the final contour scaled with the brain mask. The second image shows the GM region computed using the negative phase (*ξ*< 0) of the scaled final contour. In the second row, the third and fourth images show the ground truths of the WM and GM regions, respectively.

Fig 12 shows the accuracy analysis of the region of interest in the brain MR images. A total of 100 2D slices from 20 brain anatomical models [37] were used. Five 2D slices from every patient were considered. The WM and GM regions for all methods were computed as depicted in Fig 11. The segmentation accuracy corresponding to the WM and GM regions presented in Fig 12 was obtained as:

$$\phantom{\rule{0.166667em}{0ex}}\text{Accuracy}=\frac{|A\cap B|}{|A\cup B|}\times 100,$$

(44)

where *A* is the computed WM or GM region after brain scaling and *B* is the ground truth for that region. Different methods can behave differently depending on the type of image. Therefore, five slices from each patient were used and the average accuracies over those slices were computed to obtain a representative value for each case. Fig 12 and Table 5 show that the proposed method yields the best segmentation accuracy in most cases for both the WM and GM regions.
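The accuracy measure of (Eq 44) is a Jaccard index expressed as a percentage; a minimal sketch on boolean region masks:

```python
import numpy as np

def accuracy(computed, truth):
    """|A intersect B| / |A union B| x 100 on boolean region masks (Eq 44)."""
    a, b = computed.astype(bool), truth.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 100.0  # both regions empty: treat as perfect agreement
    return 100.0 * np.logical_and(a, b).sum() / union
```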

A two-phase method is limited to segmenting images into two regions, which is a major weakness in its application to brain MR image segmentation. In the last subsection, all of the tested two-phase methods were evaluated for WM and GM region segmentation. However, the segmented GM region was a combination of the GM and CSF regions, resulting in a high number of false positives. In this subsection, the four-phase model of the proposed active contour method is evaluated along with state-of-the-art four-phase active contour methods with respect to the segmentation accuracy of the WM, GM and CSF regions. All methods are tested using the same brain MR image database mentioned in the previous subsection with the same image size and intensity. The parameters used for the proposed method are: *λ*_{1} = 2, *λ*_{2} = 2, *μ* = 5, *σ* = 3, *χ* = 0.45, *ρ* = 1, *ϵ* = 1.5 and Δ*t* = 1.

In brain MR images, the WM, GM and CSF regions represent the brain, which is the region of interest. The regions outside the brain (e.g., skull, fat and vacuum) can be assumed to be irrelevant regions. In order to remove those unwanted regions, it is necessary to strip the skull from the brain MR image using a manually drawn brain mask. A brain mask is used as shown in Fig 11, here applied to the four-phase method. Let *ϕ*_{1}(*x*, *y*) and *ϕ*_{2}(*x*, *y*) be the final computed contours for the two level sets, *m*(*x*, *y*) be the manually defined brain mask, and *ξ*_{1}(*x*, *y*) and *ξ*_{2}(*x*, *y*) be the final contours scaled with the brain mask, computed as *ξ*_{1}(*x*, *y*) = *ϕ*_{1}(*x*, *y*)*m*(*x*, *y*) and *ξ*_{2}(*x*, *y*) = *ϕ*_{2}(*x*, *y*)*m*(*x*, *y*). Using the proposed method, the four regions are computed as follows: *R*_{1} = (*ξ*_{1}> 0) ∩ (*ξ*_{2}> 0), *R*_{2} = (*ξ*_{1}> 0) ∩ (*ξ*_{2}< 0), *R*_{3} = (*ξ*_{1}< 0) ∩ (*ξ*_{2}> 0) and *R*_{4} = (*ξ*_{1}< 0) ∩ (*ξ*_{2}< 0). In brain MR images, the *R*_{1}, *R*_{2} and *R*_{3} regions usually refer to the WM, GM and CSF regions, respectively, and *R*_{4} is an empty region, which is discarded.
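The region extraction described above can be sketched as a small helper operating on the mask-scaled contours *ξ*_{1} and *ξ*_{2} (hypothetical names):

```python
import numpy as np

def four_regions(xi1, xi2):
    """Split the two masked level sets into the four sign-combination
    regions; R1, R2 and R3 usually correspond to WM, GM and CSF, while
    R4 is discarded as empty."""
    R1 = (xi1 > 0) & (xi2 > 0)
    R2 = (xi1 > 0) & (xi2 < 0)
    R3 = (xi1 < 0) & (xi2 > 0)
    R4 = (xi1 < 0) & (xi2 < 0)
    return R1, R2, R3, R4
```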

Fig 13 shows the accuracy analysis of the regions of interest (i.e., the WM, GM and CSF regions) in the brain MR images. A total of 100 2D slices from 20 brain anatomical models [37] were used. Five 2D slices (namely 150, 175, 200, 225 and 250) from every patient were considered. The WM, GM and CSF regions are segmented using MLSF, VLSBCS, LSACM and the proposed method. The accuracies of the WM, GM and CSF regions for these four-phase active contour methods are computed using (44), where *A* is the computed WM, GM or CSF region and *B* is the respective ground truth.

Fig 13 and Table 6 show that the proposed method yields the best segmentation results for the GM and CSF regions. In turn, the MLSF method yields the best segmentation results for the WM region with an accuracy of 92.52%. The proposed method yields an accuracy of 91.02% for the WM region, which is 1.5% lower than that of the MLSF method. The LSACM and VLSBCS methods yield unsatisfactory segmentation results compared to both the proposed and the MLSF methods. Table 6 also shows the CPU time comparison between the evaluated methods: MLSF is the quickest among the compared four-phase active contour methods, taking an average of 15.12 seconds to reach the final segmentation result. In turn, the proposed method took an average of 19.01 seconds to fully converge, approximately 4 seconds more than the MLSF method.

In this section, the segmentation results of the proposed method are compared with three alternative open-source brain MR image segmentation software packages: SPM [40, 41], LSF [42, 43] and BrainSuite [44, 45], using the BrainWeb database. In order to compare the segmentation results, the brain region was extracted by stripping the skull using a brain mask. Fig 14 shows a skull-stripped brain region and the visual comparison of the segmented regions with their respective ground truths. It shows that all the compared methods yield similar segmentation results from a qualitative point of view.

Table 7 shows the segmentation accuracy of the evaluated methods using two similarity metrics. Both the mean and standard deviation of the evaluated metrics are considered for all three brain regions, i.e., WM, GM and CSF. These similarity metrics are the Jaccard index [46] and the Dice coefficient [47], which are frequently used when the ground truth of the regions of interest is available. These similarity metrics are defined as:

$$\begin{array}{ll}J\left(S,G\right)\hfill & =\frac{\left|S\cap G\right|}{\left|S\cup G\right|},\hfill \\ D\left(S,G\right)\hfill & =\frac{2\left|S\cap G\right|}{\left|S\right|+\left|G\right|}\hfill \end{array}$$

(45)

where *S* is the segmented region and *G* its respective ground truth.
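Both similarity metrics of (Eq 45) can be sketched on boolean masks as follows; note that the two are related by *D* = 2*J*/(1 + *J*):

```python
import numpy as np

def jaccard(S, G):
    """J(S, G) = |S intersect G| / |S union G| on boolean masks."""
    S, G = S.astype(bool), G.astype(bool)
    return np.logical_and(S, G).sum() / np.logical_or(S, G).sum()

def dice(S, G):
    """D(S, G) = 2|S intersect G| / (|S| + |G|) on boolean masks."""
    S, G = S.astype(bool), G.astype(bool)
    return 2.0 * np.logical_and(S, G).sum() / (S.sum() + G.sum())
```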

Table 7 shows that the proposed method yields the best segmentation accuracy with both similarity measures for all three brain regions, i.e., WM, GM and CSF, with one exception for the GM region: the Dice coefficient computed using SPM is 0.95, the same as that of the proposed method, and SPM yields a Jaccard index of 0.91, which is 0.01 higher than the Jaccard index computed by the proposed method.

This paper proposes a new region-based active contour method for image segmentation and bias correction using an energy functional based on both local and global fitted images. In order to minimize that energy functional, the square image fitted difference is formulated using both local and global fitted differences. In the gradient descent solution, the local and global image differences are replaced by local and global signed pressure force (SPF) functions. Finally, a Gaussian kernel is applied to regularize the curve at each step and avoid the computationally expensive re-initialization.

The main contribution of this paper is the formulation of a new energy functional from the LIF method [30] and the modification of the resulting gradient descent solution with new SPF functions based on local and global fitted images to make the solution more stable. Qualitative and quantitative analyses show that the proposed method yields significantly better segmentation results and bias correction of intensity inhomogeneous regions than alternative state-of-the-art methods. MR images have been used as a practical test bench.

One of the drawbacks of a local fitted region-based active contour method is its high time complexity. In future work, we aim to formulate a new region-based active contour method by modifying the proposed energy functional in a way to reduce time complexity. In order to do that, we plan to integrate a phase-shift approach similar to the one proposed in [48].

**A. Derivation of the gradient descent flow of the two-phase model** In (Eq 25), the variation *η* is added to the level-set function *ϕ* such that $\tilde{\varphi}=\varphi +\u03f5\eta $. Keeping *c*_{1}, *c*_{2}, *m*_{1}, *m*_{2} and *b*(*x*) fixed, differentiating with respect to *ϵ* and letting *ϵ* → 0, we have:

$$\text{\hspace{0.17em}}\begin{array}{ll}\frac{\partial {E}_{LGFI}}{\partial \phi}\hfill & =\underset{\u03f5\to 0}{\text{lim}}\frac{d}{d\u03f5}(\frac{1}{2}{\displaystyle \underset{\Omega}{\int}(}I-b({c}_{1}{H}_{\u03f5}(\tilde{\varphi})+{c}_{2}(1-{H}_{\u03f5}(\tilde{\varphi}))))\hfill \\ \hfill & (I-({m}_{1}{H}_{\u03f5}(\tilde{\varphi})+{m}_{2}(1-{H}_{\u03f5}(\tilde{\varphi}))))dx)\hfill \\ \hfill & =\frac{1}{2}\underset{\u03f5\to 0}{\text{lim}}(-{\displaystyle \underset{\Omega}{\int}[}(I-b({c}_{1}{H}_{\u03f5}(\tilde{\varphi})+{c}_{2}(1-{H}_{\u03f5}(\tilde{\varphi}))))({m}_{1}-{m}_{2})\hfill \\ \hfill & -b(I-({m}_{1}{H}_{\u03f5}(\tilde{\varphi})+{m}_{2}(1-{H}_{\u03f5}(\tilde{\varphi}))))({c}_{1}-{c}_{2})]{\delta}_{\u03f5}(\tilde{\varphi})\eta dx)\hfill \\ \hfill & =\frac{1}{2}\underset{\u03f5\to 0}{\text{lim}}(-{\displaystyle \underset{\Omega}{\int}[}(I-b({c}_{1}{H}_{\u03f5}(\varphi )+{c}_{2}(1-{H}_{\u03f5}(\varphi ))))({m}_{1}-{m}_{2})\hfill \\ \hfill & -b(I-({m}_{1}{H}_{\u03f5}(\varphi )+{m}_{2}(1-{H}_{\u03f5}(\varphi ))))({c}_{1}-{c}_{2})]{\delta}_{\u03f5}(\varphi )\eta dx)\text{\hspace{0.17em}}\hfill \end{array}\text{\hspace{0.17em}}$$

The following Euler Lagrange equation is obtained:

$$\begin{array}{l}-[\left(I-b\left({c}_{1}{H}_{\u03f5}\left(\varphi \right)+{c}_{2}\left(1-{H}_{\u03f5}\left(\varphi \right)\right)\right)\right)\left({m}_{1}-{m}_{2}\right)\hfill \\ +b\left(I-\left({m}_{1}{H}_{\u03f5}\left(\varphi \right)+{m}_{2}\left(1-{H}_{\u03f5}\left(\varphi \right)\right)\right)\right)\left({c}_{1}-{c}_{2}\right)]{\delta}_{\u03f5}\left(\varphi \right)=0\hfill \end{array}$$

By applying steepest gradient descent [38], the final gradient descent flow is obtained:

$$\begin{array}{ll}\frac{\partial \varphi}{\partial t}\hfill & =[(I-b({c}_{1}{H}_{\u03f5}(\varphi )+{c}_{2}(1-{H}_{\u03f5}(\varphi ))))({m}_{1}-{m}_{2})\hfill \\ \hfill & +b(I-({m}_{1}{H}_{\u03f5}(\varphi )+{m}_{2}(1-{H}_{\u03f5}(\varphi ))))({c}_{1}-{c}_{2})]{\delta}_{\u03f5}(\varphi )\hfill \\ \hfill & =((I-{I}_{bLFI})({m}_{1}-{m}_{2})+b(I-{I}_{GFI})({c}_{1}-{c}_{2})){\delta}_{\u03f5}(\varphi )\hfill \end{array}$$

**B. Derivation of the gradient descent flow of the four-phase model** Using the definitions of Eqs (33) and (34) in Eq (25), we add the variations *η*_{1} and *η*_{2} to the level set functions *ϕ*_{1} and *ϕ*_{2}, respectively, such that $\tilde{\phi_1}=\phi_1+\epsilon\eta_1$ and $\tilde{\phi_2}=\phi_2+\epsilon\eta_2$. Keeping *c*_{1}, *c*_{2}, *c*_{3}, *c*_{4}, *m*_{1}, *m*_{2}, *m*_{3}, *m*_{4} and *ϕ*_{2} fixed, differentiating with respect to *ϕ*_{1}, and letting *ϵ* → 0, we have:

$$\begin{aligned}
\frac{\partial E_{LGFI}}{\partial \phi_1} &= \lim_{\epsilon\to 0}\frac{d}{d\epsilon}\Big(\frac{1}{2}\int_{\Omega}\big(I-b(c_1 H_{\epsilon}(\tilde{\phi_1})H_{\epsilon}(\tilde{\phi_2})+c_2 H_{\epsilon}(\tilde{\phi_1})(1-H_{\epsilon}(\tilde{\phi_2}))\\
&\quad +c_3(1-H_{\epsilon}(\tilde{\phi_1}))H_{\epsilon}(\tilde{\phi_2})+c_4(1-H_{\epsilon}(\tilde{\phi_1}))(1-H_{\epsilon}(\tilde{\phi_2})))\big)\\
&\quad \big(I-(m_1 H_{\epsilon}(\tilde{\phi_1})H_{\epsilon}(\tilde{\phi_2})+m_2 H_{\epsilon}(\tilde{\phi_1})(1-H_{\epsilon}(\tilde{\phi_2}))\\
&\quad +m_3(1-H_{\epsilon}(\tilde{\phi_1}))H_{\epsilon}(\tilde{\phi_2})+m_4(1-H_{\epsilon}(\tilde{\phi_1}))(1-H_{\epsilon}(\tilde{\phi_2})))\big)\,dx\Big)\\
&= -\frac{1}{2}\int_{\Omega}\big[\,b\big(I-(m_1 H_{\epsilon}(\phi_1)H_{\epsilon}(\phi_2)+m_2 H_{\epsilon}(\phi_1)(1-H_{\epsilon}(\phi_2))\\
&\quad +m_3(1-H_{\epsilon}(\phi_1))H_{\epsilon}(\phi_2)+m_4(1-H_{\epsilon}(\phi_1))(1-H_{\epsilon}(\phi_2)))\big)\\
&\quad \big((c_1-c_3)H_{\epsilon}(\phi_2)+(c_2-c_4)(1-H_{\epsilon}(\phi_2))\big)\\
&\quad +\big(I-b(c_1 H_{\epsilon}(\phi_1)H_{\epsilon}(\phi_2)+c_2 H_{\epsilon}(\phi_1)(1-H_{\epsilon}(\phi_2))\\
&\quad +c_3(1-H_{\epsilon}(\phi_1))H_{\epsilon}(\phi_2)+c_4(1-H_{\epsilon}(\phi_1))(1-H_{\epsilon}(\phi_2)))\big)\\
&\quad \big((m_1-m_3)H_{\epsilon}(\phi_2)+(m_2-m_4)(1-H_{\epsilon}(\phi_2))\big)\big]\,\delta_{\epsilon}(\phi_1)\,\eta_1\,dx
\end{aligned}$$

The following Euler–Lagrange equation is obtained:

$$\begin{aligned}
-\big[\,b\big(I-(m_1 H_{\epsilon}(\phi_1)H_{\epsilon}(\phi_2)+m_2 H_{\epsilon}(\phi_1)(1-H_{\epsilon}(\phi_2))+m_3(1-H_{\epsilon}(\phi_1))H_{\epsilon}(\phi_2)\\
+m_4(1-H_{\epsilon}(\phi_1))(1-H_{\epsilon}(\phi_2)))\big)\big((c_1-c_3)H_{\epsilon}(\phi_2)+(c_2-c_4)(1-H_{\epsilon}(\phi_2))\big)\\
+\big(I-b(c_1 H_{\epsilon}(\phi_1)H_{\epsilon}(\phi_2)+c_2 H_{\epsilon}(\phi_1)(1-H_{\epsilon}(\phi_2))+c_3(1-H_{\epsilon}(\phi_1))H_{\epsilon}(\phi_2)\\
+c_4(1-H_{\epsilon}(\phi_1))(1-H_{\epsilon}(\phi_2)))\big)\big((m_1-m_3)H_{\epsilon}(\phi_2)\\
+(m_2-m_4)(1-H_{\epsilon}(\phi_2))\big)\big]\,\delta_{\epsilon}(\phi_1)=0
\end{aligned}$$

By using the steepest gradient descent method [38], the final gradient descent flow obtained for *ϕ*_{1} is:

$$\begin{aligned}
\frac{\partial \phi_1}{\partial t} &= \big[\,b\big(I-(m_1 H_{\epsilon}(\phi_1)H_{\epsilon}(\phi_2)+m_2 H_{\epsilon}(\phi_1)(1-H_{\epsilon}(\phi_2))+m_3(1-H_{\epsilon}(\phi_1))H_{\epsilon}(\phi_2)\\
&\quad +m_4(1-H_{\epsilon}(\phi_1))(1-H_{\epsilon}(\phi_2)))\big)\big((c_1-c_3)H_{\epsilon}(\phi_2)+(c_2-c_4)(1-H_{\epsilon}(\phi_2))\big)\\
&\quad +\big(I-b(c_1 H_{\epsilon}(\phi_1)H_{\epsilon}(\phi_2)+c_2 H_{\epsilon}(\phi_1)(1-H_{\epsilon}(\phi_2))+c_3(1-H_{\epsilon}(\phi_1))H_{\epsilon}(\phi_2)\\
&\quad +c_4(1-H_{\epsilon}(\phi_1))(1-H_{\epsilon}(\phi_2)))\big)\big((m_1-m_3)H_{\epsilon}(\phi_2)+(m_2-m_4)(1-H_{\epsilon}(\phi_2))\big)\big]\,\delta_{\epsilon}(\phi_1)\\
&= \big[\,b\,(I-I_{GFI})\big((c_1-c_3)H_{\epsilon}(\phi_2)+(c_2-c_4)(1-H_{\epsilon}(\phi_2))\big)\\
&\quad +(I-I_{bLFI})\big((m_1-m_3)H_{\epsilon}(\phi_2)+(m_2-m_4)(1-H_{\epsilon}(\phi_2))\big)\big]\,\delta_{\epsilon}(\phi_1)
\end{aligned}$$

Similarly, keeping *c*_{1}, *c*_{2}, *c*_{3}, *c*_{4}, *m*_{1}, *m*_{2}, *m*_{3}, *m*_{4} and *ϕ*_{1} fixed, differentiating with respect to *ϕ*_{2}, and applying the steepest gradient descent method [38], the following gradient descent flow for *ϕ*_{2} is obtained:

$$\begin{aligned}
\frac{\partial \phi_2}{\partial t} &= \big[\,b\,(I-I_{GFI})\big((c_1-c_2)H_{\epsilon}(\phi_1)+(c_3-c_4)(1-H_{\epsilon}(\phi_1))\big)\\
&\quad +(I-I_{bLFI})\big((m_1-m_2)H_{\epsilon}(\phi_1)+(m_3-m_4)(1-H_{\epsilon}(\phi_1))\big)\big]\,\delta_{\epsilon}(\phi_2)
\end{aligned}$$
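The coupled four-phase flows can be sketched in the same spirit: the four region memberships are the products of *H*_{ϵ}(*ϕ*_{1}), *H*_{ϵ}(*ϕ*_{2}) and their complements, and both updates reuse the fitted-image differences (*I* − *I*_{GFI}) and (*I* − *I*_{bLFI}). This is a hedged sketch, not the paper's implementation: the arctan-smoothed Heaviside, the explicit Euler step, and the given constants `c1..c4`, `m1..m4` and bias `b` are assumptions.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    # Smoothed Heaviside (arctan form assumed).
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    # Smoothed Dirac delta, derivative of heaviside().
    return (eps / np.pi) / (eps**2 + phi**2)

def four_phase_step(phi1, phi2, I, b, c, m, dt=0.1, eps=1.0):
    """One explicit Euler step of the coupled four-phase flows.

    c = (c1, c2, c3, c4), m = (m1, m2, m3, m4). The four memberships
    M partition the image; they sum to 1 at every pixel."""
    H1, H2 = heaviside(phi1, eps), heaviside(phi2, eps)
    M = [H1 * H2, H1 * (1 - H2), (1 - H1) * H2, (1 - H1) * (1 - H2)]
    I_bLFI = b * sum(ci * Mi for ci, Mi in zip(c, M))   # fitted with c's, bias-weighted
    I_GFI = sum(mi * Mi for mi, Mi in zip(m, M))        # fitted with m's
    # Flow for phi1: differences taken across the phi1 boundary.
    f1 = (b * (I - I_GFI) * ((c[0] - c[2]) * H2 + (c[1] - c[3]) * (1 - H2))
          + (I - I_bLFI) * ((m[0] - m[2]) * H2 + (m[1] - m[3]) * (1 - H2)))
    # Flow for phi2: differences taken across the phi2 boundary.
    f2 = (b * (I - I_GFI) * ((c[0] - c[1]) * H1 + (c[2] - c[3]) * (1 - H1))
          + (I - I_bLFI) * ((m[0] - m[1]) * H1 + (m[2] - m[3]) * (1 - H1)))
    return (phi1 + dt * f1 * dirac(phi1, eps),
            phi2 + dt * f2 * dirac(phi2, eps))
```

Both level set functions are updated from the same fitted images, which is why the four-phase model segments brain MR images into four regions at the cost of only one extra level set function.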

Supporting information

Four supporting data files are available as RAR archives (370 KB, 222 KB, 35 KB, and 21 MB).

This work is supported by the research project DPI2016-77415-R and the predoctoral grant FI-DGR (2014–2016) from the Agency of Management of University and Research Grants (AGAUR), Catalunya, Spain. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability

We have uploaded the real brain MR images to the Figshare public repository, which can be accessed at: https://figshare.com/articles/Brain_MRI_Dataset_MINC_files_mnc_/4764853. The synthetic images are included within the paper and can be found in the Supporting Information section.

References

1. Elnakib A, Gimel'farb G, Suri JS, El-Baz A. Medical Image Segmentation: A Brief Survey. vol. 1; 2011. p. 235–280. doi: 10.1007/978-1-4419-8204-9_1
2. Vovk U, Pernus F, Likar B. A Review of Methods for Correction of Intensity Inhomogeneity in MRI. IEEE Transactions on Medical Imaging. 2007;26(3):405–421. doi: 10.1109/TMI.2006.891486
3. Lewis EB, Fox NC. Correction of differential intensity inhomogeneity in longitudinal MR images. NeuroImage. 2004;23(1):75–83. doi: 10.1016/j.neuroimage.2004.04.030
4. Studholme C, Cardenas V, Song E, Ezekiel F, Maudsley A, Weiner M. Accurate Template-Based Correction of Brain MRI Intensity Distortion with Application to Dementia and Aging. IEEE Transactions on Medical Imaging. 2004;23(1):99–110. doi: 10.1109/TMI.2003.820029
5. Meyer CR, Bland PH, Pipe J. Retrospective correction of intensity inhomogeneities in MRI. IEEE Transactions on Medical Imaging. 1995;14(1):36–41. doi: 10.1109/42.370400
6. Vokurka EA, Thacker NA, Jackson A. A fast model independent method for automatic correction of intensity nonuniformity in MRI data. Journal of Magnetic Resonance Imaging. 1999;10(4):550–562. doi: 10.1002/(SICI)1522-2586(199910)10:4%3C550::AID-JMRI8%3E3.3.CO;2-H
7. Vovk U, Pernus F, Likar B. MRI intensity inhomogeneity correction by combining intensity and spatial information. Physics in Medicine and Biology. 2004;49(17):4119–4133. doi: 10.1088/0031-9155/49/17/020
8. Vovk U, Pernus F, Likar B. Intensity inhomogeneity correction of multispectral MR images. NeuroImage. 2006;32(1):54–61. doi: 10.1016/j.neuroimage.2006.03.020
9. Li C, Huang R, Ding Z, Gatenby C, Metaxas D, Gore J. A variational level set approach to segmentation and bias correction of images with intensity inhomogeneity. In: Lecture Notes in Computer Science. vol. 5242; 2008. p. 1083–1091. doi: 10.1007/978-3-540-85990-1_130
10. Li C, Huang R, Ding Z, Gatenby JC, Metaxas DN, Gore JC. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Transactions on Image Processing. 2011;20(7):2007–2016. doi: 10.1109/TIP.2011.2146190
11. Zhang K, Liu Q, Song H, Li X. A Variational Approach to Simultaneous Image Segmentation and Bias Correction. IEEE Transactions on Cybernetics. 2015;45(8):1426–1437. doi: 10.1109/TCYB.2014.2352343
12. Zhang K, Zhang L, Lam K, Zhang D. A Level Set Approach to Image Segmentation With Intensity Inhomogeneity. IEEE Transactions on Cybernetics. 2016;46(2):546–557. doi: 10.1109/TCYB.2015.2409119
13. Li C, Gatenby C, Wang L, Gore JC. A robust parametric method for bias field estimation and segmentation of MR images. In: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops); 2009. p. 218–223.
14. Li C, Xu C, Anderson AW, Gore JC. MRI tissue classification and bias field estimation based on coherent local intensity clustering: A unified energy minimization framework. In: Lecture Notes in Computer Science. vol. 5636; 2009. p. 288–299. doi: 10.1007/978-3-642-02498-6_24
15. Huang C, Zeng L. An Active Contour Model for the Segmentation of Images with Intensity Inhomogeneities and Bias Field Estimation. PLoS ONE. 2015;10(4):e0120399. doi: 10.1371/journal.pone.0120399
16. Li C, Gore JC, Davatzikos C. Multiplicative intrinsic component optimization (MICO) for MRI bias field estimation and tissue segmentation. Magnetic Resonance Imaging. 2014;32(7):913–923. doi: 10.1016/j.mri.2014.03.010
17. Ivanovska T, Laqua R, Wang L, Schenk A, Yoon JH, Hegenscheid K, Völzke H, Liebscher V. An efficient level set method for simultaneous intensity inhomogeneity correction and segmentation of MR images. Computerized Medical Imaging and Graphics. 2016;48:9–20. doi: 10.1016/j.compmedimag.2015.11.005
18. Kahali S, Adhikari SK, Sing JK. On estimation of bias field in MRI images: polynomial vs Gaussian surface fitting method. Journal of Chemometrics. 2016;30(10):602–620. doi: 10.1002/cem.2825
19. Kass M, Witkin A, Terzopoulos D. Snakes: Active contour models. International Journal of Computer Vision. 1988;1(4):321–331. doi: 10.1007/BF00133570
20. Caselles V, Kimmel R, Sapiro G. Geodesic Active Contours. International Journal of Computer Vision. 1997;22(1):61–79.
21. Li C, Xu C, Gui C, Fox MD. Level Set Evolution without Re-Initialization: A New Variational Formulation. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). vol. 1; 2005. p. 430–436.
22. Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Communications on Pure and Applied Mathematics. 1989;42(5):577–685. doi: 10.1002/cpa.3160420503
23. Chan TF, Vese LA. Active contours without edges. IEEE Transactions on Image Processing. 2001;10(2):266–277. doi: 10.1109/83.902291
24. Vese LA, Chan TF. A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision. 2002;50(3):271–293.
25. Zhang K, Zhang L, Song H, Zhou W. Active contours with selective local or global segmentation: A new formulation and level set method. Image and Vision Computing. 2010;28(4):668–676. doi: 10.1016/j.imavis.2009.10.009
26. Akram F, Kim J, Lee C, Choi KN. Segmentation of Regions of Interest Using Active Contours with SPF Function. Computational and Mathematical Methods in Medicine. 2015;2015:710326. doi: 10.1155/2015/710326
27. Li C, Kao CY, Gore JC, Ding Z. Implicit active contours driven by local binary fitting energy. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'07); 2007. p. 1–7.
28. Li C, Kao CY, Gore JC, Ding Z. Minimization of Region-Scalable Fitting Energy for Image Segmentation. IEEE Transactions on Image Processing. 2008;17(10):1940–1949. doi: 10.1109/TIP.2008.2002304
29. Wang L, He L, Mishra A, Li C. Active contours driven by local Gaussian distribution fitting energy. Signal Processing. 2009;89(12):2435–2447.
30. Zhang K, Song H, Zhang L. Active contours driven by local image fitting energy. Pattern Recognition. 2010;43(4):1199–1206.
31. Akram F, Kim J, Lim HU, Choi KN. Segmentation of Intensity Inhomogeneous Brain MR Images Using Active Contours. Computational and Mathematical Methods in Medicine. 2014;2014:194614. doi: 10.1155/2014/194614
32. Peng Y, Liu F, Liu S. Active contours driven by normalized local image fitting energy. Concurrency and Computation: Practice and Experience. 2014;26(5):1200–1214.
33. Xie X, Zhang A, Wang C. Local average fitting active contour model with thresholding for noisy image segmentation. Optik. 2015;126(9-10):1021–1026. doi: 10.1016/j.ijleo.2015.02.073
34. Wang XF, Huang DS, Xu H. An efficient local Chan-Vese model for image segmentation. Pattern Recognition. 2010;43(3):603–618. doi: 10.1016/j.patcog.2009.08.002
35. Wang L, Li C, Sun Q, Xia D, Kao CY. Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation. Computerized Medical Imaging and Graphics. 2009;33(7):520–531. doi: 10.1016/j.compmedimag.2009.04.010
36. Li X, Jiang D, Shi Y, Li W. Segmentation of MR image using local and global region based geodesic model. BioMedical Engineering OnLine. 2015;14(8):1–16. doi: 10.1186/1475-925X-14-8
37. BrainWeb: Anatomical models of 20 normal brains. Accessed: April 24, 2015. Available at: http://brainweb.bic.mni.mcgill.ca/brainweb/anatomic_normal_20.html
38. Aubert G, Kornprobst P. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. 2nd ed. Springer; 2010.
39. Balafar MA, Ramli AR, Saripan MI, Mashohor S. Review of brain MRI image segmentation methods. Artificial Intelligence Review. 2010;33(3):261–274.
40. Ashburner J, Friston KJ. Unified segmentation. NeuroImage. 2005;26(3):839–851. doi: 10.1016/j.neuroimage.2005.02.018
41. Cuadra MB, Cammoun L, Butz T, Cuisenaire O, Thiran JP. Comparison and validation of tissue modelization and statistical classification methods in T1-weighted MR brain images. IEEE Transactions on Medical Imaging. 2005;24(12):1548–1565. doi: 10.1109/TMI.2005.857652
42. Smith SM, Jenkinson M, Woolrich MW, Beckmann CF, Behrens TEJ, Johansen-Berg H, et al. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage. 2004;23(Suppl. 1):S208–S219. doi: 10.1016/j.neuroimage.2004.07.051
43. Zhang Y, Brady M, Smith S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging. 2001;20(1):45–57. doi: 10.1109/42.906424
44. Shattuck DW, Leahy RM. BrainSuite: An automated cortical surface identification tool. Medical Image Analysis. 2002;6(2):129–142.
45. Kasiri K, Javad DM, Kazemi K, Sadegh HM, Kafshgari S. Comparison evaluation of three brain MRI segmentation methods in software tools. In: 17th Iranian Conference of Biomedical Engineering (ICBME); 2010. p. 1–4.
46. Jaccard P. The distribution of the flora in the alpine zone. The New Phytologist. 1912;11(2):37–50.
47. Dice LR. Measures of the Amount of Ecologic Association Between Species. Ecology. 1945;26(3):297–302. doi: 10.2307/1932409
48. Lee SH, Seo JK. Level set-based bimodal segmentation with stationary global minimum. IEEE Transactions on Image Processing. 2006;15(9):2843–2852.

Articles from PLoS ONE are provided here courtesy of **Public Library of Science**
