Comput Vis Image Underst. Author manuscript; available in PMC 2010 October 1.
Published in final edited form as:
Comput Vis Image Underst. 2009 October; 113(10): 1095–1103.
doi:  10.1016/j.cviu.2009.06.003
PMCID: PMC2777695
NIHMSID: NIHMS146804

Intensity Standardization Simplifies Brain MR Image Segmentation

Abstract

Typically, brain MR images present significant intensity variation across patients and scanners. Consequently, training a classifier on a set of images and using it subsequently for brain segmentation may yield poor results. Adaptive, iterative methods usually need to be employed to account for the variations of the particular scan. These methods are complicated, difficult to implement, and often involve significant computational costs. In this paper, a simple, non-iterative method is proposed for brain MR image segmentation. Two preprocessing techniques, namely intensity inhomogeneity correction and, more importantly, MR image intensity standardization, applied prior to segmentation, play a vital role in giving the MR image intensities a tissue-specific numeric meaning, which leads us to a very simple brain tissue segmentation strategy.

Vectorial scale-based fuzzy connectedness and certain morphological operations are utilized first to generate the brain intracranial mask. The fuzzy membership value of each voxel within the intracranial mask for each brain tissue is then estimated. Finally, a maximum likelihood criterion with spatial constraints taken into account is utilized in classifying all voxels in the intracranial mask into different brain tissue groups. A set of inhomogeneity-corrected and intensity-standardized images is utilized as a training data set. We introduce two methods to estimate fuzzy membership values. In the first method, called SMG (for simple membership based on a Gaussian model), the fuzzy membership value is estimated by fitting a multivariate Gaussian model to the intensity distribution of each brain tissue, whose mean intensity vector and covariance matrix are estimated and fixed from the training data sets. The second method, called SMH (for simple membership based on a histogram), estimates the fuzzy membership value directly via the intensity distribution of each brain tissue obtained from the training data sets. We present several studies to evaluate the performance of these two methods based on 10 clinical MR images of normal subjects and 10 clinical MR images of Multiple Sclerosis (MS) patients. A quantitative comparison indicates that both methods have overall better accuracy than the k-nearest neighbors (kNN) method, and have much better efficiency than the Finite Mixture (FM) model based Expectation-Maximization (EM) method. Accuracy is similar for our methods and the EM method on the normal subject data sets, but much better for our methods on the patient data sets.

Keywords: Inhomogeneity correction, standardization, fuzzy connectedness, brain image segmentation, MRI

1. INTRODUCTION

Image segmentation is one of the most crucial tasks in image processing and computer vision. In spite of nearly three decades of research, image segmentation remains a challenging problem [1], particularly because of the dependence of the problem on imaging modalities and on the imaged objects. In medical image processing, particularly as applied to magnetic resonance (MR) images of the brain, segmentation has drawn much attention during the past twenty years [2]. The segmentation tasks in this area are aimed at classifying the component tissues of the brain, and at quantifying the volume and other morphological, architectural, and shape parameters of different brain tissues for aiding in various neurological and neurosurgical applications.

In MR image analysis, in addition to the ubiquitous image thermal noise, two other sources of artifacts pose challenges, and thereby compromise the performance of algorithms. These are the presence of a slowly varying background component of image inhomogeneity and the non-standardness of the MR image intensity gray scale. The latter problem implies the lack of a tissue-specific absolute intensity numeric meaning, even within the same MR imaging (MRI) protocol, for the same body region, for images obtained on the same scanner, for the same patient. While the inhomogeneity issue has been handled extensively in the literature and effective solutions to overcome its effects are available (for example, [3–9]), the non-standardness problem has not been addressed directly except in [10], [11]. Further, considering the fact that any inhomogeneity correction process can itself introduce non-standardness [12], the importance of handling the latter is not confined solely to the non-standardness inherent in the original, acquired images. Our main postulate in this paper is that if the effects of both these artifacts, particularly those related to non-standardness, are corrected in images, then vast simplifications can be achieved in segmentation methods, which otherwise have to devise very clever and complex strategies within the segmentation framework to combat these effects. Our secondary hypothesis is that the performance of even those sophisticated and complex segmentation methods can be improved if the effects of these artifacts, particularly of non-standardness in images, are minimized in a preprocessing step. In the rest of this section, we briefly review (representative, by no means exhaustive) MR image segmentation strategies that have been used previously, focusing on the challenges posed by these artifacts.

Brain MR image segmentation strategies may be broadly classified into boundary-based, region-based, and hybrid strategies. Boundary-based approaches focus on delineating the interface between the object and the surrounding co-objects in the image. Representative approaches in this group are as follows. Pitiot et al. [13] described a deformable model-based method for adaptive elastic segmentation of brain MR images via shape-model-guided evolutionary programming. The evolutionary algorithm used prior statistical information about the shape of the target structure to control the behavior of a number of deformable templates. Each template, modeled in the form of a B-spline, is warped in a potential field which is itself dynamically adapted. The N3 method [6] is used to correct the input MR images for the background intensity inhomogeneity before initiating the template deformation processes. Duta and Sonka [14] reported a modification of the active shape model introduced by Cootes and Taylor [15] for segmentation and interpretation of brain MR images. Their method incorporates a priori knowledge about neuroanatomic structures, their context, shapes, and shape variation to produce segmentation and labeling. The boundary characteristics in an image depend on the intensity distribution inside an object and in the regions adjacent to the object. With non-standardness, the form of the statistical intensity distributions inside different object regions is distorted considerably [16]. This can affect the performance of boundary-based methods through the loss of sharpness (and specificity) of the statistical distributions for the objects considered.

Region-based approaches, which are very prevalent in brain MR image segmentation, focus on delineating the entire region occupied by the object in the image rather than its boundary. Some examples: Pham and Prince [17] described an adaptive Fuzzy C-means (FCM) method of segmentation that considered the presence of intensity inhomogeneities. Intensity inhomogeneity is modeled as a gain field that causes image intensities to vary smoothly and slowly through the image space. The approach requires a priori knowledge of the approximate centroids of the tissue classes being segmented to yield good convergence properties. Statistical model-based methods [3, 18–20] classify the pixels into different object classes in terms of probability values, which are determined based on the intensity distribution of each tissue class in the image. Zhang et al. [19] developed a hidden Markov Random Field (MRF) model with spatial information taken into account; an expectation-maximization (EM) algorithm was utilized for solving the maximum likelihood (ML) estimation of model parameters. Initial estimation is carried out by using a discriminant measure-based thresholding method. An intensity inhomogeneity correction algorithm is incorporated into the method. Leemput et al. [20] reported an approach for model-based inhomogeneity correction and tissue classification. The initial a priori pixel class probabilities are derived from a digital brain atlas. To utilize the a priori information embodied in the atlas, they normalized the atlas to the space of the study image. The class-specific distribution parameters are then computed from the study image by using the registered and reformatted a priori pixel class probability maps provided by the atlas. Intensity non-standardness affects region-based segmentation methods at least as severely as boundary-based strategies. Since all region-based techniques rely to some extent on the distribution properties of the image intensity within the object region (or of features derived from image intensities), their performance will be affected by intensity non-standardness. This is mainly because of the loss of sharpness and specificity of the statistical distributions of intensities (or of features) for the objects considered.

Hybrid approaches attempt to combine the strengths of both boundary-based and region-based approaches. Some examples: Chakraborty et al. [21] developed a deformable boundary finding method for medical image segmentation by integrating gradient and region information. Prior boundary information about target structures may be very important to start the optimization process. Amini et al. [22] described a method to segment the thalamus from brain MR images by integrating fuzzy clustering strategies with techniques utilizing dynamic contours, where the initial contour is estimated based on anatomical knowledge of the brain and the intensity information from the images. Imielinska et al. [23] combined fuzzy connectedness with a Voronoi-diagram method and a deformable boundary method for anatomical data segmentation. The remarks made under boundary- and region-based methods are applicable to the hybrid strategies also, since they utilize both region and boundary information related to image intensities.

The purpose of this work is to demonstrate how even very simple pattern classification strategies can work very effectively, and can compete with advanced adaptive and iterative methods, once the images are first “cleaned” as optimally as possible by correcting for intensity inhomogeneity and by accounting for non-standardness, so that image intensities are made, as well as possible, to have a tissue-specific, consistent meaning. Additionally, we demonstrate that the performance of commonly used strategies, such as kNN, can be significantly improved by proper preprocessing to handle inhomogeneities and non-standardness, so as to bring their performance close to that of more advanced methods.

Our method is described in Section 2. In Section 3, we present several examples and an evaluation study carried out to illustrate the behavior of our methods in comparison to the FM-EM and kNN methods. Our concluding remarks are stated in Section 4. An early version of this paper was presented at the SPIE Medical Imaging 2003 conference and appeared in its proceedings [24].

2. METHOD

First, we give an outline of the segmentation method. This is followed by a more detailed description of each step in subsequent sections.

We refer to a multiprotocol volume MR image as a scene and represent it by a pair C = (C, f), where C, called the scene domain, is a rectangular array of voxels, and f is the scene intensity function which assigns to every voxel c ∈ C a vector of M integer intensity values. Let L be the number of brain tissue classes to be segmented.

2.1 Outline

The procedure for segmenting the brain tissues consists of the following steps.

  • (S1). Correcting for RF field inhomogeneity.
  • (S2). Standardizing MRI scene intensities.
  • (S3). Training to estimate parameters of the segmentation method.
  • (S4). Creating brain intracranial mask.
  • (S5). Estimating tissue membership values.
  • (S6). Segmenting brain tissue regions.

2.2 (S1). Correcting for RF field inhomogeneity

We utilize a general, entirely image-based strategy for correcting this variation [9]. The method is completely automatic, acquisition-protocol-independent, and requires no prior knowledge. The method is summarized below.

The ball scale (or b-scale) at any voxel c in a given scene is a number that represents the radius of the largest ball centered at c within which the scene intensities are homogeneous under an appropriately defined homogeneity criterion. Regions (balls) corresponding to the largest scale values in the foreground of the scene are first identified, and then the scene is thresholded by using intensity intervals estimated from these regions. A 2nd order polynomial function is then fit to the intensity variation within the segmented region, and inhomogeneity correction is effected by utilizing this polynomial. The process is iteratively repeated on the resulting scene until the changes found in two successive iterations are sufficiently small. A slice of a PD-weighted MRI scene and the corrected scene resulting from this procedure are shown in Figure 1. In place of this method, any other correction strategy can be utilized from the viewpoint of the main thrust of this paper. Note that inhomogeneity correction is carried out for each component of the M-component vectorial scene separately.
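
To make the correction idea concrete, the sketch below fits a second-order polynomial to the intensities inside an already-identified homogeneous tissue mask and divides it out as a gain field. This is a minimal, hypothetical sketch of a single iteration only: it assumes a NumPy volume and a precomputed mask, and it omits the b-scale estimation, thresholding, and iteration loop of the actual method in [9]; all names are illustrative.

```python
# Minimal sketch of one polynomial-fit correction iteration (not the
# authors' implementation).  volume: 3D float array; mask: boolean array
# marking the tissue region estimated from the largest b-scale balls.
import numpy as np

def fit_second_order_field(volume, mask):
    """Fit a 2nd-order polynomial b(x, y, z) to intensities inside mask."""
    z, y, x = np.nonzero(mask)
    v = volume[z, y, x].astype(float)
    # Design matrix of all monomials up to degree 2.
    A = np.column_stack([np.ones_like(v), x, y, z,
                         x * y, y * z, x * z,
                         x**2, y**2, z**2])
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    # Evaluate the fitted field over the whole scene domain.
    Z, Y, X = np.indices(volume.shape)
    return (coeffs[0] + coeffs[1] * X + coeffs[2] * Y + coeffs[3] * Z +
            coeffs[4] * X * Y + coeffs[5] * Y * Z + coeffs[6] * X * Z +
            coeffs[7] * X**2 + coeffs[8] * Y**2 + coeffs[9] * Z**2)

def correct_once(volume, mask):
    """Divide out the estimated multiplicative gain field (one iteration)."""
    field = fit_second_order_field(volume, mask)
    gain = field / field[mask].mean()          # normalize to unit mean gain
    return volume / np.maximum(gain, 1e-6)     # guard against division by zero
```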

Fig. 1
(a) A slice of a PD-weighted MRI scene with inhomogeneity. (b) The corresponding slice of the corrected scene.

2.3 (S2). Standardizing MRI scene intensities

The method described in [10] offers a way of transforming the scenes so that there is a significant gain in the similarity of the resulting scenes in terms of their intensity-wise tissue characterizability. It is based on transforming the intensity histogram of each given scene into a “standard” histogram. This is achieved in two steps – a training step that is executed only once for a given protocol and body region, and a transformation step that is executed for each given scene. In the training step, certain landmarks of a standard histogram (for the body region and protocol under consideration) are estimated from a given set of scenes. In the transformation step, the actual intensity transformation from the intensity scale of the input scene to the standard scale is performed by mapping the landmarks determined from the histogram of the given scene to those of the standard histogram. The standardization process is illustrated in Figure 2, wherein histograms of the white matter (WM) region in a set of 10 PD-weighted MRI scenes are shown before and after standardization. The effect of intensity standardization is readily seen by comparing Figures 2(a) and (c). More importantly, a comparison of Figures 2(b) and (d) shows how, without standardization, the estimation of parameters of segmentation methods, particularly those of pattern classification techniques, may become significantly more challenging and only marginally effective. We emphasize that, for minimal residual non-standardness, the order of the preprocessing operations should be inhomogeneity correction first, followed by intensity standardization [12]. If this order is reversed, the residual non-standardness may be higher, mainly because inhomogeneity correction methods tend to introduce some non-standardness, as demonstrated in [12]. Note that the entire standardization operation (involving training and transformation) is carried out separately for each component scalar scene of the M-component vectorial scene.
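
The transformation step can be sketched as a piecewise-linear mapping between histogram landmarks. The fragment below assumes the standard-scale landmarks were already learned in the training step; the choice of percentile landmarks and the crude background exclusion are illustrative assumptions, not necessarily the settings of [10].

```python
# Minimal sketch of the landmark-based standardization transform
# (training step not shown); names and percentiles are illustrative.
import numpy as np

def standardize(volume, standard_landmarks, percentiles=(1, 10, 50, 90, 99)):
    """Piecewise-linearly map the scene's histogram landmarks onto the
    corresponding landmarks of the standard scale."""
    foreground = volume[volume > 0]                    # crude background exclusion
    own_landmarks = np.percentile(foreground, percentiles)
    # np.interp performs the piecewise-linear mapping between landmark pairs;
    # intensities beyond the outermost landmarks are clamped.
    return np.interp(volume, own_landmarks, standard_landmarks)
```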

Fig. 2
(a) Histograms of WM regions in 10 original PD-weighted MRI scenes. (b) A histogram obtained by combining the 10 histograms of (a) which represents WM intensity distribution in the PD scene. (c) Histograms of WM regions in the 10 corresponding standardized ...

We denote by S the set of intensity-inhomogeneity-corrected and standardized vectorial scenes that are utilized in our methodology.

2.4 (S3). Training to estimate segmentation parameters

A subset of the scenes in S is utilized as a training data set to create “true” segmentations of each brain tissue. The “true” segmentations can be obtained by carefully correcting the results of an appropriate segmentation method under the guidance of an expert or by segmenting manually with the help of experts. In our case, the former approach was taken. The intensity distribution of each brain tissue in the training data set is then obtained by computing the combined histogram (as in Figure 2(d)) over the training data set. The mean intensity vector μ_l and the covariance matrix Σ_l, for each tissue class l, 1 ≤ l ≤ L, are then estimated from the training data set and are fixed once and for all. Note that this step is done only once for any given segmentation application. The combined histogram, μ_l, and Σ_l, for 1 ≤ l ≤ L, are utilized in estimating class memberships of voxels as described in Step S5.
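
A minimal sketch of this one-time training step is given below. It assumes each corrected and standardized scene is stored as a (Z, Y, X, M) array with a matching (Z, Y, X) label scene (0 = background, 1..L = tissue classes); the bin count and fixed intensity range are illustrative choices, not values from the paper.

```python
# Minimal sketch of step S3: per-class mean vector, covariance matrix,
# and combined intensity histogram from the training scenes.
import numpy as np

def train(scenes, label_scenes, num_classes, bins=128, value_range=(0, 4095)):
    params = {}
    for l in range(1, num_classes + 1):
        # Pool the M-channel feature vectors of all class-l voxels.
        samples = np.concatenate(
            [scene[labels == l] for scene, labels in zip(scenes, label_scenes)],
            axis=0)                                    # shape (N_l, M)
        mu = samples.mean(axis=0)                      # mean intensity vector
        sigma = np.cov(samples, rowvar=False)          # covariance matrix
        # A fixed range keeps the bin edges identical across classes, so the
        # per-class histograms are directly comparable in step S5.
        hist, edges = np.histogramdd(
            samples, bins=bins, range=[value_range] * samples.shape[1])
        params[l] = {'mu': mu, 'sigma': sigma, 'hist': hist, 'edges': edges}
    return params
```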

2.5 (S4). Creating brain intracranial mask

We utilized the fuzzy connectedness framework, originally reported in [25] and extended to vectorial scenes [26], for this step. In this framework, a local fuzzy relation called affinity is defined on the scene domain, which assigns to every pair of nearby voxels a strength of local hanging togetherness which has a value in [0, 1]. In the scalar version, affinity between two voxels c and d depends on their spatial nearness as well as on how similar their scene intensities and intensity-derived features are within the b-scale regions associated with c and d. A global fuzzy relation called fuzzy connectedness is defined on the scene domain, which assigns to every pair (c, d) of voxels a strength of global hanging togetherness that has a value in [0, 1]. To determine this value, every possible path from c to d is considered and the minimum affinity of pairwise voxels along the path is determined. This affinity represents the strength of this path. The strength of hanging togetherness (connectedness) between c and d is the largest of the strengths of all paths between c and d. In defining a fuzzy connected object, the strength of connectedness between all possible pairs of voxels has to be determined. It has been shown [25] that this problem can be solved via dynamic programming and that a fuzzy connected object containing a given voxel o can be computed at interactive speeds on modern PCs.
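
To make the max-min path idea concrete, the sketch below computes the fuzzy connectedness scene from a single seed with a Dijkstra-like dynamic program, assuming an affinity function on neighboring voxel pairs is supplied. It illustrates the principle only; it is not the κVSRFOE algorithm of [26], and the names are illustrative.

```python
# Minimal sketch of max-min fuzzy connectedness from one seed.
import heapq
import numpy as np

def fuzzy_connectedness(shape, seed, affinity):
    """Return conn, where conn[c] is the strength of the strongest path
    from seed to c (minimum affinity along a path, maximized over paths).
    seed is a (z, y, x) tuple; affinity(c, d) returns a value in [0, 1]."""
    conn = np.zeros(shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                        # max-heap via negated strengths
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        strength, c = heapq.heappop(heap)
        strength = -strength
        if strength < conn[c]:                   # stale queue entry
            continue
        for dz, dy, dx in offsets:
            d = (c[0] + dz, c[1] + dy, c[2] + dx)
            if not all(0 <= d[i] < shape[i] for i in range(3)):
                continue
            s = min(strength, affinity(c, d))    # strength of the path extended to d
            if s > conn[d]:
                conn[d] = s
                heapq.heappush(heap, (-s, d))
    return conn
```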

In the vectorial extension of this framework [26], the b-scale region associated with each voxel is determined based on the homogeneity of the orientation and magnitude of the vectors associated with the voxels in the ball region around the voxel. Subsequently, in the determination of the affinity between two voxels c and d, all vectors in the b-scale regions of both c and d are considered. In step S4 of our methodology, we specify several seed voxels for each of gray matter (GM), WM, and cerebrospinal fluid (CSF) on one slice of the input 3D scene C that is roughly centrally situated in the brain. The fuzzy connected GM, WM, and CSF objects containing the given seed voxels are then obtained by using Steps 1–16 of algorithm κVSRFOE of [26]. A fuzzy union of the fuzzy GM, WM, and CSF objects is computed, from which a 3D binary scene CB = (C, fB) is created by thresholding the resulting fuzzy object. The threshold value is determined and fixed once and for all by examining a few input 3D scenes. We apply morphological erosion and dilation operations on the binary scene CB to generate the brain intracranial mask. The purpose of the morphological operations is twofold: first, to fill holes in the binary scene that arise from partial volume voxels not belonging to any fuzzy object; second, to remove extracranial regions that may get connected to the brain via weak links of fuzzy connectedness.
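
The thresholding and morphological cleanup of step S4 can be sketched as follows. The threshold, structuring element, number of iterations, and the largest-connected-component selection are illustrative choices, not the exact settings used in the paper.

```python
# Minimal sketch of the intracranial-mask cleanup, assuming fuzzy GM, WM,
# and CSF objects (values in [0, 1]) have already been computed.
import numpy as np
from scipy import ndimage

def intracranial_mask(fuzzy_gm, fuzzy_wm, fuzzy_csf, threshold=0.5):
    fuzzy_union = np.maximum(np.maximum(fuzzy_gm, fuzzy_wm), fuzzy_csf)
    mask = fuzzy_union >= threshold
    struct = ndimage.generate_binary_structure(3, 1)
    # Erosion breaks weak links to extracranial tissue ...
    mask = ndimage.binary_erosion(mask, structure=struct, iterations=2)
    # ... keep only the largest connected component (an illustrative extra step) ...
    labeled, n = ndimage.label(mask)
    if n > 1:
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        mask = labeled == (np.argmax(sizes) + 1)
    # ... then dilation restores the eroded boundary and holes are filled.
    mask = ndimage.binary_dilation(mask, structure=struct, iterations=2)
    return ndimage.binary_fill_holes(mask)
```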

The purpose of step S4 is simply the separation of the intracranial region. Our aim is to subsequently apply simple classification techniques within this region. We utilized fuzzy connectedness for this purpose since its implementations are readily available in 3DVIEWNIX [27] (http://www.mipg.upenn.edu). In place of this method, any other technique may be utilized from the viewpoint of the main thrust of this paper. We note that almost all methods of brain tissue segmentation require this step.

2.6 (S5). Estimating tissue membership values

We introduce two methods to estimate the fuzzy membership value of each voxel in each brain tissue for any given (corrected and standardized) scene C = (C, f) for which the intracranial mask is CB = (C, fB). In the first method, called SMG (an abbreviation for simple membership based on a Gaussian), we assume that the (standardized) intensities of voxels of each brain tissue conform to a multivariate Gaussian distribution N(μ_l, Σ_l), 1 ≤ l ≤ L, where the mean intensity vector μ_l and the covariance matrix Σ_l for each brain tissue class have already been obtained and fixed in Step S3. Then, for each voxel c ∈ C in the intracranial mask, the probability membership value in [0, 1] that describes the belongingness of the voxel to each tissue class is given simply by, for 1 ≤ l ≤ L,

$$p_l(f(c)) = \begin{cases} \dfrac{\exp\left(-\tfrac{1}{2}\,(f(c)-\mu_l)^T \Sigma_l^{-1} (f(c)-\mu_l)\right)}{\sqrt{(2\pi)^M \,|\Sigma_l|}}, & \text{if } f_B(c) \neq 0,\\[1ex] 0, & \text{if } f_B(c) = 0. \end{cases}$$
(1)
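
A direct, vectorized evaluation of (1) might look as follows; the (Z, Y, X, M) array layout and the names are illustrative assumptions.

```python
# Minimal sketch of equation (1): SMG membership for one tissue class l,
# with mu_l (length-M vector) and sigma_l (M x M matrix) fixed in step S3;
# scene has shape (Z, Y, X, M), mask is the boolean intracranial mask.
import numpy as np

def smg_membership(scene, mask, mu_l, sigma_l):
    diff = scene - mu_l                                    # broadcasts over voxels
    inv = np.linalg.inv(sigma_l)
    mahal = np.einsum('...i,ij,...j->...', diff, inv, diff)
    norm = np.sqrt((2.0 * np.pi) ** len(mu_l) * np.linalg.det(sigma_l))
    p = np.exp(-0.5 * mahal) / norm
    p[~mask] = 0.0                                         # zero outside the mask
    return p
```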

The second method, called SMH (an abbreviation for simple membership based on a histogram), estimates the fuzzy membership value directly via the intensity distribution of each brain tissue expressed by the histograms determined from the training data set. For each voxel c ∈ C in the intracranial mask, the probability membership value in [0, 1] that describes the belongingness of the voxel to each tissue class is given by, for 1 ≤ l ≤ L,

$$p_l(f(c)) = \begin{cases} \dfrac{h_l(f(c))}{\sum_{i=1}^{L} h_i(f(c))}, & \text{if } f_B(c) \neq 0,\\[1ex] 0, & \text{if } f_B(c) = 0, \end{cases}$$
(2)

where h_i(f(c)) is the frequency value at intensity f(c) in the histogram h_i of tissue class i that was determined from the training data set.
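
Equation (2) can be evaluated by looking up each voxel's intensity vector in the per-class training histograms (for example, those produced by the training sketch in step S3); the binning scheme and names below are illustrative assumptions.

```python
# Minimal sketch of equation (2): SMH memberships for all classes at once.
import numpy as np

def smh_membership(scene, mask, class_hists, edges):
    """class_hists: list of L per-class joint histograms; edges: list of M
    bin-edge arrays shared by all classes; scene has shape (Z, Y, X, M)."""
    M = scene.shape[-1]
    # Locate each voxel's bin index along every intensity channel.
    idx = tuple(
        np.clip(np.digitize(scene[..., m], edges[m]) - 1, 0, len(edges[m]) - 2)
        for m in range(M))
    freq = np.stack([h[idx] for h in class_hists], axis=0)   # (L, Z, Y, X)
    total = freq.sum(axis=0)
    with np.errstate(invalid='ignore', divide='ignore'):
        p = np.where(total > 0, freq / total, 0.0)
    p[:, ~mask] = 0.0                     # zero outside the intracranial mask
    return p                              # p[l - 1] is the membership in class l
```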

2.7 (S6). Segmenting brain tissue regions

The segmentation of the brain tissues can be done simply via an ML criterion since we have estimated all class memberships for each voxel in the intracranial mask. This yields acceptable segmentation as long as the different tissues have a clearly discernible associated intensity distribution. The segmentation from this simple strategy can be further improved if spatial information is also taken into account. Usually, the neighboring voxels have similar intensities, and the class labels of neighboring voxels are strongly correlated.

We employ the MRF technique [28] to incorporate spatial constraints into the SMG and SMH techniques. Let 𝒩_c be the set of neighbors of voxel c, and let x_c be a realization of a random process where the probability that voxel c belongs to tissue type l depends on the tissue types of its neighbors. In this paper, we use the simple first-order neighborhood system consisting of only the six direct neighbors. According to the Hammersley-Clifford theorem [28], such a random field is a Gibbs Random Field, and can be characterized by a Gibbs distribution,

$$p(x_c = l \mid x_{\mathcal{N}_c}) = Z^{-1} \exp\!\left[-\,U(x_c \mid x_{\mathcal{N}_c})\right],$$
(3)

where Z is a normalizing constant called the partition function, U(x_c | x_{𝒩_c}) is an energy function dependent on 𝒩_c, and x_{𝒩_c} = {x_d | d ∈ 𝒩_c} represents the set of class labels of the neighbors of c. We obtain the initial class labels of voxels via an ML criterion.

We use an energy function similar to that used in [18]; that is, a function whose value is proportional to the number of neighboring voxels belonging to a different tissue type,

$$U(x_c \mid x_{\mathcal{N}_c}) = \sum_{d \in \mathcal{N}_c} \left[\,1 - \delta(x_c - x_d)\,\right],$$
(4)

where δ(x_c − x_d) = 1 if c and d have the same label, and δ(x_c − x_d) = 0 otherwise. Finally, voxel c is assigned to a region of tissue type w(c) by:

$$w(c) = \arg\max_{1 \le l \le L}\ \left\{\, p_l(f(c)) \; p(x_c = l \mid x_{\mathcal{N}_c}) \,\right\}.$$
(5)

It is understood that, when we refer to SMG and SMH in the rest of this paper, they include the above MRF-based class labeling scheme as the final step.
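
Putting (3)–(5) together, a minimal sketch of this labeling step is given below: an ML initialization followed by MRF-regularized relabeling sweeps. The number of sweeps, the handling of volume boundaries, and the array layout are implementation choices not specified above and are purely illustrative.

```python
# Minimal sketch of the final labeling step (equations (3)-(5)).
# memberships: array of shape (L, Z, Y, X) from SMG or SMH; mask: boolean.
import numpy as np

def mrf_segment(memberships, mask, sweeps=1):
    L = memberships.shape[0]
    labels = memberships.argmax(axis=0) + 1        # ML initialization, classes 1..L
    labels[~mask] = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(sweeps):
        posterior = np.empty_like(memberships)
        for l in range(1, L + 1):
            # Equation (4): count of the 6 neighbors NOT labeled l
            # (np.roll's wrap-around at the volume border is ignored for brevity).
            energy = np.zeros(labels.shape)
            for shift in offsets:
                energy += np.roll(labels, shift, axis=(0, 1, 2)) != l
            posterior[l - 1] = memberships[l - 1] * np.exp(-energy)  # (3) up to Z
        labels = posterior.argmax(axis=0) + 1                        # equation (5)
        labels[~mask] = 0
    return labels
```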

3. RESULTS AND EVALUATION

In Section 3.1, we present several examples to illustrate how our methods work. We also implemented the kNN method [29] and the FM-EM method [3, 18] for comparison, which are also applied only within the brain intracranial mask obtained as described in Section 2.5. We chose the kNN and FM-EM methods because they are commonly used in brain MR image segmentation [3, 18–20, 30–32]. In our implementation of the kNN method, we used k = 7. Briefly, this method works as follows. To determine the class membership w(c) of a voxel c of a scene C = (C, f), the k feature (intensity) vectors from the training data set that are closest to the feature vector f(c) are first determined. w(c) is then taken to be the label of the majority of these k feature vectors. For the FM-EM method, we provide the algorithm a rough prior estimate of the classification proportions ν_l, 1 ≤ l ≤ L, and initial brain tissue parameters μ_l and Σ_l. A two-step EM process then updates the classification proportions and tissue parameters iteratively until convergence. Finally, classification is done by assigning voxel c to a region of tissue type w(c) according to the maximum a posteriori probability criterion. We present an evaluation of methods SMG and SMH in comparison with kNN and FM-EM in Section 3.2 by using the evaluation framework of [33].
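
For reference, the kNN classification described above can be sketched as follows; scikit-learn is used here purely for illustration and is not the implementation evaluated in this paper.

```python
# Minimal sketch of kNN voxel classification within the intracranial mask,
# assuming pooled training feature vectors and their tissue labels.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_segment(scene, mask, train_features, train_labels, k=7):
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(train_features, train_labels)       # (N, M) features, (N,) labels
    labels = np.zeros(scene.shape[:3], dtype=int)
    labels[mask] = clf.predict(scene[mask])     # classify only voxels in the mask
    return labels
```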

3.1 Qualitative

Our first example, illustrated in Figure 3, comes from MRI of the head of a clinically normal human subject. A fast spin-echo dual-echo protocol was used. The size of this data set is 256 × 256 × 56 with a voxel size of 0.86 mm × 0.86 mm × 3 mm. One slice each of the intensity inhomogeneity corrected and intensity standardized PD-weighted and T2-weighted scenes is shown in Figures 3(a) and (b), respectively. The corresponding slices of the segmentation results from the kNN, FM-EM, SMG, and SMH methods are shown in Figures 3(c)–(f), respectively.

Fig. 3
(a) One slice of an intensity inhomogeneity corrected and standardized PD-weighted MRI scene of the head of a normal human subject. (b) Corresponding T2-weighted slice. Segmentation result by using (c) kNN, (d) FM-EM, (e) SMG, (f) SMH. The tissues from ...

Our second example, shown in Figure 4, is an MRI scene of the head of a patient with Multiple Sclerosis (MS). A fast spin-echo dual-echo protocol was used. The size of this data set is 256 × 256 × 60 with a voxel size of 0.86 mm × 0.86 mm × 3 mm. We note that the FM-EM method performs poorly on lesion segmentation. The reason is that it is difficult to explicitly model the lesions in the standard FM-EM method because of their widely varying appearance in MRI scenes, and because not every individual scan contains a sufficient number of lesions for estimating the model parameters. Leemput et al. [34] described a more sophisticated EM extension that performs tissue classification using a stochastic model for normal brain MRI scenes, and detects MS lesions as outliers that are not well explained by the model.

Fig. 4
(a) One slice of an intensity inhomogeneity corrected and standardized PD-weighted MRI scene of the head of a patient with Multiple Sclerosis. (b) Corresponding T2-weighted slice. Segmentation result by using (c) kNN, (d) FM-EM, (e) SMG, (f) SMH method. ...

To demonstrate the effect of standardization on the kNN method, we display in Figure 5 the results obtained for kNN on the data sets of Figure 3 and Figure 4 with only inhomogeneity correction but no standardization. Comparing Figure 3(c) and Figure 5(a), and Figure 4(c) and Figure 5(b), it is clear that standardization substantially improved the performance of the kNN method.

Fig. 5
(a) The result of kNN segmentation of the scene of Figure 3(a), (b) obtained by first performing only inhomogeneity correction but no standardization. (b) As in (a) but for the scene in Figure 4(a), (b).

3.2 Quantitative

In the evaluation framework of [33], a method’s performance is assessed by three sets of measures – precision, accuracy, and efficiency. Precision here refers to the reliability of the method, taking into consideration any subjective actions required by the method. Accuracy refers to how well the results agree with true segmentation, and efficiency indicates the computational and operator time required in the segmentation process. The measures used under each of these groups are described in separate sections below.

Scene data used in our evaluation experiments came from (D1) 10 different normal subjects and (D2) 10 different MS patients. Both were acquired as fast spin-echo dual-echo PD- and T2-weighted studies. These data sets were selected from our database comprising different routinely applied MRI protocols; they were acquired on the same GE Signa 1.5T scanner with a quadrature transmitter/receiver head coil and consisted of contiguous axial slices covering the entire brain with a scene domain of 256 × 256 × 56, pixel size of 0.86 mm × 0.86 mm, slice thickness of 3.0 mm, FOV of 22 cm, TR of 2500 ms, and TE of 18 ms/90 ms. Two repeated scans were obtained for each patient with a short time gap in between, during which the patients were removed from the scanning table and then positioned again. This is for assessing the precision of the method with respect to patient positioning in the scanner. “True” segmentations of the WM, GM, and CSF objects for D1, and of the additional MS lesion object for D2, were obtained by carefully correcting the segmentation result produced by the method described in [35] under the guidance of an expert. Both D1 and D2 are corrected for inhomogeneity and standardized for intensity before the kNN, SMG, and SMH methods are applied to them. For the reason explained in Section 3.1, we applied the FM-EM method on D1 only, after intensity inhomogeneity correction and standardization, and did not apply this method on D2. To quantify the effect of standardization on the kNN method, we also applied kNN on D1 and D2 with only inhomogeneity correction but no standardization. This latter method is referred to as kNNc below.

3.2.1. Precision

Three factors are measured in evaluating segmentation precision – intra-operator precision PRT1, inter-operator precision PRT2, and repeat-scan precision PRT3. Let C_{O_1} and C_{O_2} be segmented binary scenes representing the same object O in two repeated trials. For inter- and intra-operator precision, we use the following definition to measure the overlap agreement.

$$PRT_i(O_1, O_2) = \frac{|C_{O_1} \cap C_{O_2}|}{|C_{O_1} \cup C_{O_2}|} \times 100, \qquad 1 \le i \le 2,$$
(6)

where ∩ and ∪ represent binary scene intersection and union operations, respectively, and |X| denotes the number of voxels in binary scene X with value 1 assigned to them. For repeat-scan precision, such an overlap measure cannot be made without scene registration. To avoid the error arising from the registration (and subsequent interpolation) required to measure overlap, and to prevent this error from influencing the precision measures, we measure the volume agreement of two repeated trials by

$$PRT_3(O_1, O_2) = \left(1 - \frac{\bigl|\,|C_{O_1}| - |C_{O_2}|\,\bigr|}{\tfrac{1}{2}\bigl[\,|C_{O_1}| + |C_{O_2}|\,\bigr]}\right) \times 100.$$
(7)
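
On boolean segmentation masks, (6) and (7) reduce to a few lines of array arithmetic, as sketched below.

```python
# Direct computation of the precision measures (6) and (7) for two trials
# of the same object, given as boolean arrays over the scene domain.
import numpy as np

def overlap_precision(seg1, seg2):
    """Equation (6): percent overlap agreement of two trials."""
    inter = np.logical_and(seg1, seg2).sum()
    union = np.logical_or(seg1, seg2).sum()
    return 100.0 * inter / union

def volume_precision(seg1, seg2):
    """Equation (7): percent volume agreement of two repeat scans."""
    v1, v2 = float(seg1.sum()), float(seg2.sum())
    return 100.0 * (1.0 - abs(v1 - v2) / (0.5 * (v1 + v2)))
```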

Table 1 and Table 2 show the intra-operator, inter-operator, and repeat-scan precision for segmenting different brain tissues by using the kNNc, kNN, FM-EM, SMG, and SMH methods on D1 and D2, respectively. The mean and standard deviation over the scene population are displayed in the tables. Here, the intra- and inter-operator precision are measured by two operators specifying the seeds needed for creating the brain intracranial mask twice, on different occasions, for the scene sets involved in our evaluation.

TABLE 1
The mean and standard deviation (in parentheses) of intra-, inter-operator, repeat-scan precision, FPVF, and FNVF for segmenting WM, GM, and CSF by using kNN, kNNc, FM-EM, SMG, and SMH methods over D1.
TABLE 2
The mean and standard deviation of intra-, inter-operator, repeat-scan precision, FPVF, and FNVF for segmenting WM, GM, CSF, and MS lesion regions by using kNN, kNNc, SMG, and SMH methods over D2.

3.2.2. Accuracy

For any scene C = (C, f), let C_M be the segmentation result (binary scene) obtained by using any method M, and let C_td be the corresponding scene of “true” delineation. The following measures are defined to characterize the delineation accuracy of method M.

$$\text{True Positive Volume Fraction,}\quad TPVF = \frac{|C_{td} \cap C_M|}{|C_{td}|} \times 100,$$
(8)

$$\text{True Negative Volume Fraction,}\quad TNVF = \frac{|C - (C_M \cup C_{td})|}{|C - C_{td}|} \times 100,$$
(9)

$$\text{False Positive Volume Fraction,}\quad FPVF = \frac{|C_M - C_{td}|}{|C - C_{td}|} \times 100,$$
(10)

$$\text{False Negative Volume Fraction,}\quad FNVF = \frac{|C_{td} - C_M|}{|C_{td}|} \times 100.$$
(11)

TPVF indicates the fraction of the total amount of tissue in the true delineation C_td that was covered by method M. TNVF describes the fraction of the tissue in C that is truly not in the object and was also excluded by method M. FPVF denotes the amount of tissue falsely identified by method M as a fraction of the amount of tissue in C that is truly not in the object. FNVF expresses the fraction of tissue in the true delineation C_td that was missed by method M. The following desirable relationships among the above measures can be easily established from (8)–(11).

$$FPVF = 100 - TNVF,$$
(12)

$$FNVF = 100 - TPVF.$$
(13)

The smaller both FPVF and FNVF are, the better is the delineation accuracy of method M. Table 1 and Table 2 show FPVF and FNVF for segmenting different objects by using different methods over D1 and D2. The mean and standard deviation of the factors over the scene population are listed in the tables.
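
The accuracy measures (8)–(11) can be computed directly on boolean arrays; the sketch below also makes the relations (12) and (13) evident.

```python
# Direct computation of the accuracy measures (8)-(11), where seg is a
# method's binary result and truth is the "true" delineation, both defined
# over the same scene domain C.
import numpy as np

def accuracy_measures(seg, truth):
    seg, truth = seg.astype(bool), truth.astype(bool)
    n_true = truth.sum()                      # |C_td|
    n_not_true = truth.size - n_true          # |C - C_td|
    tpvf = 100.0 * np.logical_and(truth, seg).sum() / n_true
    fnvf = 100.0 * np.logical_and(truth, ~seg).sum() / n_true
    fpvf = 100.0 * np.logical_and(~truth, seg).sum() / n_not_true
    tnvf = 100.0 * np.logical_and(~truth, ~seg).sum() / n_not_true
    # Relations (12) and (13) hold by construction:
    # fpvf == 100 - tnvf and fnvf == 100 - tpvf.
    return tpvf, tnvf, fpvf, fnvf
```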

3.2.3. Efficiency

The proposed algorithms are implemented within the 3DVIEWNIX software system [27] and were executed on an Intel Pentium IV PC with a 1.7 GHz CPU under the Red Hat Linux OS, version 2.4.7. In determining the efficiency of a segmentation method, four types of efficiency factors need to be considered [33]: computer time required for algorithm training (t1c) and for segmenting each scene (t2c), and operator time required for algorithm training (t1h) and for segmenting each scene (t2h). Table 3 lists the computer time and operator time required (mean and standard deviation) per scene for segmenting the objects by using the kNN, kNNc, FM-EM, SMG, and SMH methods. Note that, for all methods except FM-EM, training is needed only once for an application domain and not for every scene to be segmented. (We note that since the same set of scene data and their segmented and expert-corrected binary scenes were used in all methods, the operator time component for training is the same for all methods, and is set to 0.) For the FM-EM method, since it is an adaptive method which attempts to account for the variations in each particular scene, no training is needed. However, the computer time for segmentation by using FM-EM is sensitive to the initial input parameters. In our implementation of FM-EM, we first calculated the mean intensity vector μ and covariance matrix Σ for the whole brain tissue. Then we set 0.5μ and Σ as the initial parameters for WM, μ and Σ for GM, and 1.5μ and Σ for CSF. The computer time listed for segmentation includes, on average, 20 seconds for inhomogeneity correction and 5 seconds for standardization per scene (except for the kNNc method, which did not use the standardization operation). The operator time for segmentation consists of the time taken to specify the seed points for different tissues needed for creating the intracranial mask.

TABLE 3
The mean and standard deviation (in parentheses) of computer time and operator time required per scene for segmentation for the kNN, kNNc, FM-EM, SMG, and SMH methods.

We have also tested another version of FM-EM, denoted here FM-EMc, which took inhomogeneity-corrected, but not standardized, scenes as input. Its behavior was very similar to that of FM-EM, as described below, except that it takes many more iterations to converge than FM-EM.

From paired t-tests carried out to compare the mean values of the measures between each of the possible pairs of methods in Table 1 and Table 2 (10 pairs for D1 and 6 pairs for D2), for each object and for each measure, we make the following observations. We use the notation x > y to denote that method x is better than method y and that the difference between the methods is statistically significant (p < 0.05). When the difference is not significant, we use the notation x ≈ y. The results of the comparisons are summarized in Table 4. For WM, GM, CSF, and MS lesions, there is no difference among the methods, or the difference is small and statistically insignificant, for all precision measures. Understandably, the repeat-scan precision is lower for all objects and for all methods than the corresponding intra- and inter-operator precision.

TABLE 4
Comparison of the five methods by using paired t-tests on the basis of precision, accuracy, and efficiency. (M1:kNN, M2:kNNc, M3:FM-EM, M4:SMG, M5:SMH. M3 is applied on D1 only)

From the accuracy viewpoint, for all objects, the following rank order of methods seems to emerge: SMH ≈ SMG ≈ FM-EM ≈ FM-EMc > kNN > kNNc. It is clear that standardization helps in improving the accuracy of the kNN method considerably. SMH, SMG, and FM-EM (and FM-EMc) are similar in accuracy, but are considerably better than kNN, and SMH seems to have a slight edge over SMG. We note that, for all methods, FNVF is greater than FPVF. From Table 1 and Table 2, it may be observed that the accuracy of all methods is better on normal subjects’ data sets than on patients’. This underscores the need to evaluate segmentations of tissue regions appearing to be non-pathological (such as WM, GM, CSF) on patient data sets rather than on data sets from normal subjects.

From the efficiency viewpoint, t2c and t2h are the crucial factors. In our case, all methods have the same t2h. Given this, all methods except FM-EM are similar in efficiency, although kNN and kNNc are slightly more expensive computationally. Although kNN is slightly more expensive computationally than kNNc, this price is definitely worth paying for the considerable improvement in accuracy that results. The FM-EM method does not need training, but takes much more computation time for segmentation. Since FM-EMc requires more iterations to converge than FM-EM (typically 1.5 times as many), the FM-EMc method has the worst efficiency among all methods.

4. CONCLUDING REMARKS

Most previously published statistical MRI scene segmentation methods require complicated parameter estimation procedures to obtain the parameters of the tissue intensity distributions. In this paper, we have employed an intensity standardization technique to make the MRI scene intensities have tissue-specific meaning. This greatly simplifies the parameter estimation process and allows the parameters to be fixed once and for all. We have devised two simple, non-iterative, membership-based methods called SMG and SMH for effectively segmenting brain MR images. Their simplicity and effectiveness stem mainly from MRI scene intensity standardization.

We have demonstrated that these methods have high intra-operator, inter-operator, and repeat-scan precision, accuracy, and efficiency in segmenting the component tissue regions of the brain, and that their efficacy (in terms of precision, accuracy, and efficiency) is better than that of the commonly used kNN method. We have observed that these methods have precision and accuracy similar to those of the more sophisticated FM-EM method when the latter utilizes corrected and standardized images as input; however, the new simple methods have much better efficiency. We have also demonstrated how the accuracy of the kNN method improves considerably upon intensity standardization without compromising its precision and efficiency. We suggest that the accuracy of any method which utilizes the intensity information from a family of scenes for fixing the segmentation parameters can be similarly improved. For those methods whose parameters are determined on a per-scene basis, their efficiency can be improved by intensity standardization by obviating the need for per-scene adjustment of parameters and/or minimizing post hoc correction of the segmentation. Another observation we made, not surprisingly, is that, for the same segmentation tasks, the accuracy achieved by all methods was poorer on patient images than on images obtained from healthy subjects. This suggests that evaluations conducted on images of normal subjects cannot be taken to imply applicability to patient data. Further, when comparing quantitative information (such as volume) between normal control subjects and patients, this difference in accuracy and its variability must be taken into account.

ACKNOWLEDGMENTS

The research reported here is supported by a US DHHS grant EB 004395.


REFERENCES

1. Pal NR, Pal SK. A review of image segmentation techniques. Pattern Recognition. 1993;26(9):1277–1294.
2. Pham D, Xu C, Prince J. Current methods in medical image segmentation. Annual Review of Biomedical Engineering. 2000;2:315–337. [PubMed]
3. Wells WM, Grimson EL, Kikinis R, Jolesz FA. Adaptive segmentation of MRI data. IEEE Transactions on Medical Imaging. 1996;15(4):429–442. [PubMed]
4. Guillemaud R, Brady M. Estimating the bias field of MR images. IEEE Transactions on Medical Imaging. 1997;16(3):238–251. [PubMed]
5. Van Leemput K, Maes F, Vandermeulen D, Suetens P. Automated model-based bias field correction of MR images of the brain. IEEE Transactions on Medical Imaging. 1999;18(10):885–896. [PubMed]
6. Sled JG, Zijdenbos AP, Evans AC. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Transactions on Medical Imaging. 1998;17(1):87–97. [PubMed]
7. Styner M, Brechbuhler C, Szekely G, Gerig G. Parametric estimate of intensity inhomogeneities applied to MRI. IEEE Transactions on Medical Imaging. 2000;19(3):153–165. [PubMed]
8. Likar B, Viergever MA, Pernus F. Retrospective correction of MR intensity inhomogeneity by information minimization. IEEE Transactions on Medical Imaging. 2001;20(12):1398–1410. [PubMed]
9. Zhuge Y, Udupa JK, Liu J, Saha PK, Iwanaga T. A scale-based method for correcting background intensity variation in acquired images; Proceedings of SPIE Medical Imaging; San Diego, USA: 2002. pp. 1103–1111.
10. Nyul LG, Udupa JK. On standardizing the MR image intensity scale. Magnetic Resonance in Medicine. 1999;42(6):1072–1081. [PubMed]
11. Nyul LG, Udupa JK, Zhang X. New variants of a method of MRI scale standardization. IEEE Transactions on Medical Imaging. 2000;19(2):143–150. [PubMed]
12. Madabhushi A, Udupa JK. Interplay between intensity standardization and inhomogeneity correction in MR images. IEEE Transactions on Medical Imaging. 2005;24(5):561–576. [PubMed]
13. Pitiot A, Toga AW, Thompson PM. Adaptive elastic segmentation of brain MRI via shape-model-guided evolutionary programming. IEEE Transactions on Medical Imaging. 2002;21(8):910–923. [PubMed]
14. Duta N, Sonka M. Segmentation and interpretation of MR brain images: An improved active shape model. IEEE Transactions on Medical Imaging. 1998;17(6):1049–1062. [PubMed]
15. Cootes T, Taylor C, Cooper D. Active shape models-their training and application. Computer Vision and Image Understanding. 1995;61(1):38–59.
16. Ge Y, Udupa JK, Nyul LG, Wei L, Grossman RI. Numerical tissue characterization in MS via standardization of the MR image intensity scale. Journal of Magnetic Resonance Imaging. 2000;12(5):715–721. [PubMed]
17. Pham DL, Prince JL. Adaptive fuzzy segmentation of Magnetic Resonance images. IEEE Transactions on Medical Imaging. 1999;18(9):737–752. [PubMed]
18. Liang Z, MacFall JR, Harrington DP. Parameter estimation and tissue segmentation from multispectral MR images. IEEE Transactions on Medical Imaging. 1994;13(3):441–449. [PubMed]
19. Zhang Y, Brady M, Smith S. Segmentation of brain MR images through a hidden markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging. 2001;20(1):45–57. [PubMed]
20. Van Leemput K, Maes F, Vandermeulen D, Suetens P. Automated model-based tissue classification of MR images of the brain. IEEE Transactions on Medical Imaging. 1999;18(10):897–908. [PubMed]
21. Chakraborty A, Staib L, Duncan J. Deformable boundary finding in medical images by integrating gradient and region information. IEEE Transactions on Medical Imaging. 1996;15(6):859–870. [PubMed]
22. Amini L, Soltanian-Zadeh H, Lucas C, Gity M. Automatic segmentation of Thalamus from brain MRI integrating fuzzy clustering and dynamic contours. IEEE Transactions on Biomedical Engineering. 2004;51(5):800–811. [PubMed]
23. Imielinska C, Metaxas D, Udupa JK, Jin Y, Chen T. Hybrid segmentation of anatomical data; Proceedings of MICCAI; Utrecht, The Netherlands: 2001. pp. 1048–1057.
24. Zhuge Y, Liu J, Udupa JK. Membership-based multiprotocol brain MR image segmentation; Proceedings of SPIE Medical Imaging; San Diego, USA: 2003. pp. 1572–1579.
25. Udupa JK, Samarasekera S. Fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation. Graphical Models and Image Processing. 1996;58(3):246–261.
26. Zhuge Y, Udupa JK, Saha PK. Vectorial scale-based fuzzy connected image segmentation. Computer Vision and Image Understanding. 2006;101(3):177–193.
27. Udupa JK, Odhner D, Samarasekera S, Goncalves RJ, Iyer K, Venugopal K, Furuie S. 3DVIEWNIX: An Open, transportable, multidimensional multimodality, multiparametric imaging software system; Proceedings of SPIE Medical Imaging; Newport Beach, USA: 1994. pp. 58–73.
28. Li SZ. Markov Random Field Modeling in Computer Vision. Berlin, Germany: Springer-Verlag; 1995.
29. Duda RO, Hart PE. Pattern Classification and Scene Analysis. John Wiley & Sons; 1973.
30. Bezdek JC, Hall LO, Clarke LP. Review of MR image segmentation techniques using pattern recognition. Medical Physics. 1993;20(4):1033–1048. [PubMed]
31. Vaidyanathan M, Clarke L, Heidman C, Velthuizen R, Hall L. Normal brain volume measurement using multispectral MRI segmentation. Magnetic Resonance Imaging. 1997;15(1):87–97. [PubMed]
32. Warfield SK, Kaus M, Jolesz FA, Kikinis R. Adaptive, template moderated, spatially varying statistical classification. Medical Image Analysis. 2000;4(1):43–55. [PubMed]
33. Udupa JK, LaBlanc VR, Zhuge Y, Imielinska C, Schmidt H, Currie LM, Hirsch BE, Woodburn J. Methodology for evaluating image segmentation algorithms. Computerized Medical Imaging and Graphics. 2006;30(2):75–87. [PubMed]
34. Van Leemput K, Maes F, Vandermeulen D, Colchester A, Suetens P. Automated segmentation of Multiple Sclerosis lesions by model outlier detection. IEEE Transactions on Medical Imaging. 2001;20(8):677–688. [PubMed]
35. Udupa JK, Wei L, Samarasekera S, Miki Y, Buchem MA, Grossman RI. Multiple sclerosis lesion quantification using fuzzy connectedness principles. IEEE Transactions on Medical Imaging. 1997;16(5):598–609. [PubMed]