


Biomed Res Int. 2017; 2017: 8381094.

Published online 2017 February 19. doi: 10.1155/2017/8381094

PMCID: PMC5337796

Biocomputing Research Center, School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China

*Kuanquan Wang: Email: wangkq@hit.edu.cn

Academic Editor: Jiang Du

Received 2016 November 25; Revised 2017 January 8; Accepted 2017 January 23.

Copyright © 2017 Chao Ma et al.

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Segmentation of the left atrium (LA) from cardiac magnetic resonance imaging (MRI) datasets is of great importance for image-guided atrial fibrillation ablation, LA fibrosis quantification, and cardiac biophysical modelling. However, automated LA segmentation from cardiac MRI is challenging due to limited image resolution, considerable variability in anatomical structures across subjects, and the dynamic motion of the heart. In this work, we propose a combined random forests (RFs) and active contour model (ACM) approach for fully automatic segmentation of the LA from cardiac volumetric MRI. Specifically, we employ the RFs within an autocontext scheme to effectively integrate contextual and appearance information from multisource images for inferring the LA shape. The inferred shape is then incorporated into a volume-scalable ACM to further improve segmentation accuracy. We validated the proposed method on cardiac volumetric MRI datasets from the STACOM 2013 and HVSMR 2016 databases and showed that it outperforms other state-of-the-art automated LA segmentation methods. The validation metrics, average Dice coefficient (DC) and average surface-to-surface distance (S2S), were 0.9227 ± 0.0598 and 1.14 ± 1.205 mm, versus 0.6222–0.878 and 1.34–8.72 mm obtained by the other methods, respectively.

Atrial fibrillation (AF) is the most common cardiac electrical disorder and a major cause of stroke [1]. During the past decade, ablation of AF has become a commonly performed therapy in many major hospitals throughout the world [2]. Accurate segmentation of the LA anatomy from MR images is of great importance for guiding ablation during the procedure, for automatically quantifying LA fibrosis, which is highly associated with postablation AF recurrence [3], and for creating cardiac biophysical models [4, 5].

However, developing automated LA segmentation techniques is technically challenging for several reasons. First, the myocardial walls of the LA are not uniform in thickness, and some areas can be very thin (2.3 ± 0.9 mm [6]), which makes them challenging to image in cardiac MRI even at the best available resolutions. Although the LA can be delineated through intensity gradients between the blood pool and surrounding tissues, adjacent anatomical structures, such as the other cardiac chambers, the descending aorta, and the coronary sinus, present signal intensities similar to those of the blood pool, and even manual segmentations by expert raters may show significant variation. Furthermore, LA structures vary considerably across subjects in terms of the topological variants of the pulmonary veins (PVs) and the shape and size of the LA appendage (LAA) [6], prohibiting the use of strong statistical constraints. Finally, the boundary between the LA and the left ventricle is difficult to define due to the varying opening positions of the mitral valve (MV) leaflets [7].

In the past several years, a number of techniques have been applied to heart chamber segmentation to meet the varied needs of clinical diagnosis and therapy, including active contour models, graph cuts, and machine learning, as well as knowledge-based approaches such as statistical shape models and atlas-based methods. Compared with the literature on cardiac ventricle segmentation, that on LA segmentation is much less abundant. Within the LA segmentation field, random forests and active contour models, together with their variants, are particularly popular.

The random forests (RFs) [8] machine learning framework has recently attracted increased attention in medical image segmentation [9–11]. RFs are inherently suited to handling large amounts of high-dimensional multiclass data and have proven accurate and robust for many cardiac tissue segmentation tasks [12]. For example, Margeta et al. [13] proposed to automatically separate the LA from other blood pools in 3D cardiac MRI by using context-rich features within a decision forests scheme. Schneider et al. [14] proposed a framework for joint 3D vessel segmentation and centerline extraction using multivariate Hough voting and oblique RFs, with local image features extracted by steerable filters. Mahapatra [15] used RFs to learn discriminative image features and quantify their importance; the learned feature selection strategy then guided a graph cut to achieve the final segmentation. Although these methods have achieved promising results, they remain dependent on the quality and amount of labeled training data, and typical RF outputs are not geometrically constrained.

Active contour models (ACMs) [16, 17] typically use image edge [18, 19] or region [20, 21] descriptors to drive the active contour toward object boundaries [22, 23]. They have been extensively explored in cardiac segmentation with promising results [24]. For example, Giannakidis et al. [25] employed a user-guided, level-set-based 3D geodesic active contour method to demarcate the left atrial endocardial surface. Avendi et al. [26] and Ngo et al. [27] combined deep learning algorithms with deformable models to develop fully automatic left ventricle (LV) segmentation tools for short-axis cardiac MRI datasets. Yu et al. [28] used sparsity constraints to alleviate gross errors and integrated them seamlessly with deformable models for cardiac motion analysis. Zhou et al. [29] established a new ACM in a variational level set formulation for cardiac CT image segmentation. The limitations of most conventional ACMs include dependence on a particular contour initialization and poor segmentation of images with challenging conditions, such as intensity inhomogeneity and low tissue contrast.

Atlases may be used within ACM or RF frameworks as a priori anatomical information [30]. The segmentation of tissues is performed under the guidance of a single atlas [31] or multiple atlases [32, 33]. With good target-to-atlas registration, atlas-based methods often perform well for cardiac image segmentation even in the presence of reduced tissue contrast and increased noise [34, 35]. For example, Zhuang and Shen [36] presented a whole-heart segmentation method employing multimodality atlases based on a multiscale patch strategy and a global atlas ranking scheme. Bai et al. [37] proposed to combine intensity, gradient, and contextual information into an augmented feature vector and incorporate it into multiatlas segmentation for cardiac MRI. The main drawback of atlas-based techniques is the dependence of the segmentation results on the quality of the registration between the target image and the multiple atlases.

To address these limitations, and inspired by the pioneering work in [26, 38–40], we tackle the complex problem of LA segmentation with a combined RFs and ACM approach. The proposed approach integrates information from multisource images for fully automated, accurate, and robust LA segmentation. Specifically, we use a concatenated classification forest to iteratively learn a sequential tissue classifier from each training subject. Inspired by the autocontext scheme [41], the tissue probability map generated at each iteration is used as an additional image source to train the classifier at the next iteration. By fusing the estimates from all trained concatenated classifiers, we can infer the LA structure for a given testing subject. The inferred LA structure is then fed into a volume-scalable ACM as both an initial contour and a shape prior to accomplish the final segmentation. Compared to previous methods using RFs and multiple atlases for heart chamber segmentation [12, 24], the proposed method effectively integrates image information from multiple training subjects without requiring registration. Furthermore, our method reformulates the RF classification task as a contour evolution scheme, which is important for accurate and smooth segmentation of LA images. In addition, in contrast to previous ACM methods, our method is automated and more robust to low contrast between adjacent tissues. Validation on two publicly available datasets demonstrates significant advantages of the proposed method.

The remainder of the manuscript is organized as follows. Section 2 describes the proposed method in detail. Section 3 presents the implementation and experimental results, followed by a discussion in Section 4; finally, Section 5 concludes the paper.

The flowchart of the proposed method is shown in Figure 1. In this paper, we formulate LA segmentation as a hybrid problem combining voxel-wise tissue classification and tissue-boundary contour evolution. Specifically, given volumetric MRI datasets as input, the method is carried out in three stages:

(1) A variety of image features are extracted from the MRI volumes to fully capture both local and contextual image information. These features serve as the input to the subsequent stages.

(2) The LA structure is inferred using concatenated random forests (CRFs) in an autocontext scheme [38, 41]. Specifically, the RFs are employed as a concatenated classifier to produce a sequence of tissue probability maps for the LA structure by voxel-wise classification. The LA structure is delineated iteratively by assigning the structure label with the largest probability at each voxel within MRI volumes.

(3) In order to refine the structure labels, the voxel-wise classification is further combined with a contour evolution scheme by feeding the inferred LA structure into a volume-scalable ACM [40]. The final segmentation is accomplished by driving the active contour evolving and converging at the desired position of the LA boundary.

Once the training for the voxel classification and contour evolution stages of the flowchart has been completed offline for each subject, the system can be deployed for the automatic LA segmentation task. The three stages are further elaborated as follows.

Denote the vector-valued image to be segmented as *I* : *Ω* → ℝ^{d}, where *Ω* ⊂ ℝ^{3} is the image domain and *d* ≥ 1 is the dimension of the vector *I*(*x*). In order to fully utilize the information in the volumetric data, we extract both local and context-aware image features at each voxel, forming a feature map *f* : *Ω* → ℝ^{D_f}. The segmentation is then performed in this feature space. Features of any kind, from any modality, can be integrated into the proposed framework, such as Fourier [42], wavelet [43], SIFT [44], and HOG [45] features. In this work, we use volume-scalable local robust statistics (VSLRS) [40, 46] to extract the feature vectors because of their insensitivity to image noise and their computational efficiency.

Numerically, to compute the robust statistics in local volumes at a controllable scale while assigning different weights to voxels according to their distance from the central voxel, we define the weighting neighborhood using a nonnegative kernel function *K* such that *K*(*u*) ≤ *K*(*v*) for |*u*| > |*v*| and ∫*K*(*x*)*dx* = 1.

There are various choices for the kernel function. In this work, we use the Gaussian kernel

$$K_{\sigma}(u) = \frac{1}{(2\pi)^{n/2}\sigma^{n}}\, e^{-|u|^{2}/2\sigma^{2}}$$

(1)

with a scale parameter *σ* > 0.
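As a quick numerical check, the kernel in (1) can be evaluated directly; the following is a minimal sketch (the function name and calling convention are ours, not part of the method):

```python
import numpy as np

def gaussian_kernel(u, sigma, n=3):
    """Isotropic Gaussian kernel K_sigma(u) of Eq. (1).

    u     : displacement vector of length n
    sigma : scale parameter (> 0); larger sigma widens the weighting
            neighborhood used by the volume-scalable statistics.
    """
    u = np.asarray(u, dtype=float)
    norm = (2.0 * np.pi) ** (n / 2.0) * sigma ** n
    return np.exp(-np.dot(u, u) / (2.0 * sigma ** 2)) / norm
```

The kernel is maximal at the center, decreases monotonically with |*u*|, and integrates to one, exactly the properties required of *K* above.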

Then, for each voxel of interest *x* in the image, we define the VSLRS feature vector *f*(*x*) ∈ ℝ^{D_f} by combining several VSLRS derived in a randomly displaced and randomly scaled local region *R*_{random}, or in a scalable local region *R*_{centric} centered at *x*, within a neighborhood *B*(*x*) ⊂ *Ω* around *x*. More explicitly, within the local regions *R*_{random} and *R*_{centric}, whose sizes are controlled by the scale parameters *σ* of the kernel functions *K*_{σ} in (1), we first normalize the intensities to have unit ℓ2 norm [47]. Then, we denote

$$\mathrm{VSMEAN}(x) := \frac{K_{\sigma}(x) \ast I(x)}{K_{\sigma}(x) \ast 1}$$

(2)

as the volume-scalable intensity mean value. In addition, to detect intensity changes caused by structural changes while eliminating the influence of outliers, the intensity range between the first and third quartiles, namely, the volume-scalable interquartile range VSIQR(*x*), is computed as the second feature. Furthermore, the weighted intensity variance is chosen as the third feature and is calculated as

$$\mathrm{WIV}(x) := \left(\frac{K_{\sigma}(x) \ast \left(I(x) - \mathrm{VSMEAN}(x)\right)^{2}}{K_{\sigma}(x) \ast 1}\right)^{1/2}.$$

(3)

Consequently, we define the VSLRS feature vector *f*(*x*) as

$$f(x) = f_{\mathrm{local}}(x) - b\, f_{\mathrm{contextual}}(x), \quad b \in \{0, 1\}$$

(4)

with

$$\begin{aligned}
f_{\mathrm{local}}(x) &= \left(\mathrm{VSMEAN}(x), \mathrm{VSIQR}(x), \mathrm{WIV}(x)\right)_{R_{\mathrm{centric}}}^{T} \in \mathbb{R}^{3}, \quad R_{\mathrm{centric}} \in B(x),\\
f_{\mathrm{contextual}}(x) &= \left(\mathrm{VSMEAN}(x), \mathrm{VSIQR}(x), \mathrm{WIV}(x)\right)_{R_{\mathrm{random}}}^{T} \in \mathbb{R}^{3}, \quad R_{\mathrm{random}} \in B(x),
\end{aligned}$$

(5)

where the parameter *b* ∈ {0,1} indicates whether the feature vectors are fed into the contour evolution stage or the voxel classification stage, as shown in Figures 2(a) and 2(b), respectively. Note that the feature vectors derived in *R*_{centric} capture the local image information around a given voxel, while the feature vectors jointly derived in *R*_{random} and *R*_{centric} capture contextual image information, which is nonlocal but of short range. In theory, for each voxel in the volumetric data, we can extract an unlimited number of such feature vectors by varying the locations and scales of the local regions. In our implementation, we draw the random feature vectors for a voxel from a predefined range with a maximum local region scale of 5 × 5 × 5 and a cuboid search-space patch of 31 × 31 × 31 [48].
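To make the feature computation concrete, here is a hedged sketch of the three local statistics using standard SciPy filters; the normalized Gaussian of `scipy.ndimage.gaussian_filter` plays the role of *K*_{σ}, and the box-based interquartile range is our simple stand-in for the kernel-weighted quartiles (all names are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, percentile_filter

def vslrs_features(volume, sigma=1.0, iqr_size=3):
    """Sketch of the three VSLRS features (Eqs. (2)-(3)) at every voxel.

    volume   : 3D intensity array
    sigma    : kernel scale controlling the local-volume size
    iqr_size : box width used for the quartile stand-in
    """
    vol = np.asarray(volume, dtype=float)
    # gaussian_filter uses a normalized kernel, so K_sigma * 1 = 1 and
    # VSMEAN of Eq. (2) reduces to Gaussian smoothing of the image.
    vsmean = gaussian_filter(vol, sigma)
    # Eq. (3): weighted standard deviation of I around the local mean.
    wiv = np.sqrt(np.maximum(gaussian_filter((vol - vsmean) ** 2, sigma), 0.0))
    # Interquartile range in a local box: a robust spread measure.
    q75 = percentile_filter(vol, 75, size=iqr_size)
    q25 = percentile_filter(vol, 25, size=iqr_size)
    vsiqr = q75 - q25
    return np.stack([vsmean, vsiqr, wiv], axis=-1)  # shape (*vol.shape, 3)
```

On a homogeneous region all three statistics are flat (VSIQR and WIV vanish), while near a tissue boundary VSIQR and WIV rise, which is what makes them useful as boundary-sensitive features.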

We utilize RFs as a concatenated classifier to infer the LA structure within an autocontext scheme. An overview of the proposed classification framework is illustrated in Figure 3. Similar to the atlas forests [39], we encode a single image by training one corresponding concatenated classifier exclusively on the contextual feature maps from that training image. Given a testing image as input, each concatenated classifier returns its own tissue probability map for the target. The target structure is then inferred by fusing the probability maps obtained from the different individual concatenated classifiers.

Overview of the proposed concatenated classification framework. A single CRF is trained for an individual sample image. The labeling of a new target is performed by testing on each individual CRF and fusing the obtained probability maps.

The autocontext metaframework recursively explores and fuses contextual as well as appearance information [41]. This means running a sequence of classifiers for each image, such that the probabilistic output of one classifier is fed into the next for refinement. To this end, we train the concatenated classification forests in the training stage, each with multisource feature maps as input. For simplicity, we detail the workflow of only one sequence of classifiers. Denote the tissue probability map as *M* : *Ω* → ℝ, where *Ω* ⊂ ℝ^{3} is the image domain, and let *f*(*I*) and *f*(*M*) be the feature maps for the original image *I* and the tissue probability map *M*, respectively. We initiate the process by taking only the appearance feature map *f*(*I*) from the original image as input for voxel-wise classification in the first iteration. In later iterations, the context feature map *f*(*M*) obtained from the previous iteration acts as an augmented source feature map. The use of these contextual features from tissue probability maps improves the accuracy of the voxel-wise classification by introducing a spatial coherence constraint, in addition to providing a better initialization from the former classifier's tissue probability map. Another consequence of using the contextual information is that registration of the training samples to the target image is avoided while spatial awareness is preserved.
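The iterative scheme above can be sketched with off-the-shelf random forests; this is an illustrative simplification in which the raw class probabilities stand in for the VSLRS features *f*(*M*) of the probability map, and all function names are ours:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_autocontext(features_img, labels, n_iters=3, n_trees=20, seed=0):
    """Autocontext training loop for one training subject (sketch).

    features_img : (n_voxels, d) appearance features f(I)
    labels       : (n_voxels,) tissue labels
    Each iteration appends the previous classifier's probability map to
    the feature vector, so later classifiers see appearance + context.
    """
    classifiers, feats = [], features_img
    for it in range(n_iters):
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed + it)
        rf.fit(feats, labels)
        classifiers.append(rf)
        prob_map = rf.predict_proba(feats)           # stands in for f(M)
        feats = np.hstack([features_img, prob_map])  # appearance + context
    return classifiers

def apply_autocontext(classifiers, features_img):
    """Run the trained cascade on a new image's feature matrix."""
    feats = features_img
    for rf in classifiers:
        prob_map = rf.predict_proba(feats)
        feats = np.hstack([features_img, prob_map])
    return prob_map  # probabilities from the last classifier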

We employ RFs as the classifier since they can efficiently handle large amounts of high-dimensional training data, which is important when utilizing large numbers of high-dimensional image features [38]. RFs consist of a set of decision trees and, as a supervised learning technique, generally operate in two phases: training and testing. In the next section, we detail our adaptation of the RF training and labeling procedures to the task of LA segmentation.

During training, each decision tree *t* in the concatenated random forest CRF_{i} is trained on the specific *i*-th training sample, which consists of the original volumetric image and the corresponding class labels. The original volumetric image is further augmented by the tissue probability map as an additional source image. Specifically, each decision tree *t* learns a weak class predictor *p*_{t}(*c* ∣ *f*(*x*, *I*, *M*)) for a given voxel *x* ∈ *Ω* by using its high-dimensional feature representation *f*(*x*, *I*, *M*).

In the first iteration of training, each tree *t* learns a class predictor *p*_{t}(*c* ∣ *f*(*x*, *I*)) using only the image appearance feature map from the original image *I*. Training involves recursively splitting the training voxels at each node of the decision tree based on their high-dimensional feature representations. To improve generalization, we inject randomness into the training with a bagging strategy: each tree *t* in CRF_{i} has access to a different sample subspace of the specific *i*-th training sample space, and a feature subspace of the whole feature space is randomly selected at each node in the tree. Then, for each sample voxel *x* considered at a given internal node, a binary split is performed independently along each feature of the feature subspace with respect to a number of thresholds uniformly distributed along the range of each feature. Along with the split of the sample voxels into the left or right child node, the optimal combination of feature and threshold is learned by maximizing the *information gain* at the node [49]. The tree continues splitting and stops when certain conditions are satisfied. Finally, by putting all sample voxels of the *i*-th sample image through the trained forest, we can estimate the tissue probability map *M*_{i} for this iteration.
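The node-splitting rule described above, thresholds spread uniformly over each feature's range and scored by information gain, can be sketched as follows (a simplified single-feature version; names are illustrative):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(feature_values, labels, n_thresholds=10):
    """Pick the threshold on one feature that maximizes information gain.

    Thresholds are spread uniformly over the feature's range, as in the
    node-training procedure above. Returns (threshold, gain).
    """
    lo, hi = feature_values.min(), feature_values.max()
    parent = entropy(labels)
    best = (None, -np.inf)
    for thr in np.linspace(lo, hi, n_thresholds + 2)[1:-1]:
        left = labels[feature_values <= thr]
        right = labels[feature_values > thr]
        if len(left) == 0 or len(right) == 0:
            continue  # a split must send voxels to both children
        child = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        gain = parent - child
        if gain > best[1]:
            best = (thr, gain)
    return best
```

A perfectly separating threshold attains a gain equal to the parent entropy, which is the upper bound for any split.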

In the later iterations of training, the feature map *f*(*x*, *I*, *M*), extracted from both the original image *I* and the tissue probability map *M* iteratively updated from the previous iteration, is used to learn a class predictor *p*_{t}(*c* ∣ *f*(*x*, *I*, *M*)). All later training iterations are otherwise performed in the same way as the first.

After training, we obtain a sequence of classifiers CRF for each training sample and associate each leaf node *l* in the CRF with a class predictor *p*^{l}(*c* ∣ *f*(*x*, *I*, *M*)) by simply counting the labels of its incoming training samples.

During labeling, each voxel *x* in the target image *I* is labeled by tree testing on the trained CRFs, followed by fusion of the probabilistic estimates from the individual concatenated trees. Specifically, by applying the learned split parameters to the high-dimensional feature representation of a voxel *x* (*f*(*x*, *I*) in the first iteration and *f*(*x*, *I*, *M*) in later iterations), each tree *t* from a given CRF yields a class probability *p*_{t}(*c* ∣ *f*(*x*, *I*, *M*)).

The probabilistic estimate of the testing voxel *x* from the CRF_{cf} with *n*_{t} trees at each iteration is then computed as the average of all individual tree predictions, that is,

$$p_{cf}\left(c \mid f(x, I, M)\right) = \frac{1}{n_{t}} \sum_{i=1}^{n_{t}} p_{t_{i}}\left(c \mid f(x, I, M)\right).$$

(6)

The final probability of the testing voxel *x* is achieved by averaging these probabilities from the *n*_{cf} CRFs at the last iteration, that is,

$$p\left(c \mid f(x, I, M)\right) = \frac{1}{n_{cf}} \sum_{i=1}^{n_{cf}} p_{\mathrm{CRF}_{i}}\left(c \mid f(x, I, M)\right).$$

(7)

The LA structure is subsequently delineated by selecting arg max_{c} *p*(*c* | *f*(*x*, *I*, *M*)) for each testing voxel.
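The fusion steps of (6) and (7) followed by the argmax labeling rule can be sketched as:

```python
import numpy as np

def fuse_and_label(tree_probs_per_crf):
    """Average probabilities as in Eqs. (6)-(7), then take the argmax.

    tree_probs_per_crf : list over CRFs; each entry is an array of shape
    (n_trees, n_voxels, n_classes) holding per-tree class probabilities.
    Eq. (6) averages over trees within one CRF; Eq. (7) averages the
    resulting maps over the n_cf CRFs; the label is the argmax class.
    """
    per_crf = [probs.mean(axis=0) for probs in tree_probs_per_crf]  # Eq. (6)
    fused = np.mean(per_crf, axis=0)                                # Eq. (7)
    return fused.argmax(axis=-1), fused
```

Because both averages are uniform, swapping the order of the two means does not change the fused map; the two-step form simply mirrors the per-CRF then cross-CRF structure of the classifier.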

The voxel-wise classification is performed for each voxel independently, which may introduce artificial anatomical errors into the delineated LA structure [38, 50]. To address this limitation, we employ a volume-scalable ACM combined with the LA structure inferred in the previous stage for segmentation refinement in the final stage. ACMs drive an active contour to evolve and converge at the desired position of the object boundary by minimizing an energy functional. Compared with RFs, ACMs provide geometrically constrained segmentation results with subpixel accuracy. Most conventional ACMs are not fully automated, owing to the need for contour initialization, and usually fail to segment cardiac MRI with intensity inhomogeneity and low tissue contrast. We address these issues by using the inferred LA structure both as the initial contour and as a shape prior integrated into a volume-scalable energy functional.

Denote the target object volume and the background volume as *Ω*_{1} ⊂ *Ω* and *Ω*_{2} ⊂ *Ω*, respectively. In particular, *Ω*_{Inf_1} and *Ω*_{Inf_2} denote the seeded LA volume and the seeded background volume, respectively, inferred from the previous stage. Then, with the VSLRS feature vector defined in (4), each voxel *x* can be characterized by combining the probability distribution functions (PDFs) of the feature vectors derived in the inferred volumes with those derived in a neighborhood around voxel *x*. The characterization of voxel *x* is described as follows:

$$P_{i}(x) = (1 - \omega)\,\frac{1}{\left|\Omega_{\mathrm{Inf}\_i}\right|} \sum_{z \in \Omega_{\mathrm{Inf}\_i}} p\left(f(x) - f(z)\right) + \omega \int_{\Omega_{i}} K_{\eta}(x - y)\, p\left(\mu_{i}(x) - f(y)\right) dy, \quad i = 1, 2,$$

(8)

where in the first term *z* is a seed voxel belonging to the inferred volumes, and the second term is a weighted average of the probability distribution *p* over a neighborhood of voxel *x*, whose size is controlled by the scale parameter *η* of the kernel function given by (1). Moreover, *μ*_{i} in the probability density approximates the image characteristics in the local volume *Ω*_{i}. Finally, *ω* is a constant (0 ≤ *ω* ≤ 1) that balances the importance of the inferred structures and the local volumes [40].

Let *ϕ* be the level set function and *H*(·) be the Heaviside function. In particular, denote the reference level set function corresponding to the LA structure inferred from the previous concatenated classification stage as *ϕ*_{ref}. The energy functional *F*(*ϕ*, *P*_{1}, *P*_{2}) can be expressed as

$$F\left(\phi, P_{1}, P_{2}\right) = E\left(\phi, P_{1}, P_{2}\right) + \alpha_{1} L(\phi) + \alpha_{2} P(\phi) + \alpha_{3} S(\phi),$$

(9)

which is a combination of the volume-scalable fitting energy functional

$$\begin{aligned}
E\left(\phi, P_{1}, P_{2}\right) = {}& -\lambda_{1} \int \left(\int K_{\eta}(x - y)\, P_{1}(y)\, H\left(\phi(y)\right) dy\right) dx \\
& -\lambda_{2} \int \left(\int K_{\eta}(x - y)\, P_{2}(y)\left(1 - H\left(\phi(y)\right)\right) dy\right) dx,
\end{aligned}$$

(10)

smoothness term

$$L(\phi) = \int \left|\nabla H\left(\phi(x)\right)\right| dx,$$

(11)

level set regularization term

$$P(\phi) = \int \frac{1}{2}\left(\left|\nabla \phi(x)\right| - 1\right)^{2} dx,$$

(12)

and a priori shape energy term

$$S(\phi) = \int -\ln\left(p\left(\phi(x) - \phi_{\mathrm{ref}}(x)\right)\right) dx.$$

(13)

Here, *α*_{i}, *i* = 1,…, 3, and *λ*_{j}, *j* = 1,2, are positive constants; *P*_{1} and *P*_{2} are the two values defined in (8) that characterize image voxels using the inferred volumes and neighboring volumes. Also, *K*_{η} and *p* are the kernel functions given by (1), which control the size of the local volume centered at voxel *x* and estimate the level set shape similarity, respectively. Then, by minimizing the energy functional *F*(*ϕ*, *P*_{1}, *P*_{2}), we obtain the entire object boundary.
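For reference, combining the four terms above with the weights *α*_{1}, *α*_{2}, and *α*_{3} named here, and consistent with the four corresponding terms of the gradient flow in (16), the overall functional *F*(*ϕ*, *P*_{1}, *P*_{2}) of (9) presumably takes the form

```latex
F(\varphi, P_1, P_2)
  = E(\varphi, P_1, P_2)
  + \alpha_1\, L(\varphi)
  + \alpha_2\, P(\varphi)
  + \alpha_3\, S(\varphi).
```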

We solve the energy functional minimization problem by using the standard gradient descent method. Keeping *ϕ* fixed and minimizing the energy functional *F*(*ϕ*, *P*_{1}, *P*_{2}) in (9) with respect to the functions *P*_{1} and *P*_{2}, we deduce the following optimal expressions for the functions *P*_{1} and *P*_{2} that minimize *F*(*ϕ*, *P*_{1}, *P*_{2}):

$$P_{1}\left(x\right)=\left(1-\omega\right)\frac{1}{\left|\Omega_{\mathrm{Inf}\_1}\right|}\sum_{z\in\Omega_{\mathrm{Inf}\_1}}p\left(f\left(x\right)-f\left(z\right)\right)+\omega\int K_{\eta}\left(x-y\right)p\left(\mu_{1}\left(x\right)-f\left(y\right)\right)H\left(\varphi\left(y\right)\right)dy,$$

$$P_{2}\left(x\right)=\left(1-\omega\right)\frac{1}{\left|\Omega_{\mathrm{Inf}\_2}\right|}\sum_{z\in\Omega_{\mathrm{Inf}\_2}}p\left(f\left(x\right)-f\left(z\right)\right)+\omega\int K_{\eta}\left(x-y\right)p\left(\mu_{2}\left(x\right)-f\left(y\right)\right)\left(1-H\left(\varphi\left(y\right)\right)\right)dy,$$

(14)

with

$$\mu_{1}\left(x\right)=\frac{\int K_{\eta}\left(x-y\right)f\left(y\right)H\left(\varphi\left(y\right)\right)dy}{\int K_{\eta}\left(x-y\right)H\left(\varphi\left(y\right)\right)dy},\qquad \mu_{2}\left(x\right)=\frac{\int K_{\eta}\left(x-y\right)f\left(y\right)\left(1-H\left(\varphi\left(y\right)\right)\right)dy}{\int K_{\eta}\left(x-y\right)\left(1-H\left(\varphi\left(y\right)\right)\right)dy}.$$

(15)
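The local means in (15) are weighted averages of the image *f* inside and outside the contour, computed with the kernel *K*_{η}. A minimal sketch, assuming *K*_{η} is a Gaussian kernel (applied here via `scipy.ndimage.gaussian_filter`) and an arctan-regularized Heaviside, both common choices in level set implementations rather than details confirmed by the text; the function names are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_heaviside(phi, eps=1.0):
    # Regularized Heaviside commonly used in level set implementations (assumption).
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def local_means(f, phi, eta=3.0, eps=1.0):
    """Local inside/outside intensity means mu1, mu2 as in (15).

    The Gaussian filter stands in for convolution with K_eta;
    `eta` plays the role of the kernel scale parameter.
    """
    h = smoothed_heaviside(phi, eps)
    tiny = 1e-8  # avoid division by zero far from the contour
    mu1 = gaussian_filter(f * h, eta) / (gaussian_filter(h, eta) + tiny)
    mu2 = gaussian_filter(f * (1.0 - h), eta) / (gaussian_filter(1.0 - h, eta) + tiny)
    return mu1, mu2
```

Because the same smoothing is applied to numerator and denominator, each mean reduces to the plain image value wherever the volume is homogeneous.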

Keeping *P*_{1} and *P*_{2} fixed, we minimize the energy functional *F*(*ϕ*, *P*_{1}, *P*_{2}) in (9) with respect to *ϕ* using the first variation of *F*, solving the following gradient descent flow of *ϕ*:

$$\frac{\partial\varphi}{\partial t}=\delta\left(\varphi\right)\left(\lambda_{1}e_{1}-\lambda_{2}e_{2}\right)+\alpha_{1}\delta\left(\varphi\right)\mathrm{div}\left(\frac{\nabla\varphi}{\left|\nabla\varphi\right|}\right)+\alpha_{2}\left(\nabla^{2}\varphi-\mathrm{div}\left(\frac{\nabla\varphi}{\left|\nabla\varphi\right|}\right)\right)-\alpha_{3}\left|\varphi-\varphi_{\mathrm{ref}}\right|,$$

(16)

where *δ* is the Dirac delta function and *e*_{1} and *e*_{2} are the functions

$$e_{i}\left(x\right)=\int K_{\eta}\left(x-y\right)P_{i}\left(y\right)dy,\quad i=1,2,$$

(17)

in which *P*_{1} and *P*_{2} are given by (14).

In the last stage of the proposed workflow, the final refined segmentation result is obtained by evolving the level set equation (16), with the level set function *ϕ* initialized from the inferred LA structure.
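One explicit gradient-descent step of the flow in (16) can be sketched as follows. This is an illustrative sketch only: the regularized Dirac delta, the central-difference curvature and Laplacian, and the Gaussian stand-in for *K*_{η} in (17) are standard numerical choices assumed here, not implementation details given in the text, and all function names are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_dirac(phi, eps=1.0):
    # Regularized Dirac delta paired with the arctan Heaviside (assumption).
    return (eps / np.pi) / (eps**2 + phi**2)

def curvature(phi):
    # div(grad(phi)/|grad(phi)|) via central differences.
    grads = np.gradient(phi)
    norm = np.sqrt(sum(g**2 for g in grads)) + 1e-8
    return sum(np.gradient(g / norm, axis=i) for i, g in enumerate(grads))

def laplacian(phi):
    return sum(np.gradient(np.gradient(phi, axis=i), axis=i) for i in range(phi.ndim))

def evolve_step(phi, P1, P2, phi_ref, eta=3.0, lam1=0.2, lam2=1.0,
                alphas=(1.0, 1.0, 1.0), dt=0.1):
    """One gradient-descent update of phi following (16)."""
    e1 = gaussian_filter(P1, eta)   # e_i = K_eta * P_i, as in (17)
    e2 = gaussian_filter(P2, eta)
    delta = smoothed_dirac(phi)
    kappa = curvature(phi)
    a1, a2, a3 = alphas
    dphi = (delta * (lam1 * e1 - lam2 * e2)      # data term
            + a1 * delta * kappa                 # smoothness term
            + a2 * (laplacian(phi) - kappa)      # level set regularization
            - a3 * np.abs(phi - phi_ref))        # a priori shape term
    return phi + dt * dphi
```

The default constants mirror the tuned values reported in (18); in practice the step would be iterated until the contour stabilizes.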

We evaluate our approach on the cardiac volumetric MRI datasets from the STACOM 2013 LA Segmentation Challenge [7] and the HVSMR 2016 Whole-Heart and Great Vessel Segmentation Challenge [51]. Specifically, we artificially enlarge the training dataset from the STACOM 2013 Challenge by a factor of ten using image processing techniques such as translation, changing the resolution by downsampling or upsampling, and changing the voxel intensities based on standard principal component analysis (PCA) [52]. We then use the augmented training dataset for algorithm training and parameter tuning. The subsequent validations are performed on the testing dataset from the STACOM 2013 database and on the training and testing datasets from the HVSMR 2016 database. The corresponding labels released with the STACOM 2013 Challenge and the expert manual segmentations of the LA structure for each case in the HVSMR 2016 database are considered as the ground truth.
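The augmentation described above can be sketched roughly as below. This is a simplified illustration: the translation ranges, resampling factors, and the global intensity perturbation (standing in for the PCA-based intensity variation, which is omitted) are all assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import shift, zoom

def augment(volume, rng):
    """Produce one augmented copy of a training volume (sketch)."""
    # Random translation of up to +/-3 voxels per axis (assumed range).
    offsets = rng.integers(-3, 4, size=volume.ndim)
    out = shift(volume, offsets, order=1, mode='nearest')
    # Change resolution: downsample, then resample back onto the original grid.
    factor = rng.uniform(0.5, 1.0)
    low = zoom(out, factor, order=1)
    out = zoom(low, np.array(volume.shape) / np.array(low.shape), order=1)
    # Crude global intensity perturbation in place of the PCA-based model.
    out = out * rng.uniform(0.9, 1.1) + rng.normal(0.0, 0.01)
    return out
```

Applying such transforms ten times per subject yields the tenfold-enlarged training set used for training and parameter tuning.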

In our implementation, we train *n*_{cf} = 10 CRFs, each corresponding to one of the training subjects of the enlarged training set. We use *n*_{t} = 5 classification trees for each CRF at each iteration and set the iteration number to *n*_{iterations} = 5. The maximum depth of each classification tree and the minimum number of samples contained in a leaf node are restricted to 50 and 8 [39], respectively. For each classification tree, we randomly sample, with replacement, all the corresponding categorical training voxels for the target and background class labels from each training subject. During tree training, each node considers *n*_{f} = 10,000 randomly sampled VSLRS features, each with *n*_{thresholds} = 20 randomly distributed thresholds, to determine the optimal split functions. The parameters of the volume-scalable ACM were determined empirically [40] during training as

$$\sigma=0.5,\quad\eta=3.0,\quad\lambda_{1}=0.2,\quad\lambda_{2}=1.0,\quad\Delta t=0.1,\quad\omega=0.4,\quad\alpha_{1}=\alpha_{2}=\alpha_{3}=1.0$$

(18)

for the best segmentation results. The influence of the parameters of our combined approach on the segmentation results will be discussed in “Discussion.”
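The per-iteration forest configuration described above can be mirrored in scikit-learn for illustration. This is purely a sketch: the paper's implementation is its own C++ code, and scikit-learn exposes no equivalent of the VSLRS feature sampling (*n*_{f} = 10,000 candidate features with 20 thresholds per node), so only the tree count, depth, leaf size, and bootstrap resampling carry over:

```python
from sklearn.ensemble import RandomForestClassifier

# One forest per CRF per iteration, with the hyperparameters reported in
# the text: 5 trees, maximum depth 50, at least 8 samples per leaf, and
# bootstrap resampling of training voxels with replacement.
forest = RandomForestClassifier(
    n_estimators=5,        # n_t trees per CRF per iteration
    max_depth=50,          # maximum tree depth
    min_samples_leaf=8,    # minimum samples per leaf node
    bootstrap=True,        # sample training voxels with replacement
    random_state=0,
)
```

In the full scheme, ten such forests (one per training subject) would be trained per iteration, each consuming voxel feature vectors and producing per-voxel class probabilities.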

In addition, two metrics, surface-to-surface distance (S2S) [7] and Dice coefficient (DC), with respect to the ground truth are computed for quantitative evaluation of the proposed method. The volume rendering is implemented using the “Model Maker” module in 3D Slicer [53].
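The two metrics can be sketched as below. The Dice coefficient is standard; for S2S, a symmetric mean surface distance computed via Euclidean distance transforms is shown, which is one common variant and an assumption here since the text does not spell out the exact definition from [7]:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice_coefficient(seg, gt):
    """Dice coefficient (DC) between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def surface_to_surface(seg, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface-to-surface distance (one common variant)."""
    def surface(mask):
        mask = mask.astype(bool)
        return mask & ~binary_erosion(mask)
    s_seg, s_gt = surface(seg), surface(gt)
    # Distance from each voxel to the nearest surface voxel of the other mask.
    d_to_gt = distance_transform_edt(~s_gt, sampling=spacing)
    d_to_seg = distance_transform_edt(~s_seg, sampling=spacing)
    return 0.5 * (d_to_gt[s_seg].mean() + d_to_seg[s_gt].mean())
```

Voxel spacing from the MRI header would be passed as `spacing` so that S2S is reported in millimeters.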

Figure 4 illustrates the impact of an individual CRF and the fusion of multiple CRFs on the segmentation results for a testing subject. The original target image and corresponding ground truth are shown in Figures 4(a) and 4(f), respectively. The tissue probability maps estimated at each iteration of an individual CRF are shown in Figures 4(b)–4(d). Figure 4(e) shows the tissue probability map estimated by fusing the final iterations of each individual CRF in the multiconcatenated RFs. For clarity, we highlight only the voxels whose confidence is higher than 0.6 in the probability maps, shown as green spots.

The tissue probability maps estimated from an individual CRF and the multiconcatenated RFs for a target subject. (a) The axial slice from the original volumetric data. (b)–(d) Voxels with more than 0.6 confidence (green spots) in the outcomes of an individual CRF.

In order to better understand the influence of the different stages of the proposed method, Figure 5 shows the outcomes of the volume-scalable ACM without shape constraint, the CRFs without contour refinement (stage 2), and the integrated CRFs and volume-scalable ACM (final stage) on case B003 of the STACOM database. The anterior, posterior, and superior views of the 3D segmentation results and the ground truth are provided in rows 1–3, respectively.

Visual comparison of the proposed method with different components integrated on case B003 of the STACOM database. (a) Outcome of the volume-scalable ACM without shape constraint; (b) result of the CRFs without contour refinement (stage 2); (c) the combined CRFs and volume-scalable ACM (final stage).

To demonstrate the advantage of the proposed method in terms of segmentation accuracy and robustness more clearly, Figure 6 shows multiple slices of a low-quality MRI dataset in three standard views, together with the final segmentation result obtained by the proposed method (compared with the ground truth segmentation).

Figure 7 illustrates the influence of different parameters (the number of trees, depth of trees, minimally allowed sample count, and the iteration number of the concatenated scheme) on the segmentation results of the proposed CRFs. Values for these parameters were determined via leave-one-out cross-validation on the artificially augmented training dataset of the STACOM database, according to the parameter tuning method described in [54]. In this experiment, only the trends of the influence are of interest. Therefore, while one parameter was being tuned, the other parameters were set to fixed values rather than to the optimal values for the best segmentation results. Note that we only test the parameters of the CRFs here; the parameters of the volume-scalable ACM in the last stage of the proposed scheme were discussed in our former paper [40].

Impact of 4 different parameters in the concatenated classification stage: the number of trees used per RF (a), the maximally allowed depth for each tree (b), the minimally allowed sample count per leaf node (c), and the iteration number of the concatenated scheme (d).

In addition, the proposed method was compared to standard ACM-based [55, 56], RF-based [11, 13], and multiatlas-based [30, 32] methods. We quantitatively evaluate the performance of the different methods using the Dice coefficient (DC) and surface-to-surface distance (S2S), as shown in Table 1. Proposed 1 and Proposed 2 denote the proposed method *without* (stage 2) and *with* (final stage) contour refinement, respectively, as described in “Segmentation Refinement: Integrating Contour Evolution into Voxel-Wise Classification.”

Dice coefficients (DC) and surface-to-surface distance (S2S) of different methods on the STACOM and HVSMR datasets.

In the current study, the running time of the proposed method was recorded from our experiments with C++ code run on a computer cluster with a 4.2 GHz Intel Core i7 processor and 32 GB RAM, using Visual Studio 2015 on Windows 7. The average training time for one forest in the second stage is around 1.5 h. For each of the *n*_{iterations} = 5 iterations, we trained *n*_{cf} = 10 CRFs. By training all these forests in parallel, the elapsed time of each iteration is around 1.5 h, resulting in a total training time of approximately 7.5 h. Once trained, the approximate elapsed times for LA segmentation in a typical cardiac volumetric MRI are as follows: LA structure inference (CRF labeling), 3 min; segmentation refinement (volume-scalable ACM), 30 s.

We have presented a combined approach that effectively integrates voxel-wise classification and contour evolution for LA volumetric MRI segmentation. Specifically, we employ an RF technique to effectively infer the LA structure. Due to several challenges, including the intrinsic limitations of the classification scheme and the limited amount of training data, the inferred shape is not sufficiently accurate. Thus, we refine the delineated shape using an ACM to improve the accuracy of the segmentation result.

As seen in Figures 4(a)–4(d) and 4(f), the accuracy and sharpness of the inferred tissue probability maps gradually improve over the iterations of individual forest prediction. Fusion of the individual forest predictions further improves the inferred tissue structure, as can be seen in Figures 4(e) and 4(f). An explanation for these observations is that, in the first iteration of an individual CRF, only the image appearance features are used to generate the tissue probability map, which results in many false positives along the edges in Figure 4(b). In later iterations, the concatenated scheme refines the inferred structure (tissue probability map) by recursively integrating the tissue probability maps estimated in the previous iteration with the appearance feature map extracted from the original image. As we can see from Figures 4(c) and 4(d), the tissue probability maps of an individual CRF gradually improve with the iterations (*n*_{iterations} = 5 in the current implementation). At the end of the concatenated classification procedure, each individual CRF (*n*_{cf} = 10 in the current implementation) generates a different soft segmentation in the form of a tissue probability map. In the subsequent fusion procedure, the multiple individual tissue probability maps are averaged by (7), whereby spuriously high probabilities of testing voxels in one probability map may be corrected by the other probability maps. As we can see from Figure 4(e), the averaged tissue probability map becomes more accurate when compared with the ground truth shown in Figure 4(f).
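The fusion step described above reduces to averaging the per-CRF soft segmentations and thresholding the result. A minimal sketch, using the 0.6 confidence threshold mentioned for Figure 4 (the function name is hypothetical):

```python
import numpy as np

def fuse_probability_maps(prob_maps, threshold=0.6):
    """Average per-CRF tissue probability maps and threshold.

    `prob_maps` is a list of n_cf arrays, one soft segmentation per
    individual CRF. Averaging (as in (7)) lets the other maps correct
    spuriously high probabilities in any single map; voxels whose fused
    confidence exceeds `threshold` form the inferred LA mask.
    """
    fused = np.mean(prob_maps, axis=0)
    return fused, fused > threshold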

As seen in Figure 5(a), although the initial seeds and the iteration number of the volume-scalable ACM were carefully determined, the leakage in the segmentation result is severe due to the similar intensities of the blood pools. Furthermore, as a consequence of insufficient contour evolution, the image is undersegmented, which appears as missing parts of the PVs. Alternatively, benefiting from the context-aware image information and the concatenated training and testing schemes, the CRFs do provide a certain discriminative advantage in terms of controlling leakage. However, because the classification scheme of the RFs labels each individual voxel independently, the outcome of the concatenated classification scheme is not geometrically constrained (see Figure 5(b)). Finally, as seen in Figure 5(c), the combined CRFs and volume-scalable ACM produce a more accurate, geometrically constrained segmentation result. This is because the a priori shape constraint term in (13) effectively prevents the active contour from leaking into the ventricle region; meanwhile, the ACM fills the holes in the CRFs result and refines its details.

It can be seen from Figure 6 that, despite the great challenge posed by these image slices due to their low spatial resolution and intensity inhomogeneities, the corresponding segmentation results are quite consistent with the ground truth. The proposed method successfully recovers the smooth boundary of the LA in the volumetric MR image.

In Figure 7, we show the impact of different parameter settings on the accuracy. We first study the impact of the number of trees per RF on segmentation accuracy in Figure 7(a). Measured by the average DC value, there is a clear increase in accuracy from using 1 tree (0.6493 ± 0.0673) to using 2 trees (0.7074 ± 0.0614), and the accuracy improves as the number of trees increases. In addition, the performance quickly stabilizes beyond a certain number of trees, an effect probably due to the strategy of fusing multiple forests. In this paper, we conservatively choose 5 trees for each forest in each iteration. Next, we study the effect of the maximally allowed depth for each tree (Figure 7(b)). The performance gradually improves from a depth of 5 to a depth of 30. In order to make the depth of the trees accommodate the amount of training samples as well as the minimal sample count per leaf, we set this parameter to 50 in this paper. Further, we test the impact of the minimally allowed sample count per leaf node (Figure 7(c)). The performance improves as the minimal sample count decreases down to 8, while a small setting of this parameter, such as 3, is likely to cause overfitting. Finally, we analyze the influence of the iteration number on the segmentation accuracy. In Figure 7(d), we can see that more iterations give better results and that the performance becomes stable after a certain number of iterations. In particular, we see a significant improvement from the 1st iteration to the 2nd iteration. This effect is due to the use of the previously estimated tissue probability maps for subsequent voxel classification. These results further demonstrate the effectiveness of the concatenated classification scheme for segmentation.

The computed metrics in Table 1 show that even the proposed method without final contour refinement (the second-to-last row) produces a competitive accuracy on the validation databases, with an overall accuracy of 0.7205 (in terms of DC) and 4.405 mm (in terms of S2S). Although the DC values of the Proposed 1 method are not fully satisfactory, they are improved by the subsequent contour refinement procedure. As demonstrated in Table 1, the DC values of the proposed CRFs are low, mainly because of the hollow part in the segmented LA body, as seen in Figures 4 and 5. However, the delineated outer contour of the LA structure shows great agreement with the ground truth, as also seen in Figures 4 and 5 and further demonstrated by the S2S values shown in Table 1. Therefore, the inferred shape can provide good initial contours for the subsequent contour evolution. Consequently, the integrated volume-scalable ACM (the last row) in the Proposed 2 method achieves superior accuracy, with an overall DC of 0.9227 and S2S of 1.14 mm, as the other standard methods cannot fully utilize the contextual volumetric image information to guide the segmentation.

Although comparing different methods on different datasets may not be entirely reliable, the performance of the proposed method appears competitive with some of the latest results [24, 57]. For example, one state-of-the-art method [36] reports 0.878 ± 0.0624 and 1.34 ± 1.28 mm for the DC and S2S measures, respectively. The proposed method outperforms the referenced method in terms of mean and standard deviation values, owing to the concatenated classification and contour evolution scheme and the use of context-rich information for interpreting the LA structure.

The main computational cost of the proposed method lies in training the CRFs. Although the training procedure can be carried out offline within a reasonable time, its efficiency is far from ideal. Nevertheless, the computational efficiency of the training stage can be improved by using GPUs in the near future. In testing, the running time to perform LA segmentation in a typical cardiac volumetric MRI is within 4 min, most of which is taken by the CRF labeling and the volume-scalable ACM evolution. Note that the integrated ACM in the final stage of the proposed method converges in fewer iterations than a pure ACM, owing to the imposed initial contour and shape prior inferred from the previous stage. Inspired by the recent LightGBM [58] and sparse representation techniques [59], we will further optimize the proposed work.

Finally, one of the difficulties in developing RF-based cardiac MRI segmentation techniques is the need for a large amount of data for training and validation. Many iterations of training on limited training samples may lead to overtraining. Although the bagging strategy of the RFs can guarantee the diversity of trees by randomly selecting a subset of samples for the training of each tree, overfitting remains an important issue for the proposed concatenated scheme. To cope with this limitation, we artificially enlarge the training dataset by a factor of ten using image processing techniques such as translation, changing the resolution by downsampling or upsampling, and changing the voxel intensities based on standard principal component analysis, and we feed the CRFs with the augmented sample dataset during training, as described in “Implementation Details.” However, the overfitting issue still exists in our current implementation; accordingly, as shown in Table 1, we observe a higher accuracy of the proposed method on the STACOM 2013 dataset, which provides the training samples, than on the HVSMR 2016 dataset, which provides only testing images. A possible remedy would be to give each CRF access to more data during training.

In summary, in this work, we propose to fully automatically segment the LA structure from cardiac volumetric MRI through a novel combined RFs and ACM approach. In contrast to previous RF-based methods that define the segmentation problem as a classification task, our approach refines the voxel-wise classification through a contour evolution scheme and therefore achieves geometrically constrained segmentation results. Also, while current standard learning schemes pool training sets indiscriminately across all training subjects and encode image points statically, our approach has a flexible training sample selection scheme and explores flexible representations of the individual points in the multisource images from the contextual image information. Compared to standard ACMs, which rely on particular contour initializations and usually fail to segment cardiac MRI under challenging image conditions, our approach has a number of advantages, such as the ability to automate the contour initialization process, and brings more accuracy and robustness through sophisticated feature learning and shape-prior integration schemes.

As demonstrated in our experiments, the proposed method is able to segment the LA volumetric MR images with challenging image conditions and has desirable performance in terms of accuracy and robustness. The proposed method achieved a high accuracy of 0.9227 ± 0.0598 and 1.14 ± 1.205mm for the LA segmentation, measured by DC and S2S values, respectively. Comparative experiments have demonstrated the advantages of the proposed method over other state-of-the-art automated segmentation methods. The scalability of our method on a larger scale of multimodality clinical images will be investigated in our future work.

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant no. 61571165.

The authors declare that there are no conflicts of interest regarding the publication of this paper.

1. Haïssaguerre M., Jaïs P., Shah D. C., et al. Spontaneous initiation of atrial fibrillation by ectopic beats originating in the pulmonary veins. *The New England Journal of Medicine*. 1998;339(10):659–666. doi: 10.1056/nejm199809033391003. [PubMed] [Cross Ref]

2. Calkins H., Brugada J., Packer D. L., et al. HRS/EHRA/ECAS expert consensus statement on catheter and surgical ablation of atrial fibrillation: recommendations for personnel, policy, procedures and follow-up. A report of the Heart Rhythm Society (HRS) Task Force on Catheter and Surgical Ablation of Atrial Fibrillation developed in partnership with the European Heart Rhythm Association (EHRA) and the European Cardiac Arrhythmia Society (ECAS); in collaboration with the American College of Cardiology (ACC), American Heart Association (AHA), and the Society of Thoracic Surgeons (STS). Endorsed and approved by the governing bodies of the American College of Cardiology, the American Heart Association, the European Cardiac Arrhythmia Society, the European Heart Rhythm Association, the Society of Thoracic Surgeons, and the Heart Rhythm Society. *Europace*. 2007;9(6):335–379. doi: 10.1093/europace/eum120. [PubMed] [Cross Ref]

3. Marrouche N. F., Wilber D., Hindricks G., et al. Association of atrial tissue fibrosis identified by delayed enhancement MRI and atrial fibrillation catheter ablation: The DECAAF Study. *Journal of the American Medical Association*. 2014;311(5):498–506. doi: 10.1001/jama.2014.3. [PubMed] [Cross Ref]

4. Krueger M. W., Dorn A., Keller D. U. J., et al. In-silico modeling of atrial repolarization in normal and atrial fibrillation remodeled state. *Medical and Biological Engineering and Computing*. 2013;51(10):1105–1119. doi: 10.1007/s11517-013-1090-1. [PubMed] [Cross Ref]

5. Suinesiaputra A., McCulloch A. D., Nash M. P., Pontre B., Young A. A. Cardiac image modelling: breadth and depth in heart disease. *Medical Image Analysis*. 2016;33:38–43. doi: 10.1016/j.media.2016.06.027. [PubMed] [Cross Ref]

6. Yen Ho S., Cabrera J. A., Sanchez-Quintana D. Left atrial anatomy revisited. *Circulation: Arrhythmia and Electrophysiology*. 2012;5:220–228. doi: 10.1161/CIRCEP.111.962720. [PubMed] [Cross Ref]

7. Tobon-Gomez C., Geers A. J., Peters J., et al. Benchmark for Algorithms Segmenting the Left Atrium From 3D CT and MRI Datasets. *IEEE Transactions on Medical Imaging*. 2015;34(7):1460–1473. doi: 10.1109/TMI.2015.2398818. [PubMed] [Cross Ref]

8. Breiman L. Random forests. *Machine Learning*. 2001;45(1):5–32. doi: 10.1023/A:1010933404324. [Cross Ref]

9. Tustison N. J., Shrinidhi K. L., Wintermark M., et al. Optimal symmetric multimodal templates and concatenated random forests for supervised brain tumor segmentation (simplified) with ANTsR. *Neuroinformatics*. 2015;13(2):209–225. doi: 10.1007/s12021-014-9245-2. [PubMed] [Cross Ref]

10. Menze B. H., Jakab A., Bauer S., et al. The multimodal brain tumor image segmentation benchmark (BRATS). *IEEE Transactions on Medical Imaging*. 2015;34(10):1993–2024. doi: 10.1109/tmi.2014.2377694. [PMC free article] [PubMed] [Cross Ref]

11. Criminisi A., Shotton J., Konukoglu E. Decision forests: a unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. *Foundations and Trends in Computer Graphics and Vision*. 2011;7(2-3):81–227. doi: 10.1561/0600000035. [Cross Ref]

12. Petitjean C., Zuluaga M. A., Bai W., et al. Right ventricle segmentation from cardiac MRI: a collation study. *Medical Image Analysis*. 2015;19(1):187–202. doi: 10.1016/j.media.2014.10.004. [PubMed] [Cross Ref]

13. Margeta J., McLeod K., Criminisi A., Ayache N. *Statistical Atlases and Computational Models of the Heart. Imaging and Modelling Challenges*. Vol. 8330. Berlin, Germany: Springer; 2014. Decision forests for segmentation of the left atrium from 3D MRI; pp. 49–56. (Lecture Notes in Computer Science). [Cross Ref]

14. Schneider M., Hirsch S., Weber B., Székely G., Menze B. H. Joint 3-D vessel segmentation and centerline extraction using oblique Hough forests with steerable filters. *Medical Image Analysis*. 2015;19(1):220–249. doi: 10.1016/j.media.2014.09.007. [PubMed] [Cross Ref]

15. Mahapatra D. Analyzing training information from random forests for improved image segmentation. *IEEE Transactions on Image Processing*. 2014;23(4):1504–1512. doi: 10.1109/TIP.2014.2305073. [PubMed] [Cross Ref]

16. Kass M., Witkin A., Terzopoulos D. Snakes: active contour models. *International Journal of Computer Vision*. 1988;1(4):321–331. doi: 10.1007/bf00133570. [Cross Ref]

17. Osher S., Sethian J. A. Fronts propagating with curvature dependent speed: algorithms based on hamilton-jacobi formulations. *Journal of Computational Physics*. 1988;79(1):12–49.

18. Vasilevskiy A., Siddiqi K. Flux maximizing geometric flows. *IEEE Transactions on Pattern Analysis and Machine Intelligence*. 2002;24(12):1565–1578. doi: 10.1109/TPAMI.2002.1114849. [Cross Ref]

19. Gao X., Wang B., Tao D., Li X. A relay level set method for automatic image segmentation. *IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics*. 2011;41(2):518–525. doi: 10.1109/TSMCB.2010.2065800. [PubMed] [Cross Ref]

20. Lee S.-H., Seo J. K. Level set-based bimodal segmentation with stationary global minimum. *IEEE Transactions on Image Processing*. 2006;15(9):2843–2852. doi: 10.1109/TIP.2006.877308.

21. Sum K. W., Cheung P. Y. S. Vessel extraction under non-uniform illumination: a level set approach. *IEEE Transactions on Biomedical Engineering*. 2008;55(1):358–360. doi: 10.1109/tbme.2007.896587.

22. Li C., Huang R., Ding Z., Chris Gatenby J., Metaxas D. N., Gore J. C. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. *IEEE Transactions on Image Processing*. 2011;20(7):2007–2016. doi: 10.1109/tip.2011.2146190.

23. Wang L., Li C., Sun Q., Xia D., Kao C.-Y. Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation. *Computerized Medical Imaging and Graphics*. 2009;33(7):520–531. doi: 10.1016/j.compmedimag.2009.04.010.

24. Peng P., Lekadir K., Gooya A., Shao L., Petersen S. E., Frangi A. F. A review of heart chamber segmentation for structural and functional analysis using cardiac magnetic resonance imaging. *Magnetic Resonance Materials in Physics, Biology and Medicine*. 2016;29(2):155–195. doi: 10.1007/s10334-015-0521-4.

25. Giannakidis A., Nyktari E., Keegan J., et al. Rapid automatic segmentation of abnormal tissue in late gadolinium enhancement cardiovascular magnetic resonance images for improved management of long-standing persistent atrial fibrillation. *BioMedical Engineering Online*. 2015;14(1, article no. 88). doi: 10.1186/s12938-015-0083-8.

26. Avendi M. R., Kheradvar A., Jafarkhani H. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI. *Medical Image Analysis*. 2016;30:108–119. doi: 10.1016/j.media.2016.01.005.

27. Ngo T. A., Lu Z., Carneiro G. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance. *Medical Image Analysis*. 2017;35:159–171. doi: 10.1016/j.media.2016.05.009.

28. Yu Y., Zhang S., Li K., Metaxas D., Axel L. Deformable models with sparsity constraints for cardiac motion analysis. *Medical Image Analysis*. 2014;18(6):927–937. doi: 10.1016/j.media.2014.03.002.

29. Zhou Y., Shi W.-R., Chen W., et al. Active contours driven by localizing region and edge-based intensity fitting energy with application to segmentation of the left ventricle in cardiac CT images. *Neurocomputing*. 2015;156:199–210. doi: 10.1016/j.neucom.2014.12.061.

30. Iglesias J. E., Sabuncu M. R. Multi-atlas segmentation of biomedical images: a survey. *Medical Image Analysis*. 2015;24(1):205–219. doi: 10.1016/j.media.2015.06.012.

31. Bai W., Shi W., de Marvao A., et al. A bi-ventricular cardiac atlas built from 1000+ high resolution MR images of healthy subjects and an analysis of shape and motion. *Medical Image Analysis*. 2015;26(1):133–145. doi: 10.1016/j.media.2015.08.009.

32. Kirisli H. A., Schaap M., Klein S., et al. Fully automatic cardiac segmentation from 3D CTA data: a multi-atlas based approach. In: Dawant B. M., Haynor D. R., editors. Proceedings of Medical Imaging 2010: Image Processing; February 2010; San Diego, Calif, USA.

33. Rajchl M., Baxter J. S. H., McLeod A. J., et al. Hierarchical max-flow segmentation framework for multi-atlas segmentation with Kohonen self-organizing map based Gaussian mixture modeling. *Medical Image Analysis*. 2016;27:45–56. doi: 10.1016/j.media.2015.05.005.

34. Bai W., Shi W., O'Regan D. P., et al. A probabilistic patch-based label fusion model for multi-atlas segmentation with registration refinement: application to cardiac MR images. *IEEE Transactions on Medical Imaging*. 2013;32(7):1302–1315. doi: 10.1109/tmi.2013.2256922.

35. Tian Y., Pan Y., Duan F., Zhao S., Wang Q., Wang W. Automated segmentation of coronary arteries based on statistical region growing and heuristic decision method. *BioMed Research International*. 2016;2016: Article ID 3530251, 7 pages. doi: 10.1155/2016/3530251.

36. Zhuang X., Shen J. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI. *Medical Image Analysis*. 2016;31:77–87. doi: 10.1016/j.media.2016.02.006.

37. Bai W., Shi W., Ledig C., Rueckert D. Multi-atlas segmentation with augmented features for cardiac MR images. *Medical Image Analysis*. 2015;19(1):98–109. doi: 10.1016/j.media.2014.09.005.

38. Wang L., Gao Y., Shi F., et al. LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images. *NeuroImage*. 2015;108:160–172. doi: 10.1016/j.neuroimage.2014.12.042.

39. Zikic D., Glocker B., Criminisi A. Encoding atlases by randomized classification forests for efficient multi-atlas label propagation. *Medical Image Analysis*. 2014;18(8):1262–1273. doi: 10.1016/j.media.2014.06.010.

40. Wang K., Ma C. A robust statistics driven volume-scalable active contour for segmenting anatomical structures in volumetric medical images with complex conditions. *BioMedical Engineering Online*. 2016;15(1, article 39). doi: 10.1186/s12938-016-0153-6.

41. Tu Z., Bai X. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*. 2010;32(10):1744–1757. doi: 10.1109/TPAMI.2009.186.

42. Zhang D., Lu G. Shape-based image retrieval using generic Fourier descriptor. *Signal Processing: Image Communication*. 2002;17(10):825–848. doi: 10.1016/S0923-5965(02)00084-X.

43. Virmani J., Kumar V., Kalra N., Khandelwal N. SVM-based characterization of liver ultrasound images using wavelet packet texture descriptors. *Journal of Digital Imaging*. 2013;26(3):530–543. doi: 10.1007/s10278-012-9537-8.

44. Lowe D. G. Distinctive image features from scale-invariant keypoints. *International Journal of Computer Vision*. 2004;60(2):91–110. doi: 10.1023/B:VISI.0000029664.99615.94.

45. Dalal N., Triggs B. Histograms of oriented gradients for human detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05); June 2005; San Diego, Calif, USA. pp. 886–893.

46. Pichon E., Tannenbaum A., Kikinis R. A statistically based flow for image segmentation. *Medical Image Analysis*. 2004;8(3):267–274. doi: 10.1016/j.media.2004.06.006.

47. Cheng H., Liu Z., Yang J. Sparsity induced similarity measure for label propagation. Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '09); October 2009; Kyoto, Japan. IEEE; pp. 317–324.

48. Coupé P., Manjón J. V., Fonov V., Pruessner J., Robles M., Collins D. L. Patch-based segmentation using expert priors: application to hippocampus and ventricle segmentation. *NeuroImage*. 2011;54(2):940–954. doi: 10.1016/j.neuroimage.2010.09.018.

49. Menze B. H., Kelm B. M., Masuch R., et al. A comparison of random forest and its Gini importance with standard chemometric methods for the feature selection and classification of spectral data. *BMC Bioinformatics*. 2009;10, article 213. doi: 10.1186/1471-2105-10-213.

50. Wang L., Shi F., Gao Y., et al. Integration of sparse multi-modality representation and anatomical constraint for isointense infant brain MR image segmentation. *NeuroImage*. 2014;89:152–164. doi: 10.1016/j.neuroimage.2013.11.040.

51. Pace D. F., Dalca A. V., Geva T., Powell A. J., Moghari M. H., Golland P. Interactive whole-heart segmentation in congenital heart disease. In: *Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015*. Vol. 9351. Cham, Switzerland: Springer International Publishing; 2015. pp. 80–88. (Lecture Notes in Computer Science).

52. Koikkalainen J., Tölli T., Lauerma K., et al. Methods of artificial enlargement of the training set for statistical shape models. *IEEE Transactions on Medical Imaging*. 2008;27(11):1643–1654. doi: 10.1109/TMI.2008.929106.

53. Pieper S., Halle M., Kikinis R. 3D Slicer. Proceedings of the IEEE International Symposium on Biomedical Imaging: Macro to Nano 2004; April 2004; pp. 632–635.

54. Mairal J., Bach F., Ponce J. Task-driven dictionary learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*. 2012;34(4):791–804. doi: 10.1109/TPAMI.2011.156.

55. Scientific Computing and Imaging Institute. "Seg3D" Volumetric Image Segmentation and Visualization, Scientific Computing and Imaging Institute (SCI), http://www.seg3d.org.

56. Chan T. F., Vese L. A. Active contours without edges. *IEEE Transactions on Image Processing*. 2001;10(2):266–277. doi: 10.1109/83.902291.

57. Zuluaga M. A., Cardoso M. J., Modat M., Ourselin S. Multi-atlas propagation whole heart segmentation from MRI and CTA using a local normalised correlation coefficient criterion. In: Ourselin S., Rueckert D., Smith N., editors. *Functional Imaging and Modeling of the Heart: 7th International Conference, FIMH 2013, London, UK, June 20–22, 2013. Proceedings*. Vol. 7945. Berlin, Germany: Springer; 2013. pp. 174–181. (Lecture Notes in Computer Science).

59. Wang L., Shi F., Li G., et al. Segmentation of neonatal brain MR images using patch-driven level sets. *NeuroImage*. 2014;84:141–158. doi: 10.1016/j.neuroimage.2013.08.008.
