Deformable registration techniques play vital roles in a variety of medical imaging tasks such as image fusion, segmentation, and post-operative assessment. In recent years, mutual information has become one of the most widely used similarity metrics for medical image registration algorithms. Unfortunately, as a matching criterion, mutual information loses much of its effectiveness when there is poor statistical consistency and a lack of structure. This is especially true in areas of images where the intensity is homogeneous and information is sparse. Here we present a method designed to address this problem by integrating distance transforms of anatomical segmentations as part of a multi-channel mutual information framework within the registration algorithm. Our method was tested by registering real MR brain data and comparing the segmentation of the results against that of the target. Our analysis showed that by integrating distance transforms of the white matter segmentation into the registration, the overall segmentation of the registration result was closer to the target than when the distance transform was not used.
Image registration; Magnetic resonance imaging; Multidimensional signal processing; Spatial normalization; Distance Transform
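The core idea above can be illustrated with a minimal sketch: a distance transform of a binary segmentation supplies spatial structure in homogeneous regions, and its mutual information with the corresponding transform from the other image is added to the intensity-based term. The function names and the simple joint-histogram MI estimator are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mutual_information(a, b, bins=32):
    # Joint-histogram estimate of mutual information between two images.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def multichannel_mi(fixed_img, moving_img, fixed_seg, moving_seg):
    # Channel 1: raw intensities. Channel 2: Euclidean distance maps of
    # the binary segmentations (zero inside the structure), which carry
    # spatial information where the intensities are homogeneous.
    d_fixed = distance_transform_edt(fixed_seg == 0)
    d_moving = distance_transform_edt(moving_seg == 0)
    return (mutual_information(fixed_img, moving_img)
            + mutual_information(d_fixed, d_moving))
```

In a registration loop this score would be evaluated after warping the moving image and segmentation, with the optimizer maximizing the combined criterion.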
Magnetic resonance (MR) images of the tongue have been used in both clinical studies and scientific research to reveal tongue structure. To extract different features of the tongue and its relation to the vocal tract, it is beneficial to acquire three orthogonal image volumes—e.g., axial, sagittal, and coronal volumes. To maintain both low noise and high visual detail while minimizing blurring due to involuntary motion, each set of images is acquired with an in-plane resolution that is much better than the through-plane resolution. As a result, any one data set, by itself, is not ideal for automatic volumetric analyses such as segmentation, registration, and atlas building, or even for visualization when oblique slices are required. This paper presents a method of super-resolution volume reconstruction of the tongue that generates an isotropic image volume from the three orthogonal image volumes. The method uses preprocessing steps that include registration and intensity matching, followed by a data combination approach with an edge-preserving property carried out by Markov random field optimization. The performance of the proposed method was demonstrated on fifteen clinical datasets; it preserves anatomical details and yields superior results, both visually and quantitatively, when compared with other reconstruction methods.
Super-resolution volume reconstruction; human tongue; magnetic resonance imaging (MRI)
The thalamus, a sub-cortical gray matter structure, consists of contiguous nuclei, each individually responsible for communication between various cerebral cortex and midbrain regions. These nuclei are differentially affected in neurodegenerative diseases such as multiple sclerosis and Alzheimer's disease. However, thalamic parcellation of the nuclei, whether manual or automatic, is difficult given the limited contrast in any particular magnetic resonance (MR) modality. Several groups have had qualitative success differentiating nuclei based on spatial location and fiber orientation information in diffusion tensor imaging (DTI). In this paper, we extend these principles by combining these discriminating dimensions with structural MR and derived information, and by building random forest learners on the resultant multi-modal features. In training, we form a multi-dimensional feature vector per voxel, which we associate with a nucleus classification from a manual rater. Learners are trained to differentiate thalamus from background and thalamic nuclei from other nuclei. These learners inform the external forces of a multiple-object level set model. Our cross-validated quantitative results on a set of twenty subjects demonstrate the efficacy and reproducibility of our approach.
Diffusion tensor imaging; machine learning; deformable models; object segmentation; random forests
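As a rough illustration of the learning step, the sketch below trains a random forest on synthetic per-voxel feature vectors (normalized spatial position, a principal diffusion direction, and a structural intensity) with toy nucleus labels. All feature choices, names, and label definitions here are assumptions for demonstration, not the paper's actual feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-voxel features: normalized spatial coordinates,
# principal-eigenvector components from DTI, and an MR intensity.
features = np.column_stack([
    rng.uniform(0.0, 1.0, (n, 3)),   # (x, y, z) voxel coordinates
    rng.normal(0.0, 1.0, (n, 3)),    # principal eigenvector components
    rng.normal(100.0, 10.0, n),      # structural MR intensity
])

# Toy labels standing in for a manual rater's classification:
# three classes split along the x-coordinate.
labels = np.digitize(features[:, 0], [0.33, 0.66])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[:400], labels[:400])
accuracy = clf.score(features[400:], labels[400:])
```

In the actual pipeline, the per-class probabilities from such a forest would drive the external forces of the level set model rather than being thresholded directly.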
In this study, we used manual delineation of high-resolution magnetic resonance imaging (MRI) to determine the spatial and temporal characteristics of the cerebellar atrophy in spinocerebellar ataxia type 2 (SCA2). Ten subjects with SCA2 were compared to ten controls. The volume of the pons, the total cerebellum, and the individual cerebellar lobules were calculated via manual delineation of structural MRI. SCA2 showed substantial global atrophy of the cerebellum. Furthermore, the degeneration was lobule-specific, selectively affecting the anterior lobe, VI, Crus I, Crus II, VIII, uvula, corpus medullare, and pons, while sparing VIIB, tonsil/paraflocculus, flocculus, declive, tuber/folium, pyramis, and nodulus. The temporal characteristics differed in each cerebellar subregion: 1) Duration of disease: Crus I, VIIB, VIII, uvula, corpus medullare, pons, and the total cerebellar volume correlated with the duration of disease; 2) Age: VI, Crus II, and flocculus correlated with age in control subjects; 3) Clinical scores: VI, Crus I, VIIB, VIII, corpus medullare, pons, and the total cerebellar volume correlated with clinical scores in SCA2. No correlations were found with the age of onset. Our extrapolated volumes at the onset of symptoms suggest that neurodegeneration may be present even during the presymptomatic stages of disease. The spatial and temporal characteristics of the cerebellar degeneration in SCA2 are region-specific. Furthermore, our findings suggest the presence of presymptomatic atrophy and a possible developmental component to the mechanisms of pathogenesis underlying SCA2. Our findings further suggest that volumetric analysis may aid in the development of a non-invasive, quantitative biomarker.
ataxia; spinocerebellar ataxia type 2 (SCA2); magnetic resonance imaging (MRI); biomarker
Magnetic resonance tagging makes it possible to measure the motion of tissues such as muscles in the heart and tongue. The harmonic phase (HARP) method largely automates the process of tracking points within tagged MR images, permitting many motion properties to be computed. However, HARP tracking can yield erroneous motion estimates due to: (1) large deformations between image frames; (2) through-plane motion; and (3) tissue boundaries. Methods that incorporate the spatial continuity of motion—so-called refinement or flood-filling methods—have previously been reported to reduce tracking errors. This paper presents a new refinement method based on shortest-path computations. The method uses a graph representation of the image and seeks an optimal tracking order from a specified seed to each point in the image by solving a single-source shortest path problem. This minimizes the potential for the path-dependent errors found in other refinement methods. In addition, tracking in the presence of through-plane motion is improved by introducing synthetic tags at the reference time (when the tissue is not deformed). Experimental results on both tongue and cardiac images show that the proposed method tracks the whole tissue more robustly and is also computationally efficient.
MR tagging; HARP; motion tracking; shortest path; Dijkstra's algorithm
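The shortest-path idea can be sketched independently of the phase-based tracking itself: given a nonnegative per-pixel cost (a stand-in for expected tracking difficulty), Dijkstra's algorithm yields the order in which points would be tracked outward from a seed. This is a minimal illustration under those assumptions, not the paper's implementation.

```python
import heapq
import numpy as np

def tracking_order(cost, seed):
    """Visit pixels in order of shortest-path distance from `seed`.

    `cost` is a 2-D array of nonnegative per-pixel costs; edges are
    4-connected and weighted by the cost of the pixel being entered.
    Returns the list of (row, col) pixels in tracking order.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    order = []
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry; already finalized
        order.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return order
```

In the refinement method, each pixel would be tracked from its already-tracked shortest-path predecessor as it is popped, so the solution does not depend on an arbitrary flood-fill order.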
To understand the role of the tongue in speech production, it is desirable to directly image the motion and strain of the muscles within the tongue. Magnetic resonance tagging—which was originally developed for cardiac imaging—has previously been applied to image both two-dimensional and three-dimensional tongue motion during speech. However, to quantify three-dimensional motion and strain, multiple images yielding two-dimensional motion must be acquired at different orientations and then interpolated—a time-consuming task in both image acquisition and processing. Recently, a new MR imaging and image processing method called zHARP was developed to encode and track 3D motion from a single slice without increasing acquisition time. zHARP was originally developed for and applied to cardiac imaging. Its application to the tongue is not straightforward because, during repetitive speech, the tongue does not move as consistently as the heart does in its beating cycle. Tongue images are therefore more susceptible to motion artifacts, and these artifacts are greatly exaggerated compared to conventional tagging because of the nature of the zHARP acquisition. In this work, we re-implemented the zHARP imaging sequence and optimized it for tongue motion analysis. We also optimized image acquisition by designing and developing a specialized MRI scanner triggering method and vocal repetition protocol to better synchronize speech repetitions. Our method was validated using a moving phantom. Results of 3D motion tracking and strain analysis on tongue experiments demonstrate the effectiveness of this method.
Motion quantification; tongue; zHARP
C-arm cone-beam CT (CBCT) can provide intraoperative 3D imaging capability for surgical guidance, but workflow and radiation dose are significant barriers to broad utilization. One main reason is that each 3D image acquisition requires a complete scan with a full radiation dose to present a completely new 3D image each time. In this paper, we propose to utilize patient-specific CT or CBCT as prior knowledge to accurately reconstruct the regions changed by the surgical procedure from only a sparse set of x-rays. The proposed method consists of a 3D-2D registration between the prior volume and a sparse set of intraoperative x-rays, creation of digitally reconstructed radiographs (DRRs) from the registered prior volume, computation of difference images by subtracting the DRRs from the intraoperative x-rays, a penalized-likelihood reconstruction of the volume of change (VOC) from the difference images, and finally fusion of the VOC reconstruction with the prior volume to visualize the entire surgical field. When the surgical changes are local and relatively small, the VOC reconstruction involves only a small volume and a small number of projections, allowing less computation and lower radiation dose than would be needed to reconstruct the entire surgical field. We applied this approach to sacroplasty phantom data obtained from a CBCT test bench and to vertebroplasty data from a fresh cadaver acquired on a C-arm CBCT system with a flat-panel detector (FPD). The VOCs were reconstructed from varying numbers of images (10–66) and compared to the CBCT ground truth using four metrics (mean squared error, correlation coefficient, structural similarity index, and a perceptual difference model). The results show promising reconstruction quality, with structural similarity to the ground truth close to 1 even when only 15–20 images were used, allowing dose reduction by a factor of 10–20.
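Three of the four comparison metrics named above are standard and compact enough to sketch; the SSIM here is a simplified single-window version (production SSIM averages local windows), and the perceptual difference model is omitted.

```python
import numpy as np

def mse(x, y):
    # Mean squared error between two same-shaped volumes.
    return float(np.mean((x - y) ** 2))

def correlation(x, y):
    # Pearson correlation coefficient over all voxels.
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

def global_ssim(x, y, data_range=1.0):
    # Simplified single-window SSIM; real implementations use
    # local (e.g., 11x11 Gaussian) windows and average the map.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical reconstructions score 0 in MSE and 1 in both correlation and SSIM, which is the sense in which "structural similarity close to 1" indicates near-ground-truth quality.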
Tissue contrast and resolution of magnetic resonance neuroimaging data have strong impacts on the utility of the data in clinical and neuroscience tasks such as registration and segmentation. Lengthy acquisition times typically prevent routine acquisition of multiple MR tissue contrast images at high resolution, and the opportunity for detailed analysis using these data would seem to be irrevocably lost. This paper describes an example-based approach that uses patch matching from a multiple-resolution, multiple-contrast atlas to change an image's resolution as well as to change its MR tissue contrast from that of one pulse sequence to that of another. The use of this approach to generate different tissue contrasts (T2/PD/FLAIR) from a single T1-weighted image is demonstrated on both phantom and real images.
Image classification; resolution; segmentation; MR tissue contrast; contrast synthesis; image hallucination; atlas
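The patch-matching step can be sketched with a brute-force nearest-neighbor search: each subject T1 patch is matched against atlas T1 patches, and the center intensity of the paired atlas patch in the target contrast is copied over. This is a minimal sketch under assumed names; real example-based synthesis uses approximate search, multiple matches, and reconstruction blending.

```python
import numpy as np

def extract_patches(img, r=1):
    # All (2r+1) x (2r+1) patches, flattened, one per interior pixel.
    h, w = img.shape
    patches = [img[i - r:i + r + 1, j - r:j + r + 1].ravel()
               for i in range(r, h - r) for j in range(r, w - r)]
    return np.array(patches)

def synthesize(subject_t1, atlas_t1, atlas_t2, r=1):
    # For each subject patch, copy the center intensity of the paired
    # atlas-T2 patch whose atlas-T1 patch matches best (L2 distance).
    sub = extract_patches(subject_t1, r)
    atl = extract_patches(atlas_t1, r)
    centers = extract_patches(atlas_t2, r)[:, (2 * r + 1) ** 2 // 2]
    d = ((sub[:, None, :] - atl[None, :, :]) ** 2).sum(axis=2)
    best = d.argmin(axis=1)
    h, w = subject_t1.shape
    return centers[best].reshape(h - 2 * r, w - 2 * r)
```

The same mechanism changes resolution when the atlas pairs low-resolution patches with high-resolution counterparts.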
Harmonic phase (HARP) motion analysis is widely used in the analysis of tagged magnetic resonance images of the heart. HARP motion tracking can yield gross errors, however, when there is a large amount of motion between successive time frames. Methods that use spatial continuity of motion—so-called refinement methods—have previously been reported to reduce these errors. This paper describes a new refinement method based on shortest-path computations. The method uses a graph representation of the image and seeks an optimal tracking order from a specified seed to each point in the image by solving a single-source shortest path problem. This minimizes the potential for the path-dependent solutions found in other refinement methods. Experiments on cardiac motion tracking show that the proposed method can track the whole tissue more robustly and is also computationally efficient.
Motion tracking; HARP; refinement; single source shortest path problem; Dijkstra’s algorithm
Optical coherence tomography (OCT) has proven to be an essential imaging modality for ophthalmology and is proving to be very important in neurology. OCT enables high resolution imaging of the retina, both at the optic nerve head and the macula. Macular retinal layer thicknesses provide useful diagnostic information and have been shown to correlate well with measures of disease severity in several diseases. Since manual segmentation of these layers is time consuming and prone to bias, automatic segmentation methods are critical for full utilization of this technology. In this work, we build a random forest classifier to segment eight retinal layers in macular cube images acquired by OCT. The random forest classifier learns the boundary pixels between layers, producing an accurate probability map for each boundary, which is then processed to finalize the boundaries. Using this algorithm, we can accurately segment the entire retina contained in the macular cube, with an accuracy of 4.3 microns or better for any of the nine boundaries. Experiments were carried out on both healthy and multiple sclerosis subjects, with no difference in the accuracy of our algorithm found between the groups.
(100.0100) Image processing; (170.4470) Ophthalmology; (170.4500) Optical coherence tomography
Optical coherence tomography (OCT) has become one of the most common tools for diagnosis of retinal abnormalities. Both retinal morphology and layer thickness can provide important information to aid in the differential diagnosis of these abnormalities. Automatic segmentation methods are essential to providing these thickness measurements since the manual delineation of each layer is cumbersome given the sheer amount of data within each OCT scan. In this work, we propose a new method for retinal layer segmentation using a random forest classifier. A total of seven features are extracted from the OCT data and used to simultaneously classify nine layer boundaries. Taking advantage of the probabilistic nature of random forests, probability maps for each boundary are extracted and used to help refine the classification. We are able to accurately segment eight retinal layers with an average Dice coefficient of 0.79 ± 0.13 and a mean absolute error of 1.21 ± 1.45 pixels for the layer boundaries.
OCT; retinal layer segmentation; random forest classification
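The final step above, turning per-boundary probability maps into boundaries, can be sketched very simply: pick the most probable depth in each A-scan column, then enforce the anatomical top-to-bottom ordering of the boundaries. This is an assumed minimal stand-in for the refinement actually used, which also exploits spatial smoothness.

```python
import numpy as np

def finalize_boundaries(prob):
    """prob: (n_boundaries, depth, n_ascans) boundary probability maps.

    Picks the most probable depth per A-scan for each boundary, then
    sorts depths per column so anatomical ordering is preserved.
    """
    depths = prob.argmax(axis=1)      # (n_boundaries, n_ascans)
    return np.sort(depths, axis=0)    # enforce top-to-bottom order
```

Layer thicknesses then follow directly as differences between consecutive boundary depths.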
This article introduces a new image processing technique for rapid analysis of tagged cardiac magnetic resonance image sequences. The method uses isolated spectral peaks in SPAMM-tagged magnetic resonance images, which contain information about cardiac motion. The inverse Fourier transform of a spectral peak is a complex image whose calculated angle is called a harmonic phase (HARP) image. It is shown how two HARP image sequences can be used to automatically and accurately track material points through time. A rapid, semiautomated procedure to calculate circumferential and radial Lagrangian strain from tracked points is described. This new computational approach permits rapid analysis and visualization of myocardial strain within 5-10 min after the scan is complete. Its performance is demonstrated on MR image sequences reflecting both normal and abnormal cardiac motion. Results from the new method are shown to compare very well with a previously validated tracking algorithm.
cardiac motion; harmonic phase; magnetic resonance tagging; myocardial strain
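The HARP image computation described above can be sketched directly: isolate one spectral peak of the tagged image with a bandpass filter, inverse-transform, and take the angle of the resulting complex image. The Gaussian bandpass and parameter names below are illustrative assumptions; actual HARP processing chooses the filter to match the tag spectrum.

```python
import numpy as np

def harp_phase(image, peak_freq, sigma=4.0):
    """Harmonic phase of a tagged image.

    `peak_freq` is the (row, col) offset of the chosen spectral peak
    from the DC component, in FFT-index units; `sigma` is the width of
    the Gaussian bandpass used to isolate that peak.
    """
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows = np.arange(h)[:, None] - h // 2
    cols = np.arange(w)[None, :] - w // 2
    bandpass = np.exp(-((rows - peak_freq[0]) ** 2
                        + (cols - peak_freq[1]) ** 2) / (2 * sigma ** 2))
    harmonic = np.fft.ifft2(np.fft.ifftshift(spectrum * bandpass))
    return np.angle(harmonic)
```

Two such phase images from orthogonal tag directions give each material point a quasi-invariant phase pair, which is what makes tracking through time possible.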
Diffusion tensor imaging (DTI) is widely used to characterize tissue micro-architecture and brain connectivity. In regions of crossing fibers, however, the tensor model fails because it cannot represent multiple, independent intra-voxel orientations. Most of the methods that have been proposed to resolve this problem require diffusion magnetic resonance imaging (MRI) data comprising large numbers of angles and high b-values, making them problematic for routine clinical imaging and many scientific studies. We present a technique based on compressed sensing that can resolve crossing fibers using diffusion MRI data that can be rapidly and routinely acquired in the clinic (30 directions, b-value of 700 s/mm²). The method assumes that the observed data can be well fit using a sparse linear combination of tensors taken from a fixed collection of possible tensors, each having a different orientation. A fast algorithm for computing the best orientations based on a hierarchical compressed sensing algorithm and a novel metric for comparing estimated orientations are also proposed. The performance of this approach is demonstrated using both simulations and in vivo images. The method is observed to resolve crossing fibers from conventional data as well as a standard q-ball approach does from much richer data that requires considerably more image acquisition time.
Diffusion weighted imaging; DTI; compressed sensing; orientation distribution function; crossing fibers
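The "sparse linear combination of tensors" model can be sketched with a dictionary whose columns are single-tensor signal responses at candidate orientations. As a simplified stand-in for the hierarchical compressed sensing solver, nonnegative least squares is used below (nonnegativity itself promotes sparse solutions); the eigenvalue choices are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def tensor_dictionary(gradients, directions, b=700.0,
                      lam_par=1.7e-3, lam_perp=0.3e-3):
    # Each column: signal attenuation exp(-b g^T D g) for a prolate
    # tensor whose long axis is one candidate direction
    # (b in s/mm^2, eigenvalues in mm^2/s; assumed typical values).
    cols = []
    for v in directions:
        dots = gradients @ v
        adc = lam_perp + (lam_par - lam_perp) * dots ** 2
        cols.append(np.exp(-b * adc))
    return np.column_stack(cols)

def fit_orientations(signal, gradients, directions, **kw):
    # Nonnegative fit of the measured signal against the dictionary;
    # large weights indicate the recovered fiber orientations.
    A = tensor_dictionary(gradients, directions, **kw)
    weights, _ = nnls(A, signal)
    return weights
```

With a dense set of candidate directions, a crossing-fiber voxel would produce two well-separated dominant weights.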
Several popular classification algorithms used to segment magnetic resonance brain images assume that the image intensities, or log-transformed intensities, satisfy a finite Gaussian mixture model. In these methods, the parameters of the mixture model are estimated, and the posterior probabilities for each tissue class are used directly as soft segmentations or combined to form a hard segmentation. It is suggested and shown in this paper that a Rician mixture model fits the observed data better than a Gaussian model. Accordingly, a Rician mixture model is formulated and used within an expectation maximization (EM) framework to yield a new tissue classification algorithm called RiCE (Rician Classifier using EM). It is shown using both simulated and real data that RiCE yields performance comparable to or better than that of algorithms based on the finite Gaussian mixture model. We also show that RiCE yields more consistent segmentation results when used on images of the same individual acquired with different T1-weighted pulse sequences. Therefore, RiCE has the potential to stabilize segmentation results in brain studies involving heterogeneous acquisition sources, as is typically found in both multi-center and longitudinal studies.
Medical Image segmentation; Tissue Classification; Rician Distribution; Biomedical Imaging
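The E-step of such an algorithm computes per-class posteriors under the Rician density. A minimal sketch of that step is below, using the exponentially scaled Bessel function for numerical stability; the parameter names and the restriction to the E-step alone (no parameter updates) are simplifying assumptions, not the full RiCE algorithm.

```python
import numpy as np
from scipy.special import i0e

def rician_logpdf(x, nu, sigma):
    # log of (x/sigma^2) exp(-(x^2+nu^2)/(2 sigma^2)) I0(x nu / sigma^2),
    # computed stably via the scaled Bessel function i0e(z) = I0(z) e^{-z}.
    z = x * nu / sigma ** 2
    return (np.log(x / sigma ** 2)
            - (x ** 2 + nu ** 2) / (2 * sigma ** 2)
            + np.log(i0e(z)) + z)

def class_posteriors(x, pis, nus, sigmas):
    """E-step responsibilities for a Rician mixture.

    x: 1-D array of intensities; pis, nus, sigmas: per-class mixture
    weight, location, and scale parameters.
    """
    logp = np.array([np.log(pi) + rician_logpdf(x, nu, s)
                     for pi, nu, s in zip(pis, nus, sigmas)])
    logp -= logp.max(axis=0, keepdims=True)   # avoid overflow
    p = np.exp(logp)
    return p / p.sum(axis=0, keepdims=True)
```

The M-step of a full EM implementation would then re-estimate each class's parameters from these responsibilities.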
Mapping brain structure in relation to neurological development, function, plasticity, and disease is widely considered to be one of the most essential challenges for opening new lines of neuro-scientific inquiry. Recent developments with MRI analysis of structural connectivity, anatomical brain segmentation, cortical surface parcellation, and functional imaging have yielded fantastic advances in our ability to probe the neurological structure-function relationship in vivo. To date, the image analysis efforts in each of these areas have typically focused on a single modality. Here, we extend the cortical reconstruction using implicit surface evolution (CRUISE) methodology to perform efficient, consistent, and topologically correct analyses in a natively multi-parametric manner. This effort combines and extends state-of-the-art techniques to simultaneously consider and analyze structural and diffusion information alongside quantitative and functional imaging data. Robust and consistent estimates of the cortical surface extraction, cortical labeling, diffusion-inferred contrasts, diffusion tractography, and subcortical parcellation are demonstrated in a scan-rescan paradigm. Accompanying this demonstration, we present a fully automated software system complete with validation data.
brain; MRI; cortical surface; white matter parcellation; fiber tracking; sub-cortical segmentation
Labels that identify specific anatomical and functional structures within medical images are essential to the characterization of the relationship between structure and function in many scientific and clinical studies. Automated methods that allow for high throughput have not yet been developed for all anatomical targets or validated for exceptional anatomies, and manual labeling remains the gold standard in many cases. However, manual placement of labels within a large image volume such as that obtained using magnetic resonance imaging is exceptionally challenging, resource intensive, and fraught with intra- and inter-rater variability. The use of statistical methods to combine labels produced by multiple raters has grown significantly in popularity, in part, because it is thought that, by estimating and accounting for rater reliability, estimates of the true labels will be more accurate. This paper demonstrates the performance of a class of these statistical label combination methodologies using real-world data contributed by minimally trained human raters. The consistency of the statistical estimates, the accuracy compared to the individual observations, and the variability of both the estimates and the individual observations with respect to the number of labels are presented. It is demonstrated that statistical fusion successfully combines label information using data from online (Internet-based) collaborations among minimally trained raters. This first successful demonstration of a statistically based approach using minimally trained raters opens numerous possibilities for very large-scale efforts in collaboration. Extension and generalization of these technologies for new applications will certainly present fascinating areas for continuing research.
Parcellation; labeling; delineation; label fusion; STAPLE; STAPLER; minimal training
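The statistical fusion methods evaluated here descend from STAPLE, which alternates between estimating the true labels and estimating each rater's reliability. The sketch below is a minimal binary STAPLE EM loop with a fixed prior and simple initialization; it is a hedged simplification, not the exact estimators studied in the paper.

```python
import numpy as np

def staple(decisions, prior=0.5, n_iter=30):
    """Binary STAPLE: fuse rater decisions into a consensus.

    decisions: (n_raters, n_voxels) array of 0/1 labels.
    Returns the per-voxel probability that the true label is 1, plus
    each rater's estimated sensitivity and specificity.
    """
    D = np.asarray(decisions, dtype=float)
    n_raters, _ = D.shape
    p = np.full(n_raters, 0.9)   # sensitivity initialization
    q = np.full(n_raters, 0.9)   # specificity initialization
    for _ in range(n_iter):
        # E-step: P(true label = 1 | decisions) at each voxel.
        a = prior * np.prod(
            np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(
            np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / (a + b)
        # M-step: re-estimate rater performance from the soft labels.
        p = (D * w).sum(axis=1) / w.sum()
        q = ((1 - D) * (1 - w)).sum(axis=1) / (1 - w).sum()
    return w, p, q
```

With minimally trained raters, the estimated sensitivities and specificities down-weight unreliable contributors, which is the mechanism the paper's large-scale experiments exercise.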
Prostate brachytherapy guided by transrectal ultrasound is a common treatment option for early stage prostate cancer. Prostate cancer accounts for 28% of cancer cases and 11% of cancer deaths in men with 217,730 estimated new cases and 32,050 estimated deaths in 2010 in the United States alone. The major current limitation is the inability to reliably localize implanted radiation seeds spatially in relation to the prostate. Multimodality approaches that incorporate X-ray for seed localization have been proposed, but they require both accurate tracking of the imaging device and segmentation of the seeds. Some use image-based radiographic fiducials to track the X-ray device, but manual intervention is needed to select proper regions of interest for segmenting both the tracking fiducial and the seeds, to evaluate the segmentation results, and to correct the segmentations in the case of segmentation failure, thus requiring a significant amount of extra time in the operating room. In this paper, we present an automatic segmentation algorithm that simultaneously segments the tracking fiducial and brachytherapy seeds, thereby minimizing the need for manual intervention. In addition, through the innovative use of image processing techniques such as mathematical morphology, Hough transforms, and RANSAC, our method can detect and separate overlapping seeds that are common in brachytherapy implant images. Our algorithm was validated on 55 phantom and 206 patient images, successfully segmenting both the fiducial and seeds with a mean seed segmentation rate of 96% and sub-millimeter accuracy.
prostate brachytherapy; seed localization; X-ray; non-isocentric C-arm; fiducial; segmentation; overlapping seeds
The simultaneous segmentation of multiple objects is an important problem in many imaging and computer vision applications. Various extensions of level set segmentation techniques to multiple objects have been proposed; however, no one method maintains object relationships, preserves topology, is computationally efficient, and provides an object-dependent internal and external force capability. In this paper, a framework for segmenting multiple objects that permits different forces to be applied to different boundaries while maintaining object topology and relationships is presented. Within this framework, the segmentation of multiple objects, each with multiple compartments, is supported, and no overlaps or vacuums are generated. The computational complexity of this approach is independent of the number of objects to segment, thereby permitting the simultaneous segmentation of a large number of components. The properties of this approach and comparisons to existing methods are shown using a variety of images, both synthetic and real.
Magnetic resonance angiography (MRA) provides a noninvasive means to detect the presence, location, and severity of atherosclerosis throughout the vascular system. In such studies, and especially those in the coronary arteries, the vessel luminal area is typically measured at multiple cross-sectional locations along the course of the artery. The advent of fast volumetric imaging techniques covering proximal to mid segments of coronary arteries necessitates automatic analysis tools, requiring minimal manual interaction, to robustly measure cross-sectional area along the three-dimensional track of the arteries in under-sampled and non-isotropic datasets. In this work, we present a modular approach based on level set methods to track the vessel centerline, segment the vessel boundaries, and measure transversal area using two user-selected endpoints in each coronary of interest. Arterial area and vessel length are measured using our method and compared to the standard Soap-Bubble reformatting and analysis tool on in vivo, non-contrast-enhanced coronary MRA images.
3D Centerline Tracking; 3D Segmentation; Level Set Methods; Non-contrast enhanced Magnetic Resonance Angiography; Coronary Arteries
Image labeling is an essential step for quantitative analysis of medical images. Many image labeling algorithms require seed identification in order to initialize segmentation algorithms such as region growing, graph cuts, and the random walker. Seeds are usually placed manually by human raters, which makes these algorithms semi-automatic and can be prohibitive for very large datasets. In this paper, an automatic algorithm for placing seeds using multi-atlas registration and statistical fusion is proposed. Atlases containing the centers of mass of a collection of neuroanatomical objects are deformably registered to a training set to determine where these centers of mass map when the labels are transformed by the registration. The biases of these transformations are determined and incorporated into a continuous form of Simultaneous Truth And Performance Level Estimation (STAPLE) fusion, thereby improving the estimates (on average) over a single-registration strategy that does not incorporate bias or fusion. We evaluate this technique using real 3D brain MR image atlases and demonstrate its efficacy in correcting the registration bias and reducing the fusion error.
Labeling; seed; continuous; STAPLE; atlas; statistics; fusion; bias; registration
Diffusion-weighted images of the human brain are acquired more and more routinely in clinical research settings, yet segmenting and labeling white matter tracts in these images is still challenging. We present in this paper a fully automated method to extract many anatomical tracts at once from diffusion tensor images, based on a Markov random field model and anatomical priors. The approach provides a direct voxel labeling, explicitly models fiber crossings, and can handle white matter lesions. Experiments on simulations and repeatability studies show robustness to noise and reproducibility of the algorithm, which has been made publicly available.
White matter tracts; DTI Segmentation; Markov random field modeling
Current MRI methods for myocardial motion and strain quantification have limited resolution because of Fourier space spectral peak interference. Methods have been proposed to remove this interference in order to improve resolution; however, these methods are clinically impractical due to the prolonged imaging times. In this paper, we propose total removal of unwanted harmonic peaks (TruHARP), a myocardial motion and strain quantification methodology that uses a novel single breath-hold MR image acquisition protocol. In post-processing, TruHARP separates the spectral peaks in the acquired images, enabling high-resolution motion and strain quantification. The impact of high resolution on calculated circumferential and radial strains is studied using realistic Monte Carlo simulations, and the improvement in strain maps is demonstrated in six human subjects.
MRI; HARP; tagging; cardiac motion; strain quantification
We propose a method for improving the quality of cone-beam tomographic reconstruction done with a C-arm. C-arm scans frequently suffer from incomplete information due to image truncation, limited scan length, or other limitations. Our proposed "hybrid reconstruction" method injects information from a prior anatomical model, derived from a subject-specific CT or from a statistical database (atlas), where the C-arm x-ray data is missing. This significantly reduces reconstruction artifacts with little loss of true information from the x-ray projections. The method consists of constructing anatomical models, fast rendering of digitally reconstructed radiograph (DRR) projections of the models, rigid or deformable registration of the model to the x-ray images, and fusion of the DRR and x-ray projections, all prior to a conventional filtered back-projection algorithm. Our experiments, conducted with a mobile image intensifier C-arm, demonstrate visually and quantitatively the contribution of data fusion to image quality, which we assess through comparison to a "ground truth" CT. Importantly, we show that a significantly improved reconstruction can be obtained from a C-arm scan as short as 90° by complementing the observed projections with DRRs of two prior models, namely an atlas and a pre-operative same-patient CT. The hybrid reconstruction principles are applicable to other types of C-arms as well.
cone-beam reconstruction; C-arm; computed tomography; anatomical atlas; hybrid reconstruction
The success of prostate brachytherapy critically depends on delivering adequate dose to the prostate gland, and the capability of intraoperatively localizing implanted seeds provides potential for dose evaluation and optimization during therapy. REDMAPS is a recently reported algorithm that carries out seed localization by detecting, matching, and reconstructing seeds in only a few seconds from three acquired x-ray images. In this paper, we present an automatic pose correction (APC) process that is combined with REDMAPS to allow for both more accurate seed reconstruction and the use of images with relatively large pose errors. APC uses a set of reconstructed seeds as a fiducial and corrects the image pose by minimizing the overall projection error. The seed matching and APC are iteratively computed until a stopping condition is met. Simulations and clinical studies show that APC significantly improves the reconstructions with an overall average matching rate of ≥ 99.4%, reconstruction error of ≤ 0.5 mm, and the matching solution optimality of ≥ 99.8%.
Extraction of the brain — i.e. cerebrum, cerebellum, and brain stem — from T1-weighted structural magnetic resonance images is an important initial step in neuroimage analysis. Although automatic algorithms are available, their inconsistent handling of the cortical mantle often requires manual interaction, thereby reducing their effectiveness. This paper presents a fully automated brain extraction algorithm that incorporates elastic registration, tissue segmentation, and morphological techniques, which are combined by a watershed principle, while paying special attention to the preservation of the boundary between the gray matter and the cerebrospinal fluid. The approach was evaluated by comparison to a manual rater, and compared to several other leading algorithms on a publicly available data set of brain images using the Dice coefficient and containment index as performance metrics. The qualitative and quantitative impact of this initial step on subsequent cortical surface generation is also presented. Our experiments demonstrate that our approach is quantitatively better than six other leading algorithms (with statistical significance on modern T1-weighted MR data). We also validated the robustness of the algorithm on a very large data set of over one thousand subjects, and showed that it can replace an experienced manual rater as preprocessing for a cortical surface extraction algorithm with statistically insignificant differences in cortical surface position.
Brain extraction; skull stripping; watershed principle; segmentation; medical image processing