This paper presents a patch-based non-parametric approach to the correction of intensity inhomogeneity in magnetic resonance (MR) images of the human brain. During image acquisition, the inhomogeneity of the radio-frequency coil usually manifests in the reconstructed MR image as a smooth shading effect. This artifact can significantly deteriorate the performance of any image processing algorithm that uses intensities as a feature. Most current inhomogeneity correction techniques impose explicit smoothness assumptions on the inhomogeneity field, which can limit their performance when the actual inhomogeneity is not smooth, a problem that becomes prevalent at high field strengths. The proposed patch-based inhomogeneity correction method does not assume any parametric smoothness model; instead, it uses patches from an atlas of an inhomogeneity-free image to perform the correction. Preliminary results show that the proposed method is comparable to N3, a current state-of-the-art method, when the inhomogeneity is smooth, and outperforms N3 when the inhomogeneity contains non-smooth components.
intensity inhomogeneity; bias field; gain field; MRI; patch; segmentation; RF field inhomogeneity; non-uniformity; artifact correction
This paper presents a patch-based method to normalize temporal intensities in longitudinal brain magnetic resonance (MR) images. Longitudinal intensity normalization is relevant for subsequent processing, such as segmentation, so that rates of change of tissue volumes, cortical thickness, or shapes of brain structures become stable and smooth over time. Instead of using intensities at each voxel, we use patches as image features, as a patch encodes neighborhood information about its center voxel. Once all the time-points of a longitudinal dataset are registered, the longitudinal intensity change at each patch is assumed to follow an auto-regressive (AR(1)) process. An estimate of the normalized intensities of a patch at every time-point is generated from a hidden Markov model, where the hidden states are the unobserved normalized patches and the outputs are the observed patches. A validation study on a phantom dataset shows good segmentation overlap with the truth, and an experiment with real data shows more stable rates of change for tissue volumes with the temporal normalization than without.
Intensity normalization; intensity standardization; MRI; patch; brain
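The AR(1) state with noisy patch observations described above is a linear-Gaussian state-space model, so the normalized (hidden) intensities admit a standard Kalman-filter estimate. A minimal one-dimensional sketch, not the paper's actual patch-based formulation; `phi`, `q`, and `r` are assumed AR and noise parameters:

```python
import numpy as np

def kalman_ar1(y, phi, q, r, z0=0.0, p0=1.0):
    """Filtered estimates of an AR(1) hidden state observed with noise.

    State:       z_t = phi * z_{t-1} + w_t,  w_t ~ N(0, q)
    Observation: y_t = z_t + v_t,            v_t ~ N(0, r)
    """
    z, p = z0, p0
    out = []
    for yt in y:
        # predict
        z_pred = phi * z
        p_pred = phi * phi * p + q
        # update with the Kalman gain
        k = p_pred / (p_pred + r)
        z = z_pred + k * (yt - z_pred)
        p = (1.0 - k) * p_pred
        out.append(z)
    return np.array(out)
```

Run per patch (or per patch coefficient), this yields a temporally smoothed intensity trajectory in place of the raw observed one.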
Deformable registration techniques play vital roles in a variety of medical imaging tasks such as image fusion, segmentation, and post-operative surgery assessment. In recent years, mutual information has become one of the most widely used similarity metrics for medical image registration algorithms. Unfortunately, as a matching criterion, mutual information loses much of its effectiveness when there is poor statistical consistency and a lack of structure. This is especially true in areas of images where the intensity is homogeneous and information is sparse. Here we present a method designed to address this problem by integrating distance transforms of anatomical segmentations as part of a multi-channel mutual information framework within the registration algorithm. Our method was tested by registering real MR brain data and comparing the segmentation of the results against that of the target. Our analysis showed that by integrating distance transforms of the white matter segmentation into the registration, the overall segmentation of the registration result was closer to the target than when the distance transform was not used.
Image registration; Magnetic resonance imaging; Multidimensional signal processing; Spatial normalization; Distance Transform
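To make the idea concrete, the sketch below (an illustration, not the authors' implementation) computes a signed distance transform of a toy white-matter mask with `scipy.ndimage.distance_transform_edt`, together with a histogram-based mutual information that could be summed across intensity and distance channels:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Signed distance transform of a toy white-matter mask: positive inside,
# negative outside, giving smooth structure in otherwise flat regions.
mask = np.zeros((64, 64))
mask[20:44, 20:44] = 1
dist = distance_transform_edt(mask) - distance_transform_edt(1 - mask)
```

A multi-channel criterion would then add the MI of the distance channels to the MI of the intensity channels during optimization.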
Magnetic resonance (MR) images of the tongue have been used in both clinical studies and scientific research to reveal tongue structure. In order to extract different features of the tongue and its relation to the vocal tract, it is beneficial to acquire three orthogonal image volumes—e.g., axial, sagittal, and coronal volumes. In order to maintain both low noise and high visual detail and to minimize blurring due to involuntary motion, each set of images is acquired with an in-plane resolution that is much better than the through-plane resolution. As a result, any one data set, by itself, is not ideal for automatic volumetric analyses such as segmentation, registration, and atlas building or even for visualization when oblique slices are required. This paper presents a method of super-resolution volume reconstruction of the tongue that generates an isotropic image volume using the three orthogonal image volumes. The method uses preprocessing steps that include registration and intensity matching and a data combination approach with the edge-preserving property carried out by Markov random field optimization. The performance of the proposed method was demonstrated on fifteen clinical datasets, preserving anatomical details and yielding superior results when compared with different reconstruction methods as visually and quantitatively assessed.
Super-resolution volume reconstruction; human tongue; magnetic resonance imaging (MRI)
Several popular classification algorithms used to segment magnetic resonance brain images assume that the image intensities, or log-transformed intensities, satisfy a finite Gaussian mixture model. In these methods, the parameters of the mixture model are estimated and the posterior probabilities for each tissue class are used directly as soft segmentations or combined to form a hard segmentation. It is suggested and shown in this paper that a Rician mixture model fits the observed data better than a Gaussian model. Accordingly, a Rician mixture model is formulated and used within an expectation maximization (EM) framework to yield a new tissue classification algorithm called RiCE (Rician Classifier using EM). It is shown using both simulated and real data that RiCE yields comparable or better performance to that of algorithms based on the finite Gaussian mixture model. We also show that RiCE yields more consistent segmentation results when used on images of the same individual acquired with different T1-weighted pulse sequences. Therefore, RiCE has the potential to stabilize segmentation results in brain studies involving heterogeneous acquisition sources as is typically found in both multi-center and longitudinal studies.
Medical Image segmentation; Tissue Classification; Rician Distribution; Biomedical Imaging
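For reference, the Rician density and the mixture E-step are compact enough to sketch. The version below uses the exponentially scaled Bessel function `i0e` for numerical stability; parameter names are illustrative, not RiCE's:

```python
import numpy as np
from scipy.special import i0e

def rician_logpdf(x, nu, sigma):
    """Log-density of the Rician distribution:
    f(x) = (x / s^2) * exp(-(x^2 + nu^2) / (2 s^2)) * I0(x * nu / s^2).
    Uses I0(z) = i0e(z) * exp(z) to avoid overflow for large arguments."""
    x = np.asarray(x, dtype=float)
    s2 = sigma * sigma
    z = x * nu / s2
    return np.log(x) - np.log(s2) - (x * x + nu * nu) / (2 * s2) + z + np.log(i0e(z))

def responsibilities(x, nus, sigmas, weights):
    """E-step of a Rician mixture: posterior class probabilities per sample."""
    logp = np.stack([np.log(w) + rician_logpdf(x, n, s)
                     for n, s, w in zip(nus, sigmas, weights)])
    logp -= logp.max(axis=0)          # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum(axis=0)
```

The M-step (updating `nus`, `sigmas`, `weights` from these responsibilities) would complete one EM iteration.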
Bayesian and Graphical Models for Biomedical Imaging: First International Workshop, BAMBI 2014, Cambridge, MA, USA, September 18, 2014, Revised Selected Papers. Edited by M. Jorge Cardoso, Ivor Simpson, Tal Arbel, Doina Precup, and Annemie Ribbens.
Fiber tracking in crossing regions is a well-known issue in diffusion tensor imaging (DTI). Multi-tensor models have been proposed to cope with this issue. However, in cases where only a limited number of gradient directions can be acquired, for example in the tongue, the multi-tensor models fail to resolve the crossing correctly due to insufficient information. In this work, we address this challenge by using a fixed tensor basis and incorporating prior directional knowledge. Within a maximum a posteriori (MAP) framework, sparsity of the basis and prior directional knowledge are incorporated in the prior distribution, and data fidelity is encoded in the likelihood term. An objective function can then be obtained and solved using a noise-aware weighted ℓ1-norm minimization. Experiments on a digital phantom and in vivo tongue diffusion data demonstrate that the proposed method is able to resolve crossing fibers with limited gradient directions.
diffusion imaging; weighted ℓ1-norm minimization; prior directional knowledge
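The weighted ℓ1-norm minimization at the heart of such a formulation can be illustrated with a plain ISTA loop. This is a generic solver sketch, not the authors' exact algorithm; the weights `w` stand in for the prior directional knowledge (a smaller weight favors the corresponding basis direction):

```python
import numpy as np

def weighted_l1_ista(A, y, lam, w, n_iter=500):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * sum_i w_i * |x_i|."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)            # gradient of the smooth data term
        z = x - t * g
        thr = t * lam * w                # per-coefficient soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return x
```

Here `A` would hold the fixed tensor basis responses for each gradient direction and `y` the measured diffusion signal at a voxel.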
Segmentation and parcellation of the thalamus is an important step in providing volumetric assessment of the impact of disease on brain structures. Conventionally, segmentation is carried out on T1-weighted magnetic resonance (MR) images and nuclear parcellation using diffusion weighted MR images. We present the first fully automatic method that incorporates both tissue contrasts and several derived features to first segment and then parcellate the thalamus. We incorporate fractional anisotropy, fiber orientation from the 5D Knutsson representation of the principal eigenvectors, and connectivity between the thalamus and the cortical lobes, as features. Combining these multiple information sources allows us to identify discriminating dimensions and thus parcellate the thalamic nuclei. A hierarchical random forest framework with a multidimensional feature per voxel first distinguishes thalamus from background and then separates each group of thalamic nuclei. Using a leave-one-out cross-validation on 12 subjects, we have a mean Dice score of 0.805 and 0.799 for the left and right thalami, respectively. We also report overlap for the thalamic nuclear groups.
Brain imaging; diffusion MRI; magnetic resonance imaging; machine learning; segmentation; thalamus parcellation
Cerebellar ataxia is a progressive neurodegenerative disease that has multiple genetic versions, each with a characteristic pattern of anatomical degeneration that yields distinctive motor and cognitive problems. Studying this pattern of degeneration can help with the diagnosis of disease subtypes, evaluation of disease stage, and treatment planning. In this work, we propose a learning framework using MR image data for discriminating a set of cerebellar ataxia types and predicting a disease-related functional score. We address the difficulty in analyzing high-dimensional image data with limited training subjects by: 1) training weak classifiers/regressors on a set of image subdomains separately, and combining the weak classifier/regressor outputs to make the decision; 2) perturbing the image subdomain to increase the training samples; 3) using a deep learning technique called the stacked auto-encoder to develop highly representative feature vectors of the input data. Experiments show that our approach can reliably classify between one of four categories (healthy control and three types of ataxia), and predict the functional staging score for ataxia.
Microcystic macular edema (MME) manifests as small, hyporeflective cystic areas within the retina. For reasons that are still largely unknown, a small proportion of patients with multiple sclerosis (MS) develop MME—predominantly in the inner nuclear layer. These cystoid spaces, denoted pseudocysts, can be imaged using optical coherence tomography (OCT) where they appear as small, discrete, low intensity areas with high contrast to the surrounding tissue. The ability to automatically segment these pseudocysts would enable a more detailed study of MME than has been previously possible. Although larger pseudocysts often appear quite clearly in the OCT images, the multi-frame averaging performed by the Spectralis scanner adds a significant amount of variability to the appearance of smaller pseudocysts. Thus, simple segmentation methods only incorporating intensity information do not perform well. In this work, we propose to use a random forest classifier to classify the MME pixels. An assortment of both intensity and spatial features are used to aid the classification. Using a cross-validation evaluation strategy with manual delineation as ground truth, our method is able to correctly identify 79% of pseudocysts with a precision of 85%. Finally, we constructed a classifier from the output of our algorithm to distinguish clinically identified MME from non-MME subjects yielding an accuracy of 92%.
(100.0100) Image processing; (170.4470) Ophthalmology; (170.4500) Optical coherence tomography
Integrated PET (positron emission tomography)/MR (magnetic resonance) systems are becoming increasingly popular in clinical and research applications. Quantitative PET reconstruction requires correction for γ photon attenuations using an attenuation coefficient map (μ-map) that is a measure of the electron density. One challenge of PET/MR, in contrast to PET/CT, lies in the accurate computation of μ-maps. Unlike CT, MRI measures physical properties not directly related to electron density. Previous approaches have computed the attenuation coefficients using a segmentation of MR images or using deformable registration of atlas CT images to the space of the subject MRI.
In this work, we propose a patch-based method to generate whole head μ-maps from Ultra-short Echo Time (UTE) MR imaging sequences. UTE images are preferred to other MR sequences because of the increased signal from bone. To generate a synthetic CT image, we use patches from a reference dataset, which consists of dual echo UTE images and a co-registered CT from the same subject. By matching patches between the reference and target images, corresponding patches from the reference CT are combined via a Bayesian framework. No registration or segmentation is required.
For evaluation, UTE, CT, and PET data, acquired from 5 patients under an IRB-approved protocol, were used. Another patient (with UTE and CT only) was selected as the reference to generate synthetic CT images for these five patients. PET reconstructions were attenuation corrected using (1) the original CT, (2) our synthetic CT, (3) the Siemens Dixon-based μ-map, (4) the Siemens UTE-based μ-map, and (5) a CT obtained by deformable registration. Our synthetic CT based PET reconstruction shows higher correlation (average ρ = 0.99, R2 = 0.99) to the original CT based PET, as compared to the segmentation and registration based methods. Synthetic CT based reconstruction had minimal bias (regression slope 0.99) as compared to the segmentation based methods (regression slope 0.97). A peak signal-to-noise ratio of 36.0 dB in the reconstructed PET activity is observed, compared with 29.7, 29.3, and 27.4 dB for the Siemens Dixon, UTE, and registration based μ-maps, respectively.
A patch-matching approach to synthesizing CT images from dual echo UTE images leads to PET reconstructions that closely match those obtained with actual CT scans. The PET reconstruction is improved over segmentation-based (Dixon and Siemens UTE) and registration-based methods, even in subjects with pathology.
attenuation correction; PET/CT; PET/MRI; CT; UTE
Spinal cord segmentation is an important step in the analysis of neurological diseases such as multiple sclerosis. Several studies have shown correlations between disease progression and metrics relating to spinal cord atrophy and shape changes. Current practices primarily involve segmenting the spinal cord manually or semi-automatically, which can be inconsistent and time-consuming for large datasets. An automatic method that segments the spinal cord and cerebrospinal fluid from magnetic resonance images is presented. The method uses a deformable atlas and topology constraints to produce results that are robust to noise and artifacts. The method is designed to be easily extended to new data with different modalities, resolutions, and fields of view. Validation was performed on two distinct datasets. The first consists of magnetization transfer-prepared T2*-weighted gradient-echo MRI centered only on the cervical vertebrae (C1-C5). The second consists of T1-weighted MRI that cover both the cervical and portions of the thoracic vertebrae (C1-T4). Results were found to be highly accurate in comparison to manual segmentations. A pilot study was carried out to demonstrate the potential utility of this new method for research and clinical studies of multiple sclerosis.
Atlas construction; topology-preserving segmentation; digital homeomorphism; spinal cord segmentation; magnetic resonance imaging
To understand the role of the tongue in speech production, it is desirable to directly image the motion and strain of the muscles within the tongue. Magnetic resonance tagging—which was originally developed for cardiac imaging—has previously been applied to image both two-dimensional and three-dimensional tongue motion during speech. However, to quantify three-dimensional motion and strain, multiple images yielding two-dimensional motion must be acquired at different orientations and then interpolated—a time-consuming task both in image acquisition and processing. Recently, a new MR imaging and image processing method called zHARP was developed to encode and track 3D motion from a single slice without increasing acquisition time. zHARP was originally developed and applied to cardiac imaging. The application of zHARP to the tongue is not straightforward because the tongue in repetitive speech does not move as consistently as the heart in its beating cycle. Therefore, tongue images are more susceptible to motion artifacts. Moreover, these artifacts are greatly exaggerated as compared to conventional tagging because of the nature of zHARP acquisition. In this work, we re-implemented the zHARP imaging sequence and optimized it for tongue motion analysis. We also optimized image acquisition by designing and developing a specialized MRI scanner triggering method and vocal repetition protocol to better synchronize speech repetitions. Our method was validated using a moving phantom. Results of 3D motion tracking and strain analysis on the tongue experiments demonstrate the effectiveness of this method.
Motion quantification; tongue; zHARP
Despite ongoing improvements in magnetic resonance (MR) imaging (MRI), considerable clinical and, to a lesser extent, research data are acquired at lower resolutions. For example, 1 mm isotropic acquisition of T1-weighted (T1-w) Magnetization Prepared Rapid Gradient Echo (MPRAGE) images is standard practice; however, T2-weighted (T2-w) images, because of their longer relaxation times (and thus longer scan times), are still routinely acquired with slice thicknesses of 2–5 mm and in-plane resolutions of 2–3 mm. This creates obvious fundamental problems when trying to process T1-w and T2-w data in concert. We present an automated supervised learning algorithm to generate high-resolution data. The framework is similar to the brain hallucination work of Rousseau, taking advantage of new developments in regression-based image reconstruction. We present validation on phantom and real data, demonstrating the improvement over state-of-the-art super-resolution techniques.
Image reconstruction; super-resolution; regression; brain; MRI
Fluid Attenuated Inversion Recovery (FLAIR) is a commonly acquired pulse sequence for multiple sclerosis (MS) patients. MS white matter lesions appear hyperintense in FLAIR images and have excellent contrast with the surrounding tissue. Hence, FLAIR images are commonly used in automated lesion segmentation algorithms to easily and quickly delineate the lesions. This expedites the lesion load computation and correlation with disease progression. Unfortunately, for numerous reasons, the acquired FLAIR images can be of poor quality and suffer from various artifacts. In the most extreme cases the data is absent, which poses a problem when consistently processing a large data set. We propose to fill in this gap by reconstructing a FLAIR image given the corresponding T1-weighted, T2-weighted, and PD-weighted images of the same subject using random forest regression. We show that the images we produce are similar to true high quality FLAIR images and also provide a good surrogate for tissue segmentation.
Image reconstruction; regression; brain
Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet work on anatomically accurate PET-to-PET registration is limited. We present a method for the spatial normalization of PET images that improves their anatomical alignment based on a deformation correction model learned from structural image registration. To generate the model, we first create a population-based PET template with a corresponding structural image template. We register each PET image onto the PET template using deformable registration that consists of an affine step followed by a diffeomorphic mapping. Constraining the affine step to be the same as that obtained from the PET registration, we find the diffeomorphic mapping that will align the structural image with the structural template. We train partial least squares (PLS) regression models within small neighborhoods to relate the PET intensities and deformation fields obtained from the diffeomorphic mapping to the structural image deformation fields. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation based evaluation on 79 subjects shows that our method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations.
PET registration; deformation field; partial least squares
Quantitative measurements from segmentations of soft tissues from magnetic resonance images (MRI) of human brains provide important biomarkers for normal aging, as well as disease progression. In this paper, we propose a patch-based tissue classification method from MR images using sparse dictionary learning from an atlas. Unlike most atlas-based classification methods, deformable registration from the atlas to the subject is not required. An “atlas” consists of an MR image, its tissue probabilities, and the hard segmentation. The “subject” consists of the MR image and the corresponding affine registered atlas probabilities (or priors). A subject specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches. The same sparse combination is applied to the segmentation patches of the atlas to generate tissue memberships of the subject. The novel combination of prior probabilities in the example patches enables us to distinguish tissues having similar intensities but different spatial locations. We show that our method outperforms two state-of-the-art whole brain tissue segmentation methods. We experimented on 12 subjects having manual tissue delineations, obtaining mean Dice coefficients of 0.91 and 0.87 for cortical gray matter and cerebral white matter, respectively. In addition, experiments on subjects with ventriculomegaly show significantly better segmentation using our approach than the competing methods.
image synthesis; intensity normalization; hallucination; patches
Tissue contrast and resolution of magnetic resonance neuroimaging data have strong impacts on the utility of the data in clinical and neuroscience tasks such as registration and segmentation. Lengthy acquisition times typically prevent routine acquisition of multiple MR tissue contrast images at high resolution, and the opportunity for detailed analysis using these data would seem to be irrevocably lost. This paper describes an example based approach using patch matching from a multiple resolution multiple contrast atlas in order to change an image's resolution as well as its MR tissue contrast from one pulse sequence to another. The use of this approach to generate different tissue contrasts (T2/PD/FLAIR) from a single T1-weighted image is demonstrated on both phantom and real images.
Image classification; resolution; segmentation; MR tissue contrast; contrast synthesis; image hallucination; atlas
Images of myocardial strain can be used to diagnose heart disease, plan and monitor treatment, and to learn about cardiac structure and function. Three-dimensional (3-D) strain is typically quantified using many magnetic resonance (MR) images obtained in two or three orthogonal planes. Problems with this approach include long scan times, image misregistration, and through-plane motion. This article presents a novel method for calculating cardiac 3-D strain using a stack of two or more images acquired in only one orientation. The zHARP pulse sequence encodes in-plane motion using MR tagging and out-of-plane motion using phase encoding, and has been previously shown to be capable of computing 3D displacement within a single image plane. Here, data from two adjacent image planes are combined to yield a 3-D strain tensor at each pixel; stacks of zHARP images can be used to derive stacked arrays of 3D strain tensors without imaging multiple orientations and without numerical interpolation. The performance and accuracy of the method is demonstrated in vitro on a phantom and in vivo in four healthy adult human subjects.
three-dimensional strain tensor; cardiac function; HARP; zHARP; harmonic phase magnetic resonance
Harmonic phase (HARP) motion analysis is widely used in the analysis of tagged magnetic resonance images of the heart. HARP motion tracking can yield gross errors, however, when there is a large amount of motion between successive time frames. Methods that use spatial continuity of motion—so-called refinement methods—have previously been reported to reduce these errors. This paper describes a new refinement method based on shortest-path computations. The method uses a graph representation of the image and seeks an optimal tracking order from a specified seed to each point in the image by solving a single source shortest path problem. This minimizes the potential for path dependent solutions which are found in other refinement methods. Experiments on cardiac motion tracking show that the proposed method can track the whole tissue more robustly and is also computationally efficient.
Motion tracking; HARP; refinement; single source shortest path problem; Dijkstra’s algorithm
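The shortest-path machinery behind such a refinement is standard Dijkstra. A stdlib sketch, under the simplifying assumption that each pixel carries a scalar cost of being entered (the paper's actual edge weights would come from the HARP images):

```python
import heapq

def tracking_order(cost, seed):
    """Single-source shortest paths on a 4-connected image grid.

    Returns the distance map and the order in which pixels are settled;
    the settle order can serve as a tracking order out from the seed.
    """
    h, w = len(cost), len(cost[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    dist[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]
    order = []
    done = set()
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) in done:
            continue
        done.add((r, c))
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]      # edge weight: cost of entering neighbor
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist, order
```

Because every pixel is settled exactly once along its cheapest path from the seed, the resulting order is independent of any arbitrary scan direction, which is the property the refinement exploits.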
This article introduces a new image processing technique for rapid analysis of tagged cardiac magnetic resonance image sequences. The method uses isolated spectral peaks in SPAMM-tagged magnetic resonance images, which contain information about cardiac motion. The inverse Fourier transform of a spectral peak is a complex image whose calculated angle is called a harmonic phase (HARP) image. It is shown how two HARP image sequences can be used to automatically and accurately track material points through time. A rapid, semiautomated procedure to calculate circumferential and radial Lagrangian strain from tracked points is described. This new computational approach permits rapid analysis and visualization of myocardial strain within 5-10 min after the scan is complete. Its performance is demonstrated on MR image sequences reflecting both normal and abnormal cardiac motion. Results from the new method are shown to compare very well with a previously validated tracking algorithm.
cardiac motion; harmonic phase; magnetic resonance tagging; myocardial strain
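The HARP computation itself (isolate one spectral peak, inverse-transform, take the angle) is compact enough to sketch in one dimension; the tag frequency and filter width below are illustrative choices, not values from the article:

```python
import numpy as np

def harmonic_phase(img_row, k0, halfwidth):
    """HARP image from one spectral peak of a tagged signal.

    Bandpass-filters the FFT around bin k0, inverse-transforms, and
    returns the angle of the resulting complex image.
    """
    F = np.fft.fft(img_row)
    mask = np.zeros_like(F)
    lo, hi = k0 - halfwidth, k0 + halfwidth + 1
    mask[lo:hi] = F[lo:hi]               # keep only one harmonic peak
    return np.angle(np.fft.ifft(mask))

# Tagged 1-D profile: tag frequency of 8 cycles over N samples.
N = 256
x = np.arange(N)
phase_true = 2 * np.pi * 8 * x / N
row = 1.0 + np.cos(phase_true)           # SPAMM-like intensity modulation
harp = harmonic_phase(row, k0=8, halfwidth=3)
```

Tracking then follows: a material point keeps its harmonic phase as it moves, so matching wrapped phase values between time frames recovers displacement.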
Using modern diffusion weighted magnetic resonance imaging protocols, the orientations of multiple neuronal fiber tracts within each voxel can be estimated. Further analysis of these populations, including application of fiber tracking and tract segmentation methods, is often hindered by lack of spatial smoothness of the estimated orientations. For example, a single noisy voxel can cause a fiber tracking method to switch tracts in a simple crossing tract geometry. In this work, a generalized spatial smoothing framework that handles multiple orientations as well as their fractional contributions within each voxel is proposed. The approach estimates an optimal fuzzy correspondence of orientations and fractional contributions between voxels and smooths only between these correspondences. Avoiding a requirement to obtain exact correspondences of orientations reduces smoothing anomalies due to propagation of erroneous correspondences around noisy voxels. Phantom experiments are used to demonstrate both visual and quantitative improvements in postprocessing steps. Improvement over smoothing in the measurement domain is also demonstrated using both phantoms and in vivo human data.
Smoothing; Diffusion MRI; HARDI; Multi-tensor; Multiple orientations; Multiple directions; Correspondence
Computed tomography (CT) is the standard imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a CT image from the MRI, using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interest. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.
magnetic resonance imaging; MRI; CT; image synthesis; intensity normalization; histogram matching; brain; hallucination; patches
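A stripped-down version of the patch-matching step can be sketched as follows. This is a brute-force 2-D nearest-neighbor illustration, far simpler than the probabilistic framework described above: each subject MR patch simply borrows the CT value at the center of its closest atlas MR patch.

```python
import numpy as np

def extract_patches(img, radius):
    """All (2r+1) x (2r+1) patches of a 2-D image, flattened per center."""
    r = radius
    h, w = img.shape
    out = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            out.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
    return np.array(out)

def synthesize(subject_mr, atlas_mr, atlas_ct, radius=1):
    """Nearest-neighbor patch matching for CT synthesis (toy version)."""
    P_sub = extract_patches(subject_mr, radius)
    P_atl = extract_patches(atlas_mr, radius)
    h, w = atlas_ct.shape
    centers = atlas_ct[radius:h - radius, radius:w - radius].ravel()
    # squared Euclidean distance between every subject/atlas patch pair
    d = ((P_sub[:, None, :] - P_atl[None, :, :]) ** 2).sum(axis=2)
    idx = d.argmin(axis=1)
    hs, ws = subject_mr.shape
    return centers[idx].reshape(hs - 2 * radius, ws - 2 * radius)
```

For realistic image sizes the all-pairs distance matrix would be replaced by an approximate nearest-neighbor search, and multiple matched patches would be combined rather than taking a single winner.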
Diffusion tensor imaging (DTI) is widely used to characterize tissue microarchitecture and brain connectivity. However, traditional tensor techniques cannot represent multiple, independent intra-voxel orientations, so DTI suffers serious limitations in regions of crossing fibers. We present a new application of compressed sensing, Crossing Fiber Angular Resolution of Intra-voxel structure (CFARI), to resolve multiple tissue orientations. CFARI identifies a parsimonious tissue model on a strictly voxelwise basis using traditional DTI data. Reliable estimates of multiple intra-voxel orientations are demonstrated in simulations, and intra-voxel fiber orientations consistent with crossing fiber anatomy are revealed with typical in vivo DTI data.
In coronary magnetic resonance angiography, a magnetization-preparation scheme for T2-weighting (T2Prep) is widely used to enhance contrast between the coronary blood-pool and the myocardium. This pre-pulse is commonly applied without spatial selection to minimize flow sensitivity, but the non-selective implementation results in a reduced magnetization of the in-flowing blood and a related penalty in signal-to-noise ratio (SNR). It is hypothesized that a spatially-selective T2Prep would leave the magnetization of blood outside the T2Prep volume unaffected, and thereby lower the SNR penalty. To test this hypothesis, a spatially-selective T2Prep was implemented where the user could freely adjust the angulation and position of the T2Prep slab to avoid covering the ventricular blood-pool and saturating the in-flowing spins. A time gap of 150 ms was further added between the T2Prep and other pre-pulses to allow for in-flow of a larger volume of unsaturated spins. Consistent with numerical simulation, the spatially-selective T2Prep increased in vivo human coronary artery SNR (42.3±2.9 vs. 31.4±2.2, n=22, p<0.0001) and contrast-to-noise ratio (18.6±1.5 vs. 13.9±1.2, p=0.009) as compared to those of the non-selective T2Prep. Additionally, a segmental analysis demonstrated that the spatially-selective T2Prep was most beneficial in proximal and mid segments, where the in-flowing blood volume was largest compared to the distal segments.
Coronary MR Angiography; Contrast Enhancement; T2Prep; Vessel Conspicuity; In-flowing Blood
Optical coherence tomography (OCT) of the macula has become increasingly important in the investigation of retinal pathology. However, deformable image registration, which is used for aligning subjects for pairwise comparisons, population averaging, and atlas label transfer, has not been well-developed and demonstrated on OCT images. In this paper, we present a deformable image registration approach designed specifically for macular OCT images. The approach begins with an initial translation to align the fovea of each subject, followed by a linear rescaling to align the top and bottom retinal boundaries. Finally, the layers within the retina are aligned by a deformable registration using one-dimensional radial basis functions. The algorithm was validated using manual delineations of retinal layers in OCT images from a cohort consisting of healthy controls and patients diagnosed with multiple sclerosis (MS). We show that the algorithm overcomes the shortcomings of existing generic registration methods, which cannot be readily applied to OCT images. A successful deformable image registration algorithm for macular OCT opens up a variety of population based analysis techniques that are regularly used in other imaging modalities, such as spatial normalization, statistical atlas creation, and voxel based morphometry. Examples of these applications are provided to demonstrate the potential benefits such techniques can have on our understanding of retinal disease. In particular, included is a pilot study of localized volumetric changes between healthy controls and MS patients using the proposed registration algorithm.
(100.0100) Image processing; (170.4470) Ophthalmology; (170.4500) Optical coherence tomography
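The final deformable step, a warp built from one-dimensional radial basis functions, can be sketched with Gaussian RBFs. The kernel shape and least-squares landmark fit below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf_warp_1d(x, centers, coeffs, width):
    """1-D deformation from Gaussian radial basis functions:
    phi(x) = x + sum_i c_i * exp(-(x - x_i)^2 / (2 * width^2))."""
    x = np.asarray(x, dtype=float)[:, None]
    k = np.exp(-((x - np.asarray(centers)[None, :]) ** 2) / (2 * width ** 2))
    return x[:, 0] + k @ np.asarray(coeffs, dtype=float)

def fit_rbf_warp_1d(x_src, x_tgt, centers, width, reg=1e-6):
    """Regularized least-squares fit of coefficients so phi(x_src) ~ x_tgt."""
    x = np.asarray(x_src, dtype=float)[:, None]
    K = np.exp(-((x - np.asarray(centers)[None, :]) ** 2) / (2 * width ** 2))
    rhs = np.asarray(x_tgt, dtype=float) - x[:, 0]
    A = K.T @ K + reg * np.eye(K.shape[1])
    return np.linalg.solve(A, K.T @ rhs)
```

Fitted along each A-scan with layer-boundary landmarks as `x_src`/`x_tgt`, such a warp moves retinal layers into correspondence while keeping the deformation smooth in depth.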