Proc IEEE Int Symp Biomed Imaging. Author manuscript; available in PMC 2010 December 1.

Published in final edited form as:

Proc IEEE Int Symp Biomed Imaging. 2010 June 21; 2010: 932–935.

doi: 10.1109/ISBI.2010.5490140 | PMCID: PMC2995277

NIHMSID: NIHMS247250

Snehashis Roy: snehashisr@jhu.edu; Aaron Carass: aaron_carass@jhu.edu; Navid Shiee: navid@jhu.edu; Dzung L. Pham: pham@jhu.edu; Jerry L. Prince: prince@jhu.edu


The magnetic resonance contrast of a neuroimaging data set has a strong impact on the utility of the data in image analysis tasks such as registration and segmentation. Lengthy acquisition times often prevent routine acquisition of multiple MR contrast images, and opportunities for detailed analysis using these data would seem to be irrevocably lost. This paper describes an example-based approach that uses patch matching from a multiple-contrast atlas to generate an alternate MR contrast image, effectively simulating one pulse sequence from others. In this paper, we deal specifically with Fluid Attenuated Inversion Recovery (FLAIR) sequence generation from T1- and T2-weighted pulse sequences. The applicability of the synthetic FLAIR to white matter lesion segmentation is demonstrated.

A principal goal of the collaboration between the neuroscience and medical imaging communities is the accurate segmentation of brain structures, with a view towards offering insight into the normal and abnormal features of the brain. Several methods [1, 2] have been proposed to find cortical and sub-cortical structures. These methods are intrinsically dependent on the modality and the contrast between tissues. In this paper, we propose a method to alter the modality of a magnetic resonance (MR) image, thereby changing its intrinsic contrast.

Image contrast is the cornerstone of many tissue classification algorithms. Recent research has demonstrated that knowledge of all modalities [3] or of the imaging parameters [4] can improve segmentation quality, but this is not a practical solution for many studies. Often, not all MR modalities are acquired, for reasons such as cost and time constraints, and an opportunity for detailed multi-modal analysis is lost. In brain imaging there is huge variability in the structures and tissues under observation, and the choice of acquisition protocol and underlying image contrast can affect the viability of the data. More specifically, it is well established that white matter lesions (WML) are most prominent in Fluid Attenuated Inversion Recovery (FLAIR) images, while conventional T1-, T2-, and PD-weighted sequences provide relatively poor contrast for WML [5]. Fig. 1(c) shows the contrast of lesions in a FLAIR image compared to its T1 (Fig. 1(a)) and T2 (Fig. 1(b)). Most recent lesion segmentation algorithms use the lesion information from FLAIR [6, 7], although some use T1, T2, and PD to detect lesions [8]. In this paper, we show how to synthesize images that have FLAIR-like contrast for enhanced lesion detection.

Our approach is based upon the technique known as image hallucination. Image hallucination [9] is typically used to generate a high-resolution image from multiple low-resolution acquisitions; in contrast, we use multiple modalities to generate an alternate modality. Image hallucination approaches fall into two major categories, Bayesian [10] and example-based [11]. Bayesian approaches are often formulated as a constrained optimization problem in which the imaging process is known and the high-resolution image is estimated by optimizing a cost function given one or more low-resolution images. Example-based hallucination techniques rely on training data from a codebook or atlas [12] consisting of one or more high-resolution images. These methods are learning based: the training data at high resolution informs the algorithm how to extract fine details from the low-resolution data.

In this paper, we build upon concepts borrowed from image hallucination to generate synthetic FLAIR images from a set of T1 and T2 acquisitions. We first describe our example-based image hallucination technique, then validate the usefulness of synthetic FLAIRs by comparing the lesions computed from a synthetic FLAIR and a true FLAIR using a WML segmentation tool [7]. We also show that the lesion segmentations obtained with synthetic FLAIRs are consistent with segmentations produced using real FLAIR images.

Given two registered images with T1- and T2-weighted MR contrasts, *f*_{T1} and *f*_{T2}, we want to generate the corresponding FLAIR image, *f*_{FL}. The images are related by a function *W*, which depends on the underlying T1 and T2 relaxation times and on other intrinsic acquisition parameters such as pulse repetition time and flip angle. This can be expressed as

$${f}_{\text{FL}}=W({f}_{\mathrm{T}1},{f}_{\mathrm{T}2},\text{other parameters})+\eta ,$$

(1)

where η is noise. If *W* and the underlying parameters are known then *f*_{FL} can be directly estimated [4]. However, for most studies these quantities are not precisely known, and the complexity of the acquisition process is difficult to precisely model; thus *W* is rarely known. For this reason, our strategy is to estimate *f*_{FL} from *f*_{T1} and *f*_{T2} using an atlas.

We define an atlas as a set of co-registered images *g* = {*g*_{T1}, *g*_{T2}, *g*_{FL}, *h*_{L}} acquired from a “model” subject, where *g*_{T1}, *g*_{T2}, and *g*_{FL} are T1, T2, and FLAIR images and *h*_{L} is a hard segmentation of the lesions. Assume that both *f* and *g* are composed of patches *f*_{T1}(*i*), *f*_{T2}(*i*), and *g*_{T1}(*j*), *g*_{T2}(*j*), *g*_{FL}(*j*), centered at voxels *i* ∈ Ω_{f}, *j* ∈ Ω_{g}, where Ω_{f} and Ω_{g} are the image domains of *f* and *g*, respectively. In our experiments, we used a regular partitioning of the image domains into 3 × 3 × 2 non-overlapping patches.
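The regular partitioning above can be sketched as follows (a minimal NumPy illustration; the function name and array layout are our own, not from the paper):

```python
import numpy as np

def partition_patches(vol, patch_shape=(3, 3, 2)):
    """Partition a 3-D volume into non-overlapping patches, each flattened
    to a vector; trailing voxels that do not fill a whole patch are dropped."""
    px, py, pz = patch_shape
    nx, ny, nz = (s // p for s, p in zip(vol.shape, patch_shape))
    v = vol[:nx * px, :ny * py, :nz * pz]
    # block-reshape so each (px, py, pz) patch becomes one row
    v = v.reshape(nx, px, ny, py, nz, pz).transpose(0, 2, 4, 1, 3, 5)
    return v.reshape(nx * ny * nz, px * py * pz)
```

With a 6 × 6 × 4 volume, for example, this yields 8 patches of 18 voxels each.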

We can then generate a synthetic FLAIR image *f̂*_{FL} from the collection of patches *f̂*_{FL}(*i*), *i* ∈ Ω_{f}, as

$${\hat{f}}_{\text{FL}}(i)=\mathcal{F}\{{g}_{\text{FL}}(\mathbf{J})\},$$

(2)

where ℱ is a non-local means (NLM) operator [12, 13] and

$$\mathbf{J}=\underset{j\in {\mathrm{\Omega}}_{g}}{\text{argmin}}\phantom{\rule{thinmathspace}{0ex}}[{w}_{\mathrm{T}1}(j){\Vert {f}_{\mathrm{T}1}(i)-{g}_{\mathrm{T}1}(j)\Vert}^{2}+{w}_{\mathrm{T}2}(j){\Vert {f}_{\mathrm{T}2}(i)-{g}_{\mathrm{T}2}(j)\Vert}^{2}+\lambda \mathcal{R}({g}_{\mathrm{T}1}(j),{f}_{\mathrm{T}1}(i),{g}_{\mathrm{T}2}(j),{f}_{\mathrm{T}2}(i))].$$

(3)

Here, λ is a smoothing weight, ℛ is a smoothing function on the atlas images *g*_{T1} and *g*_{T2}, and *w*_{T1} and *w*_{T2} are spatially varying weighting functions, explained later. ℛ ensures that the patches *g*_{T1}(*j*) and *g*_{T2}(*j*) are chosen such that the boundaries between neighboring patches in the reconstructed *f̂*_{FL} remain smooth [12]. We use the following smoothness function:

$$\mathcal{R}\phantom{\rule{thinmathspace}{0ex}}({g}_{\mathrm{T}1}(j),{f}_{\mathrm{T}1}(i),{g}_{\mathrm{T}2}(j),{f}_{\mathrm{T}2}(i))={\displaystyle \sum _{k\in {N}_{i}}{\displaystyle \sum _{l\in {N}_{j}}\{{\Vert {f}_{\mathrm{T}1}(k)-{g}_{\mathrm{T}1}(l)\Vert}^{2}+{\Vert {f}_{\mathrm{T}2}(k)-{g}_{\mathrm{T}2}(l)\Vert}^{2}\}.}}$$

(4)
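A direct implementation of the smoothness term ℛ in Eqn. 4 might look as follows (a hypothetical sketch; we assume the neighborhood patches of *N*_{i} and *N*_{j} are supplied as rows of flattened-patch arrays):

```python
import numpy as np

def smoothness_r(f_t1_nbrs, g_t1_nbrs, f_t2_nbrs, g_t2_nbrs):
    """Smoothness term R of Eqn. 4: the sum of squared distances between
    every neighboring subject patch (k in N_i) and every neighboring atlas
    patch (l in N_j), for both the T1 and T2 channels.
    Inputs are (num_neighbors, patch_volume) arrays."""
    d1 = ((f_t1_nbrs[:, None, :] - g_t1_nbrs[None, :, :]) ** 2).sum()
    d2 = ((f_t2_nbrs[:, None, :] - g_t2_nbrs[None, :, :]) ** 2).sum()
    return d1 + d2
```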

*N*_{i} and *N*_{j} denote the neighborhoods of the *i*^{th} patch in Ω_{f} and the *j*^{th} patch in Ω_{g}, respectively. Rather than keeping only the single best-matching atlas patch, the NLM operator ℱ forms each synthetic patch as a weighted average over atlas patches:

$${\hat{f}}_{\text{FL}}(i)=\mathcal{F}({g}_{\text{FL}}(j),j\in \mathrm{\Omega})={\displaystyle \sum _{k\in \mathrm{\Omega}}{w}_{\text{NLM}}(k,i){g}_{\text{FL}}(k)}$$

where,

$${w}_{\text{NLM}}(k,i)=\frac{1}{\mathbf{Z}}{e}^{-(\frac{{\Vert {f}_{\mathrm{T}1}(i)-{g}_{\mathrm{T}1}(k)\Vert}^{2}}{2{\beta}_{1}^{2}}+\frac{{\Vert {f}_{\mathrm{T}2}(i)-{g}_{\mathrm{T}2}(k)\Vert}^{2}}{2{\beta}_{2}^{2}})}$$

and β_{1} and β_{2} are empirically chosen smoothing parameters for the NLM, and *Z* is a normalizing constant such that ∑_{k∈Ω} *w*_{NLM}(*k, i*) = 1. Thus, the algorithm can be summarized as follows:
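The NLM reconstruction and its Gaussian weights can be sketched as follows (a simplified illustration using flattened patch matrices; averaging here runs over the whole atlas domain, and the log-domain normalization is our own addition for numerical stability):

```python
import numpy as np

def nlm_synthesize(f_t1, f_t2, g_t1, g_t2, g_fl, beta1=1.0, beta2=1.0):
    """Synthesize FLAIR patches as a non-local weighted average of atlas
    FLAIR patches, with Gaussian weights on the T1/T2 patch distances.
    All inputs are (num_patches, patch_volume) arrays."""
    # squared distances between each subject patch i and each atlas patch k
    d1 = ((f_t1[:, None, :] - g_t1[None, :, :]) ** 2).sum(axis=2)
    d2 = ((f_t2[:, None, :] - g_t2[None, :, :]) ** 2).sum(axis=2)
    logw = -(d1 / (2 * beta1 ** 2) + d2 / (2 * beta2 ** 2))
    logw -= logw.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)         # the Z normalization
    return w @ g_fl                           # weighted average of atlas FLAIR patches
```

A subject patch that exactly matches one atlas patch (and is far from all others) receives essentially all of the weight, so the output reproduces that atlas FLAIR patch.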

- For the *i*^{th} patch pair {*f*_{T1}(*i*), *f*_{T2}(*i*)} in *f*, search Ω_{g} for the best matching patches {*g*_{T1}(*j*), *g*_{T2}(*j*); *j* ∈ Ω_{g}} from *g* by solving Eqn. 3.
- Construct the *i*^{th} patch in *f̂*_{FL} as a non-local weighted average of the *g*_{FL}(*j*)’s, *j* ∈ Ω_{g}.

Our atlas consists of a set of registered T1, T2, and FLAIR images of a subject with WML, and the segmentation of the WML in the atlas is known beforehand as *h*_{L}. The search for optimal patches in the atlas space Ω_{g} depends on the choice of the weighting functions *w*_{T1} and *w*_{T2}. Because lesions in T1 and T2 have intensities similar to cerebrospinal fluid (CSF), we impose a spatial prior on the search space Ω_{g}, using the fact that lesions mostly occur near the ventricles or inside WM. This motivates the use of a hard segmentation of *f*_{T1} as the spatial prior. We use the Topology Preserving Anatomy Driven Segmentation (TOADS) method [2] to find a coarse hard segmentation. TOADS is an atlas-based, topology-preserving approach; by increasing the atlas weighting of the algorithm, lesions are classified as white matter (WM). To distinguish between true CSF and lesions, we impose the following conditions on the search space:

$${w}_{\mathrm{T}1}(j)={w}_{\mathrm{T}2}(j)=\begin{cases}1 & \forall j\in {\mathrm{\Omega}}_{g}, & {N}_{i}\in \text{WM},\\ 1 & \forall j\in {\mathrm{\Omega}}_{g}\backslash {h}_{\mathrm{L}}, & {N}_{i}\notin \text{WM},\\ \infty & \forall j\in {h}_{\mathrm{L}}, & {N}_{i}\notin \text{WM}.\end{cases}$$
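This case rule might be implemented as follows (a hypothetical sketch; we represent the lesion set *h*_{L} as a boolean mask over atlas patches, and an infinite weight simply makes a patch impossible for the argmin to select):

```python
import numpy as np

def search_weights(atlas_lesion_mask, patch_in_wm):
    """Per-atlas-patch weights w_T1 = w_T2 implementing the spatial prior:
    if the subject patch neighborhood N_i lies in WM, every atlas patch is
    admissible (weight 1); otherwise atlas lesion patches are excluded
    (weight infinity, so the argmin can never choose them)."""
    n = atlas_lesion_mask.shape[0]
    if patch_in_wm:
        return np.ones(n)
    w = np.ones(n)
    w[atlas_lesion_mask] = np.inf   # forbid lesion patches outside WM
    return w
```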

Our first validation experiment is to generate a synthetic FLAIR from a subject for which we have a true FLAIR, which we use for comparison. This subject is known to be lesion free. Fig. 3 shows the subject without lesions and the atlas used to generate the synthetic FLAIR. To evaluate their differences, we use the universal image quality index (UQI) [15] and the visual information fidelity (VIF) [14] as similarity metrics between the two images. The UQI and VIF are metrics designed to replicate the behavior of the human visual system, and both are considered consistent with subjective quality assessments. Fig. 3(g) shows the VIF and the UQI between Fig. 3(f) and Fig. 3(h) for each of the slices.
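For reference, the global form of the UQI can be computed as follows (a sketch of the single-window index; windowed, locally averaged variants are common in practice):

```python
import numpy as np

def uqi(x, y):
    """Global universal image quality index Q between two images
    (Wang & Bovik); Q = 1 when the images are identical:
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2))."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```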

We applied our algorithm to real data with lesions to show that lesion segmentation using the combination of synthetic FLAIR and T1 as input channels provides FLAIR-like results compared to using T1 and T2 as input channels. All of the lesion segmentations were generated using a validated multi-channel lesion segmentation tool [7]. Fig. 4(c) shows the result of using T1 and true FLAIR, which we use as the reference standard. Fig. 4(f) shows the lesion segmentation from T1 and T2 as input channels. Finally, Fig. 4(i) shows the lesion segmentation from T1 and synthetic FLAIR as input channels. For Fig. 4(f) and (i), we computed the Dice coefficient [16] against the reference of Fig. 4(c). For the data set shown in Fig. 4, the Dice coefficient for T1 and T2 versus the reference was 0.49, while for T1 and synthetic FLAIR it was 0.75. We repeated this experiment on an additional 14 subjects, with the results shown in Table 1. Because of their challenging nature, Dice scores of 0.6–0.7 for lesion segmentation are considered quite successful [17].
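The Dice coefficient [16] used above is simply the normalized overlap of two binary masks, 2|A ∩ B| / (|A| + |B|):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 for identical masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```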

We have proposed an atlas-based technique to synthesize an alternate modality of an MR image. We focused on the application of enhancing lesion detection in the absence of an appropriate lesion-distinguishing modality, namely FLAIR, though we believe this type of image hallucination from a rich atlas set has broader applicability. We expect that expanding the training set to include multiple atlases would further improve the results.

This research was supported in part by the Intramural Research Program of the NIH, National Institute on Aging. We are grateful to Dr. Susan Resnick and all the participants of the Baltimore Longitudinal Study of Aging, as well as the neuroimaging staff, for their dedication to these studies.

This work was supported by the NIH/NINDS under grant 5R01NS037747.

1. Dale AM, Fischl B, Sereno MI. Cortical Surface-Based Analysis I: Segmentation and Surface Reconstruction. NeuroImage. 1999;vol. 9(no. 2):179–194. [PubMed]

2. Bazin PL, Pham DL. Topology-Preserving Tissue Classification of Magnetic Resonance Brain Images. IEEE Trans. on Med. Imag. 2007;vol. 26(no. 4):487–498. [PubMed]

3. Hong X, McClean S, Scotney B, Morrow P. Model-Based Segmentation of Multimodal Images. Comp. Anal. of Images and Patterns. 2007;vol. 4672:604–611.

4. Fischl B, Salat DH, van der Kouwe AJW, Makris N, Segonne F, Quinn BT, Dale AM. Sequence-independent Segmentation of Magnetic Resonance Images. NeuroImage. 2004;vol. 23:S69–S84. [PubMed]

5. Filippi M, Yousry T, Baratti C, Horsfield MA, Mammi S, Becker C, Voltz R, Spuler S, Campi A, Reiser MF, Comi G. Quantitative Assessment of MRI Lesion Load in Multiple Sclerosis: A comparison of conventional spin-echo with fast fluid-attenuated inversion recovery. Brain. 1996;vol. 119:1349–1355. [PubMed]

6. Lecoeur J, Ferré JC, Barillot C. Optimized Supervised Segmentation of MS Lesions from Multispectral MRIs. MICCAI workshop on Med. Image Anal. on Multiple Sclerosis. 2009

7. Shiee N, Bazin PL, Ozturk A, Reich DS, Calabresi PA, Pham DL. A Topology-Preserving Approach to the Segmentation of Brain Images with Multiple Sclerosis Lesions. NeuroImage. 2009 [PMC free article] [PubMed]

8. Harmouche R, Collins L, Arnold D, Francis S, Arbel T. Bayesian MS Lesion Classification Modeling Regional and Local Spatial Information; IEEE Intl. Conf. Patt. Recog; 2006. pp. 984–987.

9. Hunt BR. Super-Resolution of Images: Algorithms, Principles, Performance. Intl. Journal of Imag. Sys. and Tech. 1995;vol. 6(no. 4):297–304.

10. Sun J, Zheng NN, Tao H, Shum HY. Image Hallucination with Primal Sketch Priors; IEEE Conf. Comp. Vision and Patt. Recog; 2003. pp. 729–736.

11. Freeman WT, Jones TR, Pasztor EC. Example-Based Super-Resolution. IEEE Comp. Graphics. 2002;vol. 22(no. 2):56–65.

12. Rousseau F. Brain Hallucination; European Conf. on Comp. Vision; 2008. pp. 497–508.

13. Buades A, Coll B, Morel JM. A Non-Local Algorithm for Image Denoising. IEEE Comp. Vision and Patt. Recog. 2005;vol. 2:60–65.

14. Sheikh HR, Bovik AC, de Veciana G. An Information Fidelity Criterion for Image Quality Assessment using Natural Scene Statistics. IEEE Trans. on Image Proc. 2005;vol. 14(no. 12):2117–2128. [PubMed]

15. Wang Z, Bovik AC. A Universal Image Quality Index. IEEE Signal Proc. Letters. 2002;vol. 9(no. 3):81–84.

16. Dice LR. Measures of the amount of ecologic association between species. Ecology. 1945;vol. 26:297–302.

17. Zijdenbos AP, Forghani R, Evans AC. Automatic “Pipeline” Analysis of 3-D MRI Data for Clinical Trials: Application to Multiple Sclerosis. IEEE Trans. Med. Imaging. 2002;vol. 21(no. 10):1280–1291. [PubMed]
