Proc SPIE Int Soc Opt Eng. Author manuscript; available in PMC 2010 April 28.

Published in final edited form as:

Proc SPIE Int Soc Opt Eng. 2010; 7623: 76230j.

doi: 10.1117/12.844575

PMCID: PMC2860803

NIHMSID: NIHMS191798

Image Analysis and Communications Laboratory, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218

Further author information: Send correspondence to Jerry L. Prince; Email: prince@jhu.edu, Telephone: +1-410-516-5192


Tissue contrast and resolution of magnetic resonance neuroimaging data have strong impacts on the utility of the data in clinical and neuroscience tasks such as registration and segmentation. Lengthy acquisition times typically prevent routine acquisition of multiple MR tissue contrast images at high resolution, and the opportunity for detailed analysis using these data would seem to be irrevocably lost. This paper describes an example-based approach that uses patch matching against a multiple-resolution, multiple-contrast atlas to change an image's resolution as well as its MR tissue contrast from that of one pulse sequence to that of another. The use of this approach to generate different tissue contrasts (T2/PD/FLAIR) from a single T1-weighted image is demonstrated on both phantom and real images.

A principal goal of the collaboration between the neuroscience and medical imaging communities is the accurate segmentation of brain structures, with a view towards offering insight about the normal and abnormal features of a brain. Several methods^{1,2} have been proposed to find cortical and sub-cortical structures. These methods are intrinsically dependent on the tissue contrast and resolution of the acquired data. In this paper, we propose a method to alter both the tissue contrast and resolution of a magnetic resonance (MR) image, thereby permitting image analysis techniques that would otherwise be inappropriate or ineffective.

MR tissue contrast and image resolution, which are determined by the specific pulse sequences applied during MR imaging, fundamentally limit the performance of tissue classification algorithms^{3}. If several images with different tissue contrasts (e.g., T1-weighted, T2-weighted, and PD-weighted) can be acquired at high resolution, then optimal tissue classification solutions can be applied^{4,5}. But in many scenarios, such as routine clinical imaging of patients, it is not feasible for cost and time reasons to obtain such a rich data set, and the opportunity for detailed image analysis is lost. In neuroimaging, it is important to be able to automatically delineate the cortical gray matter and to observe white matter lesions. Yet there are inevitable tradeoffs in choosing pulse sequences that provide good contrast between the gray matter (GM) and the surrounding white matter (WM) and cerebro-spinal fluid (CSF) while also delineating lesions. In particular, the tissue contrast between GM and WM is typically larger in a magnetization prepared rapid gradient echo (MPRAGE) image than in a spoiled gradient (SPGR) acquisition, whereas the contrast between CSF and GM is larger in an SPGR image. In addition, white matter lesions are best seen as hyperintensities on fluid-attenuated inversion recovery (FLAIR) or T2-weighted images^{6} rather than on either of the T1-weighted (MPRAGE or SPGR) images mentioned above. All of these observations can be seen in Fig. 1, where equivalent brain sections are shown with different MR tissue contrasts.

(a) a T1w spoiled gradient recalled (SPGR) image, (b) T1w magnetization prepared rapid gradient echo (MPRAGE), (c) T2w, (d) PDw, (e) T1w fluid attenuated inversion recovery (FLAIR) acquisition of the same subject.

Delineation of small brain structures and accurate localization of object boundaries is also dependent on the image resolution. In particular, poor resolution yields blurring of boundaries and loss of image contrast in small structures. The technique we develop in this paper is capable of both changing the tissue contrast and improving the resolution of an MR image. The resulting image (or images) can then be used to carry out a more detailed analysis than would otherwise be possible.

Image hallucination^{7,8} is often used to generate a high-resolution image from one or more low-resolution acquisitions. There are two major categories of image hallucination: Bayesian^{9-11} and example-based^{12-14}. Bayesian approaches are often formulated as constrained optimization problems in which the imaging process is known and the high-resolution image is estimated by optimizing a likelihood-based cost function given one or more low-resolution images. Example-based hallucination techniques are learning algorithms that rely on a training set, or atlas,^{15} consisting of one or more high-resolution images. These methods are primarily patch based: a patch in the low-resolution image is matched to a patch in the training data. The similarity criteria are often chosen as image gradients, neighborhood information, or textures^{16,17}.

In this paper, we extend the idea of atlas-based hallucination by using patch matching to synthesize MR images with alternative contrasts and higher resolution. We demonstrate the performance of the method in two applications. First, we synthesize different tissue contrasts (T2/PD/FLAIR) from a single T1-weighted (T1w) image while keeping the resolution the same, thus enabling image analysis techniques that require different tissue contrasts; the utility of synthesizing an alternate contrast is also shown by generating a T1w MPRAGE image from its T1w SPGR acquisition. Second, we convert a low-resolution (LR) SPGR image to a high-resolution (HR) MPRAGE image, which has superior GM-WM contrast, and show that this conversion improves the overall delineation of the inner cortical surface.

Consider two registered MR images *f*_{M1} and *f*_{M2} of the same subject having different tissue contrasts (i.e., generated by different pulse sequences). In our experiments, the tissue contrasts can be any two of the following: T1w, T2w, PDw, or FLAIR. The images are related by the imaging process $\mathcal{W}$, which depends on the underlying T1 and T2 relaxation times and on other imaging parameters such as the pulse repetition time and the flip angle. In mathematical terms,

$${f}_{M2}=\mathcal{W}({f}_{M1},\Theta )+\eta ,$$

(1)

where *η* is random noise and Θ comprises the imaging parameters. Ideally, if $\mathcal{W}$ is known then *f*_{M2} can be estimated directly^{4}. But in many studies Θ is not known precisely and $\mathcal{W}$ is difficult to model accurately, so it is impractical to try to reconstruct *f*_{M2} from *f*_{M1} directly. Instead, our approach is to synthesize *f*_{M2} from *f*_{M1} using an atlas.

Define an atlas as *N* sets of triplets, $\mathcal{A}={\cup}_{n}\{{g}_{M1}^{\left(n\right)},{g}_{M2}^{\left(n\right)},{G}_{M2}^{\left(n\right)}\},n=1,\dots ,N$, where ${g}_{M1}^{\left(n\right)}$ and ${g}_{M2}^{\left(n\right)}$ are two tissue contrasts of the same subject at the same resolution, and ${G}_{M2}^{\left(n\right)}$ is a high-resolution image with tissue contrast M2. By assumption, ${g}_{M2}^{\left(n\right)}$ and ${G}_{M2}^{\left(n\right)}$ are registered to ${g}_{M1}^{\left(n\right)}$, ∀*n*. Also assume that *f*_{M1} and the atlas images are composed of 3D patches ${f}_{M1}\left(i\right),\phantom{\rule{thinmathspace}{0ex}}{g}_{M1}^{\left(n\right)}\left({j}_{n}\right),\phantom{\rule{thinmathspace}{0ex}}{g}_{M2}^{\left(n\right)}\left({j}_{n}\right),{G}_{M2}^{\left(n\right)}\left({J}_{n}\right)$, centered at *i* ∈ Ω_{f}, *j*_{n} ∈ Ω_{g^n}, and *J*_{n} ∈ Ω_{G^n}, respectively.

Assume that all the images *f* and ${g}_{M1}^{\left(n\right)}$ are normalized so that their WM peaks are the same. Using this definition of an atlas, a synthetic M2 image with the same resolution as *f*_{M1} can be generated by
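As a concrete illustration, WM-peak normalization might be sketched as below. The peak estimator (the brightest prominent mode of the foreground histogram, since WM is the brightest tissue in T1w images) and all thresholds are our own assumptions; the text does not specify how the peak is located.

```python
import numpy as np

def normalize_wm_peak(img, target_peak=1.0, nbins=100):
    """Scale an MR image so its estimated WM intensity peak equals target_peak.

    Hypothetical sketch: the WM peak is taken as the brightest prominent
    local maximum of the smoothed foreground histogram.
    """
    fg = img[img > 0.1 * img.max()]                  # crude foreground mask
    hist, edges = np.histogram(fg, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
    # keep only prominent local maxima, then take the brightest one
    peaks = [i for i in range(1, nbins - 1)
             if smooth[i] > 0.1 * smooth.max()
             and smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    wm_peak = centers[peaks[-1]] if peaks else centers[np.argmax(smooth)]
    return img * (target_peak / wm_peak)
```

Applying this with the same target to *f* and to each ${g}_{M1}^{\left(n\right)}$ places their WM modes at a common intensity before any patch matching.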

$${\widehat{f}}_{M2}\left(i\right)=\mathcal{F}\left\{{g}_{M2}^{\left(n\right)}\left({\mathcal{J}}_{n}\right)\right\},$$

(2)

where $\mathcal{F}$ is a non-local means operator^{18} and

$${\mathcal{J}}_{n}=\underset{{j}_{n},{j}_{n}\in {\Omega}_{{g}^{n}}}{\mathrm{argmin}}\left[{\parallel {f}_{M1}\left(i\right)-{g}_{M1}^{\left(n\right)}\left({j}_{n}\right)\parallel}^{2}+\lambda \mathcal{R}({g}_{M1}^{\left(n\right)}\left({j}_{n}\right),{f}_{M1}\left(i\right))\right],$$

(3)

where λ is a smoothing weight. The regularizer $\mathcal{R}$ ensures that the patches ${g}_{M1}^{\left(n\right)}\left({j}_{n}\right)$ are chosen such that the boundaries between neighboring patches in the synthesized ${\widehat{f}}_{M2}$ remain smooth^{14}. We use the following smoothness function,

$$\mathcal{R}({g}_{M1}^{\left(n\right)}\left({j}_{n}\right),{f}_{M1}\left(i\right))=\underset{k\in {N}_{i}}{\Sigma}\underset{l\in {N}_{{j}^{n}}}{\Sigma}{\parallel {f}_{M1}\left(k\right)-{g}_{M1}^{\left(n\right)}\left(l\right)\parallel}^{2},$$

(4)

where *N*_{i} and ${N}_{{j}_{n}}$ denote the neighborhoods of the patches *f*_{M1}(*i*) and ${g}_{M1}^{\left(n\right)}\left({j}_{n}\right)$, respectively.
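A minimal sketch of the matching step in Eqns. 3 and 4 follows. Patch extraction, the neighborhood bookkeeping, and the value of λ are illustrative assumptions; with empty neighbor sets the smoothness term vanishes and only the data term of Eqn. 3 remains.

```python
import numpy as np

def match_cost(f_patch, g_patch, f_nbrs, g_nbrs, lam=0.1):
    """Eqn. 3 cost for one candidate patch: squared intensity distance
    plus lambda times the smoothness penalty of Eqn. 4, which sums
    squared distances over the two patches' neighborhoods."""
    data = np.sum((f_patch - g_patch) ** 2)
    smooth = sum(np.sum((fk - gl) ** 2) for fk in f_nbrs for gl in g_nbrs)
    return data + lam * smooth

def best_matches(f_patch, atlas_patches, n_best=5):
    """Indices of the n_best atlas patches by the data term alone
    (lambda = 0); these indices play the role of the set Omega_{i,n}."""
    costs = [np.sum((f_patch - g) ** 2) for g in atlas_patches]
    return list(np.argsort(costs)[:n_best])
```

In a full implementation the candidate search would run over all patch centers in Ω_{g^n} with the smoothness term included.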

Instead of taking just the single patch that minimizes Eqn. 3, an average of the “best matching patches” is used. The “best matching patches” *J*_{n} are defined to be those for which the errors from Eqn. 3 are the lowest.

Define Ω_{i,n} as the set of all best matching patches for the patch *f*_{M1}(*i*) obtained from Eqn. 3 for the *n*^{th} pair of images $\{{g}_{M1}^{\left(n\right)},{g}_{M2}^{\left(n\right)}\}$. Clearly, Ω_{i,n} ⊂ Ω_{g^n}. Using this definition, the non-local means filtered patch is obtained by

$${\widehat{f}}_{M2}\left(i\right)=\mathcal{F}\left({g}_{M2}^{\left(n\right)}\left({j}_{n}\right),{j}_{n}\in {\Omega}_{i,n}\right)=\underset{n=1}{\overset{N}{\Sigma}}\underset{k\in {\Omega}_{i,n}}{\Sigma}{w}_{\mathrm{NLM}}^{\left(n\right)}(k,i){g}_{M2}^{\left(n\right)}\left(k\right),$$

(5)

where,

$${w}_{\mathrm{NLM}}^{\left(n\right)}(k,i)=\frac{1}{\mathbf{Z}}{e}^{-\frac{{\parallel {f}_{M1}\left(i\right)-{g}_{M1}^{\left(n\right)}\left(k\right)\parallel}^{2}}{2{\beta}^{2}}},\phantom{\rule{thinmathspace}{0ex}}\text{with}\phantom{\rule{1em}{0ex}}\underset{n=1}{\overset{N}{\Sigma}}\underset{k\in {\Omega}_{i,n}}{\Sigma}{w}_{\mathrm{NLM}}^{\left(n\right)}(k,i)=1$$

(6)

where *β* is a smoothing parameter of the NLM operator. In our experiments, *β* is chosen empirically, although it is possible to estimate it in an optimal way^{19}.
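Under these definitions, the weighted synthesis of Eqns. 5 and 6 can be sketched as follows for a single patch location; the patch lists and the β value are illustrative.

```python
import numpy as np

def nlm_weights(f_patch, matched_g1, beta=0.1):
    """Eqn. 6: Gaussian weights on the intensity distance between the
    subject patch and each matched atlas M1 patch, normalized to sum to 1."""
    d2 = np.array([np.sum((f_patch - g) ** 2) for g in matched_g1])
    w = np.exp(-d2 / (2 * beta ** 2))
    return w / w.sum()

def synthesize_patch(f_patch, matched_g1, matched_g2, beta=0.1):
    """Eqn. 5: the synthetic M2 patch is the weighted sum of the M2 atlas
    patches corresponding to the matched M1 patches."""
    w = nlm_weights(f_patch, matched_g1, beta)
    return np.tensordot(w, np.stack(matched_g2), axes=1)
```

When all matched M1 patches are equally distant from the subject patch, the weights are uniform and the synthetic patch reduces to a plain average of the matched M2 patches.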

Assuming M1 as T1w and M2 as T2w, the algorithm can be described for one atlas as:

- To synthesize the *i*^{th} patch ${\widehat{f}}_{M2}\left(i\right)$ of ${\widehat{f}}_{M2}$, find the “best matching patches” ${g}_{M1}^{\left(n\right)}\left({j}_{n}\right),{j}_{n}\in {\Omega}_{i,n}$ from ${g}_{M1}^{\left(n\right)},n=1,\dots ,N$ by searching the image domains Ω_{g^n} to solve Eqn. 3 (step 1 in Fig. 2).

Contrast synthesis algorithm flowchart: from the subject T1w image *f*_{T1} we take the *i*^{th} patch *f*_{T1}(*i*) and identify the “best matching patches” from the atlases *g*^{(n)} as $\{{g}_{T1}^{\left(n\right)}\left({j}_{n}\right);{j}_{n}\in {\Omega}_{i,n}\subset {\Omega}_{{g}^{n}}\}$. The corresponding **...**

We merge the learning-based image hallucination idea^{12,17} into our tissue contrast synthesis approach to synthesize a different contrast while also improving resolution. To synthesize a high-resolution M2 image ${\widehat{F}}_{M2}$ from a low-resolution M1 image *f*_{M1}, Eqn. 2 is rewritten as

$${\widehat{F}}_{M2}\left(i\right)=\mathcal{F}\left\{{G}_{M2}^{\left(n\right)}\left({\mathcal{I}}_{n}\right)\right\},$$

(7)

with

$${\mathcal{I}}_{n}={\mathcal{D}}^{-1}\left({\mathcal{J}}_{n}\right),$$

(8)

where ${\mathcal{J}}_{n}$ is obtained from Eqn. 3 and $\mathcal{D}$ denotes the downsampling map from high-resolution to low-resolution patch locations, so that ${\mathcal{D}}^{-1}$ identifies the high-resolution atlas patches in ${G}_{M2}^{\left(n\right)}$ corresponding to the low-resolution matches.
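The operator $\mathcal{D}$ is not spelled out above; a plausible reading, which we assume here, is that it maps high-resolution patch centers to low-resolution ones, so that ${\mathcal{D}}^{-1}$ amounts to a per-axis index scaling:

```python
def lr_to_hr_index(j, scale):
    """Hypothetical sketch of D^{-1} in Eqn. 8: map a low-resolution patch
    center j (a 3D voxel index) to the matching high-resolution center,
    given per-axis upsampling factors (e.g., 2.0 for a factor-of-two grid)."""
    return tuple(int(round(c * s)) for c, s in zip(j, scale))
```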

We use the Brainweb phantom^{20} to validate the contrast synthesis method. M1 is chosen as T1w (Fig. 3(d)), and its original T2w and PDw acquisitions are shown in Fig. 3(e)-(f). For all our experiments, we choose *N* = 1. The atlas is another set of phantoms, consisting of one each of T1w, T2w, and PDw acquisitions, as shown in Fig. 3(a)–(c). Because the image intensities are taken from a codebook, the mean square error between the original and the reconstructed image is not a meaningful measure of performance. Instead, we use normalized mutual information (NMI), a visual information fidelity metric^{21} (VIF), and a universal image quality index^{22} (UQI) to quantify similarity between the original and the synthetic images. The NMI between two images *A* and *B* is defined as $\mathrm{NMI}(A,B)=\frac{2H(A,B)}{H\left(A\right)+H\left(B\right)}$ where *H*(*A*) is the entropy of the image *A* and *H*(*A, B*) is the joint entropy between *A* and *B*, with *H*(*A, A*) = *H*(*A*). We want low NMI values between the original and the synthetic images: ideally, the synthetic image should be an accurate representation of the original image, which implies a small joint entropy between them.
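The NMI defined above can be estimated from a joint intensity histogram; the bin count below is an arbitrary choice.

```python
import numpy as np

def nmi(a, b, bins=32):
    """NMI(A, B) = 2*H(A, B) / (H(A) + H(B)), as defined in the text,
    with entropies estimated from a joint histogram of the two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)

    def h(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    return 2 * h(p) / (h(pa) + h(pb))
```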

Brainweb Validation: (a) T1 Atlas, (b) T2 Atlas, (c) PD Atlas, (d) Test T1, (e) true test T2, (f) true test PD, (g) synthetic T2 from test T1, (h) synthetic PD from test T1, (i) Universal Image Quality Index^{22} between original and synthetic T2/PD, (j) **...**

The NMI between the original T2 and the synthetic T2 is 0.7464, while it is 0.7597 between the original PD and the synthetic one. VIF and UQI take two 2D images and return a number between 0 and 1; the value 1 is achieved only when the images are identical or one is a scalar multiple of the other. We plot the UQI and VIF metrics in Fig. 3(i)-(j) for each slice of the 3D volumes, comparing the original and synthetic T2 and PD. The similarity is observed to be high on average.
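For reference, a global (single-window) version of the UQI^{22} can be sketched as below; the published index is computed over sliding windows and averaged, which this simplified sketch omits.

```python
import numpy as np

def uqi(x, y):
    """Global Universal Image Quality Index (after Wang & Bovik):
    Q = 4*cov(x, y)*mean(x)*mean(y)
        / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2)),
    which equals 1 when y is identical to x."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```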

We use our algorithm to synthesize T2 and FLAIR images of a normal subject from its T1w SPGR acquisition; the actual T2 and FLAIR acquisitions are available for comparison. Fig. 4(a)-(c) show the T1, T2, and FLAIR images of another subject, which is used as the atlas. Fig. 4(g)-(h) show the synthetic T2 and FLAIR of the test subject in Fig. 4(d)-(f). Fig. 4(f) shows that the atlas FLAIR has better contrast at the GM-WM boundary than the test FLAIR, and this is reflected in the synthetic FLAIR as well. This highlights a benefit of our method: a new contrast is created from the atlas rather than reconstructed from the test image.

Contrast Synthesis on Real Data: (a) T1 Atlas, (b) T2 Atlas, (c) FLAIR Atlas, (d) Test T1, (e) true test T2, (f) true test FLAIR, (g) synthetic T2, (h) synthetic FLAIR, (i) Universal Image Quality Index^{22} between original and synthetic T2/FLAIR, (j) **...**

The NMI values between the original and the reconstructed T2 and FLAIR are 0.3697 and 0.3102, respectively. NMI, being a distribution-dependent statistic, is sensitive to the actual distribution of the intensities rather than to the contrast. The NMI numbers for the phantom validation are larger than those for the real data because the Brainweb phantoms have widely different histograms while keeping the same contrast, whereas the real data have similar histograms. The plots of UQI and VIF for each slice are shown in Fig. 4(i)-(j).

The next experiment on real data consists of synthesizing an MPRAGE image from an SPGR acquisition of the same resolution; MPRAGE images are of interest for their superior GM-WM contrast. The required atlas consists of a pair of SPGR and MPRAGE acquisitions of another subject. Fig. 5(a)-(b) show the true SPGR and true MPRAGE acquisitions, and Fig. 5(c) shows the same-resolution synthetic MPRAGE image. The NMI between the original MPRAGE and the synthetic MPRAGE is 0.3254, while it is 0.3341 between the original MPRAGE and the original SPGR.

We show that using both contrast synthesis and resolution enhancement leads to improved delineation of cortical surfaces. We use M1 as SPGR and M2 as MPRAGE. MPRAGE images, having superior GM-WM contrast, are better candidates for delineating the inner cortical surface than SPGR images; in the absence of an MPRAGE image, we can synthesize one, thus enabling better delineation. Our data set contains a 1.875×1.875×3 mm low-resolution (LR) SPGR image *f*_{M1}, its high-resolution (HR) 0.9375×0.9375×1.5 mm true MPRAGE acquisition *F*_{M2}, and a true HR SPGR acquisition *F*_{M1}, shown in Fig. 6(a)-(c), respectively. Using Eqn. 7, an HR MPRAGE image ${\widehat{F}}_{M2}$ is generated, shown in Fig. 6(d). The cortical inner surfaces are found using CRUISE^{23}. For comparison, a “best available truth” or reference standard of the inner surface is obtained from *F*_{M2}, with which we compare the surfaces obtained from ${\widehat{F}}_{M2}$ and *F*_{M1}. Fig. 7(a) and (b) show how the lack of GM-WM contrast gives rise to a poor cortical inner surface reconstruction from *F*_{M1} compared to the reference “true” reconstruction from *F*_{M2} and the reconstruction from our method, ${\widehat{F}}_{M2}$. Table 1 shows the mean differences, for four subjects, between the inner surfaces generated from ${\widehat{F}}_{M2}$ and *F*_{M1} and those generated from *F*_{M2}. The smaller differences show that our super-resolution approach gives a marginal improvement in the delineation of the inner surface.

Contrast Synthesis with Resolution Improvement: (a) test LR SPGR image *f*_{M1}, (b) its HR SPGR acquisition *F*_{M1}, (c) HR MPRAGE acquisition *F*_{M2}, (d) our synthetic HR MPRAGE ${\widehat{F}}_{M2}$. The bottom row shows corresponding zoomed regions for each image **...**

(a) Inner surface computed from the high-resolution SPGR *F*_{M1} (blue), overlaid on *F*_{M1}. (b) Inner surfaces computed from the high-resolution MPRAGE *F*_{M2} (red) and the synthetic high-resolution MPRAGE ${\widehat{F}}_{M2}$ (green), overlaid on *F*_{M2}.

The similarity criterion in Eqn. 3 is chosen to be the *L*^{2} norm, the underlying assumption being that the intensities of *f* and ${g}_{M1}^{\left(n\right)}$ differ by Gaussian noise, i.e., that they follow similar distributions. If the intensity distributions are different, the *L*^{2} norm fails to produce correct matches in Eqn. 3. Also, we used one atlas for our experiments, but we believe that using more than one atlas helps in finding more accurate *J*_{n}. Incorporating more complex similarity criteria, such as gradients or textures, into Eqn. 3 should also produce more accurate matches.

In summary, we proposed an atlas-based image synthesis technique to generate different MR tissue contrasts from a single image acquisition. It is essentially a patch-matching algorithm in which a template patch from a test image is matched to a multi-modal, multi-resolution atlas, and patches from the atlas are used to generate a synthetic high-resolution image with an alternate tissue contrast. The contribution of this method is that new MR contrasts can be synthesized from the atlas and that, unlike histogram matching, the method uses local contextual information to synthesize images. We validated our method on Brainweb phantoms, showing that T2, PD, and FLAIR images can be generated from a single T1w acquisition. We also demonstrated that a synthetic high-resolution MPRAGE image can be generated from a low-resolution SPGR acquisition, which leads to improved cortical segmentation. So far our experiments have been carried out on normal subjects; future work includes reconstruction of alternate tissue contrasts such as T2 and FLAIR in patients with lesions.

This research was supported in part by the Intramural Research Program of the NIH, National Institute on Aging. We are grateful to all the participants of the Baltimore Longitudinal Study of Aging (BLSA), as well as the neuroimaging staff, for their dedication to these studies.

1. Dale AM, Fischl B, Sereno MI. Cortical Surface-Based Analysis I: Segmentation and Surface Reconstruction. NeuroImage. 1999;9(2):179–194. [PubMed]

2. Bazin PL, Pham DL. Topology-Preserving Tissue Classification of Magnetic Resonance Brain Images. IEEE Trans. on Med. Imag. 2007;26(4):487–498. [PubMed]

3. Clarke LP, Velthuizen RP, Phuphanich S, Schellenberg JD, Arrington JA, Silbiger M. MRI: Stability of Three Supervised Segmentation Techniques. Mag. Res. in Med. 1993;11(1):95–106. [PubMed]

4. Fischl B, Salat DH, van der Kouwe AJW, Makris N, Segonne F, Quinn BT, Dale AM. Sequence-independent Segmentation of Magnetic Resonance Images. NeuroImage. 2004;23:S69–S84. [PubMed]

5. Hong X, McClean S, Scotney B, Morrow P. Model-Based Segmentation of Multimodal Images. Comp. Anal. of Images and Patterns. 2007;4672:604–611.

6. Souplet J, Lebrun C, Ayache N, Malandain G. An Automatic Segmentation of T2-FLAIR Multiple Sclerosis Lesions. Midas Journal, MICCAI Workshop. 2008

7. Hunt BR. Super-Resolution of Images: Algorithms, Principles, Performance. Intl. Journal of Imag. Sys. and Tech. 1995;6(4):297–304.

8. Park S, Park M, Kang MG. Super-resolution Image Reconstruction: A Technical Overview. IEEE Signal Proc. Mag. 2003;20(3):21–36.

9. Sun J, Zheng NN, Tao H, Shum H. Image Hallucination with Primal Sketch Priors. IEEE Conf. Comp. Vision and Patt. Recog. 2003;2:729–736.

10. Jia K, Gong S. Hallucinating Multiple Occluded Face Images of Different Resolutions. Patt. Reco. Letters. 2006;27:1768–1775.

11. Baker S, Kanade T. Hallucinating faces. IEEE Intl. Conf. on Automatic Face and Gesture Recog. 2000:83–88.

12. Freeman WT, Jones TR, Pasztor EC. Example-Based Super-Resolution. IEEE Comp. Graphics. 2002;22(2):56–65.

13. Xiong Z, Sun X, Wu F. Image Hallucination with Feature Enhancement. IEEE Conf. on Comp. Vision and Patt. Recog. 2009:2074–2081.

14. Rousseau F. Brain Hallucination. European Conf. on Comp. Vision. 2008;5302:497–508.

15. Ma L, Wu F, Zhao D, Gao W, Ma S. Learning-Based Image Restoration for Compressed Image through Neighboring Embedding. Advances in Multimedia Info. Proc. 2008;5353:279–286.

16. Ng M, Shen H, Lam E, Zhang L. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video. EURASIP Journal on Advances in Signal Proc. 2007;74585

17. Fan W, Yeung DY. Image Hallucination Using Neighbor Embedding over Visual Primitive Manifolds. IEEE Conf. Comp. Vision and Patt. Recog. 2007:17–22.

18. Buades A, Coll B, Morel JM. A Non-Local Algorithm for Image Denoising. IEEE Comp. Vision and Patt. Recog. 2005;2:60–65.

19. Gasser T, Sroka L, Steinmetz C. Residual Variance and Residual Pattern in Nonlinear Regression. Biometrika. 1986;73:625–633.

20. Cocosco C, Kollokian V, Kwan R-S, Evans A. Brainweb: Online Interface to a 3D MRI Simulated Brain Database. NeuroImage. 1997;5(4):S425.

21. Sheikh H, Bovik A, de Veciana G. An Information Fidelity Criterion for Image Quality Assessment using Natural Scene Statistics. IEEE Trans. on Image Proc. 2005;14(12):2117–2128. [PubMed]

22. Wang Z, Bovik AC. A Universal Image Quality Index. IEEE Signal Proc. Letters. 2002;9(3):81–84.

23. Han X, Pham D, Tosun D, Rettmann M, Xu C, Prince J. CRUISE: Cortical reconstruction using implicit surface evolution. NeuroImage. 2004;23:997–1012. [PubMed]
