Magnetic resonance images (MRIs) acquired with similar protocols but on different scanners show dissimilar intensity values for the same tissue types [1]. These variations are machine-dependent and do not correspond to noise or bias field inhomogeneity, both of which can be reduced with existing image processing techniques (e.g., [2]). This problem becomes particularly severe in large, multicentric settings such as the Alzheimer's Disease Neuroimaging Initiative (ADNI), in which longitudinal data are being acquired on more than 50 different platforms in the United States and Canada.
Image processing pipelines aimed at extracting tissue-based characteristics (e.g., grey matter/white matter identification) must be robust to these variations. Intensity standardization is therefore employed to reduce interscanner differences, so that similar intensities have similar tissue meaning in the standardized images, regardless of provenance. It has been shown that standardization improves segmentation [4] and registration [6]. However, scaling intensities with a simple linear transformation is not sufficient, since the influence of the MRI acquisition parameters on the image intensities is nonlinear [6]; a higher-order transformation is thus needed.
Published standardization techniques generally propose matching image histograms. The algorithm proposed by Wang et al. [8] stretches or compresses a windowed part of the input image histogram with a multiplicative factor, found by minimizing the bin-count difference between the input and standard image histograms. The window is used to include only pixels of interest and to remove, for example, the background; this makes the technique linear in the intensity range of interest. The technique developed by Nyúl et al. [1] matches input image histogram landmarks onto standard histogram landmarks, obtained during a learning process, linearly interpolating intensities between the landmarks. In particular, the variant in [1] uses percentile landmarks, which makes it simple and more robust. This landmark technique has been used in many studies [5]. Jäger et al. [13] extended this principle to two or more jointly used MRI sequences (e.g., T1 and T2), matching multidimensional joint histograms with nonlinear registration. As long as the MRI sequences are spatially aligned, which is assumed, no prior registration of the images is required for computing the joint histogram.
Other techniques use models with some form of a priori knowledge, such as the technique proposed by Hellier [6], which approximates the input image histogram with a mixture of Gaussian functions and aligns their means with those of the standard image through a polynomial function. Christensen [14] has proposed using even-ordered derivatives to find the histogram peak corresponding to the characteristic value of brain white matter; this value is then used to normalize the global image intensity. Weisenfeld and Warfield [4] have proposed modeling the input image as a standardized image corrupted by a linear transformation; their iterative algorithm finds the parameters of a linear model minimizing the Kullback-Leibler divergence between the standardized and the standard images, thus matching their histograms.
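The mean-alignment step of such model-based methods can be illustrated as follows. This is a hypothetical numpy sketch: the tissue class means are estimated here with a simple 1-D k-means (standing in for the Gaussian-mixture fit of [6]), and a polynomial then maps the input class means onto the standard ones:

```python
import numpy as np

def class_means(img, n_classes=3, n_iter=25):
    """Sorted tissue class means, estimated with a basic 1-D k-means
    (a stand-in for a Gaussian-mixture fit; e.g., CSF < GM < WM on T1)."""
    x = np.asarray(img, dtype=float).ravel()
    centers = np.percentile(x, np.linspace(10, 90, n_classes))
    for _ in range(n_iter):
        # Assign each intensity to its nearest center, then recenter.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return np.sort(centers)

def align_means(img, standard_img, degree=2):
    """Polynomial (here second-order) mapping the input class means
    onto the standard class means, applied to the whole image."""
    src = class_means(img)
    dst = class_means(standard_img)
    coeffs = np.polyfit(src, dst, degree)
    return np.polyval(coeffs, img)
```

With three class means and a second-order polynomial, the fit passes exactly through the three (input mean, standard mean) pairs, so each tissue's characteristic intensity is mapped onto its standard counterpart.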
Bergeest and Jäger [15] compared the performance of four techniques [1] along with an earlier histogram-matching technique using dynamic histogram warping [16]; none clearly outperformed the others.
Further, in our view, histogram matching should not be the sole objective, as it does not guarantee the standardization of spatially corresponding tissue intensities. Towards this end, Leung et al. [17] have recently proposed a semiautomated segmentation technique that delineates the cerebrospinal fluid (CSF), white matter (WM), and grey matter (GM) tissue components, for which they computed mean intensities. In a following step, they performed a linear regression between these mean intensities and used the results of this regression to perform the standardization. However, this technique yields a linear transformation, which, as mentioned above, does not completely address the problem.
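The regression step of this approach can be sketched in a few lines; this is a minimal sketch assuming the per-tissue mean intensities (which [17] obtains from the semiautomated segmentation) are already available:

```python
import numpy as np

def linear_standardize(img, input_means, standard_means):
    """Least-squares line mapping the input tissue mean intensities
    (e.g., CSF, GM, WM) onto the standard ones, applied globally.
    Being linear, it cannot capture the nonlinear scanner effects
    discussed above."""
    a, b = np.polyfit(input_means, standard_means, 1)
    return a * img + b
```

Because the transformation has only two parameters (slope and offset), three tissue means generally cannot all be matched exactly, which is precisely the limitation of a linear model noted above.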
Thus, to our knowledge, the techniques presented so far either matched histograms while disregarding spatial correspondence or employed spatial correspondence with only linear transformations. Our objective was to design a technique that would (1) use both histogram and tissue-specific intensity information; (2) provide a nonlinear intensity transformation between images; and (3) share the simplicity and robustness of the landmark technique of Nyúl et al. [1], while remaining fully automated.
In this study, we report the development of our STandardization of Intensities
(STI) technique, which fulfills these requirements. We compare STI to the L4 variant of Nyúl et al. [1], which matches foreground (FRG) intensity histograms using decile (10%) landmarks, on two different multicentric T1-weighted MRI datasets: the Pilot European ADNI (Pilot E-ADNI) study and the larger ADNI dataset.