Measuring cortical thickness has long been a topic of interest for neuroscientists. Cortical thickness changes in a characteristic pattern during childhood development and with the progression of neurodegenerative diseases such as Alzheimer’s disease, HIV/AIDS, and epilepsy [Thompson et al., 2004]. Recent studies examining changes in cortical thickness over time have revealed the trajectories of diseases in the living brain and have been used to quantify treatment effects, identifying regions where cortical thickness correlates with age, cognitive deterioration, genotype, or medication.
Various approaches have recently been proposed to automate this cortical thickness measurement from Magnetic Resonance Imaging (MRI) data, e.g., [Fischl and Dale, 2000; Jones et al., 2000; Kabani et al., 2001; Lohmann et al., 2003; Yezzi and Prince, 2003; Lerch and Evans, 2005; Thorstensen et al., 2006; Young and Schuff, 2008]. The limited spatial resolution of most MRI volumes (typically 1–2 mm) makes it difficult to accurately measure cortical thickness, which varies from 2 to 5 mm across brain regions and thus spans only a few voxels in the images. The neuroscience community has not yet agreed on a unique definition of cortical thickness, and the various methods proposed so far measure slightly different quantities. What is common among them is that virtually all perform a pre-segmentation of the white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), and most extract explicit models of the surfaces between them (i.e., the inner surface between WM and GM and the outer surface between GM and CSF). They then use this hard segmentation as the input to different tissue thickness measurement algorithms (Section II briefly reviews previous work and comments further on this). The disadvantage of this approach is that information is discarded and decisions are made in the hard segmentation process, before the tissue thickness is measured; a few misclassified voxels can thus introduce a significant local error in the measured thickness (see Section IV.A for an example).
The approach we adopt here uses a soft pre-labeled/classified volume as the input data, keeping valuable information all the way into the step of measuring tissue thickness. Due to the limited resolution of an MRI volume, many voxels contain partial amounts of two or more tissue types (see [Pham and Bazin, 2004] and the references therein). Their intensity values give us information about the probability or proportion of those voxels belonging to each of the categories of WM, GM, or CSF. Rather than a (hard) pre-classified volume, we use one containing the probability that each voxel belongs to the GM. These probability values have the same precision as the values in the original MRI volume, and therefore we do not discard any useful information.
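As a toy illustration of the information lost by thresholding, consider a hypothetical one-dimensional GM membership profile across the cortex (a minimal sketch with invented values, not data from this paper):

```python
import numpy as np

# Hypothetical 1-D profile of GM membership across the cortex; the
# boundary voxels hold partial volumes (values strictly between 0 and 1).
gm_prob = np.array([0.0, 0.1, 0.7, 1.0, 1.0, 0.6, 0.2, 0.0])

voxel_size = 1.0  # mm, assuming isotropic voxels

# Hard classification rounds the partial volumes away ...
hard = (gm_prob >= 0.5).astype(int)   # [0, 0, 1, 1, 1, 1, 0, 0]
print(hard.sum() * voxel_size)        # hard count: 4.0 mm

# ... whereas summing the soft values keeps the subvoxel information.
print(gm_prob.sum() * voxel_size)     # soft integral: 3.6 mm
```

Shifting a single boundary voxel across the 0.5 threshold changes the hard count by a full voxel, whereas the soft sum changes only by that voxel's fractional membership.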
We compute line integrals of the soft-classified data, centered at each voxel and in all possible spatial directions, and then take their minimum as the local cortical thickness at that voxel.
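The following is a minimal sketch of this minimum-line-integral idea in Python/NumPy, not the paper's implementation (which is detailed in the Appendix); the names (min_line_integral, gm_prob, half_len), the direction sampling, and the quadrature choices are ours:

```python
import numpy as np
from scipy.ndimage import map_coordinates  # trilinear interpolation off the voxel grid

def min_line_integral(gm_prob, center, directions, step=0.5, half_len=10.0):
    """Estimate local thickness at `center` (voxel coordinates) as the
    minimum, over the given unit `directions`, of the line integral of
    the GM probability volume `gm_prob` along a centered segment."""
    t = np.arange(-half_len, half_len + step, step)      # samples along the segment (voxel units)
    best = np.inf
    for d in directions:                                 # d: unit 3-vector
        pts = center[:, None] + d[:, None] * t[None, :]  # (3, n) sample coordinates
        vals = map_coordinates(gm_prob, pts, order=1, mode='nearest')
        best = min(best, vals.sum() * step)              # Riemann approximation of the integral
    return best

# Toy test: a flat synthetic "cortex" of known thickness.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(64, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dirs = np.vstack([dirs, [0.0, 0.0, 1.0]])  # include the slab normal for this test

gm = np.zeros((32, 32, 32))
gm[:, :, 14:18] = 1.0                      # a slab of pure GM, 4 voxels thick
gm[:, :, 13] = gm[:, :, 18] = 0.5          # partial-volume voxels at the slab boundaries
print(min_line_integral(gm, np.array([16.0, 16.0, 15.5]), dirs))
# 5.0 voxel units: 4 pure voxels plus two 0.5 boundary voxels
```

In practice the direction set, segment length, and quadrature rule all matter; the Appendix describes the actual discretization we use.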
While hard segmentations are often used as part of the analysis, e.g., to warp surfaces for population studies and/or for visualization, many useful statistics, such as region-based statistics, can be computed while completely avoiding this hard classification. Moreover, volumetric warping does not require a hard segmentation at all. Even if hard segmentation is to be performed for other parts of the analysis, the errors it produces need not be transferred to the tissue thickness computation. This error transfer is common in the techniques reviewed below and is avoided with our proposed framework.
In Section II, we review previous work on cortical thickness measurement. Section III describes our proposed framework, and experimental results are presented in Section IV. Section V concludes with a review of the contributions, and finally the implementation is covered in detail in the Appendix.
An early conference version of this work was published in [Aganj et al., 2008]. Here, we extend that work with more extensive validation and comparisons. In addition, the implementation of the algorithm is described in detail, and a new technique to find the GM skeleton is introduced.