The measurement of cortical thickness from 3D magnetic resonance (MR) images can be used to aid diagnosis or to perform longitudinal studies of a wide variety of neurodegenerative diseases, such as Alzheimer's disease. Manual measurements are labour-intensive and exhibit high variability. Accurate and automated software that maps the three-dimensional cortical thickness of the entire brain is thus desirable.
The approaches used for cortical thickness estimation in the literature can be broadly categorised as mesh-based and voxel-based. One common aspect of these techniques is the need for an initial classification of the different brain tissue types, namely gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). Automatic classification and cortical thickness measurement from MR images are affected by artifacts such as intensity inhomogeneity, noise, and the partial volume (PV) effect. PV introduces considerable errors in the measurements due to the finite resolution of MR images (~1 mm) compared to the size of the cortical structures (~2–3 mm). Typically, two sulcal banks in contact within a voxel may appear connected if the intervening CSF is not detected within the GM. This results in erroneously high thickness estimates or topologically incorrect brain surfaces.
Mesh-based approaches use a deformable mesh to extract the inner and outer boundaries of the cortex before measuring thickness. Deformable model techniques fit closed parametric surfaces to the boundaries between regions (Pham et al., 2000), such as the inner (GM/WM) and outer (GM/CSF) boundaries of the cortex. The main advantage of deformable models is the smoothness constraint, which provides robustness to noise and false edges. They are also capable of operating in the continuous spatial domain and can therefore achieve subvoxel resolution. However, deformable models are complex, requiring mechanisms to prevent self-intersection of surfaces or to correct topology. Another disadvantage of some of these approaches is the need for manual interaction to initialize the model and to choose appropriate parameters. Some implementations impose thickness constraints on the cortex (Zeng et al., 1999; MacDonald et al., 2000) in order to model sulci. Fischl and Dale (2000) imposed a self-intersection constraint, forcing the surface to meet in the middle of sulci. A detailed comparison of three well-established methods, CLASP (Kim et al., 2005), BrainVISA (Mangin et al., 1995) and Freesurfer (Dale et al., 1999; Fischl and Dale, 2000), is presented in Lee et al. (2006). To our knowledge, CLASP (Kim et al., 2005) is the only approach that explicitly models the partial volume effect to fit the deformable mesh. It is, however, computationally intensive, with a typical running time of over 20 h on a standard PC, as reported in Lee et al. (2006).
In contrast, voxel-based techniques (Hutton et al., 2008; Diep et al., 2007; Lohmann et al., 2003; Srivastava et al., 2003; Hutton et al., 2002) operate directly on the 3D voxel grid of the image and are therefore more computationally efficient. These methods are, however, less robust to noise and mis-segmentation, as they typically lack the mechanisms required to detect and correct topological errors. They are also hampered by the limited MR resolution in small and highly convoluted structures, such as the GM sulci, where partial volume effects predominate.
Cortical thickness can be estimated using several metrics. The definition of thickness based on Laplace's equation, which models the laminar structure of the cortex and was first introduced by Jones et al. (2000), has gained wide acceptance. Haidar and Soul (2006) showed that, with respect to variations in MR acquisition parameters, the Laplacian approach is the most robust definition of thickness compared with nearest-neighbour and orthogonal projections. Lerch and Evans (2005) performed cortical surface reconstruction and compared six cortical thickness metrics. They found that the coupled-surfaces method was the most reproducible, followed by the Laplacian definition. However, the coupled-surfaces method is highly dependent on the scheme used to construct the surface.
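To make the Laplacian definition concrete, the following is a minimal 2D sketch of the Lagrangian variant on a hypothetical flat slab of "cortex"; it is not code from any of the cited implementations, and the grid size, iteration count and step length are illustrative choices. Laplace's equation is relaxed between the two boundaries, and thickness is obtained by explicitly integrating streamlines of the resulting gradient field.

```python
import numpy as np

# Toy slab: the cortex occupies a 10 x 20 grid, with the inner (GM/WM)
# boundary at row 0 and the outer (GM/CSF) boundary at row ny-1.
# Laplace's equation is relaxed with Dirichlet values u = 0 (inner) and
# u = 1 (outer); thickness at a point is the length of the streamline
# of grad(u) passing through it (explicit Lagrangian integration).
ny, nx = 10, 20
u = np.zeros((ny, nx))
u[-1, :] = 1.0                              # outer boundary potential
for _ in range(3000):                       # Jacobi relaxation
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2])
    u[:, 0], u[:, -1] = u[:, 1], u[:, -2]   # reflective side walls

gy, gx = np.gradient(u)                     # tangent field of the laminae

def streamline_length(y, x, direction, h=0.05):
    """Euler integration along +/- grad(u) until a boundary is reached."""
    p, length = np.array([y, x], float), 0.0
    while 0.0 < p[0] < ny - 1:
        g = np.array([gy[int(round(p[0])), int(round(p[1]))],
                      gx[int(round(p[0])), int(round(p[1]))]])
        n = np.linalg.norm(g)
        if n < 1e-12:
            break
        p += direction * h * g / n
        length += h
    return length

# Thickness = streamline length to the outer plus to the inner boundary.
t = streamline_length(5.0, 10.0, +1) + streamline_length(5.0, 10.0, -1)
print(round(t, 1))                          # close to the slab height of 9
```

The repeated gradient lookups make this tracing expensive when done for every voxel, which is precisely the inefficiency the Eulerian formulation discussed next avoids.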
Whereas Jones et al. (2000) explicitly traced streamlines (Lagrangian approach), Yezzi and Prince (2003) proposed a more efficient method that solves a pair of first-order linear partial differential equations (PDEs) without any explicit construction of correspondences (Eulerian approach). The major drawback of the Eulerian approach is its limited accuracy when estimating thickness, especially within thin structures, since it is solved over a discrete grid. Accuracy is also affected by the initialization of the PDEs: when the distance to the true boundary is not explicitly computed, the PV effect is ignored. A hybrid Eulerian–Lagrangian approach was proposed by Rocha et al. (2007) to improve accuracy while preserving efficiency, but it requires a precomputed surface for subvoxel initialization at tissue boundaries. For clinical applications, precision is of utmost importance. For example, the expected change in GM thickness during the early stages of Alzheimer's disease has been shown to be less than 1 mm in most brain regions (Lerch et al., 2005; Singh et al., 2006).
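The pair of PDEs in the Eulerian approach can be sketched as follows; this is a hedged toy implementation on the same kind of flat slab, where the correct answer is known analytically, not the published code. With T = ∇u/|∇u|, the length functions satisfy ∇L0·T = 1 (L0 = 0 on the inner boundary) and ∇L1·(−T) = 1 (L1 = 0 on the outer boundary), and thickness is simply L0 + L1, with no streamline ever traced.

```python
import numpy as np

# Toy slab: u solves Laplace's equation across a flat ribbon, so it is
# linear in y (inner boundary at row 0, outer at row ny-1). L0 and L1
# are computed by Gauss-Seidel sweeps of a first-order upwind scheme.
ny, nx = 10, 20
u = np.tile((np.arange(ny) / (ny - 1))[:, None], (1, nx))
Ty, Tx = np.gradient(u)
norm = np.hypot(Ty, Tx)
Ty, Tx = Ty / norm, Tx / norm               # unit tangent field

def sweep(L, sign):
    """One Gauss-Seidel pass of the upwind discretisation of grad(L).(sign*T)=1."""
    for i in range(ny):
        for j in range(nx):
            ty, tx = sign * Ty[i, j], sign * Tx[i, j]
            iy = i - 1 if ty > 0 else i + 1  # upwind neighbours
            jx = j - 1 if tx > 0 else j + 1
            num, den = 1.0, 0.0
            if abs(ty) > 1e-12 and 0 <= iy < ny:
                num += abs(ty) * L[iy, j]
                den += abs(ty)
            if abs(tx) > 1e-12 and 0 <= jx < nx:
                num += abs(tx) * L[i, jx]
                den += abs(tx)
            if den > 0:                      # boundary rows keep L = 0
                L[i, j] = num / den

L0 = np.zeros((ny, nx))                      # length from inner boundary
L1 = np.zeros((ny, nx))                      # length from outer boundary
for _ in range(2 * ny):                      # enough passes to propagate
    sweep(L0, +1)
    sweep(L1, -1)

thickness = L0 + L1                          # = ny - 1 = 9 across the slab
print(float(thickness[5, 10]))               # 9.0
```

Because each voxel only reads its upwind neighbours, the cost is a few linear passes over the grid; the accuracy limitation mentioned above shows up when the boundary rows do not coincide with the true tissue interface, which is exactly what PV-aware initialization addresses.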
Building upon Yezzi and Prince (2003), we have improved the precision of voxel-based thickness measurement by taking into account the PV coefficients at the GM boundaries to appropriately initialize the PDEs, without prior upsampling and interpolation of the images. Our scheme can be considered a combined Lagrangian–Eulerian approach: the boundaries are initialized with an explicit integration along the streamlines, achieving subvoxel accuracy, and for the remaining grid points two PDEs are solved as in the Eulerian approach, preserving computational efficiency. Unlike Rocha et al. (2007), the detection of the boundaries is performed within the gray matter partial volume map, without previous delineation of the surface. Rocha et al. (2007) additionally proposed a correction for divergent streamlines in thick and irregular structures, introducing a distance tolerance (λ). In cortical thickness estimation this is unlikely to occur, as the GM, which is a few mm thick, spans only one or two voxels (for a typical full-brain MR resolution in a clinical setting: 1 mm × 1 mm × 1.2 mm).
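As a hedged 1D illustration of why PV coefficients matter (this is not the 3D scheme described above): along a single streamline, summing the GM partial-volume fractions of the voxels it crosses locates both tissue interfaces with subvoxel precision, whereas a binarised segmentation rounds each fraction to 0 or 1 and quantises thickness to whole voxels. The profile values below are invented for illustration.

```python
import numpy as np

h = 1.0                                             # voxel size along the streamline (mm)
pv_gm = np.array([0.0, 0.3, 1.0, 1.0, 0.6, 0.0])    # illustrative GM fractions

# Binary segmentation: fractions thresholded at 0.5, thickness quantised to h.
thickness_binary = (pv_gm >= 0.5).sum() * h         # 3.0 mm
# PV-aware estimate: fractional voxels shift both interfaces subvoxel-wise.
thickness_pv = pv_gm.sum() * h                      # 2.9 mm
print(thickness_binary, thickness_pv)
```

The 0.1 mm discrepancy here is of the same order as the group differences cited above for early Alzheimer's disease, which is why subvoxel boundary initialization matters clinically.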
In the remainder of this paper, we first describe the method. We then validate the accuracy of our approach on synthetic data and its reproducibility on real MR data. In the final section, we apply our cortical thickness estimation approach to a subset of the ADNI database, comprising 43 healthy elderly individuals or normal controls (NC), 53 patients with mild cognitive impairment (MCI) and 22 Alzheimer's disease patients (AD), and compare our method against the Eulerian approach of Yezzi and Prince (2003). Our method achieved higher statistical power when comparing the two groups, with good computational efficiency (30 min on average on a standard PC).