In the following section, we outline the motivation behind our method in the context of AD-related statistical inference. The complete formulation of topology-based kernel construction using the cortical thickness signal is presented in the next section.
In the context of Alzheimer’s disease as well as other disorders [30], numerous findings confirm fundamental differences in the patterns of cortical volume loss and regard cortical atrophy as a useful biomarker for AD [10]. Cortical thinning leads to localized changes in the spatial distribution of gray matter, and hence a geometric realization of topological measures on the cortex can induce separability between diseased and healthy brains. The goal is to use such clinical signals to derive similarity measures. Because cortical thickness is a highly attributed marker, most existing models have used this anatomically relevant aspect of disease progression in a generative setting: they assume co-registered surfaces and perform statistical tests to evaluate the discriminative power of each feature, where the feature vector’s dimensionality equals the number of cortical surface vertices and the thickness values at the vertices give the magnitudes of the corresponding entries. One may then calculate inter-subject distances (or similarities, via inner products) based on the top discriminative features, which may then be fed into a standard SVM procedure. Notice the two major difficulties with this approach. First, we must account for the mismatch between the training set size and the dimensionality of the feature space via feature reduction [22] or the introduction of bias [21]. Second, during the reconstruction of cortical thickness using automated software tools, point-wise correspondence of mesh topology among the training subjects may be unavailable, leading to different numbers of vertices at different coordinates for different subjects. One approach to this problem is to map cortical thickness onto a sphere with a fixed number of vertices, then re-sample and interpolate the thickness measure for each subject so as to allow a direct point-wise comparison. However, this procedure not only attenuates the vertex-wise signal but also ignores
the higher-order interactions between subsets of vertices that vary between the two groups. Moreover, neurodegeneration is not a spatially uniform event: the exact locations (coordinates of affected vertices) vary across the population, obscuring the statistical concept under study. More importantly, in the context of cortical thickness, vertex-wise thickness values may be less
relevant: in some settings, separability between classes likely comes from variation in topological features (composed of more than one vertex). In other words, subtle losses of gray matter may affect the shape or topology of cortical surfaces before they significantly affect the thickness measures. A naive alternative is to examine all possible groups of vertices and evaluate the significance of their variation, which is clearly intractable. Our approach below seeks to characterize brain images by deriving a representation of “groups” of vertices on each individual cortex. Briefly, we determine whether a localized region of the brain exhibits any gray matter atrophy via construction of a simplex on critical points, which provide information on the global topology.
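To make the vertex-wise baseline discussed above concrete, the following is a minimal sketch, not any specific published pipeline: assuming co-registered surfaces, it ranks vertices by a univariate group statistic and forms a linear (inner-product) kernel over the top discriminative vertices for use with a standard SVM. The function name, the choice of a Welch-style t statistic, and `n_keep` are all illustrative assumptions.

```python
# Hypothetical sketch of the vertex-wise baseline: select the most
# discriminative vertices by a univariate test, then build a linear
# kernel from the reduced feature vectors. Assumes co-registered
# surfaces (same vertices, same order, for every subject).
import numpy as np

def vertexwise_linear_kernel(thickness, labels, n_keep=100):
    """thickness: (n_subjects, n_vertices) array; labels: 0/1 group.
    Returns (kernel matrix, indices of selected vertices)."""
    g0, g1 = thickness[labels == 0], thickness[labels == 1]
    # Welch-style t statistic per vertex (plain numpy, no scipy)
    m0, m1 = g0.mean(0), g1.mean(0)
    v0 = g0.var(0, ddof=1) / len(g0)
    v1 = g1.var(0, ddof=1) / len(g1)
    t = np.abs(m0 - m1) / np.sqrt(v0 + v1 + 1e-12)
    keep = np.argsort(t)[-n_keep:]      # top discriminative vertices
    X = thickness[:, keep]
    return X @ X.T, keep                # linear (inner-product) kernel
```

Note that this construction inherits exactly the difficulties described above: it requires feature reduction to cope with the dimensionality mismatch, and it breaks down when vertex correspondence across subjects is unavailable.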
Our method is based on the topological persistence of brain measurements defined on cortical surfaces. Within the framework of signal filtration, if one considers the cortex as a complex, then the spatial properties of cortical thickness can be represented as the history of a growing complex using notions of the “birth and death” of homological classes. It is reasonable to expect that inferences based on such topological changes, which occur during the growth and are classified as critical points or noise according to their lifetime, will capture the differences between clinically distinct groups at a global level. Our algorithm characterizes such changes to derive a precise representation of the topological features of the signal. Once such a representation is obtained, we can easily construct similarity measures (kernels) and leverage emerging ML tools (e.g., MKL) for inclusion in statistical inferences.
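To illustrate the “birth and death” bookkeeping, here is a minimal sketch, not the implementation developed in the next section, of 0-dimensional sublevel-set persistence for a scalar signal sampled along a line (a 1-D stand-in for thickness on the cortex). As the filtration value grows, each local minimum births a connected component; when two components merge, the younger one dies, and its lifetime (death minus birth) distinguishes salient features from noise. The function name and tie-breaking are our own illustrative choices.

```python
# Minimal union-find sketch of 0-dimensional sublevel-set persistence
# for a 1-D signal: sweep vertices in increasing function value,
# merging adjacent components and recording (birth, death) pairs.
def sublevel_persistence_pairs(values):
    """(birth, death) pairs for connected components of sublevel sets
    of a 1-D signal; the global minimum never dies (death = inf)."""
    n = len(values)
    parent = list(range(n))          # union-find forest
    birth = list(values)             # birth value stored at each root
    added = [False] * n
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in sorted(range(n), key=lambda k: values[k]):
        added[i] = True              # vertex enters the sublevel set
        for j in (i - 1, i + 1):     # edges to already-present neighbors
            if 0 <= j < n and added[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri  # elder rule: older component survives
                if values[i] > birth[rj]:
                    pairs.append((birth[rj], values[i]))  # younger dies
                parent[rj] = ri
    pairs.append((min(values), float("inf")))  # essential component
    return pairs
```

For the signal `[0.0, 2.0, 1.0, 3.0, 0.5]`, the minima born at 1.0 and 0.5 die at 2.0 and 3.0 respectively, while the global minimum persists forever; on a cortical surface, the same sweep is carried out over mesh neighbors rather than a line, and long-lived pairs form the topological summary from which kernels are built.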