This paper responds to Thompson and Holland (2011), who challenged our tensor-based morphometry (TBM) method for estimating rates of brain change in serial MRI from 431 subjects scanned every 6 months for 2 years. Thompson and Holland noted an unexplained jump in our atrophy rate estimates: an offset between 0 and 6 months that may bias clinical trial power calculations. We identified why this jump occurs and propose a solution. By enforcing inverse-consistency in our TBM method, the offset dropped from 1.4% to 0.28%, giving plausible anatomical trajectories. Transitivity error accounted for the minimal remaining offset. Drug trial sample size estimates with the revised TBM-derived metrics are highly competitive with other methods, though higher than our previously reported estimates by a factor of 1.6 to 2.4. Importantly, they are far below those given in the critique. To demonstrate a 25% slowing of atrophic rates with 80% power, 62 AD and 129 MCI subjects would be required for a 2-year trial, and 91 AD and 192 MCI subjects for a 1-year trial.
This paper responds to a recent commentary in the journal NeuroImage (Thompson and Holland, 2011), regarding the accurate estimation of changes in serial brain MRI scans. Thompson1 and Holland (2011) pointed out an important issue about potential image registration bias when computing changes in brain images, which they noticed in a re-analysis of the data we previously published in NeuroImage (Hua et al., 2010). We carefully studied and agreed with the main argument in Thompson and Holland's letter and have developed a solution to the problem by using inverse-consistent registration. The resulting updated measures from tensor-based morphometry are informative and powerful for use in drug trials to assess factors that affect brain change; sample size estimates remain competitive. Measures from our inverse-consistent algorithm show very good power, and are superior to the adjustments that showed poor statistical power in the Thompson and Holland re-analysis. We would like to thank Thompson and Holland for noting surprising aspects of our prior data and helping us identify and correct them.
Tensor-based morphometry (TBM) produces 3D maps of volumetric brain change found by deforming one brain to match another. Individual maps of brain changes (also called Jacobian maps) are aligned to an average group template, and group-wise comparisons can be made using voxel-based statistics. We note, for clarity, that although this general type of analysis is called TBM, many nonlinear image registration methods have been developed to compute brain changes analyzed in this way (e.g., Freeborough and Fox, 1998; Ashburner, 2007; Yanovsky et al., 1999). Klein et al. (2009) recently compared 14 nonlinear registration methods; these algorithms are in a continual state of refinement, with the goal of reducing quantification errors.
In recent work, we computed rates of brain change based on 1309 ADNI MRI scans (Hua et al., 2010) and in accordance with the recommendations of the ADNI project, we made the resulting numeric summaries from our analyses available in a public database (http://adni.loni.ucla.edu). Several ADNI analysis groups also upload numerical summaries from the MRI scans to this database. Because of the unusually large scale of this neuroimaging project, these numeric summaries are frequently replaced or updated over time, and corrected datasets are uploaded as errors in the analyses come to light.
Although TBM produces an entire 3D map of brain changes, most MRI analysis methods compute a single number from each image (such as the volume of the hippocampus; Schuff et al., 2009; Morra et al., 2009a,b). Given the interest in comparing different analysis methods, we examined two different methods to obtain a representative measure of brain change from TBM. First, we computed the average change over an anatomically defined region of interest (ROI), the temporal lobes. Second, as advocated in work on FDG-PET by Eric Reiman and colleagues (Chen et al., 2009), we used a statistically-defined region of interest (stat-ROI) that selects the regions in the image with the greatest effect sizes, based on a pilot analysis of a non-overlapping dataset. This region is then used to compute summaries of changes in other scans. The validity and advantages of the stat-ROI method have been discussed previously (e.g., Chen et al., 2009); stat-ROIs often outperform standard atlas-based ROIs.
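The ROI summary step reduces each Jacobian map to one number; a minimal sketch with synthetic arrays (the map values and mask are illustrative placeholders, not ADNI data):

```python
import numpy as np

# Synthetic 3-D "Jacobian map" of percent volume change per voxel,
# and a boolean ROI mask -- illustrative arrays, not ADNI data.
rng = np.random.default_rng(0)
jacobian_map = rng.normal(loc=-1.0, scale=0.5, size=(8, 8, 8))
roi_mask = np.zeros((8, 8, 8), dtype=bool)
roi_mask[2:6, 2:6, 2:6] = True          # a cubic "temporal lobe" stand-in

# One rate-of-atrophy score per subject: mean Jacobian inside the ROI.
score = float(jacobian_map[roi_mask].mean())
print(score)
```

The same masked-mean operation applies whether the mask is an anatomical ROI or a stat-ROI; only the provenance of the mask differs.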
In a re-analysis of our numeric summary data from a stat-ROI, which we uploaded to the public ADNI database, Thompson and Holland (2011) noted an unexplained and surprising jump in the time-series of changes. We were able to replicate this effect in our own data, and the graph is shown in Figure 1. Clearly there is an apparent jump in the atrophy rate between 0 and 6 months, with more linear changes thereafter. The jump occurs in all diagnostic groups – AD, MCI, and controls. In Hua et al. (2010), we showed time-series of cumulative atrophy in MCI and AD; in those groups, the jump appears more coherent with the rest of the trajectory, and we initially interpreted it as natural. This was plausible, given the possibility of a biological nonlinear change in the atrophy over time due to changes in the disease process, measurement error, drift in scanner calibration over time, and attrition or sampling effects. However, when the trajectory of atrophy from controls is also shown, it becomes clear that there is a systematic bias in the measures, of unknown origin.
We were alerted to this effect on September 27, 2010 (W. Thompson, personal communication) and conducted a set of experiments to hypothesize, test, identify, and subsequently correct the source of this problem. We postulate that any bias in atrophy estimates may comprise a constant, additive offset, and a component whose magnitude depends on the true level of atrophy (an atrophy-dependent component). We report our experiments below, which consider factors that might affect the linearity of the time-series. In turn, we discuss various sources of bias as they are relevant to ongoing investigations of brain morphometry by our group and others.
As in our prior work (Hua et al., 2010), we used tensor-based morphometry (TBM) to map the 3D profile of progressive atrophy in 91 subjects with probable AD (age: 75.4±7.5 years), and 188 with amnestic mild cognitive impairment (MCI; 74.6±7.1 years), scanned at 0, 6, 12, 18 and 24 months (in ADNI, only the MCI subjects were scanned at 18 months). In the current analysis, we added 152 healthy controls (age: 76.0±4.8 years), scanned at 0, 6, 12, and 24 months. To avoid sampling different individuals at each time-point, we included only those subjects who were scanned at all time-points. At the time of writing, far fewer people had been scanned at 36 months, so we did not include this time-point in our analysis. The inclusion of only those subjects with scans at all time-points could have under-sampled people whose atrophy was progressing more quickly and who were therefore more likely to drop out of the study (Scahill et al., 2002). So, a sampling bias cannot be absolutely eliminated, and there may be sources of nonlinearity in the trajectory of atrophy that cannot be entirely modeled or explained.
Individual maps of atrophy rates (also known as “Jacobian maps”) were derived from a TBM analysis of MRI scans acquired over time. These maps represent the rates of tissue shrinkage (or CSF space expansion) at each voxel location in the brain.
We compared two registration methods to assess brain changes over time:
This method nonlinearly warps the follow-up scan to match the baseline scan of the same individual, driven by a mutual information cost function and a regularizing term called the symmetrized Kullback-Leibler distance (sKL-MI; Yanovsky et al., 2009). This method was used to compute change over time in our earlier paper (Hua et al., 2010). While the penalty term (the symmetrized Kullback-Leibler distance) is designed to be inverse-consistent, there is no explicit constraint in this method to ensure inverse-consistency of the matching cost function.
An inverse-consistent version of sKL-MI was implemented (by B.G.). Distinct from other implementations of inverse consistency, instead of merely reducing the inverse consistency error, we eliminate it completely, using the equivalent perturbation method introduced in Leow et al. (2007). In earlier work by Christensen and Johnson (2001), the inverse-consistency error of a nonlinear registration algorithm was penalized, but not reduced to zero, by defining the following energies (or costs to be minimized) on the forward and backward mappings:
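The display equation is missing from this version of the text. A plausible reconstruction, using the symbols defined in the next paragraph (M denotes a generic image-matching term such as mutual information; its exact form in the original display is an assumption), is:

```latex
\begin{aligned}
E_{f}(h) &= M(T, S, h) + \lambda\, R(h)
  + \rho \int_{\Omega} \left\| h(x) - g^{-1}(x) \right\|^{2} dx, \\
E_{b}(g) &= M(S, T, g) + \lambda\, R(g)
  + \rho \int_{\Omega} \left\| g(x) - h^{-1}(x) \right\|^{2} dx.
\end{aligned}
```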
Here (as in Leow et al., 2007), T (target) and S (source) are the images to be registered, both defined on a computational domain, Ω, h is the forward transform from the source to the target, g is the backward transform from the target to the source, R is the regularizer and λ and ρ are weighting terms. As noted by us in Leow et al. (2007) and by Avants et al. (2008), this will not entirely remove the inverse consistency error. Instead we consider an infinitesimal perturbation ξ applied to the inverse mapping, and solve for η, the perturbation in the forward mapping that preserves the fact that the forward-backward mapping pair h and h-1 stay inverses of each other. Thus, the composition of the two perturbations ought to approach the identity mapping in the limit:
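The display equation is missing here; the limiting relation described in the text can plausibly be written as:

```latex
(h + \eta) \circ \left( h^{-1} + \xi \right) \;\longrightarrow\; \mathrm{Id}
\qquad \text{as } \xi \to 0 .
```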
As previously shown, η(x) = −Dh(x) ξ(h(x)), where Dh is the Jacobian matrix of h with (i,j)-th element ∂hi/∂xj. We can then compute a forward equivalent of a body force in the backward direction, using only the forward mapping h, and not involving h-1. This circumvents numerical errors incurred when performing numerical inversion operations to go between h and h-1.
All subjects' maps of brain change were registered to a mean deformation template (MDT) based on 40 subjects from the study, as in Hua et al. (2009). The MDT represented the average shape of 40 healthy elderly controls; the procedure to construct the MDT is detailed in Hua et al. (2008a,b). The mean template does not affect the estimates of atrophy rates in each person. Average Jacobian maps were computed by taking the mean at each voxel of the Jacobian maps across subjects.
A power analysis was established by the ADNI Biostatistics Core to estimate the minimal sample size required to detect, with 80% power, a 25% reduction in the mean annual change, using a two-sided test and standard significance level (α=0.05) for a hypothetical two-arm study (treatment versus placebo). The estimated minimum sample size for each arm was computed with the formula below. Briefly, β denotes the estimated annual change (average of the group) and σD refers to the standard deviation of the rate of atrophy across subjects.
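The display formula is missing here; the standard linearized per-arm expression, using β and σD as defined in the text and critical values of the standard normal distribution, is (a reconstruction, so the exact presentation in the original display is assumed):

```latex
n_{80} \;=\; \frac{2\, \sigma_{D}^{2}
  \left( z_{1-\alpha/2} + z_{\mathrm{power}} \right)^{2}}
  {\left( 0.25\, \beta \right)^{2}}
```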
Here zα is the value of the standard normal distribution for which P[Z < zα]=α (Rosner, 1990). The sample size required to achieve 80% power was computed, denoted by n80. We note in passing that this is a linearization of the exact expression for statistical power.
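As a concrete illustration, this calculation can be sketched in Python; the group statistics below (2.0%/year mean change, SD 1.2% across subjects) are hypothetical placeholders, not the actual ADNI values.

```python
import math
from statistics import NormalDist  # standard library

def n80(beta, sigma_d, alpha=0.05, power=0.80, slowing=0.25):
    """Per-arm sample size to detect a fractional `slowing` of the mean
    annual change `beta` with the given `power`, two-sided level `alpha`,
    and between-subject SD `sigma_d` (linearized normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_power = z.inv_cdf(power)
    n = 2 * sigma_d**2 * (z_alpha + z_power)**2 / (slowing * beta)**2
    return math.ceil(n)                  # whole subjects per arm

# Hypothetical group statistics (NOT the ADNI values).
print(n80(beta=2.0, sigma_d=1.2))  # -> 91
```

The ceiling is taken because sample sizes are whole subjects; note how quadratically the estimate depends on the ratio σD/β, which is why the confidence intervals on n80 are so wide.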
We performed several statistical analyses to assess factors influencing the linearity of the brain changes over time, and the effect sizes of the resulting measures of atrophy. We assessed brain changes in anatomical and statistical regions of interest. A statistically-defined region of interest (stat-ROI) was based on voxels with significant atrophic rates over time (p < 0.001 or p < 0.0001, uncorrected) within the temporal lobes. This was established in a non-overlapping training set of 20 AD patients (age at baseline: 74.8 ± 6.3 years; 7 men and 13 women) scanned at baseline and 12 months. The anatomical ROI included the temporal lobe gray matter, a region typically providing the highest statistical power for tracking AD progression (Jack et al., 1998). This procedure is detailed in Chen et al. (2009) and Hua et al. (2009, 2010). A numerical summary of the atrophic rate in the temporal lobes was computed by taking the arithmetic mean of Jacobian values within the corresponding stat-ROI or anatomical ROI (Hua et al., 2009, 2010), giving a single rate-of-atrophy score for each individual. Evidence for an offset at time zero (which may arise from the method and/or biological nonlinearity) was assessed by fitting a linear mixed-effects model through (1) measures of cumulative atrophy at 6, 12, and 24 months, leaving out the data point at baseline, or (2) the same measures plus the known data point at time zero (the change at month zero is zero by definition), in the control group (n=152). The lmer function from the lme4 library in the R statistical package (version 2.10.1) was used to estimate the intercept (offset at month zero) and its 95% confidence intervals.
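A much-simplified sketch of the intercept estimation: ordinary least squares pooled across subjects stands in for the lmer mixed-effects fit, and the atrophy values are synthetic illustrations, not ADNI measurements.

```python
import numpy as np

# Months at which cumulative atrophy is measured (baseline left out,
# as in fitting variant (1) described in the text).
months = np.array([6.0, 12.0, 24.0])

# Synthetic cumulative atrophy (%) for three hypothetical control
# subjects -- illustrative numbers only.
atrophy = np.array([
    [0.55, 0.85, 1.45],
    [0.40, 0.70, 1.30],
    [0.62, 0.92, 1.52],
])

# Simplified stand-in for the lmer mixed-effects fit: pool all points
# and fit one line; the intercept estimates the offset at month zero.
t = np.tile(months, atrophy.shape[0])
y = atrophy.ravel()
slope, intercept = np.polyfit(t, y, 1)
print(round(float(intercept), 3))  # fitted offset at month zero, in %
```

A nonzero fitted intercept, extrapolated back from post-baseline time-points, is exactly the kind of offset at issue in this exchange.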
We hypothesized that inverse-consistent registration would reduce or eliminate the offset in the measured trajectories of atrophy.
To show that our inverse-consistent registration algorithm ic-sKL-MI indeed created maps that are inverse-consistent, we made a map of the inverse consistency error, ICE = ‖x − h∘h-1(x)‖, where h is the mapping from one time point to another, and h-1 is the mapping computed in reverse (i.e., by the algorithm applied to the same scans, but with the order of the scans switched). A typical map is shown in Figure 2(a), showing that the ICE is around 0.005 mm or lower throughout the brain, with higher values in the scalp or other non-brain regions where signals are not important and contrast is less consistently controlled over time. The backward mapping is within a few thousandths of a millimeter of the inverse of the forward mapping across the entire 3D volume of the brain. Since many voxels undergo very small deformations, it is also instructive to assess the relative inverse consistency error, or ICE/‖h‖. Figure 2(c) shows that the relative error is well below 5% of the measured change, and much less in most voxels.
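A 1-D toy version of the ICE computation (the real maps are 3-D displacement fields; here the backward map is obtained by numerical inversion rather than by a second registration run, an assumption made to keep the sketch self-contained):

```python
import numpy as np

# 1-D toy illustration of the inverse consistency error (ICE).
x = np.linspace(0.0, 2 * np.pi, 2001)   # computational grid
h = x + 0.1 * np.sin(x)                 # forward map (monotone, invertible)

# Backward map: obtained here by numerically inverting h; in the
# registration setting it comes from re-running the algorithm with
# the order of the two scans switched.
h_inv = np.interp(x, h, x)

# ICE(x) = | x - h(h_inv(x)) |: compose the maps, compare to identity.
ice = np.abs(x - np.interp(h_inv, x, h))
print(float(ice.max()))  # tiny: the composition is near-identity
```

With an inverse-consistent pair, this error is limited only by interpolation accuracy, mirroring the few-thousandths-of-a-millimeter errors reported for the 3-D maps.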
As it is difficult to relate error in displacement fields to error in Jacobian determinants directly, we estimated the effect of ICE on Jacobians empirically. Figure 2(b) shows the map of | |Dh| − |D(inv(h-1))| |; here, the outer bars denote absolute value, and the inner bars denote the Jacobian determinant. Note that the direct numerical inversion inv() creates additional numerical error, so this estimate is pessimistic. Still, the map shows that within the brain, the absolute error is on the order of 0.1% change or less. When one integrates over a large ROI, this error is likely to be reduced substantially due to averaging effects.
As shown in Figure 3, brain changes recovered by the ic-sKL-MI method show substantially reduced offsets. By extrapolating back from the 6-12 month interval, the offset in the change measures is very small – around 0.28% for both the temporal lobe gray matter ROI and the statistical ROI (Figure 3), compared with the 1.2-1.4% offset for brain change measures computed with sKL-MI (Figure 1). Clearly, this is greatly reduced, and amounts to a displacement field error of a few thousandths of a millimeter in a 1-mm MRI voxel. As any biological sample is heterogeneous, a linearized plot through the mean atrophy rate data will not run through zero exactly.
In Figure 4, offsets are measured based on best linear fit to all the data points at 6, 12, and 24 months in the control group only (n=152), as their trajectory is thought to be linear. The fitted intercepts and the 95% confidence intervals based on control subjects are (a) statistical ROI: 0.29% [0.15, 0.44], (b) temporal lobe gray matter: 0.28% [0.15, 0.42], (c) statistical ROI including the known data point at baseline: 0.12% [0.05, 0.19], (d) temporal lobe gray matter including the known data point at baseline: 0.11% [0.04, 0.18].
Power estimates based on our inverse-consistent measures are shown in Table 1. To demonstrate a 25% slowing of atrophy rates with 80% power, 62 AD and 129 MCI subjects would be required for a 2-year trial, and 91 AD and 192 MCI subjects for a 1-year trial. These are 1.6-2.4 times higher than our previously estimated sample sizes, but not 5-16 times higher as alleged in the Thompson and Holland (2011) analysis. The difference is accounted for by using inverse-consistent registration to compute brain changes.
As shown in Table 2, across the same set of subjects in AD, MCI, and CTL, longer intervals led to a greater amount of measured atrophy in the statistical ROI and temporal gray matter, resulting in smaller sample size estimates.
As shown in Figure 5, we computed a map of the voxelwise transitivity errors. To define these, we label the 0, 12, and 24 month scans in a given subject as A, B, and C, respectively; we compute the deformation mappings between these time points: hAB, hAC and hBC. The transitivity error, at each point in the brain, is defined as the difference between the Jacobians of the direct mapping (from A to C) and the composed mapping (from A to C via B):
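The display equation defining the transitivity error is missing here; in the notation just introduced, a plausible reconstruction (the voxelwise absolute difference of the two Jacobian determinants) is:

```latex
\mathrm{TE}(x) \;=\;
\Bigl|\; \bigl| D h_{AC}(x) \bigr|
 \;-\; \bigl| D\!\left( h_{BC} \circ h_{AB} \right)(x) \bigr| \;\Bigr|
```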
As shown in Figure 5, the transitivity error is small in all areas of the brain, around 20 times smaller than the estimate of the true change. We were able to confirm that the mean transitivity error was typically around 0.2-0.4%, regardless of whether a standard anatomical or statistical ROI was used (see Figure 6). This error accounts for most of the remaining offset of around 0.28% in the data in Figure 3. As this error is weakly correlated with the true biological change, subtracting it may even reduce the discriminative power of the measures.
First, we are grateful to Thompson and Holland (2011) for pointing out the nonlinear offset of 1.2-1.4% in our previously reported atrophy rate measures. Although some of this offset may result from biological sources, we showed that the intercept from all sources (including biological departures from linearity) is only 0.28% when using inverse-consistent registration to estimate the brain changes. Inverse-consistency errors in our new measures of change were effectively zero throughout the brain (Figure 2). With these measures of change, our power estimates for clinical trials were competitive with others in the literature. To demonstrate a 25% slowing of atrophic rates with 80% power, 62 AD and 129 MCI subjects would be required for a 2-year trial, and 91 AD and 192 MCI subjects for a 1-year trial. These are 1.6-2.4 times higher than our prior sample size estimates, but not 5-16 times higher as alleged in the Thompson and Holland (2011) analysis of our prior numeric summaries. Power was re-gained by using inverse-consistent registration to compute the brain changes.
Some re-interpretation of past results is warranted in the light of these new estimates, and we suggest that our new table (Table 2) should be consulted first, in order to appreciate the very wide range of the confidence intervals on these sample size estimates. Although the notion of inverse consistency is important, it is noteworthy how little the sample size estimates have changed relative to the very large known uncertainty of the estimator itself, which is given by its 95% CI. Any future clinical trial using this kind of estimate would be expected to base its power predictions on both the upper and lower confidence limits. Specifically, in almost all of our prior TBM papers, including those initially published in ADNI (Leow et al., 2006; Hua et al., 2008a, 2008b), we used a 3D inverse-consistent elastic registration method to measure change, known as 3DMI (Leow et al., 2005), in which inverse consistency is numerically enforced, as shown formally and with substantial empirical data in Leow et al. (2007). More recently, in four published papers we changed our registration method to sKL-MI (Yanovsky et al., 2009), because it appeared to offer more desirable properties, such as formal mathematical symmetry. An independent study by Tagare et al. (2006, 2009) noted that sKL, as we formulated it in Yanovsky et al. (2009), is advantageous, as it is an inherently symmetric cost function. We reported sample size estimates from sKL-derived registrations in two papers, Hua et al. (2009) and (2010), so those estimates may need to be revised upwards. In addition, there is reference to sKL-derived power estimates in Kohannim et al. (2009) and Ho et al. (2010).
Our new findings regarding registration asymmetry would suggest that the sample size estimates in those papers should be roughly doubled, while bearing in mind that there is still a 4-5 fold difference in the upper and lower confidence limits, reported here and in the past, so the measures should not be treated as if they are precise in any case.
The results of our registration methods are inverse-consistent, i.e., symmetrical: findings are the same regardless of the order of the images. In general, a registration algorithm will not automatically be symmetric; to achieve symmetry, it requires either equivalent perturbation methods (which we used), or a full space-time (4D) optimization for every pair of images (as is done by the SyN algorithm by Avants et al., 2008, for example).
Transitivity error is another source of error in maps of brain change. In our experiments, this contributes about 0.28% to the observed changes, or a few thousandths of a millimeter in a typical 1-mm MRI voxel. Further reducing transitivity errors requires elaborate registration schemes that include even more penalty functions to adjust the registrations based on more than two input images - such as registering sets of images in groups of three (Geng, 2007). Transform reconciliation (Woods et al., 1998) and group-wise registration methods (Leporé et al., 2008) compute a set of mappings between all N brains in a study, and use the internal consistency among mappings as a means to reduce errors, or simply to redistribute the mean error among all the mappings. In Leporé et al. (2008), we proposed a method called multi-atlas tensor-based morphometry; this uses groupwise registration to reduce the error and boost statistical power in a cross-sectional TBM study. At the expense of very high computational times, we mapped every brain in the study to all others, and used arithmetic relations among mappings to reduce errors. Such highly CPU-intensive groupwise registration methods are more appropriate for small studies, and not yet realistic to apply to datasets the size of ADNI. One promising groupwise registration method is hierarchical correspondence detection by clustering (Wu et al., 2010a,b). This identifies which correspondences in a set of subjects are robust, and uses them to guide anatomical correspondence detection among all subjects, and across time-points. A second way to achieve formally transitive registration is to use a registration target different from all brains in the study, and compute mappings between brains by concatenating the maps to this target and their inverses (Skrinjar et al., 2010). Such a method is formally transitive, yet borders on being algorithmically exhaustive, and may result in more overall error in individual mappings.
By expecting linear trajectories for the measures of atrophy over time, as is implied by Thompson and Holland (2011), it is assumed that the atrophy rate remains constant, at least in aggregate, across a group. In a study by another research group on a different sample of 39 healthy controls (aged 31-84), Scahill et al. (2003) found that rates of change accelerated, especially after 70 years of age, in the ventricles (p<0.001) and hippocampi (p=0.01). At the time of writing (December 2010), to the best of our knowledge, there are no other voxel-based brain mapping studies from ADNI that use more than two time-points. Schuff et al. (2009) and McEvoy et al. (2009) published the only two studies we know of that examined more than two ADNI time-points. Schuff et al. (2009) examined hippocampal volume in 112 normal elderly, 226 MCI and 96 AD patients who all had at least three successive MRI scans at 0, 6 and 12 months. In both MCI and AD (p=0.0001), but not in normal controls, rates of hippocampal loss were slightly faster in the 6-12 than the 0-6 month interval. McEvoy et al. (2009) reported changes in various ROIs between 0-6 and 6-12 months. In that paper, if lines were drawn connecting the mean values at the 6 and 12 month time-points, and extrapolated back to zero, many would not pass through zero (see Figure 7 in that paper). Depending on the ROI chosen, the changes in the second six months are between half and double the changes occurring over the first six months. This suggests caution in ascribing too much meaning to small intercepts that are extrapolated using linear assumptions, based on data that clearly depend on the sample of subjects assessed and the region chosen.
Longitudinal MRI studies at multiple time-points indicate that overall brain volume loss, in general (Chan et al., 2003; Carlson et al., 2008), and hippocampal volume loss, in particular (Ridha et al., 2006; Jack et al., 2008), may accelerate in patients with MCI and Alzheimer's disease, but many of these studies have follow-up intervals as long as 10 years. As this acceleration effect is not detected in our data, people with accelerating atrophy may either (1) participate in ADNI in lower proportions or drop out in higher proportions than those with linear or decelerating atrophy, or (2) be less likely to have a full time-series of scans every 6 months for 2 years due to their rapidly accelerating disease progression. Analysis of later ADNI time-points with multiple methods should shed light on this unresolved question.
Yushkevich et al. (2010) noted one potential source of bias in longitudinal image analysis, arising from differences in interpolating baseline and follow-up images after global normalization. In our own registration pipelines (here and previously), this specific problem noted by Yushkevich et al. (2010) was not an issue, as our baseline and follow-up images were treated equivalently during re-alignment, re-sampling and interpolation. Related to our argument in Leow et al. (2007), Tagare et al. (2009) noted that almost all registration algorithms compute correspondences between images by summing up quantities in the coordinate system of one image (the source), the other (the target) or both. He notes that our original cost function, sKL-MI, introduced in Yanovsky et al. (2009), is formally symmetric, while many others - that are now widely used - are not. He proposed a numerical scheme to guarantee symmetry by computing all quantities (including the intensity matching term) in a coordinate system that is weighted using the Jacobian determinant. He also advocates using a specific differential form (a concept from exterior calculus) when computing registration cost functions, such as intensity correspondence and smoothness of the warp (cf. Cachier and Rey, 2000). We tried this in our nonlinear registration work by using the square-root of the Jacobian to weight volumetric integrals (Leow et al., 2007; cf. Noblet et al., 2008). We have not explored its empirical consequences here, but it may boost power in clinical applications of TBM. Extremely computationally demanding 4D methods have also been proposed, which use a subject's entire 4D time-series to infer a continuous evolution of shapes or “hyper-templates” from a set of observations of the same subject (Durrleman et al., 2009; Avants et al., 2010; Khan et al., 2010). When Lorenzi et al. (2010) analyzed ADNI data from 8 MCI subjects at 4 time-points, they noted extremely erratic trajectories for brain change (Fig. 3 of that paper), which they smoothed by fitting a velocity field through all the images. Use of a full time-series for hundreds of subjects is computationally difficult and has not been attempted on datasets the size of ADNI; it also requires re-processing of all time-points when a new scan comes in from one subject.
In addition to offering high power to assess factors influencing brain change, TBM provides 3D anatomical maps showing the region and rate of brain changes, which are not necessarily provided by other numeric summary methods. As noted by Scahill et al. (2002) in their early work on AD with fluid registration, having maps of changes is advisable for treatment trials, in case treatments show region-specific effects, or beneficial effects in regions not surveyed or anticipated when focusing on a volume measure for a pre-selected region. Therefore, it seems reasonable to use TBM for longitudinal estimation of atrophy, so long as possible confounds and sources of error are recognized when interpreting the estimated changes.
We thank Wes Thompson and Dominic Holland for noticing surprising aspects of our prior data that we address here. Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Abbott, AstraZeneca AB, Bayer Schering Pharma AG, Bristol-Myers Squibb, Eisai Global Clinical Development, Elan Corporation, Genentech, GE Healthcare, GlaxoSmithKline, Innogenetics, Johnson and Johnson, Eli Lilly and Co., Medpace, Inc., Merck and Co., Inc., Novartis AG, Pfizer Inc, F. Hoffman-La Roche, Schering-Plough, Synarc, Inc., and Wyeth, as well as non-profit partners the Alzheimer's Association and Alzheimer's Drug Discovery Foundation, with participation from the U.S. Food and Drug Administration. Private sector contributions to ADNI are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory of Neuro Imaging at the University of California, Los Angeles. This research was also supported by NIH grants P30 AG010129, K01 AG030514, and the Dana Foundation. Algorithm development and image analysis for this study was funded by grants to P.T. from the NIBIB (R01 EB007813, R01 EB008281, R01 EB008432), NICHD (R01 HD050735), and NIA (R01 AG020098).
This is a reply to the commentary: Thompson WK, Holland D; Alzheimer's Disease Neuroimaging Initiative. Bias in tensor based morphometry Stat-ROI measures may result in unrealistic power estimates. Neuroimage. 2011 July 1; 57(1): 1-4. PMID: 21349340.
1We note for clarity that the corresponding author of this paper is Paul Thompson (UCLA School of Medicine), and we are responding to a letter by Wes Thompson (no relation) and Dominic Holland of UC San Diego.
Author Contributions: Author contributions were as follows: XH, BG, CB, PR, AL, AK, IY, AT, and PT performed the image analyses, algorithm developments and evaluations; CJ, NS, GE, KC, ER, and MW contributed substantially to the image and data acquisition, study design, quality control, calibration and pre-processing, databasing and image analysis. We thank Anders Dale for his contributions to the image pre-processing and the ADNI project. We thank Jason Stein, Neda Jahanshad, and Sarah Madsen for their comments on this manuscript.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.