This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Investigators are actively testing interventions intended to increase lifespan and wish to know whether these interventions increase maximum lifespan. Because one cannot be assured of observing population maximum lifespans in finite samples, in previous work we constructed and validated several tests for differences in the upper parts of lifespan distributions between a treatment group and a control group; these tests ask whether the probability of an observation exceeding some threshold, chosen to define 'old' age or the tail of the survival distribution, is equal in the two groups. A limitation of these tests, however, is that they do not consider how far above the threshold any particular observation lies.
In this article we propose new methods that improve upon our previous tests by considering not only whether an observation is above some threshold, but also the magnitude by which it exceeds the threshold.
Simulations show that the new methods control type I error rates quite well and that their power is usually higher than that of the tests we previously proposed. In illustrative analyses of two real rodent datasets, with the threshold set to 110 (100) weeks for the first (second) dataset, the new methods detected differences in 'maximum lifespan' between groups at nominal alpha levels of 0.01 (0.05) and provided more significant results than competitor tests.
The new methods not only control type I error rates well but also improve power compared with the tests we previously proposed.
Investigators are actively testing interventions intended to increase lifespan. Caloric restriction (CR) is the intervention best established as able to increase lifespan in experimental models, and investigators are now seeking other interventions that may mimic the life-prolonging effects of CR without requiring a reduction in caloric intake. It is frequently said that CR increases not only average lifespan but also 'maximum' lifespan. Many researchers in the field of aging therefore wish to test whether other interventions increase maximum lifespan.
Recognizing this, and the fact that one cannot be assured of observing population maximum lifespans in finite samples, Wang et al. constructed and validated several tests (hereafter, the 'Wang-Allison tests') of differences in the upper parts of lifespan distributions by building on the work of Redden et al. in the area of quantile regression. Wang et al. also showed that a commonly used test for differences in maximum lifespan, comparing the means of the top p% (e.g., top 10%) of each of two samples (e.g., a treatment and a control sample), is not valid in that it has an excessive type I error rate. Nevertheless, there is appeal to using the full continuity of information in the upper tails of the sample distribution, and colleagues have recently suggested to us that a limitation of the Wang-Allison tests is that they treat individual lifespans only as being above or below some threshold defining 'old' age or the tail of the survival distribution. That is, the Wang-Allison tests do not consider how far above the threshold any particular observation lies, only whether it is above the threshold. We acknowledge this limitation and, in response, herein develop new tests that utilize the continuity of information among observations exceeding the threshold of interest, are more powerful than competing tests (including the Wang-Allison tests) in most cases, and remain valid under the null hypothesis of no effect on 'maximum' lifespan.
Consider an experiment with two groups, treatment and control. The extension to more than two groups is straightforward (see discussion section). Let X be an indicator variable taking the value 1 for observations in the treatment group and 0 for observations in the control group. Let Y denote survival time. Let τ denote some threshold chosen by the investigator to denote an extreme portion of the distribution. In survival studies, τ can be chosen in advance to correspond to an age considered 'old' (e.g., 30 months in mice) or set to some high sample percentile (e.g., the 90th). Critically, τ must be set to the same value for both groups. That is, if τ is defined by an upper sample quantile, it should be the upper sample quantile of the two groups combined, not of each group separately.
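As a concrete sketch, a data-based τ at the 90th percentile of the pooled sample might be computed as follows (a hypothetical helper of ours using the simple nearest-rank quantile; other quantile definitions would serve equally well):

```python
import math

def combined_percentile(treat, ctrl, q=0.9):
    """tau as the q-th sample quantile of the two groups pooled
    together (never computed per group), via the nearest-rank rule."""
    pooled = sorted(treat + ctrl)
    k = max(0, math.ceil(q * len(pooled)) - 1)  # 0-based rank
    return pooled[k]
```

The key point the function encodes is that `treat` and `ctrl` are pooled before the quantile is taken.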
Although not described in exactly these terms in the paper by Wang et al., the Wang-Allison tests essentially create a new variable, W, where for the ith subject, Wi = 0 if Yi ≤ τ and Wi = 1 if Yi > τ, and subsequently test whether W is associated with X using an appropriate test statistic.
Thus, the Wang-Allison tests test the following null hypothesis:

H0,A: P(Y > τ | X = 1) = P(Y > τ | X = 0)
A problem with the Wang-Allison tests is that, hypothetically, P(Y > τ | X = 1) may equal P(Y > τ | X = 0) and yet the average magnitude by which lifespans exceed τ when X = 1 may be radically different from when X = 0. This is exemplified in the hypothetical frequency distributions depicted in Figure 1. Note that these hypothetical distributions are not intended to be realistic, but only to clarify the point.
Let X1 and X0 denote the numbers of observations with Yi > τ in the treatment group and control group, respectively. The Wang-Allison tests use test procedures for two independent binomial proportions, and these procedures require that X1 and X0 be independent. If the threshold is set in advance according to prior knowledge, X1 and X0 satisfy this requirement. But if τ is set to, say, the 90th percentile of the combined sample, X1 and X0 may not be independent, which creates a theoretical problem. On an empirical level, however, our simulations show that at the sample sizes we considered this is not an apparent problem: the Wang-Allison tests have high power, control the type I error rate quite well, and remain practical for lifespan studies. When X1 and X0 are not independent, simulation studies (including estimation of power and type I error) are an effective way to evaluate methods, such as the Wang-Allison tests, that rely on procedures for two independent binomial proportions.
An alternative to testing H0,A is to test the following conceptually related but mathematically distinct null hypothesis:

H0,B: μ(Y | Y > τ ∩ X = 1) = μ(Y | Y > τ ∩ X = 0)
where μ(•) denotes the population mean (or expectation) of (•). Though appealing, a problem with testing H0,B is that when P(Y > τ | X = 1) >> P(Y > τ | X = 0) or P(Y > τ | X = 1) << P(Y > τ | X = 0), for any finite sample with equal initial assignment to the two groups, E[n0] << E[n1] or E[n0] >> E[n1], where E[n0] denotes the expected number of observations in the control group for which Y > τ, and E[n1] the corresponding expected number in the treatment group. This imbalance between E[n0] and E[n1] greatly reduces the power to reject H0,B. In fact, in the extreme, when either P(Y > τ | X = 1) = 0 or P(Y > τ | X = 0) = 0, there is zero power to reject H0,B (indeed, it is more appropriate to say that H0,B is undefined in such cases). Such a situation is exemplified in the hypothetical frequency distributions depicted in Figure 2. Again, these hypothetical distributions are not intended to be realistic, but only to clarify the point.
Thus, one can conceive of situations in which the power to reject H0,A will be zero and yet the upper tails of the distributions are clearly different. Similarly, one can conceive of situations in which the power to reject H0,B will be zero and yet again the upper tails of the distributions are clearly different. Hence, we propose a single-step union-intersection test of the following compound null hypothesis:

H0,C: {P(Y > τ | X = 1) = P(Y > τ | X = 0)} ∩ {μ(Y | Y > τ ∩ X = 1) = μ(Y | Y > τ ∩ X = 0)}
We construct the test of H0,C with the following simple procedure. Define a new variable Z such that Zi = I(Yi > τ)Yi, where I(•) denotes the indicator function taking the value one if (•) is true and zero otherwise. One can then simply conduct an appropriate test (several candidates are considered below) of whether the population mean of Z differs between the treatment and control groups. This approach (hereafter, the new tests) has several desirable properties. First and foremost, when an appropriate test statistic is used, the approach is valid. That is, unlike the conditional t-tests (CTTs) commonly used and shown to be invalid by Wang et al., when H0,C is true it will be rejected only 100·α% of the time at the nominal α level, even if f(Y | Y ≤ τ ∩ X = 1) ≠ f(Y | Y ≤ τ ∩ X = 0), where f(•) denotes the probability density function of (•).
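The transformation from lifespans Y to the variable Z is a one-liner; the sketch below uses our own (hypothetical) variable names:

```python
def make_z(lifespans, tau):
    """Z_i = I(Y_i > tau) * Y_i: zero at or below the threshold,
    the lifespan itself above it."""
    return [y if y > tau else 0.0 for y in lifespans]
```

Any valid two-sample test of location applied to Z then serves as a test of H0,C.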
Note that the expectation (or population mean) of Z is μ(Z) = P(Y > τ) μ(Y | Y > τ). Therefore the new test for H0,C is really testing whether

P(Y > τ | X = 1) μ(Y | Y > τ ∩ X = 1) = P(Y > τ | X = 0) μ(Y | Y > τ ∩ X = 0)
while the method for H0,B tests whether μ(Y | Y > τ ∩ X = 1) = μ(Y | Y > τ ∩ X = 0) and the method for H0,A tests whether P(Y > τ | X = 1) = P(Y > τ | X = 0). The difference in μ(Z) between the two groups thus consists of two components: the difference between the probabilities P(Y > τ | X = 1) and P(Y > τ | X = 0), and the difference between the conditional expectations μ(Y | Y > τ ∩ X = 1) and μ(Y | Y > τ ∩ X = 0). The test for H0,A focuses on the first component and the test for H0,B on the second, while the test for H0,C is sensitive to both.
We also note that Dominici and Zeger  studied similar mean difference components for two groups (cases and controls) by estimating the mean difference Δ(v) for the two groups conditional on a vector of covariates v for zero-inflated data through smooth quantile ratio estimation with regression,
where Y is a nonnegative random variable denoting health expenditures. While Dominici and Zeger estimate the mean difference of nonnegative random variables (Y) between two groups, our methods test the mean difference of random variables (Y) that are greater than the threshold τ.
We evaluate the tests via computer simulation. For each scenario simulated, we evaluate the tests at the 2-tailed .05 α level and at the 2-tailed .01 α level using 5,000 simulated datasets per scenario (except for permutation tests, where we use 1,000 datasets per scenario and 1,000 random permutations drawn by Monte Carlo sampling for each dataset). In simulation 1, we first evaluate performance under the null hypothesis H0,C (i.e., both H0,A and H0,B are true) in a setting where f(Y | Y ≤ τ ∩ X = 1) is radically different from f(Y | Y ≤ τ ∩ X = 0). After showing that the tests remain valid even in these extreme circumstances, we compare their power in several scenarios (simulations 2–4) described below. For each scenario, we assumed two groups with an equal number of subjects per group. We ran scenarios with 50, 80, or 100 subjects in each of the two groups, realistic sample sizes for animal model longevity research.
We simulated data using a concatenation of Weibull distributions to flexibly emulate the data observed in a real study of obese animals (control; X = 0) versus animals that were obese and then lost weight via CR (treatment; X = 1). Specifically, in simulations 1–4, we simulated Y from the following distribution:
where j = 0 or 1, lifespan (Y) is measured in weeks, aj,L and bj,L are the parameters of a Weibull distribution for the lower 90% of the distribution, and aj,U and bj,U are the parameters of a Weibull distribution for the upper 10%. rj is a proportion parameter, for example rj = 0.9. The specific parameter values used are provided in Figure 3.
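One plausible reading of this concatenation scheme can be sketched with Python's standard library: with probability rj draw from the lower Weibull restricted to its bottom rj quantile band, otherwise from the upper Weibull restricted to its top 1 − rj band, via inverse-CDF sampling. The parameter values below are placeholders, not those of Figure 3:

```python
import math
import random

def weibull_band(a, b, p_lo, p_hi, rng):
    """Inverse-CDF draw from Weibull(scale=a, shape=b) restricted to the
    quantile band [p_lo, p_hi]; F^{-1}(p) = a * (-ln(1 - p))^(1/b)."""
    p = rng.uniform(p_lo, p_hi)
    return a * (-math.log(1.0 - p)) ** (1.0 / b)

def simulate_lifespan(a_L, b_L, a_U, b_U, r, rng):
    """Concatenated Weibull lifespan: lower piece with probability r,
    upper piece with probability 1 - r."""
    if rng.random() < r:
        return weibull_band(a_L, b_L, 0.0, r, rng)
    return weibull_band(a_U, b_U, r, 1.0, rng)

# Example: one hypothetical group, r = 0.9, lifespans in weeks
rng = random.Random(0)
ys = [simulate_lifespan(100, 5, 120, 8, 0.9, rng) for _ in range(1000)]
```

Changing the lower-piece parameters while holding the upper piece fixed reproduces the kind of scenario where H0,C is true yet the sub-threshold densities differ radically.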
Each of the tests listed below was implemented in two manners, first with τ set in advance to a fixed lifespan value (130 weeks), and second with τ set at the sample 90th percentile of the two groups combined. In real-life situations, one usually knows the threshold of interest a priori. We recognize, however, that we will not have such knowledge in all cases. It is for this reason that, when analyzing the simulated data, we also consider a threshold at the 90th percentile of the data, allowing for an ad hoc, data-based determination of the threshold.
For comparative purposes, the first category of tests we evaluated comprised the tests denoted QT3 and QT4 in Wang et al., which are, respectively, Boschloo's test and an exact unconditional test based on the observed difference divided by its estimated standard error under the null hypothesis (score statistic); both are described in more detail by Mehrotra et al. These were the two tests that Wang et al. found performed best as tests of H0,A.
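Boschloo's test and the exact unconditional score test require specialized software. As a simpler, widely available stand-in for a test of H0,A on the 2×2 table of counts above/below τ, one might sketch Fisher's (conditional) exact test; note this is not QT3 or QT4 themselves and is generally more conservative:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]
    (e.g., counts above/below tau in treatment and control rows): sums the
    hypergeometric probabilities of all tables sharing the observed margins
    that are no more likely than the observed table."""
    n = a + b + c + d
    r1, c1 = a + b, a + c          # first row and first column totals
    def prob(x):                   # P(first cell = x) given the margins
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-12))
```

The small tolerance factor guards against floating-point ties when deciding which tables are "no more likely" than the observed one.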
In testing H0,B, subjects were included in the analysis only when their lifespans exceeded τ. Distributions of survival times (lifespans) are rarely Gaussian and, even if they were nearly Gaussian after, for example, log transformation, the distribution of just the tail portion (i.e., f(Y | Y > τ)) would not be. Hence, in constructing tests we relied on nonparametric statistical methods. Specifically, we used the (exact) Wilcoxon-Mann-Whitney test [11,12] and a permutation test (with the t statistic) as described by Good to test for differences in lifespan among subjects whose lifespans exceeded τ.
In testing H0,C, all subjects were analyzed, but the variable analyzed was Z as defined above. Because the distribution of Z cannot be normal (it has a point mass at zero), we again used the Wilcoxon-Mann-Whitney test and a permutation test to test for differences in Z.
For a dataset with n1 (n2) subjects in the treatment (control) group, the permutation test can be performed as follows. First pool all n1 + n2 subjects, then generate 1,000 replicated datasets: for each replicate, randomly sample n1 subjects from the n1 + n2 and assign them to the treatment group, assigning the remaining n2 subjects to the control group. We compute the t statistic on the observed dataset and on each of the 1,000 replicated datasets. Let T0 be the value for the observed dataset; the p-value for the permutation test is then the proportion of replicated datasets with absolute t values greater than or equal to the absolute value of T0.
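The procedure above can be sketched in a few lines of standard-library Python. This is our own illustrative implementation using a Welch-type t statistic; the paper does not specify the exact t variant, and the sketch assumes each group has at least two distinct values:

```python
import random
from statistics import mean, stdev

def t_stat(x, y):
    """Welch-type two-sample t statistic (requires nonzero variance
    in at least one group)."""
    return (mean(x) - mean(y)) / (
        (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5)

def permutation_p(treat, ctrl, n_perm=1000, seed=0):
    """p-value = proportion of random relabelings whose |t| >= |T0|."""
    rng = random.Random(seed)
    pooled = list(treat) + list(ctrl)
    n1 = len(treat)
    t0 = abs(t_stat(treat, ctrl))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment of group labels
        if abs(t_stat(pooled[:n1], pooled[n1:])) >= t0:
            hits += 1
    return hits / n_perm
```

Applied to the Z-values, this is the permutation version of the new test of H0,C; applied to the lifespans above τ only, it is the permutation test of H0,B.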
Results are displayed in Tables 1 to 5. As can be seen, the new methods for testing H0,C control type I error rates quite well. The power of the new methods is always higher than or very close to that of the methods for testing H0,A (the Wang-Allison tests) and is higher than that of the methods for testing H0,B (Wilcoxon-Mann-Whitney and permutation tests on observations above the threshold τ) in some of the simulations.
Table 1 shows the type I error rates of the tests (in simulation 1) when the null hypothesis H0,C is true (i.e., both H0,A and H0,B are true) and yet f(Y | Y ≤ τ ∩ X = 1) is radically different from f(Y | Y ≤ τ ∩ X = 0). The type I error rates of the new methods are comparable to those of the methods for testing H0,A and H0,B. It is noteworthy that there is a slight but fairly consistent excess of type I errors when the sample 90th percentile is used rather than a fixed cutoff. This is because the sample 90th percentile is a random variable, and when it falls below its population value the null hypothesis is no longer strictly true in our simulations. That is, the tests remain valid tests of differences in distributions above the actual value used, but should not be strictly interpreted as tests of differences above the 90th (or any other) percentile. In practice, this distinction is probably trivial.
In simulation 2 (see Table 2), where H0,A is true, H0,B is false, and f(Y | Y ≤ τ ∩ X = 1) is radically different from f(Y | Y ≤ τ ∩ X = 0), the new methods for testing H0,C and the methods for testing H0,A have lower power than the corresponding methods for testing H0,B; however, the new methods for testing H0,C do slightly improve power relative to the methods for testing H0,A.
Table 3 shows the power of the tests in simulation 3, where H0,B is true, H0,A is false, and f(Y | Y ≤ τ ∩ X = 1) is radically different from f(Y | Y ≤ τ ∩ X = 0). The new methods for testing H0,C and the methods for testing H0,A have very similar power, which is much higher than that of the corresponding methods for testing H0,B.
From simulation 4 (see Table 4), where both H0,A and H0,B are false and f(Y | Y ≤ τ ∩ X = 1) and f(Y | Y ≤ τ ∩ X = 0) are identical, we find that the new methods for testing H0,C always have higher power than the corresponding methods for testing H0,A. When τ is set to the 90th percentile of the sample, the new methods also have higher power than the corresponding methods for testing H0,B.
Finally, we conducted a set of simulations under what we perceived to be the most realistic situation. Here both H0,A and H0,B are false, f(Y | Y ≤ τ ∩ X = 1) is quite different from f(Y | Y ≤ τ ∩ X = 0), and the distributions have no discontinuities; in other words, there is simply a reduction in the hazard rate when X = 1. Table 5 presents the power of the tests in simulation 5, where f(Y | X = 1) = 1.2 f(Y | X = 0). In this simulation, the methods for testing H0,B have almost no power because the control group almost always has no or few observations above the threshold τ. The new methods for testing H0,C, when using a permutation test, have power higher than or equal to that of the methods for testing H0,A.
To illustrate the methods, we applied them to two real datasets. In both datasets, prior research had shown differences in overall survival, and we tested for differences in 'maximum lifespan' herein. The first was a subset of data reported by Vasselli et al. The subset consists of two groups of Sprague-Dawley rats: those kept on a high-fat diet ad libitum throughout life, becoming obese (EO-HF), and those kept on a high-fat diet ad libitum until early-middle adulthood, becoming obese, and subsequently reduced to normal weight via caloric restriction while remaining on the same high-fat diet (WL-HF). Each group had 49 rats (see Figure 4 for histograms of the data). The second dataset was from a study comparing the lifespan of Agouti-related protein-deficient (AgRP(-/-)) mice to wildtype (+/+) mice, as reported by Redmann & Argyropoulos. This dataset consists of 16 mice with genotype '+/+' and 21 mice with genotype '-/-' (see Figure 5 for histograms). From Figure 4, we can see that the upper tails of the histograms of the two groups differ; similar differences are visible in Figure 5.
Results (p values of the tests) are shown in Table 6. As can be seen, when τ is set to 110 (100) for the first (second) dataset, both the methods for testing H0,A and the new methods for testing H0,C detect the differences in 'maximum lifespan' between groups at nominal alpha levels of 0.01 (0.05). The methods for testing H0,B, by contrast, cannot detect the difference at any of the values of τ considered. The following may explain these results. For the first dataset, when τ = 110, the proportions of observations greater than τ in the EO-HF and WL-HF groups (i.e., the estimates of P(Y > τ | X = 0) and P(Y > τ | X = 1)) are 0.061 and 0.306, respectively. These proportions are significantly different and, not surprisingly, the methods for testing H0,A detect the difference in 'maximum lifespan' between the two groups. Second, the sample means of the observations greater than τ in the two groups (i.e., the estimates of μ(Y | Y > τ ∩ X = 1) and μ(Y | Y > τ ∩ X = 0)) are 117.8 and 122.9, respectively; there is not much difference between these sample means. However, the sample means of the Z-values in the two groups (i.e., the estimates of μ(Z | X = 0) and μ(Z | X = 1)), where Zi = I(Yi > τ)Yi, are 7.210 and 37.633, respectively, and differ greatly. This may explain why the methods for testing H0,B cannot reject the null while the new methods for testing H0,C detect the difference in 'maximum lifespan' between the two groups. Similarly, for the second dataset, when τ = 100, the proportions of observations greater than τ in the '+/+' and '-/-' groups are 0.188 and 0.571, respectively; the sample means of the observations greater than τ in the two groups are 109.3 and 110.9, respectively; and the sample means of the Z-values in the two groups are 20.5 and 63.4, respectively.
From Table 6 we can also see that in almost all situations the p-values of the new methods for testing H0,C are somewhat smaller than those of the methods for testing H0,A. This is consistent with the simulations showing greater power of the new methods.
Herein, we proposed new methods for testing the difference in 'maximum' lifespan between groups (e.g., treatment and control). By defining a new variable Z such that Zi = I(Yi > τ)Yi for each observation and then applying the Wilcoxon-Mann-Whitney test or, better still, a permutation test to Z, the new methods achieve far better performance, considered across a broad range of circumstances, in terms of both type I error rates and power. One could also choose a bootstrap test in place of these two; however, additional simulations would likely be warranted to evaluate its performance relative to the permutation test we have evaluated herein.
It is straightforward to extend the new methods to more than two groups: for example, one can replace the Wilcoxon-Mann-Whitney test with the Kruskal-Wallis test, or replace the two-group permutation test with a permutation test for multiple groups.
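A multi-group permutation analogue might look like the following sketch. This is our own construction, using the between-group sum of squares of the Z-values as the statistic; a Kruskal-Wallis version would substitute ranks:

```python
import random
from statistics import mean

def between_ss(groups):
    """Between-group sum of squares of group means about the grand mean."""
    allv = [v for g in groups for v in g]
    gm = mean(allv)
    return sum(len(g) * (mean(g) - gm) ** 2 for g in groups)

def perm_multi_p(groups, n_perm=1000, seed=0):
    """Permutation p-value for k groups: proportion of random relabelings
    whose between-group sum of squares meets or exceeds the observed one."""
    rng = random.Random(seed)
    sizes = [len(g) for g in groups]
    pooled = [v for g in groups for v in g]
    s0 = between_ss(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        parts, i = [], 0
        for n in sizes:          # re-cut the pooled sample into group sizes
            parts.append(pooled[i:i + n])
            i += n
        if between_ss(parts) >= s0:
            hits += 1
    return hits / n_perm
```

With k = 2 this reduces to a permutation test equivalent in spirit to the two-group version described above.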
We have shown via simulation studies that the new methods are effective when the sample size (N) of each group is 50, 100, or 200. We expect these methods will also be relatively more powerful than existing competitors for much larger sample sizes, such as N = 500 or even N = 5000. There are some mouse datasets (like those of the National Institute on Aging's Interventions Testing Program) where N > 500, and worm and fly datasets in which N may sometimes even exceed 5000. We expect the new methods to be equally applicable to the analysis of such data.
Finally, we note that the tests proposed here are described for the context of testing for differences in lifespan. However, there is nothing intrinsic to them that limits their use to survival data. They could be applied to any situation in which one wanted to test for group differences in the tails of distributions.
The authors declare they have no competing interests.
DBA participated in all parts of the work of the study (including the study design, methodology development, simulations, data acquisition, and manuscript drafting). He wrote major sections of the original manuscript and revised the final version of the manuscript. DTR provided consulting on the statistical issues in the study and manuscript editing. SZ provided assistance in programming for the simulation studies. WW provided consulting on simulation and prepared the figures. GG performed all simulation studies and real data analyses, drafted the Results, Illustration with real data, and Discussion sections of the manuscript, and participated in revision of the manuscript.
We thank Richard Miller, David Harrison, and Simon Klebanov for thought provoking dialogue that inspired this paper and George Argyropoulos for graciously providing data. This research was supported in part by NIH grants P30DK056336, R01DK067487, and P01AG11915 and by grant GM073766 from the National Institute of General Medical Sciences.