
BMC Med Res Methodol. 2008; 8: 49.

Published online 2008 July 25. doi: 10.1186/1471-2288-8-49

PMCID: PMC2529340

Guimin Gao,^{1} Wen Wan,^{4} Sijian Zhang,^{5} David T Redden,^{1,2,3} and David B Allison^{1,2,3}

Guimin Gao: guiming@uab.edu; Wen Wan: wen.wan@ccc.uab.edu; Sijian Zhang: rzhang@uab.edu; David T Redden: samndave@uab.edu; David B Allison: Dallison@UAB.edu

Received 2008 February 12; Accepted 2008 July 25.

Copyright © 2008 Gao et al; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Investigators are actively testing interventions intended to increase lifespan and wish to test whether the interventions increase maximum lifespan. Because one cannot be assured of observing population maximum lifespans in finite samples, in previous work we constructed and validated several tests of differences in the upper parts of lifespan distributions between a treatment group and a control group; these tests ask whether the probabilities that observations exceed some threshold defining 'old', or being in the tail of the survival distribution, are equal in the two groups. However, a limitation of these tests is that they do not consider *how much* above the threshold any particular observation is.

In this article we propose new methods which improve upon our previous tests by considering not only whether an observation is above some threshold, but also the magnitudes by which observations exceed the threshold.

Simulations show that the new methods control type I error rates quite well and that their power is usually higher than that of the tests we previously proposed. In illustrative analyses of two real rodent datasets, with the threshold set to 110 (100) weeks for the first (second) dataset, the new methods detected differences in 'maximum lifespan' between groups at nominal alpha levels of 0.01 (0.05) and provided more significant results than competitor tests.

The new methods not only control type I error rates well but also improve power compared with the tests we previously proposed.

Investigators are actively testing interventions intended to increase lifespan [1]. Caloric restriction (CR) is the intervention most well established as able to increase lifespan in experimental models [2], and investigators are now seeking other interventions that may mimic the life-prolonging effects of CR without requiring a reduction in caloric intake [3]. It is frequently said that CR not only increases average lifespan, but also 'maximum' lifespan [4]. Many researchers in the field of aging therefore wish to test whether other interventions increase maximum lifespan.

Recognizing this and the fact that one cannot be assured of observing population maximum lifespans in finite samples, Wang et al. [5] constructed and validated several tests (hereafter, the *'Wang-Allison tests'*) of differences in the upper parts of lifespan distributions by building on the work of Redden et al. [6] in the area of quantile regression. Wang et al. also showed that a commonly used test for differences in maximum lifespan, comparing the means of the top *p*% (e.g., top 10%) of each of two samples (e.g., a treatment and a control sample), was not valid in that it had an excessive type I error rate. Nevertheless, there is appeal to using the full continuity of information in the upper tails of the sample distribution, and colleagues have recently suggested to us that a limitation of the Wang-Allison tests is that they only treat individual lifespans as being above or below some threshold defining 'old' or being in the tail of the survival distribution. That is, the Wang-Allison tests consider only *whether* an observation is above the threshold, not *how much* above it is. We acknowledge this limitation. In response, we herein develop new tests that utilize the continuity of information among observations exceeding the threshold of interest, are more powerful than competing tests (including the Wang-Allison tests) in most cases, and remain valid under the null hypothesis of no effect on 'maximum' lifespan.

Consider an experiment with two groups, *treatment *and *control*. The extension to more than two groups is straightforward (see discussion section). Let *X *be an indicator variable taking the value 1 for observations in the treatment group and 0 for observations in the control group. Let *Y *denote survival time. Let *τ *denote some threshold chosen by the investigator to denote an extreme portion of the distribution. In survival studies, *τ *can be chosen in advance to correspond to an age considered 'old' (e.g., 30 months in mice) or set to some high sample percentile (e.g., the 90th). Critically important, *τ *must be set to the same value for the two groups. That is, if *τ *is to be defined by an upper sample quantile, it should be the upper sample quantile of both of the two groups combined, not of each group separately.
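The rule above — when *τ* is data-based, take the upper quantile of the pooled sample, never of each group separately — can be sketched as follows. This is an illustrative snippet, not the authors' code; the variable names and lifespans are invented.

```python
import numpy as np

def combined_threshold(y_control, y_treatment, pct=90):
    """Return the pct-th percentile of the lifespans of BOTH groups pooled,
    as the text requires (not computed per group)."""
    pooled = np.concatenate([np.asarray(y_control, float),
                             np.asarray(y_treatment, float)])
    return float(np.percentile(pooled, pct))

# Illustrative lifespans in weeks (hypothetical data):
tau = combined_threshold([88, 95, 101, 104, 110], [90, 99, 107, 115, 126])
```

The same `tau` is then applied to both groups in all of the tests that follow.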

Although not described in exactly these terms in the paper by Wang et al. [5], the Wang-Allison tests essentially create a new variable, *W*, where for the i^{th} subject, *W*_{i} = 0 if *Y*_{i} ≤ *τ* and *W*_{i} = 1 if *Y*_{i} > *τ*, and subsequently test whether *W* is associated with *X* using an appropriate test statistic.
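A minimal sketch of this dichotomize-and-compare idea follows. The paper's preferred test statistics are Boschloo's test and an exact unconditional score test (see the Methods below); Fisher's exact test is used here only as a readily available stand-in (recent SciPy versions also provide `scipy.stats.boschloo_exact`). Function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import fisher_exact

def wang_allison_pvalue(y_control, y_treatment, tau):
    """Test H0A: P(Y > tau | X=1) = P(Y > tau | X=0) via a 2x2 table
    of counts above/below the threshold tau."""
    c_above = int(np.sum(np.asarray(y_control) > tau))
    t_above = int(np.sum(np.asarray(y_treatment) > tau))
    table = [[t_above, len(y_treatment) - t_above],
             [c_above, len(y_control) - c_above]]
    _, p = fisher_exact(table)   # stand-in for Boschloo / score test
    return p
```

Note that this test sees only the counts above *τ*, which is exactly the limitation the new methods address.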

Thus, the Wang-Allison tests test the following null hypothesis:

*H*_{0,A}: *P*(*Y* > *τ* | *X* = 1) = *P*(*Y* > *τ* | *X* = 0).

A problem with the Wang-Allison tests is that, hypothetically, *P*(*Y* > *τ* | *X* = 1) may equal *P*(*Y* > *τ* | *X* = 0) and yet the average magnitude by which lifespans exceed *τ* when X = 1 may be radically different from when X = 0. This is exemplified in the hypothetical frequency distributions depicted in Figure 1. Note that these hypothetical distributions are not intended to be realistic, but only to clarify the point.

**Figure 1.** The left graph is the density for the control group (X = 0), 0.9*Weibull(5.73, 106.6)*I(X ≤ 130) + 0.1*Weibull(5.40, 100.06)*I(X > 130), and the right graph is the density for the treatment group (X = 1), 0.9*Weibull(5.73, 106.6)*I(X ≤ ...

Let *X*^{1} and *X*^{0} denote the numbers of observations with *Y*_{i} > *τ* in the treatment group and control group, respectively. The Wang-Allison tests use test procedures for two independent binomial proportions [7], and these procedures require that *X*^{1} and *X*^{0} be independent. If the threshold is set in advance according to prior knowledge, *X*^{1} and *X*^{0} satisfy this requirement. But if *τ* is set to the sample 90th percentile, *X*^{1} and *X*^{0} may not be independent, which creates a theoretical problem. On an empirical level, however, our simulations show that for the sample sizes we considered this is not an apparent problem: the Wang-Allison tests have high power, control type I error quite well, and are practical for lifespan studies. When *X*^{1} and *X*^{0} are not independent, simulation studies (including estimation of power and type I error) are an effective way to evaluate methods, such as the Wang-Allison tests, that use test procedures for two independent binomial proportions.

An alternative to testing *H*_{0,A} is to test the following conceptually related but mathematically distinct null hypothesis:

*H*_{0,B}: *μ*(*Y* | *Y* > *τ* ∩ *X* = 1) = *μ*(*Y* | *Y* > *τ* ∩ *X* = 0),

where *μ*(•) denotes the population mean (or expectation) of (•). Though appealing, a problem with testing *H*_{0,B} is that when *P*(*Y* > *τ* | *X* = 1) >> *P*(*Y* > *τ* | *X* = 0) or *P*(*Y* > *τ* | *X* = 1) << *P*(*Y* > *τ* | *X* = 0), for any finite sample with equal initial assignment to the two groups, *E*[*n*_{0}] << *E*[*n*_{1}] or *E*[*n*_{0}] >> *E*[*n*_{1}], where *E*[*n*_{0}] denotes the expected number of observations in the control group for which *Y* > *τ*, and *E*[*n*_{1}] denotes the expected number of observations in the treatment group for which *Y* > *τ*. This imbalance between *E*[*n*_{0}] and *E*[*n*_{1}] will greatly reduce the power to reject *H*_{0,B}. In fact, in the extreme, when either *P*(*Y* > *τ* | *X* = 1) or *P*(*Y* > *τ* | *X* = 0) equals zero, there will be zero power to reject *H*_{0,B} (indeed, it is more appropriate to say that *H*_{0,B} is undefined in such cases). Such a situation is exemplified in the hypothetical frequency distributions depicted in Figure 2. Again, these hypothetical distributions are not intended to be realistic, but only to clarify the point.

**Figure 2.** The left graph is the density for the control group (X = 0), 0.9*Weibull(5.07, 93.52)*I(X ≤ 130) + 0.1*Weibull(5.40, 100.06)*I(X > 130), and the right graph is the density for the treatment group (X = 1), 0.6*Weibull(5.07, 93.52)*I(X ≤ 130) + 0.4*Weibull(5.40, ...

Thus, one can conceive of situations in which the power to reject *H*_{0,A} will be zero and yet the upper tails of the distributions are clearly different. Similarly, one can conceive of situations in which the power to reject *H*_{0,B} will be zero and yet again the upper tails of the distributions are clearly different. Hence, we propose a single-step union-intersection test [8] of the following compound null hypothesis:

*H*_{0,C}: [*P*(*Y* > *τ* | *X* = 1) = *P*(*Y* > *τ* | *X* = 0)] ∩ [*μ*(*Y* | *Y* > *τ* ∩ *X* = 1) = *μ*(*Y* | *Y* > *τ* ∩ *X* = 0)].

We construct the test of *H*_{0,C} with the following simple procedure. Define a new variable *Z* such that *Z*_{i} = *I*(*Y*_{i} > *τ*)*Y*_{i}, where *I*(•) denotes the indicator function taking the value one if (•) is true and zero otherwise. One can then simply conduct an appropriate test (several candidates are considered below) of whether the population mean of Z differs between the treatment and control groups. This approach (hereafter, the *new tests*) has several desirable properties. First and foremost, when an appropriate test statistic is used, the approach is valid. That is, unlike the conditional t-tests (CTTs) commonly used and shown to be invalid by Wang et al. [5], when *H*_{0,C} is true, it will be rejected only 100*α*% of the time at the nominal *α* level, even if *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 1) ≠ *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 0), where *f*(•) denotes the probability density function of (•).
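The procedure above can be sketched in a few lines: form *Z* for every subject and compare *Z* between groups, here with the two-sided Wilcoxon-Mann-Whitney test (one of the candidate statistics considered below). The data values in the usage line are illustrative only.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def h0c_test(y_control, y_treatment, tau):
    """Wilcoxon-Mann-Whitney test on Z = I(Y > tau) * Y (all subjects kept;
    subjects at or below tau contribute Z = 0)."""
    y0 = np.asarray(y_control, float)
    y1 = np.asarray(y_treatment, float)
    z0 = np.where(y0 > tau, y0, 0.0)
    z1 = np.where(y1 > tau, y1, 0.0)
    return mannwhitneyu(z1, z0, alternative="two-sided")

# Illustrative call with hypothetical lifespans (weeks):
res = h0c_test(list(range(80, 100)), list(range(105, 125)), tau=110)
```

Because subjects below *τ* are retained as zeros rather than discarded, the test uses both the proportion above *τ* and the magnitudes above it.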

Note that the expectation (or population mean) of *Z* satisfies *μ*(*Z*) = *P*(*Y* > *τ*)·*μ*(*Y* | *Y* > *τ*). Therefore the new test for *H*_{0,C} is really testing whether

*P*(*Y* > *τ* | *X* = 1)·*μ*(*Y* | *Y* > *τ* ∩ *X* = 1) = *P*(*Y* > *τ* | *X* = 0)·*μ*(*Y* | *Y* > *τ* ∩ *X* = 0),

while the method for *H*_{0,B} tests whether *μ*(*Y* | *Y* > *τ* ∩ *X* = 1) = *μ*(*Y* | *Y* > *τ* ∩ *X* = 0) and the method for *H*_{0,A} tests whether *P*(*Y* > *τ* | *X* = 1) = *P*(*Y* > *τ* | *X* = 0). The difference in *μ*(*Z*) between the two groups thus comprises two components: the difference between the probabilities *P*(*Y* > *τ* | *X* = 1) and *P*(*Y* > *τ* | *X* = 0), and the difference between the expectations *μ*(*Y* | *Y* > *τ* ∩ *X* = 1) and *μ*(*Y* | *Y* > *τ* ∩ *X* = 0). The test for *H*_{0,A} focuses on the first component and the test for *H*_{0,B} focuses on the second, while the test for *H*_{0,C} is related to both.

We also note that Dominici and Zeger [9] studied similar mean-difference components for two groups (cases and controls) by estimating the mean difference Δ(*v*) between the two groups conditional on a vector of covariates *v*:

Δ(*v*) = *P*(*Y *> 0 | *X *= 1, *v*) *μ *(*Y *| *Y *> 0, *X *= 1, *v*) - *P*(*Y *> 0| *X *= 0, *v*) *μ *(*Y *| *Y *> 0, *X *= 0, *v*),

where *Y* is a nonnegative random variable denoting health expenditures. While Dominici and Zeger [9] estimate the mean difference of nonnegative random variables (*Y*) between two groups, our methods test the mean difference of random variables (*Y*) that exceed the threshold *τ*.

We evaluate the tests via computer simulation. For each scenario simulated, we evaluate the tests at the two-tailed .05 and .01 *α* levels using 5,000 simulated datasets per scenario (except for permutation tests, where we use 1,000 datasets per scenario and 1,000 random permutations by Monte Carlo sampling per dataset). In simulation 1, we first evaluate performance under the null hypothesis *H*_{0,C} (i.e., both *H*_{0,A} and *H*_{0,B} are true) when *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 1) is radically different from *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 0). After showing that the tests remain valid even in these extreme circumstances, we compare their power in several scenarios (simulations 2–4) described below. For each scenario, we assumed two groups with an equal number of subjects per group. We ran scenarios with 50, 80, or 100 subjects in each of the two groups, realistic sample sizes for animal model longevity research.

We simulated data using a concatenation of Weibull distributions to flexibly emulate the data observed in a real study [10] of obese animals (control; X = 0) versus animals that were obese and then lost weight via CR (treatment; X = 1). Specifically, in simulations 1–4, we simulated Y from the following distribution:

$$f(Y|X=j)={r}_{j}\left[\frac{{b}_{j,L}}{{a}_{j,L}}{\left(\frac{Y}{{a}_{j,L}}\right)}^{{b}_{j,L}-1}{e}^{-{\left(\frac{Y}{{a}_{j,L}}\right)}^{{b}_{j,L}}}\right]I(Y\le 130)+(1-{r}_{j})\left[\frac{{b}_{j,U}}{{a}_{j,U}}{\left(\frac{Y}{{a}_{j,U}}\right)}^{{b}_{j,U}-1}{e}^{-{\left(\frac{Y}{{a}_{j,U}}\right)}^{{b}_{j,U}}}\right]I(Y>130),$$

where j = 0 or 1, lifespan (Y) is measured in weeks, a_{j,L} and b_{j,L} are the parameters of a Weibull distribution for the lower 90% of the distribution, and a_{j,U} and b_{j,U} are the parameters of a Weibull distribution for the upper 10% of the distribution. *r*_{j} is a proportion parameter, for example *r*_{j} = 0.9. The specific parameter values used are provided in Figure 3.
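One way to draw lifespans from this concatenated-Weibull density is inverse-CDF sampling: with probability r_j sample the lower Weibull truncated to Y ≤ 130, otherwise the upper Weibull truncated to Y > 130. The sketch below is our illustration, not the authors' code, and it assumes the Weibull(b, a) values in the Figure 1 caption are (shape, scale) pairs with scale in weeks.

```python
import numpy as np

def weibull_cdf(y, a, b):
    # CDF of a Weibull with scale a and shape b
    return 1.0 - np.exp(-(y / a) ** b)

def weibull_icdf(u, a, b):
    # inverse CDF (quantile function) of the same Weibull
    return a * (-np.log(1.0 - u)) ** (1.0 / b)

def sample_concat_weibull(n, r, a_L, b_L, a_U, b_U, cut=130.0, rng=None):
    """Draw n lifespans: lower Weibull truncated to <= cut w.p. r,
    upper Weibull truncated to > cut w.p. 1 - r."""
    rng = np.random.default_rng(rng)
    lower = rng.random(n) < r
    u = rng.random(n)
    F_L = weibull_cdf(cut, a_L, b_L)   # lower-component mass below cut
    F_U = weibull_cdf(cut, a_U, b_U)   # upper-component mass below cut
    return np.where(lower,
                    weibull_icdf(u * F_L, a_L, b_L),              # <= cut
                    weibull_icdf(F_U + u * (1.0 - F_U), a_U, b_U))  # > cut

# Control-group parameters from the Figure 1 caption (illustrative use):
y = sample_concat_weibull(1000, r=0.9, a_L=106.6, b_L=5.73,
                          a_U=100.06, b_U=5.40, rng=0)
```

By construction, about 90% of the simulated lifespans fall at or below 130 weeks and the rest above it.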

Each of the tests listed below was implemented in two manners: first with *τ* set in advance to a fixed lifespan value (130 weeks), and second with *τ* set at the sample 90^{th} percentile of the two groups combined. In real-life situations, one usually *does* know the threshold of interest *a priori*. We recognize, however, that we will not have such knowledge in all cases. It is for this reason that, when analyzing the simulated data, we also consider a threshold of the 90^{th} percentile of the data, allowing for an *ad hoc*, data-based determination of the threshold.

For comparative purposes, the first category of tests we evaluated comprised the tests denoted QT3 and QT4 in Wang et al. [5], which are, respectively, Boschloo's test and an exact unconditional test based on the observed difference divided by its estimated standard error under the null hypothesis (score statistic); both are described in more detail by Mehrotra et al. [7]. These were the two tests that Wang et al. [5] had found performed best as tests of *H*_{0,A}.

In testing *H*_{0,B}, subjects were included in the analysis only when their lifespans exceeded *τ*. Distributions of survival times (lifespans) are rarely Gaussian, and even if they were nearly Gaussian after, for example, log transformation, the distribution of just the tail portion (i.e., *f*(*Y* | *Y* > *τ*)) would not be. Hence, in constructing tests we relied on nonparametric statistical methods. Specifically, we used the (exact) Wilcoxon-Mann-Whitney test [11,12] and a permutation test (with t-statistic) as described by Good [13] to test for differences in lifespan among those subjects whose lifespans exceeded *τ*.

In testing *H*_{0,C}, all subjects were analyzed, but the variable analyzed was Z as defined above. Because the distribution of Z cannot be normal (it places a point mass at zero for every subject with *Y* ≤ *τ*), we again used the Wilcoxon-Mann-Whitney test and a permutation test to test for differences in Z.

For a dataset with *n*_{1} (*n*_{2}) subjects in the treatment (control) group, the permutation test can be performed as follows. First, pool all (*n*_{1} + *n*_{2}) subjects, and then generate 1,000 replicated datasets: for each replicate, randomly sample *n*_{1} subjects from the (*n*_{1} + *n*_{2}) pooled subjects and assign them to the treatment group, assigning the remaining *n*_{2} subjects to the control group. We run a t-test on the observed dataset and on each of the 1,000 replicated datasets. Let *T*_{0} be the t value for the observed dataset; the p-value for the permutation test is then the proportion of replicated datasets with absolute t values greater than or equal to the absolute value of *T*_{0}.
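The steps above can be sketched directly. This is an illustrative implementation of the described procedure, using the standard two-sample t statistic; function and parameter names are our own.

```python
import numpy as np
from scipy.stats import ttest_ind

def permutation_pvalue(y_treatment, y_control, n_perm=1000, seed=None):
    """Permutation p-value: fraction of random relabelings whose |t|
    is at least the observed |T0|."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([np.asarray(y_treatment, float),
                             np.asarray(y_control, float)])
    n1 = len(y_treatment)
    t0 = abs(ttest_ind(pooled[:n1], pooled[n1:]).statistic)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # random relabeling
        if abs(ttest_ind(perm[:n1], perm[n1:]).statistic) >= t0:
            count += 1
    return count / n_perm

# Illustrative call with hypothetical, well-separated lifespans:
p = permutation_pvalue(list(range(100, 110)), list(range(50, 60)),
                       n_perm=200, seed=1)
```

Applied to the variable Z, this yields the permutation version of the new test; applied to the tail subjects only, it yields the test of *H*_{0,B}.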

Results are displayed in Tables 1 to 5. As can be seen, the new methods for testing *H*_{0,C} control type I error rates quite well. The power of the new methods is always higher than or very close to that of the tests of *H*_{0,A} (Wang-Allison tests) and is higher than that of the tests of *H*_{0,B} (Wilcoxon-Mann-Whitney tests and permutation tests for observations above the threshold *τ*) in some of the simulations.

Table 1 shows the type I error rates of the tests (in simulation 1) when the null hypothesis *H*_{0,C} is true (i.e., both *H*_{0,A} and *H*_{0,B} are true) and yet *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 1) is radically different from *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 0). The type I error rates of the new methods are comparable to those of the tests of *H*_{0,A} and those of the tests of *H*_{0,B}. It is noteworthy that there is a slight but fairly consistent excess of type I errors when the sample 90^{th} percentile is used rather than a fixed cutoff point. This is because the sample 90^{th} percentile is a random variable, and when it falls below its population level, the null hypothesis is no longer strictly true in our simulations. That is, the tests remain valid tests of differences in the distributions above the actual value used, but should not be strictly interpreted as tests of differences in the distributions above the 90^{th} (or any other) percentile. In practice, this distinction is probably trivial.

In simulation 2 (see Table 2), where *H*_{0,A} is true, *H*_{0,B} is false, and *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 1) is radically different from *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 0), the new methods for testing *H*_{0,C} and the tests of *H*_{0,A} have lower power than the corresponding tests of *H*_{0,B}; however, the new methods slightly improve power compared with the tests of *H*_{0,A}.

Table 3 shows the power of the tests in simulation 3, where *H*_{0,B} is true, *H*_{0,A} is false, and *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 1) is radically different from *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 0). The new methods for testing *H*_{0,C} and the tests of *H*_{0,A} have very similar power, which is much higher than that of the corresponding tests of *H*_{0,B}.

From simulation 4 (see Table 4), where both *H*_{0,A} and *H*_{0,B} are false and *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 1) and *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 0) are identical, we find that the new methods for testing *H*_{0,C} always have higher power than the corresponding tests of *H*_{0,A}. When *τ* is set to the 90th percentile of the sample, the new methods also have higher power than the corresponding tests of *H*_{0,B}.

Finally, we conducted a set of simulations under what we perceived to be the most realistic situation. Here both *H*_{0,A} and *H*_{0,B} are false, *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 1) is quite different from *f*(*Y* | *Y* ≤ *τ* ∩ *X* = 0), and the distributions have no discontinuities; in other words, there is simply a reduction in the hazard rate when X = 1. Table 5 presents the power of the tests in simulation 5, where *f*(*Y* | *X* = 1) = 1.2*f*(*Y* | *X* = 0). In this simulation, the tests of *H*_{0,B} have almost no power because the control group always has no or few observations above the threshold *τ*. The new methods for testing *H*_{0,C}, when using a permutation test, have power higher than or equal to that of the tests of *H*_{0,A}.

To illustrate the methods, we applied them to two real datasets. In both datasets, prior research had shown differences in overall survival, and we tested here for differences in 'maximum lifespan'. The first was a subset of the data reported by Vasselli et al. [10]. The subset consists of two groups of Sprague-Dawley rats: those kept on a high-fat diet ad libitum throughout life, becoming obese (EO-HF), and those kept on a high-fat diet ad libitum until early-middle adulthood, becoming obese, and subsequently reduced to normal weight via caloric restriction while remaining on the same high-fat diet (WL-HF). Each group had 49 rats (see Figure 4 for histograms of the data). The second dataset was from a study comparing the lifespans of Agouti-related protein-deficient (AgRP(-/-)) mice to wildtype (+/+) mice, as reported by Redmann & Argyropoulos [14]. This dataset consists of 16 mice with genotype '+/+' and 21 mice with genotype '-/-' (see Figure 5 for histograms). From Figure 4, we can see that the upper tails of the histograms of the two groups differ; similar differences are seen in Figure 5.

Results (p values of the tests) are shown in Table 6. As can be seen, when setting *τ* equal to 110 (100) for the first (second) dataset, both the tests of *H*_{0,A} and the new tests of *H*_{0,C} detect the differences in 'maximum lifespan' between groups at nominal alpha levels of 0.01 (0.05) for the first (second) dataset. The tests of *H*_{0,B}, however, cannot detect the difference at any of the values of *τ* considered. The following may explain these results. For the first dataset, with *τ* = 110, the proportions of observations greater than *τ* in the EO-HF group and the WL-HF group (i.e., estimates of *P*(*Y* > *τ* | *X* = 0) and *P*(*Y* > *τ* | *X* = 1)) are 0.061 and 0.306, respectively. These two proportions are significantly different, and not surprisingly the tests of *H*_{0,A} detect the difference in 'maximum lifespan' between the two groups. Second, the sample means of the observations greater than *τ* in the two groups (i.e., estimates of *μ*(*Y* | *Y* > *τ* ∩ *X* = 1) and *μ*(*Y* | *Y* > *τ* ∩ *X* = 0)) are 117.8 and 122.9, respectively; there is not much difference between these sample means. However, the sample means of the Z-values in the two groups (i.e., estimates of *μ*(*Z* | *X* = 0) and *μ*(*Z* | *X* = 1), where *Z*_{i} = *I*(*Y*_{i} > *τ*)*Y*_{i}) are 7.210 and 37.633, respectively, and are *greatly* different. This may explain why the tests of *H*_{0,B} cannot reject the null but the new tests of *H*_{0,C} detect the difference in 'maximum lifespan' between the two groups. Similarly, for the second dataset, with *τ* = 100, the proportions of observations greater than *τ* in the '+/+' and '-/-' groups are 0.188 and 0.571, respectively; the sample means of the observations greater than *τ* in the two groups are 109.3 and 110.9, respectively; and the sample means of the Z-values in the two groups are 20.5 and 63.4, respectively.
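The three sample summaries compared above — the proportion above *τ*, the mean lifespan among those above *τ*, and the mean of Z — can be computed for one group as sketched below; the data are illustrative, not the real datasets. The snippet also shows the identity *μ*(*Z*) = *P*(*Y* > *τ*)·*μ*(*Y* | *Y* > *τ*) noted earlier holding in-sample.

```python
import numpy as np

def tail_summaries(y, tau):
    """Return (P-hat of Y > tau, mean of Y above tau, mean of Z)."""
    y = np.asarray(y, float)
    above = y > tau
    p_hat = float(above.mean())                       # estimate of P(Y > tau)
    mu_tail = float(y[above].mean()) if above.any() else float("nan")
    mu_z = float((above * y).mean())                  # mean of Z = I(Y > tau) * Y
    return p_hat, mu_tail, mu_z

# Hypothetical lifespans (weeks):
p_hat, mu_tail, mu_z = tail_summaries([100, 100, 120, 140], tau=110)
```

Note that `mu_z` always equals `p_hat * mu_tail`, which is why a large gap in either component can drive the test of *H*_{0,C}.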

From Table 6 we can also see that in almost all situations the p-values of the new tests of *H*_{0,C} are somewhat smaller than those of the tests of *H*_{0,A}. This is consistent with the simulations showing the greater power of the new methods.

Herein, we proposed new methods for testing differences in 'maximum' lifespan between groups (e.g., treatment and control). By defining a new variable *Z* such that *Z*_{i} = *I*(*Y*_{i} > *τ*)*Y*_{i} for each observation and then applying the Wilcoxon-Mann-Whitney test or, better still, a permutation test to *Z*, the new methods achieve far better performance across a broad range of circumstances in terms of both type I error rates and power. One could also use a bootstrap test in place of these two tests, although additional simulations would likely be warranted to evaluate its performance relative to the permutation test evaluated herein.

It is straightforward to extend the new methods to more than two groups. For example, one could replace the Wilcoxon-Mann-Whitney test with the Kruskal-Wallis test, or replace the two-group permutation test with a multi-group permutation test.
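The Kruskal-Wallis variant of the extension is a one-line change: compute Z within each group and pass all groups to the test. A sketch with invented three-group data:

```python
import numpy as np
from scipy.stats import kruskal

def h0c_test_multigroup(groups, tau):
    """Kruskal-Wallis test on Z = I(Y > tau) * Y across k >= 2 groups."""
    zs = []
    for g in groups:
        y = np.asarray(g, float)
        zs.append(np.where(y > tau, y, 0.0))
    return kruskal(*zs)

# Three hypothetical groups; only the third has lifespans above tau:
res = h0c_test_multigroup([[80] * 10, [85] * 10, list(range(120, 130))],
                          tau=110)
```

A multi-group permutation test would analogously shuffle group labels and recompute an omnibus statistic such as the one-way F.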

We have shown via simulation studies that the new methods are effective when the sample size (N) of each group is 50, 100, or 200. We expect that these methods will also be relatively more powerful than existing competitors for much larger sample sizes, such as N = 500 or even N = 5000. There are some mouse datasets (like those of the National Institute on Aging's Interventions Testing Program) where N > 500, and worm and fly datasets in which N may sometimes exceed 5000. We expect the new methods to be equally applicable to the analysis of such data.

Finally, we note that the tests proposed here are described for the context of testing for differences in lifespan. However, there is nothing intrinsic to them that limits their use to survival data. They could be applied to any situation in which one wanted to test for group differences in the tails of distributions.

The authors declare they have no competing interests.

DBA participated in all parts of the work (including study design, methodology development, simulations, data acquisition, and manuscript drafting). He wrote major sections of the original manuscript and revised the final version. DTR provided consulting on statistical issues and edited the manuscript. SZ provided programming assistance for the simulation studies. WW provided consulting on the simulations and prepared the figures. GG performed all simulation studies and real-data analyses, drafted the Results, Illustration with real data, and Discussion sections, and participated in revising the manuscript.


We thank Richard Miller, David Harrison, and Simon Klebanov for thought provoking dialogue that inspired this paper and George Argyropoulos for graciously providing data. This research was supported in part by NIH grants P30DK056336, R01DK067487, and P01AG11915 and by grant GM073766 from the National Institute of General Medical Sciences.

- Miller RA, Harrison DE, Astle CM, Floyd RA, Flurkey K, Hensley KL, Javors MA, Leeuwenburgh C, Nelson JF, Ongini E, Nadon NL, Warner HR, Strong R. An Aging Interventions Testing Program: study design and interim report. Aging Cell. 2007;6:565–575. doi: 10.1111/j.1474-9726.2007.00311.x.
- Wanagat J, Allison DB, Weindruch R. Caloric intake and aging: mechanisms in rodents and a study in nonhuman primates. Toxicol Sci. 1999;52:35–40.
- Ingram DK, Zhu M, Mamczarz J, Zou S, Lane MA, Roth GS, deCabo R. Calorie restriction mimetics: an emerging research field. Aging Cell. 2006;5:97–108. doi: 10.1111/j.1474-9726.2006.00202.x.
- Weindruch R, Walford RL. The Retardation of Aging and Disease by Dietary Restriction. Springfield, IL: C.C. Thomas; 1988.
- Wang C, Li Q, Redden DT, Weindruch R, Allison DB. Statistical methods for testing effects on "maximum lifespan". Mech Ageing Dev. 2004;125:629–632. doi: 10.1016/j.mad.2004.07.003. Erratum in: Mech Ageing Dev 2006, 127(7):652.
- Redden DT, Fernandez JR, Allison DB. A simple significance test for quantile regression. Stat Med. 2004;23:2587–2597. doi: 10.1002/sim.1839.
- Mehrotra DV, Chan IS, Berger RL. A cautionary note on exact unconditional inference for a difference between two independent binomial proportions. Biometrics. 2003;59:441–450. doi: 10.1111/1541-0420.00051.
- Little RC, Folks JL. On the comparison of two methods of combining independent tests. Journal of the American Statistical Association. 1972;67:223. doi: 10.2307/2284731.
- Dominici F, Zeger SL. Smooth quantile ratio estimation with regression: estimating medical expenditures for smoking-attributable diseases. Biostatistics. 2005;6:505–519. doi: 10.1093/biostatistics/kxi031.
- Vasselli JR, Weindruch R, Heymsfield SB, Pi-Sunyer FX, Boozer CN, Yi N, Wang C, Pietrobelli A, Allison DB. Intentional weight loss reduces mortality rate in a rodent model of dietary obesity. Obes Res. 2005;13:693–702. doi: 10.1038/oby.2005.78.
- Wilcoxon F. Individual comparisons by ranking methods. Biometrics. 1945;1:80–83. doi: 10.2307/3001968.
- Mann HB, Whitney DR. On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics. 1947;18:50–60. doi: 10.1214/aoms/1177730491.
- Good P. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. New York: Springer-Verlag; 1994.
- Redmann SM Jr, Argyropoulos G. AgRP-deficiency could lead to increased lifespan. Biochem Biophys Res Commun. 2006;351:860–864. doi: 10.1016/j.bbrc.2006.10.129.
