Lifetime Data Anal. Author manuscript; available in PMC 2010 August 10.
Published in final edited form as:
PMCID: PMC2919578

Additive transformation models for clustered failure time data


We propose a class of additive transformation risk models for clustered failure time data. The models are motivated by incorporating a frailty with mean one and constant variability into the usual additive risk model for independent failure times, which provides a natural generalization of the additive risk model from univariate to multivariate failure times. We propose an estimating equation approach based on the marginal hazard function. Under the assumption that the cluster sizes are completely random, we show that the resulting estimators of the regression coefficients are consistent and asymptotically normal. We also provide goodness-of-fit test statistics for choosing the transformation. Simulation studies and a real data analysis are conducted to examine the finite-sample performance of our estimators.

Keywords: Additive model, Estimating equation, Goodness-of-fit, Robustness, Transformation

1 Introduction

Clustered failure time data are common in biomedical studies. Such data usually arise when study subjects come from the same cluster; for example, the failure time can be the time to disease occurrence for patients in the same clinic or the time to blindness for the two eyes of the same patient. One commonly used statistical model for clustered failure time data is the gamma-frailty model, which accommodates the intra-cluster dependence by incorporating an unobserved random effect, the so-called frailty, into the Cox (1972) proportional hazards model. The consistency and asymptotic distribution of the maximum likelihood estimator for the gamma-frailty model have been rigorously studied by Murphy (1994, 1995) for the case of no covariates and by Parner (1998) for the case with covariates.

The multiplicative hazards model focuses on estimating hazard ratios, and its multiplicative structure may not describe real data well. In some cases, an additive effect may be more reasonable. Such an effect can be modeled using the so-called additive hazards model (Aalen 1989; Huffer and McKeague 1991; Lin and Ying 1994; McKeague and Sasieni 1994; among others), which assumes an additive structure of a baseline hazard function and a covariate effect. In particular, for a univariate failure time, the additive model takes the form
\[
\Lambda(t \mid X) = \Lambda(t) + \int_0^t X(s)^{\mathrm T}\beta\, ds, \tag{1}
\]
where Λ(t|X) denotes the cumulative hazard function for the given, possibly time-dependent, covariates X and Λ(t) is the baseline hazard function. Through the additive structure, the additive model interprets β as a risk difference; it has been eloquently advocated and successfully utilized for right-censored independent survival data in many papers, e.g., Andersen et al. (1993, pp. 563–566), Lin and Ying (1994), McKeague and Sasieni (1994), and Shen and Cheng (1999). Gandy and Jensen (2005a, b) developed goodness-of-fit tests for checking the additive risk model. Additionally, the multivariate version of the additive model (1) has been used to model clustered failure time data in Yin and Cai (2004):
\[
\Lambda_{ij}(t \mid X_{ij}) = \Lambda(t) + \int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds,
\]
where Λij (t|X) denotes the cumulative hazards function for subject j in cluster i and Xij is the associated covariate. Yin (2007) further developed a test for checking the additive structure using clustered data. Recently, Yin et al. (2008) considered an additive hazards model with varying coefficients, where they allowed the effect of Xij (t) to depend on other covariates.

All the previous additive models except Yin et al. (2008) assume that X affects the hazard rate linearly. In practical applications, however, this linear assumption may not always be appropriate, and alternative links other than the linear one can be considered to relax it.

Specifically, in this paper we propose the following general class of additive transformation models:

\[
\Lambda_{ij}(t \mid X_{ij}) = \Lambda(t) + Q\!\left(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\right), \tag{2}
\]
where Q(·) is a known, monotone transformation function. Clearly, when Q(x) = x, we recover the previous additive model. However, Q can be another positive link function, for example exp{x} or log(1 + x). For a transformation Q(x) satisfying E[exp{−ξx}] = exp{−Q(x)} for some positive frailty ξ, model (2) can actually be motivated from a frailty model for clustered failure time data: when failure times are clustered, in parallel to generalizing the proportional hazards model to the gamma-frailty model, we can generalize the univariate additive hazards model to incorporate a cluster-specific frailty. One natural generalization of the linear additive model in (1) is the following model
\[
\Lambda_{ij}(t \mid X_{ij}, \xi_i) = \Lambda(t) + \xi_i \int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds,
\]
where Λij(t|X, ξi) denotes the cumulative hazard function for subject j in cluster i given ξi, and the cluster-specific random effect ξi accounts for the within-cluster correlation in cluster i. In particular, when ξi has mean one and the cluster size is completely random, the above model implies that the effect of X(t) has the same average β across clusters while allowing random variation from cluster to cluster. From this general model, by the definition of Q(x), we obtain that the marginal survival function for subject j in cluster i is
\[
S_{ij}(t \mid X_{ij}) = \exp\!\left\{ -\Lambda(t) - Q\!\left(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\right) \right\}.
\]
Hence, the marginal hazard function for Tij is given by model (2). In particular, if ξ follows a gamma distribution with shape and scale parameters (θ1, θ2), then

\[
E\big[\exp\{-\xi x\}\big] = (1 + \theta_2 x)^{-\theta_1}, \qquad \text{so } Q(x) = \theta_1 \log(1 + \theta_2 x).
\]
When θ1 = 1/θ2, the frailty has mean one and Q(x) = θ2^{−1} log(1 + θ2x); we define Q(x) = x in the special case θ2 = 0, which is the limit as θ2 goes to zero. However, for a transformation Q(x) that does not satisfy E[exp{−ξx}] = exp{−Q(x)} for any positive frailty ξ, model (2) can no longer be derived from a frailty model.
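As a quick numerical sanity check of the identity E[exp{−ξx}] = exp{−Q(x)} for a gamma frailty (a sketch of our own, not part of the paper; the function names are ours), one can compare a Monte Carlo estimate of the Laplace transform with the closed form Q(x) = θ1 log(1 + θ2x):

```python
import math
import random

def Q_gamma(x, theta1, theta2):
    # Transformation induced by a Gamma(shape=theta1, scale=theta2) frailty:
    # E[exp(-xi * x)] = (1 + theta2 * x)^(-theta1) = exp(-Q(x))
    return theta1 * math.log(1.0 + theta2 * x)

def laplace_mc(x, theta1, theta2, n=100_000, seed=1):
    # Monte Carlo estimate of E[exp(-xi * x)] for xi ~ Gamma(theta1, theta2)
    rng = random.Random(seed)
    return sum(math.exp(-rng.gammavariate(theta1, theta2) * x)
               for _ in range(n)) / n

# theta1 = 1/theta2 gives a mean-one frailty and Q(x) = theta2^{-1} log(1 + theta2 x)
theta2 = 0.5
theta1 = 1.0 / theta2
print(laplace_mc(0.8, theta1, theta2))          # Monte Carlo estimate
print(math.exp(-Q_gamma(0.8, theta1, theta2)))  # closed form; should agree closely
```

The two printed values agree up to Monte Carlo error, illustrating why a gamma frailty induces a logarithmic transformation.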

The general model (2) defines a class of transformed additive models for the marginal hazard functions of clustered failure times and provides a very flexible way of modeling hazards, in parallel to transformation models for multiplicative hazards. The goal of this paper is to provide inference results for this general class of models. In Sect. 2, we give a simple inference procedure for model (2) and derive the asymptotic properties of the proposed estimators. In Sect. 3, we discuss in detail how to select the transformation function Q. Sect. 4 provides numerical results from simulation studies, and our approach is applied to data from the Diabetic Retinopathy Study (Huster et al. 1989) in Sect. 5.

2 Model and inference

Assume that there are n independent clusters and for cluster i, there are ni subjects. We denote Tij and Xij as the failure time and the covariates associated with subject j in cluster i, respectively. We let Λij (t|Xij) be the cumulative hazard function for Tij given covariates Xij. Our general additive risk model is given by Eq. 2.

Our goal is to estimate parameters specified in Eq. 2. In the presence of right-censoring, we only observe data

\[
\big(Y_{ij} = T_{ij} \wedge C_{ij},\; \Delta_{ij} = I(T_{ij} \le C_{ij}),\; X_{ij}\big), \qquad j = 1, \ldots, n_i,
\]

where Cij is the censoring time and assumed to be independent of Tij given Xij and the cluster sizes are assumed to be completely random. According to Eq. 2, it is clear

\[
E\big[dN_{ij}(t) \mid X_{ij}\big] = R_{ij}(t)\left\{ d\Lambda(t) + X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big)\, dt \right\},
\]
where N_{ij}(t) = I(Y_{ij} ≤ t, Δ_{ij} = 1) is the observed event process, R_{ij}(t) = I(Y_{ij} ≥ t) is the at-risk process, and Q′(·) denotes the derivative of Q(·). Hence, using the fact that

\[
dM_{ij}(t) \equiv dN_{ij}(t) - R_{ij}(t)\left\{ d\Lambda(t) + X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big)\, dt \right\}
\]
has conditional mean zero and following the work of Lin and Ying (1994), we can estimate β and Λ(·) by solving the following equations

\[
\sum_{i=1}^n \sum_{j=1}^{n_i} \left\{ dN_{ij}(s) - R_{ij}(s)\, d\Lambda(s) - R_{ij}(s) X_{ij}(s)^{\mathrm T}\beta\, Q'\!\Big(\int_0^s X_{ij}(u)^{\mathrm T}\beta\, du\Big)\, ds \right\} = 0 \tag{3}
\]
for all s ≥ 0, and

\[
\sum_{i=1}^n \sum_{j=1}^{n_i} \int X_{ij}(t) \left\{ dN_{ij}(t) - R_{ij}(t)\, d\Lambda(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big)\, dt \right\} = 0.
\]
Note that each term of the above equations has mean zero when the parameters take their true values. In particular, we let Λ(t) be a step function jumping only at the observed event times. Then from Eq. 2 we obtain that, for given β,

\[
d\widehat\Lambda(t; \beta) = \frac{\sum_{i=1}^n \sum_{j=1}^{n_i} \big\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta\, Q'\big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\big)\, dt \big\}}{\sum_{i=1}^n \sum_{j=1}^{n_i} R_{ij}(t)}.
\]
After substituting back into Eq. 3 and some re-organization, we obtain that the estimator for β solves equation

\[
\sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big)\, dt \right\} = 0.
\]

Newton–Raphson iteration can be used to solve the above equation. We denote the estimator for β by β̂ and the estimator for Λ(·) obtained from Eq. 3 by Λ̂(·).
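To make the procedure concrete, the following sketch (our own illustration, not the authors' code; `U` and `solve_beta` are hypothetical names) evaluates the estimating function for a scalar, time-independent covariate, approximating the integrals by Riemann sums, and solves U(β) = 0 by bisection as a simple derivative-free stand-in for Newton–Raphson:

```python
import random

def U(beta, data, Q_prime, grid_n=100):
    # Estimating function for a scalar, time-independent covariate:
    #   U(beta) = sum_ij [ Delta_ij (X_ij - Xbar(Y_ij))
    #            - int_0^{Y_ij} (X_ij - Xbar(t)) X_ij beta Q'(t X_ij beta) dt ],
    # where Xbar(t) is the average covariate among subjects at risk at time t.
    def xbar(t):
        risk = [x for (y, _, x) in data if y >= t]
        return sum(risk) / len(risk) if risk else 0.0
    total = 0.0
    for (y, delta, x) in data:
        if delta:
            total += x - xbar(y)
        h = y / grid_n            # midpoint Riemann sum for the integral term
        for k in range(grid_n):
            t = (k + 0.5) * h
            total -= (x - xbar(t)) * x * beta * Q_prime(t * x * beta) * h
    return total

def solve_beta(data, Q_prime, lo=-5.0, hi=5.0, iters=40):
    # Bisection: a simple derivative-free stand-in for Newton-Raphson,
    # adequate for this one-dimensional illustration.
    f_lo = U(lo, data, Q_prime)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        f_mid = U(mid, data, Q_prime)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Toy data: (observed time, event indicator, covariate), not generated from
# any particular model -- just to exercise the solver.
rng = random.Random(0)
toy = [(rng.uniform(0.1, 2.0), rng.random() < 0.7, float(i % 2))
       for i in range(30)]
beta_hat = solve_beta(toy, lambda u: 1.0)  # Q(x) = x, so Q' is identically 1
```

With Q(x) = x the model reduces to the usual additive risk model, a convenient case for checking the solver; in the paper's setting, Newton–Raphson on the multivariate estimating equation plays this role.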

The following results provide the asymptotic properties of the proposed estimators. To this end, we define

\[
\Sigma = \lim_{n\to\infty} n^{-1} \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ X_{ij}(t)\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big) + X_{ij}(t)^{\mathrm T}\beta_0\, Q''\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big) \int_0^t X_{ij}(s)\, ds \right\}^{\mathrm T} dt
\]

and assume that the limit exists and that Σ is non-singular. Let β0 and Λ0(·) denote the true values of β and Λ(·), respectively, and assume the cluster sizes nk are random and identically distributed across clusters. We obtain the following theorem.

Theorem 1

Assume Q(·) is twice continuously differentiable, Xij(t) has bounded total variation with probability one, and P(Tij > τ | Xij) > δ and P(Cij > τ | Xij) > δ hold for some positive constant δ. Then √n(β̂ − β0, Λ̂(t) − Λ0(t)) converges in distribution to a zero-mean Gaussian process in R^d × l^∞[0, τ].

The outline of the proof is given in the Appendix. From the proof of Theorem 1, we further obtain that the influence function for β̂ is given by

\[
\Sigma^{-1} \left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta_0\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big)\, dt \right\} \right].
\]

Therefore, a consistent estimator for the asymptotic covariance of β̂ is

\[
\widehat\Sigma^{-1} \sum_{i=1}^n \left\{ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\widehat\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\Big)\, dt \right\} \right\}^{\otimes 2} \widehat\Sigma^{-1},
\]

where a^{⊗2} = aa^T and Σ̂ is a consistent estimator for Σ given by

\[
n^{-1} \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ X_{ij}(t)\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\Big) + X_{ij}(t)^{\mathrm T}\widehat\beta\, Q''\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\Big) \int_0^t X_{ij}(s)\, ds \right\}^{\mathrm T} dt.
\]

Additionally, the asymptotic covariance function of Λ̂(t) and Λ̂(s) can be consistently estimated by the empirical covariance of the random variables S(·, t) and S(·, s), where "·" denotes the observed data and


and Ŝβ is the influence function for β̂, with β0 replaced by β̂ in its expression.

3 Selection of link function

3.1 Test of goodness-of-fit

One critical issue in model (2) is the choice of the transformation function Q(·). In practice, since one does not know the underlying frailty distribution, the question is whether a given choice of Q fits the data sufficiently well. To address this question, we propose the following statistic for testing the goodness of fit of model (2):

\[
G_n(x) = \sum_{i=1}^n \sum_{j=1}^{n_i} \int I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) \left\{ dN_{ij}(t) - R_{ij}(t)\, d\widehat\Lambda(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\widehat\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\Big)\, dt \right\}.
\]
In other words, our test statistic is the cumulative summation of the residuals

\[
dN_{ij}(t) - R_{ij}(t)\, d\widehat\Lambda(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\widehat\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\Big)\, dt,
\]

where the cumulation is taken over the values of Xij(t)Tβ̂. Similar statistics have been used before, e.g., Yin (2007).

Using Eq. 3, Gn(x) can also be written as

\[
G_n(x) = \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\widehat\beta \le x\big)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\widehat\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\Big)\, dt \right\}.
\]
The proposed test statistic Gn(x) can be viewed as examining how the mean event process relates to the linear predictor Xij(t)Tβ. Thus, if the transformation function Q is misspecified, we would expect Gn(x) to depart from zero. In particular, the following theorem holds.

Theorem 2

Under model (2), n^{−1/2}Gn(x) converges in distribution to a zero-mean Gaussian process in l^∞(−∞, ∞).

Using Theorem 2, we can simulate the limiting Gaussian process to test the goodness of fit. In particular, the resampling approach of Su and Wei (1991) can be used to simulate this process: we simulate n i.i.d. observations, say Z1, …, Zn, from the standard normal distribution. We then generate a new β as

\[
\beta^{\mathrm{new}} = \widehat\beta + n^{-1} \widehat\Sigma^{-1} \sum_{i=1}^n W_i Z_i,
\]

where Σ̂ is the consistent estimate of Σ given in the previous section and

\[
W_i = \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\widehat\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\Big)\, dt \right\}.
\]
With βnew, we calculate the process

\[
G_n^{\mathrm{new}}(x) = \sum_{i=1}^n \left\{ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\widehat\beta \le x\big)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t)\, d\widehat\Lambda(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta^{\mathrm{new}}\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta^{\mathrm{new}}\, ds\Big)\, dt \right\} \right\} Z_i.
\]

We generate Gn^new(x) repeatedly; the resulting simulated processes approximate the limiting process of n^{−1/2}Gn(x).

The rationale behind this resampling approach is as follows. Following the same expansion as in the proof of Theorem 2, we can show that, conditional on the observed data, n^{−1/2}Gn^new(x) is equivalent to


Therefore, conditional on the observed data, n^{−1/2}Gn^new(x) is approximately a Gaussian process with the same covariance as the asymptotic covariance of n^{−1/2}Gn(x).

Using the resampling approach, we can obtain the approximate distribution of many useful test statistics, including supx |Gn(x)| and ∫ |Gn(x)| dx.
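The multiplier idea above can be sketched generically (our own illustration, not the authors' code; `psi` supplies the per-cluster contributions to the centered residual process, which in the paper's setting would be computed from the fitted model):

```python
import math
import random

def sup_test_pvalue(psi, xs, n_resamp=500, seed=7):
    # psi: list of callables, psi[i](x) = cluster i's contribution to the
    # centered residual process at x; xs: grid of x values to scan.
    n = len(psi)
    vals = [[p(x) for x in xs] for p in psi]        # psi_i(x_j), precomputed
    obs = max(abs(sum(col)) for col in zip(*vals)) / math.sqrt(n)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_resamp):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]  # normal multipliers
        stat = max(abs(sum(zi * v for zi, v in zip(z, col)))
                   for col in zip(*vals)) / math.sqrt(n)
        exceed += stat >= obs
    return exceed / n_resamp
```

A drift shared by all clusters (a symptom of a misspecified transformation) makes the observed supremum large relative to the multiplier replicates and yields a small p-value, while sign-balanced, mean-zero contributions do not.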

3.2 Robustness for null test

One interesting question is whether testing the null hypothesis β = 0 remains valid even if Q is misspecified. In particular, we investigate this issue under the following conditions:

  • (C.1) Xij is time-independent and has mean zero;
  • (C.2) Cij is independent of Xij and has the same distribution for j = 1,…, ni;
  • (C.3) The misspecified transformation function Q is continuously differentiable and strictly increasing with Q(0) = 0.

Among these conditions, Condition (C.1) holds for any time-independent covariates after proper centering. Condition (C.2) assumes a common distribution for the censoring times and independence between Cij and Xij. This condition is used mostly for simplification; however, it should be carefully justified in practice. For example, when Cij = Cij′ for j, j′ = 1,…, ni are administrative censoring times, the condition holds when the Xij are patient treatment allocations. However, the condition is not satisfied when patients in different treatment groups quit the study at systematically different times. Condition (C.3) is satisfied by transformations of the form Q(x) = θ1 log(1 + θ2x). These conditions are sufficient but not necessary.

Under the assumptions of Theorem 1 and from its proof, we know that β̂ based on a misspecified transformation function Q is consistent for some parameter β* which satisfies

\[
E\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left( X_{ij} - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right) \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}^{\mathrm T}\beta^*\, Q'\big(t X_{ij}^{\mathrm T}\beta^*\big)\, dt \right\} \right] = 0.
\]

Since

\[
E\big[dN_{ij}(t) \mid X_{ij}\big] = R_{ij}(t)\left\{ d\Lambda_0(t) + X_{ij}^{\mathrm T}\beta_0\, Q_0'\big(t X_{ij}^{\mathrm T}\beta_0\big)\, dt \right\},
\]

where Q0 denotes the true transformation function, we obtain

\[
E\left[ \sum_{j=1}^{n_i} \int I(Y_{ij} \ge t) \left( X_{ij} - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right) \left\{ X_{ij}^{\mathrm T}\beta_0\, Q_0'\big(t X_{ij}^{\mathrm T}\beta_0\big) - X_{ij}^{\mathrm T}\beta^*\, Q'\big(t X_{ij}^{\mathrm T}\beta^*\big) \right\} dt \right] = 0. \tag{6}
\]

This is the starting point for us to examine the robustness when Q is a misspecified transformation function.

When β0 = 0, we note that Tij is independent of Xij and has the same survival function across subjects, so from (C.1) and (C.2),

\[
E\Big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}\Big] = E\Big[\sum_{l=1}^{n_k} R_{kl}(t)\Big]\, E[X_{kl}] = 0.
\]
Thus, Eq. 6 gives

\[
E\left[ \sum_{j=1}^{n_i} \int I(Y_{ij} \ge t)\, X_{ij} X_{ij}^{\mathrm T}\beta^*\, Q'\big(t X_{ij}^{\mathrm T}\beta^*\big)\, dt \right] = 0.
\]
As a result, we have

\[
E\left[ \sum_{j=1}^{n_i} \int I(Y_{ij} \ge t)\, \big(X_{ij}^{\mathrm T}\beta^*\big)^2\, Q'\big(t X_{ij}^{\mathrm T}\widetilde\beta\big)\, dt \right] = 0,
\]

where β̃ is between 0 and β*. Since Q(·) is strictly increasing by (C.3), the above equation holds if and only if β* = 0. That is, we have

Theorem 3

Under conditions (C.1)–(C.3), β* = 0.

Theorem 3 indicates that, under the null hypothesis, inference for β0 using a misspecified transformation function is still valid. In other words, one need not worry about using an incorrect transformation when testing β = 0.

4 Simulation studies

We have conducted simulations to examine the small-sample performance of the proposed method. In the first simulation study, we generated clustered survival data using the following true model

\[
\Lambda_{ij}(t \mid X_{ij}, \xi_i) = \Lambda(t) + \xi_i\, t\, X_{ij}^{\mathrm T}\beta,
\]

where each cluster contained two subjects and the cluster-specific random effect ξ was generated from a gamma distribution with mean one and variance θ. Clearly, such data generation implies the transformation additive model (2), where different choices of θ give different transformations Q(x). Two covariates were generated for each subject: for the first covariate, one subject in the cluster took value 0 and the other took value 1; the second covariate was generated from the uniform distribution on [0, 1]. We let Λ(t) = t²/2 and generated censoring times from the uniform distribution on [0, 3], which yielded a censoring rate of around 30%. We set the covariate effects to 0.2 and 0.8, respectively.
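For concreteness, this data-generating scheme can be sketched as follows (our own code, not the authors'; inverting Λij(t | X, ξ) = t²/2 + ξ t XᵀB at an Exp(1) draw e gives t = −a + √(a² + 2e) with a = ξ Xᵀβ):

```python
import math
import random

def draw_cluster(beta, theta, rng):
    # One cluster of two subjects under the conditional model
    #   Lambda_ij(t | X_ij, xi_i) = t^2/2 + xi_i * t * X_ij' beta,
    # with xi_i ~ Gamma(shape=1/theta, scale=theta), i.e. mean 1, variance theta.
    xi = rng.gammavariate(1.0 / theta, theta)
    cluster = []
    for j in range(2):
        x = (float(j), rng.random())          # 0/1 within cluster; Uniform(0, 1)
        a = xi * sum(b * v for b, v in zip(beta, x))
        e = rng.expovariate(1.0)
        t = -a + math.sqrt(a * a + 2.0 * e)   # inverts t^2/2 + a*t = e
        c = rng.uniform(0.0, 3.0)             # censoring time
        cluster.append((min(t, c), t <= c, x))
    return cluster

rng = random.Random(2024)
sim = [obs for _ in range(2000) for obs in draw_cluster((0.2, 0.8), 0.5, rng)]
cens_rate = sum(1 for (_, delta, _) in sim if not delta) / len(sim)
print(f"censoring rate ~ {cens_rate:.2f}")    # roughly 30% with this setup
```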

We applied our method to estimate the covariate effects based on the working model (2). Additionally, the sandwich variance estimator was used for inference. Table 1 summarizes the results from 1000 repetitions with sample sizes 200 and 400. Column "Bias" denotes the average bias of the point estimates; column "SE" denotes the empirical standard error of the point estimates; column "ESE" is the average of the estimated standard errors; column "CP" refers to the coverage probability of the 95% confidence interval based on the normal approximation. Table 1 shows that, for each of the three transformation models, the proposed method works reasonably well, with approximately unbiased estimates and correct inference.

Table 1
Simulation results of estimates with transformed additive models

In the second simulation study, we used the same set-up as before but focused on the small-sample behavior of the goodness-of-fit statistic for model (2). To generate the stochastic process in the goodness-of-fit statistic, we let x run through all the grid points in the range of XTβ̂. To obtain the distribution of this process, we adopted the resampling approach described in Sect. 3.1. We focused on the supremum statistic supx |Gn(x)| and used 500 resamples to simulate its distribution. Table 2 reports the proportion of 1000 repetitions in which the observed statistic exceeded the 95th percentile of the simulated statistics. The empirical tail probability is very close to the nominal value of 5%, especially for n = 400. In other words, when the transformation model is specified correctly, our goodness-of-fit statistic provides valid inference.

Table 2
Empirical tail probability for goodness-of-fit statistics

In the third simulation study, we aimed to confirm the robustness results of Sect. 3.2. We considered a single uniformly distributed covariate taking the same value for the two subjects in a cluster. The true model corresponded to Q(x) = 2 log(1 + x/2) with no covariate effect. We then estimated β using the working model (2) with each of three transformations: Q(x) = x, Q(x) = 2 log(1 + x/2), and Q(x) = log(1 + x); the transformation is thus misspecified in the first and third cases. Table 3 gives the results from 1000 repetitions, with column "α" reporting the type I error when testing the null hypothesis. As expected, no matter which transformation is used, inference for the null effect remains correct.

Table 3
Inference with misspecified transformation under the null hypothesis

5 Application

We apply the proposed methods to the well-known Diabetic Retinopathy Study (Kupfer and the ETDRS Research Group 1976; Huster et al. 1989), which was conducted to assess the effectiveness of laser photocoagulation in delaying visual loss among patients with diabetic retinopathy. One eye of each patient was randomly selected to receive the laser treatment while the other eye served as a control. The failure time of interest is the time (in months) to visual loss, defined as visual acuity less than 5/200. Following previous authors, we confine our attention to a subset of 197 high-risk patients and consider three covariates: X1ij indicates, by the values 1 versus 0, whether the jth eye (j = 1 for the left eye and j = 2 for the right eye) of the ith patient was treated with laser photocoagulation; X2i1 ≡ X2i2 indicates, by the values 1 versus 0, whether the ith patient had adult-onset or juvenile-onset diabetes; and X3i1 = X3i2 is the patient's age. Furthermore, we include the interaction between treatment and diabetic type in the regression.

We fit model (2) with these covariates and consider the class of transformations Q(x) = θ^{−1} log(1 + θx), varying θ from 0 to 1, with Q(x) = x when θ = 0. To select the best fit, we calculate the goodness-of-fit statistic for each θ and choose the final θ as the one minimizing the L2-norm of this statistic. The best θ is determined to be 0.79. For comparison, Table 4 gives the estimation results for θ = 0, θ = 0.79, and θ = 1. Table 4 shows that the point estimates are very close to one another across transformations, with slightly increasing variation as θ increases. Based on the selected model, both the treatment indicator and the interaction term are significant, whereas the diabetic type is not. Both the direction and the significance of the covariate effects agree with results obtained using different models (Cai et al. 2002; Zeng et al. 2005). Figure 1 gives the estimated cumulative hazard function Λ(t) for an untreated eye of a 16-year-old patient with juvenile-onset diabetes; it appears approximately linear.

Fig. 1
Estimated cumulative hazard function for θ = 0.79 for an untreated eye of a 16-year-old patient with juvenile-onset diabetes. The x-axis is in months.
Table 4
Application to Diabetic Retinopathy Study

In Fig. 2, we also plot the process for testing goodness of fit along with a few random paths from the limiting process, where the thick curve is the observed process. The plot shows that the selected model fits the data well; the p-value based on the supremum statistic is 0.42. For comparison, we also fit the proportional hazards model with a gamma frailty. Since the marginal cumulative hazard under the gamma-frailty model is θ^{−1} log{1 + θΛ(t)e^{XTβ}}, where θ is the variance of the gamma frailty, we construct the same goodness-of-fit statistic


Fig. 2
The goodness-of-fit statistic for θ = 0.79 (in thick solid line) and simulated random paths from its limiting process.

It turns out that the observed process GnM(x) has an increasing drift (Fig. 3), indicating a poor fit. Additionally, testing this model against our previously selected model using the supremum value of GnM(x) yields a p-value close to zero. Thus, we conclude that the multiplicative model is not suitable for this data set, while the additive model with θ = 0.79 is a good choice.

Fig. 3
The goodness-of-fit statistic for the proportional hazards model with frailty: x-axis is the linear predictor from the proportional hazards model.

6 Remarks

We have proposed a class of additive transformation models for multivariate failure time data. We have particularly discussed the issue of selecting the transformation by proposing a valid goodness-of-fit test. Although we considered multivariate failure times, all the procedures and approaches apply to univariate failure times as well; the univariate case, however, lacks the frailty motivation for the transformation model given in the introduction. One drawback of the additive risk model is that the predicted cumulative hazard function Λ̂(t|X) may be decreasing in t. In this case, one usually modifies the predicted value as max_{s≤t} Λ̂(s|X). It is easy to show that this modification still yields a consistent estimator.
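The modification max_{s≤t} Λ̂(s|X) is simply a running maximum over the predicted path; a one-line sketch (our own helper name, applied to a cumulative hazard evaluated on a time grid):

```python
def monotonize(lam_path):
    # Running maximum: Lambda~(t_k) = max_{s <= t_k} Lambda_hat(s),
    # which restores monotonicity of a predicted cumulative hazard path.
    out, cur = [], float("-inf")
    for v in lam_path:
        cur = max(cur, v)
        out.append(cur)
    return out

print(monotonize([0.0, 0.30, 0.25, 0.50]))  # [0.0, 0.3, 0.3, 0.5]
```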

When the transformation Q(x) arises from a gamma frailty, we can work with the full likelihood function instead of the marginal transformation hazards model given in this paper. The observed likelihood function has the form

\[
\prod_{i=1}^n \int_\xi \prod_{j=1}^{n_i} \Big[ \big\{ \lambda(Y_{ij}) + \xi\, X_{ij}(Y_{ij})^{\mathrm T}\beta \big\}^{\Delta_{ij}} \exp\Big\{ -\Lambda(Y_{ij}) - \xi \int_0^{Y_{ij}} X_{ij}(s)^{\mathrm T}\beta\, ds \Big\} \Big] f(\xi)\, d\xi,
\]
where f(ξ) is the gamma-frailty density. Maximizing this likelihood is computationally much harder; it would be interesting to examine the efficiency gain over our estimators from using this approach. Furthermore, one may even allow the frailty variability to vary from cluster to cluster.

The transformation models can also be extended to other settings, for example, multiple types of events (Cai and Schaubel 2004), by allowing different baseline hazard functions and different covariate effects for different event types. Additionally, when neither the additive model nor the multiplicative model is a sufficiently good approximation to the data structure, some combination of additive and multiplicative effects as in Lin and Ying (1995) and Zeng et al. (2005), along with transformations, may be useful.


This research was partly supported by National Institutes of Health grant NHLBI R01 HL-57444.

Appendix: outline of the proofs

Proof of Theorem 1 Let

\[
l_n(\beta) = n^{-1} \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big)\, dt \right\}.
\]
First, we note


Thus, in any neighborhood of β0, ln(β) converges almost surely and uniformly in β to

\[
l(\beta) = E\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big)\, dt \right\} \right].
\]
Moreover, ∇βln(β) converges uniformly to ∇βl(β). Since

\[
l(\beta_0) = E\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta_0\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big)\, dt \right\} \right] = E\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] d\Lambda(t) \right] = 0
\]

and −∇βl(β0) = Σ is non-singular by assumption, from the inverse function theorem there exists some β̂ such that ln(β̂) = 0. Therefore, we have shown β̂ →p β0.

By the Taylor expansion, it is easy to show

\[
\sqrt n\,(\widehat\beta - \beta_0) = \Sigma^{-1}\, \mathbb G_n\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta_0\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big)\, dt \right\} \right] + o_p(1),
\]

where 𝔾n denotes the empirical process based on n i.i.d. observations. Furthermore, from Eq. 3 and the fact that

\[
\frac{1}{n^{-1}\sum_{i=1}^n \sum_{j=1}^{n_i} R_{ij}(u)} = \frac{1}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]} - \frac{n^{-1}\sum_{i=1}^n \sum_{j=1}^{n_i} R_{ij}(u) - E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]^2} + o_p(n^{-1/2}),
\]
we obtain

\[
\begin{aligned}
\sqrt n\,\big(\widehat\Lambda(t) - \Lambda_0(t)\big)
&= \mathbb G_n\left\{ \int_0^t \frac{\sum_{j=1}^{n_i} \big[ dN_{ij}(u) - R_{ij}(u) X_{ij}(u)^{\mathrm T}\widehat\beta\, Q'\big(\int_0^u X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\big)\, du \big]}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]} \right\} \\
&\quad - \mathbb G_n\left\{ \int_0^t \frac{E\Big[\sum_{j=1}^{n_i} \big[ dN_{ij}(u) - R_{ij}(u) X_{ij}(u)^{\mathrm T}\widehat\beta\, Q'\big(\int_0^u X_{ij}(s)^{\mathrm T}\widehat\beta\, ds\big)\, du \big]\Big] \sum_{j=1}^{n_i} R_{ij}(u)}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]^2} \right\} \\
&\quad - E\left\{ \int_0^t \frac{E\Big[\sum_{j=1}^{n_i} R_{ij}(u)\, \nabla_\beta\big\{ X_{ij}(u)^{\mathrm T}\beta\, Q'\big(\int_0^u X_{ij}(s)^{\mathrm T}\beta\, ds\big) \big\}\big|_{\beta=\widetilde\beta}\, du \Big]}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]} \right\}^{\mathrm T} \sqrt n\,(\widehat\beta - \beta_0),
\end{aligned}
\]

where β̃ lies between β̂ and β0. Since Xij(t), Rij(t), and Nij(t) are stochastic processes with bounded variation, they belong to some Donsker class indexed by t. The same holds for Xij(t)Tβ, ∫0^t Xij(s)Tβ ds, and ∫0^t Xij(s)Xij(s)Tβ ds as stochastic processes indexed by both t and β. As a result, we have

\[
\begin{aligned}
\sqrt n\,\big(\widehat\Lambda(t) - \Lambda_0(t)\big)
&= \mathbb G_n\left\{ \int_0^t \frac{\sum_{j=1}^{n_i} \big[ dN_{ij}(u) - R_{ij}(u) X_{ij}(u)^{\mathrm T}\beta_0\, Q'\big(\int_0^u X_{ij}(s)^{\mathrm T}\beta_0\, ds\big)\, du \big]}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]} \right\} \\
&\quad - \mathbb G_n\left\{ \int_0^t \frac{E\Big[\sum_{j=1}^{n_i} \big[ dN_{ij}(u) - R_{ij}(u) X_{ij}(u)^{\mathrm T}\beta_0\, Q'\big(\int_0^u X_{ij}(s)^{\mathrm T}\beta_0\, ds\big)\, du \big]\Big] \sum_{j=1}^{n_i} R_{ij}(u)}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]^2} \right\} \\
&\quad - E\left\{ \int_0^t \frac{E\Big[\sum_{j=1}^{n_i} R_{ij}(u)\, \nabla_\beta\big\{ X_{ij}(u)^{\mathrm T}\beta\, Q'\big(\int_0^u X_{ij}(s)^{\mathrm T}\beta\, ds\big) \big\}\big|_{\beta=\beta_0}\, du \Big]}{E\big[\sum_{j=1}^{n_i} R_{ij}(u)\big]} \right\}^{\mathrm T} \sqrt n\,(\widehat\beta - \beta_0) + o_p(1).
\end{aligned}
\]

Theorem 1 then follows from the Donsker theorem (cf. van der Vaart and Wellner 1996).

Proof of Theorem 2 First note

\[
\begin{aligned}
&\frac{1}{\sqrt n}\left\{ \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\widehat\beta \le x\big)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta_0\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big)\, dt \right\} \right\} \\
&\quad = \frac{1}{\sqrt n}\left\{ \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\widehat\beta \le x\big)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] dM_{ij}(t) \right\},
\end{aligned}
\]

where

\[
dM_{ij}(t) = dN_{ij}(t) - R_{ij}(t)\left\{ d\Lambda_0(t) + X_{ij}(t)^{\mathrm T}\beta_0\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big)\, dt \right\}.
\]
From the martingale property of dMij, we obtain

\[
\begin{aligned}
&\frac{1}{\sqrt n}\left\{ \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\widehat\beta \le x\big)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta_0\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big)\, dt \right\} \right\} \\
&\quad = \frac{1}{\sqrt n}\left\{ \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\beta_0 \le x\big) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\beta_0 \le x\big)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] dM_{ij}(t) \right\} + o_p(1).
\end{aligned}
\]

Therefore, we can rewrite Gn(x) as

\[
\begin{aligned}
G_n(x) &= n^{1/2}\, \mathbb G_n\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\widehat\beta \le x\big)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \left\{ dN_{ij}(t) - R_{ij}(t) X_{ij}(t)^{\mathrm T}\beta_0\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta_0\, ds\Big)\, dt \right\} \right] \\
&\quad - \left\{ n^{-1} \sum_{i=1}^n \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\widehat\beta \le x\big) - \frac{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\widehat\beta \le x\big)}{\sum_{k=1}^n \sum_{l=1}^{n_k} R_{kl}(t)} \right] \nabla_\beta\Big( X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big) \Big)\Big|_{\beta=\beta_0}\, dt \right\}^{\mathrm T} n\,(\widehat\beta - \beta_0) + o_p(n^{1/2}).
\end{aligned}
\]

Since I(Xij(t)Tβ ≤ x), as a bounded process indexed by t, β, and x, belongs to some Donsker class, we obtain

\[
\begin{aligned}
n^{-1/2} G_n(x) &= \mathbb G_n\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\beta_0 \le x\big) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\beta_0 \le x\big)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] dM_{ij}(t) \right] \\
&\quad - E\left\{ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ I\big(X_{ij}(t)^{\mathrm T}\beta_0 \le x\big) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\, I\big(X_{kl}(t)^{\mathrm T}\beta_0 \le x\big)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] \nabla_\beta\Big( X_{ij}(t)^{\mathrm T}\beta\, Q'\!\Big(\int_0^t X_{ij}(s)^{\mathrm T}\beta\, ds\Big) \Big)\Big|_{\beta=\beta_0}\, dt \right\}^{\mathrm T} \sqrt n\,(\widehat\beta - \beta_0) + o_p(1).
\end{aligned}
\]

Using the proof of Theorem 1, we have

\[
\sqrt n\,(\widehat\beta - \beta_0) = \Sigma^{-1}\, \mathbb G_n\left[ \sum_{j=1}^{n_i} \int R_{ij}(t) \left[ X_{ij}(t) - \frac{E\big[\sum_{l=1}^{n_k} R_{kl}(t) X_{kl}(t)\big]}{E\big[\sum_{l=1}^{n_k} R_{kl}(t)\big]} \right] dM_{ij}(t) \right] + o_p(1).
\]
Theorem 2 thus follows from the Donsker theorem.

Contributor Information

Donglin Zeng, Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA, dzeng@bios.unc.edu.

Jianwen Cai, Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA, cai@bios.unc.edu.


  • Aalen OO. A linear regression model for the analysis of lifetimes. Stat Med. 1989;8:907–925. [PubMed]
  • Andersen PK, Borgan O, Gill RD, Keiding N. Statistical models based on counting processes. New York: Springer-Verlag; 1993.
  • Cai J, Schaubel DE. Marginal means/rates models for multiple type recurrent event data. Lifetime Data Anal. 2004;10:121–138. [PubMed]
  • Cai T, Cheng SC, Wei LJ. Semiparametric mixed-effects models for clustered failure time data. J Am Stat Assoc. 2002;97:514–522.
  • Cox DR. Regression models and life tables (with discussion). J R Stat Soc B. 1972;34:187–220.
  • Gandy A, Jensen U. Checking a semiparametric additive risk model. Lifetime Data Anal. 2005;11:451–472. [PubMed]
  • Gandy A, Jensen U. On goodness-of-fit tests for Aalen’s additive risk model. Scand J Stat. 2005;32:425–445.
  • Huffer FW, McKeague IW. Weighted least squares estimation for Aalen’s additive risk model. J Am Stat Assoc. 1991;86:114–129.
  • Huster WJ, Brookmeyer R, Self SC. Modelling paired survival data with covariates. Biometrics. 1989;45:145–156. [PubMed]
  • Kupfer C, ETDRS Research Group. The diabetic retinopathy vitrectomy study (editorial). Am J Ophthalmol. 1976;81:687–690. [PubMed]
  • Lin DY, Ying ZL. Semiparametric analysis of the additive risk model. Biometrika. 1994;81:61–71.
  • Lin DY, Ying ZL. Semiparametric analysis of general additive-multiplicative intensity models for counting processes. Ann Stat. 1995;23:1712–1734.
  • McKeague IW, Sasieni PD. A partly parametric additive risk model. Biometrika. 1994;81:501–514.
  • Murphy SA. Consistency in a proportional hazards model incorporating a random effect. Ann Stat. 1994;22:712–731.
  • Murphy SA. Asymptotic theory for the frailty model. Ann Stat. 1995;23:182–198.
  • Parner E. Asymptotic theory for the correlated gamma-frailty model. Ann Stat. 1998;26:183–214.
  • Shen Y, Cheng SC. Confidence bands for cumulative incidence curves under the additive risk model. Biometrics. 1999;55:1093–1100. [PubMed]
  • Su J, Wei LJ. A lack-of-fit test for the mean function in a generalized linear model. J Am Stat Assoc. 1991;86:420–426.
  • van der Vaart AW, Wellner JA. Weak convergence and empirical processes. New York: Springer; 1996.
  • Yin G. Model checking for additive hazards model with multivariate survival data. J Multivariate Anal. 2007;98:1018–1032.
  • Yin G, Cai J. Additive hazards model with multivariate failure time data. Biometrika. 2004;91:801–818.
  • Yin G, Li H, Zeng D, Zhou Y. Partially linear additive hazards regression with varying coefficients. J Am Stat Assoc. 2008;103:1200–1213.
  • Zeng D, Lin DY, Yin G. Maximum likelihood estimation in proportional odds model with random effects. J Am Stat Assoc. 2005;100:470–483.
  • Zeng D, Yin G, Ibrahim J. Inference for a class of transformed hazards models. J Am Stat Assoc. 2005;100:1000–1008.