
Commun Stat Theory Methods. Author manuscript; available in PMC 2010 January 5.

Published in final edited form as:

Commun Stat Theory Methods. 2009 January 1; 38(14): 2333–2347.

doi: 10.1080/03610920802536958

PMCID: PMC2802211

NIHMSID: NIHMS76096

Karthik Devarajan, Division of Population Science, Fox Chase Cancer Center, Philadelphia, PA 19111

Karthik Devarajan: karthik.devarajan@fccc.edu; Nader Ebrahimi: nader@math.niu.edu


**Abstract**

This paper presents methods for testing covariate effect in the Cox proportional hazards model based on Kullback-Leibler divergence and Renyi's information measure. Renyi's measure is referred to as the information divergence of order *γ* (*γ* ≠ 1) between two distributions. In the limiting case *γ* → 1, Renyi's measure becomes Kullback-Leibler divergence. In our case, the distributions correspond to the baseline and one possibly due to a covariate effect. Our proposed statistics are simple transformations of the estimated parameter vector in the Cox proportional hazards model, and are compared with the Wald, likelihood ratio and Score tests that are widely used in practice. Finally, the methods are illustrated using two real-life data sets.

**1 Introduction**

The Cox proportional hazards (PH) model (Cox, 1972) offers a method for exploring the association of covariates with the survival time outcome variable often seen in medical and engineering studies. The observed data consist of independent observations on the triple (*X*, *δ*, **z**), where *X* is the observed (possibly right-censored) survival time, *δ* is the censoring indicator (*δ* = 1 if the event is observed and *δ* = 0 otherwise), and **z** is a *p*-dimensional vector of covariates.

The Cox PH model is perhaps the most widely used model in censored survival data analysis. For a given vector of observed covariates **z**, the hazard function of the survival time *T* is given by

$$\lambda (t\mid \mathit{z})={\lambda}_{0}(t)exp({\mathit{\beta}}^{\prime}\mathit{z})$$

(1.1)

where *λ*_{0}(*t*) is an unspecified baseline function of time and **β**′ = (*β*_{1}, …, *β*_{*p*}) is the vector of unknown regression coefficients. Equivalently, the model can be written as

$$S(t\mid \mathit{z})={({S}_{0}(t))}^{exp({\mathit{\beta}}^{\prime}\mathit{z})}$$

(1.2)

and

$$\mathrm{\Lambda}(t\mid \mathit{z})={\mathrm{\Lambda}}_{0}(t)exp({\mathit{\beta}}^{\prime}\mathit{z})$$

(1.3)

where *S*_{0}(*t*) and Λ_{0}(*t*) are the baseline survival and cumulative hazard functions, respectively; (1.2) and (1.3) model the survival and cumulative hazard functions of *T*.

Checking for covariate effect in (1.1) is a matter of great interest. Several procedures have been described, including the Wald, likelihood ratio (LR) and Score tests (see Klein and Moeschberger (2003)). In this paper, we develop tests for covariate effect in the Cox PH model (1.1) that are easy to compute and perform better than these standard tests. Our tests are based on Kullback-Leibler (KL) divergence as well as Renyi's divergence. In Section 2, we derive expressions for our test statistics based on these measures. In Section 3, we discuss the implementation of the proposed tests and give two illustrative examples. Section 4 compares the performance of the proposed tests with existing tests in terms of power and consistency via extensive simulation studies. Section 5 provides some concluding remarks.

**2 Testing for Covariate Effect in the Cox PH Model**

In this case, our null hypothesis of interest is non-parametric and is given by

$${H}_{0}:\mathrm{\Lambda}(t\mid \mathit{z})={\mathrm{\Lambda}}_{0}(t).$$

(2.1)

The alternative is given by

$${H}_{1}:\mathrm{\Lambda}(t\mid \mathit{z})=exp({\mathit{\beta}}^{\prime}\mathit{z})\{{\mathrm{\Lambda}}_{0}(t)\},$$

(2.2)

where the terms are as defined earlier. We observe that the null hypothesis corresponds to **β** = **0**, indicating no covariate effect. We can now re-write our hypotheses as

$${H}_{0}:\mathit{\beta}=\mathbf{0}\phantom{\rule{0.16667em}{0ex}}\text{against}\phantom{\rule{0.16667em}{0ex}}{H}_{1}:\mathit{\beta}\ne \mathbf{0}.$$

(2.3)

We use KL divergence (see Kullback (1978), Ebrahimi et al. (1994) and Ebrahimi and Soofi (2004) for more details) to discriminate between the null and alternative hypotheses.

In general, the KL divergence between two distribution functions *F* and *F*_{0} is

$$I(F:{F}_{0})={\int}_{0}^{\infty}f(x)\left\{log\frac{f(x)}{{f}_{0}(x)}\right\}dx,$$

(2.4)

where *f* and *f*_{0} are the density functions corresponding to *F* and *F*_{0}, respectively. *I*(*F*: *F*_{0}) is also known as the directed divergence and measures the discrepancy between *F* and *F*_{0} in the direction of *F*. It is well known that *I*(*F*: *F*_{0}) ≥ 0 and the equality holds if and only if *F* = *F*_{0}. Similarly, one can also define the directed divergence

$$I({F}_{0}:F)={\int}_{0}^{\infty}{f}_{0}(x)\left\{log\frac{{f}_{0}(x)}{f(x)}\right\}dx$$

(2.5)

which measures the discrepancy between *F* and *F*_{0} in the direction of *F*_{0}. Combining (2.4) and (2.5), we can derive the divergence measure given by

$$J(F:{F}_{0})=I(F:{F}_{0})+I({F}_{0}:F)$$

(2.6)

measuring the difficulty of discriminating between *F*_{0} and *F*.

Using (1.2), we can write the density functions under *H*_{1} and *H*_{0} as

$$f(t\mid \mathit{z})={\lambda}_{F}(t\mid \mathit{z})S(t\mid \mathit{z})$$

(2.7)

and

$${f}_{0}(t)={\lambda}_{0}(t){S}_{0}(t),$$

(2.8)

respectively, where *λ*_{*F*} is the hazard function under *H*_{1}. Using (1.1), the directed divergence *I*_{1}(*F*: *F*_{0}|**z**) given the covariate vector **z** is

$$\begin{array}{l}{I}_{1}(F:{F}_{0}\mid \mathit{z})={\int}_{0}^{\infty}f(t\mid \mathit{z})log\left\{\frac{f(t\mid \mathit{z})}{{f}_{0}(t)}\right\}dt\\ ={\int}_{0}^{\infty}f(t\mid \mathit{z})log\left\{\frac{{\lambda}_{F}(t\mid \mathit{z})S(t\mid \mathit{z})}{{\lambda}_{0}(t){S}_{0}(t)}\right\}dt\\ =exp(-{\mathit{\beta}}^{\prime}\mathit{z})+{\mathit{\beta}}^{\prime}\mathit{z}-1,\end{array}$$

(2.9)

and the directed divergence *I*_{2}(*F*_{0}: *F*|**z**) given the covariate vector **z** is

$$\begin{array}{l}{I}_{2}({F}_{0}:F\mid \mathit{z})={\int}_{0}^{\infty}{f}_{0}(t)log\left\{\frac{{f}_{0}(t)}{f(t\mid \mathit{z})}\right\}dt\\ ={\int}_{0}^{\infty}{f}_{0}(t)log\left\{\frac{{\lambda}_{0}(t){S}_{0}(t)}{{\lambda}_{F}(t\mid \mathit{z})S(t\mid \mathit{z})}\right\}dt\\ =exp({\mathit{\beta}}^{\prime}\mathit{z})-{\mathit{\beta}}^{\prime}\mathit{z}-1.\end{array}$$

(2.10)

The divergence measure *J*(*F*: *F*_{0}|**z**) given the covariate vector **z** is then

$$\begin{array}{l}J(F:{F}_{0}\mid \mathit{z})={I}_{1}(F:{F}_{0}\mid \mathit{z})+{I}_{2}({F}_{0}:F\mid \mathit{z})\\ =exp({\mathit{\beta}}^{\prime}\mathit{z})+exp(-{\mathit{\beta}}^{\prime}\mathit{z})-2.\end{array}$$

(2.11)
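As a quick numerical check of (2.9)–(2.11), the closed forms can be compared with direct integration of the KL integral (2.4). The sketch below (Python, purely illustrative; the paper's own implementation is in R) assumes a unit-exponential baseline, *λ*_{0}(*t*) = 1, so that *f*_{0}(*t*) = *e*^{−*t*} and *f*(*t*|*z*) = *θe*^{−*θt*} with *θ* = exp(**β**′**z**); the closed forms themselves do not depend on this choice.

```python
import math

def kl_forward(bz):
    # I1(F:F0 | z) from (2.9): exp(-b'z) + b'z - 1
    return math.exp(-bz) + bz - 1.0

def kl_reverse(bz):
    # I2(F0:F | z) from (2.10): exp(b'z) - b'z - 1
    return math.exp(bz) - bz - 1.0

def j_divergence(bz):
    # J(F:F0 | z) from (2.11): exp(b'z) + exp(-b'z) - 2
    return math.exp(bz) + math.exp(-bz) - 2.0

def kl_forward_numeric(bz, upper=50.0, steps=100_000):
    # Midpoint-rule evaluation of the integral in (2.4) for the
    # illustrative unit-exponential baseline: f0(t) = exp(-t),
    # f(t|z) = theta*exp(-theta*t) with theta = exp(b'z).
    theta = math.exp(bz)
    h = upper / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        f = theta * math.exp(-theta * t)
        f0 = math.exp(-t)
        total += f * math.log(f / f0) * h
    return total
```

For **β**′**z** = log 2 the closed form and the integral agree to several decimals (both ≈ 0.1931), and *J* = *I*_{1} + *I*_{2} holds by construction.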

Another information measure that one may use to discriminate between *H*_{0} and *H*_{1} is Renyi’s measure. This measure generalizes KL divergence and is referred to as the information divergence of order *γ* between two distributions *F* and *F*_{0} and corresponding densities *f* and *f*_{0}. Conditional on the covariate vector ** z**, this measure is defined as

$$\begin{array}{l}{R}_{\gamma}(F:{F}_{0}\mid \mathit{z})=\frac{1}{\gamma -1}log\int {\left(\frac{f(x\mid \mathit{z})}{{f}_{0}(x)}\right)}^{\gamma -1}f(x\mid \mathit{z})dx\\ =\frac{1}{1-\gamma}log[(1-\gamma )exp({\mathit{\beta}}^{\prime}\mathit{z})+\gamma ]-{\mathit{\beta}}^{\prime}\mathit{z},\end{array}$$

(2.12)

where *γ* is the order of information and *γ* ≠ 1. In the limit *γ* → 1, (2.9) and (2.12) coincide. Like KL divergence, the information divergence of order *γ*, *R*_{*γ*}(*F*: *F*_{0}|**z**), is non-negative and equals zero if and only if *F* = *F*_{0}.

It should be emphasized that an important feature of our proposed measures *I*_{1}, *I*_{2}, *J* and *R*_{*γ*} is that all these information measures are invariant under any non-singular transformation of the failure time data.

Let us denote the observed data by (*X*_{*j*}, *δ*_{*j*}, **z**_{*j*}), *j* = 1, …, *n*, where *X*_{*j*} is the observed time, *δ*_{*j*} the censoring indicator and **z**_{*j*} the covariate vector for the *j*th individual.

Using (2.9) – (2.12), our discrimination information statistics based on KL divergence, *J* and Renyi’s measure are

$${\widehat{I}}_{1}(F:{F}_{0}\mid \mathit{z})=\sum _{i=1}^{n}\left[exp(-{\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i})+{\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i}-1\right],$$

(2.13)

$${\widehat{I}}_{2}({F}_{0}:F\mid \mathit{z})={\mathrm{\sum}}_{i=1}^{n}[exp({\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i})-{\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i}-1],$$

(2.14)

$$\widehat{J}(F:{F}_{0}\mid \mathit{z})=\sum _{i=1}^{n}\left[exp({\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i})+exp(-{\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i})-2\right],$$

(2.15)

and

$${\widehat{R}}_{\gamma}(F:{F}_{0}\mid \mathit{z})=\sum _{i=1}^{n}\left\{\frac{1}{1-\gamma}log\left[(1-\gamma )exp({\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i})+\gamma \right]-{\widehat{\mathit{\beta}}}^{\prime}{\mathit{z}}_{i}\right\},$$

(2.16)

respectively. It should be noted that one can use (2.13)–(2.16) to perform the global test of *H*_{0}: **β** = **0** in (2.3) as well as local tests for individual components of **β**.
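The statistics (2.13)–(2.16) are elementary functions of the MPL estimate and the covariates, so they can be computed in a few lines once the estimate of **β** is available. A minimal single-covariate sketch (Python for illustration; the paper's implementation is in R):

```python
import math

def i1_stat(beta_hat, z):
    # (2.13): sum of exp(-b'z_i) + b'z_i - 1 over observations
    return sum(math.exp(-beta_hat * zi) + beta_hat * zi - 1.0 for zi in z)

def i2_stat(beta_hat, z):
    # (2.14): sum of exp(b'z_i) - b'z_i - 1
    return sum(math.exp(beta_hat * zi) - beta_hat * zi - 1.0 for zi in z)

def j_stat(beta_hat, z):
    # (2.15): sum of exp(b'z_i) + exp(-b'z_i) - 2
    return sum(math.exp(beta_hat * zi) + math.exp(-beta_hat * zi) - 2.0 for zi in z)

def renyi_stat(beta_hat, z, gamma):
    # (2.16) with gamma != 1; the per-observation term follows (2.12)
    return sum(
        math.log((1.0 - gamma) * math.exp(beta_hat * zi) + gamma) / (1.0 - gamma)
        - beta_hat * zi
        for zi in z
    )
```

All four statistics vanish at the null value 0 of the estimate, as the closed forms require.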

The following theorem gives asymptotic distributions of our proposed test statistics.

**Theorem 1**: *Î*_{1}, *Î*_{2}, *Ĵ* and *R̂*_{*γ*} are maximum likelihood estimators and are asymptotically normal with means *I*_{1}, *I*_{2}, *J* and *R*_{*γ*}, respectively.

As seen from (2.13)–(2.16), *Î*_{1}, *Î*_{2}, *Ĵ* and *R̂*_{*γ*} are simple transformations of the MPL estimator *β̂* of *β*.

Using Theorem 1, we compute the exact variances of the proposed statistics *Î*_{1}, *Î*_{2} and *Ĵ* under the null hypothesis. For simplicity, we derive the variance estimators for the case of a single covariate *z*. The extension to multiple covariates is straightforward. Given that *β̂* is asymptotically normal with mean *β* = 0 and variance *σ*^{2} under the null, and using the moment generating function of the normal random variable (i.e., if *β̂* ~ *N*(0, *σ*^{2}), then *E*[exp(*tβ̂*)] = exp(*t*^{2}*σ*^{2}/2)), we obtain, for *z*_{*i*} fixed,

$$\begin{array}{l}\mathit{Var}({\widehat{I}}_{1})=V\left(\sum _{i=1}^{n}[exp(-{\widehat{\beta}}^{\prime}{z}_{i})+{\widehat{\beta}}^{\prime}{z}_{i}-1]\right)\\ =\sum _{i=1}^{n}\sum _{j=1}^{n}\mathit{Cov}([exp(-{\widehat{\beta}}^{\prime}{z}_{i})+{\widehat{\beta}}^{\prime}{z}_{i}-1],[exp(-{\widehat{\beta}}^{\prime}{z}_{j})+{\widehat{\beta}}^{\prime}{z}_{j}-1])\\ =\sum _{i=1}^{n}\sum _{j=1}^{n}exp[{({z}_{i}+{z}_{j})}^{2}{\sigma}^{2}/2]-exp[({z}_{i}^{2}+{z}_{j}^{2}){\sigma}^{2}/2]+{\sigma}^{2}{z}_{i}{z}_{j}[1-exp({z}_{i}^{2}{\sigma}^{2}/2)-exp({z}_{j}^{2}{\sigma}^{2}/2)].\end{array}$$

Similarly,

$$\begin{array}{l}\mathit{Var}({\widehat{I}}_{2})=V\left(\sum _{i=1}^{n}[exp({\widehat{\beta}}^{\prime}{z}_{i})-{\widehat{\beta}}^{\prime}{z}_{i}-1]\right)\\ =\sum _{i=1}^{n}\sum _{j=1}^{n}\mathit{Cov}([exp({\widehat{\beta}}^{\prime}{z}_{i})-{\widehat{\beta}}^{\prime}{z}_{i}-1],[exp({\widehat{\beta}}^{\prime}{z}_{j})-{\widehat{\beta}}^{\prime}{z}_{j}-1])\\ =\sum _{i=1}^{n}\sum _{j=1}^{n}exp[{({z}_{i}+{z}_{j})}^{2}{\sigma}^{2}/2]-exp[({z}_{i}^{2}+{z}_{j}^{2}){\sigma}^{2}/2]+{\sigma}^{2}{z}_{i}{z}_{j}[1-exp({z}_{i}^{2}{\sigma}^{2}/2)-exp({z}_{j}^{2}{\sigma}^{2}/2)]\end{array}$$

and

$$\begin{array}{l}\mathit{Var}(\widehat{J})=V\left(\sum _{i=1}^{n}[exp({\widehat{\beta}}^{\prime}{z}_{i})+exp(-{\widehat{\beta}}^{\prime}{z}_{i})-2]\right)\\ =\sum _{i=1}^{n}\sum _{j=1}^{n}\mathit{Cov}([exp({\widehat{\beta}}^{\prime}{z}_{i})+exp(-{\widehat{\beta}}^{\prime}{z}_{i})-2],[exp({\widehat{\beta}}^{\prime}{z}_{j})+exp(-{\widehat{\beta}}^{\prime}{z}_{j})-2])\\ =\sum _{i=1}^{n}\sum _{j=1}^{n}2exp[{({z}_{i}+{z}_{j})}^{2}{\sigma}^{2}/2]-4exp[({z}_{i}^{2}+{z}_{j}^{2}){\sigma}^{2}/2]+2exp[{({z}_{i}-{z}_{j})}^{2}{\sigma}^{2}/2].\end{array}$$
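The three double-sum variance expressions above translate directly into code. A sketch for the single-covariate case, taking the estimated null variance *σ*^{2} of *β̂* as given (note that the printed expressions for *Var*(*Î*_{1}) and *Var*(*Î*_{2}) are term-for-term identical):

```python
import math

def var_i1(z, sigma2):
    # Null variance of I1_hat from the double sum above; sigma2 is the
    # (estimated) variance of beta_hat under H0.
    total = 0.0
    for zi in z:
        for zj in z:
            total += (math.exp((zi + zj) ** 2 * sigma2 / 2.0)
                      - math.exp((zi ** 2 + zj ** 2) * sigma2 / 2.0)
                      + sigma2 * zi * zj
                        * (1.0 - math.exp(zi ** 2 * sigma2 / 2.0)
                               - math.exp(zj ** 2 * sigma2 / 2.0)))
    return total

var_i2 = var_i1  # the printed expression for Var(I2_hat) is identical

def var_j(z, sigma2):
    # Null variance of J_hat from the double sum above
    total = 0.0
    for zi in z:
        for zj in z:
            total += (2.0 * math.exp((zi + zj) ** 2 * sigma2 / 2.0)
                      - 4.0 * math.exp((zi ** 2 + zj ** 2) * sigma2 / 2.0)
                      + 2.0 * math.exp((zi - zj) ** 2 * sigma2 / 2.0))
    return total
```

Both variances collapse to zero when *σ*^{2} = 0, as every summand cancels exactly.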

For our measure based on Renyi's divergence *R̂*_{*γ*}, the variance estimator is obtained using the delta method, simply by expanding the term log[(1 − *γ*)exp(*β̂z*_{*i*}) + *γ*] in a Taylor series around *β̂* = 0. We have

$$\begin{array}{l}\mathit{Var}({\widehat{R}}_{\gamma})=V\left(c\sum _{i=1}^{n}log[(1-\gamma )exp({\widehat{\beta}}^{\prime}{z}_{i})+\gamma ]-{\widehat{\beta}}^{\prime}{z}_{i}\right)\\ ={c}^{2}\sum _{i=1}^{n}\sum _{j=1}^{n}\mathit{Cov}\left(log[(1-\gamma )exp({\widehat{\beta}}^{\prime}{z}_{i})+\gamma ]-{\widehat{\beta}}^{\prime}{z}_{i},log[(1-\gamma )exp({\widehat{\beta}}^{\prime}{z}_{j})+\gamma ]-{\widehat{\beta}}^{\prime}{z}_{j}\right)\end{array}$$

where
$c=\frac{1}{\gamma -1}$. Using the first three terms in the expansion and the facts that *E*(*β̂*^{2*k*+1}) = 0 and
$E({\widehat{\beta}}^{2k})=\frac{(2k)!}{{2}^{k}k!}{\sigma}^{2k}$ for *k* = 1, 2, …, we obtain a reasonable approximation to the variance estimator given by

$$\mathit{Var}({\widehat{R}}_{\gamma})\approx {\left(\frac{1}{\gamma -1}\right)}^{2}\sum _{i=1}^{n}\sum _{j=1}^{n}\frac{1}{2}{\gamma}^{2}{z}_{i}^{2}{z}_{j}^{2}{\sigma}^{4}+\frac{5}{12}{(2{\gamma}^{2}-\gamma )}^{2}{z}_{i}^{3}{z}_{j}^{3}{\sigma}^{6}.$$
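The delta-method approximation can likewise be evaluated directly; the sketch below implements the displayed formula as printed (with *σ*^{4} = (*σ*^{2})^{2} and *σ*^{6} = (*σ*^{2})^{3}).

```python
def var_renyi_approx(z, sigma2, gamma):
    # Delta-method approximation to Var(R_gamma_hat) under H0,
    # implementing the displayed double-sum formula as printed.
    c2 = (1.0 / (gamma - 1.0)) ** 2
    total = 0.0
    for zi in z:
        for zj in z:
            total += (0.5 * gamma ** 2 * zi ** 2 * zj ** 2 * sigma2 ** 2
                      + (5.0 / 12.0) * (2.0 * gamma ** 2 - gamma) ** 2
                        * zi ** 3 * zj ** 3 * sigma2 ** 3)
    return c2 * total
```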

**3 Implementation of the Proposed Tests and Examples**

In this section, we describe the implementation of our proposed test statistics for testing (2.1) against (2.2). One can also follow similar steps for global testing as well as local testing. We apply the following steps to implement *Î*_{1}(*F*: *F*_{0}|**z**), *Î*_{2}(*F*_{0}: *F*|**z**), *Ĵ*(*F*: *F*_{0}|**z**) and *R̂*_{*γ*}(*F*: *F*_{0}|**z**):

**Step 1**: Use the partial likelihood procedure to estimate **β** (see Klein & Moeschberger (2003)). Denote the estimate by *β̂*.

**Step 2**: Replace **β** by *β̂* in (2.13)–(2.16) to obtain the observed values of the test statistics.

**Step 3**: From Theorem 1, it is clear that *Î*_{1}, *Î*_{2}, *Ĵ* and *R̂*_{*γ*} in (2.13)–(2.16) have asymptotically normal distributions under *H*_{0}. Estimate their null variances using the expressions derived in Section 2, with *σ*^{2} replaced by the estimated variance of *β̂* under *H*_{0}.

**Step 4**: (a) Using *Î*_{*i*} (*i* = 1, 2), reject *H*_{0} at significance level *α* if the ratio of the squared statistic to its estimated null variance exceeds the upper-*α* critical value of the *χ*^{2} distribution with 1 degree of freedom. (b) Apply the analogous rule for *Ĵ* and *R̂*_{*γ*}.

From Theorem 1, it is clear that the tests specified above using *Î*_{1}, *Î*_{2}, *Ĵ* and *R̂*_{*γ*} are all consistent tests.
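The steps above can be sketched end-to-end for the *Ĵ*-based test. In practice *β̂* and its estimated null variance come from a partial-likelihood fit (Step 1); here they are passed in as plain numbers, so the function covers only Steps 2–4. This is a minimal illustration, not the authors' R implementation.

```python
import math

CHI2_1_CRIT_05 = 3.841  # upper 5% point of the chi-square(1) distribution

def j_test(beta_hat, z, sigma2_null, crit=CHI2_1_CRIT_05):
    # Step 2: evaluate J_hat from (2.15) at beta_hat.
    j_hat = sum(math.exp(beta_hat * zi) + math.exp(-beta_hat * zi) - 2.0
                for zi in z)
    # Step 3: null variance of J_hat via the double-sum expression.
    var = 0.0
    for zi in z:
        for zj in z:
            var += (2.0 * math.exp((zi + zj) ** 2 * sigma2_null / 2.0)
                    - 4.0 * math.exp((zi ** 2 + zj ** 2) * sigma2_null / 2.0)
                    + 2.0 * math.exp((zi - zj) ** 2 * sigma2_null / 2.0))
    # Step 4: compare J_hat^2 / Var(J_hat) with the chi-square(1) cut-off.
    stat = j_hat ** 2 / var if var > 0.0 else 0.0
    return stat, stat > crit
```

With *β̂* = 0 the statistic is exactly zero and the test never rejects; for a sizeable *β̂* on a two-group covariate it rejects as expected.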

We implemented our methods using the R statistical system (R Development Core Team (2005)). The MPLE *β̂* was computed using the function *coxph*, and the variance-covariance matrix of *β̂* under *H*_{0} was computed using the function *vcov*. All computations for the simulations, including those pertaining to the variances of the proposed statistics, were programmed in R. We utilized the parallel implementation in the *snow* package in R using the High Performance Computing cluster at Fox Chase Cancer Center.

In this section we illustrate our approach using two real-life examples. For Renyi's measure, we only compute *R̂*_{*γ*} for *γ* = 0.5.

The data consist of times to remission (in weeks) of 42 leukemia patients from Kalbfleisch & Prentice (2002). This data set is available in the R statistical system (R Development Core Team (2005)). The patients are randomized into two groups: a control group and a treatment group to which the drug 6-MP was administered. Each group consists of 21 patients. The covariate is group membership specified by the indicator variable *z*, where *z* = 0 denotes the control group and *z* = 1 denotes the treatment group.

We are interested in testing for covariate effect, i.e., whether there is any significant treatment effect (*z* = 1) relative to the control group (*z* = 0). Here *p* = 1 and the null hypothesis is

$${H}_{0}:\beta =0$$

corresponding to no covariate effect against the two-sided alternative

$${H}_{1}:\beta \ne 0.$$

The Cox PH model was fit to these data. The estimate of the regression coefficient is 1.57 with a standard error of 0.412 and a significant *p*-value of 1.4e-04. The estimate of the relative risk is *e*^{*β̂*} = 4.81, i.e., the risk is 4.81 times higher for a patient in the control group relative to that of a patient in the treatment group. Using the MPL estimate of *β*, the estimated divergence for a patient in the treatment group is

$$\widehat{J}(F:{F}_{0}\mid z=1)=exp(\widehat{\beta})+exp(-\widehat{\beta})-2=\mathrm{3.02.}$$

Also, *Ĵ*(*F: F*_{0}|*z* = 0) = 0. Consequently from (2.15), *Ĵ*(*F:F*_{0}|*z*) = 63.51 and
$\frac{{\widehat{J}}^{2}}{\widehat{\mathit{Var}}(\widehat{J})}=128.07$ which gives a *p*-value < 1e-09 using an asymptotic *χ*^{2} distribution with 1 degree of freedom under the null hypothesis. Hence, we reject the null hypothesis of no covariate effect at a significance level 0.05. Similarly from (2.13), *Î*_{1}(*F: F*_{0}|*z*) = 47.14 and
$\frac{{\widehat{I}}_{1}^{2}}{\widehat{\mathit{Var}}({\widehat{I}}_{1})}=322.55$ which gives a *p*-value < 1e-09, and from (2.14), *Î*_{2}(*F*_{0}: *F*|*z*) = 16.34 and
$\frac{{\widehat{I}}_{2}^{2}}{\widehat{\mathit{Var}}({\widehat{I}}_{2})}=38.92$ which gives a *p*-value < 1e-09. Finally, using (2.16), Renyi's divergence of order *γ* for *γ* = 0.5 is *R̂*_{0.5}(*F*: *F*_{0}|*z*) = 11.83 and
$\frac{{\widehat{R}}_{.5}^{2}}{\widehat{\mathit{Var}}({\widehat{R}}_{.5})}=106.57$ which also gives a *p*-value < 10^{−9}. In this example, all our proposed tests confirm that the treatment has a positive effect. In the case of Wald, LR and Score tests, the computed statistics (and *p*-values) were 14.5 (1.38e-04), 16.4 (5.26e-05) and 17.3 (3.28e-05), respectively. Hence, these standard tests confirm the results from our proposed statistics.
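The *Ĵ* computation in this example is easy to reproduce from the published numbers. The sketch below uses *β̂* = 1.5721, the reported 1.57 carried to extra digits (an assumed value, chosen so the total matches the reported 63.51); the 21 control patients (*z* = 0) contribute nothing to *Ĵ*.

```python
import math

beta_hat = 1.5721   # published estimate 1.57 to extra digits (assumed)
n_treated = 21      # patients with z = 1; the 21 controls (z = 0) add 0

# Per-patient contribution to J_hat for z = 1, from (2.15)
per_patient = math.exp(beta_hat) + math.exp(-beta_hat) - 2.0
j_hat = n_treated * per_patient
print(round(per_patient, 2), round(j_hat, 2))  # prints: 3.02 63.51
```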

We now consider the leukemia data described in Feigl & Zelen (1965). This data set consists of uncensored survival times of 33 patients who died from acute myelogenous leukemia, and is also available in the R statistical system (R Development Core Team (2005)). The patients were also classified into two groups according to the presence or absence of a morphologic characteristic of white blood cells (AG). The white blood cell count at the time of diagnosis (WBC) was also measured for each patient.

We are interested in testing the effects of AG and WBC on the survival time of patients. Here, AG is a binary variable (present/absent) and WBC is a continuous covariate. Also, *p* = 2 and the null hypothesis is

$${H}_{0}:\mathit{\beta}=\mathbf{0}$$

corresponding to no effects due to AG and WBC against the two-sided alternative

$${H}_{1}:\mathit{\beta}\ne \mathbf{0}$$

that at least one covariate has an effect on survival time. We fit the Cox PH model to these data with AG (*z*_{1}) and log(WBC) (*z*_{2}) as our covariates, and performed a global test of the null hypothesis. AG is coded as 1 for present and 0 for absent. Based on this, the MPL estimates of the regression coefficients are *β̂*_{1} = −1.069 and *β̂*_{2} = 0.368 with corresponding *p*-values 0.013 and 0.017, respectively. That is, AG has a negative effect and WBC has a positive effect on survival time, and both effects are statistically significant. Now, our estimate of the divergence measure given by (2.15) is *Ĵ*(*F*: *F*_{0}|**z**) = 901190.8. Consequently,
$\frac{{\widehat{J}}^{2}}{\widehat{\mathit{Var}}(\widehat{J})}=8.76$ which gives a *p*-value of approximately 0.003 using an asymptotic *χ*^{2} distribution with 1 degree of freedom under the null hypothesis. Hence, we reject the null hypothesis of no covariate effect at a significance level of 0.05.

**4 Comparison of the Proposed Test with Existing Methods**

We compared the proposed test statistics *Î*_{1}(*F*: *F*_{0}|**z**), *Î*_{2}(*F*_{0}: *F*|**z**), *Ĵ*(*F*: *F*_{0}|**z**) and *R̂*_{*γ*}(*F*: *F*_{0}|**z**) with the Wald, LR and Score tests in terms of power and consistency via simulation studies, for both the two sample problem and the case of a continuous covariate.

For the two sample problem, simulations were performed using randomly generated data from exponential distributions with means 1 and 0.5. Sample sizes ranged from 5 to 20 per group (total sample size ranged from 10 to 40). Independent censoring based on a uniform distribution over a fixed interval (0,B) was applied to result in 25% censored observations. For the continuous covariate case, we randomly generated data from exponential distributions with means 0.5, 1 or 2 using the method outlined in Bender et al. (2005). For a sample of size *n*, we created a covariate vector by randomly permuting the vector (1/*n*, 2/*n*, …, *n*/*n*) once or generating a single sample from a standard normal or a uniform (0, 1) distribution and using this covariate vector across all simulations. Sample sizes ranged from 10 to 50 and simulations were performed in the manner outlined for the two sample problem.
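For an exponential baseline, the inverse-CDF construction of Bender et al. (2005) reduces to *T* = −log(*U*)/(*λ* exp(*βz*)) with *U* ~ Uniform(0, 1). A sketch of the data-generation step (Python for illustration; the covariate grid (1/*n*, …, *n*/*n*) follows the text but is used unpermuted here for simplicity, and the uniform censoring bound *B* is a free parameter):

```python
import math
import random

def simulate_exp_ph(n, beta, base_rate, censor_upper, seed=0):
    # Inverse-CDF generation for an exponential baseline (Bender et al., 2005):
    # T = -log(U) / (base_rate * exp(beta * z)), U ~ Uniform(0, 1].
    rng = random.Random(seed)
    data = []
    for i in range(n):
        z = (i + 1) / n                    # covariate grid (1/n, ..., n/n)
        u = 1.0 - rng.random()             # in (0, 1], avoids log(0)
        t = -math.log(u) / (base_rate * math.exp(beta * z))
        c = rng.random() * censor_upper    # independent uniform (0, B) censoring
        x, delta = (t, 1) if t <= c else (c, 0)
        data.append((x, delta, z))
    return data
```

Tuning `censor_upper` downward raises the realized censoring proportion toward a target such as the 25% used in the simulations.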

For each sample size, we computed the power and consistency at various levels of the Type I error *α* (i.e., *α* = 0.01, 0.05, 0.1, 0.15, 0.2, 0.25) based on 5000 simulations for each test. The simulation results for the two sample problem are tabulated in Tables 4.1 through 4.4. For the continuous covariate case, results are tabulated in Tables 4.5 through 4.8. In each table, the first row corresponding to each test statistic indicates the power and the second row (numbers enclosed within parentheses) indicates the consistency for each *α*. These results demonstrate that the proposed tests outperform all the leading tests in terms of power. In particular, we observe from the simulation results that in most cases, *Î*_{1} has better performance relative to all other proposed tests in terms of both power and consistency. This is true across sample sizes and censoring proportions. Also, as *γ* → 1, there is an improvement in the consistency of *R̂*_{*γ*}. There is also an overall improvement in its power and, in some cases, higher values of *γ* yield greater power.

Overall, we find that for some loss in consistency, there is a substantial increase in power for almost all of the proposed tests relative to the three standard tests. The only exception to this is *Î*_{2}. It is interesting to note that the realized *α* for the proposed tests stabilizes at around a Type I error cut-off of 0.15. This is not observed in the case of the three standard tests. It is well known that the Score test is asymptotically the most powerful test under the proportional hazards assumption. However, for small to moderate sample sizes, we find that the proposed tests demonstrate better performance. In particular, we recommend using our tests for small sample sizes. For larger sample sizes, we found that the performance of the proposed and standard tests is very similar (data not shown). In that case, use of either test (proposed or standard) would be appropriate. For the continuous covariate case, the overall results were similar across the three different approaches used for creating the covariate vector. This demonstrates that the proposed tests are insensitive to the choice of the covariate structure. Furthermore, for larger censoring proportions (such as 50% censored observations), we found the performance of the proposed and standard tests to be similar overall (data not shown). It should be mentioned that the critical value for the Wald statistic was based on the normalized *z*-score, and those for all other test statistics in this study were based on a Chi-square distribution with 1 degree of freedom at a significance level of 0.05.

**5 Concluding Remarks**

In this paper, we proposed several tests for covariate effect in the Cox PH model based on KL divergence and Renyi's divergence. These tests are useful for discriminating between the baseline distribution and one possibly due to the covariates in the Cox PH model. An important advantage of these discrimination information functions is that all of them are invariant under any non-singular transformation of the failure time data.

Our proposed tests are moderately easy to compute. An attractive feature of the proposed methods is that the test statistics are simple functions of the MPLE and independent of the baseline distribution. Using the results derived in this paper, one can easily implement our tests in a readily available, open source programming environment such as the R statistical system. All these tests, particularly *Î*_{1} and *R̂*_{*γ*} as *γ* → 1, performed better than the standard tests in terms of power for small to moderate sample sizes.

While we have specifically focused on testing for any covariate effect in the widely used Cox PH model using a class of discrimination information functions, one could be interested in applying similar methods to other survival models such as the accelerated failure time (AFT) model (Etezadi-Amoli & Ciampi, 1985) or the additive hazards (AH) model (Tibshirani & Ciampi, 1983). Unlike the Cox PH model, the AFT model allows for crossing hazard curves. For a test of the null hypothesis of no covariate effect against the alternative specified by the AFT model, the discrimination information functions (analogous to the quantities specified in (2.9)–(2.12)) would be dependent on the baseline hazard. This is also the case with the AH model. In that situation, one would have to apply spline approximations for parameter estimation as described in Devarajan (1999), and then follow the methods described in this paper for testing the covariate effect in these models. Devarajan & Ebrahimi (2001) utilized such an approach for testing the goodness of fit of the Cox PH model against a generalization of the Cox PH model that allows for crossing hazards. A detailed discussion of this method is beyond the scope of this article and the interested reader is referred to the paper referenced above.

The authors would like to thank the referees for their helpful comments. The work of the first author was supported in part by NIH grant P30 CA 06927 and an appropriation from the Commonwealth of Pennsylvania. The authors would like to acknowledge the services provided by the High Performance Computing Facility at Fox Chase Cancer Center.

Karthik Devarajan, Division of Population Science, Fox Chase Cancer Center, Philadelphia, PA 19111.

Nader Ebrahimi, Division of Statistics, Northern Illinois University, DeKalb, IL 60115.

**References**

- Asadi M, Ebrahimi N, Soofi E. Dynamic generalized information measures. Statistics and Probability Letters. 2006;71:85–98.
- Bender R, Augustin T, Blettner M. Generating survival times to simulate Cox proportional hazards models. Statistics in Medicine. 2005;24:1713–1723.
- Cox DR. Regression Models and Life Tables (with Discussion). Journal of the Royal Statistical Society-Series B. 1972;34:187–202.
- Devarajan K. Ph.D. Dissertation. Division of Statistics, Northern Illinois University; DeKalb, Illinois: 1999. Inference for a non-proportional hazards regression model and applications.
- Devarajan K, Ebrahimi N. Goodness-of-Fit Tests and Model Validity, Statistics for Industry and Technology. Birkhauser Publishers; Boston: 2001. Goodness of fit testing for the Cox proportional hazards model using Kullback-Leibler discrimination function; pp. 235–251.
- Ebrahimi N, Habibullah M, Soofi E. Testing exponentiality based on Kullback-Leibler information. Journal of the Royal Statistical Society-Series B. 1994;54:739–748.
- Ebrahimi N, Soofi E. Information functions for reliability. In: Soyer R, Mazzuchi TA, Singpurwalla N, editors. Mathematical Reliability: An Expository Perspective. Kluwer Academic; Boston: 2004.
- Etezadi-Amoli J, Ciampi A. A General Model for Testing the Proportional Hazards and the Accelerated Failure Time Hypothesis in the Analysis of Censored Survival Data with Covariates. Communications in Statistics-A. 1985;14:651–667.
- Feigl P, Zelen M. Estimation of exponential survival probabilities with concomitant information. Biometrics. 1965;21:826–838.
- Kalbfleisch JD, Prentice RL. The Statistical Analysis of Failure Time Data. 2. John Wiley; New York: 2002.
- Klein JP, Moeschberger ML. Survival Analysis: Techniques for Censored and Truncated Data. 2. Springer; New York: 2003.
- Kullback S. Information Theory and Statistics. Dover Publications Inc.; New York: 1978.
- R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria; 2005. URL http://www.R-project.org.
- Tibshirani RJ, Ciampi A. A family of proportional- and additive-hazards models for survival data. Biometrics. 1983;39:141–147.
