


Lifetime Data Anal. Author manuscript; available in PMC 2010 November 26.


PMCID: PMC2992554

NIHMSID: NIHMS252530

A. Tsodikov, Department of Biostatistics, School of Public Health, University of Michigan, 1420 Washington Heights, Ann Arbor, MI 48109-2029, USA. Email: tsodikov@umich.edu


For semiparametric models, interval estimation and hypothesis testing based on the information matrix for the full model is a challenge because of potentially unlimited dimension. Use of the profile information matrix for a small set of parameters of interest is an appealing alternative. Existing approaches for the estimation of the profile information matrix are either subject to the curse of dimensionality, or are ad-hoc and approximate and can be unstable and numerically inefficient. We propose a numerically stable and efficient algorithm that delivers an exact observed profile information matrix for regression coefficients for the class of Nonlinear Transformation Models [A. Tsodikov (2003) J R Statist Soc Ser B 65:759–774]. The algorithm deals with the curse of dimensionality and requires neither large matrix inverses nor explicit expressions for the profile surface.

In semiparametric models the parameter is partitioned as (β, *H*) with β a low-dimensional parameter of interest and *H* a high-dimensional nuisance parameter. For example, in semiparametric regression survival models, β is the vector of regression coefficients and *H* is the baseline cumulative hazard function estimated as a step-function by the Nonparametric Maximum Likelihood Estimator (NPMLE). The dimension of *H* is given by the number of distinct failure times and increases with the sample size.

Within the NPMLE framework the following tools are available for interval estimation and hypothesis testing for β.

*Likelihood Ratio*. The likelihood ratio statistic for testing *H*_{0}: β = β_{0} is defined as

$$\text{LR}({\beta}_{0})=2(\ell (\widehat{\beta},\widehat{H})-\ell ({\beta}_{0},\widehat{H}({\beta}_{0}))),$$

where ℓ is the log-likelihood function, (β̂, *Ĥ*) is the NPMLE of (β, *H*), and *Ĥ*(β) is the MLE of *H* given β. Although classical ML theory does not directly apply to unlimited dimension, for many semiparametric models LR has an asymptotic chi-square distribution with *d* degrees of freedom, where *d* is the dimension of β. A 100(1 − α)% confidence set for β is given by

$$\{\beta :\text{LR}(\beta )\le {C}_{d,\alpha}\},$$

where *C*_{d,α} is the upper α percentile of the chi-square distribution with *d* degrees of freedom. When the asymptotic distribution of LR is unknown, the bootstrap can be used to approximate *C*_{d,α}. The likelihood ratio approach to building confidence regions for β involves inverting the LR surface, which is quite computer intensive as repeated maximizations of the likelihood with respect to *H* are required.

*Wald Statistic*. An alternative method of inference for β is based on the Wald statistic defined as

$$W(\beta )={(\widehat{\beta}-\beta )}^{\mathrm{T}}{\displaystyle {\sum}_{\beta \beta}^{-1}}(\widehat{\beta}-\beta ),$$

where ∑_{ββ} is the β-submatrix of the inverse of the observed information matrix

$$I=\left(\begin{array}{cc}-\frac{{\partial}^{2}\ell (\beta ,H)}{\partial \beta \partial {\beta}^{\mathrm{T}}}& -\frac{{\partial}^{2}\ell (\beta ,H)}{\partial \beta \partial {H}^{\mathrm{T}}}\\ -\frac{{\partial}^{2}\ell (\beta ,H)}{\partial H\partial {\beta}^{\mathrm{T}}}& -\frac{{\partial}^{2}\ell (\beta ,H)}{\partial H\partial {H}^{\mathrm{T}}}\end{array}\right)\phantom{\rule{thinmathspace}{0ex}}{|}_{\beta =\widehat{\beta},H=\widehat{H}}.$$

Note that in the presence of nuisance parameters the information matrix needs to be inverted twice (Severini 2000, p. 121), the first time in its high-dimensional full-model form *I*, and the second time as a dim(β)-submatrix of ∑ = *I*^{−1}. Under certain conditions, *W* is asymptotically equivalent to the likelihood ratio and asymptotically has a chi-square distribution with *d* degrees of freedom. In this case,

$$\{\beta :W(\beta )\le {C}_{d,\alpha}\}$$

is a confidence set of approximate coverage probability 1 − α.

The bottleneck of this procedure is the inversion of the potentially very large matrix *I*, whose dimension grows with the sample size.

The two methods of inference on β described above are based on the full model. An appealing alternative is to consider the so-called profile likelihood (Murphy and van der Vaart 2000)

$${\ell}_{\text{pr}}(\beta )=\underset{H}{\text{max}}\phantom{\rule{thinmathspace}{0ex}}\ell (\beta ,H).$$

The profile likelihood may be used as a likelihood for β. The MLE β̂ for β, the first component of the pair (β̂, *Ĥ*) that maximizes ℓ(β, *H*), is the maximizer of the profile likelihood function ℓ_{pr}(β).

Theoretical justification for the use of the profile likelihood for semiparametric models was given in (Murphy and van der Vaart 1997, 2000; van der Vaart 1998). It was shown that profile likelihoods with the nuisance parameter estimated out behave like ordinary likelihoods under regularity conditions. These conditions need to be verified on a case-by-case basis as the general theory remains a challenge. Theoretical justification has been obtained for the proportional odds (PO) model (Murphy and van der Vaart 2000; Murphy et al. 1997) and the PH frailty models (Murphy 1994, 1995; Parner 1998; Kosorok et al. 2004).

The observed profile information matrix will be denoted *I*_{pr},

$${I}_{\text{pr}}=-\frac{{\partial}^{2}{\ell}_{\text{pr}}(\beta )}{\partial \beta \partial {\beta}^{\mathrm{T}}}{|}_{\beta =\widehat{\beta}}.$$

This matrix is asymptotically the same as ${\sum}_{\beta \beta}^{-1}$, and summarizes partial information on β.
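Given *I*_{pr}, Wald inference for β requires inverting only a *d* × *d* matrix; a minimal sketch with made-up numbers (the values of *I*_{pr} and β̂ below are hypothetical):

```python
import numpy as np

# Hypothetical d = 2 observed profile information matrix and point estimate.
I_pr = np.array([[40.0, 5.0],
                 [5.0, 10.0]])
beta_hat = np.array([0.8, -0.3])

cov = np.linalg.inv(I_pr)            # approximate covariance matrix of beta-hat
se = np.sqrt(np.diag(cov))           # standard errors
# 95% Wald confidence intervals, one row per coefficient
ci = np.column_stack([beta_hat - 1.96 * se, beta_hat + 1.96 * se])
```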

The Likelihood Ratio and Wald statistics based on ℓ_{pr} are easier to obtain than the ones based on the full model provided

- a numerically efficient method is available to profile out the nuisance parameter *H*, and
- it is possible to derive the exact observed profile information matrix or estimate it in a computationally efficient way.

Fulfilling both conditions is a challenge. First, maximization over *H* is a problem of potentially very large dimension. Second, in most cases ℓ_{pr} cannot be differentiated analytically. Several alternatives for estimating the profile information matrix have been proposed in the literature. However, they are all approximations, often difficult to calibrate in practice, and the algorithms to obtain them are computationally costly.

In this paper we propose a computationally efficient exact solution for the class of semiparametric Nonlinear Transformation Models (NTM) (Tsodikov 2003). The basic assumption that defines this model family is that the survival function at each timepoint *t* is a *function* of *H*(*t*), mapping real numbers [0, ∞] → [0, 1], rather than a *functional* mapping a functional space to [0, 1]. In other words, the model-based survival function is obtained by plugging a cumulative hazard *H* or a baseline survival function *F* = exp(− *H*) into a suitably defined parametric function (the so-called model-generating function, see Sect. 2). Note that a similar assumption underlies the von Mises calculus (van der Vaart 1998, p. 291). The NTM class includes the proportional hazards (PH) model, univariate PH frailty models, the PO model, and cure models such as the PHPH model (Tsodikov 2002; Tsodikov et al. 2003). A numerically efficient Quasi-EM algorithm, a member of the MM family (Lange et al. 2000), was developed to obtain the maximum profile likelihood for NTM models (Tsodikov 2003). The algorithm has since been used in computer-intensive settings such as the bootstrap (Dixon et al. 2005).

The algorithm for the exact *I*_{pr} proposed in this paper works under the following two basic assumptions.

*Independence of the future*. Independence of the future means that the contribution to the likelihood of an observed event at time *t* depends on the past *H*[0,*t*] of the function *H*, but not on the future.

*Nonlinear Transformation Model Assumption*. The survival function given covariates is specified as a parametric transformation of *H*. A detailed definition is given below.

We compare our method to the following three existing techniques for estimating the profile information matrix, all of which amount to particular forms of second-order numerical differentiation.

*Discretized second derivative*. Corollary 3 of Murphy and van der Vaart (2000) shows that under certain conditions

$$-2\frac{\text{log}\phantom{\rule{thinmathspace}{0ex}}{\ell}_{\text{pr}}(\widehat{\beta}+{h}_{n}{\nu}_{n})-\text{log}\phantom{\rule{thinmathspace}{0ex}}{\ell}_{\text{pr}}(\widehat{\beta})}{n{h}_{n}^{2}}\stackrel{P}{\to}{\nu}^{\mathrm{T}}{I}_{\text{pr}}\nu ,$$

(1)

for all sequences ${\nu}_{n}\stackrel{P}{\to}\nu \in {\mathbb{R}}^{d}$ and ${h}_{n}\stackrel{P}{\to}0$ such that ${\left(\sqrt{n}{h}_{n}\right)}^{-1}={O}_{P}(1)$. This result can be used to derive an estimate of *I*_{pr}. Note that this method requires careful control of the speed of convergence of the sequence {*h _{n}*}, as the condition ${\left(\sqrt{n}{h}_{n}\right)}^{-1}={O}_{P}(1)$ implies that the convergence should be neither too slow nor too fast. The reason is that the precision of the discrete differential operator on the left side of (1) as *n* → ∞ needs to be measured against the convergence of the MLEs to the true value. Indeed, under regularity conditions, the asymptotic expansion of the likelihood ratio statistic about the MLE has the form $n{(\beta -\widehat{\beta})}^{\mathrm{T}}{I}_{\text{pr}}(\beta -\widehat{\beta})+{o}_{P}(\sqrt{n}\Vert \beta -{\beta}^{\ast}\Vert +1)$, where β^{*} is the true value. The procedure (1) is designed to extract the quadratic term by setting β = β̂ + *h _{n}*ν_{n}, and by ensuring the $1/\sqrt{n}$ rate of convergence of β to β̂ and β^{*} simultaneously, so that the quadratic term is indeed the dominant one. Otherwise, the expansion would be dominated by its *o _{P}*(1) part if *h _{n}* converges too fast, or by ${o}_{P}(\sqrt{n}\Vert \beta -{\beta}^{\ast}\Vert )$ if *h _{n}* converges too slowly. See Sect. 4 for further details and implementation of this method.

*Fitting a Quadratic Form*. Asymptotically, under regularity conditions, the profile likelihood surface around the true β is quadratic. Nielsen et al. (1992) proposed fitting a quadratic form to ℓ_{pr}(β) in some domain around the maximum likelihood estimator β̂, and deriving an approximate profile information matrix from the estimated coefficients of the form. Note that globally the likelihood surface is not quadratic. The quadratic approach is difficult to implement because a sufficiently small domain around β̂, in which the likelihood surface can be well approximated by a quadratic form, is not well defined. Misspecification of this domain often leads to estimates of the profile information matrix that are not positive definite, particularly if the number of covariates is large. Yet the domain needs to be large enough to ensure adequate precision and a sufficient number of likelihood evaluations within the domain. This balancing act is notoriously difficult as the true variance is unknown and the likelihood surface is specific to the data set being analyzed.

*Numerical Differentiation of the Profile Likelihood*. Standard numerical algorithms can be used to differentiate the profile likelihood function numerically. We use Ridders' method (Press et al. 1994) in the examples presented in Sect. 4. The difficulties in implementing this idea are similar to the ones with the Quadratic Form approach. Numerical differentiation requires choosing a tolerance for the estimation of the derivatives, and typically involves interpolation of the function. The precision and speed of these methods are inversely related, and both vary widely depending on the tolerance. Since the likelihood surface is dataset-specific, this method may require calibration and tuning for a particular dataset.

The approximating nature of the standard approaches outlined above, the need to balance various tradeoffs in their implementation, and the likely need to tweak the implementation for the dataset at hand make it difficult to develop these approaches to the point of automation sufficient for use in standard statistical software.

The algorithm proposed in this paper is exact, automatic and requires no tuning or calibration. This makes it an attractive alternative, particularly with statistical software applications in mind.

The PO model is used in this paper to compare, via simulations, the performance of the three estimation methods for *I*_{pr} and the proposed exact algorithm. For different sample sizes, the approaches were compared in terms of the number of operations required to achieve a reasonable standardized precision. Naturally, the exact method outperforms any approximating method if ever greater precision is demanded. In our numerical study we focus on practical precisions at which approximating methods could nevertheless be viable competitors to an exact procedure. Numerical efficiency and precision in the computation of *I*_{pr} are of great importance for variable selection procedures. In an example involving seven variables, backward variable selection using the Wald statistic based on the exact profile information matrix took less than one third of the time of the quadratic approach. We also compared the estimation methods in terms of relative error. Of all the approximating methods, the numerical approach has the smallest relative error.

As a result of these studies we believe that the exact method should be the primary choice for NTM.

NTM are defined as follows (Tsodikov 2002, 2003).

*Definition 1* Let γ(*x* |β,*z*) be a parametrically specified distribution function with *x*-domain of [0,1]. Let *F*(*t*) be a nonparametrically specified baseline survival function. A semiparametric regression survival model is called a Nonlinear Transformation Model if, conditional on the covariates *z*, its survival function *G* can be represented in the form

$$G(t|\beta ,z)=\gamma (F(t)|\beta ,z).$$

(2)

The function γ is called the NTM-generating function.

Note that *F*(*t*) = exp(− *H*(*t*)) where *H*(*t*) is the baseline cumulative hazard function. With this in mind we can write the hazard function of the model as

$$\lambda (t|\beta ,z)=\frac{\gamma \prime (F(t)|\beta ,z)}{\gamma (F(t)|\beta ,z)}F(t)h(t),$$

(3)

where *h*(*t*) = *H*′(*t*) is the baseline hazard function.

In Tsodikov (2003) a Quasi-EM (QEM) point estimation algorithm for the NTM was developed and conditions that ensure its convergence were given.

The algorithm solves a functional self-consistency score equation of the form *H* = ψ(β, *H*) for *H*, where ψ is a mapping that generalizes a Nelson–Aalen–Breslow estimator for the proportional hazards model so that its denominator depends on *H* as well as β. Functional iterations

$${H}^{(k+1)}=\psi (\beta ,{H}^{(k)}),\text{}k=1,2,\dots $$

(4)

are carried out until *Ĥ*, the fixed point of ψ, has been approximated: *H*^{(k)} → *Ĥ* as *k* → ∞; see Tsodikov (2003) for details.
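The iterations (4) are an ordinary fixed-point loop; a generic sketch (the stopping rule and the toy update below are ours; for NTM, ψ is the right-hand side of the self-consistency equation):

```python
def fixed_point(psi, H0, tol=1e-10, max_iter=10000):
    """Iterate H <- psi(H) as in (4) until successive updates agree within tol."""
    H = [float(v) for v in H0]
    for _ in range(max_iter):
        H_new = [float(v) for v in psi(H)]
        if max(abs(a - b) for a, b in zip(H_new, H)) < tol:
            return H_new
        H = H_new
    raise RuntimeError("fixed-point iteration did not converge")
```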

Although any parameterization of γ in terms of β and *z* is allowed, in the examples we assume that γ is parameterized through a set of parameters/predictors θ, η, …, where each predictor is further parameterized using generally different sets of regression coefficients β_{1}, β_{2}, …, so that $\theta =\text{exp}({\beta}_{1}^{\mathrm{T}}z),\eta =\text{exp}({\beta}_{2}^{\mathrm{T}}z),\dots $.

Let *t _{i}*, *i* = 1, …, *n*, be the ordered distinct times observed in the sample. The log-likelihood can then be written as

$$\ell ={\displaystyle \sum _{i=1}^{n}{D}_{i}\phantom{\rule{thinmathspace}{0ex}}\text{log}({h}_{i})}+{\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j\in {\mathcal{C}}_{i}\cup {\mathcal{D}}_{i}}\text{log}\phantom{\rule{thinmathspace}{0ex}}\vartheta ({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}}),}}$$

where

$$\vartheta (x|\beta ,z,c)={x}^{c}\frac{{\partial}^{c}\gamma (x|\beta ,z)}{\partial {x}^{c}},$$

where ∂^{0}γ/∂*x*^{0} = γ, *D _{i}* is the number of failures associated with *t _{i}*, 𝒟_{i} and 𝒞_{i} are the index sets of subjects observed to fail and to be censored at *t _{i}*, respectively, *c _{ij}* = 1 for a failure and *c _{ij}* = 0 for a censored observation, *h _{i}* is the jump of *H* at *t _{i}*, and

$${F}_{i}=F({t}_{i})=\text{exp}\phantom{\rule{thinmathspace}{0ex}}\left(-{\displaystyle \sum _{l=1}^{i}{h}_{l}}\right).$$

The profile likelihood is defined as a supremum of the full likelihood taken over the nonparametric part of the model

$${\ell}_{\text{pr}}(\beta )=\underset{h}{\text{max}}\phantom{\rule{thinmathspace}{0ex}}\ell (\beta ,h).$$

The MLE of *h* for a given β will be denoted *ĥ*(β) = (*ĥ*_{1}, …, *ĥ _{n}*), with *Ĥ*(β) the corresponding cumulative hazard.

Differentiating ℓ with respect to *h* and setting the score equal to 0, we obtain *ĥ*(β) as the solution of the functional self-consistency equation

$${\widehat{h}}_{m}=\frac{{D}_{m}}{{\displaystyle {\sum}_{(i,j)\in {\mathcal{R}}_{m}}\mathrm{\Theta}({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}})}},\text{}m=1,\dots ,n,$$

(5)

where *F _{i}* is a function of *h* as defined above, and

$$\mathrm{\Theta}(x|\beta ,z,c)=-\frac{\partial \text{log}\phantom{\rule{thinmathspace}{0ex}}\vartheta ({e}^{-H}|\beta ,z,c)}{\partial H}{\phantom{\rule{thinmathspace}{0ex}}|}_{H=-\text{log}\phantom{\rule{thinmathspace}{0ex}}x}=c+x\frac{{\gamma}^{(c+1)}(x|\beta ,z)}{{\gamma}^{(c)}(x|\beta ,z)},$$

(6)

γ^{(a)} = ∂^{a}γ(*x*)/∂*x ^{a}*, and ℛ_{m} = {(*i*, *j*) : *i* ≥ *m*, *j* ∈ 𝒞_{i} ∪ 𝒟_{i}} is the risk set at *t _{m}*.

Point estimation proceeds along the lines of the following nested procedure:

- maximize ℓ_{pr}(β) by a conventional nonlinear programming method, for example, the Powell method (Press et al. 1994);
- for each β demanded by the above maximization procedure, find ${\text{max}}_{h}\phantom{\rule{thinmathspace}{0ex}}\ell (\beta ,h)$ as the fixed point of (5).

The QEM algorithm makes use of the straightforward recursion to obtain the profile likelihood,

$${h}_{m}^{(k+1)}=\frac{{D}_{m}}{{\displaystyle {\sum}_{(i,j)\in {\mathcal{R}}_{m}}\mathrm{\Theta}\left(\text{exp}\left(-{\displaystyle {\sum}_{l=1}^{i}{h}_{l}^{(k)}}\right)|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}}\right)}},\text{}k=1,2,\dots \phantom{\rule{thinmathspace}{0ex}};\phantom{\rule{thickmathspace}{0ex}}m=1,\dots \phantom{\rule{thinmathspace}{0ex}},n,$$

(7)

where *k* counts iterations. Note that an increment of *k* occurs only once all the parameters *h _{i}*, *i* = 1, …, *n*, have been updated.

It can be shown that if Θ is nondecreasing in *x*, each update of *H* using (7) strictly improves the likelihood, given β. This guarantees convergence of the sequence of likelihood values (β, *h*^{(k)}) to (β, *ĥ*(β)) under fairly general conditions (Tsodikov 2003); that is, *Ĥ*(β) is the fixed point of the recursion given in (7).
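As a concrete illustration of (7), the sketch below codes the recursion for the PO model of Sect. 4, for which Θ(*F _{i}*|θ, *c*) = (*c* + 1)/(θ + *H _{i}*); the data layout and the function name are our own, not from the paper:

```python
import numpy as np

def qem_profile_po(times_idx, c, theta, n_times, tol=1e-10, max_iter=10000):
    """Fixed-point iterations (7) for the PO model.
    times_idx[j]: 0-based index of the distinct time of subject j;
    c[j]: 1 for a failure, 0 for a censored observation;
    theta[j]: subject-specific predictor exp(beta^T z_j).
    Returns the jumps h_1, ..., h_n of the baseline hazard maximizing
    the likelihood over H for the given beta."""
    D = np.bincount(times_idx[c == 1], minlength=n_times).astype(float)
    h = np.ones(n_times) / n_times                 # starting value
    for _ in range(max_iter):
        H = np.cumsum(h)                           # H_i = h_1 + ... + h_i
        # PO model: Theta(F_i | theta, c) = (c + 1) / (theta + H_i)
        Theta = (c + 1.0) / (theta + H[times_idx])
        # denominator of (7): sum of Theta over the risk set R_m = {j : i_j >= m}
        denom = np.cumsum(np.bincount(times_idx, weights=Theta,
                                      minlength=n_times)[::-1])[::-1]
        h_new = D / denom
        if np.max(np.abs(h_new - h)) < tol:
            return h_new
        h = h_new
    return h
```

At convergence the output satisfies the self-consistency equation (5) up to the tolerance.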

It should be noted that the proposed information matrix algorithm is not contingent on using a specific method for point estimation. Yet it builds on the idea of self-consistency through implicit differentiation of the self-consistency equation.

The profile information matrix is the observed information matrix derived from the profile likelihood,

$${I}_{\text{pr}}(\beta )=-\frac{{\partial}^{2}{\ell}_{\text{pr}}(\beta )}{\partial \beta \partial {\beta}^{\mathrm{T}}}.$$

Implicit differentiation of the profile likelihood yields the following expression for the profile information matrix

$${I}_{\text{pr}}(\beta )={I}_{\beta \beta}+{h}_{\beta}^{\mathrm{T}}{I}_{\mathit{\text{hh}}}{h}_{\beta}+{h}_{\beta}^{\mathrm{T}}{I}_{h\beta}+{{I}_{h\beta}}^{\mathrm{T}}{h}_{\beta}-{\displaystyle \sum _{m=1}^{n}{\ell}_{{h}_{m}}{h}_{m,\beta \beta},}$$

(8)

where *h* = *h*(β) is some function of β, and

$${h}_{\beta}=\frac{\partial h(\beta )}{\partial \beta},\text{}{I}_{\mathit{\text{ab}}}=-\frac{{\partial}^{2}\ell}{\partial a\partial {b}^{\mathrm{T}}},\text{}{\ell}_{{h}_{m}}=\frac{\partial \ell (\beta ,h)}{\partial {h}_{m}},\text{and}{h}_{m,\beta \beta}=\frac{{\partial}^{2}{h}_{m}(\beta )}{\partial \beta \partial {\beta}^{\mathrm{T}}},$$

with *a* and *b* equal to β or *h*.

When evaluated at the MLE *ĥ*(β), where *ĥ* is a function defined implicitly as the solution of the score equation

$${\ell}_{h}(\beta ,h)=0\Rightarrow \widehat{h}=\widehat{h}(\beta ),$$

(9)

the information matrix simplifies to

$${I}_{\text{pr}}={I}_{\beta \beta}+{I}_{\widehat{h}\beta}^{\mathrm{T}}{\widehat{h}}_{\beta}.$$

(10)

Indeed, by virtue of the score equation (9),

$${\ell}_{h}(\beta ,\widehat{h}(\beta ))\equiv 0.$$

(11)

Differentiating (11) with respect to β, we also have

$$\frac{d{\ell}_{h}(\beta ,\widehat{h}(\beta ))}{d\beta}={I}_{\widehat{h}\beta}+{I}_{\widehat{h}\widehat{h}}{\widehat{h}}_{\beta}\equiv 0,$$

(12)

with (10) now following from (8) on substitution of (11) and (12).

It should be noted, however, that unless the score equation (9) is solved for *h exactly*, the short form of the observed profile information matrix (10) is generally not going to be symmetric. Except in the Cox model, there is no closed-form solution for *ĥ*, and this function is the output of a numerical algorithm such as (7), converging to *ĥ* with some tolerance. To preserve the symmetry of *I*_{pr}, we prefer to keep some of the theoretically redundant terms in (8) and use the form

$${I}_{\text{pr}}(\beta )={I}_{\beta \beta}+{\widehat{h}}_{\beta}^{\mathrm{T}}{I}_{\widehat{h}\widehat{h}}{\widehat{h}}_{\beta}+{\widehat{h}}_{\beta}^{\mathrm{T}}{I}_{\widehat{h}\beta}+{I}_{\widehat{h}\beta}^{\mathrm{T}}{\widehat{h}}_{\beta}.$$

(13)

Notice that *I*_{pr} has dimension *d* × *d, d* = dim(β). Therefore only a small matrix needs to be inverted in order to get an estimator of the covariance matrix of regression coefficients.

The difficulty in (13) is that since *ĥ*(β) is defined implicitly, so is the potentially large Jacobian matrix ∂*ĥ*/∂β. Therefore, the Jacobian is generally unavailable in closed form. Success in the calculation of the profile information matrix is determined by the existence of an efficient numerical method to compute ∂*ĥ*/∂β. Generally, computation of ∂*ĥ*/∂β is as difficult as taking the inverse of the original full-model information matrix (*O*(*n*^{3}) operations required), and this defeats the purpose. However, if the functional 𝛝(*H*, *t*|·) that defines model contributions to the likelihood depends on (*H*, *t*) only through *H*(*t*), which is the case for the NT models (2), ∂*ĥ*/∂β can be obtained by solving a system of linear equations with a special structure. This specific structure of the linear system can be exploited to derive the efficient numerical solution given in Proposition 1.

We first show how to obtain *I*_{ββ}, *I*_{hβ} and *I _{hh}*.

The *H*-score of an NT model is,

$$\frac{\partial \ell}{\partial {h}_{k}}=\frac{{D}_{k}}{{h}_{k}}-{\displaystyle \sum _{(i,j)\in {\mathcal{R}}_{k}}\mathrm{\Theta}({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}})}.$$

Differentiating the *H*-score with respect to β we get,

$$-\frac{{\partial}^{2}\ell}{\partial {h}_{k}\partial {\beta}_{m}}={\displaystyle \sum _{(i,j)\in {\mathcal{R}}_{k}}\frac{\partial \mathrm{\Theta}}{\partial {\beta}_{m}}({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}}).}$$

Evaluation of derivatives of Θ or γ with respect to β depends on the parameterization of the model’s predictor as a function of explanatory variables *z*, which is model-specific. Once a model is specified, the calculation of *I*_{ββ} and *I*_{hβ} is straightforward.

Since ${F}_{i}=\text{exp}(-{\displaystyle {\sum}_{l=1}^{i}{h}_{l}})$, we have

$$\frac{\partial \mathrm{\Theta}({F}_{i}|\cdot )}{\partial {h}_{m}}=\{\begin{array}{cc}\hfill Q({F}_{i}|\cdot ),\hfill & \hfill m\le i,\hfill \\ \hfill 0,\hfill & \hfill m>i,\hfill \end{array}$$

(14)

where

$$Q(x|\cdot ,c)=-x\frac{\partial \mathrm{\Theta}(x|\cdot ,c)}{\partial x}=-(\mathrm{\Theta}(x|\cdot ,c)-c)(\mathrm{\Theta}(x|\cdot ,c+1)-\mathrm{\Theta}(x|\cdot ,c)),$$

(15)

and “·” stands for “β, *z*”, and Θ is given by (6) extended to *c* = 0, 1, 2. Note that ∂Θ(*F _{i}*|·)/∂*h _{m}* does not depend on *m* as long as *m* ≤ *i*.

From (14) it follows that,

$$-\frac{{\partial}^{2}\ell}{\partial {h}_{k}\partial {h}_{m}}={\displaystyle \sum _{(i,j)\in {\mathcal{R}}_{\text{max}\{k,m\}}}Q({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}})+\frac{{D}_{k}}{{h}_{k}^{2}}{1}_{\{k=m\}},}$$

where

$${1}_{\{k=m\}}=\{\begin{array}{cc}1,\hfill & k=m,\hfill \\ 0,\hfill & k\ne m.\hfill \end{array}$$

From this we get *I _{hh}*.

Now we turn our attention to the Jacobian ∂*ĥ*/∂β. Proposition 1 gives the main result used to efficiently calculate ∂*ĥ*/∂β in the case of NT models. Its proof is given in the Appendix.

*Proposition 1 Let D be an n × n diagonal matrix with diagonal elements d _{i}* ≠ 0, *let a*_{1}, …, *a _{n}* *be given numbers, let R* = (*R _{ml}*) *be the n × n matrix with entries* ${R}_{\mathit{\text{ml}}}={\displaystyle {\sum}_{i=\text{max}\{m,l\}}^{n}{a}_{i}}$, *and let b* = (*b*_{1}, …, *b _{n}*)^{T} *be a given vector.*

*Define the functions* φ_{k}: ℝ → ℝ, *k* = 1, …, *n recursively as*

$$\begin{array}{c}{\phi}_{n}(y)=\frac{{b}_{n}}{{d}_{n}}-\frac{{a}_{n}}{{d}_{n}}y,\hfill \\ {\phi}_{k}(y)=\frac{1}{{d}_{k}}\left({b}_{k}-{\displaystyle \sum _{i=k}^{n}{a}_{i}y+{\displaystyle \sum _{l=k+1}^{n}{\displaystyle \sum _{i=k}^{l-1}{a}_{i}{\phi}_{l}(y)}}}\right),\text{}k=n-1,\dots \phantom{\rule{thinmathspace}{0ex}},1,\hfill \end{array}$$

*for y in* ℝ. *Let* $\tilde{\phi}:\mathbb{R}\to \mathbb{R}$ *be the function given by* $\tilde{\phi}(y)={\displaystyle {\sum}_{k=1}^{n}{\phi}_{k}(y)}$ *and let*

$$\tilde{y}=\frac{\tilde{\phi}(0)}{1+\tilde{\phi}(0)-\tilde{\phi}(1)}.$$

*Then the solution to the system of equations (D + R)x = b is the n-dimensional vector x* = (φ_{1}(*ỹ*), …, φ_{n}(*ỹ*))^{T}.
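Since each φ_{k} is affine in *y*, two sweeps of the recursion (at *y* = 0 and *y* = 1) determine φ̃ completely, and a third sweep at *ỹ* returns the solution; each sweep costs *O*(*n*) if the inner sums are maintained as running suffix sums *S _{k}* = Σ_{i=k}^{n} *a _{i}*, using Σ_{i=k}^{l−1} *a _{i}* = *S _{k}* − *S _{l}*. A sketch (the function name is ours):

```python
import numpy as np

def solve_special(d, a, b):
    """Solve (D + R)x = b with D = diag(d) and, 0-based,
    R[m, l] = a[max(m, l)] + ... + a[n-1], via the recursion of Proposition 1."""
    n = len(d)
    S = np.append(np.cumsum(a[::-1])[::-1], 0.0)   # S[k] = a[k] + ... + a[n-1]

    def phi(y):
        x = np.empty(n)
        P = 0.0    # running sum of phi_l(y) over l > k
        T = 0.0    # running sum of S[l] * phi_l(y) over l > k
        for k in range(n - 1, -1, -1):
            # phi_k(y) = (b_k - S_k y + sum_{l>k} (S_k - S_l) phi_l(y)) / d_k
            x[k] = (b[k] - S[k] * y + S[k] * P - T) / d[k]
            P += x[k]
            T += S[k] * x[k]
        return x

    phi0, phi1 = phi(0.0), phi(1.0)
    y = phi0.sum() / (1.0 + phi0.sum() - phi1.sum())
    return phi(y)
```

The result can be checked against a dense solve of (*D* + *R*)x = b built explicitly from the same *d*, *a*, *b*.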

We now show that the Jacobian ∂*ĥ*/∂β satisfies a relationship of the form discussed in Proposition 1. Differentiating the self-consistency equation (5) implicitly, we get that ∂*ĥ*/∂β satisfies the relationship

$$\frac{\partial {\widehat{h}}_{m}}{\partial {\beta}_{k}}=-\frac{{\widehat{h}}_{m}^{2}}{{D}_{m}}\left({\displaystyle \sum _{l=1}^{n}{\displaystyle \sum _{(i,j)\in {\mathcal{R}}_{\text{max}\{m,l\}}}Q({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}})\frac{\partial {\widehat{h}}_{l}}{\partial {\beta}_{k}}+{\displaystyle \sum _{(i,j)\in {\mathcal{R}}_{m}}\frac{\partial \mathrm{\Theta}}{\partial {\beta}_{k}}({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}})}}}\right),$$

(16)

where *Q* is the function given in (15).

Let *D* be the diagonal matrix with elements

$${d}_{m}=\frac{{D}_{m}}{{({\widehat{h}}_{m})}^{2}},\phantom{\rule{thickmathspace}{0ex}}m=1,\dots \phantom{\rule{thickmathspace}{0ex}},n.$$

Let *R* = (*R _{ml}*) with ${R}_{\mathit{\text{ml}}}={\displaystyle {\sum}_{i=\text{max}\{m,l\}}^{n}{a}_{i}}$, where

$${a}_{i}={\displaystyle \sum _{j\in {\mathcal{C}}_{i}\cup {\mathcal{D}}_{i}}Q({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}}),\phantom{\rule{thickmathspace}{0ex}}i=1,\dots \phantom{\rule{thickmathspace}{0ex}},n}$$

and for *k* = 1, …, *d* let

$${b}^{(k)}={\left(-{\displaystyle \sum _{(i,j)\in {\mathcal{R}}_{1}}\frac{\partial \mathrm{\Theta}({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}})}{\partial {\beta}_{k}},\dots \phantom{\rule{thickmathspace}{0ex}},-{\displaystyle \sum _{(i,j)\in {\mathcal{R}}_{n}}\frac{\partial \mathrm{\Theta}({F}_{i}|\beta ,{z}_{\mathit{\text{ij}}},{c}_{\mathit{\text{ij}}})}{\partial {\beta}_{k}}}}\right)}^{\mathrm{T}}.$$

It follows from (16) that

$$\frac{\partial \widehat{h}}{\partial {\beta}_{k}}=-{D}^{-1}\left(R\frac{\partial \widehat{h}}{\partial {\beta}_{k}}-{b}^{(k)}\right).$$

Hence,

$$(R+D)\frac{\partial \widehat{h}}{\partial {\beta}_{k}}={b}^{(k)}.$$

Therefore, for each *k* = 1, …, *d* the vector ∂*ĥ*/∂β_{k} can be obtained from Proposition 1. We now have all the components of (13) defined. This completes the exposition of our method.

In the examples we compare the performance of four methods to compute the observed profile information matrix. A brief explanation of the methods and details on how they were implemented in our examples are given below.

*Discretized*. The estimation is based on the result of Corollary 3 in Murphy and van der Vaart (2000): under certain conditions

$$-2\frac{\text{log}\phantom{\rule{thinmathspace}{0ex}}{\ell}_{\text{pr}}(\widehat{\beta}+{h}_{n}{\nu}_{n})-\text{log}\phantom{\rule{thinmathspace}{0ex}}{\ell}_{\text{pr}}(\widehat{\beta})}{n{h}_{n}^{2}}\stackrel{P}{\to}{\nu}^{\mathrm{T}}{I}_{\text{pr}}\nu ,$$

(17)

for all sequences ${\nu}_{n}\stackrel{P}{\to}\nu \in {\mathbb{R}}^{d}$ and ${h}_{n}\stackrel{P}{\to}0$ such that ${\left(\sqrt{n}{h}_{n}\right)}^{-1}={O}_{P}(1)$. In order to estimate all the elements of *I*_{pr}, we chose ν = *e _{i}*, *i* = 1, …, *d*, and ν = *e _{i}* + *e _{j}*, 1 ≤ *i* < *j* ≤ *d*, where the *e _{i}* are Euclidean basis vectors. We set ν_{n} ≡ ν, and ${h}_{n}=10/(\sqrt{n}{C}^{k})$ with *C* = 1.4 and *k* such that $|f(10/(\sqrt{n}{C}^{k}))-f(10/(\sqrt{n}{C}^{k-1}))|<0.001$, where *f*(*h*) is the left-hand side of Eq. (17) considered as a function of *h*. This procedure was motivated by Dixon et al. (2005), who considered the choice of *h _{n}* in the one-dimensional (*d* = 1) situation.

*Quadratic*. This approach approximates the profile likelihood surface by a quadratic form and derives the estimate of the information matrix from the coefficients of the form fitted to the surface. Specifically, let Δβ be a vector of deviations of the β values sampled in the vicinity of β̂, and let Δℓ_{pr} be the induced vector of deviations of the profile likelihood from its maximum value ℓ_{pr}(β̂). Then, if Δβ is sufficiently small,

$$\mathrm{\Delta}{\ell}_{\mathrm{\text{pr}}}\approx -\frac{1}{2}\mathrm{\Delta}{\beta}^{\mathrm{T}}{I}_{\mathrm{\text{pr}}}\mathrm{\Delta}\beta .$$

Fitting the quadratic form −(1/2)Δβ^{T}*A*Δβ to the points (Δβ, Δℓ_{pr}) by least squares produces an estimate, *Â*, of the profile information matrix *I*_{pr}. In our implementation of this method we limit the domain to points that are not rejected at the 0.05 significance level by the LR test (applied informally and disregarding the multi-comparison issue). In other words, points β are included if −2{ℓ_{pr}(β) − ℓ_{pr}(β̂)} ≤ *C*_{d,0.05}, where *C*_{d,0.05} is the 0.05 upper tail percentile of the χ^{2} distribution with *d* = dim(β) degrees of freedom. Since the validity of the quadratic approximation is itself a prerequisite for the validity of the likelihood ratio statistic, this choice is far from perfect. Yet this procedure ensures the desired property of the domain shrinking with sample size, and we know of no better alternative.

*Numerical*. The calculation of the observed profile information matrix is carried out using Ridders' numerical differentiation of the profile likelihood function, see Press et al. (1994). Let *f*: ℝ → ℝ be a differentiable function. By definition, the derivative of *f* is the limit as *h* → 0 of the incremental quotient

$$q(h)=\frac{f(x+h)-f(x)}{h}.$$

The basic idea of Ridders' method is to calculate *q*(*h*) for several values of *h*, and then extrapolate the result to the limit *h* = 0. For a function with domain ℝ^{d}, the vector of first derivatives is obtained by applying the algorithm to one coordinate at a time, leaving the other coordinates fixed. Numerical experimentation showed that this approach gives the second derivatives of ℓ_{pr}(β) with very high precision, albeit at a greater computational cost than the other methods.

*Exact*. This is the method developed in Sect. 3 of this paper for computation of the exact observed profile information matrix for NTM.

The PO model will be used as the basis for all our comparisons. The validity of the NPMLE and the profile likelihood for this model has been demonstrated elsewhere.
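For reference, Ridders' extrapolation of a central difference to *h* → 0, as used by the *Numerical* method above, can be sketched as follows (a free transcription of the standard algorithm; the default step, shrink factor, and tableau size are ours):

```python
def ridders_derivative(f, x, h=0.5, con=1.4, ntab=10):
    """Ridders' estimate of f'(x): shrink the step by `con` and extrapolate
    the central differences to h -> 0 with a Neville-type tableau."""
    a = [[0.0] * ntab for _ in range(ntab)]
    a[0][0] = (f(x + h) - f(x - h)) / (2.0 * h)
    ans, err = a[0][0], float("inf")
    for i in range(1, ntab):
        h /= con
        a[0][i] = (f(x + h) - f(x - h)) / (2.0 * h)
        fac = con * con
        for j in range(1, i + 1):
            # extrapolate each column of the tableau to higher order
            a[j][i] = (a[j - 1][i] * fac - a[j - 1][i - 1]) / (fac - 1.0)
            fac *= con * con
            errt = max(abs(a[j][i] - a[j - 1][i]), abs(a[j][i] - a[j - 1][i - 1]))
            if errt <= err:
                err, ans = errt, a[j][i]
        if abs(a[i][i] - a[i - 1][i - 1]) >= 2.0 * err:
            break
    return ans
```

Second derivatives of ℓ_{pr} are obtained by applying such a routine twice, which compounds both the cost and the tolerance issues discussed above.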

Given covariates *z*, the survival function *G*(*t*|β,*z*) of a PO model can be written in the form,

$$G(t|\beta ,z)=G(t|\theta (\beta ,z))=\frac{\theta (\beta ,z)}{\theta (\beta ,z)+H(t)},$$

(18)

where *H* is some nonparametrically specified baseline cumulative hazard function, and θ is a predictor. Since *H* = − log *F*, the NTM-generating function of the PO model is

$$\gamma (x|\cdot )=\frac{\theta (\cdot )}{\theta (\cdot )-\text{log}\phantom{\rule{thinmathspace}{0ex}}x}.$$

A characteristic feature of the PO model is that for any two values, θ_{1}, θ_{2}, of the predictor, the odds ratio

$$\frac{\text{Odds}(G(t|{\theta}_{1}))}{\text{Odds}(G(t|{\theta}_{2}))}=\frac{{\theta}_{1}}{{\theta}_{2}}$$

is constant in *t*.
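
The constancy of the odds ratio under (18) is easy to verify numerically; a minimal check (function and variable names are ours) uses the fact that Odds(G(t|θ)) = G/(1 − G) = θ/H(t):

```python
def G(theta, H_t):
    """PO survival function (18) evaluated at a baseline hazard value H(t)."""
    return theta / (theta + H_t)

def odds(g):
    """Odds of surviving beyond t: G / (1 - G); under (18) this equals theta / H(t)."""
    return g / (1.0 - g)

# the odds ratio theta1/theta2 does not depend on the time point
theta1, theta2 = 0.5, 2.0
ratios = [odds(G(theta1, H)) / odds(G(theta2, H)) for H in (0.1, 1.0, 7.3)]
```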

It follows that

$$\mathrm{\Theta}(x|\xb7,c)=\frac{c+1}{\theta (\xb7)-\text{log}\phantom{\rule{thinmathspace}{0ex}}x}.$$

We consider an exponential parameterization of the predictor θ(β, *z*) = exp(β^{T}*z*). With this parameterization,

$$\frac{\partial \theta}{\partial \beta}=\theta z,\text{}\frac{{\partial}^{2}\theta}{\partial \beta \partial {\beta}^{\mathrm{T}}}=\theta {\mathit{\text{zz}}}^{\mathrm{T}}.$$

The following derivatives of Θ are necessary to specify the algorithm of Sect. 3,

$$\frac{\partial \mathrm{\Theta}}{\partial \beta}=\frac{\partial \mathrm{\Theta}}{\partial \theta}\theta z,\text{}\frac{{\partial}^{2}\mathrm{\Theta}}{\partial \beta \partial {\beta}^{\mathrm{T}}}=\left(\frac{{\partial}^{2}\mathrm{\Theta}}{\partial {\theta}^{2}}{\theta}^{2}+\frac{\partial \mathrm{\Theta}}{\partial \theta}\theta \right){\mathit{\text{zz}}}^{\mathrm{T}},$$

where

$$\frac{\partial \mathrm{\Theta}(x|\xb7,c)}{\partial \theta}=-\frac{c+1}{{(\theta (\xb7)-\text{log}\phantom{\rule{thinmathspace}{0ex}}x)}^{2}},\quad \text{and}\quad \frac{{\partial}^{2}\mathrm{\Theta}(x|\xb7,c)}{\partial {\theta}^{2}}=\frac{2(c+1)}{{(\theta (\xb7)-\text{log}\phantom{\rule{thinmathspace}{0ex}}x)}^{3}}.$$
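
These formulas translate directly into code. The sketch below (helper names are ours) implements them for the exponential predictor θ = exp(β^{T}*z*) and can be checked against finite differences:

```python
import numpy as np

def Theta(beta, z, x, c):
    """Theta(x | ., c) = (c + 1) / (theta - log x), with theta = exp(beta'z)."""
    theta = np.exp(beta @ z)
    return (c + 1.0) / (theta - np.log(x))

def dTheta_dbeta(beta, z, x, c):
    """Gradient: (dTheta/dtheta) * theta * z."""
    theta = np.exp(beta @ z)
    u = theta - np.log(x)
    return -(c + 1.0) / u**2 * theta * z

def d2Theta_dbeta2(beta, z, x, c):
    """Hessian: (d2Theta/dtheta2 * theta^2 + dTheta/dtheta * theta) * z z'."""
    theta = np.exp(beta @ z)
    u = theta - np.log(x)
    dT = -(c + 1.0) / u**2
    d2T = 2.0 * (c + 1.0) / u**3
    return (d2T * theta**2 + dT * theta) * np.outer(z, z)
```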

As an example, we use data from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) program. Using the publicly available SEER database, 11,621 cases of primary prostate cancer diagnosed in the state of Utah between 1988 and 1999 were identified. The following selection criteria were applied to a total of 19,819 Utah cases registered in the database: valid positive survival time, valid stage of the disease, and age ≥ 18 years. Prostate cancer-specific survival was analyzed by stage of the disease (localized/regional vs. distant). For the definition of stages as well as for other details of the data we refer the reader to the SEER documentation at http://www.seer.cancer.gov/.

The data analysis presented in this paper is a continuation of the one given in Tsodikov (2003). Two groups of patients representing stage at diagnosis of the disease are considered, hence the predictor in the PO model has a single parameter β. The log odds ratio β measures the disadvantage of being in the distant stage relative to the local/regional stage. The QEM algorithm was applied to fit the PO model to the data. The maximum likelihood estimate of β was β̂ = −3.251. Confidence intervals for β were obtained using the Wald statistic based on the profile information matrix. The confidence interval based on the quadratic approximation of the profile information matrix was (−3.416, −3.086) and the one obtained through the exact profile information matrix was (−3.415, −3.086). The excellent concordance of the two confidence intervals is due to the large sample size and the small dimension of the regression parameter, a situation in which approximating methods tend to be accurate.

In the case of a single parameter, the observed profile information matrix is a scalar. The estimates of the observed profile information matrix were 142.1011, 141.2158 and 141.7424 for the Discretized, Quadratic and Numerical approaches, respectively, and the Exact value was 141.7423. Although the values are quite similar, the Discretized and Quadratic approaches clearly depart from the exact value.

We simulated age at diagnosis for an adult-onset disease using a PO model. The baseline survival in this example is loosely based on the incidence of prostate cancer. The baseline survival function was assumed to follow a Weibull distribution with a median of 38 years and a shape of 1.8, with the risk starting at the age of 18 (a fixed offset of 18 years was added to both survival and censoring times). With these parameters, incidence before the age of 40 is negligible. An independent censoring mechanism was assumed. Censoring times were generated using a Weibull distribution with a median of 46 years and a shape parameter of 4. Observations in excess of 105 years were type-I censored at 105 years. Two covariates were introduced, one categorical with 3 levels, and one continuous (a risk factor) with a range between −1 and 1. Values for the two covariates were generated independently. The continuous covariate followed a uniform distribution. The discrete distribution for the categorical covariate assumed the following probabilities: 0.7 (level 1), 0.5 (level 2), and 0.1 (level 3). The following covariate effects were assumed. An effect of the log odds ratio of 2 was assumed for a unit change in the continuous factor. The categorical covariate was assumed to have a progressing effect on the risk of the disease. The log odds ratios comparing levels 2 and 3 to the baseline level 1 were 1.5 and 2.5, respectively.
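
The event-time part of such a design can be sketched by inverse-transform sampling. The helper below is our own simplified illustration (scalar predictor, no covariates or censoring; function and parameter names are ours): it inverts the PO survival function under the assumption of a Weibull baseline cumulative hazard calibrated to the stated median and shape.

```python
import numpy as np

def sample_po_ages(theta, n, med=38.0, shape=1.8, onset=18.0, rng=None):
    """Draw ages at diagnosis from G(t|theta) = theta / (theta + H(t)),
    assuming a Weibull baseline cumulative hazard H(t) = (t / lam)**shape
    with lam calibrated so the baseline survival exp(-H) has median `med`.
    Inverse transform: U = G(T|theta) is Uniform(0,1), so H(T) = theta*(1-U)/U."""
    rng = rng or np.random.default_rng(0)
    lam = med / np.log(2.0) ** (1.0 / shape)
    u = np.clip(rng.uniform(size=n), 1e-12, 1.0 - 1e-12)  # guard the endpoints
    H = theta * (1.0 - u) / u        # invert G = theta / (theta + H)
    return onset + lam * H ** (1.0 / shape)
```

Censoring times would be generated analogously from the stated Weibull censoring distribution, with type-I censoring applied at 105 years.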

To assess the speed of the four methods we counted the number of operations required to compute the exact information matrix and its approximations. Evaluation of Θ, γ, their analytically specified derivatives, or comparable procedures were each counted as one operation. Figure 1 shows the number of operations by sample size and method. To make the performance results comparable, the precision of the estimation algorithms was calibrated on an ad-hoc basis so that the relative error of the three methods (Discretized, Quadratic, Numerical) was approximately the same (0.02). For any method *A*, the relative error was defined as ‖*I*_{pr}(*A*) − *I*_{pr}(Exact)‖/‖*I*_{pr}(Exact)‖, where *I*_{pr}(*A*) is an estimate of the observed profile information matrix computed using method *A*, and the norm is defined as the sum of the absolute values of all elements of the matrix. Regardless of the sample size, the exact calculation outperformed the approximate methods. Inference based on the discretized second derivative required between 10 and 30 times as many operations as the exact calculation. The quadratic approach required between 60 and 200 times as many operations as the calculation of the exact *I*_{pr} matrix. The numerical method was computationally very costly, requiring between 600 and 7,000 times as many operations as the exact approach. However, the numerical approach behaved better than the other two methods in terms of relative error, as shown in Sect. 4.3.3.
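
For reference, the relative-error criterion with this entrywise norm is straightforward to compute (a minimal helper, names ours):

```python
import numpy as np

def relative_error(I_approx, I_exact):
    """|| I_approx - I_exact || / || I_exact ||, where ||M|| is the sum of
    the absolute values of the elements of M (the norm used in the text)."""
    A = np.asarray(I_approx, dtype=float)
    E = np.asarray(I_exact, dtype=float)
    return np.abs(A - E).sum() / np.abs(E).sum()
```

For the scalar SEER example, `relative_error([[141.2158]], [[141.7423]])` gives roughly 0.0037.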

A sample of size 500 was used to find the smallest possible relative error of the method when adjusting the different parameters involved on an ad-hoc basis. The best relative error achieved by the Discretized method was 0.01 and 8.13 × 10^{5} operations were required. This number was 0.013 for the quadratic approach with 5.32 × 10^{6} operations required, while the numerical approach achieved a relative error of 8 × 10^{−7} and required 3.87 × 10^{8} operations. This example shows that the numerical approach is perhaps the only one of the approximating methods that can compete with the exact procedure in terms of precision required in real-life analysis. Its high computational cost, however, makes it a poor choice for variable selection and other procedures requiring repeated evaluations of *I*_{pr}.

Three sets of experiments were performed with samples of size 100, 500, and 1,000. For each sample size, 1,000 simulated samples were generated. The covariance matrices based on *I*_{pr} were computed for each sample using the four approaches discussed. The mean and standard deviation of each of the entries of the estimators of covariance matrices under study were estimated from the 1,000 replicates. In addition, point semiparametric MLE estimates of the three parameters entering the profile likelihood (log odds ratios for the continuous factor and the level 2 vs. 1 and 3 vs. 1 contrasts) were used to compute the empirical covariance matrix based on the 1,000 replicates. A comparison of the estimated means of the entries of the covariance matrices calculated using the exact and numerical approaches with the empirical ones was used to evaluate how well these methods estimate the true finite-sample variance-covariance matrix. The results are shown in Table 1. Two factors contributed to the distance between the exact and numerical approaches and the empirical one: the finite-sample bias of covariance estimates based on *I*_{pr}, and the bias in the estimate of *I*_{pr} by an approximating method (this latter bias does not pertain to the exact method). The following basic conclusions are evident from Table 1.

- All methods are much better at estimating variances (left half of the table) than covariances (right half of the table);
- The precision of estimation of covariance improves rapidly with sample size;
- Under all sample sizes the numerical approach showed excellent concordance with the exact method. This is in agreement with our earlier observation that of all approximating methods, the numerical approach is most precise albeit computationally costly. The quadratic method was the least precise in this analysis;
- For reasonably large sample sizes all methods are in good agreement with the empirical estimate.

In this paper we have proposed a method to compute the profile information matrix based on implicit differentiation of the self-consistency equation. Computationally, to the best of our knowledge, the method outperforms all existing approaches. An attractive property of the procedure is that it is exact, contingent upon the point estimates. Even though exact point estimates are hardly ever available, the precision of variance-covariance estimation is improved because the method adds no error beyond that associated with the imprecision of the point estimates. Numerically efficient and stable procedures for point estimation have been developed earlier and complement this methodology well. We recommend the Exact method as the preferred choice with NTM.

Since derivatives of the profile likelihood are defined implicitly, applying the Newton-Raphson method to the profile likelihood for point estimation is a challenge. The Newton-Raphson method typically requires an exact Hessian matrix and is not guaranteed to converge if this matrix is approximated. The results of this paper can be used to provide an exact Hessian at any point in the parameter space, thus enabling the Newton-Raphson method for use with the profile likelihood.
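
A generic sketch of the resulting scheme (our own illustration, not part of the paper): given routines for the profile score and the exact Hessian, the Newton-Raphson iteration is

```python
import numpy as np

def newton_raphson(grad, hess, beta0, tol=1e-10, max_iter=50):
    """Newton-Raphson for maximizing a profile likelihood:
    beta <- beta - hess(beta)^{-1} grad(beta).
    With an exact Hessian (e.g., -I_pr supplied by the method of this paper),
    convergence near the optimum is quadratic."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(beta), grad(beta))
        beta = beta - step
        if np.abs(step).max() < tol:
            break
    return beta
```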

This research was supported by National Cancer Institute grant U01 CA97414 and Department of Defense grant DAMD17-03-1-0034.

The equation (*D* + *R*)*x* = *b* implies

$${d}_{k}{x}_{k}+{\displaystyle \sum _{l=1}^{n}{R}_{\mathit{\text{kl}}}{x}_{l}={b}_{k},\text{}k=1,\dots \phantom{\rule{thickmathspace}{0ex}},n.}$$

(19)

Since ${R}_{\mathit{\text{kl}}}={\displaystyle {\sum}_{i=\text{max}\{k,l\}}^{n}{a}_{i}}$, it follows from (19) that for *k* = 1, …, *n*,

$$\begin{array}{cc}{b}_{k}\hfill & ={d}_{k}{x}_{k}+{\displaystyle \sum _{i=k}^{n}{a}_{i}}{\displaystyle \sum _{l=1}^{k}{x}_{l}}+{\displaystyle \sum _{l=k+1}^{n}{\displaystyle \sum _{i=l}^{n}{a}_{i}{x}_{l}}}\hfill \\ \hfill & ={d}_{k}{x}_{k}+{\displaystyle \sum _{i=k}^{n}{a}_{i}}{\displaystyle \sum _{l=1}^{k}{x}_{l}}+{\displaystyle \sum _{i=k+1}^{n}{\displaystyle \sum _{l=k+1}^{i}{a}_{i}{x}_{l}}}\hfill \\ \hfill & ={d}_{k}{x}_{k}+{\displaystyle \sum _{i=k}^{n}{a}_{i}}\left({\displaystyle \sum _{l=1}^{n}{x}_{l}}-{\displaystyle \sum _{l=i+1}^{n}{x}_{l}}\right).\hfill \end{array}$$

The second equality above is a consequence of a change of summation order.

Hence, solving the system of equations (*D* + *R*)*x* = *b* is equivalent to solving the system

$$\begin{array}{cc}{x}_{k}\hfill & =\frac{1}{{d}_{k}}\left({b}_{k}-{\displaystyle \sum _{i=k}^{n}{a}_{i}y}+{\displaystyle \sum _{l=k+1}^{n}{\displaystyle \sum _{i=k}^{l-1}{a}_{i}{x}_{l}}}\right),\text{}k=1,\dots \phantom{\rule{thickmathspace}{0ex}},n\hfill \\ y\hfill & ={\displaystyle \sum _{l=1}^{n}{x}_{l}}.\hfill \end{array}$$
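
In code, this system can be solved without forming or inverting the full matrix. The sketch below (0-based indices; variable names are ours, not the authors' implementation) writes x_k = p_k + q_k y, runs the recursion backward, and recovers y from y = Σp + y Σq:

```python
import numpy as np

def solve_D_plus_R(d, a, b):
    """Solve (D + R) x = b, where D = diag(d) and
    R[k, l] = sum_{i >= max(k, l)} a[i].
    With A[k] = a[k] + ... + a[n-1], the system reads
        d_k x_k = b_k - A_k y + sum_{l > k} (A_k - A_l) x_l,  y = sum_l x_l.
    Writing x_k = p_k + q_k * y, both p and q follow a backward recursion,
    and y = sum(p) / (1 - sum(q))."""
    d, a, b = (np.asarray(v, dtype=float) for v in (d, a, b))
    n = len(d)
    A = np.cumsum(a[::-1])[::-1]       # suffix sums A[k] = sum_{i >= k} a[i]
    p, q = np.zeros(n), np.zeros(n)
    for k in range(n - 1, -1, -1):
        s_p = sum((A[k] - A[l]) * p[l] for l in range(k + 1, n))
        s_q = sum((A[k] - A[l]) * q[l] for l in range(k + 1, n))
        p[k] = (b[k] + s_p) / d[k]
        q[k] = (-A[k] + s_q) / d[k]
    y = p.sum() / (1.0 - q.sum())
    return p + q * y
```

The inner sums can be maintained as running partial sums, making the whole solve linear in n; the quadratic version above keeps the correspondence with the displayed equations obvious.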

Notice that {*x_{k}*} are in fact functions of

$$a=\tilde{\phi}(1)-1-\tilde{\phi}(0)\quad \text{and}\quad b=\tilde{\phi}(0).$$

Therefore,

$$\tilde{y}=\frac{\tilde{\phi}(0)}{1+\tilde{\phi}(0)-\tilde{\phi}(1)}.$$


G. Garibotti, Centro Regional Universitario Bariloche, Universidad Nacional del Comahue, Quintral 1250, 8400 Bariloche, Argentina, Email: garibott@cab.cnea.gov.ar.

- Dixon JR, Kosorok MR, Lee BL. Functional inference in semiparametric models using the piggyback bootstrap. Ann Inst Statist Math. 2005;57:255–277.
- Kosorok MR, Lee BL, Fine JP. Robust inference for univariate proportional hazards frailty regression models. Ann Statist. 2004;32:1448–1491.
- Lange K, Hunter DR, Yang I. Optimization transfer using surrogate objective functions (with discussion). J Comput Graph Statist. 2000;9:1–59.
- Murphy SA. Consistency in a proportional hazards model incorporating a random effect. Ann Statist. 1994;22(2):712–731.
- Murphy SA. Asymptotic theory for the frailty model. Ann Statist. 1995;23(1):182–198.
- Murphy SA, van der Vaart AW. Semiparametric likelihood ratio inference. Ann Statist. 1997;25:1471–1509.
- Murphy SA, van der Vaart AW. On profile likelihood. J Am Statist Assoc. 2000;95:449–485.
- Murphy SA, Rossini AJ, van der Vaart AW. Maximum likelihood estimation in the proportional odds model. J Am Statist Assoc. 1997;92(439):968–976.
- Nielsen GG, Gill RD, Andersen PK, Sorensen TI. A counting process approach to maximum likelihood estimation in frailty models. Scand J Statist. 1992;19:25–43.
- Parner E. Asymptotic theory for the correlated Gamma-frailty model. Ann Statist. 1998;26:183–214.
- Press WH, Flannery BP, Teukolsky SA, Vetterling WT. Numerical Recipes in Pascal: the art of scientific computing. New York: Cambridge University Press; 1994.
- Severini TA. Likelihood methods in statistics. New York: Oxford University Press; 2000.
- Tsodikov A. Semiparametric models of long- and short-term survival: an application to the analysis of breast cancer survival in Utah by age and stage. Statist Med. 2002;21:895–920.
- Tsodikov A. Semiparametric models: a generalized self-consistency approach. J R Statist Soc, Ser B. 2003;65:759–774.
- Tsodikov A, Ibrahim JG, Yakovlev AY. Estimating cure rates from survival data: an alternative to two-component mixture models. J Am Statist Assoc. 2003;98:1063–1078.
- van der Vaart AW. Asymptotic statistics. Cambridge series in statistical and probabilistic mathematics. Cambridge, UK: Cambridge University Press; 1998.
