

Scand Stat Theory Appl. Author manuscript; available in PMC 2010 June 1.

Published in final edited form as:

Scand Stat Theory Appl. 2009 December 1; 36(4): 713–734.

doi: 10.1111/j.1467-9469.2009.00646.x

PMCID: PMC2811346

NIHMSID: NIHMS168144

Hua Yun Chen, Division of Epidemiology & Biostatistics, School of Public Health, University of Illinois at Chicago, Chicago, IL 60532; hychen@uic.edu


**Abstract**

Theory on semiparametric efficient estimation in missing data problems has been systematically developed by Robins and his coauthors. Except in relatively simple problems, semiparametric efficient scores cannot be expressed in closed form. Instead, the efficient scores are often expressed as solutions to integral equations. A Neumann series was proposed as a successive approximation to the efficient scores in those situations. Statistical properties of the estimator based on the Neumann series approximation are difficult to obtain and, as a result, have not been clearly studied. In this paper, we reformulate the successive approximation in a simple iterative form and study the statistical properties of the estimator based on the reformulation. We show that a doubly robust, locally efficient estimator can be obtained by following the algorithm for robustifying the likelihood score. The results can be applied to, among others, the parametric regression, the marginal regression, and the Cox regression when data are subject to missing values and the missing data are missing at random. A simulation study is conducted to evaluate the performance of the approach, and a real data example is analyzed to demonstrate its use.

**1 Introduction**

The semiparametric efficient estimation for missing data problems has been extensively studied (Bickel *et al*., 1993; Robins *et al*., 1994, and others). One major task in such problems is to project the estimating score onto the orthogonal complement of the nuisance score space. However, the projection often depends on the unknown underlying distribution that generated the data (Robins *et al*., 1994, 1995; Rotnitzky & Robins, 1995; Rotnitzky *et al*., 1998; Scharfstein *et al*., 1999). To overcome this difficulty, working models have been proposed for computing a locally efficient score. It has been shown that when data are missing at random and either the working model for the missing data mechanism or the working model for the nuisance part of the full data is correct, the locally efficient score is asymptotically unbiased (Lipsitz *et al*., 1999; Robins *et al*., 1999; Robins & Rotnitzky, 2001; van der Laan & Robins, 2003). Note that, except in simple cases, computing the projection under working models still amounts to the hard problem of solving an integral equation. A Neumann series expansion has been proposed to obtain an approximate solution through successive approximation (Robins *et al*., 1994; Robins & Wang, 1998; van der Laan & Robins, 2003). Since the procedure for finding the locally efficient estimator based on the approximate locally efficient score is complicated, the study of the asymptotic properties of the estimator has been left open. In this article, we reformulate the successive approximation and show that an algorithm based on robustifying the likelihood score yields an estimator having the desired asymptotic properties, *i.e*., doubly robust and locally efficient.

The remainder of the article is organized as follows. In section 2, we reformulate the successive approximation in a simple form and show the robust property of the algorithm. The asymptotic properties of the estimator are carefully studied in section 3. We show that the algorithm indeed yields an estimator which is doubly-robust and locally-efficient when appropriate care is taken with regard to the number of terms used in the Neumann series approximation. Applications of the theory developed to regression models are briefly discussed in section 4. A simulation study is performed using parametric regression with missing covariates to examine the finite sample performance of the algorithm in section 5 and the algorithm is applied to a real data example. The article is concluded with some discussions in section 6. All proofs are collected in the Appendix.

**2 The Neumann series approximation to the efficient score**

Let *Y* be the full data, *R* be the missing data indicator for *Y*, and *R*(*Y*) and $\overline{R}(Y)$ be respectively the observed and missing parts of *Y*. Let the density of the distribution for (*R*, *Y*) with respect to *μ*, a product of counting measures and Lebesgue measures, be *dP*_{(α,β,θ)}*/dμ* = *π*(*R*|*Y*, *α*)*f*(*Y*, *β*, *θ*), where *β* ∈ Ω, *θ* ∈ Θ, and *α* ∈ Ξ. Here *β* and *α* are usually Euclidean parameters, and *θ* is a nuisance parameter, which can be of infinite dimension. Let *η* = (*α*, *β*, *θ*), where *β* is the parameter of interest and (*α*, *θ*) are nuisance parameters. Let $b(Y)\in {L}_{0}^{2}({P}_{\eta})$ be a mean-zero square-integrable function with respect to *P _{η}*. Define the nonparametric information operator

$${m}_{\eta}\{b(Y)\}={E}_{\eta}[{E}_{\eta}\{b(Y)\mid R,R(Y)\}\mid Y].$$

The Neumann series approximation to the efficient score appeared as the successive approximation in Robins *et al.* (1994). The method first finds the efficient score for *β* under the full data model, denoted by ${S}_{\beta}^{F,\mathit{eff}}$. The method then employs the successive approximation,

$${U}_{N}={S}_{\beta}^{F,\mathit{eff}}+{\mathcal{P}}_{\eta}(I-{m}_{\eta}){U}_{N-1},$$

where ${\mathcal{P}}_{\eta}$ is the projection onto the closure of the nuisance score space of the full data model and ${U}_{0}={S}_{\beta}^{F,\mathit{eff}}$. The semiparametric efficient score is ${E}_{\eta}\{{U}_{\infty}\mid R,R(Y)\}$, where ${U}_{\infty}$ denotes the limit of *U _{N}* as *N* → ∞. Two questions arise in practice: how the error from truncating the series at a finite *N* affects the resulting estimator, and what asymptotic properties that estimator has.

To answer these questions, we reformulate the successive approximation in another form as

$${U}_{N}={(I-{\mathcal{P}}_{\eta}{m}_{\eta})}^{N}{S}_{\beta},$$

where *S _{β}* is the likelihood score for *β* under the full data model. The semiparametric efficient score can then be expressed as

$$\underset{N\to \infty}{lim}E\{{(I-{\mathcal{P}}_{\eta}{m}_{\eta})}^{N}{S}_{\beta}\mid R,R(Y)\}.$$

Note that the approximation based on the new expression does not require us to first find the efficient score under the full data model. The approximation is the likelihood score when *N* = 0 and can be regarded as a robustification of the likelihood score when *N* > 0. An algorithm for finding the approximate locally efficient score for estimating *β* based on this expression can be described as follows. First, find an estimate of the nuisance parameters under the working models using methods such as the maximum likelihood approach. Then, compute the approximate efficient score with the nuisance parameters estimated from the working models. Finally, solve the score equation to obtain the estimator of *β*. Results in the next section show that it is sufficient for the number *N* of terms in the approximation to be of an order higher than the logarithm of the sample size. Let

$${T}_{N}\{R,R(Y),\eta \}=E\{{(I-{\mathcal{P}}_{\eta}{m}_{\eta})}^{N}{S}_{\beta}\mid R,R(Y)\}$$

and *T*{*R*, *R*(*Y*), *η*} be the limit of *T _{N}*{*R*, *R*(*Y*), *η*} as *N* → ∞.
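Much of what follows turns on applying ${m}_{\eta}^{-1}$, which is exactly what the Neumann series computes. As a concrete illustration (a toy two-point distribution for *Y* with a single all-or-nothing missingness indicator; an assumed setup, not an example from the paper), the matrix of the operator *m _{η}* can be built explicitly and the series $\sum _{k}{(I-{m}_{\eta})}^{k}s$ summed until it solves ${m}_{\eta}\{b\}=s$:

```python
import numpy as np

# Toy model: Y in {0, 1}; R = 1 means Y is observed, R = 0 means Y is missing.
p = np.array([0.5, 0.5])       # P(Y = y)  (assumed values)
pi = np.array([0.7, 0.8])      # P(R = 1 | Y = y), bounded away from zero
w = (1 - pi) * p
w /= w.sum()                   # P(Y = y | R = 0)

# m{b}(y) = E[E{b(Y) | R, R(Y)} | Y = y] = pi_y b(y) + (1 - pi_y) E(b | R = 0)
M = np.diag(pi) + np.outer(1 - pi, w)

s = np.array([1.0, -1.0])      # an arbitrary function of Y
b = np.zeros_like(s)
term = s.copy()
for _ in range(60):            # Neumann series: b = sum_k (I - M)^k s
    b += term
    term -= M @ term
print(np.allclose(M @ b, s))   # True: b solves the integral equation m{b} = s
```

Because the observation probabilities are bounded away from zero, the eigenvalues of *I* − *M* lie in [0, 1), so the partial sums converge geometrically.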

Let *P*_{0} = *P*_{(β0,θ0,α0)}, the true distribution generating the data. If, for any small *ε* and large *M*, there exists *θ*_{(ε,M)} such that

$$f(y,{\beta}_{0},\theta )\phantom{\rule{0.16667em}{0ex}}\left(1+\epsilon \left[S(y){1}_{\{\mid S(y)\mid \le M\}}-{E}_{({\beta}_{0},\theta )}\left\{S(Y){1}_{\{\mid S(Y)\mid \le M\}}\right\}\right]\right)=f(y,{\beta}_{0},{\theta}_{\epsilon ,M}),$$

where *S*(*Y*) = *f*(*Y*, *β*_{0}, *θ*_{0})*/f*(*Y*, *β*_{0}, *θ*) − 1, then, in this paper, {*f*(*y*, *β*_{0}, *θ*), *θ* ∈ Θ} is called a super-convex family for *θ* at *β*_{0}. Note that a super-convex family of distributions is always a convex family of distributions, which corresponds to *M* = ∞. When the densities are uniformly bounded above and bounded away from zero, a convex family is also a super-convex family.
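To see why such a *θ*_{(ε,M)} can plausibly exist, note that the left-hand side of the display is itself a valid density: the truncated and centred perturbation integrates to zero,

$$\int f(y,{\beta}_{0},\theta )\left[S(y){1}_{\{\mid S(y)\mid \le M\}}-{E}_{({\beta}_{0},\theta )}\left\{S(Y){1}_{\{\mid S(Y)\mid \le M\}}\right\}\right]d\mu (y)=0,$$

and the bracketed term is bounded by 2*M*, so the perturbed function is nonnegative whenever *ε* ≤ 1/(2*M*). Super-convexity asks, in addition, that this perturbed density remain a member of the family {*f*(*y*, *β*_{0}, *θ*), *θ* ∈ Θ}.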

**Proposition 1**. Assume that data are missing at random and that the true distribution generating the data is *P*_{0}. Then the following results hold:

- For any fixed *N*, if the nuisance model for the full data is correct, *i.e*., *θ* = *θ*_{0}, then *T _{N}*{*R*, *R*(*Y*), *β*_{0}, *θ*_{0}, *α*} is asymptotically unbiased under *P*_{0}. The *L*^{2}(*P*_{(β0, θ0, α)}) limit of *T _{N}* is also asymptotically unbiased if it is in *L*^{2}(*P*_{0}) and the missing data probabilities are bounded away from zero.
- If the model for the missing data mechanism is correct, *i.e*., *α* = *α*_{0}, and *f*(*y*, *β*_{0}, *θ*) is a super-convex family with respect to *θ*, then *E*_{0}[*T*{*R*, *R*(*Y*), *β*_{0}, *θ*, *α*_{0}}] = 0 if *E*_{0}[*T*^{2}{*R*, *R*(*Y*), *β*_{0}, *θ*, *α*_{0}}] < ∞, where *T*^{2} = *TT*′.

We omit the proof of this proposition because it is similar to proofs of analogous results in the literature, such as in Robins *et al.* (2000) and van der Laan & Robins (2003). The proposition shows that *T* is doubly robust, *i.e*., it is unbiased when either the missing data mechanism model or the nuisance model for the full data is correctly specified. For a fixed *N*, *T _{N}* is unbiased only when the nuisance model for the full data is correct. However, note that *T _{N}* converges to the doubly robust *T* as *N* → ∞, so the bias of *T _{N}* under a correctly specified missing data mechanism model can be made small by taking *N* sufficiently large.

**3 Estimation and inference based on approximate locally efficient scores**

To simplify notation in this section, we use *θ* to denote the nuisance parameter; that is, we absorb *α* into *θ*. After this parameter absorption, denote the model $\int \pi (R\mid Y,\alpha )f(Y,\beta ,\theta )d\overline{R}(Y)$ by *g*(*R*, *R*(*Y*), *β*, *θ*). Let *θ*(*γ*), *γ* ∈ Γ, define a working model which is a regular submodel, and let $\widehat{\theta}=\theta (\widehat{\gamma})$ be a $\sqrt{n}$-consistent estimator of *θ*(*γ*) under the working models.

To accommodate *β* of infinite dimension, we use the functional form to denote *T _{N}* and *T*, writing *T _{N}*(*β*, *θ*)(*h*_{1}) and *T*(*β*, *θ*)(*h*_{1}) for *h*_{1} ∈ *H*_{1}, an index set. Let ${\stackrel{\sim}{\beta}}_{N}$ be the solution to the equation

$${P}_{n}{T}_{N}({\stackrel{\sim}{\beta}}_{N},\widehat{\theta})({h}_{1})=\frac{1}{n}\sum _{i=1}^{n}{T}_{N(i)}({\stackrel{\sim}{\beta}}_{N},\widehat{\theta})({h}_{1})=0,$$

for all *h*_{1} ∈ *H*_{1}. Let $\stackrel{\sim}{\beta}$ be the solution to the equation

$${P}_{n}T(\stackrel{\sim}{\beta},\widehat{\theta})({h}_{1})=\frac{1}{n}\sum _{i=1}^{n}{T}_{(i)}(\stackrel{\sim}{\beta},\widehat{\theta})({h}_{1})=0,$$

for all *h*_{1} ∈ *H*_{1}. Define the linear operator ${\mathcal{Q}}_{0}$ as a map from *H*_{1} to itself satisfying

$$<{\mathcal{Q}}_{0}{h}_{1},{h}_{1}^{\ast}{>}_{{H}_{1}}={E}_{0}\left\{\frac{\partial T({\eta}_{0})}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\right\}.$$

(1)

Assumption 9 in the Appendix guarantees that ${\mathcal{Q}}_{0}$ exists, is uniquely defined, and is continuously invertible because of the following. For any given *h*_{1}, the right-hand side of the foregoing equation defines a linear functional on *H*_{1}. By the Riesz representation theorem, there exists an ${h}_{1}^{\ast \ast}\in {H}_{1}$ such that, for all ${h}_{1}^{\ast}\in {H}_{1}$,

$$<{h}_{1}^{\ast \ast},{h}_{1}^{\ast}{>}_{{H}_{1}}={P}_{0}\left\{\frac{\partial T({\eta}_{0})}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\right\}.$$

We can thus define the map ${\mathcal{Q}}_{0}$ from *H*_{1} to *H*_{1} such that ${\mathcal{Q}}_{0}{h}_{1}={h}_{1}^{\ast \ast}$. By varying *h*_{1} ∈ *H*_{1}, we see that ${\mathcal{Q}}_{0}$ is well-defined on *H*_{1}. It is straightforward to verify that ${\mathcal{Q}}_{0}$ is a linear operator on *H*_{1}. Similarly, we define linear operators ${\mathcal{Q}}_{N}$ and ${\mathcal{Q}}_{0N}$, mapping *H*_{1} to itself, respectively as

$$<{\mathcal{Q}}_{N}{h}_{1},{h}_{1}^{\ast}{>}_{{H}_{1}}={P}_{0}\left\{\frac{\partial {T}_{N}({\beta}_{N},{\theta}^{\ast})}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\right\}.$$

and

$$<{\mathcal{Q}}_{0N}{h}_{1},{h}_{1}^{\ast}{>}_{{H}_{1}}={P}_{0}\left\{\frac{\partial {T}_{N}({\beta}_{0},{\theta}^{\ast})}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\right\},$$

which are respectively continuously invertible from assumption 10 in the Appendix and from the continuity of the right-hand side with respect to *β*.

We now state theorems on the asymptotic properties of the *β* estimators when data are missing at random and either the missing data mechanism model or the nuisance full data model is correctly specified. Theorem 1 describes the asymptotic behavior of $\stackrel{\sim}{\beta}$ as *n* → ∞. Theorem 2 states the asymptotic properties of ${\stackrel{\sim}{\beta}}_{N}$ when *N* is fixed and *n* → ∞. Theorem 3 shows that, when *N* grows suitably with the sample size, ${\stackrel{\sim}{\beta}}_{{N}_{n}}$ is asymptotically equivalent to $\stackrel{\sim}{\beta}$.

**Theorem 1**. Under assumptions 1–9, as *n* → ∞, $\stackrel{\sim}{\beta}\to {\beta}_{0}$ almost surely and

$$<\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0}),{h}_{1}{>}_{{H}_{1}}\to N(0,{V}_{0}({h}_{1})),$$

uniformly for *h*_{1} *H*_{10}, and

$${V}_{0}({h}_{1})={E}_{0}{\left\{T({\beta}_{0},{\theta}^{\ast})({\mathcal{Q}}_{0}^{-1}{h}_{1})\right\}}^{\otimes 2},$$

which can be consistently estimated uniformly for *h*_{1} *H*_{10} by

$$\frac{1}{n}\sum _{i=1}^{n}{\left[{T}_{(i)}(\stackrel{\sim}{\beta},\widehat{\theta})\phantom{\rule{0.16667em}{0ex}}\left\{{\widehat{\mathcal{Q}}}_{0}^{-1}({h}_{1})\right\}\right]}^{\otimes 2},$$

where *θ*^{*} = *θ*(*γ*^{*}) and *γ*^{*} is the limit of $\widehat{\gamma}$, and

$$<{\widehat{\mathcal{Q}}}_{0}({h}_{1}),{h}_{1}^{\ast}{>}_{{H}_{1}}=\frac{1}{n}\sum _{i=1}^{n}\sqrt{n}\left[{T}_{(i)}\{\stackrel{\sim}{\beta}+{h}_{1}/\sqrt{n},\widehat{\theta}\}({h}_{1}^{\ast})-{T}_{(i)}\{\stackrel{\sim}{\beta},\widehat{\theta}\}({h}_{1}^{\ast})\right],$$

for
${h}_{1}^{\ast}\in {H}_{1}$. When the nuisance models for *θ* are correctly specified, *θ*^{*} = *θ*_{0} and *V*_{0} attains the semiparametric efficient variance bound.
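The operator ${\widehat{\mathcal{Q}}}_{0}$ above is a directional finite-difference derivative of the empirical score, taken with step $h_{1}/\sqrt{n}$. A small numerical sketch of this construction (using a hypothetical linear score, for which the directional derivative is known exactly):

```python
import numpy as np

def Q_hat(T_n, beta_hat, h1, n):
    """Directional finite difference, as in the definition of Q0-hat:
    sqrt(n) * [ T_n(beta + h1/sqrt(n)) - T_n(beta) ]."""
    return np.sqrt(n) * (T_n(beta_hat + h1 / np.sqrt(n)) - T_n(beta_hat))

# Check on a linear score T_n(b) = A b + c, whose derivative in direction h1 is A h1.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([0.1, -0.2])
T_n = lambda b: A @ b + c

h1 = np.array([1.0, -1.0])
approx = Q_hat(T_n, np.array([0.3, 0.7]), h1, n=400)
print(np.allclose(approx, A @ h1))  # True: exact for a linear score
```

For a nonlinear score the same step size $h_1/\sqrt{n}$ yields the derivative up to the usual finite-difference error, which is what the consistency statement for the variance estimator relies on.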

**Theorem 2**. Under assumptions 1–8 and 10–12, for any fixed *N*, as *n* → ∞, ${\stackrel{\sim}{\beta}}_{N}$ converges almost surely to a limit *β _{N}* satisfying

$$<({\beta}_{N}-{\beta}_{0}),{h}_{1}{>}_{{H}_{1}}\approx {E}_{0}\left\{{T}_{N}({\beta}_{0},{\theta}^{\ast})(-{\mathcal{Q}}_{0N}^{-1}{h}_{1})\right\}.$$

Further,

$$<\sqrt{n}({\stackrel{\sim}{\beta}}_{N}-{\beta}_{N}),{h}_{1}{>}_{{H}_{1}}\to N(0,{V}_{N}({h}_{1}))$$

uniformly over *h*_{1} *H*_{10}, where

$${V}_{N}({h}_{1})={E}_{0}{\left[{T}_{N}({\beta}_{N},{\theta}^{\ast})({\mathcal{Q}}_{N}^{-1}{h}_{1})+{E}_{0}{\left\{\frac{\partial {T}_{N}({\beta}_{N},{\theta}^{\ast})}{\partial \theta}(u)({\mathcal{Q}}_{N}^{-1}{h}_{1})\right\}}_{u={U}_{1}}\right]}^{\otimes 2}$$

can be consistently estimated by

$$\frac{1}{n}\sum _{i=1}^{n}{\left({T}_{N(i)}\{{\stackrel{\sim}{\beta}}_{N},\widehat{\theta}\}({\widehat{\mathcal{Q}}}_{N}^{-1}{h}_{1})+\sqrt{n}\left[{\overline{T}}_{N}\{{\stackrel{\sim}{\beta}}_{N},\widehat{\theta}+\frac{{U}_{i}}{\sqrt{n}}\}({\widehat{\mathcal{Q}}}_{N}^{-1}{h}_{1})-{\overline{T}}_{N}\{{\stackrel{\sim}{\beta}}_{N},\widehat{\theta}\}({\widehat{\mathcal{Q}}}_{N}^{-1}{h}_{1})\right]\right)}^{\otimes 2}$$

uniformly for *h*_{1} ∈ *H*_{10}, where ${\widehat{\mathcal{Q}}}_{N}({h}_{1})$ is defined as

$$<{\widehat{\mathcal{Q}}}_{N}({h}_{1}),{h}_{1}^{\ast}{>}_{{H}_{1}}=\frac{1}{n}\sum _{i=1}^{n}\sqrt{n}\left[{T}_{N(i)}\{{\stackrel{\sim}{\beta}}_{N}+{h}_{1}/\sqrt{n},\widehat{\theta}\}({h}_{1}^{\ast})-{T}_{N(i)}\{{\stackrel{\sim}{\beta}}_{N},\widehat{\theta}\}({h}_{1}^{\ast})\right]$$

for *h*_{1}, ${h}_{1}^{\ast}\in {H}_{1}$, and *U*_{1}, ···, *U _{n}* are the influence functions of $\widehat{\theta}$ in estimating *θ*^{*}, *i.e*.,

$$\widehat{\theta}-{\theta}^{\ast}=\frac{1}{n}\sum _{i=1}^{n}{U}_{i}+{o}_{p}\left(\frac{1}{\sqrt{n}}\right).$$

**Theorem 3**. Let *N _{n}* be a sequence such that *N _{n}*/log *n* → ∞ as *n* → ∞. Then, under the assumptions of theorems 1 and 2,

$$<\sqrt{n}({\stackrel{\sim}{\beta}}_{{N}_{n}}-\stackrel{\sim}{\beta}),{h}_{1}{>}_{{H}_{1}}={o}_{{P}_{0}}(1)$$

uniformly over *h*_{1} *H*_{10} as *n* → ∞. Further, *V*_{0}(*h*_{1}) can be consistently estimated by

$$\frac{1}{n}\sum _{i=1}^{n}{\left[{T}_{{N}_{n}(i)}({\stackrel{\sim}{\beta}}_{{N}_{n}},\widehat{\theta})\left\{{\widehat{\mathcal{Q}}}_{0{N}_{n}}^{-1}({h}_{1})\right\}\right]}^{\otimes 2}$$

uniformly over *h*_{1} *H*_{10}, where

$$<{\widehat{\mathcal{Q}}}_{0{N}_{n}}({h}_{1}),{h}_{1}^{\ast}{>}_{{H}_{1}}=\frac{1}{n}\sum _{i=1}^{n}\sqrt{n}\left[{T}_{{N}_{n}(i)}\{{\stackrel{\sim}{\beta}}_{{N}_{n}}+{h}_{1}^{\ast}/\sqrt{n},\widehat{\theta}\}({h}_{1})-{T}_{{N}_{n}(i)}\{{\stackrel{\sim}{\beta}}_{{N}_{n}},\widehat{\theta}\}({h}_{1})\right],$$

for *h*_{1},
${h}_{1}^{\ast}\in {H}_{1}$.

In practice, Theorem 1 is useful only when the locally efficient score has a closed-form expression, which can occasionally be the case. In general, Theorem 1 cannot be applied directly because the form of the locally efficient score is unknown. Theorem 2, in contrast, can almost always be applied to the approximate score with a finite *N*. It can be seen from Theorem 2 that, although the bias in estimating *β* cannot be totally avoided, its magnitude can be controlled by selecting a sufficiently large *N*. This is because ${E}_{0}\{{T}_{N}({\beta}_{0},{\theta}^{\ast})\}\to 0$ as *N* → ∞ when either the working model for the missing data mechanism or the nuisance model for the full data is correctly specified.

**4 Applications to examples**

In this section, we apply the theorems in the previous section to several regression models frequently used in practice.

Let *Y* = (*V*, *W*, *X*) have density *p*(*v*|*w*, *x*)*f*(*w*|*x*, *β*)*q*(*x*), where *f*(*w*|*x*, *β*) is the parametric regression model with a Euclidean parameter *β* ∈ *R ^{k}*, which is of primary interest; here *W* is the outcome, *X* the covariates, *V* the auxiliary covariates, and (*p*, *q*) are nuisance parameters. The locally efficient score for estimating *β* is ${E}_{\eta}\{{m}_{\eta}^{-1}(\mathcal{D})\mid R,R(Y)\}$, where $\mathcal{D}$ is the solution to the integral equation

$${E}_{\eta}\{{m}_{\eta}^{-1}(\mathcal{D})\mid X,W\}-{E}_{\eta}\{{m}_{\eta}^{-1}(\mathcal{D})\mid X\}=\frac{\partial}{\partial \beta}logf(W\mid X,\beta ).$$

When missing data form monotone patterns, ${m}_{\eta}^{-1}$ has a closed-form expression. However, the foregoing integral equation does not have a closed-form solution even for the simplest missing data pattern and when no auxiliary covariates are involved. As a result, the successive approximation is needed except when the missing data form monotone patterns and we are satisfied with a doubly robust estimator.

The score operator for the parameters (*q*, *β*, *p*) is

$${A}_{\eta}({h}_{1},{h}_{21},{h}_{22})={h}_{1}^{T}\frac{\partial logf}{\partial \beta}+{h}_{21}(v,w,x)+{h}_{22}(x),$$

where *h*_{1} ∈ *H*_{1} = *R ^{k}* with

$${h}_{21}\in {H}_{21}=\{{h}_{21}(v,w,x)\in {L}^{2}({P}_{\eta})\mid {E}_{\eta}\{{h}_{21}(V,W,X)\mid W,X\}=0\}$$

and

$${h}_{22}\in {H}_{22}=\{{h}_{22}(x)\in {L}^{2}({P}_{\eta})\mid {E}_{\eta}\{{h}_{22}(X)\}=0\}.$$

Note that *H*_{1} does not vary with *β* ∈ Ω. *H*_{10} can be chosen as *H*_{10} = {*h*_{1} ∈ *R ^{k}* : ‖*h*_{1}‖ ≤ 1}. The adjoint operator of *A*_{2η} is

$${A}_{2\eta}^{\ast}\{s(v,w,x)\}=\{s(v,w,x)-{E}_{\eta}(s\mid w,x),{E}_{\eta}(s\mid x)-{E}_{\eta}(s)\}.$$

It follows that ${A}_{2\eta}^{\ast}{A}_{2\eta}({h}_{21},{h}_{22})=({h}_{21}(v,w,x),{h}_{22}(x))$. Hence, ${A}_{2\eta}^{\ast}{A}_{2\eta}$ is continuously invertible on *H*_{2}, and ${\mathcal{P}}_{\eta}={A}_{2\eta}{({A}_{2\eta}^{\ast}{A}_{2\eta})}^{-1}{A}_{2\eta}^{\ast}$ has the form

$${\mathcal{P}}_{\eta}(s)=s(v,w,x)-{E}_{\eta}(s\mid w,x)+{E}_{\eta}(s\mid x),$$

for any mean-zero square-integrable function *s*. Assuming that the densities involving the nuisance parameters are uniformly bounded, the convexity requirement for *q*(*v*|*w*, *x*)*f*(*w*|*x*, *β*)*p*(*x*) with respect to (*q*, *p*) can be verified as follows:

$$\tau {q}_{1}(v\mid w,x){p}_{1}(x)+(1-\tau ){q}_{2}(v\mid w,x){p}_{2}(x)={q}_{\tau}(v\mid w,x){p}_{\tau}(x),$$

where

$${q}_{\tau}(v\mid w,x)=\frac{\tau {q}_{1}(v\mid w,x){p}_{1}(x)+(1-\tau ){q}_{2}(v\mid w,x){p}_{2}(x)}{\tau {p}_{1}(x)+(1-\tau ){p}_{2}(x)}$$

and

$${p}_{\tau}(x)=\tau {p}_{1}(x)+(1-\tau ){p}_{2}(x)$$

for *τ* ∈ [0, 1].
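The projection ${\mathcal{P}}_{\eta}(s)=s-{E}_{\eta}(s\mid w,x)+{E}_{\eta}(s\mid x)$ obtained above can be checked numerically. The following sketch (an assumed discrete distribution for binary (*v*, *w*, *x*); illustrative only) verifies that ${\mathcal{P}}_{\eta}$ is idempotent and that the residual $s-{\mathcal{P}}_{\eta}(s)$ is orthogonal to *H*_{21}:

```python
import numpy as np

rng = np.random.default_rng(0)
pmf = rng.random((2, 2, 2))
pmf /= pmf.sum()               # joint pmf of (v, w, x), axes in that order

def E(s):                      # E(s)
    return (s * pmf).sum()

def E_wx(s):                   # E(s | w, x), broadcast back over v
    return ((s * pmf).sum(axis=0) / pmf.sum(axis=0))[None, :, :] + 0 * s

def E_x(s):                    # E(s | x), broadcast back over (v, w)
    return ((s * pmf).sum(axis=(0, 1)) / pmf.sum(axis=(0, 1)))[None, None, :] + 0 * s

def proj(s):                   # P(s) = s - E(s | w, x) + E(s | x)
    return s - E_wx(s) + E_x(s)

s = rng.normal(size=(2, 2, 2))
s -= E(s)                      # make s mean zero

Ps = proj(s)
print(np.allclose(proj(Ps), Ps))         # idempotent: P(P(s)) = P(s)
h21 = rng.normal(size=(2, 2, 2))
h21 -= E_wx(h21)                         # an element of H21: E(h21 | w, x) = 0
print(np.isclose(E((s - Ps) * h21), 0))  # residual is orthogonal to H21
```

Both checks print `True`: the residual $s-{\mathcal{P}}_{\eta}(s)={E}_{\eta}(s\mid w,x)-{E}_{\eta}(s\mid x)$ is (*w*, *x*)-measurable and hence orthogonal to every *h*_{21} with zero conditional mean given (*w*, *x*).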

Let *W* = (*W*_{1}, ···, *W _{K}*) be the multivariate outcome with marginal regression model *E*(*W _{k}*|*X _{k}*) = *g _{k}*(*X _{k}β*), *k* = 1, ···, *K*, and residual *ε* = *W* − *g*(*Xβ*). The locally efficient score for estimating *β* is ${E}_{\eta}\{{m}_{\eta}^{-1}(\mathcal{D})\mid R,R(Y)\}$, where $\mathcal{D}$ satisfies

$${\text{Cov}}_{\eta}\{{m}_{\eta}^{-1}(\mathcal{D}),\epsilon \mid X\}=\frac{dg}{d\beta}.$$

When *X* is completely observed, the efficient score has the form

$$\frac{dg}{d\beta}{[{\text{Var}}_{\eta}\{{m}_{\eta}^{-1}(\epsilon )\mid X\}]}^{-1}{m}_{\eta}^{-1}(\epsilon ).$$

Successive approximation is needed when either the missing data form nonmonotone patterns or the covariates are subject to missing values.

When data are fully observed, the score for estimating *β* is

$${A}_{1\eta}{h}_{1}=-\sum _{k=1}^{K}{h}_{1}^{T}{X}_{k}^{T}{g}_{k}^{\prime}({X}_{k}\beta )\frac{\partial logf}{\partial {\epsilon}_{k}}({\epsilon}_{1},\cdots ,{\epsilon}_{K}),$$

where *h*_{1} ∈ *H*_{1} = *R ^{d}*, and the score operator for the nuisance parameters is

$${A}_{2\eta}\{{h}_{21}(V,X,W),{h}_{22}(X,W),{h}_{23}(X)\}={h}_{22}(X,W)+{h}_{21}(V,X,W)+{h}_{23}(X),$$

where *H*_{2} = *H*_{21} × *H*_{22} × *H*_{23} with

$$\begin{array}{c}{H}_{21}=\{{h}_{21}(v,w,x)\in {L}^{2}({P}_{\eta})\mid {E}_{\eta}\{{h}_{21}(V,W,X)\mid W,X\}=0\},\\ {H}_{22}=\{{h}_{22}(w,x)\in {L}^{2}({P}_{\eta})\mid {E}_{\eta}\{{h}_{22}(X,W)\mid X\}=0,{E}_{\eta}\{\epsilon {h}_{22}(X,W)\mid X\}=0\},\end{array}$$

and

$${H}_{23}=\{{h}_{23}(x)\in {L}^{2}({P}_{\eta})\mid {E}_{\eta}\{{h}_{23}(X)\}=0\}.$$

The adjoint operator of *A*_{2η} is

$$\begin{array}{l}{A}_{2\eta}^{\ast}S(V,X,W)=\{S(V,X,W)-{E}_{\eta}(S\mid X,W),{E}_{\eta}(S\mid X,W)-{E}_{\eta}(S\mid X)\\ -{\text{Cov}}_{\eta}(S,\epsilon \mid X){\{{\text{Var}}_{\eta}(\epsilon \mid X)\}}^{-1}\epsilon ,{E}_{\eta}(S\mid X)-{E}_{\eta}(S)\}.\end{array}$$

It follows that ${A}_{2\eta}^{\ast}{A}_{2\eta}({h}_{21},{h}_{22},{h}_{23})=({h}_{21}(v,w,x),{h}_{22}(w,x),{h}_{23}(x))$. Hence, ${A}_{2\eta}^{\ast}{A}_{2\eta}$ has a continuous inverse on *H*_{21} × *H*_{22} × *H*_{23}, and ${\mathcal{P}}_{\eta}={A}_{2\eta}{({A}_{2\eta}^{\ast}{A}_{2\eta})}^{-1}{A}_{2\eta}^{\ast}$ has the form

$${\mathcal{P}}_{\eta}s=s(V,X,W)-{E}_{\eta}(s\epsilon \mid X){\{{\text{Var}}_{\eta}(\epsilon \mid X)\}}^{-1}\epsilon ,$$

for mean-zero square-integrable function *s*. The efficient score for *β* under the full data is

$${S}_{\beta}^{\mathit{eff},F}=\left\{{X}_{1}^{T}{g}_{1}^{\prime}({X}_{1}\beta ),\cdots ,{X}_{K}^{T}{g}_{K}^{\prime}({X}_{K}\beta )\right\}{\{{\text{Var}}_{\eta}(W\mid X)\}}^{-1}\epsilon .$$

When the densities involving the nuisance parameters are uniformly bounded, the convexity requirement can be verified as follows.

$$\tau {q}_{1}(v\mid w,x){f}_{1}\{w-g(\beta )\}{p}_{1}(x)+(1-\tau ){q}_{2}(v\mid w,x){f}_{2}\{w-g(\beta )\}{p}_{2}(x)={q}_{\tau}(v\mid w,x){f}_{\tau}\{w-g(\beta )\}{p}_{\tau}(x),$$

where

$$\begin{array}{c}{q}_{\tau}(v\mid w,x)=\frac{\tau {q}_{1}(v\mid w,x){f}_{1}\{w-g(\beta )\}{p}_{1}(x)+(1-\tau ){q}_{2}(v\mid w,x){f}_{2}\{w-g(\beta )\}{p}_{2}(x)}{\tau {f}_{1}\{w-g(\beta )\}{p}_{1}(x)+(1-\tau ){f}_{2}\{w-g(\beta )\}{p}_{2}(x)},\\ {f}_{\tau}\{w-g(\beta )\}=\frac{\tau {f}_{1}\{w-g(\beta )\}{p}_{1}(x)+(1-\tau ){f}_{2}\{w-g(\beta )\}{p}_{2}(x)}{\tau {p}_{1}(x)+(1-\tau ){p}_{2}(x)},\end{array}$$

and *p _{τ}*(*x*) = *τp*_{1}(*x*) + (1 − *τ*)*p*_{2}(*x*) for *τ* ∈ [0, 1].

Suppose that *T* is the survival time of a subject, which is censored by the censoring time *C*. Given the (time-independent) covariate *Z*, *T* and *C* are independent. Only *X* = *T* ∧ *C* = min(*T*, *C*) and $\delta ={1}_{\{T\le C\}}$, rather than (*T*, *C*), are observed. *Z* is subject to missing values. Assume that, given (*T*, *C*, *Z*), the missing data mechanism depends on the observed data *R*(*Y*) = {*X*, *δ*, *R*, *R*(*Z*)} only. Suppose that the Cox proportional hazards model holds, that is,

$$\underset{\mathrm{\Delta}\to 0}{lim}\frac{1}{\mathrm{\Delta}}P(t<T\le t+\mathrm{\Delta}\mid T\ge t,Z)=\lambda (t)\phi (\beta Z),$$

where *ϕ* is a known function and *λ*(*t*) is an unknown baseline hazard function. The nuisance parameter includes the censoring distribution, the baseline hazard, and the covariate distribution. The efficient score for estimating *β* when data are subject to MAR missing values is ${E}_{\eta}[{m}_{\eta}^{-1}\{\mathcal{D}(X,\delta ,Z)\}\mid R,R(X,\delta ,Z)]$ (Robins *et al*., 1994; Nan *et al*., 2004), where $\mathcal{D}$ is the unique solution to

$$\begin{array}{l}\frac{\partial log\phi}{\partial \beta}-\frac{{E}_{\eta}\{\xi (u){\scriptstyle \frac{\partial \phi}{\partial \beta}}\}}{{E}_{\eta}\{\xi (u)\phi \}}={m}_{\eta}^{-1}(\mathcal{D})(u,1,Z)-\frac{{E}_{\eta}\{{m}_{\eta}^{-1}(\mathcal{D})\xi (u)\mid Z\}}{{E}_{\eta}\{\xi (u)\mid Z\}}\\ -{E}_{\eta}\left(\xi (u)\phi \left[{m}_{\eta}^{-1}(\mathcal{D})(u,1,Z)-\frac{{E}_{\eta}\{{m}_{\eta}^{-1}(\mathcal{D})\xi (u)\mid Z\}}{{E}_{\eta}\{\xi (u)\mid Z\}}\right]\right)/{E}_{\eta}\{\xi (u)\phi \}\end{array}$$

and

$$\mathcal{D}=\int \left\{{b}_{1}(u,Z)-\frac{{E}_{\eta}\{\xi (u)\phi {b}_{1}(u,Z)\}}{{E}_{\eta}\{\xi (u)\phi \}}\right\}\{dN(u)-\xi (u)\phi (\beta Z)d\mathrm{\Lambda}(u)\}$$

for some *b*_{1}, where $\xi (u)={1}_{\{X\ge u\}}$ and $N(u)={1}_{\{X\le u,\delta =1\}}$. The successive approximation is needed to obtain a locally efficient estimator of *β*.

The density for (*X*, *δ*, *Z*) is

$$f(x,\delta \mid z,\beta ,\lambda )p(z)={\lambda}^{\delta}(x){\phi}^{\delta}(\beta z)exp\{-\mathrm{\Lambda}(x)\phi (\beta z)\}{g}_{c}^{1-\delta}(x\mid z){\overline{G}}_{c}^{\delta}(x\mid z)p(z),$$

where $\mathrm{\Lambda}(x)={\int}_{0}^{x}\lambda (t)dt$, *g _{c}* is the density function of the censoring time distribution given *Z*, and ${\overline{G}}_{c}$ is the corresponding survival function. The score operator is

$$\begin{array}{l}{A}_{\eta}\{{h}_{11},{h}_{12}(x),{h}_{21}(x,z),{h}_{22}(z)\}=\int {h}_{11}^{T}\frac{\partial}{\partial \beta}log\phi (\beta Z)d{M}_{T}(t,z)+\int {h}_{12}(t)d{M}_{T}(t,z)\\ +\int {h}_{21}(t,Z)d{M}_{C}(t,Z)+{h}_{22}(Z),\end{array}$$

where *H*_{1} = *H*_{11} × *H*_{12}, *H*_{2} = *H*_{21} × *H*_{22}, and *H*_{11} = *R ^{k}* with

$$\begin{array}{l}{H}_{12}=\{{h}_{12}(t)\mid {h}_{12}(t)\in {L}^{2}\{d\mathrm{\Lambda}(t)\}\},\\ {H}_{21}=\{{h}_{21}(t,z)\mid {h}_{21}(t,z)\in {L}^{2}\{d{\mathrm{\Lambda}}_{C}(t\mid z)dP(z)\}\},\end{array}$$

and

$${H}_{22}=\{{h}_{22}(z)\in {L}^{2}({P}_{\eta})\mid {E}_{\eta}\{{h}_{22}(Z)\}=0\}.$$

For Λs that are bounded at *T*_{0}, the study stopping time, *H*_{12} does not change with Λ. Hence, *H*_{1} is fixed. Define the inner product on *H*_{2} as

$$<({h}_{21},{h}_{22}),({h}_{21}^{\ast},{h}_{22}^{\ast}){>}_{{H}_{2}}={E}_{\eta}\left\{\int {h}_{21}(t,Z){h}_{21}^{\ast}(t,Z)d{\mathrm{\Lambda}}_{c}(t\mid Z)\right\}+{E}_{\eta}\{{h}_{22}(Z){h}_{22}^{\ast}(Z)\}.$$

It is not difficult to see that *H*_{2} is a Hilbert space under the inner product. Similarly, we can define an inner product on *H*_{1} as

$$<({h}_{11},{h}_{12}),({h}_{11}^{\ast},{h}_{12}^{\ast}){>}_{{H}_{1}}={h}_{11}^{T}{h}_{11}^{\ast}+\int {h}_{12}(t){h}_{12}^{\ast}(t)d\mathrm{\Lambda}(t)$$

to make it a Hilbert space. Let *H*_{10} be the subset of *H*_{1} such that, for any (*h*_{11}, *h*_{12}) ∈ *H*_{10}, *h*_{11} is bounded by 1 and *h*_{12} ∈ *BV*[0, *T*_{0}], *i.e*., *h*_{12} has bounded variation on [0, *T*_{0}].

Let ${A}_{2\eta}({h}_{21},{h}_{22})=\int {h}_{21}(t,Z)d{M}_{C}(t,Z)+{h}_{22}(Z)$ denote the nuisance score operator. Its adjoint ${A}_{2\eta}^{\ast}$ is defined through

$$<{A}_{2\eta}({h}_{21},{h}_{22}),s(x,\delta ,z){>}_{{L}^{2}({P}_{\eta})}=<({h}_{21},{h}_{22}),{A}_{2\eta}^{\ast}\{s(x,\delta ,z)\}{>}_{{H}_{2}}.$$

Note that, for any *s*(*X*, *δ*, *Z*) ∈ *L*^{2}(*P _{η}*),

$$\begin{array}{l}s(X,\delta ,Z)=\int \left[s(t,1,Z)-\frac{{E}_{\eta}\{s(X,\delta ,Z)Y(t)\mid Z\}}{{E}_{\eta}\{Y(t)\mid Z\}}\right]d{M}_{T}(t,z)\\ +\int \left[s(t,0,Z)-\frac{{E}_{\eta}\{s(X,\delta ,Z)Y(t)\mid Z\}}{{E}_{\eta}\{Y(t)\mid Z\}}\right]d{M}_{C}(t,z)\\ +{E}_{\eta}\{s(X,\delta ,Z)\mid Z\},\end{array}$$

and the three components are orthogonal to each other. It follows that

$$\begin{array}{l}<{A}_{2\eta}({h}_{21},{h}_{22}),s(X,\delta ,Z){>}_{{L}^{2}({P}_{\eta})}={E}_{\eta}\{{h}_{22}(Z){E}_{\eta}(s\mid Z)\}\\ +{E}_{\eta}\left[\int {h}_{21}(t,Z)\left\{s(t,0,Z)-\frac{{E}_{\eta}\{s(X,\delta ,Z)Y(t)\mid Z\}}{{E}_{\eta}\{Y(t)\mid Z\}}\right\}d<{M}_{C}>(t,z)\right],\end{array}$$

where $d<{M}_{C}>(t,z)$ denotes the predictable variation process of *M _{C}*. Hence,

$$\begin{array}{l}{A}_{2\eta}^{\ast}(s)=(s(t,0,z){E}_{\eta}\{Y(t)\mid Z\}-{E}_{\eta}\{s(X,\delta ,Z)Y(t)\mid Z\},\\ {E}_{\eta}\{s(X,\delta ,Z)\mid Z\}-{E}_{\eta}\{s(X,\delta ,Z)\}).\end{array}$$

By direct calculation, it follows that

$${A}_{2\eta}^{\ast}{A}_{2\eta}({h}_{21},{h}_{22})=\left({h}_{21}(t,Z)E\{Y(t)\mid Z\},{h}_{22}(Z)\right),$$

which implies that ${A}_{2\eta}^{\ast}{A}_{2\eta}$ is continuously invertible on *H*_{2} when *E _{η}*{*Y*(*t*)|*Z*} is bounded away from zero on [0, *T*_{0}]. Then ${\mathcal{P}}_{\eta}={A}_{2\eta}{({A}_{2\eta}^{\ast}{A}_{2\eta})}^{-1}{A}_{2\eta}^{\ast}$ has the form

$${\mathcal{P}}_{\eta}(s)=\int \left[s(t,0,Z)-\frac{{E}_{\eta}\{s(X,\delta ,Z)Y(t)\mid Z\}}{{E}_{\eta}(Y(t)\mid Z)}\right]d{M}_{C}(t,Z)+{E}_{\eta}\{s(X,\delta ,Z)\mid Z\}.$$

for a mean-zero square-integrable function *s*. The efficient score for estimating (*β*, Λ) can be expressed as

$$\underset{N\to \infty}{lim}{E}_{\eta}\left\{{(I-{\mathcal{P}}_{\eta}{m}_{\eta})}^{N}a(X,\delta ,Z)\mid X,\delta ,R,R(Z)\right\},$$

where

$$a(X,\delta ,Z)=\int {h}_{11}^{T}\frac{\partial}{\partial \beta}log\phi (\beta Z)d{M}_{T}(t,Z)+\int {h}_{12}(t)d{M}_{T}(t,Z).$$

The convexity requirement can be verified as follows.

$$\tau {g}_{c1}^{1-\delta}(x\mid z){\overline{G}}_{c1}^{\delta}(x\mid z){p}_{1}(z)+(1-\tau ){g}_{c2}^{1-\delta}(x\mid z){\overline{G}}_{c2}^{\delta}(x\mid z){p}_{2}(z)={g}_{c\tau}^{1-\delta}(x\mid z){\overline{G}}_{c\tau}^{\delta}(x\mid z){p}_{\tau}(z),$$

where

$${g}_{c\tau}(t\mid z)=\frac{\tau {g}_{c1}(t\mid z){p}_{1}(z)+(1-\tau ){g}_{c2}(t\mid z){p}_{2}(z)}{\tau {p}_{1}(z)+(1-\tau ){p}_{2}(z)}$$

and

$${p}_{\tau}(z)=\tau {p}_{1}(z)+(1-\tau ){p}_{2}(z),$$

for *τ* ∈ [0, 1]. Note that we considered the regression parameter and the baseline hazard, rather than the regression parameter alone, as the parameters of interest because of the convexity requirement. This treatment differs from that commonly found in the literature.

**5 Numeric study**

We perform a simulation study on missing data in parametric regression with and without auxiliary covariates. Two parametric regression models were simulated: the first was a logistic regression; the second was a linear regression with a normal error.

In the logistic regression model, two independent covariates were simulated: one binary and the other normally distributed. One normally distributed auxiliary covariate was also simulated. It was assumed that, given the covariates, the outcome and the auxiliary covariate were independent, but the auxiliary covariate depended on the other covariates. In the simulation, both covariates were subject to missing values, and the missingness depended on the outcome and the auxiliary covariate only. More specifically, we assumed that *E*(*Y*|*X*_{1}, *X*_{2}) = *g*(*β*_{0} + *β*_{1}*x*_{1} + *β*_{2}*x*_{2}), where *g*(*t*) = (1 + *e*^{−t})^{−1}, and that the missing data indicators (*R*_{1}, *R*_{2}) for (*X*_{1}, *X*_{2}) satisfied

$$log\frac{P({R}_{1}=i,{R}_{2}=j\mid Y,V)}{P({R}_{1}=1,{R}_{2}=1\mid Y,V)}={\alpha}_{0}+{\alpha}_{1}Y+{\alpha}_{2}V+{\alpha}_{3}YV,$$

where (*i*, *j*) = (1, 0) or (0, 1) and (*α*_{0}, *α*_{1}, *α*_{2}) = (−0.5, −0.5, 0.5), and *α*_{3} = 0 corresponding to a correct missing data mechanism model and *α*_{3} = −1.5 corresponding to an incorrect missing data mechanism model in the data analysis.
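The data-generating scheme just described can be sketched as follows. The regression coefficients and the form of the dependence of *V* on (*X*_{1}, *X*_{2}) are not fully specified above, so the values used below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
beta = np.array([0.5, 1.0, -1.0])             # (beta0, beta1, beta2): assumed values
x1 = rng.binomial(1, 0.5, n).astype(float)    # binary covariate
x2 = rng.normal(size=n)                       # normal covariate
v = 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)  # auxiliary covariate; assumed dependence on (x1, x2)

# logistic outcome model E(Y | X1, X2) = g(beta0 + beta1 x1 + beta2 x2)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x1 + beta[2] * x2))))

# polytomous missingness over the patterns (R1, R2) in {(1,1), (1,0), (0,1)}:
# log P(R1=i, R2=j | Y, V) / P(R1=1, R2=1 | Y, V) = a0 + a1 Y + a2 V + a3 Y V
a0, a1, a2, a3 = -0.5, -0.5, 0.5, 0.0         # a3 = -1.5 gives the misspecified scenario
e = np.exp(a0 + a1 * y + a2 * v + a3 * y * v)
p11 = 1.0 / (1.0 + 2.0 * e)                   # both covariates observed
p10 = e * p11                                 # here p01 = p10, by the shared linear predictor
u = rng.random(n)
pattern = np.where(u < p11, 0, np.where(u < p11 + p10, 1, 2))  # 0:(1,1) 1:(1,0) 2:(0,1)
x1_obs = np.where(pattern == 2, np.nan, x1)   # pattern (0,1): x1 missing
x2_obs = np.where(pattern == 1, np.nan, x2)   # pattern (1,0): x2 missing
print(np.allclose(p11 + 2.0 * p10, 1.0))      # the three pattern probabilities sum to one
```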

The correct model for *Y* given *X*_{1} and *X*_{2} was always used in the analysis of the simulated data. The distributions of the covariates and the auxiliary covariate were assumed unknown in the analysis of the simulated data. The semiparametric odds ratio models with bilinear log-odds ratio functions were used for modeling the distributions of the covariates (*X*_{1}, *X*_{2}) and of the auxiliary covariate given the outcome and the covariates (*X*_{1}, *X*_{2}) in the analysis. The polytomous logit regression model with different sets of parameters for different missing patterns and without the interaction term was always assumed in the data analysis. This implies that the missing data mechanism model was misspecified in the analytical model if the model generating the missing data included the interaction term. To compare the performance of different methods, we computed the following estimators for the regression parameter. The first one was the estimator from the complete-case analysis (CC), which is the solution to the estimating equation,

$$\sum _{i=1}^{n}{1}_{\{{R}_{i}=\mathbf{1}\}}\frac{\partial}{\partial \beta}logf({Y}_{i}\mid {X}_{i},\beta )=0,$$

where **1** is a vector with 1 in all of its components. The second one was the simple missing-data-probability weighted estimator (SW), which is the solution to the estimating equation,

$$\sum _{i=1}^{n}\frac{{1}_{\{{R}_{i}=\mathbf{1}\}}}{{\pi}_{i}(\mathbf{1})}\frac{\partial}{\partial \beta}logf({Y}_{i}\mid {X}_{i},\beta )=0,$$

where *π _{i}*(*r*) denotes the estimated probability of missing data pattern *r* for subject *i* under the missing data mechanism model, so that *π _{i}*(**1**) is the estimated probability that subject *i* is a complete case. The third was the augmented weighted (AW) estimator, which is the solution to the estimating equation,

$$\begin{array}{l}\sum _{i=1}^{n}[\frac{{1}_{\{{R}_{i}=\mathbf{1}\}}}{{\pi}_{i}(\mathbf{1})}\frac{\partial}{\partial \beta}logf({Y}_{i}\mid {X}_{i},\beta )+\sum _{r}\left\{{1}_{\{{R}_{i}=r\}}-\frac{{1}_{\{{R}_{i}=\mathbf{1}\}}}{{\pi}_{i}(\mathbf{1})}{\pi}_{i}(r)\right\}\\ \times \widehat{E}\left\{\frac{\partial}{\partial \beta}logf({Y}_{i}\mid {X}_{i},\beta )|R=r,r({V}_{i},{X}_{i},{Y}_{i})\right\}]=0,\end{array}$$

where *Ê* was computed using the distribution estimated by the maximum likelihood estimator described next. The fourth was the maximum likelihood estimator (ML) with the bilinear odds ratio model and without interactions for the covariate distribution. The last two were the likelihood robustification estimators proposed in this paper, using the approximation with *N* = 10 (LR-10) and *N* = 20 (LR-20), respectively.
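The CC and SW estimators both solve a weighted logistic score equation and differ only in the weights (complete-case indicators versus $1/{\pi}_{i}(\mathbf{1})$). A minimal Newton-method sketch on fully observed toy data (an assumed setup, not the paper's simulation code):

```python
import numpy as np

def solve_weighted_logistic(X, y, w, n_iter=25):
    """Newton's method for the weighted score sum_i w_i {y_i - g(x_i'b)} x_i = 0."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ b))
        score = X.T @ (w * (y - mu))
        hess = (X * (w * mu * (1.0 - mu))[:, None]).T @ X  # weighted information matrix
        b += np.linalg.solve(hess, score)
    return b

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.3, 0.8, -0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

w = np.ones(n)        # CC weights on complete cases; SW would instead use 1 / pi_i(1)
b_hat = solve_weighted_logistic(X, y, w)
mu = 1.0 / (1.0 + np.exp(-X @ b_hat))
print(np.allclose(X.T @ (w * (y - mu)), 0.0, atol=1e-6))  # the score equation is solved
```

Replacing `w` by estimated inverse pattern probabilities for the complete cases (and zero otherwise) turns this into the SW estimating equation.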

The simulation results were based on 500 replicates of a sample size of 400. The missing proportions for *X*_{1} and for *X*_{2} were each approximately 25%, so the average number of complete cases was close to 200. Table 1 lists the simulation results for the binary outcome data. As expected, when all models are correct, the biases of all the estimators except the CC estimator are relatively small. Their efficiencies differ, however, with the ML estimator the most efficient and the SW estimator the least. When the covariate model is correct and the missing data mechanism model is incorrect, the CC and SW estimators can have substantial biases, while the biases of all the other estimators are small. When the missing data mechanism model is correct and the covariate model is incorrect, the SW estimator is unbiased, whereas the ML estimator is subject to a sizable bias; both the LR-10 and LR-20 estimators largely correct the bias in the ML estimator. When neither model is correct, all the estimators are biased, but the LR-10 and LR-20 estimators, along with the AW estimator, have much smaller biases than the CC, SW, and ML estimators. The variance estimates for the likelihood robustification estimators appear to work well. The AW estimator performs well in all the above cases, both in terms of bias and variation. This is partly because it has the doubly robust property in the narrow sense that the estimator is consistent when either the covariate models or the missing data mechanism model is correct, as long as both the missing data mechanism and its model depend only on the fully observed covariates and the outcome.

Simulation results for the logistic regression model with missing covariates and auxiliary information.

In the linear regression model, the variance of the residual error was set to 1. Variables were generated in the same way as in the logistic regression model, except that a normal error was used in generating *Y* and *g*(*t*) = *t*. To simplify the computation involved in the analysis of the simulated data, we included *V* as a third covariate in the linear regression model; however, *V* had no effect on *Y* conditional on *X*_{1} and *X*_{2}. The integral with respect to *y* in computing the expectations in the robustification procedure was approximated by a 10-point Gauss–Hermite quadrature. The LR-5 (*N* = 5) and LR-10 (*N* = 10) estimators were computed. Five hundred replicates of a sample of 200 were used in the simulation. The results are shown in Table 2. The behavior of the estimators is almost the same as that observed in the previous scenario for the logistic regression model. The difference between LR-5 and LR-10 is still relatively small, which indicates that the convergence rate of the likelihood robustification approximation is reasonably fast in the simulated cases.
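As a quick illustration of this kind of quadrature (illustrative only; the nodes and weights come from `numpy`), a 10-point Gauss–Hermite rule computes expectations under a normal law exactly for polynomial integrands of degree up to 19:

```python
import numpy as np

# 10-point Gauss-Hermite rule: integrates exp(-t^2) * poly(t) exactly for
# polynomials of degree up to 19. For Y ~ N(mu, sigma^2),
#   E h(Y) ≈ pi^{-1/2} * sum_k w_k h(mu + sqrt(2) * sigma * t_k).
nodes, weights = np.polynomial.hermite.hermgauss(10)

def gauss_hermite_expectation(h, mu, sigma):
    return float(weights @ h(mu + np.sqrt(2.0) * sigma * nodes) / np.sqrt(np.pi))

# Check against a moment known in closed form: E Y^2 = mu^2 + sigma^2 = 2.
val = gauss_hermite_expectation(lambda t: t ** 2, mu=1.0, sigma=1.0)
```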

In summary, the SW estimator is sensitive to misspecification of the missing data mechanism. The ML estimator can have sizable bias when the covariate models are severely misspecified. We have also simulated other scenarios (not shown) which suggest that the ML estimator with the semiparametric odds ratio model for the covariates is relatively robust against covariate model misspecification. The AW estimator is very robust, although it does not have the doubly robust property in general. The likelihood robustification estimators perform better than the AW estimator in all the cases. The estimators from LR-5, LR-10, and LR-20 are nearly indistinguishable, which suggests that an approximation using *N* = 10, or even *N* = 5, is good enough in the simulated cases. Other simulations not shown indicate that the number *N* needed for a good approximation depends on the amount of missing data: in general, the higher the percentage of missing data, the larger *N* is required to be. In practice, *N* can be determined empirically by comparing estimators computed with different numbers of terms in the approximation. In the computation, the covariates subject to missingness were rounded to the nearest 0.05 in the logistic regression, and to the nearest 0.1 in the linear regression. The effect of the rounding on the parameter estimates was nearly negligible, as indicated by the results (not shown) when finer rounding was used.

The hip fracture data were collected by Dr. Barengolts at the College of Medicine of the University of Illinois at Chicago in a study of hip fracture in veterans. The study matched each case and control by age and race. Risk factors for bone fracture were assessed. As in Chen (2004), we concentrated on 9 of the risk factors in the analysis. One of the challenging problems in analyzing this dataset is that most of the risk factors are subject to missing values and there is a large number of missing patterns (38 altogether). This dataset was analyzed in Chen (2004) by the likelihood method using the semiparametric odds ratio models proposed there for the covariates. Since the covariate models applied there are not guaranteed to be correctly specified, it is of interest to check whether any substantial bias is introduced into the parameter estimator by potential covariate model misspecification. This is assessed here by computing the doubly robust estimators of the parameter and comparing them with the maximum likelihood estimator.

There were a few obstacles to implementing the proposed method on this dataset. The primary problem was estimating the missing data probabilities. Since many missing patterns (26 out of 38) have fewer than 5 observations, it is virtually impossible to estimate missing data probabilities that depend on one or more variables. As a compromise, we assumed that the missingness did not depend on the observed or unobserved data, *i.e*., that data are missing completely at random (MCAR). Under this assumption, the simple missing data probability weighted estimator is the same as the estimator from the complete-case analysis. We computed the estimator from the complete-case analysis, the maximum likelihood estimator, the augmented weighted estimator, and the likelihood robustification estimators with *N* = 10 and *N* = 20, respectively. In computing these estimators, we rounded the three continuous variables, BMI, log(HGB), and Albumin, so that each had about 10 categories. This reduces the computation time and the storage space required; the effect of the rounding on the parameter estimators is small, as discussed in Chen (2004). All the parameter estimates except LR-20, which is the same as LR-10, are shown in Table 3.

The regression coefficients for LevoT4 and dementia estimated from the complete-case analysis are substantially different from those estimated by the other methods. The estimates from the maximum likelihood, the augmented weighted estimating equation, and the likelihood robustification approaches are very close, and the estimates from the latter two are even closer. This suggests that the covariate models used in the likelihood approach are reasonable, in the sense that they may be close to correctly specified, or, even if they are misspecified, the influence of the misspecification on the parameter estimates is very small.

We have shown that the Neumann series approximation can be used to find a locally efficient estimator in missing data problems under the assumption that all configurations of the full data can be observed with probability bounded away from zero. This helps to close a gap between the semiparametric efficiency theory for missing data problems and the implementation of a procedure for finding such an estimator. The results can easily be modified to apply to the study of the asymptotic behavior of the doubly robust estimators when missing data are nonmonotone. Note that the results do not cover the case where a continuous inverse of *m* does not exist. Similar results in the latter case are expected to be much harder to obtain.

The author thanks the editor for the detailed comments which have greatly improved the presentation of the paper. The author would also like to thank Professor James Robins for his insightful comments on the earlier versions of the paper. Comments from Drs. Y. Q. Chen and C. Y. Wang at FHCRC on the earlier versions of the paper are also very much appreciated. The research was supported by a grant from NCI/NIH on statistical methods for missing data.

Assume that the semiparametric model under consideration is *π*(*r|y, η*)*p*(*y, η*), where *p*(*y, η*) is the density of the full data *y* with respect to *μ*, a product of Lebesgue measures and counting measures. Assume that *η* = (*β, θ*) ∈ Ω × Θ, where *β* is the parameter of interest and *θ* is the nuisance parameter. The marginal distribution for the observed data (*R, R*(*Y*)) is *g*(*r, r*(*y*)*, η*) = ∫ *π* (*r|y, η*)*p*(*y, η*)*d*$\overline{r}$(*y*), where $\overline{r}$(*y*) denotes those components of *y* that are missing in the missing pattern *r*. Assume that *θ*(*γ*), *γ* ∈ Γ, is a restricted parameterization of *θ* such that *θ*(Γ) ⊂ Θ. Let *η*_{0} = (*β*_{0}*, θ*^{*}) and *P*_{0} be the true probability measure that generated the data. The following regularity conditions are used in the theorems.

1. For any *β* ∈ Ω and *γ* ∈ Γ, aside from a *μ*-zero set, {(*r, y*)*|g*(*r, r*(*y*)*, β, θ*(*γ*)) > 0} is the same for any fixed *r*. *π*(*r|y, η*) and *p*(*y, η*) are bounded away from zero and ∞ uniformly for all *y* and *η* ∈ Ω × *θ*(Γ) ⊂ Ω × Θ, if *π*(*r|y, η*) > 0 for a *μ*-nonzero set of *y*. The full data model *p*(*y, η*) is a convex family with respect to *θ*.
2. Ω is a compact subset of a Hilbert space. The true parameter value *β*_{0} is an inner point of Ω. Θ is a subset of another Hilbert space. As *n* → ∞, $\widehat{\gamma}$ → *γ*^{*} in the norm defined on Γ. *θ*(*γ*) is continuous.
3. As an *L*^{2}(*μ*) function of (*β, θ*) ∈ Ω × Θ, {*π*(*r|y, η*)*p*(*y, η*)}^{1/2} is Fréchet differentiable with respect to *η* ∈ Ω × Θ. The score operator, defined as 2{*π*(*r|y, η*)*p*(*y, η*)}^{−1/2} times the derivative, is denoted by (*A*_{1*η*}(*h*_{1})*, A*_{2*η*}(*h*_{2})) with *h*_{1} ∈ *H*_{1} and *h*_{2} ∈ *H*_{2}. Both *H*_{1} and *H*_{2} are Hilbert spaces.
4. ${A}_{2\eta}^{\ast}{A}_{2\eta}$ is continuously invertible at *η*_{0}, where ${A}_{2\eta}^{\ast}$, mapping *L*^{2}{*P*_{*η*}(*y*)} to *H*_{2}, is the adjoint operator of *A*_{2*η*}.
5. For *η* ∈ $\mathcal{E}$, ${({\pi}_{\eta}{p}_{\eta})}^{1/2}{A}_{1\eta}({h}_{1})$ and ${({\pi}_{\eta}{p}_{\eta})}^{1/2}{A}_{2\eta}({h}_{2})$ are continuous with respect to *η* in *L*^{2}(*μ*) and are Fréchet differentiable with respect to *β* in a neighborhood of *η*_{0} in *L*^{2}(*μ*), and ${A}_{2\eta}^{\ast}({({\pi}_{\eta}{p}_{\eta})}^{-1/2}s)$ is continuous with respect to *η* in the *H*_{2} norm for *s* ∈ *L*^{2}(*μ*) and is Fréchet differentiable with respect to *β* in a neighborhood of *η*_{0} in the *H*_{2} norm.
6. Suppose that *p*(*y, η*) = *dP*_{*η*}/*d*(*μ*_{1} ×···× *μ*_{*J*}), where *μ*_{*j*} is Lebesgue measure on *R*^{1} or a counting measure, *j* = 1, ···, *J*. Suppose that, for any missing pattern *r*, *π*(*r|y, η*)*p*(*y, η*), *A*_{1*η*}(*h*_{1}), *A*_{2*η*}(*h*_{2}), and ${A}_{2\eta}^{\ast}(s)$, and their derivatives with respect to *β* for *η* ∈ $\mathcal{E}$, are all continuous with respect to the *j*th argument of *y* if *μ*_{*j*} is Lebesgue measure, *j* = 1, ···, *J*, and *h*_{1}, *h*_{2}, and *s* are continuous with respect to *y*_{*j*}.
7. There exists a norm on $\mathcal{E}$, denoted by ||·||, such that$$\begin{array}{l}{\left|\right|\pi (r\mid y,{\eta}_{1})-\pi (r\mid y,{\eta}_{2})\left|\right|}_{{L}^{\infty}({P}_{{\eta}_{0}})}\le {C}_{1}\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|,\\ {\left|\right|p(y,{\eta}_{1})-p(y,{\eta}_{2})\left|\right|}_{{L}^{\infty}({P}_{{\eta}_{0}})}\le {C}_{2}\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|,\\ {\left|\right|{A}_{1{\eta}_{1}}({h}_{1})-{A}_{1{\eta}_{2}}({h}_{1})\left|\right|}_{{L}^{\infty}({P}_{{\eta}_{0}})}\le {C}_{3}\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|\phantom{\rule{0.16667em}{0ex}}{\left|\right|{h}_{1}\left|\right|}_{{H}_{1}},\\ {\left|\right|{A}_{2{\eta}_{1}}({h}_{2})-{A}_{2{\eta}_{2}}({h}_{2})\left|\right|}_{{L}^{\infty}({P}_{{\eta}_{0}})}\le {C}_{4}\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|\phantom{\rule{0.16667em}{0ex}}{\left|\right|{h}_{2}\left|\right|}_{{H}_{2}},\end{array}$$and$${\left|\right|{A}_{2{\eta}_{1}}^{\ast}(s)-{A}_{2{\eta}_{2}}^{\ast}(s)\left|\right|}_{{H}_{2}}\le {C}_{5}\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|\phantom{\rule{0.16667em}{0ex}}{\left|\right|s\left|\right|}_{{L}^{\infty}({P}_{{\eta}_{0}})}$$for any *η*_{1}*, η*_{2} ∈ $\mathcal{E}$ and some constants *C*_{*i*}, *i* = 1, ···, 5. The *ε*-covering number of $\mathcal{E}$ under ||·||, *N*($\mathcal{E}$*, ε,* ||·||), satisfies$${\int}_{0}^{\infty}\sqrt{logN(\mathcal{E},\epsilon /\mid log\epsilon \mid ,||\cdot ||)}d\epsilon <\infty .$$
8. There exists an *H*_{10} ⊂ *H*_{1} such that, for any fixed continuously invertible map $\mathcal{Q}$ from *H*_{1} to itself, <$\mathcal{Q}$(*h*_{1})*, β*>_{*H*_{1}} = 0 for all *h*_{1} ∈ *H*_{10} implies *β* = 0. The covering number of *H*_{10} under the supremum norm, *N*(*H*_{10}*, ε*, ||·||_{∞}), satisfies$${\int}_{0}^{\infty}\sqrt{logN({H}_{10},\epsilon ,{\left|\right|\cdot \left|\right|}_{\infty})}d\epsilon <\infty .$$
9. $$\underset{{\left|\right|{h}_{1}\left|\right|}_{{H}_{1}}=1,{\left|\right|{h}_{1}^{\ast}\left|\right|}_{{H}_{1}}=1}{inf}\left|{E}_{0}\left\{\frac{\partial T({\eta}_{0})}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\right\}\right|>0.$$
10. $$\underset{{\left|\right|{h}_{1}\left|\right|}_{{H}_{1}}=1,{\left|\right|{h}_{1}^{\ast}\left|\right|}_{{H}_{1}}=1}{inf}\left|{E}_{0}\left\{\frac{\partial {T}_{N}({\beta}_{N},{\theta}^{\ast})}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\right\}\right|>0.$$
11. There exists *U*_{*i*}, *i* = 1, ···, *n*, iid with *E*_{0}*U*^{2} finite such that$$\theta (\widehat{\gamma})-{\theta}^{\ast}=\frac{1}{n}\sum _{i=1}^{n}{U}_{i}+{o}_{p}\left(\frac{1}{\sqrt{n}}\right),$$where *θ*^{*} = *θ*(*γ*^{*}).
12. *A*_{1*η*}(*h*_{1}) and *A*_{2*η*}(*h*_{2}) are Fréchet differentiable with respect to *θ* along the path *θ*(*γ*), *γ* ∈ Γ, in a neighborhood of *η*_{0} in *L*^{2}(*P*_{*η*_{0}}) and ${A}_{2\eta}^{\ast}(s)$ is Fréchet differentiable with respect to *θ* along the path *θ*(*γ*), *γ* ∈ Γ, in a neighborhood of *η*_{0} in *H*_{2}.

Before we prove Theorems 1–3, we first establish a set of lemmas for the proofs of the theorems. These lemmas are mostly for showing that *T* and *T _{N}* are differentiable and that
$\mathcal{F}={\cup}_{N=0}^{\infty}\{{T}_{N}(\eta )({h}_{1})\mid \eta \in \mathcal{E},{h}_{1}\in {H}_{10}\}\cup \{T(\eta )({h}_{1})\mid \eta \in \mathcal{E},{h}_{1}\in {H}_{10}\}$ is a *P*_{0}-Donsker class.

- (a) Under assumptions 1–5, ${g}_{\eta}^{1/2}{T}_{N}(\eta )({h}_{1})$, for any N, is Fréchet differentiable with respect to β in L^{2}(μ) in a neighborhood of η_{0} in $\mathcal{E}$, and both ${g}_{\eta}^{1/2}{T}_{N}(\eta )({h}_{1})$, for any N, and the derivatives are continuous at η_{0} in L^{2}(μ). If we define the derivative of T_{N} with respect to β as$$\frac{\partial {T}_{N}(\eta )}{\partial \beta}({h}_{1})({h}_{1}^{\ast})={g}_{\eta}^{-1/2}\left\{\frac{\partial \{{g}_{\eta}^{1/2}{T}_{N}(\eta )({h}_{1})\}}{\partial \beta}({h}_{1}^{\ast})-\frac{1}{2}{g}_{\eta}^{1/2}{B}_{1\eta}({h}_{1}^{\ast}){T}_{N}(\eta )({h}_{1})\right\},$$where the first term on the right-hand side of the equation refers to the derivative of ${g}_{\eta}^{1/2}{T}_{N}(\eta )({h}_{1})$ with respect to β, then$${\Vert {\epsilon}^{-1}\{{T}_{N}(\eta +\epsilon {h}_{1}^{\ast})({h}_{1})-{T}_{N}(\eta )({h}_{1})\}-\frac{\partial {T}_{N}(\eta )}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\Vert}_{{L}^{2}({P}_{\eta})}\to 0,$$as ε → 0.
- (b) Under assumptions 1–5 and 12, ${g}_{\eta}^{1/2}{T}_{N}(\eta )({h}_{1})$, for any N, is Fréchet differentiable with respect to θ along the path θ(γ), γ ∈ Γ, in L^{2}(μ) in a neighborhood of η_{0}, and the derivatives are continuous at η_{0} in L^{2}(μ). The derivative of T_{N} with respect to θ is defined similarly.

Under assumptions 1–5, T(η)(h_{1}) is Fréchet differentiable with respect to β in L^{2}(P_{η0}) in a neighborhood of η_{0} and both T(η)(h_{1}) and the derivative are continuous at η_{0} in L^{2}(P_{η0}). If we define the derivative of T with respect to β as

$$\frac{\partial T(\eta )}{\partial \beta}({h}_{1})({h}_{1}^{\ast})={g}_{\eta}^{-1/2}\left\{\frac{\partial \{{g}_{\eta}^{1/2}T(\eta )({h}_{1})\}}{\partial \beta}({h}_{1}^{\ast})-\frac{1}{2}{g}_{\eta}^{1/2}{B}_{1\eta}({h}_{1}^{\ast})T(\eta )({h}_{1})\right\},$$

where the first term on the right-hand side of the equation refers to the derivative of ${g}_{\eta}^{1/2}T(\eta )({h}_{1})$ with respect to β, then

$${\Vert {\epsilon}^{-1}\{T(\eta +\epsilon {h}_{1}^{\ast})({h}_{1})-T(\eta )({h}_{1})\}-\frac{\partial T(\eta )}{\partial \beta}({h}_{1})({h}_{1}^{\ast})\Vert}_{{L}^{2}({P}_{\eta})}\to 0,$$

as ε → 0.

Under assumptions 1–3, for any p > 0, N > 0, and s, a function of Y in L^{∞}(P_{η}),

$${\left|\right|{D}_{\eta}^{N+p}s-{D}_{\eta}^{N}s\left|\right|}_{{L}^{\infty}({P}_{\eta})}\le K{N}^{c(Y)}{(1-\sigma )}^{N}{\left|\right|s\left|\right|}_{{L}^{\infty}({P}_{\eta})},$$

where c(Y) denotes the cardinality of Y and K is a constant independent of N.
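The bound above is the usual geometric convergence of successive approximations under a contraction. A finite-dimensional analogue (purely illustrative: D is a random matrix rescaled to spectral norm 1 − σ) exhibits the same (1 − σ)^N decay:

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = 0.4
A = rng.normal(size=(5, 5))
D = (1.0 - sigma) * A / np.linalg.norm(A, 2)   # spectral norm exactly 1 - sigma
s = rng.normal(size=5)

# Successive approximation x_{N+1} = s + D x_N converges to (I - D)^{-1} s,
# i.e. to the Neumann series sum_{k>=0} D^k s, with error O((1 - sigma)^N).
target = np.linalg.solve(np.eye(5) - D, s)
x = np.zeros(5)
errors = []
for _ in range(30):
    x = s + D @ x
    errors.append(np.linalg.norm(x - target))
```

Here `errors[k]` equals ‖D^{k+1}(x₀ − x_∞)‖, so it is bounded by (1 − σ)^{k} times `errors[0]`, which is why a moderate number of terms (N = 10 or 20 in the numeric study) already suffices.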

Suppose that there exists a measurable set $\mathcal{N}$ with P_{0}($\mathcal{N}$) = 0 such that, for all
$t(Y)\in {\cup}_{N=0}^{\infty}\{{D}_{\eta}^{N}s\mid s\in S,\eta \in \mathcal{E}\}\cup \{{lim}_{N\to \infty}{D}_{\eta}^{N}s\phantom{\rule{0.16667em}{0ex}}in\phantom{\rule{0.16667em}{0ex}}{L}^{\infty}({P}_{\eta})\mid s\in S,\eta \in \mathcal{E}\}$,

$${\left|\right|t(Y)\left|\right|}_{{L}^{\infty}({P}_{\eta})}\ge b\underset{y\in {\mathcal{N}}^{c}}{sup}\mid t(y)\mid ,$$

(2)

where S is a set of functions of y and 0 < b ≤ 1 is a constant. Then,

$${\cup}_{N=0}^{\infty}\{{D}_{\eta}^{N}s\mid s\in S,\eta \in \mathcal{E}\}\cup \{\underset{N\to \infty}{lim}{D}_{\eta}^{N}s\phantom{\rule{0.16667em}{0ex}}in\phantom{\rule{0.16667em}{0ex}}{L}^{\infty}({P}_{\eta})\mid s\in S,\eta \in \mathcal{E}\}$$

is a P_{0}-Donsker class if

- S is P_{0}-measurable and L^{∞}(P_{0}) bounded, satisfying$${\int}_{0}^{\infty}\sqrt{logN(S,\epsilon ,{L}^{\infty}({P}_{\eta}))}d\epsilon <\infty ,$$(3)for a fixed η ∈ $\mathcal{E}$.
- For any η_{1}, η_{2} ∈ $\mathcal{E}$, s ∈ S, and a fixed η ∈ $\mathcal{E}$, there exists a C(η) < ∞ such that$$\mid {D}_{{\eta}_{1}}s-{D}_{{\eta}_{2}}s\mid \le C(\eta )\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|\phantom{\rule{0.16667em}{0ex}}{\left|\right|s\left|\right|}_{{L}^{\infty}({P}_{\eta})},$$and $\mathcal{E}$ has covering numbers under ||·|| satisfying$${\int}_{0}^{\infty}\sqrt{logN(\mathcal{E},\epsilon /\mid log\epsilon \mid ,||\cdot ||)}d\epsilon <\infty .$$(4)

Under assumption 6, if s(y, η) is continuous with respect to argument j when μ_{j} is Lebesgue measure, then there exists a measurable set $\mathcal{N}$ with P_{η}($\mathcal{N}$) = 0 such that

$${\left|\right|t(Y,\eta )\left|\right|}_{{L}^{\infty}({P}_{\eta})}=\underset{y\in {\mathcal{N}}^{c}}{sup}\mid t(y,\eta )\mid ,$$

for any

$$t\in {\cup}_{N=0}^{\infty}\{{D}_{\eta}^{N}s\mid s\in S,\eta \in \mathcal{E}\}\cup \{\underset{N\to \infty}{lim}{D}_{\eta}^{N}s\mid s\in S,\eta \in \mathcal{E}\},$$

where the limit is in the sense of the L^{∞}(P_{η}) norm.

Under assumptions 1–7, we have

$${\left|\right|{D}_{{\eta}_{1}}s-{D}_{{\eta}_{2}}s\left|\right|}_{{L}^{\infty}({P}_{\eta})}\le {C}_{1}(\eta )\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|\phantom{\rule{0.16667em}{0ex}}{\left|\right|s\left|\right|}_{{L}^{\infty}({P}_{\eta})},$$

and

$${\left|\right|{E}_{{\eta}_{1}}\{{D}_{{\eta}_{1}}s\mid R,R(y)\}-{E}_{{\eta}_{2}}\{{D}_{{\eta}_{2}}s\mid R,R(y)\}\left|\right|}_{{L}^{\infty}({P}_{\eta})}\le {C}_{2}(\eta )\left|\right|{\eta}_{1}-{\eta}_{2}\left|\right|\phantom{\rule{0.16667em}{0ex}}{\left|\right|s\left|\right|}_{{L}^{\infty}({P}_{\eta})},$$

for some constants C_{1}(η), C_{2}(η) < ∞.

Under assumptions 1–8,

$$\mathcal{F}={\cup}_{N=0}^{\infty}\{{T}_{N}(\eta )({h}_{1})\mid \eta \in \mathcal{E},{h}_{1}\in {H}_{10}\}\cup \{T(\eta )({h}_{1})\mid \eta \in \mathcal{E},{h}_{1}\in {H}_{10}\}$$

is a P_{0}-Donsker class.

Let $\stackrel{\sim}{\eta}=(\stackrel{\sim}{\beta},\widehat{\theta})$. By definition, ${P}_{n}T(\stackrel{\sim}{\eta})({h}_{1})=0$ for all *h*_{1} ∈ *H*_{10}. Let ${\beta}_{0}^{\ast}$ denote the limit of a convergent subsequence of $\stackrel{\sim}{\beta}$ and write ${\eta}_{0}^{\ast}=({\beta}_{0}^{\ast},{\theta}^{\ast})$. By the mean value theorem,

$${P}_{0}\{T({\beta}_{0}^{\ast},{\theta}^{\ast})({h}_{1})-T({\beta}_{0},{\theta}^{\ast})({h}_{1})\}={P}_{0}\left[\frac{\partial T}{\partial \beta}\{{\eta}_{0}+\lambda ({\eta}_{0}^{\ast}-{\eta}_{0})\}({h}_{1})({\beta}_{0}^{\ast}-{\beta}_{0})\right]$$

for some 0 ≤ *λ* ≤ 1 and all *h*_{1} ∈ *H*_{10}. From assumptions 8 and 9, we can conclude that
${\beta}_{0}^{\ast}={\beta}_{0}$ locally. Since $\widehat{\gamma}$ converges (assumption 2) and $\stackrel{\sim}{\beta}$ varies in a compact set, which implies each convergent subsequence converges to the same limit, $\stackrel{\sim}{\beta}$ locally converges to *β*_{0} almost surely.

Since *E*_{0}{*T*($\stackrel{\sim}{\eta}$)(*h*_{1}) − *T*(*η*_{0})(*h*_{1})}^{2} → 0 uniformly for *h*_{1} ∈ *H*_{10} and {*T*(*η*)(*h*_{1})*|η* ∈ $\mathcal{E}$*, h*_{1} ∈ *H*_{10}} is a *P*_{0}-Donsker class with bounded envelope function, it follows (van der Vaart & Wellner, 1996, Lemma 3.3.5 on page 311) that
$\sqrt{n}({P}_{n}-{P}_{0})\phantom{\rule{0.16667em}{0ex}}\{T(\stackrel{\sim}{\eta})({h}_{1})-T({\eta}_{0})({h}_{1})\}={o}_{P}(1)$, which implies that
$-\sqrt{n}{P}_{0}T(\stackrel{\sim}{\eta})({h}_{1})=\sqrt{n}({P}_{n}-{P}_{0})T({\eta}_{0})({h}_{1})+{o}_{{P}_{0}}(1)$. Note that

$$\sqrt{n}{P}_{0}\left\{T(\stackrel{\sim}{\eta})({h}_{1})-T({\beta}_{0},\widehat{\theta})({h}_{1})\right\}={P}_{0}\left\{\frac{\partial T}{\partial \beta}({\beta}_{0},{\theta}^{\ast})({h}_{1})\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0})\right\}+{o}_{{P}_{0}}(||\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0})||),$$

and that *P*_{0}*T*(*β*_{0}, $\widehat{\theta}$)(*h*_{1}) = *P*_{0}*T*(*β*_{0}*, θ*^{*})(*h*_{1}) = 0. It now follows that

$$-{P}_{0}\left[\frac{\partial T}{\partial \beta}({\eta}_{0})({h}_{1})\left\{\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0})\right\}\right]=\sqrt{n}({P}_{n}-{P}_{0})T({\eta}_{0})({h}_{1})+{o}_{{P}_{0}}(1+||\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0})||).$$

By replacing *h*_{1} in the foregoing equation by
${\mathcal{Q}}_{0}^{-1}{h}_{1}$, it follows that

$$<\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0}),{h}_{1}{>}_{{H}_{1}}=\sqrt{n}{P}_{n}T({\eta}_{0})\left\{-{\mathcal{Q}}_{0}^{-1}({h}_{1})\right\}+{o}_{{P}_{0}}(1+||\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0})||),$$

which implies that
$<\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0}),{h}_{1}{>}_{{H}_{1}}={O}_{{P}_{0}}(1)$ and that
$<\sqrt{n}(\stackrel{\sim}{\beta}-{\beta}_{0}),{h}_{1}{>}_{{H}_{1}}\to N(0,V({h}_{1}))$ uniformly on *h*_{1} ∈ *H*_{10}, where
$V({h}_{1})={E}_{0}{\left[T({\eta}_{0})\left\{-{\mathcal{Q}}_{0}^{-1}({h}_{1})\right\}\right]}^{2}$.

To prove the consistency of the variance estimate, let
${\stackrel{\sim}{\eta}}_{h}=(\widehat{\beta}+{\scriptstyle \frac{h}{\sqrt{n}}},\widehat{\theta})$ for a fixed *h* ∈ *H*_{1}. That {*T*(${\stackrel{\sim}{\eta}}_{h}$)(*h*_{1}) | *h*_{1} ∈ *H*_{10}} falls in a *P*_{0}-Donsker class implies that, apart from an *o*_{*P*_{0}}(1) term,

$$\begin{array}{l}\sqrt{n}{P}_{n}\phantom{\rule{0.16667em}{0ex}}\{T({\stackrel{\sim}{\eta}}_{h})-T(\stackrel{\sim}{\eta})\}({h}_{1})=\sqrt{n}{P}_{0}\{T({\stackrel{\sim}{\eta}}_{h})-T(\stackrel{\sim}{\eta})\}({h}_{1})\\ ={P}_{0}\left\{\frac{\partial T({\eta}_{0})}{\partial \beta}(h)({h}_{1})\right\}=<{\mathcal{Q}}_{0}h,{h}_{1}{>}_{{H}_{1}}.\end{array}$$

Since {*T*^{2}($\stackrel{\sim}{\eta}$)(*h*_{1})*|h*_{1} ∈ *H*_{10}} is a Glivenko-Cantelli class, ${P}_{n}{\{T(\stackrel{\sim}{\eta})({h}_{1})\}}^{2}={P}_{0}{\{T({\eta}_{0})({h}_{1})\}}^{2}+{o}_{{P}_{0}}(1)$ uniformly in *h*_{1} ∈ *H*_{10}, and the consistency of the variance estimate follows.

When both the missing data mechanism model and the nuisance model for the full data are correctly specified, *θ*^{*} = *θ*_{0}. That is, ${P}_{{\eta}_{0}}={P}_{0}$. In this case,

$$-{E}_{0}\left\{\frac{\partial T}{\partial \beta}({\eta}_{0})(h)({h}_{1})\right\}={E}_{0}\{T({\beta}_{0},{\theta}_{0})({h}_{1}){A}_{1({\beta}_{0},{\theta}_{0})}(h)\}={E}_{0}\{T({\beta}_{0},{\theta}_{0})({h}_{1})T({\beta}_{0},{\theta}_{0})(h)\}.$$

This implies that $\stackrel{\sim}{\beta}$ is asymptotically efficient.

For any convergent point of ${\stackrel{\sim}{\beta}}_{N}$, denoted by *β*_{*N*}, note that

$${P}_{0}\left\{{T}_{N}({\beta}_{N},\widehat{\theta})({h}_{1})-{T}_{N}({\beta}_{0},\widehat{\theta})({h}_{1})\right\}={P}_{0}\left\{\frac{\partial {T}_{N}}{\partial \beta}({\beta}_{0},{\theta}^{\ast})({h}_{1})({\beta}_{N}-{\beta}_{0})\right\}+{o}_{{P}_{0}}({\left|\right|{\beta}_{N}-{\beta}_{0}\left|\right|}_{{H}_{1}}).$$

It follows from the definition of ${\mathcal{Q}}_{0N}$ that, for all *h*_{1} ∈ *H*_{10},

$$<{\beta}_{N}-{\beta}_{0},{h}_{1}{>}_{{H}_{1}}={P}_{0}\left\{{T}_{N}({\beta}_{N},\widehat{\theta})(-{\mathcal{Q}}_{0N}^{-1}{h}_{1})-{T}_{N}({\beta}_{0},\widehat{\theta})(-{\mathcal{Q}}_{0N}^{-1}{h}_{1})\right\}+{o}_{{P}_{0}}({\left|\right|{\beta}_{N}-{\beta}_{0}\left|\right|}_{{H}_{1}}).$$

Next, note that
$\sqrt{n}({P}_{n}-{P}_{0})\phantom{\rule{0.16667em}{0ex}}\left\{{T}_{N}({\stackrel{\sim}{\beta}}_{N},\widehat{\theta})({h}_{1})-{T}_{N}({\beta}_{N},\widehat{\theta})({h}_{1})\right\}={o}_{{P}_{0}}(1)$ uniformly over *h*_{1} ∈ *H*_{10}. It follows that, for all *h*_{1} ∈ *H*_{10},

$$\begin{array}{l}-\sqrt{n}{P}_{n}{T}_{N}({\beta}_{N},\widehat{\theta})({h}_{1})=\sqrt{n}{P}_{0}\left\{{T}_{N}({\stackrel{\sim}{\beta}}_{N},\widehat{\theta})({h}_{1})-{T}_{N}({\beta}_{N},\widehat{\theta})({h}_{1})\right\}+{o}_{{P}_{0}}(1)\\ ={P}_{0}\left\{\frac{\partial T}{\partial \beta}({\beta}_{N},{\theta}^{\ast})\sqrt{n}({\stackrel{\sim}{\beta}}_{N}-{\beta}_{N})({h}_{1})\right\}+{o}_{{P}_{0}}(\sqrt{n}{\left|\right|{\stackrel{\sim}{\beta}}_{N}-{\beta}_{N}\left|\right|}_{{H}_{1}}+1).\end{array}$$

From assumption 10 and the definition of ${\mathcal{Q}}_{N}$, it follows that

$$\begin{array}{l}<\sqrt{n}({\stackrel{\sim}{\beta}}_{N}-{\beta}_{N}),{h}_{1}{>}_{{H}_{1}}=\sqrt{n}{P}_{0}\{{T}_{N}({\beta}_{N},\widehat{\theta})(-{\mathcal{Q}}_{N}^{-1}{h}_{1})-{T}_{N}({\beta}_{N},{\theta}^{\ast})(-{\mathcal{Q}}_{N}^{-1}{h}_{1})\}\\ +\sqrt{n}({P}_{n}-{P}_{0}){T}_{N}({\beta}_{N},{\theta}^{\ast})(-{\mathcal{Q}}_{N}^{-1}{h}_{1})+{o}_{{P}_{0}}({\left|\right|\sqrt{n}({\stackrel{\sim}{\beta}}_{N}-{\beta}_{0}))||}_{{H}_{1}}+1).\end{array}$$

Note that, apart from an *o*_{*P*_{0}}(1) term,

$$\sqrt{n}{P}_{0}\left\{{T}_{N}({\beta}_{N},\widehat{\theta})({h}_{1})-{T}_{N}({\beta}_{N},{\theta}^{\ast})({h}_{1})\right\}=\sum _{i=1}^{n}{P}_{0}\left\{{T}_{N}({\beta}_{N},{\theta}^{\ast}+\frac{{U}_{i}}{\sqrt{n}})({h}_{1})-{T}_{N}({\beta}_{N},{\theta}^{\ast})({h}_{1})\right\}$$

because *T _{N}* is Fréchet differentiable at (*β*_{*N*}*, θ*^{*}) with respect to *θ* along the path *θ*(*γ*) (assumption 12). Combining the foregoing results, $<\sqrt{n}({\stackrel{\sim}{\beta}}_{N}-{\beta}_{N}),{h}_{1}{>}_{{H}_{1}}$ converges in distribution to a normal with mean zero and variance *V*_{*N*}(*h*_{1}), where

$${V}_{N}({h}_{1})={E}_{0}{\left[{T}_{N}({\beta}_{N},{\theta}^{\ast})(-{\mathcal{Q}}_{N}^{-1}{h}_{1})+{P}_{0}{\left\{\frac{\partial {T}_{N}({\beta}_{N},{\theta}^{\ast})}{\partial \theta}(u)(-{\mathcal{Q}}_{N}^{-1}{h}_{1})\right\}}_{u=U}\right]}^{2}.$$

The consistency of the variance estimate can be shown by arguments virtually identical to those given for the previous theorem.

Let ${\widehat{\beta}}_{{N}_{n}}$ denote the solution to the estimating equation ${P}_{n}{T}_{{N}_{n}}(\beta ,\widehat{\theta})({h}_{1})=0$, *h*_{1} ∈ *H*_{10}. Then

$$\sqrt{n}{P}_{n}\{T({\beta}_{{N}_{n}},\widehat{\theta})({h}_{1})\}=\sqrt{n}{P}_{0}\left\{{T}_{{N}_{n}}({\beta}_{{N}_{n}},\widehat{\theta})({h}_{1})-T({\beta}_{{N}_{n}},\widehat{\theta})({h}_{1})\right\}+{o}_{{P}_{0}}(1)$$

uniformly for *h*_{1} ∈ *H*_{10}. Since
$\mid \sqrt{n}{P}_{0}\left\{{T}_{{N}_{n}}({\beta}_{{N}_{n}},\widehat{\theta})({h}_{1})-T({\beta}_{{N}_{n}},\widehat{\theta})({h}_{1})\right\}\mid \le K\sqrt{n}{(1-\sigma )}^{{N}_{n}}$, and log(*n*)*/N _{n}* → 0 implies
$K\sqrt{n}{(1-\sigma )}^{{N}_{n}}\to 0$ as *n* → ∞, the estimator ${\widehat{\beta}}_{{N}_{n}}$ asymptotically solves the estimating equation based on *T*, and the conclusion follows from the proof of Theorem 1.
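A one-line check of the rate condition (simply rewriting the bound above in exponential form):

```latex
\sqrt{n}\,(1-\sigma)^{N_n}
  = \exp\!\left\{ \frac{\log n}{2} - N_n \bigl|\log(1-\sigma)\bigr| \right\}
  = \exp\!\left[ -N_n \left\{ \bigl|\log(1-\sigma)\bigr|
      - \frac{\log n}{2N_n} \right\} \right] \longrightarrow 0,
```

since log(*n*)*/N _{n}* → 0 forces the factor in braces to stay above |log(1 − *σ*)|/2 eventually, while *N _{n}* → ∞.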

For the variance estimate, let
${\eta}_{{N}_{n}h}=({\widehat{\beta}}_{{N}_{n}}+{\scriptstyle \frac{h}{\sqrt{n}}},\widehat{\theta})$ for a fixed *h* ∈ *H*_{1}. Again, from the Donsker class result and the uniform convergence of *T _{N}* to *T*, since

$$\mid \sqrt{n}{P}_{0}\{{T}_{{N}_{n}}({\eta}_{{N}_{n}h})({h}_{1})-T({\eta}_{{N}_{n}h})({h}_{1})\}\mid \le \sqrt{n}K{(1-\sigma )}^{{N}_{n}}\to 0$$

and
$\mid \sqrt{n}{P}_{0}\{{T}_{{N}_{n}}({\eta}_{{N}_{n}})({h}_{1})-T({\eta}_{{N}_{n}})({h}_{1})\}\mid \le \sqrt{n}K{(1-\sigma )}^{{N}_{n}}\to 0$ as *n* → ∞ when log(*n*)/*N _{n}* → 0, it follows that

$$\sqrt{n}{P}_{n}\{{T}_{{N}_{n}}({\eta}_{{N}_{n}h})({h}_{1})-{T}_{{N}_{n}}({\eta}_{{N}_{n}})({h}_{1})\}=\sqrt{n}{P}_{0}\{T({\eta}_{{N}_{n}h})({h}_{1})-T({\eta}_{{N}_{n}})({h}_{1})\}+{o}_{{P}_{0}}(1).$$

Note that { ${T}_{{N}_{n}}^{2}({\eta}_{{N}_{n}})({h}_{1})\mid \forall n,{h}_{1}\in {H}_{10}$} is a Glivenko-Cantelli class. It follows that

$${P}_{n}{\{{T}_{{N}_{n}}({\eta}_{{N}_{n}})({h}_{1})\}}^{2}={P}_{0}{\{T({\eta}_{0})({h}_{1})\}}^{2}+{o}_{{P}_{0}}(1),$$

uniformly in *h*_{1} ∈ *H*_{10}. These two results, combined with the proof of consistency of the variance estimate in Theorem 1, imply the consistency of the variance estimate.

- Begun JM, Hall WJ, Huang WM, Wellner J. Information and asymptotic efficiency in parametric-nonparametric models. Ann Statist. 1983;11:432–452.
- Bickel P, Klaassen C, Ritov Y, Wellner J. Efficient and Adaptive Estimation for Semiparametric Models. Baltimore: Johns Hopkins University Press; 1993.
- Chen HY. Nonparametric and semiparametric models for missing covariates in parametric regression. J Amer Statist Assoc. 2004;99:1176–1189.
- Huang Y. Calibration regression of censored lifetime medical cost. J Amer Statist Assoc. 2002;97:318–327.
- Lipsitz SR, Ibrahim JG, Zhao LP. A weighted estimating equation for missing covariate data with properties similar to maximum likelihood. J Amer Statist Assoc. 1999;94:1147–1160.
- Little RJA, Rubin DB. Statistical Analysis with Missing Data. 2. New York: John Wiley; 2002.
- Nan B, Emond MJ, Wellner JA. Information bounds for Cox regression models with missing data. Ann Statist. 2004;32:723–735.
- Robins JM, Hsieh FS, Newey W. Semiparametric efficient estimation of a conditional density with missing or mismeasured covariates. J Roy Statist Soc, Ser B. 1995;57:409–424.
- Robins JM, Rotnitzky A. Comments on “Inference for semiparametric models: some questions and an answer” by Bickel PJ and Kwon J. Statist Sinica. 2001;11:920–936.
- Robins JM, Rotnitzky A, van der Laan MJ. Discussion of “On profile likelihood” by Murphy SA and van der Vaart AW. J Amer Statist Assoc. 1999;94:477–482.
- Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. J Amer Statist Assoc. 1994;89:846–866.
- Robins JM, Rotnitzky A, Van der Laan M. Comment on “On profile likelihood” by Murphy and Van der Vaart. J Amer Statist Assoc. 2000;95:477–485.
- Robins JM, Wang N. Discussion on the papers by Forster and Smith and Clayton et al. J Roy Statist Soc, Ser B. 1998;60:91–93.
- Rotnitzky A, Robins JM. Analysis of semiparametric models with nonignorable nonresponses. Stat Med. 1997;16:81–102.
- Rubin DB. Inference and missing data. Biometrika. 1976;63:581–92.
- Sasieni P. Information bounds for the conditional hazard ratio in a nested family of regression models. J Roy Statist Soc, Ser B. 1992;54:617–635.
- Scharfstein DO, Rotnitzky A, Robins JM. Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion) J Amer Statist Assoc. 1999;94:1096–1120.
- Van der Laan MJ, Robins JM. Unified methods for censored longitudinal data and causality. New York: Springer; 2003.
- Van der Vaart AW, Wellner JA. Weak Convergence and Empirical Processes With Application to Statistics. New York: Springer; 1996.
- Yu M, Nan B. Semiparametric regression models with missing data: the mathematical review and a new application. Statist Sinica. 2006;16:1193–1212.
