Scand Stat Theory Appl. 2009 December; 36(4): 713–734. doi: 10.1111/j.1467-9469.2009.00646.x
Author manuscript; available in PMC 2010 June 1.

Estimation and inference based on Neumann series approximation to locally efficient score in missing data problems

Abstract

Theory on semiparametric efficient estimation in missing data problems has been systematically developed by Robins and his coauthors. Except in relatively simple problems, semiparametric efficient scores cannot be expressed in closed form. Instead, the efficient scores are often expressed as solutions to integral equations. A Neumann series, in the form of a successive approximation, was proposed to approximate the efficient scores in such situations. The statistical properties of estimators based on the Neumann series approximation are difficult to obtain and, as a result, have not been studied clearly. In this paper, we reformulate the successive approximation in a simple iterative form and study the statistical properties of the estimator based on the reformulation. We show that a doubly robust, locally efficient estimator can be obtained by following the algorithm in robustifying the likelihood score. The results can be applied to, among others, parametric regression, marginal regression, and Cox regression when data are subject to missing values and the missing data are missing at random. A simulation study is conducted to evaluate the performance of the approach, and a real data example is analyzed to demonstrate its use.

Keywords: auxiliary covariates, information operator, non-monotone missing pattern, weighted estimating equations

1 Introduction

The semiparametric efficient estimation for missing data problems has been extensively studied (Bickel et al., 1993; Robins et al., 1994, and others). One major task in such problems is to project the estimating score onto the orthogonal complement of the nuisance score space. However, the projection often depends on the unknown underlying distribution that generated the data (Robins et al., 1994, 1995; Rotnitzky & Robins, 1995; Rotnitzky et al., 1998; Scharfstein et al., 1999). To overcome this difficulty, working models have been proposed for computing a locally efficient score. It has been shown that, when data are missing at random and either the working model for the missing data mechanism or the working model for the nuisance part of the full data distribution is correct, the locally efficient score is asymptotically unbiased (Lipsitz et al., 1999; Robins et al., 1999; Robins & Rotnitzky, 2001; van der Laan & Robins, 2003). Note that, except in simple cases, computing the projection under working models still amounts to the hard problem of solving an integral equation. The Neumann series expansion has been proposed to obtain an approximate solution through successive approximation (Robins et al., 1994; Robins & Wang, 1998; van der Laan & Robins, 2003). Since the procedure for finding the locally efficient estimator based on the approximate locally efficient score is complicated, the study of the asymptotic properties of the estimator has been left open. In this article, we reformulate the successive approximation and show that an algorithm based on robustifying the likelihood score yields an estimator with the desired asymptotic properties, i.e., one that is doubly robust and locally efficient.

The remainder of the article is organized as follows. In section 2, we reformulate the successive approximation in a simple form and show the robustness property of the algorithm. The asymptotic properties of the estimator are studied in section 3. We show that the algorithm indeed yields a doubly robust, locally efficient estimator when appropriate care is taken with the number of terms used in the Neumann series approximation. Applications of the theory to regression models are briefly discussed in section 4. In section 5, a simulation study using parametric regression with missing covariates examines the finite-sample performance of the algorithm, and the algorithm is applied to a real data example. The article concludes with a discussion in section 6. All proofs are collected in the Appendix.

2 The Neumann series approximation to the efficient score

Let Y be the full data, R be the missing data indicator for Y, and R(Y) and R̄(Y) be, respectively, the observed and missing parts of Y. Let the density of the distribution for (R, Y) with respect to μ, a product of counting measures and Lebesgue measures, be dP(α,β,θ)/dμ = π(R|Y, α)f(Y, β, θ), where β ∈ Ω, θ ∈ Θ, α ∈ Ξ. Here β and α are usually Euclidean parameters, and θ is a nuisance parameter, which can be of infinite dimension. Let η = (α, β, θ), where β is the parameter of interest and (α, θ) are nuisance parameters. Let b(Y) ∈ $L_2^0(P_\eta)$, a mean-zero square-integrable function with respect to Pη. Define the nonparametric information operator mη as

$$m_\eta\{b(Y)\} = E_\eta\bigl[E_\eta\{b(Y)\mid R, R(Y)\}\mid Y\bigr].$$
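To fix ideas, when Y is discrete the operator can be evaluated by direct summation, since Eη{b(Y)|R = r, r(Y)} is a weighted average of b over the values of Y compatible with the observed part. The following minimal numerical sketch (not from the paper; the support, pmf, and missing-data probabilities are all hypothetical, with missingness taken MCAR so that π cancels in the conditional expectations) illustrates the computation:

```python
import numpy as np

states = [(y1, y2) for y1 in (0, 1) for y2 in (0, 1)]  # support of Y = (Y1, Y2)
f = np.array([0.3, 0.2, 0.1, 0.4])                     # hypothetical joint pmf f(y)
pi = {"11": 0.6, "10": 0.2, "01": 0.2}                 # MCAR pattern probabilities

def cond_exp(b, comp):
    """E{b(Y) | Y_comp}, returned as a function on the full support."""
    out = np.zeros(len(states))
    for i, y in enumerate(states):
        idx = [j for j, z in enumerate(states) if z[comp] == y[comp]]
        out[i] = np.dot(f[idx] / f[idx].sum(), b[idx])
    return out

def m_operator(b):
    """m{b}(y) = sum_r pi(r|y) E{b(Y) | R = r, r(Y) = r(y)} (pi cancels under MCAR)."""
    return pi["11"] * b + pi["10"] * cond_exp(b, 0) + pi["01"] * cond_exp(b, 1)

b = np.array([1.0, -1.0, 0.5, -0.5])
b -= np.dot(f, b)                                      # center b: mean zero under f
print(m_operator(b))
```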

The Neumann series approximation to the efficient score appeared as the successive approximation in Robins et al. (1994). The method first finds the efficient score for β under the full data model, denoted by $S_\beta^{F,\text{eff}}$. The method then employs the successive approximation

$$U_N = S_\beta^{F,\text{eff}} + \Pi_\eta(I - m_\eta)U_{N-1},$$

where Πη is the projection onto the closure of the nuisance score space of the full data model and $U_0 = S_\beta^{F,\text{eff}}$. The semiparametric efficient score is Eη{U∞|R, R(Y)}, where U∞ denotes the limit of UN. There are many unanswered questions associated with the use of this approach in practice. First, we need guidelines for choosing a finite N in implementing the algorithm. Second, the successive approximation is only known to converge under the L2-norm, which is insufficient for studying the properties of the estimator when the underlying distribution used in computing the projection is estimated rather than known. Third, it is not known whether the estimator generated from the procedure involving approximations is indeed asymptotically equivalent to the estimator with known underlying distribution.

To answer these questions, we reformulate the successive approximation in another form, as UN = (I − Πηmη)UN−1, or equivalently as the explicit expression

$$U_N = (I - \Pi_\eta m_\eta)^N S_\beta,$$

where Sβ is the likelihood score for β. The equivalence of this simpler form to the successive approximation can be shown by noting that (I − Πη)Πη = 0 and $S_\beta^{F,\text{eff}} = (I - \Pi_\eta)S_\beta$. This allows us to express the efficient score for the missing data problem directly as

$$\lim_{N\to\infty} E\bigl\{(I - \Pi_\eta m_\eta)^N S_\beta \mid R, R(Y)\bigr\}.$$

Note that the approximation based on the new expression does not require us to first find the efficient score under the full data model. The approximation is the likelihood score when N = 0 and can be regarded as a robustification of the likelihood score when N > 0. An algorithm for finding the approximate locally efficient score for estimating β based on this expression can be described as follows. First, find an estimate of the nuisance parameters under the working models, using methods such as maximum likelihood. Then, compute the approximate efficient score with the nuisance parameters estimated from the working models. Finally, solve the score equation to obtain the β estimator. Results in the next section show that it suffices to take the number N in the approximation of an order higher than the logarithm of the sample size. Let

$$T_N\{R, R(Y), \eta\} = E\bigl\{(I - \Pi_\eta m_\eta)^N S_\beta \mid R, R(Y)\bigr\}$$

and T{R, R(Y), η} be the limit of TN{R, R(Y), η} under L2(Pη). The large sample properties of the approximate locally efficient estimator are studied in the next section. The following proposition describes the robust properties of the estimating scores, which are used in the next section. Similar results can be found in van der Laan & Robins (2003, sections 2.9 & 2.10). Note, however, that part (a) of the proposition is different from their results.
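In the toy discrete model sketched after the definition of mη, the iteration UN = (I − Πηmη)UN−1 and the conditioning step that produces TN can be carried out explicitly. In the sketch below (a continuation of that snippet), the projection Π is taken, purely for illustration, to be the projection onto centered functions of Y1, and Sβ is an arbitrary centered function standing in for the likelihood score:

```python
def project(b):
    """Stand-in nuisance projection Pi{b} = E{b | Y1} - E{b} (illustrative only)."""
    return cond_exp(b, 0) - np.dot(f, b)

S_beta = b.copy()                           # stand-in likelihood score for beta
U = S_beta.copy()
for N in range(1, 41):
    U_new = U - project(m_operator(U))      # U_N = (I - Pi m) U_{N-1}
    if N % 10 == 0:
        print(N, np.abs(U_new - U).max())   # increments shrink geometrically
    U = U_new

# T_N for the pattern observing Y1 only: condition U_N on Y1 (pi cancels under MCAR)
print(cond_exp(U, 0))
```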

Let P0 = P(β0,θ0,α0) be the true distribution generating the data. If, for any small ε > 0 and large M, there exists θε,M such that

$$f(y, \beta_0, \theta)\Bigl(1 + \varepsilon\bigl[S(y)1\{S(y)\le M\} - E_{(\beta_0,\theta)}\{S(Y)1\{S(Y)\le M\}\}\bigr]\Bigr) = f(y, \beta_0, \theta_{\varepsilon,M}),$$

where S(Y) = f(Y, β0, θ0)/f(Y, β0, θ) − 1, then {f(y, β0, θ), θ ∈ Θ} is called, in this paper, a super-convex family for θ at β0. Note that a super-convex family of distributions is always a convex family of distributions, which corresponds to M = ∞. When the densities are bounded away from infinity and from zero, a convex family is also a super-convex family.

Proposition 1

Assume that data are missing at random and that the true distribution generating the data is P0. Then the following results hold:

  (a) For any fixed N, if the nuisance model for the full data is correct, i.e., θ = θ0, then TN{R, R(Y), β0, θ0, α} is asymptotically unbiased under P0. The L2(P(β0,θ0,α)) limit of TN is also asymptotically unbiased if it is in L2(P0) and the missing data probabilities are bounded away from zero.
  (b) If the model for the missing data mechanism is correct, i.e., α = α0, and f(y, β0, θ) is a super-convex family with respect to θ, then E0[T{R, R(Y), β0, θ, α0}] = 0 if E0[T⊗2{R, R(Y), β0, θ, α0}] < ∞, where T⊗2 = TT′.

We omit the proof of this proposition because it is similar to proofs of such results in the literature, such as Robins et al. (2000) and van der Laan & Robins (2003). The proposition suggests that T is doubly robust, i.e., it is unbiased when either the missing data mechanism model or the nuisance model for the full data is correctly specified. For a fixed N, TN is unbiased only when the nuisance model for the full data is correct. However, note that TN approximates T. If we allow N to depend on the sample size, we can make TNn doubly robust as n → ∞. We explain in the next section how to implement this idea.

3 Estimation and inference based on approximate locally efficient scores

To simplify notation in this section, we use θ to denote the nuisance parameter; that is, we absorb α into θ. Denote the model ∫π(R|Y, α)f(Y, β, θ)dR̄(Y) by g(R, R(Y), β, θ) after the parameter absorption. Let θ(γ), γ ∈ Γ, define a working model which is a regular submodel. Let θ(γ̂) be a √n-consistent estimator of θ(γ) under the working models.

To accommodate β of infinite dimension, we use a functional form to denote TN and T. Let TN(i)(η)(h1) = TN{Ri, Ri(Yi), η}(h1) and T(i)(η)(h1) = T{Ri, Ri(Yi), η}(h1), where h1 ∈ H1, a Hilbert space. Let θ̂ = θ(γ̂). For a given N, let β̂N be the solution to the equation

$$P_nT_N(\hat\beta_N, \hat\theta)(h_1) = \frac{1}{n}\sum_{i=1}^n T_N^{(i)}(\hat\beta_N, \hat\theta)(h_1) = 0,$$

for all h1 ∈ H1. Let β̂ be the solution to the equation

$$P_nT(\hat\beta, \hat\theta)(h_1) = \frac{1}{n}\sum_{i=1}^n T^{(i)}(\hat\beta, \hat\theta)(h_1) = 0,$$

for all h1 ∈ H1. Define the linear operator Q0, a map from H1 to itself, by

$$\langle Q_0h_1, h_1'\rangle_{H_1} = E_0\Bigl\{\frac{\partial T(\eta_0)}{\partial\beta}(h_1)(h_1')\Bigr\}. \tag{1}$$

Assumption 9 in the Appendix guarantees that Q0 exists, is uniquely defined, and is continuously invertible, because of the following. For any given h1, the right-hand side of the foregoing equation defines a linear functional on H1. By the Riesz representation theorem, there exists an h** ∈ H1 such that, for all h1′ ∈ H1,

$$\langle h^{**}, h_1'\rangle_{H_1} = P_0\Bigl\{\frac{\partial T(\eta_0)}{\partial\beta}(h_1)(h_1')\Bigr\}.$$

We can thus define the map Q0 from H1 to H1 such that Q0h1 = h**. By varying h1 ∈ H1, we see that Q0 is well defined on H1. It is straightforward to verify that Q0 is a linear operator on H1. Similarly, we define linear operators QN and Q0N, mapping H1 to itself, respectively by

$$\langle Q_Nh_1, h_1'\rangle_{H_1} = P_0\Bigl\{\frac{\partial T_N(\beta_N^*, \theta^*)}{\partial\beta}(h_1)(h_1')\Bigr\}$$

and

$$\langle Q_{0N}h_1, h_1'\rangle_{H_1} = P_0\Bigl\{\frac{\partial T_N(\beta_0, \theta^*)}{\partial\beta}(h_1)(h_1')\Bigr\},$$

which are continuously invertible, respectively, by assumption 10 in the Appendix and by the continuity of the right-hand side with respect to β; here βN* denotes the limit of β̂N given in Theorem 2 below.
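When β is Euclidean (β ∈ R^k and H1 = R^k, with h1 ranging over the coordinate directions), the empirical score equation PnTN(β, θ̂)(h1) = 0 above is a k-dimensional root-finding problem and can be handed to a generic solver. A schematic sketch follows; `score_TN` is a hypothetical user-supplied callback returning the vector of averaged approximate scores at β with θ̂ held fixed inside it, not a function defined in the paper:

```python
import numpy as np
from scipy.optimize import root

def solve_score_equation(score_TN, beta_init):
    """Solve P_n T_N(beta, theta_hat)(e_j) = 0, j = 1, ..., k, for beta."""
    sol = root(score_TN, beta_init, method="hybr")  # modified Powell hybrid method
    if not sol.success:
        raise RuntimeError("score equation did not converge: " + sol.message)
    return sol.x
```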

We now state theorems on the asymptotic properties of the β estimators when data are missing at random and either the missing data mechanism model or the nuisance full data model is correctly specified. Theorem 1 describes the asymptotic behavior of β̂ as n → ∞. Theorem 2 states the asymptotic property of β̂N when N is fixed and n → ∞. Theorem 3 describes the asymptotic behavior of β̂N when n → ∞ and N, as a function of n, also tends to ∞. Assumptions and proofs are given in the Appendix.

Theorem 1

Under assumptions 1–9, as n → ∞, β̂ → β0 almost surely and

$$\langle\sqrt{n}(\hat\beta - \beta_0), h_1\rangle_{H_1} \rightsquigarrow N(0, V_0(h_1)),$$

uniformly for h1 ∈ H10, and

$$V_0(h_1) = E_0\bigl\{T(\beta_0, \theta^*)(Q_0^{-1}h_1)\bigr\}^2,$$

which can be consistently estimated uniformly for h1 ∈ H10 by

$$\frac{1}{n}\sum_{i=1}^n\bigl[T^{(i)}(\hat\beta, \hat\theta)\{\hat Q_0^{-1}(h_1)\}\bigr]^2,$$

where θ* = θ(γ*), γ* is the limit of γ̂, and

$$\langle\hat Q_0(h_1), h_1'\rangle_{H_1} = \frac{1}{n}\sum_{i=1}^n\sqrt{n}\bigl[T^{(i)}\{\hat\beta + h_1/\sqrt{n}, \hat\theta\}(h_1') - T^{(i)}\{\hat\beta, \hat\theta\}(h_1')\bigr],$$

for h1, h1′ ∈ H1. When the nuisance models for θ are correctly specified, θ* = θ0 and V0 attains the semiparametric efficient variance bound.

Theorem 2

Under assumptions 1–8 and 10–12, for any fixed N, as n → ∞, β̂N converges almost surely to βN* satisfying E0{TN(βN*, θ*)(h1)} = 0 for all h1 ∈ H1. The asymptotic bias of β̂N can be approximated by

$$\langle\beta_N^* - \beta_0, h_1\rangle_{H_1} \approx -E_0\{T_N(\beta_0, \theta^*)(Q_{0N}^{-1}h_1)\}.$$

Further,

$$\langle\sqrt{n}(\hat\beta_N - \beta_N^*), h_1\rangle_{H_1} \rightsquigarrow N(0, V_N(h_1))$$

uniformly over h1 ∈ H10, where

$$V_N(h_1) = E_0\Bigl[T_N(\beta_N^*, \theta^*)(Q_N^{-1}h_1) + E_0\Bigl\{\frac{\partial T_N(\beta_N^*, \theta^*)}{\partial\theta}(u)(Q_N^{-1}h_1)\Bigr\}\Bigr|_{u=U_1}\Bigr]^2$$

can be consistently estimated by

$$\frac{1}{n}\sum_{i=1}^n\Bigl(T_N^{(i)}\{\hat\beta_N, \hat\theta\}(\hat Q_N^{-1}h_1) + \sqrt{n}\bigl[\bar T_N\{\hat\beta_N, \hat\theta + U_i/\sqrt{n}\}(\hat Q_N^{-1}h_1) - \bar T_N\{\hat\beta_N, \hat\theta\}(\hat Q_N^{-1}h_1)\bigr]\Bigr)^2$$

uniformly for h1 ∈ H10, where Q̂N(h1) is defined by

$$\langle\hat Q_N(h_1), h_1'\rangle_{H_1} = \frac{1}{n}\sum_{i=1}^n\sqrt{n}\bigl[T_N^{(i)}\{\hat\beta_N + h_1/\sqrt{n}, \hat\theta\}(h_1') - T_N^{(i)}\{\hat\beta_N, \hat\theta\}(h_1')\bigr]$$

for h1, h1′ ∈ H1, and U1, ···, Un are the influence functions of θ̂ in estimating θ*, i.e.,

$$\hat\theta - \theta^* = \frac{1}{n}\sum_{i=1}^n U_i + o_p(1/\sqrt{n}).$$

Theorem 3

Let Nn be a sequence such that log n/Nn → 0 as n → ∞. Under assumptions 1–9, β̂Nn − β̂ → 0 P0-almost surely, and

$$\langle\sqrt{n}(\hat\beta_{N_n} - \hat\beta), h_1\rangle_{H_1} = o_{P_0}(1)$$

uniformly over h1 ∈ H10 as n → ∞. Further, V0(h1) can be consistently estimated by

$$\frac{1}{n}\sum_{i=1}^n\bigl[T_{N_n}^{(i)}(\hat\beta_{N_n}, \hat\theta)\{\hat Q_{0N_n}^{-1}(h_1)\}\bigr]^2$$

uniformly over h1 ∈ H10, where

$$\langle\hat Q_{0N_n}(h_1), h_1'\rangle_{H_1} = \frac{1}{n}\sum_{i=1}^n\sqrt{n}\bigl[T_{N_n}^{(i)}\{\hat\beta_{N_n} + h_1/\sqrt{n}, \hat\theta\}(h_1') - T_{N_n}^{(i)}\{\hat\beta_{N_n}, \hat\theta\}(h_1')\bigr],$$

for h1, h1′ ∈ H1.

In practice, Theorem 1 is useful only when the locally efficient score has a closed-form expression, which happens only occasionally. In general, Theorem 1 cannot be applied directly because of the unknown form of the locally efficient score. Theorem 2 can almost always be applied to the approximate score with a finite N. It can be seen from Theorem 2 that, although the bias in estimating β cannot be totally avoided, its magnitude can be controlled by selecting a sufficiently large N. This is because E0TN(β0, θ*)(h1) → E0T(β0, θ*)(h1) = 0 uniformly over h1 ∈ H10 as N → ∞. Furthermore, if the nuisance model for the full data is correctly specified in the missing data problem, the bias in estimating β is asymptotically zero for any fixed N as n → ∞, because E0TN(β0, θ*)(h1) = 0 by Proposition 1(a). Theorem 3 states that the approximate score with N sufficiently large relative to the sample size is asymptotically equivalent to the locally efficient score in estimating β. This confirms that the algorithm indeed finds the locally efficient estimator when carefully implemented.
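Theorem 3 only requires log n/Nn → 0, so any sequence growing slightly faster than log n qualifies. One admissible (purely illustrative) choice is sketched below; the squared-logarithm rate and the constant are not prescribed by the paper:

```python
import math

def choose_N(n, c=1.0):
    """N_n = ceil(c * (log n)^2), so that log(n) / N_n -> 0 as n grows."""
    return math.ceil(c * math.log(n) ** 2)

print(choose_N(400))  # 36 at the sample size used in the simulation study
```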

4 Applications to examples

In this section, we apply the theorems in the previous section to several regression models frequently used in practice.

Example 1: Missing data in parametric regression models

Let Y = (V, W, X) with density q(v|w, x)f(w|x, β)p(x), where f(w|x, β) is the parametric regression model with a Euclidean parameter β ∈ R^k, which is of primary interest, and V is the auxiliary information observed in addition to the outcome W and covariates X. The nuisance parameter for the complete data model is (q, p). We know from Robins et al. (1994) that doubly robust estimating scores have the form Eη{mη−1(D)|R, R(Y)}, where D is in the orthogonal complement of the nuisance score space. Specifically, D = S(W, X) − Eη{S(W, X)|X} for any square-integrable function S. The semiparametric efficient score is the special doubly robust estimating score with D satisfying the integral equation

$$E_\eta\{m_\eta^{-1}(D)\mid X, W\} - E_\eta\{m_\eta^{-1}(D)\mid X\} = \frac{\partial}{\partial\beta}\log f(W\mid X, \beta).$$

When the missing data form monotone patterns, mη−1 has a closed-form expression. But the foregoing integral equation does not have a closed-form solution even under the simplest missing data pattern with no auxiliary covariates involved. As a result, the successive approximation is needed except when the missing data form monotone patterns and we are satisfied with a doubly robust estimator.

The score operator for the parameters (q, β, p) is

$$A_\eta(h_1, h_{21}, h_{22}) = h_1^T\frac{\partial\log f}{\partial\beta} + h_{21}(v, w, x) + h_{22}(x),$$

where h1 ∈ H1 = R^k with k the dimension of β, and

$$H_{21} = \bigl\{h_{21}(v, w, x)\in L_2(P_\eta)\mid E_\eta\{h_{21}(V, W, X)\mid W, X\} = 0\bigr\}$$

and

$$H_{22} = \bigl\{h_{22}(x)\in L_2(P_\eta)\mid E_\eta\{h_{22}(X)\} = 0\bigr\}.$$

Note that H1 does not vary with β ∈ Ω. H10 can be chosen as H10 = {h1 ∈ R^k : |h1| ≤ 1}. Let H2 = H21 × H22 and let A2η(h21, h22) = Aη(0, h21, h22). The adjoint operator of A2η is

$$A_{2\eta}^*\{s(v, w, x)\} = \bigl\{s(v, w, x) - E_\eta(s\mid w, x),\; E_\eta(s\mid x) - E_\eta(s)\bigr\}.$$

It follows that $A_{2\eta}^*A_{2\eta}(h_{21}, h_{22}) = \{h_{21}(v, w, x), h_{22}(x)\}$. Hence, $A_{2\eta}^*A_{2\eta}$ is continuously invertible on H2 and $\Pi_\eta = A_{2\eta}(A_{2\eta}^*A_{2\eta})^{-1}A_{2\eta}^*$ has the form

$$\Pi_\eta(s) = s(v, w, x) - E_\eta(s\mid w, x) + E_\eta(s\mid x),$$

for any mean-zero square-integrable function s. Assuming that the densities involving the nuisance parameters are uniformly bounded, the convexity requirement for q(v|w, x)f(w|x, β)p(x) with respect to (q, p) can be verified as follows:

$$\tau q_1(v\mid w, x)p_1(x) + (1 - \tau)q_2(v\mid w, x)p_2(x) = q_\tau(v\mid w, x)p_\tau(x),$$

where

$$q_\tau(v\mid w, x) = \frac{\tau q_1(v\mid w, x)p_1(x) + (1 - \tau)q_2(v\mid w, x)p_2(x)}{\tau p_1(x) + (1 - \tau)p_2(x)}$$

and

$$p_\tau(x) = \tau p_1(x) + (1 - \tau)p_2(x)$$

for τ ∈ [0, 1].
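The projection Πη(s) = s − Eη(s|w, x) + Eη(s|x) derived above is easy to verify numerically when (V, W, X) is discrete. The sketch below (a toy check with a hypothetical joint pmf, not from the paper) confirms that the map is idempotent on mean-zero functions, i.e., that it is indeed a projection:

```python
import itertools
import numpy as np

pts = list(itertools.product((0, 1), repeat=3))   # support of (v, w, x), all binary
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(len(pts)))              # hypothetical joint pmf

def cond_exp(s, given):
    """E{s | coordinates listed in `given`}, as a function on the support."""
    out = np.zeros(len(pts))
    for i, y in enumerate(pts):
        idx = [j for j, z in enumerate(pts) if all(z[g] == y[g] for g in given)]
        out[i] = np.dot(p[idx] / p[idx].sum(), s[idx])
    return out

def Pi(s):
    # coordinates: v = 0, w = 1, x = 2
    return s - cond_exp(s, (1, 2)) + cond_exp(s, (2,))

s = rng.normal(size=len(pts))
s -= np.dot(p, s)                                 # center s (mean zero under p)
print(np.max(np.abs(Pi(Pi(s)) - Pi(s))))          # ~ 1e-16: Pi is idempotent
```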

Example 2: Marginal mean model

Let W = (W1, ···, WK)^T and E(Wk|X) = gk(Xkβ) for k = 1, ···, K. Let g(β) = {g1(X1β), ···, gK(XKβ)}^T, and let f(ε) be the joint density of W given X, where εk = wk − gk(xkβ) and ε = (ε1, ···, εK). Let the density for X be p and the density for V given (W, X) be q, where V denotes the auxiliary covariate. The nuisance parameter for the complete data model is (q, f, p). When data are missing at random, the efficient score for estimating β is Eη{mη−1(D)|R, R(Y)} (Robins et al., 1994, 1995), where D = Covη(S, ε|X){Varη(ε|X)}^−1 ε satisfies

$$\mathrm{Cov}_\eta\{m_\eta^{-1}(D), \varepsilon\mid X\} = \frac{dg}{d\beta}.$$

When X is completely observed, the efficient score has the form

$$\frac{dg}{d\beta}\bigl[\mathrm{Var}_\eta\{m_\eta^{-1}(\varepsilon)\mid X\}\bigr]^{-1}m_\eta^{-1}(\varepsilon).$$

The successive approximation is needed when either the missing data form nonmonotone patterns or covariates are subject to missing values.

When data are fully observed, the score for estimating β is

$$A_{1\eta}h_1 = -\sum_{k=1}^K h_1^TX_k^Tg_k'(X_k\beta)\frac{\partial\log f}{\partial\varepsilon_k}(\varepsilon_1, \cdots, \varepsilon_K),$$

where h1 ∈ H1 = R^d and d is the dimension of the parameter β. H10 can be taken as the unit ball in R^d. The nuisance score is

$$A_{2\eta}\{h_{21}(V, X, W), h_{22}(X, W), h_{23}(X)\} = h_{21}(V, X, W) + h_{22}(X, W) + h_{23}(X),$$

where H2 = H21 × H22 × H23 with

$$H_{21} = \bigl\{h_{21}(v, w, x)\in L_2(P_\eta)\mid E_\eta\{h_{21}(V, W, X)\mid W, X\} = 0\bigr\},$$
$$H_{22} = \bigl\{h_{22}(w, x)\in L_2(P_\eta)\mid E_\eta\{h_{22}(X, W)\mid X\} = 0,\; E_\eta\{\varepsilon h_{22}(X, W)\mid X\} = 0\bigr\},$$

and

$$H_{23} = \bigl\{h_{23}(x)\in L_2(P_\eta)\mid E_\eta\{h_{23}(X)\} = 0\bigr\}.$$

The adjoint operator of A2η is

$$A_{2\eta}^*S(V, X, W) = \bigl\{S - E_\eta(S\mid X, W),\; E_\eta(S\mid X, W) - E_\eta(S\mid X) - \mathrm{Cov}_\eta(S, \varepsilon\mid X)\{\mathrm{Var}_\eta(\varepsilon\mid X)\}^{-1}\varepsilon,\; E_\eta(S\mid X) - E_\eta(S)\bigr\}.$$

It follows that $A_{2\eta}^*A_{2\eta}(h_{21}, h_{22}, h_{23}) = \{h_{21}(v, w, x), h_{22}(w, x), h_{23}(x)\}$. Hence, $A_{2\eta}^*A_{2\eta}$ has a continuous inverse on H21 × H22 × H23 and $\Pi_\eta = A_{2\eta}(A_{2\eta}^*A_{2\eta})^{-1}A_{2\eta}^*$ appears as

$$\Pi_\eta s = s(V, X, W) - E_\eta(s\varepsilon^T\mid X)\{\mathrm{Var}_\eta(\varepsilon\mid X)\}^{-1}\varepsilon,$$

for any mean-zero square-integrable function s. The efficient score for β under the full data is

$$S_\beta^{\text{eff},F} = \{X_1^Tg_1'(X_1\beta), \cdots, X_K^Tg_K'(X_K\beta)\}\{\mathrm{Var}_\eta(W\mid X)\}^{-1}\varepsilon.$$

When the densities involving the nuisance parameters are uniformly bounded, the convexity requirement can be verified as follows.

$$\tau q_1(v\mid w, x)f_1\{w - g(\beta)\}p_1(x) + (1 - \tau)q_2(v\mid w, x)f_2\{w - g(\beta)\}p_2(x) = q_\tau(v\mid w, x)f_\tau\{w - g(\beta)\}p_\tau(x),$$

where

$$q_\tau(v\mid w, x) = \frac{\tau q_1(v\mid w, x)f_1\{w - g(\beta)\}p_1(x) + (1 - \tau)q_2(v\mid w, x)f_2\{w - g(\beta)\}p_2(x)}{\tau f_1\{w - g(\beta)\}p_1(x) + (1 - \tau)f_2\{w - g(\beta)\}p_2(x)},$$
$$f_\tau\{w - g(\beta)\} = \frac{\tau f_1\{w - g(\beta)\}p_1(x) + (1 - \tau)f_2\{w - g(\beta)\}p_2(x)}{\tau p_1(x) + (1 - \tau)p_2(x)},$$

and pτ(x) = τp1(x) + (1 − τ)p2(x) for τ ∈ [0, 1].
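The projection Πηs = s − Eη(sε^T|X){Varη(ε|X)}^−1 ε can likewise be checked by Monte Carlo. The toy sketch below (K = 1 response, a binary covariate, and an arbitrary test function s, all hypothetical choices) estimates the conditional moments by within-group averages and verifies that Πs is conditionally uncorrelated with ε given X:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.integers(0, 2, size=n).astype(float)     # binary covariate
eps = rng.normal(scale=1.0 + X, size=n)          # heteroscedastic error, E(eps|X) = 0
W = 0.5 * X + eps                                # marginal mean model E(W|X) = X*beta

s = np.cos(W) + X                                # arbitrary square-integrable s(X, W)
s -= s.mean()                                    # center s

proj = np.empty(n)
for x in (0.0, 1.0):
    g = X == x
    coef = np.mean(s[g] * eps[g]) / np.var(eps[g])   # E(s eps|X=x) / Var(eps|X=x)
    proj[g] = s[g] - coef * eps[g]                   # (Pi s) on the group X = x

for x in (0.0, 1.0):
    g = X == x
    print(x, np.mean(proj[g] * eps[g]))          # both approximately 0
```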

Example 3: The missing covariate problem in Cox regression model

Suppose that T is the survival time of a subject, which is subject to censoring by the censoring time C. Given the (time-independent) covariate Z, T and C are independent. X = T ∧ C = min(T, C) and δ = 1{T ≤ C}, rather than (T, C), are observed. Z is subject to missing values. Assume that, given (T, C, Z), the missing data mechanism depends only on the observed data R(Y) = {X, δ, R, R(Z)}. Suppose that the Cox proportional hazards model holds, that is,

$$\lim_{\Delta\downarrow 0}\frac{1}{\Delta}P(t < T \le t + \Delta\mid T \ge t, Z) = \lambda(t)\varphi(\beta Z),$$

where φ is a known function and λ(t) is an unknown baseline hazard function. The nuisance parameter includes the censoring distribution, the baseline hazard, and the covariate distribution. The efficient score for estimating β when data are subject to MAR missing values is Eη[mη−1{D(X, δ, Z)}|R, R(X, δ, Z)] (Robins et al., 1994; Nan et al., 2004), where D is the unique solution to

$$\frac{\partial\log\varphi}{\partial\beta} - \frac{E_\eta\{\xi(u)\varphi\,\partial\log\varphi/\partial\beta\}}{E_\eta\{\xi(u)\varphi\}} = m_\eta^{-1}(D)(u, 1, Z) - \frac{E_\eta\{m_\eta^{-1}(D)\xi(u)\mid Z\}}{E_\eta\{\xi(u)\mid Z\}} - E_\eta\Bigl(\xi(u)\varphi\Bigl[m_\eta^{-1}(D)(u, 1, Z) - \frac{E_\eta\{m_\eta^{-1}(D)\xi(u)\mid Z\}}{E_\eta\{\xi(u)\mid Z\}}\Bigr]\Bigr)\Big/E_\eta\{\xi(u)\varphi\}$$

and

$$D = \int\Bigl\{b_1(u, Z) - \frac{E_\eta\{\xi(u)\varphi\,b_1(u, Z)\}}{E_\eta\{\xi(u)\varphi\}}\Bigr\}\bigl\{dN(u) - \xi(u)\varphi(\beta Z)\,d\Lambda(u)\bigr\}$$

for some b1, where ξ(u) = 1{X ≥ u} and N(u) = 1{X ≤ u, δ = 1}. The successive approximation is needed to obtain a locally efficient estimator of β.

The density for (X, δ, Z) is

$$f(x, \delta\mid z, \beta, \lambda)p(z) = \lambda^{\delta}(x)\varphi^{\delta}(\beta z)\exp\{-\Lambda(x)\varphi(\beta z)\}g_c^{1-\delta}(x\mid z)\bar G_c^{\delta}(x\mid z)p(z),$$

where $\Lambda(x) = \int_0^x\lambda(t)\,dt$, gc is the density function of the censoring time distribution Gc, and Ḡc = 1 − Gc. Let $\Lambda_C(x, z) = \int_0^x\lambda_C(t, z)\,dt$ with λC = gc/Ḡc, dMT(t, z) = dNT(t) − Y(t)λT(t, z)dt with NT(t) = 1{X ≤ t, δ = 1}, and dMC(t, z) = dNC(t) − Y(t)λC(t, z)dt with NC(t) = 1{X ≤ t, δ = 0}. The score operator for the parameters (β, λ, gc, p) is

$$A_\eta\{h_{11}, h_{12}(x), h_{21}(x, z), h_{22}(z)\} = \int h_{11}^T\frac{\partial}{\partial\beta}\log\varphi(\beta Z)\,dM_T(t, z) + \int h_{12}(t)\,dM_T(t, z) + \int h_{21}(t, Z)\,dM_C(t, Z) + h_{22}(Z),$$

where H1 = H11 × H12, H2 = H21 × H22, and H11 = R^k with k being the dimension of β,

$$H_{12} = \bigl[h_{12}(t)\mid h_{12}(t)\in L_2\{d\Lambda(t)\}\bigr],\qquad H_{21} = \bigl[h_{21}(t, z)\mid h_{21}(t, z)\in L_2\{d\Lambda_C(t\mid z)\,dP(z)\}\bigr],$$

and

$$H_{22} = \bigl\{h_{22}(z)\in L_2(P_\eta)\mid E_\eta\{h_{22}(Z)\} = 0\bigr\}.$$

For Λ bounded at T0, the study stopping time, H12 does not change with Λ. Hence, H1 is fixed. Define the inner product on H2 as

$$\langle(h_{21}, h_{22}), (h_{21}', h_{22}')\rangle_{H_2} = E_\eta\Bigl\{\int h_{21}(t, Z)h_{21}'(t, Z)\,d\Lambda_C(t\mid Z)\Bigr\} + E_\eta\{h_{22}(Z)h_{22}'(Z)\}.$$

It is not difficult to see that H2 is a Hilbert space under this inner product. Similarly, we can define an inner product on H1 as

$$\langle(h_{11}, h_{12}), (h_{11}', h_{12}')\rangle_{H_1} = h_{11}^Th_{11}' + \int h_{12}(t)h_{12}'(t)\,d\Lambda(t)$$

to make it a Hilbert space. Let H10 be the subset of H1 such that, for any (h11, h12) ∈ H10, |h11| is bounded by 1 and h12 ∈ BV[0, T0], i.e., h12 has bounded variation on [0, T0].

Let A2η(h21, h22) = Aη(0, 0, h21, h22). The adjoint of A2η satisfies

$$\langle A_{2\eta}(h_{21}, h_{22}), s(x, \delta, z)\rangle_{L_2(P_\eta)} = \langle(h_{21}, h_{22}), A_{2\eta}^*\{s(x, \delta, z)\}\rangle_{H_2}.$$

Note that, for any s(X, δ, Z) ∈ L2(Pη), s can be represented as (Sasieni, 1992; Nan et al., 2004)

$$s(X, \delta, Z) = \int\Bigl[s(t, 1, Z) - \frac{E_\eta\{s(X, \delta, Z)Y(t)\mid Z\}}{E_\eta\{Y(t)\mid Z\}}\Bigr]dM_T(t, z) + \int\Bigl[s(t, 0, Z) - \frac{E_\eta\{s(X, \delta, Z)Y(t)\mid Z\}}{E_\eta\{Y(t)\mid Z\}}\Bigr]dM_C(t, z) + E_\eta\{s(X, \delta, Z)\mid Z\},$$

and the three components are orthogonal to each other. It follows that

$$\langle A_{2\eta}(h_{21}, h_{22}), s(X, \delta, Z)\rangle_{L_2(P_\eta)} = E_\eta\{h_{22}(Z)E_\eta(s\mid Z)\} + E_\eta\Bigl[\int h_{21}(t, Z)\Bigl\{s(t, 0, Z) - \frac{E_\eta\{s(X, \delta, Z)Y(t)\mid Z\}}{E_\eta\{Y(t)\mid Z\}}\Bigr\}d\langle M_C\rangle(t, z)\Bigr],$$

where d⟨MC⟩(t, z) = Y(t)dΛC(t, z). It can be seen that the adjoint operator $A_{2\eta}^*$ is

$$A_{2\eta}^*(s) = \Bigl(s(t, 0, z) - \frac{E_\eta\{s(X, \delta, Z)Y(t)\mid Z\}}{E_\eta\{Y(t)\mid Z\}},\; E_\eta\{s(X, \delta, Z)\mid Z\} - E_\eta\{s(X, \delta, Z)\}\Bigr).$$

By direct calculation, it follows that

$$A_{2\eta}^*A_{2\eta}(h_{21}, h_{22}) = \bigl(h_{21}(t, Z)E\{Y(t)\mid Z\},\; h_{22}(Z)\bigr),$$

which implies that $A_{2\eta}^*A_{2\eta}$ is continuously invertible on H2 when Eη{Y(t)|Z} > σ > 0 for all t ≤ T0. The projection operator appears as

$$\Pi_\eta(s) = \int\Bigl[s(t, 0, Z) - \frac{E_\eta\{s(X, \delta, Z)Y(t)\mid Z\}}{E_\eta\{Y(t)\mid Z\}}\Bigr]dM_C(t, Z) + E_\eta\{s(X, \delta, Z)\mid Z\}$$

for a mean-zero square-integrable function s. The efficient score for estimating (β, Λ) can be expressed as

$$\lim_{N\to\infty}E_\eta\bigl\{(I - \Pi_\eta m_\eta)^N a(X, \delta, Z)\mid X, \delta, R, R(Z)\bigr\},$$

where

$$a(X, \delta, Z) = \int h_{11}^T\frac{\partial}{\partial\beta}\log\varphi(\beta Z)\,dM_T(t, Z) + \int h_{12}(t)\,dM_T(t, Z).$$

The convexity requirement can be verified as follows.

$$\tau g_{c1}^{1-\delta}(x\mid z)\bar G_{c1}^{\delta}(x\mid z)p_1(z) + (1 - \tau)g_{c2}^{1-\delta}(x\mid z)\bar G_{c2}^{\delta}(x\mid z)p_2(z) = g_{c\tau}^{1-\delta}(x\mid z)\bar G_{c\tau}^{\delta}(x\mid z)p_\tau(z),$$

where

$$g_{c\tau}(t\mid z) = \frac{\tau g_{c1}(t\mid z)p_1(z) + (1 - \tau)g_{c2}(t\mid z)p_2(z)}{\tau p_1(z) + (1 - \tau)p_2(z)}$$

and

$$p_\tau(z) = \tau p_1(z) + (1 - \tau)p_2(z),$$

for τ ∈ [0, 1]. Note that we consider the regression parameter and the baseline hazard jointly as the parameters of interest, rather than the regression parameter alone, because of the convexity requirement. This treatment differs from those in the literature.

5 Numeric study

5.1 Simulation studies

We perform a simulation study on missing data in parametric regression with and without auxiliary covariates. Two parametric regression models were simulated: logistic regression and linear regression with a normal error.

In the logistic regression model, two independent covariates were simulated: one binary and one normally distributed. One normally distributed auxiliary covariate was also simulated. It was assumed that, given the covariates, the outcome and the auxiliary covariate were independent, but the auxiliary covariate depended on the other covariates. In the simulation, both covariates were subject to missing values and the missingness depended only on the outcome and the auxiliary covariate. More specifically, we assumed that E(Y|X1, X2) = g(β0 + β1X1 + β2X2), where g(t) = (1 + e^{−t})^{−1} and (β0, β1, β2) = (1, −0.5, 0.5). The model for the auxiliary covariate V given X1 and X2 was set to E(V|X1, X2) = ψ0 + ψ1X1 + ψ2X2 + ψ3X1X2 with a standard normal error. In the simulation, we set (ψ0, ψ1, ψ2) = (0.5, 0.3, 1), with ψ3 = 0 corresponding to a correct model for V given X1 and X2 in the data analysis and ψ3 = −2 corresponding to a severely misspecified one. Three missing covariate patterns were simulated: completely observed, X1 observed only, and X2 observed only. Let R1 and R2 denote the missingness indicators for X1 and X2, respectively. The missing data were generated by the model

$$\log\frac{P(R_1 = i, R_2 = j\mid Y, V)}{P(R_1 = 1, R_2 = 1\mid Y, V)} = \alpha_0 + \alpha_1Y + \alpha_2V + \alpha_3YV,$$

where (i, j) = (1, 0) or (0, 1) and (α0, α1, α2) = (−0.5, −0.5, 0.5), with α3 = 0 corresponding to a correct missing data mechanism model in the data analysis and α3 = −1.5 corresponding to an incorrect one.
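A minimal sketch of the data-generating step of this design follows (the rounding, modeling, and estimation steps are omitted; the seed and implementation details are, of course, arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 400
b0, b1, b2 = 1.0, -0.5, 0.5
psi = (0.5, 0.3, 1.0, 0.0)             # psi3 = 0: correct auxiliary covariate model
a0, a1, a2, a3 = -0.5, -0.5, 0.5, 0.0  # alpha3 = 0: correct missingness model

X1 = rng.integers(0, 2, size=n).astype(float)
X2 = rng.normal(size=n)
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(b0 + b1 * X1 + b2 * X2))))
V = psi[0] + psi[1] * X1 + psi[2] * X2 + psi[3] * X1 * X2 + rng.normal(size=n)

# log-odds of each partially observed pattern against the complete pattern (1, 1)
eta = a0 + a1 * Y + a2 * V + a3 * Y * V
w = np.stack([np.ones(n), np.exp(eta), np.exp(eta)], axis=1)  # (1,1), (1,0), (0,1)
probs = w / w.sum(axis=1, keepdims=True)
pattern = np.array([rng.choice(3, p=pr) for pr in probs])     # 0:(1,1) 1:(1,0) 2:(0,1)

X1_obs = np.where(pattern != 2, X1, np.nan)    # X1 is missing under pattern (0, 1)
X2_obs = np.where(pattern != 1, X2, np.nan)    # X2 is missing under pattern (1, 0)
print(np.bincount(pattern) / n)                # observed pattern frequencies
```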

The correct model for Y given X1 and X2 was always used in the analysis of the simulated data. The distributions of the covariates and the auxiliary covariate were assumed unknown in the analysis of the simulated data. The semiparametric odds ratio models with bilinear log-odds ratio functions were used for modeling the distributions of the covariates (X1, X2) and of the auxiliary covariate given the outcome and the covariates (X1, X2) in the analysis. The polytomous logit regression model with different sets of parameters for different missing patterns and without the interaction term was always assumed in the data analysis. This implies that the missing data mechanism model was misspecified in the analytical model if the model generating the missing data included the interaction term. To compare the performance of different methods, we computed the following estimators for the regression parameter. The first one was the estimator from the complete-case analysis (CC), which is the solution to the estimating equation,

$$\sum_{i=1}^n 1\{R_i = \mathbf{1}\}\frac{\partial}{\partial\beta}\log f(Y_i\mid X_i, \beta) = 0,$$

where 1 is a vector with 1 in all of its components. The second was the simple missing-data-probability weighted estimator (SW), which is the solution to the estimating equation

$$\sum_{i=1}^n\frac{1\{R_i = \mathbf{1}\}}{\pi_i(\mathbf{1})}\frac{\partial}{\partial\beta}\log f(Y_i\mid X_i, \beta) = 0,$$

where πi(r) = π{r, r(Vi, Xi, Yi), α̂} for each missing-data pattern r, and α̂ is the maximum likelihood estimate under the missing-data mechanism model, that is, the polytomous linear logit model without interaction. The third was the augmented missing-data-probability weighted estimator (AW), which is the solution to the estimating equation

$$\sum_{i=1}^n\Bigl[\frac{1\{R_i = \mathbf{1}\}}{\pi_i(\mathbf{1})}\frac{\partial}{\partial\beta}\log f(Y_i\mid X_i, \beta) + \sum_r\Bigl\{1\{R_i = r\} - \frac{1\{R_i = \mathbf{1}\}}{\pi_i(\mathbf{1})}\pi_i(r)\Bigr\}\hat E\Bigl\{\frac{\partial}{\partial\beta}\log f(Y_i\mid X_i, \beta)\Bigm| R = r, r(V_i, X_i, Y_i)\Bigr\}\Bigr] = 0,$$

where Ê was computed using the distribution estimated by the maximum likelihood method described next. The fourth was the maximum likelihood estimator (ML) with the bilinear odds ratio model without interaction for the covariate distribution. The last two were the likelihood robustification estimators proposed in this paper, using the approximation with N = 10 (LR-10) and N = 20 (LR-20), respectively.
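For the logistic model, the CC and SW estimating equations above reduce to (weighted) logistic score equations. The sketch below continues the simulated-data snippet, plugging in the true complete-case probabilities for πi(1) as an illustrative shortcut; in the paper these are estimated from the polytomous logit model:

```python
from scipy.optimize import root

def weighted_logistic_score(beta, wts):
    """Weighted logistic score: sum_i wts_i (Y_i - mu_i) (1, X1_i, X2_i)."""
    mu = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * X1 + beta[2] * X2)))
    r = wts * (Y - mu)
    return np.array([r.sum(), (r * X1).sum(), (r * X2).sum()])

cc = (pattern == 0).astype(float)     # complete-case indicator 1{R_i = 1}
beta_cc = root(weighted_logistic_score, np.zeros(3), args=(cc,)).x
beta_sw = root(weighted_logistic_score, np.zeros(3), args=(cc / probs[:, 0],)).x
print(beta_cc, beta_sw)
```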

The simulation results were based on 500 replicates of a sample size of 400. The missing proportions for X1 and for X2 were approximately 25% each, and the average number of complete cases was close to 200. Table 1 lists the simulation results for the binary outcome data. As expected, when all models are correct, the biases of all estimators other than the CC estimator are relatively small, but their efficiencies differ, with the ML estimator the most efficient and the SW estimator the least efficient. When the covariate model is correct and the missing data mechanism model is incorrect, the CC and SW estimators can have substantial biases, while the biases of all the other estimators are small. When the missing data mechanism model is correct and the covariate model is incorrect, the SW estimator is unbiased, whereas the ML estimator is subject to a sizable bias; both the LR-10 and LR-20 estimators largely correct the bias in the ML estimator. When neither model is correct, all the estimators are biased, but the LR-10 and LR-20 estimators, along with the AW estimator, have much smaller biases than the CC, SW, and ML estimators. The variance estimates for the likelihood robustification estimators appear to work well. The AW estimator performs well in all the above cases, both in terms of bias and variation. This is partly because it has the doubly robust property in the narrow sense that the estimator is consistent when either the covariate models or the missing data mechanism model is correct, as long as both the missing data mechanism and its model depend only on the fully observed covariates and the outcome.

Table 1
Simulation results for the logistic regression model with missing covariates and auxiliary information.

In the linear regression model, the variance of the residual error was set to 1. Variables were generated in the same way as in the logistic regression model, with the exceptions that a normal error was used in generating Y and that g(t) = t. To simplify the computation involved in the analysis of the simulated data, we included V as a third covariate in the linear regression model; however, V had no effect on Y conditional on X1 and X2. The integral with respect to y in computing the expectations in the robustification procedure was approximated by a 10-point Gauss–Hermite quadrature. LR-5 (N = 5) and LR-10 (N = 10) estimators were computed. Five hundred replicates of samples of size 200 were used in the simulation. The results are shown in Table 2. The behavior of the estimators is almost the same as that observed for the logistic regression model. The difference between LR-5 and LR-10 is still relatively small, which indicates that the convergence rate of the likelihood robustification approximation is reasonably fast in the simulated cases.

Table 2
Simulation results for the linear regression model with missing covariates.

In summary, the SW estimator is sensitive to misspecification of the missing data mechanism. The ML estimator can have sizable bias when the covariate models are severely misspecified. We have also simulated other scenarios (not shown) which suggest that the ML estimator with the semiparametric odds ratio model for the covariates is relatively robust against covariate model misspecification. The AW estimator is very robust, although it does not have the doubly robust property in general. The likelihood robustification estimators perform better than the AW estimator in all the cases. The estimators from LR-5, LR-10, and LR-20 are nearly indistinguishable, which suggests that the approximation with N = 10, or even N = 5, is good enough in the simulated cases. Other simulations not shown indicate that the number N that gives a good approximation depends on the amount of missing data: in general, the higher the percentage of missing data, the larger the required N. In practice, N can be determined empirically by comparing estimators computed with different numbers of approximation terms. In the computation, the covariates subject to missingness were rounded to the nearest 0.05 in the logistic regression and to the nearest 0.1 in the linear regression. The effect of the rounding on the parameter estimates was nearly negligible, as indicated by the results (not shown) when finer roundings were used.

5.2 Application to hip fracture data

The hip fracture data were collected by Dr. Barengolts at the College of Medicine of the University of Illinois at Chicago in a study of hip fracture in veterans. The study matched each case with a control by age and race. Risk factors for bone fracture were assessed. As in Chen (2004), we concentrate on 9 of the risk factors in the analysis. One of the challenging problems in analyzing this dataset is that most of the risk factors are subject to missing values and there are a large number (38 altogether) of missing patterns. This dataset was analyzed in Chen (2004) by the likelihood method using the semiparametric odds ratio models proposed there for the covariates. Since the covariate models applied there are not guaranteed to be correctly specified, it is of interest to check whether any substantial bias is introduced into the parameter estimator by potential covariate model misspecification. This is assessed here by computing the doubly robust estimators of the parameter and comparing them with the maximum likelihood estimator.

There were a few obstacles in implementing the proposed method on this dataset. The primary problem was estimating the missing data probabilities. Since many missing patterns (26 out of 38) have fewer than 5 observations, it is virtually impossible to estimate missing data probabilities that depend on one or more variables. As a compromise, we assumed that the missingness did not depend on the observed or unobserved data, i.e., MCAR. Under this assumption, the simple missing-data-probability weighted estimator is the same as the estimator from the complete-case analysis. We computed the estimator from the complete-case analysis, the maximum likelihood estimator, the augmented weighted estimator, and the likelihood robustification estimators with N = 10 and N = 20, respectively. In computing these estimators, we rounded the three continuous variables, BMI, log(HGB), and albumin, so that each had about 10 categories. This reduces the computation time and the storage space required; the effect of rounding on the parameter estimators is small, as discussed in Chen (2004). All the parameter estimates are shown in Table 3, except LR-20, which is the same as LR-10.

Table 3
Analysis of the hip fracture data

The regression coefficients for LevoT4 and dementia estimated from the complete-case analysis are substantially different from those estimated by the other methods. The estimates from the maximum likelihood, the augmented weighted estimating equation, and the likelihood robustification approaches are very close, with those from the latter two even closer. This suggests that the covariate models used in the likelihood approach are reasonable, in the sense that they may be close to correctly specified, and that, even if they are misspecified, the influence of the misspecification on the parameter estimates is very small.

6 Discussion

We have shown that the Neumann series approximation can be used to find a locally efficient estimator in missing data problems under the assumption that all configurations of the full data can be observed with probability bounded away from zero. This helps to close a gap between the semiparametric efficiency theory for missing data problems and the implementation of a procedure for finding such an estimator. The results can easily be modified to study the asymptotic behavior of the doubly robust estimators when missing data are nonmonotone. Note that the results do not cover the case where a continuous inverse of mη does not exist; similar results in the latter case are expected to be much harder to obtain.

Supplementary Material

Supplementary

Acknowledgments

The author thanks the editor for the detailed comments which have greatly improved the presentation of the paper. The author would also like to thank Professor James Robins for his insightful comments on the earlier versions of the paper. Comments from Drs. Y. Q. Chen and C. Y. Wang at FHCRC on the earlier versions of the paper are also very much appreciated. The research was supported by a grant from NCI/NIH on statistical methods for missing data.

Appendix: Proof of Theorems

Assume that the semiparametric model under consideration is π(r|y, η)p(y, η), where p(y, η) is the density of the full data y with respect to μ, a product of Lebesgue measures and counting measures. Assume that η = (β, θ) ∈ Ω × Θ, where β is the parameter of interest and θ is the nuisance parameter. The marginal distribution for the observed data (R, R(Y)) is g(r, r(y), η) = ∫π(r|y, η)p(y, η)dr̄(y), where r̄(y) denotes those components of y that are missing in the missing pattern r. Assume that θ(γ), γ ∈ Γ, is a restricted parameterization of θ such that θ(Γ) ⊂ Θ. Let η0 = (β0, θ*) and let P0 be the true probability measure that generated the data. The following regularity conditions are used in the theorems.

  1. For any β ∈ Ω and γ ∈ Γ, aside from a μ-zero set, {(r, y) | g(r, r(y), β, θ(γ)) > 0} is the same for any fixed r. π(r|y, η) and p(y, η) are bounded away from zero and ∞ uniformly for all y and η ∈ ℰ = Ω × θ(Γ) ⊂ Ω × Θ, if π(r|y, η) > 0 for a μ-nonzero set of y. The full data model p(y, η) is a convex family with respect to θ.
  2. Ω is a compact subset of a Hilbert space. The true parameter value β0 is an inner point of Ω. Θ is a subset of another Hilbert space. As n → ∞, γ̂ → γ* in the norm defined on Γ. θ(γ) is continuous.
  3. As an L2(μ) function of (β, θ) ∈ Ω × Θ, {π(r|y, η)p(y, η)}^{1/2} is Fréchet differentiable with respect to η ∈ Ω × Θ. The score operator, defined as 2{π(r|y, η)p(y, η)}^{−1/2} times the derivative, is denoted by (A1η(h1), A2η(h2)) with h1 ∈ H1 and h2 ∈ H2. Both H1 and H2 are Hilbert spaces.
  4. $A_{2\eta}^*A_{2\eta}$ is continuously invertible at η0, where $A_{2\eta}^*$, mapping L2{Pη(y)} to H2, is the adjoint operator of A2η.
  5. For η ∈ ℰ, (πηpη)^{1/2}A1η(h1) and (πηpη)^{1/2}A2η(h2) are continuous with respect to η in L2(μ) and are Fréchet differentiable with respect to β in a neighborhood of η0 in L2(μ), and $A_{2\eta}^*\{(\pi_\eta p_\eta)^{1/2}s\}$ is continuous with respect to η in the H2 norm for s ∈ L2(μ) and is Fréchet differentiable with respect to β in a neighborhood of η0 in the H2 norm.
  6. Suppose that p(y, η) = dPη/d(μ1 × ··· × μJ), where μj is Lebesgue measure on R1 or a counting measure, j = 1, ···, J. Suppose that, for any missing pattern r, π(r|y, η)p(y, η), A1η(h1), A2η(h2), and $A_{2\eta}^*(s)$, and their derivatives with respect to β for η ∈ ℰ, are all continuous with respect to the jth argument of y if μj is Lebesgue measure, j = 1, ···, J, where h1, h2, and s are continuous with respect to yj.
  7. There exists a norm on ℰ, denoted by ‖·‖, such that
     $$\|\pi(r\mid y, \eta_1) - \pi(r\mid y, \eta_2)\|_{L_\infty(P_{\eta_0})} \le C_1\|\eta_1 - \eta_2\|,\qquad \|p(y, \eta_1) - p(y, \eta_2)\|_{L_\infty(P_{\eta_0})} \le C_2\|\eta_1 - \eta_2\|,$$
     $$\|A_{1\eta_1}(h_1) - A_{1\eta_2}(h_1)\|_{L_\infty(P_{\eta_0})} \le C_3\|\eta_1 - \eta_2\|\,\|h_1\|_{H_1},\qquad \|A_{2\eta_1}(h_2) - A_{2\eta_2}(h_2)\|_{L_\infty(P_{\eta_0})} \le C_4\|\eta_1 - \eta_2\|\,\|h_2\|_{H_2},$$
     and
     $$\|A_{2\eta_1}^*(s) - A_{2\eta_2}^*(s)\|_{H_2} \le C_5\|\eta_1 - \eta_2\|\,\|s\|_{L_\infty(P_{\eta_0})}$$
     for any η1, η2 ∈ ℰ and some constants Ci, i = 1, ···, 5. The ε-covering number of ℰ under ‖·‖, N(ℰ, ε, ‖·‖), satisfies
     $$\int_0^\infty\sqrt{\log N(\mathcal{E}, \varepsilon/|\log\varepsilon|, \|\cdot\|)}\,d\varepsilon < \infty.$$
  8. There exists an H10 ⊂ H1 such that, for any fixed continuously invertible map Q from H1 to itself, if ⟨Qh1, β⟩H1 = 0 for all h1 ∈ H10, then β = 0. The covering number of H10 under the supremum norm, N(H10, ε, ‖·‖∞), satisfies
     $$\int_0^\infty\sqrt{\log N(H_{10}, \varepsilon, \|\cdot\|_\infty)}\,d\varepsilon < \infty.$$
  9. $\inf_{\|h_1\|_{H_1} = 1,\,\|h_1'\|_{H_1} = 1}\bigl|E_0\{\partial T(\eta_0)/\partial\beta\,(h_1)(h_1')\}\bigr| > 0$.
  10. $\inf_{\|h_1\|_{H_1} = 1,\,\|h_1'\|_{H_1} = 1}\bigl|E_0\{\partial T_N(\beta_N^*, \theta^*)/\partial\beta\,(h_1)(h_1')\}\bigr| > 0$.
  11. There exist Ui, i = 1, ···, n, i.i.d. with E0U⊗2 finite, such that
     $$\theta(\hat\gamma) - \theta^* = \frac{1}{n}\sum_{i=1}^n U_i + o_p(1/\sqrt{n}),$$
     where θ* = θ(γ*).
  12. A1η(h1) and A2η(h2) are Fréchet differentiable with respect to θ along the path θ(γ), γ ∈ Γ, in a neighborhood of η0 in L2(Pη0), and $A_{2\eta}^*(s)$ is Fréchet differentiable with respect to θ along the path θ(γ), γ ∈ Γ, in a neighborhood of η0 in H2.

Before proving Theorems 1–3, we first establish a set of lemmas. These lemmas mostly show that T and TN are differentiable and that $\mathcal{F} = \bigcup_{N=0}^\infty\{T_N(\eta)(h_1)\mid \eta\in\mathcal{E}, h_1\in H_{10}\}\cup\{T(\eta)(h_1)\mid \eta\in\mathcal{E}, h_1\in H_{10}\}$ is a P0-Donsker class. Let $D_\eta = I - \Pi_\eta m_\eta$, where $\Pi_\eta = A_{2\eta}(A_{2\eta}^*A_{2\eta})^{-1}A_{2\eta}^*$. It follows that $T_N(\eta, h_1) = E_\eta\{D_\eta^NA_{1\eta}h_1\mid R, R(Y)\}$. We start with lemmas on the differentiability of TN with respect to β. Proofs of the lemmas are omitted here and can be found in the supplementary materials.

Lemma 1

  (a) Under assumptions 1–5, $g_\eta^{1/2}T_N(\eta)(h_1)$, for any N, is Fréchet differentiable with respect to β in L2(μ) in a neighborhood of η0 in ℰ, and both $g_\eta^{1/2}T_N(\eta)(h_1)$, for any N, and the derivatives are continuous at η0 in L2(μ). If we define the derivative of TN with respect to β as
     $$\frac{\partial T_N(\eta)}{\partial\beta}(h_1)(h_1') = g_\eta^{-1/2}\Bigl\{\frac{\partial\{g_\eta^{1/2}T_N(\eta)(h_1)\}}{\partial\beta}(h_1') - \frac{1}{2}g_\eta^{1/2}B_{1\eta}(h_1')T_N(\eta)(h_1)\Bigr\},$$
     where the first term on the right-hand side refers to the derivative of $g_\eta^{1/2}T_N(\eta)(h_1)$ with respect to β, then
     $$\Bigl\|\varepsilon^{-1}\{T_N(\eta + \varepsilon h_1')(h_1) - T_N(\eta)(h_1)\} - \frac{\partial T_N(\eta)}{\partial\beta}(h_1)(h_1')\Bigr\|_{L_2(P_\eta)} \to 0,$$
     as ε → 0.
  (b) Under assumptions 1–5 and 12, $g_\eta^{1/2}T_N(\eta)(h_1)$, for any N, is Fréchet differentiable with respect to θ along the path θ(γ), γ ∈ Γ, in L2(μ) in a neighborhood of η0, and the derivatives are continuous at η0 in L2(μ). The derivative of TN with respect to θ is defined similarly.

Lemma 2

Under assumptions 1–5, T(η)(h1) is Fréchet differentiable with respect to β in L2(Pη0) in a neighborhood of η0, and both T(η)(h1) and the derivative are continuous at η0 in L2(Pη0). If we define the derivative of T with respect to β as

$$\frac{\partial T(\eta)}{\partial\beta}(h_1)(h_1') = g_\eta^{-1/2}\Bigl\{\frac{\partial\{g_\eta^{1/2}T(\eta)(h_1)\}}{\partial\beta}(h_1') - \frac{1}{2}g_\eta^{1/2}B_{1\eta}(h_1')T(\eta)(h_1)\Bigr\},$$

where the first term on the right-hand side refers to the derivative of $g_\eta^{1/2}T(\eta)(h_1)$ with respect to β, then

$$\Bigl\|\varepsilon^{-1}\{T(\eta + \varepsilon h_1')(h_1) - T(\eta)(h_1)\} - \frac{\partial T(\eta)}{\partial\beta}(h_1)(h_1')\Bigr\|_{L_2(P_\eta)} \to 0,$$

as ε → 0.

Lemma 3

Under assumptions 1–3, for any p > 0, N > 0, and s, a function of Y in $L_\infty(P_\eta)$,

$$\|D_\eta^{N+p}s - D_\eta^Ns\|_{L_\infty(P_\eta)} \le KN^{c(Y)}(1 - \sigma)^N\|s\|_{L_\infty(P_\eta)},$$

where c(Y) denotes the cardinality of Y and K is a constant independent of N.

Lemma 4

Suppose that there exists a measurable set 𝒩 with P0(𝒩) = 0 such that, for all

$$t(Y)\in\bigcup_{N=0}^\infty\{D_\eta^Ns\mid s\in S, \eta\in\mathcal{E}\}\cup\Bigl\{\lim_{N\to\infty}D_\eta^Ns \text{ in } L_\infty(P_\eta)\Bigm| s\in S, \eta\in\mathcal{E}\Bigr\},$$

$$\|t(Y)\|_{L_\infty(P_\eta)} \ge b\sup_{y\in\mathcal{N}^c}|t(y)|, \tag{2}$$

where S is a set of functions of y and 0 < b ≤ 1 is a constant. Then

$$\bigcup_{N=0}^\infty\{D_\eta^Ns\mid s\in S, \eta\in\mathcal{E}\}\cup\Bigl\{\lim_{N\to\infty}D_\eta^Ns \text{ in } L_\infty(P_\eta)\Bigm| s\in S, \eta\in\mathcal{E}\Bigr\}$$

is a P0-Donsker class if

  1. S is P0-measurable and $L_\infty(P_0)$-bounded, satisfying
     $$\int_0^\infty\sqrt{\log N(S, \varepsilon, L_\infty(P_\eta))}\,d\varepsilon < \infty, \tag{3}$$
     for a fixed η ∈ ℰ.
  2. For any η1, η2 ∈ ℰ, s ∈ S, and a fixed η ∈ ℰ, there exists a C(η) < ∞ such that
     $$\|D_{\eta_1}s - D_{\eta_2}s\|_{L_\infty(P_\eta)} \le C(\eta)\|\eta_1 - \eta_2\|\,\|s\|_{L_\infty(P_\eta)},$$
     and ℰ has covering numbers under ‖·‖ satisfying
     $$\int_0^\infty\sqrt{\log N(\mathcal{E}, \varepsilon/|\log\varepsilon|, \|\cdot\|)}\,d\varepsilon < \infty. \tag{4}$$

Lemma 5

Under assumption 6, if s(y, η) is continuous with respect to argument j when μj is Lebesgue measure, then there exists a measurable set 𝒩 with Pη(𝒩) = 0 such that

$$\|t(Y, \eta)\|_{L_\infty(P_\eta)} = \sup_{y\in\mathcal{N}^c}|t(y, \eta)|,$$

for any

$$t\in\bigcup_{N=0}^\infty\{D_\eta^Ns\mid s\in S, \eta\in\mathcal{E}\}\cup\Bigl\{\lim_{N\to\infty}D_\eta^Ns\Bigm| s\in S, \eta\in\mathcal{E}\Bigr\},$$

where the limit is in the sense of the $L_\infty(P_\eta)$ norm.

Lemma 6

Under assumptions 1–7, we have

$$\|D_{\eta_1}s - D_{\eta_2}s\|_{L_\infty(P_\eta)} \le C_1(\eta)\|\eta_1 - \eta_2\|\,\|s\|_{L_\infty(P_\eta)}$$

and

$$\|E_{\eta_1}\{D_{\eta_1}s\mid R, R(y)\} - E_{\eta_2}\{D_{\eta_2}s\mid R, R(y)\}\|_{L_\infty(P_\eta)} \le C_2(\eta)\|\eta_1 - \eta_2\|\,\|s\|_{L_\infty(P_\eta)},$$

for some constants C1(η), C2(η) < ∞.

Lemma 7

Under assumptions 1–8,

$$\mathcal{F} = \bigcup_{N=0}^\infty\{T_N(\eta)(h_1)\mid \eta\in\mathcal{E}, h_1\in H_{10}\}\cup\{T(\eta)(h_1)\mid \eta\in\mathcal{E}, h_1\in H_{10}\}$$

is a P0-Donsker class.

Proof of Theorem 1

Let η̃ = (β̂, θ̂). By definition, PnT(η̃)(h1) = 0 for h1 ∈ H10. From Lemma 7, {T(η)(h1) | η ∈ ℰ, h1 ∈ H10} is a P0-Donsker class with bounded envelope function, and thus a P0-Glivenko–Cantelli class. For a convergent point of a subsequence of η̃, denoted by η0′ = (β0′, θ*), it follows from the continuity of T(η) in a neighborhood of η0 (Lemma B.1(a)) that P0T(η0′)(h1) = 0. Note that P0T(η0)(h1) = 0, where η0 = (β0, θ*). Since T(η)(h1) is differentiable with respect to β in a neighborhood of β0 in L2(Pη0) at η0 and the derivative is continuous in η at η0, by the mean value theorem,

$$P_0\{T(\beta_0', \theta^*)(h_1) - T(\beta_0, \theta^*)(h_1)\} = P_0\Bigl[\frac{\partial T}{\partial\beta}\{\eta_0 + \lambda(\eta_0' - \eta_0)\}(h_1)(\beta_0' - \beta_0)\Bigr]$$

for some 0 ≤ λ ≤ 1 and all h1 ∈ H10. From assumptions 8 and 9, we can conclude that β0′ = β0 locally. Since θ̂ converges (assumption 2) and β̂ varies in a compact set, which implies that each convergent subsequence converges to the same limit, β̂ locally converges to β0 almost surely.

Since E0{T(η̃)(h1) − T(η0)(h1)}² → 0 uniformly for h1 ∈ H10 and {T(η)(h1) | η ∈ ℰ, h1 ∈ H10} is a P0-Donsker class with bounded envelope function, it follows (van der Vaart & Wellner, 1996, Lemma 3.3.5, page 311) that $\sqrt{n}(P_n - P_0)\{T(\tilde\eta)(h_1) - T(\eta_0)(h_1)\} = o_P(1)$, which implies that $\sqrt{n}P_0T(\tilde\eta)(h_1) = -\sqrt{n}(P_n - P_0)T(\eta_0)(h_1) + o_{P_0}(1)$. Note that

$$\sqrt{n}P_0\{T(\tilde\eta)(h_1) - T(\beta_0, \hat\theta)(h_1)\} = P_0\Bigl\{\frac{\partial T}{\partial\beta}(\beta_0, \theta^*)(h_1)\sqrt{n}(\hat\beta - \beta_0)\Bigr\} + o_{P_0}(\|\sqrt{n}(\hat\beta - \beta_0)\|),$$

and that P0T(β0, θ̂)(h1) = P0T(β0, θ*)(h1) = 0. It now follows that

$$P_0\Bigl[\frac{\partial T}{\partial\beta}(\eta_0)(h_1)\{\sqrt{n}(\hat\beta - \beta_0)\}\Bigr] = -\sqrt{n}(P_n - P_0)T(\eta_0)(h_1) + o_{P_0}(1 + \|\sqrt{n}(\hat\beta - \beta_0)\|).$$

Replacing h1 in the foregoing equation by $Q_0^{-1}h_1$, it follows that

$$\langle\sqrt{n}(\hat\beta - \beta_0), h_1\rangle_{H_1} = -\sqrt{n}P_nT(\eta_0)\{Q_0^{-1}(h_1)\} + o_{P_0}(1 + \|\sqrt{n}(\hat\beta - \beta_0)\|),$$

which implies that $\langle\sqrt{n}(\hat\beta - \beta_0), h_1\rangle_{H_1} = O_{P_0}(1)$ and that $\langle\sqrt{n}(\hat\beta - \beta_0), h_1\rangle_{H_1} \rightsquigarrow N(0, V_0(h_1))$ uniformly on h1 ∈ H10, where $V_0(h_1) = E_0[T(\eta_0)\{Q_0^{-1}(h_1)\}]^2$.

To prove the consistency of the variance estimate, let $\tilde\eta_h = (\hat\beta + h/\sqrt{n}, \hat\theta)$ for a fixed h ∈ H1. That {T(η̃h)(h1) − T(η̃)(h1) | h1 ∈ H10} is a P0-Donsker class implies that $\sqrt{n}(P_n - P_0)\{T(\tilde\eta_h) - T(\tilde\eta)\}(h_1) = o_{P_0}(1)$. It follows that, apart from an oP0(1) term,

$$\sqrt{n}P_n\{T(\tilde\eta_h) - T(\tilde\eta)\}(h_1) = \sqrt{n}P_0\{T(\tilde\eta_h) - T(\tilde\eta)\}(h_1) = P_0\Bigl\{\frac{\partial T(\eta_0)}{\partial\beta}(h)(h_1)\Bigr\} = \langle Q_0h, h_1\rangle_{H_1}.$$

Since {T²(η̃)(h1) | h1 ∈ H10} is a Glivenko–Cantelli class, Pn{T(η̃)(h1)}² = P0{T(η0)(h1)}² + oP0(1) uniformly in h1 ∈ H10. It can now be seen that the asymptotic variance of $\langle\sqrt{n}(\hat\beta - \beta_0), h_1\rangle_{H_1}$ can be consistently estimated by Pn{T(η̃)(h)}², where h ∈ H1 solves the equation $\langle h_1, h_1'\rangle_{H_1} = \sqrt{n}P_n\{T(\tilde\eta_h)(h_1') - T(\tilde\eta)(h_1')\}$. Note that, from the previous derivation, the h thus defined converges in probability (P0) to $Q_0^{-1}(h_1)$ uniformly over h1 ∈ H10.

When both the missing data mechanism model and the nuisance model for the full data are correctly specified, θ* = θ0; that is, Pη0 = P0. It follows from $E_{(\beta_0 + h/\sqrt{n}, \theta_0)}T(\beta_0 + h/\sqrt{n}, \theta_0)(h_1) = 0$ for any h, h1 ∈ H10 that

$$E_0\Bigl\{\frac{\partial T}{\partial\beta}(\eta_0)(h)(h_1)\Bigr\} = -E_0\{T(\beta_0, \theta_0)(h_1)A_1(\beta_0, \theta_0)(h)\} = -E_0\{T(\beta_0, \theta_0)(h_1)T(\beta_0, \theta_0)(h)\}.$$

This implies that β̂ is asymptotically efficient.

Proof of Theorem 2

For any convergent point of β̂N, denoted by βN*, let η̂N = (β̂N, θ̂) and ηN = (βN*, θ̂). By definition, PnTN(η̂N)(h1) = 0. Note that $\mathcal{F}$ (see Lemma 7 for its definition) is a Pη0-Donsker class with bounded envelope function, and thus a Pη0-Glivenko–Cantelli class. Since dPη0/dP0 is bounded away from 0 and ∞, $\mathcal{F}$ is a P0-Glivenko–Cantelli class. Since P0{TN(η̂N)(h1) − TN(ηN)(h1)}² → 0 uniformly over ηN and h1 ∈ H10 for any fixed N as n → ∞, it follows that (Pn − P0){TN(η̂N)(h1) − TN(ηN)(h1)} = oP0(1). Hence, PnTN(ηN)(h1) = oP0(1), which implies that P0TN(ηN)(h1) = 0 for all h1 ∈ H10. Note that

$$P_0\{T_N(\beta_N^*, \hat\theta)(h_1) - T_N(\beta_0, \hat\theta)(h_1)\} = P_0\Bigl\{\frac{\partial T_N}{\partial\beta}(\beta_0, \theta^*)(h_1)(\beta_N^* - \beta_0)\Bigr\} + o_{P_0}(\|\beta_N^* - \beta_0\|_{H_1}).$$

It follows from the definition of Q0N that, for all h1 ∈ H10,

$$\langle\beta_N^* - \beta_0, h_1\rangle_{H_1} = P_0\{T_N(\beta_N^*, \hat\theta)(Q_{0N}^{-1}h_1) - T_N(\beta_0, \hat\theta)(Q_{0N}^{-1}h_1)\} + o_{P_0}(\|\beta_N^* - \beta_0\|_{H_1}).$$

Next, note that $\sqrt{n}(P_n - P_0)\{T_N(\hat\beta_N, \hat\theta)(h_1) - T_N(\beta_N^*, \hat\theta)(h_1)\} = o_{P_0}(1)$ uniformly over h1 ∈ H10. It follows that, for all h1 ∈ H10,

$$\sqrt{n}P_nT_N(\beta_N^*, \hat\theta)(h_1) = \sqrt{n}P_0\{T_N(\beta_N^*, \hat\theta)(h_1) - T_N(\hat\beta_N, \hat\theta)(h_1)\} + o_{P_0}(1) = -P_0\Bigl\{\frac{\partial T_N}{\partial\beta}(\beta_N^*, \theta^*)\sqrt{n}(\hat\beta_N - \beta_N^*)(h_1)\Bigr\} + o_{P_0}(\sqrt{n}\|\hat\beta_N - \beta_N^*\|_{H_1} + 1).$$

From assumption 10 and the definition of QN, it follows that

$$\langle\sqrt{n}(\hat\beta_N - \beta_N^*), h_1\rangle_{H_1} = \sqrt{n}P_0\{T_N(\beta_N^*, \hat\theta)(Q_N^{-1}h_1) - T_N(\beta_N^*, \theta^*)(Q_N^{-1}h_1)\} + \sqrt{n}(P_n - P_0)T_N(\beta_N^*, \theta^*)(Q_N^{-1}h_1) + o_{P_0}(\|\sqrt{n}(\hat\beta_N - \beta_N^*)\|_{H_1} + 1).$$

Note that, apart from an oP0(1) term,

$$\sqrt{n}P_0\{T_N(\beta_N^*, \hat\theta)(h_1) - T_N(\beta_N^*, \theta^*)(h_1)\} = \sum_{i=1}^nP_0\Bigl\{T_N\Bigl(\beta_N^*, \theta^* + \frac{U_i}{\sqrt{n}}\Bigr)(h_1) - T_N(\beta_N^*, \theta^*)(h_1)\Bigr\},$$

because TN is Fréchet differentiable at (βN*, θ*) with respect to θ along the path θ(γ) in L2(P0) and $\hat\theta - \theta^* = \frac{1}{n}\sum_{i=1}^nU_i + o_{P_0}(1/\sqrt{n})$, where E0(U) = 0. It follows that, uniformly over h1 ∈ H10, $\langle\sqrt{n}(\hat\beta_N - \beta_N^*), h_1\rangle_{H_1} \rightsquigarrow N(0, V_N(h_1))$, where

$$V_N(h_1) = E_0\Bigl[T_N(\beta_N^*, \theta^*)(Q_N^{-1}h_1) + P_0\Bigl\{\frac{\partial T_N(\beta_N^*, \theta^*)}{\partial\theta}(u)(Q_N^{-1}h_1)\Bigr\}\Bigr|_{u=U}\Bigr]^2.$$

The consistency of the variance estimate can be shown by arguments virtually identical to those given for the previous theorem.

Proof of Theorem 3

Let β̂Nn denote the solution to the equation PnTNn(β̂Nn, θ̂)(h1) = 0 for all h1 ∈ H1. From the Donsker class result and the uniform convergence of TN to T in L2(P0), it follows that $\sqrt{n}(P_n - P_0)\{T_{N_n}(\hat\beta_{N_n}, \hat\theta)(h_1) - T(\hat\beta_{N_n}, \hat\theta)(h_1)\} = o_{P_0}(1)$ uniformly over h1 ∈ H10. The equations imply that

$$\sqrt{n}P_n\{T(\hat\beta_{N_n}, \hat\theta)(h_1)\} = -\sqrt{n}P_0\{T_{N_n}(\hat\beta_{N_n}, \hat\theta)(h_1) - T(\hat\beta_{N_n}, \hat\theta)(h_1)\} + o_{P_0}(1)$$

uniformly for h1 ∈ H10. Since $|\sqrt{n}P_0\{T_{N_n}(\hat\beta_{N_n}, \hat\theta)(h_1) - T(\hat\beta_{N_n}, \hat\theta)(h_1)\}| \le K\sqrt{n}(1 - \sigma)^{N_n}$, and log(n)/Nn → 0 implies $K\sqrt{n}(1 - \sigma)^{N_n} \to 0$ as n → ∞, it follows that $\sqrt{n}P_n\{T(\hat\beta_{N_n}, \hat\theta)(h_1)\} = o_{P_0}(1)$ uniformly over h1 ∈ H10. From the definition of β̂, it follows that $\sqrt{n}P_n\{T(\hat\beta_{N_n}, \hat\theta)(h_1) - T(\hat\beta, \hat\theta)(h_1)\} = o_{P_0}(1)$ uniformly over h1 ∈ H10. From the fact that $\mathcal{F}$ is a P0-Donsker class and that T is continuous in L2(P0) at η0, it follows that P0{T(β̂Nn, θ̂)(h1) − T(β̂, θ̂)(h1)}² → 0 uniformly over h1 ∈ H10, and thus that $\sqrt{n}(P_n - P_0)\{T(\hat\beta_{N_n}, \hat\theta)(h_1) - T(\hat\beta, \hat\theta)(h_1)\} = o_{P_0}(1)$ uniformly over h1 ∈ H10. It follows that $\sqrt{n}P_0\{T(\hat\beta_{N_n}, \hat\theta)(h_1) - T(\hat\beta, \hat\theta)(h_1)\} = o_{P_0}(1)$ uniformly over h1 ∈ H10. It now follows from Lemma 2 and assumption 9 that $\langle\sqrt{n}(\hat\beta_{N_n} - \hat\beta), h_1\rangle_{H_1} = o_{P_0}(1)$, uniformly over h1 ∈ H10.

For the variance estimate, let $\hat\eta_{N_n,h} = (\hat\beta_{N_n} + h/\sqrt{n}, \hat\theta)$ for a fixed h ∈ H1, and let $\hat\eta_{N_n} = (\hat\beta_{N_n}, \hat\theta)$. Again, from the Donsker class result and the uniform convergence of TN to T in L2(P0), it follows that $\sqrt{n}(P_n - P_0)\{T_{N_n}(\hat\eta_{N_n,h})(h_1) - T_{N_n}(\hat\eta_{N_n})(h_1)\} = o_{P_0}(1)$, uniformly over h1 ∈ H10. It follows from the foregoing equation that $\sqrt{n}P_n\{T_{N_n}(\hat\eta_{N_n,h})(h_1) - T_{N_n}(\hat\eta_{N_n})(h_1)\}$ can be rewritten as $\sqrt{n}P_0\{T_{N_n}(\hat\eta_{N_n,h})(h_1) - T(\hat\eta_{N_n,h})(h_1)\} - \sqrt{n}P_0\{T_{N_n}(\hat\eta_{N_n})(h_1) - T(\hat\eta_{N_n})(h_1)\}$ plus $\sqrt{n}P_0\{T(\hat\eta_{N_n,h})(h_1) - T(\hat\eta_{N_n})(h_1)\} + o_{P_0}(1)$. Since

$$|\sqrt{n}P_0\{T_{N_n}(\hat\eta_{N_n,h})(h_1) - T(\hat\eta_{N_n,h})(h_1)\}| \le \sqrt{n}K(1 - \sigma)^{N_n} \to 0$$

and $|\sqrt{n}P_0\{T_{N_n}(\hat\eta_{N_n})(h_1) - T(\hat\eta_{N_n})(h_1)\}| \le \sqrt{n}K(1 - \sigma)^{N_n} \to 0$ as n → ∞ when log(n)/Nn → 0, it follows that

$$\sqrt{n}P_n\{T_{N_n}(\hat\eta_{N_n,h})(h_1) - T_{N_n}(\hat\eta_{N_n})(h_1)\} = \sqrt{n}P_0\{T(\hat\eta_{N_n,h})(h_1) - T(\hat\eta_{N_n})(h_1)\} + o_{P_0}(1).$$

Note that $\{T_{N_n}^2(\hat\eta_{N_n})(h_1)\mid n, h_1\in H_{10}\}$ is a Glivenko–Cantelli class. It follows that

$$P_n\{T_{N_n}(\hat\eta_{N_n})(h_1)\}^2 = P_0\{T(\eta_0)(h_1)\}^2 + o_{P_0}(1),$$

uniformly in h1 ∈ H10. These two results, combined with the proof of the consistency of the variance estimate in Theorem 1, imply the consistency of the variance estimate.

References

  • Begun JM, Hall WJ, Huang WM, Wellner J. Information and asymptotic efficiency in parametric-nonparametric models. Ann Statist. 1983;11:432–452.
  • Bickel P, Klaassen C, Ritov Y, Wellner J. Efficient and Adaptive Estimation for Semiparametric Models. Baltimore: Johns Hopkins University Press; 1993.
  • Chen HY. Nonparametric and semiparametric models for missing covariates in parametric regression. J Amer Statist Assoc. 2004;99:1176–1189.
  • Huang Y. Calibration regression of censored lifetime medical cost. J Amer Statist Assoc. 2002;97:318–327.
  • Lipsitz SR, Ibrahim JG, Zhao LP. A weighted estimating equation for missing covariate data with properties similar to maximum likelihood. J Amer Statist Assoc. 1999;94:1147–1160.
  • Little RJA, Rubin DB. Statistical Analysis with Missing Data. 2. New York: John Wiley; 2002.
  • Nan B, Emond MJ, Wellner JA. Information bounds for Cox regression models with missing data. Ann Statist. 2004;32:723–735.
  • Robins JM, Hsieh FS, Newey W. Semiparametric efficient estimation of a conditional density with missing or mismeasured covariates. J Roy Statist Soc, Ser B. 1995;57:409–424.
  • Robins JM, Rotnitzky A. Comments on “Inference for semiparametric models: Some questions and an answer” by Bickel, P. J. and Kwon, J. Statist Sinica. 2001;11:920–936.
  • Robins JM, Rotnitzky A, van der Laan MJ. Discussion of “On profile likelihood” by Murphy, S.A. and van der Vaart, A. W. J Amer Statist Assoc. 1999;94:477–482.
  • Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. J Amer Statist Assoc. 1994;89:846–866.
  • Robins JM, Rotnitzky A, Van der Laan M. Comment on “On profile likelihood” by Murphy and Van der Vaart. J Amer Statist Assoc. 2000;95:477–485.
  • Robins JM, Wang N. Discussion on the papers by Forster and Smith and Clayton et al. J Roy Statist Soc, Ser B. 1998;60:91–93.
  • Rotnitzky A, Robins JM. Analysis of semiparametric models with nonignorable nonresponses. Stat Med. 1997;16:81–102.
  • Rubin DB. Inference and missing data. Biometrika. 1976;63:581–92.
  • Sasieni P. Information bounds for the conditional hazard ratio in a nested family of regression models. J Roy Statist Soc, Ser B. 1992;54:617–635.
  • Scharfstein DO, Rotnitzky A, Robins JM. Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion) J Amer Statist Assoc. 1999;94:1096–1120.
  • Van der Laan MJ, Robins JM. Unified methods for censored longitudinal data and causality. New York: Springer; 2003.
  • Van der Vaart AW, Wellner JA. Weak Convergence and Empirical Processes With Application to Statistics. New York: Springer; 1996.
  • Yu M, Nan B. Semiparametric regression models with missing data: the mathematical review and a new application. Statist Sinica. 2006;16:1193–1212.