Int J Biostat. 2010 January 6; 6(2): Article 9.
Published online 2010 March 3. doi:  10.2202/1557-4679.1242
PMCID: PMC2854089

Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part II: Proofs of Results*


In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

Keywords: dynamic treatment regime, double-robust, inverse probability weighted, marginal structural model, optimal treatment regime, causality

1. Introduction

In this companion article to “Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes. Part I: Main Content” (Orellana, Rotnitzky and Robins, 2010) we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

The notation, definitions and acronyms are the same as in the companion paper. Throughout, we refer to the companion article as ORR-I.
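The proofs below repeatedly manipulate the inverse probability weights of ORR-I, which are not restated in this companion piece. For the reader's convenience we sketch their form here; this is our reconstruction from the way the weights are used in the proofs, and the precise definition is the one in ORR-I:

```latex
% Inverse-probability-of-treatment weights for regime g (a reconstruction;
% see ORR-I for the authoritative definition):
\underline{\omega}_{k,K}(\bar{o}_K,\bar{a}_K)
  \equiv \prod_{j=k+1}^{K}
    \frac{I_{\{g_j(\bar{o}_j)\}}(a_j)}{\lambda_j(a_j \mid \bar{o}_j, \bar{a}_{j-1})},
\qquad
\omega_{k}(\bar{o}_k,\bar{a}_k)
  \equiv \prod_{j=0}^{k}
    \frac{I_{\{g_j(\bar{o}_j)\}}(a_j)}{\lambda_j(a_j \mid \bar{o}_j, \bar{a}_{j-1})},
```

so that, in particular, $\omega_K = \omega_k \cdot \underline{\omega}_{k,K}$, the factorization used in Section 2.4.2 below.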

2. Proof of Claims in ORR-I

2.1. Proof of Lemma 1

First note that the consistency assumption C implies that the event


is the same as the event


So, with the definitions

$$\underline{V}_{k,k+l} \equiv (V_{k+1}, \ldots, V_{k+l}) \text{ for } l > 0, \quad \text{and} \quad \underline{V}_{k,k} \equiv \mathrm{nil},$$

we obtain


Next, note that the fact that $\underline{\omega}_{k-1,K}\big((\bar{O}^g_k, \underline{O}_{k,K}), (\bar{g}_{k-1}(\bar{O}^g_{k-1}), \underline{A}_{k-1,K})\big) = 0$ unless $A_k = g_k(\bar{O}^g_k)$, $A_{k+1} = g_{k+1}(\bar{O}^g_k, O_{k+1}), \ldots, A_K = g_K(\bar{O}^g_k, \underline{O}_{k,K})$ implies that


Then, it follows from the second to last displayed equality that



So, part 1 of the Lemma is proved if we show that


Define for any k = 0, ..., K,


To prove equality (1) first note that,

\begin{align*}
&E\big[\underline{\omega}_{k-1,K}\big(\bar{O}^g_K,(\bar{g}_{k-1}(\bar{O}^g_{k-1}),\underline{A}_{k-1,K})\big) \,\big|\, \bar{O}^g_{K+1}, \bar{A}_{k-1}=\bar{g}_{k-1}(\bar{O}^g_{k-1}), \underline{A}_{k-1,K-1}\big]\\
&\quad= \underline{\omega}_{k-1,K-1}\big(\bar{O}^g_{K-1},(\bar{g}_{k-1}(\bar{O}^g_{k-1}),\underline{A}_{k-1,K-1})\big)\\
&\qquad\times E\big[\underline{\omega}_{K-1,K}\big(\bar{O}^g_K,(\bar{g}_{k-1}(\bar{O}^g_{k-1}),\underline{A}_{k-1,K})\big) \,\big|\, \bar{O}^g_{K+1}, \bar{A}_{k-1}=\bar{g}_{k-1}(\bar{O}^g_{k-1}), \underline{A}_{k-1,K-1}\big]\\
&\quad= \underline{\omega}_{k-1,K-1}\big(\bar{O}^g_{K-1},(\bar{g}_{k-1}(\bar{O}^g_{k-1}),\underline{A}_{k-1,K-1})\big)\\
&\qquad\times E\big[\underline{\omega}_{K-1,K}(\bar{O}^g_K,\bar{A}_K) \,\big|\, \bar{O}^g_{K+1}, \bar{A}_{K-1}=\bar{g}_{K-1}(\bar{O}^g_{K-1})\big]\\
&\quad= \underline{\omega}_{k-1,K-1}\big(\bar{O}^g_{K-1},(\bar{g}_{k-1}(\bar{O}^g_{k-1}),\underline{A}_{k-1,K-1})\big)\\
&\qquad\times E\left[\frac{I_{\{g_K(\bar{O}^g_K)\}}(A_K)}{\lambda_K\big(g_K(\bar{O}^g_K) \mid \bar{O}^g_K, \bar{g}_{K-1}(\bar{O}^g_{K-1})\big)} \,\Bigg|\, \bar{O}^g_{K+1}, \bar{A}_{K-1}=\bar{g}_{K-1}(\bar{O}^g_{K-1})\right]\\
&\quad= \underline{\omega}_{k-1,K-1}\big(\bar{O}^g_{K-1},(\bar{g}_{k-1}(\bar{O}^g_{k-1}),\underline{A}_{k-1,K-1})\big)\\
&\qquad\times \frac{E\big[I_{\{g_K(\bar{O}^g_K)\}}(A_K) \,\big|\, \bar{O}^g_{K+1}, \bar{A}_{K-1}=\bar{g}_{K-1}(\bar{O}^g_{K-1})\big]}{\lambda_K\big(g_K(\bar{O}^g_K) \mid \bar{O}^g_K, \bar{g}_{K-1}(\bar{O}^g_{K-1})\big)}\\
&\quad= \underline{\omega}_{k-1,K-1}\big(\bar{O}^g_{K-1},(\bar{g}_{k-1}(\bar{O}^g_{k-1}),\underline{A}_{k-1,K-1})\big)\\
&\qquad\times \frac{P\big[A_K=g_K(\bar{O}^g_K) \,\big|\, \bar{O}^g_{K+1}, \bar{A}_{K-1}=\bar{g}_{K-1}(\bar{O}^g_{K-1})\big]}{\lambda_K\big(g_K(\bar{O}^g_K) \mid \bar{O}^g_K, \bar{g}_{K-1}(\bar{O}^g_{K-1})\big)}
\end{align*}


where the second to last equality follows because given $\bar{O}^g_K$ and $\bar{A}_{K-1} = \bar{g}_{K-1}(\bar{O}^g_{K-1})$, $O^g_{K+1}$ is a fixed, i.e. non-random, function of $O$ and consequently, by the sequential randomization assumption, $O^g_{K+1}$ is conditionally independent of $A_K$ given $\bar{O}^g_K$ and $\bar{A}_{K-1} = \bar{g}_{K-1}(\bar{O}^g_{K-1})$. The last equality follows by the definition of $\lambda_K(\cdot \mid \cdot, \cdot)$.

We thus arrive at


This proves the result for the case k = K. If k < K – 1, we analyze the conditional expectation of the last equality in a similar fashion. Specifically, following the same steps as in the long sequence of equalities in the second to last display we arrive at


where the last equality follows once again from the sequential randomization assumption. This is so because given $\bar{O}^g_{K-1}$ and $\bar{A}_{K-2} = \bar{g}_{K-2}(\bar{O}^g_{K-2})$, $\bar{O}^g_K$ and $\bar{O}^g_{K+1}$ are fixed, i.e. deterministic, functions of $O$, and the SR assumption then ensures that $\bar{O}^g_K$ and $\bar{O}^g_{K+1}$ are conditionally independent of $A_{K-1}$ given $\bar{O}^g_{K-1}$ and $\bar{A}_{K-2} = \bar{g}_{K-2}(\bar{O}^g_{K-2})$.

Equality (1) is thus shown by continuing in this fashion recursively for $K-2, K-3, \ldots, K-l$, until $l$ is such that $K-l = k-1$.

To show Part 2 of the Lemma, note that specializing part 1 to the case k = 0, we obtain


Thus, taking expectations on both sides of the equality in the last display we obtain


This shows part 2 because B is an arbitrary Borel set.

2.2. Proof of the Assertions in Section 3.2, ORR-I

2.2.1. Proof of Item (a)

Lemma 1, part 2 implies that the density $p^g_{\mathrm{marg}}$ factors as


In particular, the event $\{\bar{A}^g_{k-1} = \bar{g}_{k-1}(\bar{O}^g_{k-1})\}$ has probability 1. Consequently,

$$p^g_{\mathrm{marg}}(o, a \mid \bar{o}_k) = \prod_{j=0}^{K} I_{\{g_j(\bar{o}_j)\}}(a_j)\; p^g_{\mathrm{marg}}\big(o, a \mid \bar{o}_k, \bar{a}_k = \bar{g}_k(\bar{o}_k)\big) = \prod_{j=0}^{K} I_{\{g_j(\bar{o}_j)\}}(a_j) \prod_{j=k+1}^{K+1} p^{\mathrm{marg}}\big(o_j \mid \bar{o}_{j-1}, \bar{a}_{j-1} = \bar{g}_{j-1}(\bar{o}_{j-1})\big).$$


\begin{align*}
E\{u(O^g,A^g) \mid \bar{O}^g_k = \bar{o}_k\}
&= \sum_{a_l \in \mathcal{A}_l,\, l=0,\ldots,K} \int u(o,a) \prod_{j=0}^{K} I_{\{g_j(\bar{o}_j)\}}(a_j) \prod_{j=k+1}^{K+1} dP^{\mathrm{marg}}_{O_j \mid \bar{O}_{j-1},\bar{A}_{j-1}}\big(o_j \mid \bar{o}_{j-1}, \bar{g}_{j-1}(\bar{o}_{j-1})\big)\\
&= \int u(o,a)\Big[\sum_{a_l \in \mathcal{A}_l,\, l=0,\ldots,K} \prod_{j=0}^{K} I_{\{g_j(\bar{o}_j)\}}(a_j)\Big] \prod_{j=k+1}^{K+1} dP^{\mathrm{marg}}_{O_j \mid \bar{O}_{j-1},\bar{A}_{j-1}}\big(o_j \mid \bar{o}_{j-1}, \bar{g}_{j-1}(\bar{o}_{j-1})\big)\\
&= \int u(o,a) \prod_{j=k+1}^{K+1} dP^{\mathrm{marg}}_{O_j \mid \bar{O}_{j-1},\bar{A}_{j-1}}\big(o_j \mid \bar{o}_{j-1}, \bar{g}_{j-1}(\bar{o}_{j-1})\big)\\
&= \int \prod_{j=k+1}^{K} dP^{\mathrm{marg}}_{O_j \mid \bar{O}_{j-1},\bar{A}_{j-1}}\big(o_j \mid \bar{o}_{j-1}, \bar{g}_{j-1}(\bar{o}_{j-1})\big) \times \Big[\int u(o,a)\, dP^{\mathrm{marg}}_{O_{K+1} \mid \bar{O}_K,\bar{A}_K}\big(o_{K+1} \mid \bar{o}_K, \bar{a}_K = \bar{g}_K(\bar{o}_K)\big)\Big]\\
&= \int \Big[\prod_{j=k+1}^{K} dP^{\mathrm{marg}}_{O_j \mid \bar{O}_{j-1},\bar{A}_{j-1}}\big(o_j \mid \bar{o}_{j-1}, \bar{g}_{j-1}(\bar{o}_{j-1})\big)\Big]\, \varphi_{K+1}(\bar{o}_K)\\
&= \int \Big[\prod_{j=k+1}^{K-1} dP^{\mathrm{marg}}_{O_j \mid \bar{O}_{j-1},\bar{A}_{j-1}}\big(o_j \mid \bar{o}_{j-1}, \bar{g}_{j-1}(\bar{o}_{j-1})\big)\Big] \times \Big[\int \varphi_{K+1}(\bar{o}_K)\, dP^{\mathrm{marg}}_{O_K \mid \bar{O}_{K-1},\bar{A}_{K-1}}\big(o_K \mid \bar{o}_{K-1}, \bar{g}_{K-1}(\bar{o}_{K-1})\big)\Big]\\
&= \int \Big[\prod_{j=k+1}^{K-1} dP^{\mathrm{marg}}_{O_j \mid \bar{O}_{j-1},\bar{A}_{j-1}}\big(o_j \mid \bar{o}_{j-1}, \bar{g}_{j-1}(\bar{o}_{j-1})\big)\Big]\, \varphi_K(\bar{o}_{K-1}) = \cdots = \varphi_{k+1}(\bar{o}_k),
\end{align*}

where in the third equality the sum over the indicator products fixes $a_j = g_j(\bar{o}_j)$, $j = 0, \ldots, K$.

2.2.2. Proof of Item (b)

Lemma 1, part 1 implies that


The left hand side of this equality is equal to

$$\sum_{a_{k'} \in \mathcal{A}_{k'},\, k'=k,\ldots,K} \int u(o,a)\, \underline{\omega}_{k-1,K}(\bar{o}_K, \bar{a}_K) \prod_{j=k}^{K} \lambda_j(a_j \mid \bar{o}_j, \bar{a}_{j-1}) \prod_{j=k+1}^{K+1} dP^{\mathrm{marg}}(o_j \mid \bar{o}_{j-1}, \bar{a}_{j-1})$$

and this coincides with the right hand side of (2), which, as we have just argued, is equal to $\varphi_{k+1}(\bar{o}_k)$.

2.3. Proof of Lemma 2 in ORR-I

Let $X$ be the identity random element on $(\mathcal{X}, \mathcal{A})$ and let $E_{P^{\mathrm{marg}} \times P_X}(\cdot)$ stand for the expectation operator computed under the product law $P^{\mathrm{marg}} \times P_X$ for the random vector $(O, A, X)$. Then the restriction stated in 2) is equivalent to

$$E_{P^{\mathrm{marg}} \times P_X}\big[b(X,Z)\,\omega_K(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\mathrm{par}}(X,Z;\beta^*)\}\big] = 0 \quad \text{for all } b$$

and the restriction stated in 3) is equivalent to

$$E_{P^{\mathrm{marg}} \times P_X}\big[\{b(X,Z) - E_{P^{\mathrm{marg}} \times P_X}[b(X,Z) \mid Z]\}\,\omega_K(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\mathrm{sem}}(X,Z;\beta^*)\}\big] = 0 \quad \text{for all } b.$$

To show 2), let $d(O,A,X) \equiv \omega_K(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\mathrm{par}}(X,Z;\beta^*)\}$.

(ORR-I, (14)) $\Rightarrow$ (3):

\begin{align*}
E_{P^{\mathrm{marg}} \times P_X}[b(X,Z)\,d(O,A,X)] &= E_{P^{\mathrm{marg}} \times P_X}\big[b(X,Z)\,E_{P^{\mathrm{marg}} \times P_X}[d(O,A,X) \mid X, Z]\big]\\
&= 0
\end{align*}

where the last equality follows because $E_{P^{\mathrm{marg}} \times P_X}[d(O,A,X) \mid X = x, Z] = E_{P^{\mathrm{marg}}}[d(O,A,x) \mid Z]$ by the independence of $(O,A)$ and $X$ under the law $P^{\mathrm{marg}} \times P_X$, and because, by assumption, $E_{P^{\mathrm{marg}}}[d(O,A,x) \mid Z] = 0$ $\mu$-a.e.$(x)$ and hence $P_X$-a.e.$(x)$, since $P_X$ and $\mu$ are mutually absolutely continuous.

(3) $\Rightarrow$ (ORR-I, (14)): Define $b^*(X,Z) = E_{P^{\mathrm{marg}} \times P_X}[d(O,A,X) \mid X, Z]$. Then,


consequently, $E_{P^{\mathrm{marg}} \times P_X}[d(O,A,X) \mid X, Z] = 0$ with $P^{\mathrm{marg}} \times P_X$ probability 1, which is equivalent to (ORR-I, (14)) because $P_X$ and $\mu$ are mutually absolutely continuous.

To show 3), redefine $d(O,A,X)$ as $\omega_K(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\mathrm{sem}}(X,Z;\beta^*)\}$.

(ORR-I, (15)) $\Rightarrow$ (4):


where the third equality follows because $E_{P^{\mathrm{marg}} \times P_X}\{d(O,A,X) \mid X = x, Z\} = E_{P^{\mathrm{marg}}}\{d(O,A,x) \mid Z\}$ and $E_{P^{\mathrm{marg}}}\{d(O,A,x) \mid Z\} = q(Z)$ $\mu$-a.e.$(x)$, and hence $P_X$-a.e.$(x)$ by absolute continuity.

(4) $\Rightarrow$ (ORR-I, (15)): Define $b^*(X,Z) = E_{P^{\mathrm{marg}} \times P_X}[d(O,A,X) \mid X, Z]$. Then,

\begin{align*}
0 &= E_{P^{\mathrm{marg}} \times P_X}\big[\{b^*(X,Z) - E_{P^{\mathrm{marg}} \times P_X}[b^*(X,Z) \mid Z]\}\, d(O,A,X)\big]\\
&= E_{P^{\mathrm{marg}} \times P_X}\big[\{b^*(X,Z) - E_{P^{\mathrm{marg}} \times P_X}[b^*(X,Z) \mid Z]\}\, b^*(X,Z)\big]\\
&= E_{P^{\mathrm{marg}} \times P_X}\big[\{b^*(X,Z) - E_{P^{\mathrm{marg}} \times P_X}[b^*(X,Z) \mid Z]\}^2\big].
\end{align*}

Consequently, $b^*(X,Z) = E_{P^{\mathrm{marg}} \times P_X}[b^*(X,Z) \mid Z] \equiv q(Z)$ $P_X$-a.e.$(x)$ and hence $\mu$-a.e.$(x)$ by absolute continuity. The result follows because $b^*(x,Z) = E_{P^{\mathrm{marg}} \times P_X}[d(O,A,X) \mid X = x, Z] = E_{P^{\mathrm{marg}}}[d(O,A,x) \mid Z]$.

2.4. Derivation of Some Formulas in Section 5.3, ORR-I

2.4.1. Derivation of Formula (26) in ORR-I

Any element


of the set $\Lambda$ is the sum of $K + 1$ uncorrelated terms because for any $l, j$ such that $0 \le l < l + j \le K$,

\begin{align*}
&E\big[\{d_{l+j}(\bar{O}_{l+j},\bar{A}_{l+j}) - E[d_{l+j}(\bar{O}_{l+j},\bar{A}_{l+j}) \mid \bar{O}_{l+j}, \bar{A}_{l+j-1}]\}\\
&\qquad\times \{d_l(\bar{O}_l,\bar{A}_l) - E[d_l(\bar{O}_l,\bar{A}_l) \mid \bar{O}_l, \bar{A}_{l-1}]\}\big]\\
&= E\big[E\big[\{d_{l+j}(\bar{O}_{l+j},\bar{A}_{l+j}) - E[d_{l+j}(\bar{O}_{l+j},\bar{A}_{l+j}) \mid \bar{O}_{l+j}, \bar{A}_{l+j-1}]\} \,\big|\, \bar{O}_{l+j}, \bar{A}_{l+j-1}\big]\\
&\qquad\times \{d_l(\bar{O}_l,\bar{A}_l) - E[d_l(\bar{O}_l,\bar{A}_l) \mid \bar{O}_l, \bar{A}_{l-1}]\}\big]\\
&= E\big[0 \times \{d_l(\bar{O}_l,\bar{A}_l) - E[d_l(\bar{O}_l,\bar{A}_l) \mid \bar{O}_l, \bar{A}_{l-1}]\}\big] = 0.
\end{align*}

Thus, $\Lambda$ is equal to $\Lambda_0 \oplus \Lambda_1 \oplus \cdots \oplus \Lambda_K$ where

$$\Lambda_k \equiv \big\{d_k(\bar{O}_k,\bar{A}_k) - E[d_k(\bar{O}_k,\bar{A}_k) \mid \bar{O}_k, \bar{A}_{k-1}] : d_k \text{ an arbitrary scalar function}\big\}$$

and $\oplus$ stands for the direct sum operator. Then,


and it can be easily checked that $\Pi[Q \mid \Lambda_k] = E(Q \mid \bar{O}_k, \bar{A}_k) - E[Q \mid \bar{O}_k, \bar{A}_{k-1}]$.

2.4.2. Derivation of Formula (27) in ORR-I

Applying formula (26) in ORR-I we obtain


So, for k = 0, ..., K,

$$d^b_{\cdot,\mathrm{opt},k}(\bar{O}_k,\bar{A}_k) = E[\dot{S}_{\cdot}(\beta, \gamma^*, b) \mid \bar{O}_k, \bar{A}_k].$$


\begin{align*}
E[\dot{S}_{\cdot}(\beta,\gamma^*,b) \mid \bar{O}_k, \bar{A}_k]
&= \int_{\mathcal{X}_{\mathrm{pos}}} b_{\cdot}(x,Z)\, E\big[\omega^x_K(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\cdot}(x,Z;\beta)\} \mid \bar{O}_k, \bar{A}_k\big]\, dP_X(x)\\
&= \int_{\mathcal{X}_{\mathrm{pos}}} b_{\cdot}(x,Z)\, \omega^x_k(\bar{O}_k,\bar{A}_k)\\
&\qquad\times E\big[\underline{\omega}^x_{k,K}(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\cdot}(x,Z;\beta)\} \mid \bar{O}_k, \bar{A}_k\big]\, dP_X(x)\\
&= \int_{\mathcal{X}_{\mathrm{pos}}} b_{\cdot}(x,Z)\, \omega^x_k(\bar{O}_k,\bar{A}_k)\\
&\qquad\times E\big[\underline{\omega}^x_{k,K}(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\cdot}(x,Z;\beta)\} \mid \bar{O}_k, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)\big]\, dP_X(x).
\end{align*}

So formula (27) of ORR-I is proved if we show that

$$E\big[\underline{\omega}^x_{k,K}(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\cdot}(x,Z;\beta)\} \,\big|\, \bar{O}_k, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)\big] = \varphi^x_{k+1}(\bar{O}_k) - h_{\cdot}(x,Z;\beta). \tag{5}$$

This follows immediately from the preceding proof of Result (b) of Section 3.2. Specifically, it was shown there that

$$E\big[\underline{\omega}^x_{k,K}(\bar{O}_K,\bar{A}_K)\, u(O,A) \,\big|\, \bar{O}_{k+1}, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)\big] = \varphi^x_{k+2}(\bar{O}_{k+1}).$$

Consequently, the left hand side of (5) is equal to

\begin{align*}
&E\big[E\big[\underline{\omega}^x_{k,K}(\bar{O}_K,\bar{A}_K)\{u(O,A) - h_{\cdot}(x,Z;\beta)\} \,\big|\, \bar{O}_{k+1}, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)\big] \,\big|\, \bar{O}_k, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)\big]\\
&\quad= E\big[\varphi^x_{k+2}(\bar{O}_{k+1}) \,\big|\, \bar{O}_k, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)\big] - h_{\cdot}(x,Z;\beta)\, E\big[\underline{\omega}^x_{k,K}(\bar{O}_K,\bar{A}_K) \,\big|\, \bar{O}_k, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)\big]\\
&\quad= \varphi^x_{k+1}(\bar{O}_k) - h_{\cdot}(x,Z;\beta)
\end{align*}

where the last equality follows by the definition of $\varphi^x_{k+1}(\bar{O}_k)$ and the fact that $E[\underline{\omega}^x_{k,K}(\bar{O}_K,\bar{A}_K) \mid \bar{O}_k, \bar{A}_k = \bar{g}^x_k(\bar{O}_k)] = 1$ (this is just the function $\varphi^x_{k+1}(\bar{O}_k)$ obtained by applying the integration to the utility $u(O,A) = 1$).

2.4.3. Derivation of Formula (31) in ORR-I

It suffices to show that $S_{\mathrm{aug}}(\gamma, d^b_{\cdot,\beta,\gamma,\tau,\mathrm{opt}}) = \sum_{k=0}^{K} \int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\, M_k(x;\beta,\gamma,\tau)\, dP_X(x)$ where

$$M_k(x;\beta,\gamma,\tau) \equiv \{\omega^x_k(\gamma) - \omega^x_{k-1}(\gamma)\}\{\varphi^x_{k+1}(\bar{O}_k;\tau) - h_{\cdot}(x,Z;\beta)\}.$$

But by definition

\begin{align*}
S_{\mathrm{aug}}(\gamma, d^b_{\cdot,\beta,\gamma,\tau,\mathrm{opt}})
&= \sum_{k=0}^{K}\big\{d^b_{\cdot,\beta,\gamma,\tau,\mathrm{opt},k}(\bar{O}_k,\bar{A}_k) - E_\gamma[d^b_{\cdot,\beta,\gamma,\tau,\mathrm{opt},k}(\bar{O}_k,\bar{A}_k) \mid \bar{O}_k, \bar{A}_{k-1}]\big\}\\
&= \sum_{k=0}^{K}\Big\{\int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\,\omega^x_k(\gamma)\{\varphi^x_{k+1}(\bar{O}_k;\tau) - h_{\cdot}(x,Z;\beta)\}\, dP_X(x)\\
&\qquad\quad - E_\gamma\Big[\int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\,\omega^x_k(\gamma)\{\varphi^x_{k+1}(\bar{O}_k;\tau) - h_{\cdot}(x,Z;\beta)\}\, dP_X(x) \,\Big|\, \bar{O}_k, \bar{A}_{k-1}\Big]\Big\}\\
&= \sum_{k=0}^{K}\int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\,\{\omega^x_k(\gamma) - E_\gamma[\omega^x_k(\gamma) \mid \bar{O}_k, \bar{A}_{k-1}]\}\{\varphi^x_{k+1}(\bar{O}_k;\tau) - h_{\cdot}(x,Z;\beta)\}\, dP_X(x)\\
&= \sum_{k=0}^{K}\int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\,\{\omega^x_k(\gamma) - \omega^x_{k-1}(\gamma)\}\{\varphi^x_{k+1}(\bar{O}_k;\tau) - h_{\cdot}(x,Z;\beta)\}\, dP_X(x)
\end{align*}

where the last equality follows because

$$E_\gamma[\omega^x_k(\gamma) \mid \bar{O}_k, \bar{A}_{k-1}] = \omega^x_{k-1}(\gamma).$$

2.5. Proof that b·, opt is Optimal

Write for short $\hat{\beta}_{\cdot}(b) \equiv \hat{\beta}_{\cdot}(b, d_{\cdot,\mathrm{opt}})$,

$$Q_{\mathrm{par}}(b) \equiv \int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\, Q_{\mathrm{par}}(x;\beta,\gamma,\tau)\, dP_X(x) \quad\text{and}\quad Q_{\mathrm{sem}}(b) \equiv \int_{\mathcal{X}_{\mathrm{pos}}} \{b(x,Z) - \bar{b}(Z)\}\, Q_{\mathrm{sem}}(x;\beta,\gamma,\tau)\, dP_X(x).$$

We will show that $J_{\cdot}(b) = E\{Q_{\cdot}(b)\,Q_{\cdot}(b_{\cdot,\mathrm{opt}})'\}$ for $\cdot$ = par and $\cdot$ = sem. When either model (16, ORR-I) or (29, ORR-I) is correct, $\beta^* = \beta$. Consequently, for $\cdot$ = par we have that $J_{\mathrm{par}}(b)$ is equal to

\begin{align*}
&E\left\{\int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\,\frac{\partial}{\partial\beta}h_{\mathrm{par}}(x,Z;\beta)\Big|_{\beta^*}\, dP_X(x)\right\}\\
&\quad= E\left[\int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\left\{\int_{\mathcal{X}_{\mathrm{pos}}} b_{\mathrm{par},\mathrm{opt}}(\tilde{x},Z)\, E\{Q_{\mathrm{par}}(x;\beta,\gamma,\tau)\,Q_{\mathrm{par}}(\tilde{x};\beta,\gamma,\tau) \mid Z\}\, dP_X(\tilde{x})\right\} dP_X(x)\right]\\
&\quad= E\left[\left\{\int_{\mathcal{X}_{\mathrm{pos}}} b(x,Z)\,Q_{\mathrm{par}}(x;\beta,\gamma,\tau)\, dP_X(x)\right\}\left\{\int_{\mathcal{X}_{\mathrm{pos}}} b_{\mathrm{par},\mathrm{opt}}(\tilde{x},Z)\,Q_{\mathrm{par}}(\tilde{x};\beta,\gamma,\tau)\, dP_X(\tilde{x})\right\}\right]\\
&\quad= E\{Q_{\mathrm{par}}(b)\,Q_{\mathrm{par}}(b_{\mathrm{par},\mathrm{opt}})\}.
\end{align*}

For $\cdot$ = sem and with the definitions $\tilde{b}(x,Z) \equiv b(x,Z) - \bar{b}(Z)$ and $\tilde{Q}_{\mathrm{sem}}(\tilde{x};\beta,\gamma,\tau) \equiv Q_{\mathrm{sem}}(\tilde{x};\beta,\gamma,\tau) - \bar{Q}_{\mathrm{sem}}(Z;\beta,\gamma,\tau)$, the same argument yields $J_{\mathrm{sem}}(b)$ equal to


Now, with $\mathrm{var}_A(\hat{\beta}_{\cdot}(b))$ denoting the asymptotic variance of $\hat{\beta}_{\cdot}(b)$, we have from expansion (32) in ORR-I that

$$\mathrm{var}_A(\hat{\beta}_{\cdot}(b)) = \mathrm{var}\big\{E[Q_{\cdot}(b)\,Q_{\cdot}(b_{\cdot,\mathrm{opt}})']^{-1}\,Q_{\cdot}(b)\big\}$$

and consequently

\begin{align*}
\mathrm{cov}_A(\hat{\beta}_{\cdot}(b), \hat{\beta}_{\cdot}(b_{\cdot,\mathrm{opt}}))
&= E[Q_{\cdot}(b)\,Q_{\cdot}(b_{\cdot,\mathrm{opt}})']^{-1}\, \mathrm{cov}(Q_{\cdot}(b), Q_{\cdot}(b_{\cdot,\mathrm{opt}}))\, E[Q_{\cdot}(b_{\cdot,\mathrm{opt}})^{\otimes 2}]^{-1}\\
&= E[Q_{\cdot}(b_{\cdot,\mathrm{opt}})^{\otimes 2}]^{-1} = \mathrm{var}_A(\hat{\beta}_{\cdot}(b_{\cdot,\mathrm{opt}})).
\end{align*}

Thus, $0 \le \mathrm{var}_A(\hat{\beta}_{\cdot}(b) - \hat{\beta}_{\cdot}(b_{\cdot,\mathrm{opt}})) = \mathrm{var}_A(\hat{\beta}_{\cdot}(b)) + \mathrm{var}_A(\hat{\beta}_{\cdot}(b_{\cdot,\mathrm{opt}})) - 2\,\mathrm{cov}_A(\hat{\beta}_{\cdot}(b), \hat{\beta}_{\cdot}(b_{\cdot,\mathrm{opt}})) = \mathrm{var}_A(\hat{\beta}_{\cdot}(b)) - \mathrm{var}_A(\hat{\beta}_{\cdot}(b_{\cdot,\mathrm{opt}}))$, which concludes the proof.

3. Confidence Set for $x_{\mathrm{opt}}(z)$ when $\mathcal{X}$ is Finite and $h_{\cdot}(z,x;\beta)$ is Linear in $\beta$

We first prove the assertion that the computation of the confidence set $B_b$ entails an algorithm for determining whether the intersection of $\#(\mathcal{X}) - 1$ half-spaces in $\mathbb{R}^p$ and a ball in $\mathbb{R}^p$ centered at the origin is non-empty. To do so, first note that linearity implies that $h_{\cdot}(z,x;\beta) = \sum_{j=1}^{p} s_j(x,z)\beta_j$ for some fixed functions $s_j$, $j = 1, \ldots, p$. Let $N = \#(\mathcal{X})$ and write $\mathcal{X} = \{x_1, \ldots, x_N\}$. The point $x_l$ is in $B_b$ iff

$$\text{there exists } \beta \in C_b : \sum_{j=1}^{p}[s_j(x_l,z) - s_j(x_k,z)]\,\beta_j > 0 \text{ for all } x_k \in \mathcal{X}\setminus\{x_l\}. \tag{6}$$

Define the $p \times 1$ vector $v_{lk}$ whose $j$th entry is equal to $s_j(x_l,z) - s_j(x_k,z)$, $j = 1, \ldots, p$. Define also the vectors $v^*_{lk} = \hat{\Gamma}_{\cdot}(b)^{1/2} v_{lk}$ and the constants $a_{lk} = -v'_{lk}\,\hat{\beta}_{\cdot}(b, \hat{d}^b_{\cdot,\mathrm{opt}})$. Then $\sum_{j=1}^{p}[s_j(x_l,z) - s_j(x_k,z)]\beta_j > 0$ iff $v^{*\prime}_{lk}\,\hat{\Gamma}_{\cdot}(b)^{-1/2}\big(\beta - \hat{\beta}_{\cdot}(b, \hat{d}^b_{\cdot,\mathrm{opt}})\big) > a_{lk}$. Noting that $\beta \in C_b$ iff $\hat{\Gamma}_{\cdot}(b)^{-1/2}\big(\beta - \hat{\beta}_{\cdot}(b, \hat{d}^b_{\cdot,\mathrm{opt}})\big)$ is in the ball

$$U \equiv \{u \in \mathbb{R}^p : u'u \le \chi^2_{p,1-\alpha}\}$$

we conclude that the condition in the display (6) is equivalent to

there exists $u \in U$ such that $v^{*\prime}_{lk}\, u > a_{lk}$ for $k = 1, \ldots, N$, $k \ne l$.

The set $\{u \in \mathbb{R}^p : v^{*\prime}_{lk} u = a_{lk}\}$ is a hyperplane in $\mathbb{R}^p$ which divides the Euclidean space $\mathbb{R}^p$ into two half-spaces, one of which is $\{u \in \mathbb{R}^p : v^{*\prime}_{lk} u > a_{lk}\}$. Thus, the condition in the last display requires that the intersection of $N - 1$ half-spaces (one for each $k \ne l$, defined by the condition $v^{*\prime}_{lk} u > a_{lk}$) and the ball $U$ be non-empty.
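As a computational aside, this non-emptiness check is a small convex feasibility problem. A minimal sketch, assuming a generic solver (scipy) and treating the half-spaces as closed, so that boundary cases are ignored; this is an illustration we add here, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def halfspaces_intersect_ball(V, a, c0, tol=1e-8):
    """Check whether {u : u'u <= c0, V[k] @ u > a[k] for all k} is non-empty.

    We minimize u'u over the closed half-space intersection {u : V u >= a};
    the region meets the ball of squared radius c0 iff the minimum squared
    norm is at most c0 (up to the open/closed boundary distinction).
    V: (m, p) array whose rows are the half-space normals, a: (m,) offsets.
    """
    m, p = V.shape
    cons = [{"type": "ineq", "fun": lambda u, k=k: V[k] @ u - a[k]}
            for k in range(m)]
    res = minimize(lambda u: u @ u, x0=np.zeros(p),
                   jac=lambda u: 2 * u, constraints=cons, method="SLSQP")
    return bool(res.success and res.fun <= c0 + tol)
```

For example, the single half-space $\{u : u_1 > 1/2\}$ meets the unit ball, while $\{u : u_1 > 2\}$ does not.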

We turn now to the construction of a confidence set $B^*_b$ that includes $B_b$. Our construction relies on the following Lemma.

Lemma. Let

$$D = \{u \in \mathbb{R}^p : (u - u_0)'\Sigma^{-1}(u - u_0) \le c_0\}$$

where u0 is a fixed p × 1 real valued vector and Σ is a fixed non-singular p × p matrix.

Let $\alpha$ be a fixed, non-null, $p \times 1$ real valued vector. Let $\tau_0 \equiv \alpha' u_0$ and $\alpha^* = \Sigma^{1/2}\alpha$. Assume that $\alpha^*_1 \ne 0$. Let $v^*_1$ be the $p \times 1$ vector $(-\alpha^{*-1}_1\tau_0, 0, \ldots, 0)'$. Let $\Upsilon$ be the linear space generated by the $p \times 1$ vectors $v^*_2 = (-\alpha^{*-1}_1\alpha^*_2, 1, 0, \ldots, 0)'$, $v^*_3 = (-\alpha^{*-1}_1\alpha^*_3, 0, 1, 0, \ldots, 0)', \ldots, v^*_p = (-\alpha^{*-1}_1\alpha^*_p, 0, \ldots, 0, 1)'$, let $V^*$ be the $p \times (p-1)$ matrix with columns $v^*_2, \ldots, v^*_p$, and define

$$v^*_{1,\mathrm{proj}} = v^*_1 - \Pi[v^*_1 \mid \Upsilon] = v^*_1 - V^*(V^{*\prime}V^*)^{-1}V^{*\prime}v^*_1.$$



Then there exists $u \in D$ satisfying

$$\alpha' u = 0$$

if and only if

$$\|v^*_{1,\mathrm{proj}}\|^2 \le c_0. \tag{7}$$

To prove the Lemma, note that

$$\alpha'u = 0 \iff \alpha'\Sigma^{1/2}\,\Sigma^{-1/2}(u - u_0) = -\alpha'u_0.$$

Then, with $\tau_0 \equiv \alpha'u_0$ and $\alpha^* = \Sigma^{1/2}\alpha$, we conclude that there exists $u \in D$ satisfying $\alpha'u = 0$ if and only if there exists $u^* \in \mathbb{R}^p$ such that

$$u^{*\prime}u^* \le c_0 \quad \text{and} \quad -\alpha^{*\prime}u^* = \tau_0.$$

Now, by the assumption $\alpha^*_1 \ne 0$ we have $-\alpha^{*\prime}u^* = \tau_0$ iff $u^*_1 = -\alpha^{*-1}_1\big[\tau_0 + \sum_{j=2}^{p}\alpha^*_j u^*_j\big]$. Thus, the collection of all vectors $u^*$ satisfying $-\alpha^{*\prime}u^* = \tau_0$ is the linear variety

$$\{v^*_1 + v^* : v^* \in \Upsilon\}$$
where the $v^*_j$'s and $\Upsilon$ are defined in the statement of the Lemma. The vector $v^*_{1,\mathrm{proj}}$ is the residual from the (Euclidean) projection of $v^*_1$ onto the space $\Upsilon$.

Thus, $-\alpha^{*\prime}u^* = \tau_0$ iff $u^* = v^*_{1,\mathrm{proj}} + v^*_{\Upsilon}$ for some $v^*_{\Upsilon} \in \Upsilon$. Consequently, by the orthogonality of $v^*_{1,\mathrm{proj}}$ with $\Upsilon$, we have that any $u^*$ satisfying $-\alpha^{*\prime}u^* = \tau_0$ obeys

$$u^{*\prime}u^* = \|u^*\|^2 = \|v^*_{1,\mathrm{proj}}\|^2 + \|v^*_{\Upsilon}\|^2.$$

Therefore, since v[Upsilon]*2 is unrestricted,

$$u^{*\prime}u^* \le c_0 \text{ for some } u^* \text{ satisfying } -\alpha^{*\prime}u^* = \tau_0$$

if and only if

$$\|v^*_{1,\mathrm{proj}}\|^2 \le c_0.$$
This concludes the proof of the Lemma.
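The criterion just proved can also be restated compactly: the hyperplane $\{u : \alpha'u = 0\}$ meets $D$ exactly when the squared $\Sigma$-Mahalanobis distance from $u_0$ to the hyperplane, $(\alpha'u_0)^2/(\alpha'\Sigma\alpha)$, is at most $c_0$; this quantity equals $\|v^*_{1,\mathrm{proj}}\|^2$. A sketch of this restatement as a numerical check (our own algebraic restatement, not code from the paper):

```python
import numpy as np

def hyperplane_meets_ellipsoid(alpha, u0, Sigma, c0):
    """Check whether {u : alpha'u = 0} intersects
    D = {u : (u - u0)' inv(Sigma) (u - u0) <= c0}.

    Writing u = u0 + Sigma^{1/2} w with w'w <= c0, the constraint alpha'u = 0
    becomes (Sigma^{1/2} alpha)' w = -alpha'u0, whose minimum-norm solution
    has squared norm (alpha'u0)^2 / (alpha' Sigma alpha); this is the
    ||v*_{1,proj}||^2 of the Lemma's condition (7)."""
    alpha = np.asarray(alpha, dtype=float)
    u0 = np.asarray(u0, dtype=float)
    tau0 = alpha @ u0
    return bool(tau0 ** 2 / (alpha @ Sigma @ alpha) <= c0)
```

For instance, with $\Sigma = I_2$, the line $u_1 = 0$ misses the disk of squared radius 1 centered at $(2,0)$ but meets the disk of squared radius 4.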

To construct the set $B^*_b$ we note that the condition in display (6) implies the negation, for every subset $\mathcal{X}(l)$ of $\mathcal{X}\setminus\{x_l\}$, of the statement

$$\sum_{j=1}^{p}\sum_{x_k \in \mathcal{X}(l)}[s_j(x_l,z) - s_j(x_k,z)]\,\beta_j < 0 \quad \text{for all } \beta \in C_b. \tag{8}$$

Thus, suppose that for a given $x_l$ we find that (8) holds for some subset $\mathcal{X}(l)$ of $\mathcal{X}\setminus\{x_l\}$; then we know that $x_l$ cannot be in $B_b$. The proposed confidence set $B^*_b$ is comprised of the points in $\mathcal{X}$ for which condition (8) is false for every subset $\mathcal{X}(l)$. The set $B^*_b$ is conservative (i.e. it includes $B_b$ but is not necessarily equal to $B_b$) because the simultaneous negation of statement (8) for all $\mathcal{X}(l)$ does not imply statement (6). To check if condition (8) holds for a given subset $\mathcal{X}(l)$ and point $x_l$, we apply the result of the Lemma as follows. We define the vector $\alpha \in \mathbb{R}^p$ whose $j$th component is equal to $\sum_{x_k \in \mathcal{X}(l)}[s_j(x_l,z) - s_j(x_k,z)]$, $j = 1, \ldots, p$, and the vector $u_0 = \hat{\beta}_{\cdot}(b, \hat{d}^b_{\cdot,\mathrm{opt}}) \in \mathbb{R}^p$. We also define the constant $c_0 = \chi^2_{p,1-\alpha}$ and the matrix $\Sigma = \hat{\Gamma}_{\cdot}(b)$. We compute the vectors $\alpha^* = \Sigma^{1/2}\alpha$, $v^*_1, \ldots, v^*_p$ and the matrix $V^*$ as defined in the Lemma. We then check if condition (7) holds. If it holds, then the hyperplane comprised of the set of $\beta$'s that satisfy the condition in display (8) with the $<$ sign replaced by the $=$ sign intersects the confidence ellipsoid $C_b$, in which case we know that (8) is false. If it does not hold, then we check if the condition

$$\sum_{j=1}^{p}\sum_{x_k \in \mathcal{X}(l)}[s_j(x_l,z) - s_j(x_k,z)]\,\hat{\beta}_{\cdot}(b, \hat{d}^b_{\cdot,\mathrm{opt}})_j < 0 \tag{9}$$

holds. If (9) does not hold, then we conclude that (8) is false for this choice of $\mathcal{X}(l)$. If (9) holds, then we conclude that (8) is true, and we then exclude $x_l$ from the set $B^*_b$.

4. Positivity Assumption: Example

Suppose that $K = 1$ and that $R_k = R^g_k = 1$ with probability 1 for $k = 0, 1$, so that no subject dies in either the actual world or the hypothetical world in which $g$ is enforced in the population. Thus, for $k = 0, 1$, $O_k = L_k$, since both $T_k$ and $R_k$ are deterministic and hence can be ignored. Suppose that $L_k$ and $A_k$ are binary variables (and so, therefore, are $A^g_k$ and $L^g_k$) and that the treatment regime $g$ specifies that

$$g_0(l_0) = 1 - l_0 \quad \text{and} \quad g_1(l_0, l_1) = l_0(1 - l_1).$$

Assume that

$$P(L^g_0 = l_0, L^g_1 = l_1) > 0 \quad \text{for all } l_0, l_1 \in \{0, 1\}. \tag{10}$$
Assumption PO imposes two requirements,

$$P[\lambda_0(A^g_0 \mid L^g_0) > 0] = 1 \tag{11}$$

and

$$P[\lambda_1(A^g_1 \mid \bar{L}^g_1, A^g_0) > 0] = 1. \tag{12}$$
Because, by definition of regime $g$, $A^g_0 = 1 - L^g_0$, requirement (11) can be re-expressed as

$$1 = P(L^g_0 = 0)\, I_{(0,1]}(\lambda_0(1 \mid 0)) + P(L^g_0 = 1)\, I_{(0,1]}(\lambda_0(0 \mid 1)).$$
Since indicators can only take the values 0 or 1 and $P(L^g_0 = l_0) < 1$ for $l_0 = 0, 1$ (by assumption (10)), the preceding equality is equivalent to

$$I_{(0,1]}(\lambda_0(1 \mid 0)) = 1 \quad \text{and} \quad I_{(0,1]}(\lambda_0(0 \mid 1)) = 1,$$

that is to say,

$$\lambda_0(1 \mid 0) > 0 \quad \text{and} \quad \lambda_0(0 \mid 1) > 0.$$

By the definition of λ0 (·|·) (see (3) in ORR-I), the last display is equivalent to

$$P(A_0 = 1 \mid L_0 = 0) > 0 \quad \text{and} \quad P(A_0 = 0 \mid L_0 = 1) > 0. \tag{13}$$

Likewise, because $A^g_1 = L^g_0(1 - L^g_1)$, and because $P(L^g_0 = l_0, L^g_1 = l_1, A^g_0 = l_0) = 0$ by the fact that $A^g_0 = 1 - L^g_0$, requirement (12) can be re-expressed as

\begin{align*}
1 &= P(L^g_0 = 0, L^g_1 = 0, A^g_0 = 1)\, I_{(0,1]}(\lambda_1(0 \mid 0,0,1))\\
&\quad + P(L^g_0 = 0, L^g_1 = 1, A^g_0 = 1)\, I_{(0,1]}(\lambda_1(0 \mid 0,1,1))\\
&\quad + P(L^g_0 = 1, L^g_1 = 0, A^g_0 = 0)\, I_{(0,1]}(\lambda_1(1 \mid 1,0,0))\\
&\quad + P(L^g_0 = 1, L^g_1 = 1, A^g_0 = 0)\, I_{(0,1]}(\lambda_1(0 \mid 1,1,0))
\end{align*}

or equivalently (again because the events $(L^g_0 = l_0, L^g_1 = l_1, A^g_0 = 1 - l_0)$ and $(L^g_0 = l_0, L^g_1 = l_1)$ have the same probability, since $P(L^g_0 = l_0, L^g_1 = l_1, A^g_0 = l_0) = 0$),

\begin{align*}
1 &= P(L^g_0 = 0, L^g_1 = 0)\, I_{(0,1]}(\lambda_1(0 \mid 0,0,1)) + P(L^g_0 = 0, L^g_1 = 1)\, I_{(0,1]}(\lambda_1(0 \mid 0,1,1))\\
&\quad + P(L^g_0 = 1, L^g_1 = 0)\, I_{(0,1]}(\lambda_1(1 \mid 1,0,0)) + P(L^g_0 = 1, L^g_1 = 1)\, I_{(0,1]}(\lambda_1(0 \mid 1,1,0)).
\end{align*}

Under the assumption (10), the last display is equivalent to

$$\lambda_1(0 \mid 0,0,1) > 0, \quad \lambda_1(0 \mid 0,1,1) > 0, \quad \lambda_1(1 \mid 1,0,0) > 0 \quad \text{and} \quad \lambda_1(0 \mid 1,1,0) > 0,$$

which, by the definition of $\lambda_1(\cdot \mid \cdot, \cdot, \cdot)$ in ((3), ORR-I), is in turn the same as

\begin{align*}
&P(A_1 = 0 \mid L_0 = 0, L_1 = 0, A_0 = 1) > 0, \quad P(A_1 = 0 \mid L_0 = 0, L_1 = 1, A_0 = 1) > 0,\\
&P(A_1 = 1 \mid L_0 = 1, L_1 = 0, A_0 = 0) > 0, \quad P(A_1 = 0 \mid L_0 = 1, L_1 = 1, A_0 = 0) > 0. \tag{14}
\end{align*}

We conclude that in this example, the assumption PO is equivalent to the conditions (13) and (14). We will now analyze what these conditions encode.

Condition (13) encodes two requirements:

  • i) the requirement that in the actual world there exist subjects with L0 = 1 and L0 = 0 (i.e. that the conditioning events L0 = 1 and L0 = 0 have positive probabilities), for otherwise at least one of the conditional probabilities in (13) would not be defined, and
  • ii) the requirement that in the actual world there be subjects with L0 = 0 that take treatment A0 = 1 and subjects with L0 = 1 that take treatment A0 = 0, for otherwise at least one of the conditional probabilities in (13) would be 0.

Condition i) is automatically satisfied, i.e. it does not impose a restriction on the law of $L_0$, by the fact that $L^g_0 = L_0$ (since baseline covariates cannot be affected by interventions taking place after baseline) and the fact that we have assumed that $P(L^g_0 = l_0) > 0$ for $l_0 = 0, 1$.

Condition ii) is indeed a non-trivial requirement and coincides with the interpretation of the PO assumption given in section 3.1 for the case $k = 0$. Specifically, in the world in which $g$ were to be implemented there would exist subjects with $L_0 = 0$, and such subjects would take treatment $A^g_0 = 1$; the PO assumption for $k = 0$ then requires that in the actual world there also be subjects with $L_0 = 0$ that take treatment $A_0 = 1$ at time 0. Likewise, the PO condition for $k = 0$ requires that the same be true with 0 and 1 reversed. A key point is that (11) does not require that in the observational world there be subjects with $L_0 = 0$ that take $A_0 = 0$, nor subjects with $L_0 = 1$ that take $A_0 = 1$. The intuition is clear. If we want to learn from data collected in the actual (observational) world what would happen in the hypothetical world in which everybody obeyed regime $g$, we must observe people in the study that obeyed the treatment regime at every level of $L_0$; otherwise if, say, nobody in the actual world with $L_0 = 0$ obeyed regime $g$, there would be no way to learn what the distribution of the outcomes for subjects in that stratum would be if $g$ were enforced. However, we don't care whether there are subjects with $L_0 = 0$ that do not obey $g$, i.e. that take $A_0 = 0$, because data from those subjects are not informative about the distribution of outcomes when $g$ is enforced.

Condition (14) encodes two requirements:

  • iii) the requirement that in the actual world there be subjects in the four strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1), (L0 = 1, L1 = 0, A0 = 0) and (L0 = 1, L1 = 1, A0 = 0) (i.e. that the conditioning events in the display (14) have positive probabilities), for otherwise at least one of the conditional probabilities would not be defined, and
  • iv) the requirement that in the actual world there be subjects in every one of the strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1), (L0 = 1, L1 = 1, A0 = 0) that have A1 = 0 at time 1 and the requirement that there be subjects in stratum (L0 = 1, L1 = 0, A0 = 0) that have A1 = 1 at time 1, for otherwise at least one of the conditional probabilities in (14) would be 0.

Given condition ii) and the sequential randomization (SR) and consistency (C) assumptions, condition iii) is automatically satisfied, i.e. it does not impose a further restriction on the joint distribution of (L0, L1, A0). To see this, first note that by condition (ii) the strata (L0 = 0, A0 = 1) and (L0 = 1, A0 = 0) are non-empty. So condition (iii) is satisfied provided

$$P(L_1 = l_1 \mid L_0 = 0, A_0 = 1) > 0 \quad \text{and} \quad P(L_1 = l_1 \mid L_0 = 1, A_0 = 0) > 0 \quad \text{for } l_1 = 0, 1.$$


\begin{align*}
P(L_1 = l_1 \mid L_0 = 0, A_0 = 1) &= P(L^g_1 = l_1 \mid L_0 = 0, A_0 = 1) && \text{by assumption (C)}\\
&= P(L^g_1 = l_1 \mid L_0 = 0) && \text{by assumption (SR)}\\
&= P(L^g_1 = l_1 \mid L^g_0 = 0) && \text{by assumption (C)}
\end{align*}

and $P(L^g_1 = l_1 \mid L^g_0 = 0) > 0$ by (10). An analogous argument shows that $P(L_1 = l_1 \mid L_0 = 1, A_0 = 0) > 0$. Finally, condition (iv) is a formalization of our interpretation of assumption PO in section 3.1 for $k = 1$. In the world in which $g$ was implemented there would exist subjects with all four combinations of values for $(L^g_0, L^g_1)$. However, subjects with $L^g_0 = l_0$ will only have $A^g_0 = 1 - l_0$, so in this hypothetical world we will see at time 1 only four possible recorded histories: $(L^g_0 = 0, L^g_1 = 0, A^g_0 = 1)$, $(L^g_0 = 0, L^g_1 = 1, A^g_0 = 1)$, $(L^g_0 = 1, L^g_1 = 1, A^g_0 = 0)$ and $(L^g_0 = 1, L^g_1 = 0, A^g_0 = 0)$. In this hypothetical world, subjects with any of the first three recorded histories will take $A^g_1 = 0$ and subjects with the last one will take $A^g_1 = 1$. Thus, in the actual world we must require that there be subjects in each of the first three strata $(L_0 = 0, L_1 = 0, A_0 = 1)$, $(L_0 = 0, L_1 = 1, A_0 = 1)$, $(L_0 = 1, L_1 = 1, A_0 = 0)$ that take $A_1 = 0$, and subjects in the stratum $(L_0 = 1, L_1 = 0, A_0 = 0)$ that take $A_1 = 1$. A point of note is that we don't make any requirement about the existence of subjects in strata other than the four mentioned in (iii), or about the treatment that subjects in these remaining strata take. The reason is that subjects that are not in the four strata of condition (iii) have already violated regime $g$ at time 0, so they are uninformative about the outcome distribution under regime $g$.
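The bookkeeping behind conditions (13) and (14) is mechanical and can be sketched by enumeration. The following toy snippet (an illustration we add here, not code from the paper) lists, for the regime $g$ of this example, every propensity that assumption PO requires to be positive: at each time $k$, the probability of taking the treatment dictated by $g$ given a history consistent with having followed $g$ up to time $k$.

```python
from itertools import product

# Regime g of the example (K = 1, binary covariates and treatments)
def g0(l0):
    return 1 - l0

def g1(l0, l1):
    return l0 * (1 - l1)

def required_positive_propensities():
    """Return tuples (k, history, treatment) such that PO requires
    P(A_k = treatment | history) > 0, where the history is consistent
    with following regime g up to time k."""
    reqs = set()
    for l0 in (0, 1):
        # k = 0: P(A0 = g0(l0) | L0 = l0) > 0, i.e. condition (13)
        reqs.add((0, (l0,), g0(l0)))
    for l0, l1 in product((0, 1), repeat=2):
        # k = 1: history (L0 = l0, L1 = l1, A0 = g0(l0)); condition (14)
        reqs.add((1, (l0, l1, g0(l0)), g1(l0, l1)))
    return reqs
```

Running it reproduces the six conditions of the text: the two in (13), and the four in (14), with $A_1 = 1$ required only in the stratum $(L_0 = 1, L_1 = 0, A_0 = 0)$.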


*This work was supported by NIH grant R01 GM48704.


  • Orellana L, Rotnitzky A, Robins JM (2010). Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: Main content. The International Journal of Biostatistics 6(2), Article 7. [PubMed]
