Int J Biostat. 2010 January 6; 6(2): Article 9.
Published online 2010 March 3.
PMCID: PMC2854089

# Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part II: Proofs of Results*

## Abstract

In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

Keywords: dynamic treatment regime, double-robust, inverse probability weighted, marginal structural model, optimal treatment regime, causality

## 1. Introduction

In this companion article to “Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes. Part I: Main Content” (Orellana, Rotnitzky and Robins, 2010) we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

The notation, definitions and acronyms are the same as in the companion paper. Throughout, we refer to the companion article as ORR-I.

## 2. Proof of Claims in ORR-I

### 2.1. Proof of Lemma 1

First note that the consistency assumption C implies that the event

$\bar O_k=\bar o_k,\ \bar A_{k-1}=\bar g_{k-1}(\bar o_{k-1})$

is the same as the event

$\bar O_k^g=\bar o_k,\ \bar A_{k-1}=\bar g_{k-1}(\bar o_{k-1}).$

So, with the definitions

we obtain

$E[I_B(O,A)\,\underline{\omega}_{k-1,K}(\bar O_K,\bar A_K)\mid \bar O_k,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1})]=E[I_B((\bar O_k^g,\underline{O}_{k,K+1}),(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\,\underline{\omega}_{k-1,K}((\bar O_k^g,\underline{O}_{k,K}),(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\mid \bar O_k^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)]\quad\text{w.p.1.}$

Next, note that the fact that $\underline{\omega}_{k-1,K}((\bar O_k^g,\underline{O}_{k,K}),(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))=0$ unless $A_k=g_k(\bar O_k^g)$, $A_{k+1}=g_{k+1}(\bar O_k^g,O_{k+1}),\dots,A_K=g_K(\bar O_k^g,\underline{O}_{k,K})$ implies that

$I_B((\bar O_k^g,\underline{O}_{k,K+1}),(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\times\underline{\omega}_{k-1,K}((\bar O_k^g,\underline{O}_{k,K}),(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))=I_B(\bar O_{K+1}^g,\bar g_K(\bar O_K^g))\,\underline{\omega}_{k-1,K}(\bar O_K^g,(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K})).$

Then, it follows from the second to last displayed equality that

$E[I_B(O,A)\,\underline{\omega}_{k-1,K}(\bar O_K,\bar A_K)\mid \bar O_k,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1})]=E[I_B(\bar O_{K+1}^g,\bar g_K(\bar O_K^g))\,\underline{\omega}_{k-1,K}(\bar O_K^g,(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\mid \bar O_k^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)]$

$=E[E[\underline{\omega}_{k-1,K}(\bar O_K^g,(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\mid \bar O_{K+1}^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)]\times I_B(\bar O_{K+1}^g,\bar g_K(\bar O_K^g))\mid \bar O_k^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)].$

So, part 1 of the Lemma is proved if we show that

$E[\underline{\omega}_{k-1,K}(\bar O_K^g,(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\mid \bar O_{K+1}^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)]=1.$
(1)

Define for any k = 0, ..., K,

To prove equality (1) first note that,

where the second to last equality follows because given $O¯Kg$ and $A¯K−1=g¯K−1(O¯K−1g)$, $OK+1g$ is a fixed, i.e. non-random function of $O$ and consequently by the sequential randomization assumption, $OK+1g$ is conditionally independent of AK given $O¯Kg$ and $A¯K−1=g¯K−1(O¯K−1g)$. The last equality follows by the definition of λK (·|·, ·).
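To make the last step concrete, the innermost computation can be sketched as follows, assuming (consistently with the display in Section 2.4.3) that $\underline{\omega}_{k-1,K}$ is the product over $j=k,\dots,K$ of the factors $I_{\{g_j(\bar O_j)\}}(A_j)/\lambda_j(A_j\mid\bar O_j,\bar A_{j-1})$, of which only the $K$-th involves $A_K$:

```latex
% A sketch of the innermost step: conditioning on \bar O_K^g and
% \bar A_{K-1} = \bar g_{K-1}(\bar O_{K-1}^g), the variable O_{K+1}^g is a
% fixed function of O, so by sequential randomization the conditional law
% of A_K is unchanged and
\begin{align*}
E\!\left[\frac{I_{\{g_K(\bar O_K^g)\}}(A_K)}
              {\lambda_K\!\big(A_K \mid \bar O_K^g,\bar A_{K-1}\big)}
  \,\middle|\, \bar O_{K+1}^g,\, \bar A_{K-1}=\bar g_{K-1}(\bar O_{K-1}^g)\right]
&= \frac{\lambda_K\!\big(g_K(\bar O_K^g)\mid \bar O_K^g,\bar A_{K-1}\big)}
        {\lambda_K\!\big(g_K(\bar O_K^g)\mid \bar O_K^g,\bar A_{K-1}\big)} = 1 .
\end{align*}
```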

We thus arrive at

$E[\underline{\omega}_{k-1,K}(\bar O_K^g,(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\mid \bar O_{K+1}^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)]=E\{E[\underline{\omega}_{k-1,K}(\bar O_K^g,(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K}))\mid \bar O_{K+1}^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K-1}]\mid \bar O_{K+1}^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)\}=E\{\underline{\omega}_{k-1,K-1}(\bar O_{K-1}^g,(\bar g_{k-1}(\bar O_{k-1}^g),\underline{A}_{k-1,K-1}))\mid \bar O_{K+1}^g,\bar A_{k-1}=\bar g_{k-1}(\bar O_{k-1}^g)\}$

This proves the result for the case k = K. If k < K – 1, we analyze the conditional expectation of the last equality in a similar fashion. Specifically, following the same steps as in the long sequence of equalities in the second to last display we arrive at

the last equality follows once again from the sequential randomization assumption. This is so because given $O¯K−1g$ and $A¯K−2=g¯K−2(O¯K−2g)$, $O¯Kg$ and $O¯K+1g$ are fixed, i.e. deterministic, functions of $O$ and the SR assumption ensures then that $O¯Kg$ and $O¯K+1g$ are conditionally independent of AK–1 given $O¯K−1g$ and $A¯K−2=g¯K−2(O¯K−2g)$.

Equality (1) is thus shown by continuing in this fashion recursively for K – 2, K – 3, ..., K – l, until l is such that K – l = k – 1.

To show Part 2 of the Lemma, note that specializing part 1 to the case k = 0, we obtain

$E[I_B(O^g,A^g)\mid O_0]=E[I_B(O,A)\,\omega_K(\bar O_K,\bar A_K)\mid O_0].$

Thus, taking expectations on both sides of the equality in the last display we obtain

$E[I_B(O^g,A^g)]=E[I_B(O,A)\,\omega_K(\bar O_K,\bar A_K)].$

This shows part 2 because B is an arbitrary Borel set.
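Part 2 of the Lemma is the familiar inverse probability weighted identification formula, and it can be checked by exact enumeration on a toy single-stage example. Everything below (the laws, the helper names `ipw_mean` and `g_world_mean`) is a hypothetical illustration, not from the paper; the regime $a_0 = 1 - l_0$ is borrowed from the example of Section 4:

```python
import itertools

# A minimal sketch (hypothetical laws): exact check of Lemma 1, part 2, for a
# single decision time (K = 0) with binary O_0, A_0, O_1.  Sequential
# randomization holds by construction because the law of O_1 depends on A_0
# only through the treatment actually given.

p_o0 = {0: 0.4, 1: 0.6}                           # law of the baseline O_0
lam = {0: {0: 0.3, 1: 0.7}, 1: {0: 0.8, 1: 0.2}}  # lam[o0][a0] = lambda_0(a0 | o0)
# p_o1[o0][a0][o1] = P(O_1 = o1 | O_0 = o0, A_0 = a0)
p_o1 = {0: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.1, 1: 0.9}},
        1: {0: {0: 0.6, 1: 0.4}, 1: {0: 0.3, 1: 0.7}}}

def g(o0):
    return 1 - o0                                 # the regime at time 0

def ipw_mean(f):
    """E[f(O_1) * omega_0(O_0, A_0)] under the observed-data law."""
    total = 0.0
    for o0, a0, o1 in itertools.product((0, 1), repeat=3):
        pr = p_o0[o0] * lam[o0][a0] * p_o1[o0][a0][o1]
        omega0 = (1.0 / lam[o0][a0]) if a0 == g(o0) else 0.0
        total += f(o1) * omega0 * pr
    return total

def g_world_mean(f):
    """E[f(O_1^g)]: the mean when everyone follows a_0 = g(o_0)."""
    return sum(p_o0[o0] * p_o1[o0][g(o0)][o1] * f(o1)
               for o0 in (0, 1) for o1 in (0, 1))

for f in (lambda o1: o1, lambda o1: float(o1 == 0)):
    assert abs(ipw_mean(f) - g_world_mean(f)) < 1e-12
```

Because the example enumerates the joint law exactly, the two sides agree to machine precision, mirroring the equality of expectations for every Borel set B.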

### 2.2. Proof of the Assertions in Section 3.2, ORR-I

#### 2.2.1. Proof of Item (a)

Lemma 1, part 2 implies that the density $p^{g,marg}$ factors as

In particular, the event $\{\bar A_{k-1}^g=\bar g_{k-1}(\bar O_{k-1}^g)\}$ has probability 1. Consequently,

Therefore,

(2)

#### 2.2.2. Proof of Item (b)

Lemma 1, part 1 implies that

The left hand side of this equality is equal to

and this coincides with the right hand side of (2) which, as we have just argued, is equal to k+1 (ōk).

### 2.3. Proof of Lemma 2 in ORR-I

Let X be the identity random element on $(X,A)$ and let $E_{P^{marg}\times P_X}(\cdot)$ stand for the expectation operator computed under the product law $P^{marg}\times P_X$ for the random vector (O, A, X). Then the restriction stated in 2) is equivalent to

(3)

and the restriction stated in 3) is equivalent to

(4)

To show 2) let $d(O,A,X)\equiv\omega_K(\bar O_K,\bar A_K)\{u(O,A)-h_{par}(X,Z,\beta^*)\}$.

(ORR-I, (14)) $\Rightarrow$ (3).

where the last equality follows because $E_{P^{marg}\times P_X}[d(O,A,X)\mid X=x,Z]=E_{P^{marg}}[d(O,A,x)\mid Z]$ by independence of (O, A) and X under the law $P^{marg}\times P_X$ and, by assumption, $E_{P^{marg}}[d(O,A,x)\mid Z]=0$ μ-a.e.(x), and hence $E_{P^{marg}}[d(O,A,x)\mid Z]=0$ $P_X$-a.e.(x), because $P_X$ and μ are mutually absolutely continuous.

(3) $\Rightarrow$ (ORR-I, (14)). Define $b^*(X,Z)=E_{P^{marg}\times P_X}[d(O,A,X)\mid X,Z]$. Then,

$0=E_{P^{marg}\times P_X}[b^*(X,Z)d(O,A,X)]=E_{P^{marg}\times P_X}\{E_{P^{marg}\times P_X}[d(O,A,X)\mid X,Z]^2\}$

consequently, $E_{P^{marg}\times P_X}[d(O,A,X)\mid X,Z]=0$ with $P^{marg}\times P_X$ probability 1, which is equivalent to (ORR-I, (14)) because $P_X$ is mutually absolutely continuous with μ.

To show 3) redefine $d(O,A,X)$ as $\omega_K(\bar O_K,\bar A_K)\{u(O,A)-h_{sem}(X,Z,\beta^*)\}$.

(ORR-I, (15)) $\Rightarrow$ (4).

$E_{P^{marg}\times P_X}[\{b(X,Z)-E_{P^{marg}\times P_X}[b(X,Z)\mid Z]\}d(O,A,X)]=E_{P^{marg}\times P_X}[\{b(X,Z)-E_{P^{marg}\times P_X}[b(X,Z)\mid Z]\}E_{P^{marg}\times P_X}\{d(O,A,X)\mid X,Z\}]=E_{P^{marg}\times P_X}[\{b(X,Z)-E_{P^{marg}\times P_X}[b(X,Z)\mid Z]\}q(Z)]=0$

where the third equality follows because $E_{P^{marg}\times P_X}\{d(O,A,X)\mid X=x,Z\}=E_{P^{marg}}\{d(O,A,x)\mid Z\}$ and $E_{P^{marg}}\{d(O,A,x)\mid Z\}=q(Z)$ μ-a.e.(x), and hence $P_X$-a.e.(x) by absolute continuity.

(4) $\Rightarrow$ (ORR-I, (15)). Define $b^*(X,Z)=E_{P^{marg}\times P_X}[d(O,A,X)\mid X,Z]$. Then,

Consequently, $b^*(X,Z)=E_{P^{marg}\times P_X}[b^*(X,Z)\mid Z]\equiv q(Z)$, $P_X$-a.e.(X), and hence μ-a.e.(X) by absolute continuity. The result follows because $b^*(x,Z)=E_{P^{marg}\times P_X}[d(O,A,X)\mid X=x,Z]=E_{P^{marg}}[d(O,A,x)\mid Z]$.

### 2.4. Derivation of Some Formulas in Section 5.3, ORR-I

#### 2.4.1. Derivation of Formula (26) in ORR-I

Any element

$\sum_{k=0}^{K}\{d_k(\bar O_k,\bar A_k)-E[d_k(\bar O_k,\bar A_k)\mid\bar O_k,\bar A_{k-1}]\}$

of the set Λ is the sum of K + 1 uncorrelated terms because for any l, j such that 0 ≤ l < l + j ≤ K,

Thus, Λ is equal to Λ0 ⊕ Λ1 ⊕ ⋯ ⊕ ΛK where

and ⊕ stands for the direct sum operator. Then,

$\Pi[Q\mid\Lambda]=\sum_{k=0}^{K}\Pi[Q\mid\Lambda_k]$

and it can be easily checked that $\Pi[Q\mid\Lambda_k]=E[Q\mid\bar O_k,\bar A_k]-E[Q\mid\bar O_k,\bar A_{k-1}]$.
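The orthogonality underlying this direct sum decomposition can be verified numerically on a toy example. The joint law and the function Q below are hypothetical choices, not from the paper:

```python
import itertools

# A small sketch: with K = 1, the summands
# d_k = E[Q | Obar_k, Abar_k] - E[Q | Obar_k, Abar_{k-1}]
# are mean zero and mutually uncorrelated, which is what makes
# Lambda = Lambda_0 (+) Lambda_1 a direct sum.

states = list(itertools.product((0, 1), repeat=4))      # s = (o0, a0, o1, a1)
raw = {s: 1.0 + 0.3 * s[0] + 0.2 * s[1] * s[2] + 0.1 * s[3] for s in states}
z = sum(raw.values())
p = {s: raw[s] / z for s in states}                     # a strictly positive joint law
Q = {s: s[0] + 2 * s[1] - s[2] * s[3] for s in states}  # an arbitrary function Q

def cond_mean(keep):
    """E[Q | coordinates listed in `keep`], tabulated over the full state."""
    out = {}
    for s in states:
        num = sum(p[t] * Q[t] for t in states if all(t[i] == s[i] for i in keep))
        den = sum(p[t] for t in states if all(t[i] == s[i] for i in keep))
        out[s] = num / den
    return out

# histories: Obar_0 = (o0); (Obar_0, A_0) = (o0, a0); Obar_1 = (o0, a0, o1); full
m0, m01, m010, mfull = (cond_mean(k) for k in ({0}, {0, 1}, {0, 1, 2}, {0, 1, 2, 3}))
d0 = {s: m01[s] - m0[s] for s in states}                # k = 0 summand
d1 = {s: mfull[s] - m010[s] for s in states}            # k = 1 summand

assert abs(sum(p[s] * d0[s] for s in states)) < 1e-12   # mean zero
assert abs(sum(p[s] * d1[s] for s in states)) < 1e-12
assert abs(sum(p[s] * d0[s] * d1[s] for s in states)) < 1e-12  # uncorrelated
```

The uncorrelatedness is exactly the tower-property argument of the text: $d_0$ is a function of the time-0 history, while $E[d_1\mid\bar O_1,\bar A_0]=0$.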

#### 2.4.2. Derivation of Formula (27) in ORR-I

Applying formula (26) in ORR-I we obtain

$\Pi[S_\cdot(\beta,\gamma^*,b)\mid\Lambda]=\sum_{k=0}^{K}\{E[S_\cdot(\beta,\gamma^*,b)\mid\bar O_k,\bar A_k]-E[S_\cdot(\beta,\gamma^*,b)\mid\bar O_k,\bar A_{k-1}]\}.$

So, for k = 0, ..., K,

But,

So formula (27) in ORR-I is proved if we show that

(5)

This follows immediately from the preceding proof of Result (b) of Section 3.2. Specifically, it was shown there that

Consequently, the left hand side of (5) is equal to

where the last equality follows from the definition and the fact that $E[\underline{\omega}^x_{k,K}(\bar O_K,\bar A_K)\mid\bar O_k,\bar A_k=\bar g_{x,k}(\bar O_k)]=1$ (this is just equality (1) applied with the utility u (O, A) = 1).

#### 2.4.3. Derivation of Formula (31) in ORR-I

It suffices to show that where

But by definition

where the last equality follows because

$E_\gamma[\omega_k^x(\gamma)\mid\bar O_k,\bar A_{k-1}]=\omega_{k-1}^x(\gamma)\,E_\gamma\!\left[\frac{I_{\{g_{x,k}(\bar O_k)\}}(A_k)}{\lambda_k(A_k\mid\bar O_k,\bar A_{k-1})}\,\middle|\,\bar O_k,\bar A_{k-1}\right]=\omega_{k-1}^x(\gamma)\,\frac{E_\gamma[I_{\{g_{x,k}(\bar O_k)\}}(A_k)\mid\bar O_k,\bar A_{k-1}]}{\lambda_k(g_{x,k}(\bar O_k)\mid\bar O_k,\bar A_{k-1})}=\omega_{k-1}^x(\gamma).$

### 2.5. Proof that b·, opt is Optimal

Write for short, · (b) · (b, ·, opt),

We will show that $J_\cdot(b)=E\{Q_\cdot(b)Q_\cdot(b_{\cdot,opt})'\}$ for · = par and · = sem. When either model ((16), ORR-I) or ((29), ORR-I) is correct, $\beta^*=\beta^\dagger$. Consequently, for · = par we have that $J_{par}(b)$ is equal to

For · = sem and with the definitions (x, Z) b (x, Z) – (Z) and sem (; β, γ, τ) Qsem (; β, γ, τ) – sem (; β, γ, τ), the same argument yields Jsem (b) equal to

$E\left[\left\{\int_{\mathcal X_{pos}}\tilde b(x,Z)\,\tilde Q_{sem}(x;\beta^\dagger,\gamma^\dagger,\tau^\dagger)\,dP_X(x)\right\}\times\left\{\int_{\mathcal X_{pos}}\tilde b_{sem,opt}(\tilde x,Z)\,\tilde Q_{sem}(\tilde x;\beta^\dagger,\gamma^\dagger,\tau^\dagger)'\,dP_X(\tilde x)\right\}\right]=E\left[\left\{\int_{\mathcal X_{pos}}\tilde b(x,Z)\,Q_{sem}(x;\beta^\dagger,\gamma^\dagger,\tau^\dagger)\,dP_X(x)\right\}\left\{\int_{\mathcal X_{pos}}\tilde b_{sem,opt}(\tilde x,Z)\,Q_{sem}(\tilde x;\beta^\dagger,\gamma^\dagger,\tau^\dagger)'\,dP_X(\tilde x)\right\}\right]=E\{Q_{sem}(b)Q_{sem}(b_{sem,opt})'\}.$

Now, with $var_A(\hat\beta_\cdot(b))$ denoting the asymptotic variance of $\hat\beta_\cdot(b)$, we have from the expansion ((32) in ORR-I)

and consequently

Thus, $0\le var_A(\hat\beta_\cdot(b)-\hat\beta_\cdot(b_{\cdot,opt}))=var_A(\hat\beta_\cdot(b))+var_A(\hat\beta_\cdot(b_{\cdot,opt}))-2cov_A(\hat\beta_\cdot(b),\hat\beta_\cdot(b_{\cdot,opt}))=var_A(\hat\beta_\cdot(b))-var_A(\hat\beta_\cdot(b_{\cdot,opt}))$, where the last equality uses $cov_A(\hat\beta_\cdot(b),\hat\beta_\cdot(b_{\cdot,opt}))=var_A(\hat\beta_\cdot(b_{\cdot,opt}))$. This concludes the proof.

## 3. Confidence Set for xopt (z) when $X$ is Finite and h· (z, x; β) is Linear in β

We first prove the assertion that the computation of the confidence set Bb entails an algorithm for determining if the intersection of $\#(X)-1$ half-spaces in $\mathbb{R}^p$ and a ball in $\mathbb{R}^p$ centered at the origin is non-empty. To do so, first note that linearity implies that $h_\cdot(z,x;\beta)=\sum_{j=1}^p s_j(x,z)\beta_j$ for some fixed functions $s_j$, j = 1, ..., p. Let $N=\#(X)$ and write $X=\{x_1,\dots,x_N\}$. The point $x_l$ is in Bb iff

(6)

Define the p × 1 vector $v_{lk}$ whose jth entry is equal to $s_j(x_l,z)-s_j(x_k,z)$, j = 1, ..., p. Define also the vectors and the constants . Then $\sum_{j=1}^p[s_j(x_l,z)-s_j(x_k,z)]\beta_j>0$ iff . Noting that β is in Cb iff is in the ball

we conclude that the condition in the display (6) is equivalent to

The set is a hyper-plane in $\mathbb{R}^p$ which divides the Euclidean space $\mathbb{R}^p$ into two half-spaces, one of which is . Thus, the condition in the last display imposes that the intersection of N – 1 half-spaces (each one defined by the condition $v_{lk}^{*\prime}u>a_{lk}$ for each k) and the ball $U$ is non-empty.

Turn now to the construction of a confidence set $Bb*$ that includes Bb. Our construction relies on the following Lemma.

Lemma. Let

where u0 is a fixed p × 1 real valued vector and Σ is a fixed non-singular p × p matrix.

Let α be a fixed, non-null, p × 1 real valued vector. Let $\tau_0\equiv\alpha'u_0$ and $\alpha^*=\Sigma^{1/2}\alpha$. Assume that $\alpha_1^*\neq 0$. Let $v_1^*$ be the p × 1 vector $(-\alpha_1^{*-1}\tau_0,0,\dots,0)'$. Let be the linear space generated by the p × 1 vectors $v_2^*=(-\alpha_1^{*-1}\alpha_2^*,1,0,0,\dots,0)'$, $v_3^*=(-\alpha_1^{*-1}\alpha_3^*,0,1,0,\dots,0)'$, ..., $v_p^*=(-\alpha_1^{*-1}\alpha_p^*,0,0,0,\dots,1)'$ (the signs are chosen so that each $v_j^*$ is orthogonal to $\alpha^*$) and define

where

$V^*=(v_2^*,\dots,v_p^*).$

Then there exists $u\in U$ satisfying

$\alpha'u=0$

if and only if

$c_0-\|v_{1,proj}^*\|^2\ge 0.$

Proof

Then, with $\tau_0\equiv\alpha'u_0$ and $\alpha^*=\Sigma^{1/2}\alpha$, we conclude that there exists $u\in U$ satisfying $\alpha'u=0$ if and only if there exists $u^*\in\mathbb{R}^p$ such that

Now, by the assumption $\alpha_1^*\neq 0$ we have $\alpha^{*\prime}u^*=-\tau_0$ iff $u_1^*=-\alpha_1^{*-1}[\tau_0+\sum_{j=2}^p\alpha_j^*u_j^*]$. Thus, the collection of all vectors $u^*$ satisfying $\alpha^{*\prime}u^*=-\tau_0$ is the linear variety

where the $v_j^*$'s and are defined in the statement of the lemma. The vector $v_{1,proj}^*$ is the residual from the (Euclidean) projection of $v_1^*$ onto the space .

Thus, $\alpha^{*\prime}u^*=-\tau_0$ iff for some . Consequently, by the orthogonality of $v_{1,proj}^*$ with we have that for $u^*$ satisfying $\alpha^{*\prime}u^*=-\tau_0$ it holds that

Therefore, since is unrestricted,

if and only if

$c_0-\|v_{1,proj}^*\|^2\ge 0.$
(7)

This concludes the proof of the Lemma.
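The Lemma's criterion can be cross-checked numerically: $c_0-\|v_{1,proj}^*\|^2$ should be nonnegative exactly when the Σ-metric distance from $u_0$ to the hyperplane $\alpha'u=0$, namely $|\alpha'u_0|/\sqrt{\alpha'\Sigma\alpha}$, is at most $\sqrt{c_0}$. The sketch below uses hypothetical numbers; the signs of the $v_j^*$'s are taken so that they are orthogonal to $\alpha^*$, and a Cholesky factor is used as the square root of Σ:

```python
import numpy as np

# Numerical sketch of the Lemma (hypothetical data): the hyperplane
# {u : alpha'u = 0} meets the ellipsoid {u : (u - u0)' Sigma^{-1} (u - u0) <= c0}
# iff c0 - ||v1_proj||^2 >= 0.

rng = np.random.default_rng(0)
p = 4
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)            # a fixed non-singular (here SPD) Sigma
u0 = rng.normal(size=p)
alpha = rng.normal(size=p)
c0 = 2.5

S_half = np.linalg.cholesky(Sigma)         # any factor with S S' = Sigma works here
alpha_star = S_half.T @ alpha              # plays the role of alpha* = Sigma^{1/2} alpha
tau0 = alpha @ u0
assert alpha_star[0] != 0                  # the Lemma assumes alpha*_1 != 0

v1 = np.zeros(p); v1[0] = -tau0 / alpha_star[0]
V = np.zeros((p, p - 1))                   # columns v_2^*, ..., v_p^*
for j in range(1, p):
    V[0, j - 1] = -alpha_star[j] / alpha_star[0]
    V[j, j - 1] = 1.0
# residual of the Euclidean projection of v1 onto the column space of V
t, *_ = np.linalg.lstsq(V, v1, rcond=None)
v1_proj = v1 - V @ t

# cross-check: the min-norm point of {alpha*'u* = -tau0} has norm
# |tau0| / ||alpha*||, i.e. the Sigma-metric distance from u0 to alpha'u = 0
assert np.isclose(np.linalg.norm(v1_proj), abs(tau0) / np.linalg.norm(alpha_star))
assert (c0 - v1_proj @ v1_proj >= 0) == (tau0**2 <= c0 * (alpha_star @ alpha_star))
```

The agreement between the projection-based quantity and the closed-form distance is what makes the check in (7) a one-line computation in practice.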

To construct the set $Bb*$ we note that the condition in the display (6) implies the negation, for every subset $X(−l)$ of $X−{xl}$, of the statement

(8)

Thus, suppose that for a given $x_l$ we find that (8) holds for some subset $X^{(-l)}$ of $X-\{x_l\}$; then we know that $x_l$ cannot be in Bb. The proposed confidence set $B_b^*$ is comprised of the points in $X$ for which (8) can be negated for every subset $X^{(-l)}$. The set $B_b^*$ is conservative (i.e. it includes Bb but is not necessarily equal to Bb) because the simultaneous negation of the statement (8) for all $X^{(-l)}$ does not imply the statement (6). To check if condition (8) holds for a given subset $X^{(-l)}$ and point $x_l$, we apply the result of the Lemma as follows. We define the vector $\alpha\in\mathbb{R}^p$ whose jth component is equal to , j = 1, ..., p, and the vector . We also define the constant $c_0=\chi^2_{p,1-\alpha}$ and the matrix Σ = · (b). We compute the vectors $\alpha^*=\Sigma^{1/2}\alpha$, $v_1^*,\dots,v_p^*$ and the matrix $V^*$ as defined in the Lemma. We then check if condition (7) holds. If it holds, then the hyperplane comprised of the β's that satisfy the condition in display (8) with the < sign replaced by the = sign intersects the confidence ellipsoid Cb, in which case we know that (8) is false. If it does not hold, then we check if condition

(9)

holds. If (9) does not hold, then we conclude that (8) is false for this choice of $X^{(-l)}$. If (9) holds, then we conclude that (8) is true and we then exclude $x_l$ from the set $B_b^*$.

## 4. Positivity Assumption: Example

Suppose that K = 1 and that $R_k=R_k^g=1$ with probability 1 for k = 0, 1, so that no subject dies in either the actual world or the hypothetical world in which g is enforced in the population. Thus, for k = 0, 1, $O_k=L_k$ since both $T_k$ and $R_k$ are deterministic and hence can be ignored. Suppose that $L_k$ and $A_k$ are binary variables (and so, therefore, are $A_k^g$ and $L_k^g$) and that the treatment regime g specifies that

Assume that

(10)

Assumption PO imposes two requirements,

(11)

(12)

Because, by the definition of regime g, $A_0^g=1-L_0^g$, requirement (11) can be re-expressed as

Since indicators can only take the values 0 or 1 and , l0 = 0, 1 (by assumption (10)), the preceding equality is equivalent to

that is to say,

By the definition of λ0 (·|·) (see (3) in ORR-I), the last display is equivalent to

(13)

Likewise, because $A_1^g=L_0^g(1-L_1^g)$ and $A_0^g=1-L_0^g$, requirement (12) can be re-expressed as

or equivalently (again because the events $(L_0^g=l_0,L_1^g=l_1,A_0^g=1-l_0)$ and $(L_0^g=l_0,L_1^g=l_1)$ have the same probability),

Under the assumption (10), the last display is equivalent to

which, by the definition of λ1 (·|·, ·, ·) in ((3), ORR-I), is, in turn, the same as

(14)

We conclude that in this example, the assumption PO is equivalent to the conditions (13) and (14). We will now analyze what these conditions encode.

Condition (13) encodes two requirements:

• i) the requirement that in the actual world there exist subjects with L0 = 1 and L0 = 0 (i.e. that the conditioning events L0 = 1 and L0 = 0 have positive probabilities), for otherwise at least one of the conditional probabilities in (13) would not be defined, and
• ii) the requirement that in the actual world there be subjects with L0 = 0 that take treatment A0 = 1 and subjects with L0 = 1 that take treatment A0 = 0, for otherwise at least one of the conditional probabilities in (13) would be 0.

Condition i) is automatically satisfied, i.e. it does not impose a restriction on the law of L0, by the fact that $L0g=L0$ (since baseline covariates cannot be affected by interventions taking place after baseline) and the fact that we have assumed that , l0 = 0, 1.

Condition ii) is indeed a non-trivial requirement and coincides with the interpretation of the PO assumption given in section 3.1 for the case k = 0. Specifically, in the world in which g were to be implemented there would exist subjects with L0 = 0. In such a world the subjects with L0 = 0 would take treatment $A_0^g=1$, so the PO assumption for k = 0 requires that in the actual world there also be subjects with L0 = 0 that at time 0 take treatment A0 = 1. Likewise, the PO condition also requires that for k = 0 the same be true with 0 and 1 reversed in the right hand side of each of the equalities of the preceding sentence. A key point is that (11) does not require that in the observational world there be subjects with L0 = 0 that take A0 = 0, nor subjects with L0 = 1 that take A0 = 1. The intuition is clear: if we want to learn from data collected in the actual (observational) world what would happen in the hypothetical world in which everybody obeyed regime g, we must observe people in the study that obeyed the treatment regime at every level of L0, for otherwise if, say, nobody in the actual world with L0 = 0 obeyed regime g, there would be no way to learn what the distribution of the outcomes for subjects in that stratum would be if g were enforced. However, we don't care whether there are subjects with L0 = 0 that do not obey g, i.e. that take A0 = 0, because data from those subjects are not informative about the distribution of outcomes when g is enforced.

Condition (14) encodes two requirements:

• iii) the requirement that in the actual world there be subjects in the four strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1), (L0 = 1, L1 = 0, A0 = 0) and (L0 = 1, L1 = 1, A0 = 0) (i.e. that the conditioning events in the display (14) have positive probabilities), for otherwise at least one of the conditional probabilities would not be defined, and
• iv) the requirement that in the actual world there be subjects in every one of the strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1), (L0 = 1, L1 = 1, A0 = 0) that have A1 = 0 at time 1 and the requirement that there be subjects in stratum (L0 = 1, L1 = 0, A0 = 0) that have A1 = 1 at time 1, for otherwise at least one of the conditional probabilities in (14) would be 0.

Given condition ii) and the sequential randomization (SR) and consistency (C) assumptions, condition iii) is automatically satisfied, i.e. it does not impose a further restriction on the joint distribution of (L0, L1, A0). To see this, first note that by condition (ii) the strata (L0 = 0, A0 = 1) and (L0 = 1, A0 = 0) are non-empty. So condition (iii) is satisfied provided

But

and by (10). An analogous argument shows that . Finally, condition (iv) is a formalization of our interpretation of assumption PO in section 3.1 for k = 1. In the world in which g was implemented there would exist subjects with all four combinations of values for $(L_0^g,L_1^g)$. However, subjects with $L_0^g=l_0$ will only have $A_0^g=1-l_0$, so in this hypothetical world we will see at time 1 only four possible recorded histories, $(L_0^g=0,L_1^g=0,A_0^g=1)$, $(L_0^g=0,L_1^g=1,A_0^g=1)$, $(L_0^g=1,L_1^g=0,A_0^g=0)$ and $(L_0^g=1,L_1^g=1,A_0^g=0)$. In this hypothetical world subjects with the third recorded history will take $A_1^g=1$ (since $A_1^g=L_0^g(1-L_1^g)$) and subjects with the remaining three histories will take $A_1^g=0$. Thus, in the actual world we must require that there be subjects in each of the strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1) and (L0 = 1, L1 = 1, A0 = 0) that take A1 = 0 and subjects in the stratum (L0 = 1, L1 = 0, A0 = 0) that take A1 = 1. A point of note is that we don't make any requirement about the existence of subjects in strata other than the four mentioned in (iii) or about the treatment that subjects in these remaining strata take. The reason is that subjects that are not in the four strata of condition (iii) have already violated regime g at time 0, so they are uninformative about the outcome distribution under regime g.
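The two positivity conditions of this example can be checked mechanically for any candidate observed-data law by computing λ0 and λ1 from the joint distribution of (L0, A0, L1, A1). The law below is a hypothetical, strictly positive choice (so both conditions hold trivially); only the regime, a0 = 1 − l0 and a1 = l0(1 − l1), is taken from the example:

```python
import itertools

# Sketch for the K = 1 example: for a joint law over binary (L0, A0, L1, A1),
# check conditions (13) and (14), i.e. that the treatment probabilities
# lambda_k assign positive mass to the action the regime g would dictate.

states = list(itertools.product((0, 1), repeat=4))      # s = (l0, a0, l1, a1)
raw = {s: 1.0 + 0.5 * s[0] + 0.25 * s[1] + 0.125 * s[2] * s[3] for s in states}
z = sum(raw.values())
p = {s: raw[s] / z for s in states}                     # a strictly positive law

def lam0(a0, l0):
    """lambda_0(a0 | l0)."""
    num = sum(p[s] for s in states if s[0] == l0 and s[1] == a0)
    den = sum(p[s] for s in states if s[0] == l0)
    return num / den

def lam1(a1, l0, a0, l1):
    """lambda_1(a1 | l0, a0, l1)."""
    num = sum(p[s] for s in states if s[:3] == (l0, a0, l1) and s[3] == a1)
    den = sum(p[s] for s in states if s[:3] == (l0, a0, l1))
    return num / den

# condition (13): positivity of lambda_0 at the regime's first action
assert all(lam0(1 - l0, l0) > 0 for l0 in (0, 1))

# condition (14): positivity of lambda_1 at the regime's second action,
# in the four strata compatible with g at time 0
for l0, l1 in itertools.product((0, 1), repeat=2):
    a0, a1 = 1 - l0, l0 * (1 - l1)
    assert lam1(a1, l0, a0, l1) > 0
```

Replacing the hypothetical law with one that puts zero mass on, say, (L0 = 0, A0 = 1) would make the condition-(13) assertion fail, matching requirement ii) above.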

## Footnotes

*This work was supported by NIH grant R01 GM48704.

## References

• Orellana L, Rotnitzky A, Robins JM. Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: Main content. The International Journal of Biostatistics. 2010;6(2):Article 7. [PubMed]
