Int J Biostat. 2010 January 6; 6(2): Article 9.
Published online 2010 March 3. doi: 10.2202/1557-4679.1242
PMCID: PMC2854089

Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part II: Proofs of Results*

Abstract

In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

Keywords: dynamic treatment regime, double-robust, inverse probability weighted, marginal structural model, optimal treatment regime, causality

1. Introduction

In this companion article to “Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes. Part I: Main Content” (Orellana, Rotnitzky and Robins, 2010) we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

The notation, definitions and acronyms are the same as in the companion paper. Throughout, we refer to the companion article as ORR-I.

2. Proof of Claims in ORR-I

2.1. Proof of Lemma 1

First note that the consistency assumption C implies that the event

equation M1

is the same as the event

equation M2

So, with the definitions

equation M3

we obtain

equation M4

Next, note that the fact that equation M5 unless equation M6, equation M7 implies that

equation M8

Then, it follows from the second to last displayed equality that

equation M9

equation M10

So, part 1 of the Lemma is proved if we show that

equation M11
(1)

Define for any k = 0, ..., K,

equation M12

To prove equality (1) first note that,

equation M13

equation M14

where the second to last equality follows because given equation M15 and equation M16, equation M17 is a fixed, i.e. non-random function of equation M18 and consequently by the sequential randomization assumption, equation M19 is conditionally independent of AK given equation M20 and equation M21. The last equality follows by the definition of λK (·|·, ·).

We thus arrive at

equation M22

This proves the result for the case k = K. If k ≤ K – 1, we analyze the conditional expectation of the last equality in a similar fashion. Specifically, following the same steps as in the long sequence of equalities in the second to last display we arrive at

equation M23

the last equality follows once again from the sequential randomization assumption. This is so because given equation M24 and equation M25, equation M26 and equation M27 are fixed, i.e. deterministic, functions of equation M28 and the SR assumption ensures then that equation M29 and equation M30 are conditionally independent of AK–1 given equation M31 and equation M32.

Equality (1) is thus shown by continuing in this fashion recursively for K – 2, K – 3, ..., K – l until l is such that K – l = k – 1.

To show Part 2 of the Lemma, note that specializing part 1 to the case k = 0, we obtain

equation M33

Thus, taking expectations on both sides of the equality in the last display we obtain

equation M34

This shows part 2 because B is an arbitrary Borel set.
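For orientation, part 2 can be read as the usual inverse-probability-weighted representation of the law of O_g. The display below is a hedged sketch of that identity, not a transcription of the paper's own display: the weight W^g and its product form are our assumptions about the notation, written with the λk(·|·, ·) densities used throughout ORR-I.

```latex
% Hedged sketch of the identity behind Lemma 1, part 2, assuming the weights
% take the usual dynamic-regime IPW product form (the symbol W^g is ours):
\[
  \Pr\left(O_g \in B\right) \;=\; E\!\left[\,W^g\, I\{O \in B\}\,\right],
  \qquad
  W^g \;=\; \prod_{k=0}^{K}
    \frac{I\{A_k = g_k(\bar O_k, \bar A_{k-1})\}}
         {\lambda_k(A_k \mid \bar O_k, \bar A_{k-1})},
\]
% so that, for any utility u, E[u(O_g)] = E[W^g u(O)], which is what makes the
% IPW estimating functions of ORR-I unbiased for the regime-specific means.
```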

2.2. Proof of the Assertions in Section 3.2, ORR-I

2.2.1. Proof of Item (a)

Lemma 1, part 2 implies that the density equation M35 factors as

equation M36

In particular, the event equation M37 has probability 1. Consequently,

equation M38

Therefore,

equation M39
(2)

2.2.2. Proof of Item (b)

Lemma 1, part 1 implies that

equation M40

The left hand side of this equality is equal to

equation M41

and this coincides with the right hand side of (2) which, as we have just argued, is equal to φk+1 (ōk).

2.3. Proof of Lemma 2 in ORR-I

Let X be the identity random element on equation M42 and let EPmarg × PX (·) stand for the expectation operator computed under the product law Pmarg × PX for the random vector (O, A, X). Then the restriction stated in 2) is equivalent to

equation M43
(3)

and the restriction stated in 3) is equivalent to

equation M44
(4)

To show 2) let d (O, A, X) ≡ ωK (ŌK, ĀK) {u (O, A) – hpar (X, Z, β*)}.

(ORR-I, (14)) ⇒ (3).

equation M45

where the last equality follows because EPmarg × PX [d (O, A, X) |X = x, Z] = EPmarg [d (O, A, x) |Z] by the independence of (O, A) and X under the law Pmarg × PX and because, by assumption, EPmarg [d (O, A, x) |Z] = 0 μ-a.e.(x) and hence EPmarg [d (O, A, x) |Z] = 0 PX-a.e.(x), since PX and μ are mutually absolutely continuous.

(3) ⇒ (ORR-I, (14)). Define b* (X; Z) = EPmarg × PX [d (O, A, X) |X, Z]. Then,

equation M46

consequently, EPmarg × PX [d (O, A, X) |X, Z] = 0 with Pmarg × PX probability 1, which is equivalent to (ORR-I, (14)) because PX is mutually absolutely continuous with μ.

To show 3) redefine d (O, A, X) as ωK (ŌK, ĀK) {u (O, A) − hsem (X, Z, β*)}.

(ORR-I, (15)) ⇒ (4).

equation M47

where the third equality follows because EPmarg × PX {d (O, A, X) |X = x, Z} = EPmarg {d (O, A, x) |Z} and EPmarg {d (O, A, x) |Z} = q (Z) μ-a.e.(x) and hence PX-a.e.(x) by absolute continuity.

(4) ⇒ (ORR-I, (15)). Define b* (X; Z) = EPmarg × PX [d (O, A, X) |X, Z]. Then,

equation M48

Consequently, b* (X, Z) = EPmarg × PX [b* (X, Z) |Z] ≡ q (Z) PX-a.e.(X) and hence μ-a.e.(X) by absolute continuity. The result follows because b* (x, Z) = EPmarg × PX [d (O, A, X) |X = x, Z] = EPmarg [d (O, A, x) |Z].

2.4. Derivation of Some Formulas in Section 5.3, ORR-I

2.4.1. Derivation of Formula (26) in ORR-I

Any element

equation M49

of the set Λ is the sum of K + 1 uncorrelated terms because for any l, j such that 0 ≤ l < l + j ≤ K,

equation M50

Thus, Λ is equal to Λ0 ⊕ Λ1 ⊕ . . . ⊕ ΛK where

equation M51

and ⊕ stands for the direct sum operator. Then,

equation M52

and it can be easily checked that the projection Π [Q|Λk] of Q onto Λk equals E (Q|Ōk, Āk) – E (Q|Ōk, Āk–1).
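As a quick check of this projection formula, the following sketch verifies the two defining properties, under the assumption (ours, based on the direct-sum decomposition above) that Λk is the space of functions ak(Ōk, Āk) with E[ak | Ōk, Āk–1] = 0.

```latex
% Sketch: write Pi_k(Q) := E(Q | \bar O_k, \bar A_k) - E(Q | \bar O_k, \bar A_{k-1}).
% (i) Pi_k(Q) lies in Lambda_k, since iterated expectations give
\[
  E\left[\Pi_k(Q)\mid \bar O_k,\bar A_{k-1}\right]
  = E(Q\mid \bar O_k,\bar A_{k-1}) - E(Q\mid \bar O_k,\bar A_{k-1}) = 0 .
\]
% (ii) Q - Pi_k(Q) is orthogonal to Lambda_k: for any a_k(\bar O_k,\bar A_k)
%      with E[a_k | \bar O_k, \bar A_{k-1}] = 0,
\[
  E\left[\{Q-\Pi_k(Q)\}\,a_k\right]
  = E\left[\{E(Q\mid \bar O_k,\bar A_k)-\Pi_k(Q)\}\,a_k\right]
  = E\left[E(Q\mid \bar O_k,\bar A_{k-1})\,E\{a_k\mid \bar O_k,\bar A_{k-1}\}\right] = 0 .
\]
```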

2.4.2. Derivation of Formula (27) in ORR-I

Applying formula (26, in ORR-I) we obtain

equation M53

So, for k = 0, ..., K,

equation M54

But,

equation M55

So formula ((27), ORR-I) is proved if we show that

equation M56
(5)

This follows immediately from the preceding proof of Result (b) of Section 3.2. Specifically, it was shown there that

equation M57

Consequently, the left hand side of (5) is equal to

equation M58

where the last equality follows by the definition of equation M59 and the fact that equation M60 (as this is just the function equation M61 resulting from applying the integration to the utility u (O, A) = 1).

2.4.3. Derivation of Formula (31) in ORR-I

It suffices to show that equation M62 where

equation M63

But by definition

equation M64

where the last equality follows because

equation M65

2.5. Proof that b·, opt is Optimal

Write for short β̂· (b) ≡ β̂· (b, d·, opt),

equation M66

We will show that J· (b) = E {Q· (b) Q· (b·, opt)} for · = par and · = sem. When either model (16, ORR-I) or (29, ORR-I) is correct, β* = β. Consequently, for · = par we have that Jpar (b) is equal to

equation M67

For · = sem and with the definitions b̃ (x, Z) ≡ b (x, Z) – b (Z) and Q̃sem (x̃; β, γ, τ) ≡ Qsem (x̃; β, γ, τ) – Qsem (x̃; β, γ, τ), the same argument yields Jsem (b) equal to

equation M68

Now, with varA (β̂· (b)) denoting the asymptotic variance of β̂· (b), we have from expansion ((32) in ORR-I) that

equation M69

and consequently

equation M70

Thus, 0 ≤ varA (β̂· (b) – β̂· (b·, opt)) = varA (β̂· (b)) + varA (β̂· (b·, opt)) – 2covA (β̂· (b), β̂· (b·, opt)) = varA (β̂· (b)) – varA (β̂· (b·, opt)), which concludes the proof.
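The last chain of equalities rests on the covariance step sketched below. This is a hedged reconstruction, not the paper's display: it assumes that expansion (32) of ORR-I gives β̂·(b) the influence function Γ·(b)^{-1} Q·(b) and that J·(b) stands for Γ·(b), so that the identity J·(b) = E{Q·(b) Q·(b·, opt)′} shown above can be substituted in.

```latex
% Hedged sketch, under the assumptions stated in the lead-in:
\[
  \mathrm{cov}_A\!\left(\hat\beta_\cdot(b),\hat\beta_\cdot(b_{\cdot,\mathrm{opt}})\right)
  = \Gamma_\cdot(b)^{-1} E\{Q_\cdot(b)Q_\cdot(b_{\cdot,\mathrm{opt}})'\}\,
    \Gamma_\cdot(b_{\cdot,\mathrm{opt}})^{-1\prime}
  = \Gamma_\cdot(b_{\cdot,\mathrm{opt}})^{-1\prime},
\]
\[
  \mathrm{var}_A\!\left(\hat\beta_\cdot(b_{\cdot,\mathrm{opt}})\right)
  = \Gamma_\cdot(b_{\cdot,\mathrm{opt}})^{-1} E\{Q_\cdot(b_{\cdot,\mathrm{opt}})Q_\cdot(b_{\cdot,\mathrm{opt}})'\}\,
    \Gamma_\cdot(b_{\cdot,\mathrm{opt}})^{-1\prime}
  = \Gamma_\cdot(b_{\cdot,\mathrm{opt}})^{-1\prime},
\]
% so cov_A(beta_hat(b), beta_hat(b_opt)) = var_A(beta_hat(b_opt)), and the
% variance of the difference collapses to var_A(beta_hat(b)) - var_A(beta_hat(b_opt)).
```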

3. Confidence Set for xopt (z) when equation M71 is Finite and h· (z, x; β) is Linear in β

We first prove the assertion that the computation of the confidence set Bb entails an algorithm for determining whether the intersection of equation M72 half-spaces in Rp and a ball in Rp centered at the origin is non-empty. To do so, first note that linearity implies that equation M73 for some fixed functions sj, j = 1, ..., p. Let equation M74 and write equation M75. The point xl is in Bb iff

equation M76
(6)

Define the p × 1 vector equation M77 whose jth entry is equal to sj (xl, z) – sj (xk, z), j = 1, ..., p. Define also the vectors equation M78 and the constants equation M79. Then equation M80 iff equation M81. Noting that β is in Cb iff equation M82 is in the ball

equation M83

we conclude that the condition in the display (6) is equivalent to

equation M84

The set equation M85 is a hyperplane in Rp which divides the Euclidean space Rp into two half-spaces, one of which is equation M86. Thus, the condition in the last display requires that the intersection of N – 1 half-spaces (each one defined by the condition equation M87 for each k) and the ball equation M88 is non-empty.
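This feasibility question can be settled numerically with a small quadratic program. The sketch below is illustrative only (not the authors' code): it assumes the N – 1 half-spaces have been stacked as the rows of a matrix A with offsets c, so the constraints read A @ gamma >= c, and that the ball has radius r; these names stand in for the quantities in the displays above.

```python
import numpy as np
from scipy.optimize import minimize

def halfspaces_ball_nonempty(A, c, r):
    """Return True if {gamma : A @ gamma >= c} meets the ball {||gamma|| <= r}.

    Sketch only: A is an (N-1) x p array (one row per half-space), c has length
    N-1, and r > 0 is the ball radius.  We minimize ||gamma||^2 subject to the
    linear constraints; the intersection is non-empty iff the minimum is <= r^2.
    """
    p = A.shape[1]
    constraints = [{"type": "ineq", "fun": lambda g: A @ g - c}]
    res = minimize(lambda g: g @ g, x0=np.zeros(p),
                   constraints=constraints, method="SLSQP")
    if not res.success:          # constraint set empty (or the solver failed)
        return False
    return res.fun <= r ** 2 + 1e-9

# Toy usage with made-up numbers: gamma_1 >= 0.5 and gamma_2 >= -1, unit ball.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
c = np.array([0.5, -1.0])
print(halfspaces_ball_nonempty(A, c, r=1.0))   # True: gamma = (0.5, 0) works
```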

We now turn to the construction of a confidence set equation M89 that includes Bb. Our construction relies on the following Lemma.

Lemma. Let

equation M90

where u0 is a fixed p × 1 real valued vector and Σ is a fixed non-singular p × p matrix.

Let α be a fixed, non-null, p×1 real-valued vector. Let τ0 ≡ α′ u0 and α* = Σ1/2α. Assume that α1 ≠ 0. Let equation M91 be the p×1 vector equation M92. Let ϒ be the linear space generated by the p×1 vectors equation M93, equation M94 and define

equation M95

where

equation M96

Then there exists equation M97 satisfying

equation M98

if and only if

equation M99

Proof

equation M100

Then, with τ0 ≡ α′ u0 and α* = Σ1/2α, we conclude that there exists equation M101 satisfying α′ u = 0 if and only if there exists u* ∈ Rp such that

equation M102

Now, by the assumption equation M103 we have −α*′ u* = τ0 iff equation M104. Thus, the collection of all vectors u* satisfying −α*′ u* = τ0 is the linear variety

equation M105

where equation M106 and ϒ are defined in the statement of the lemma. The vector equation M107 is the residual from the (Euclidean) projection of equation M108 onto the space ϒ.

Thus, −α*′ u* = τ0 iff equation M109 for some equation M110. Consequently, by the orthogonality of equation M111 with ϒ we have that for u* satisfying −α*′ u* = τ0 it holds that

equation M112

Therefore, since equation M113 is unrestricted,

equation M114

if and only if

equation M115
(7)

This concludes the proof of the Lemma.
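To aid interpretation of the Lemma, the following is a hedged reconstruction of the criterion it delivers, under the assumption (ours) that the set in its statement is the ellipsoid {u : (u – u0)′ Σ^{-1} (u – u0) ≤ c}; the projection argument above then amounts to the one-line Lagrange computation:

```latex
% Hedged sketch, assuming the Lemma's set is {u : (u-u_0)' \Sigma^{-1} (u-u_0) <= c}.
% Stationarity of the Lagrangian gives u = u_0 + t\,\Sigma\alpha, and the
% constraint \alpha' u = 0 fixes t = -\alpha' u_0 / (\alpha'\Sigma\alpha), so
\[
  \min_{\alpha' u = 0}\,(u-u_0)'\Sigma^{-1}(u-u_0)
  \;=\; \frac{(\alpha' u_0)^2}{\alpha'\Sigma\alpha}
  \;=\; \frac{\tau_0^{2}}{\alpha^{*\prime}\alpha^{*}} ,
\]
% so a u in the ellipsoid with \alpha' u = 0 exists if and only if
% \tau_0^2 <= c \, \alpha'\Sigma\alpha, which is the form condition (7) should
% take under the stated assumption.
```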

To construct the set equation M116 we note that the condition in the display (6) implies the negation, for every subset equation M117 of equation M118, of the statement

equation M119
(8)

Thus, suppose that for a given xl we find that (8) holds for some subset equation M120 of equation M121; then we know that xl cannot be in Bb. The proposed confidence set equation M122 comprises the points in equation M123 for which statement (8) fails for every subset equation M124. The set equation M125 is conservative (i.e. it includes Bb but is not necessarily equal to Bb) because the simultaneous negation of the statement (8) for all equation M126 does not imply the statement (6). To check if condition (8) holds for any given subset equation M127 and xl, we apply the result of the Lemma as follows. We define the vector α ∈ Rp whose jth component is equal to equation M128, j = 1, ..., p, and the vector equation M129. We also define the constant equation M130 and the matrix Σ = Γ· (b). We compute the vectors equation M131, equation M132 and the matrix V* as defined in the Lemma. We then check if condition (7) holds. If it holds, then the hyperplane formed by the β's that satisfy the condition in display (8) with the < sign replaced by the = sign intersects the confidence ellipsoid Cb, in which case we know that (8) is false. If it does not hold, then we check if condition

equation M133
(9)

holds. If (9) does not hold, then we conclude that (8) is false for this choice of equation M134. If (9) holds, then we conclude that (8) is true and we then exclude xl from the set equation M135.
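A minimal numerical sketch of this screening step follows. It is not the authors' implementation: it assumes that, as sketched after the Lemma, condition (7) reduces to comparing (α′β̂ − τ)^2 with c · α′Σα for the confidence ellipsoid Cb = {β : (β − β̂)′ Σ^{-1} (β − β̂) ≤ c}, that statement (8) asserts α′β < τ for every β in Cb, and that condition (9) amounts to evaluating that inequality at the center β̂ when the hyperplane misses the ellipsoid; alpha, tau, beta_hat, Sigma and c are placeholder names.

```python
import numpy as np

def statement8_holds(alpha, tau, beta_hat, Sigma, c):
    """Hedged sketch: decide whether alpha' beta < tau for ALL beta in the
    ellipsoid C_b = {beta : (beta - beta_hat)' Sigma^{-1} (beta - beta_hat) <= c}.

    Step 1 (our reading of condition (7)): the hyperplane {alpha' beta = tau}
        meets C_b iff (alpha' beta_hat - tau)^2 <= c * alpha' Sigma alpha.
        If it does, the strict inequality fails somewhere, so (8) is false.
    Step 2 (our reading of condition (9)): otherwise the whole ellipsoid lies on
        one side of the hyperplane, so checking the center beta_hat decides (8).
    """
    gap = alpha @ beta_hat - tau
    if gap ** 2 <= c * (alpha @ Sigma @ alpha):   # hyperplane intersects C_b
        return False
    return gap < 0                                # ellipsoid entirely on the '<' side

# Toy usage with made-up numbers (p = 2):
alpha = np.array([1.0, -1.0])
beta_hat = np.array([0.0, 1.0])
Sigma = np.eye(2)
print(statement8_holds(alpha, tau=0.5, beta_hat=beta_hat, Sigma=Sigma, c=0.25))  # True
```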

4. Positivity Assumption: Example

Suppose that K = 1 and that equation M136 with probability 1 for k = 0, 1, so that no subject dies in either the actual world or the hypothetical world in which g is enforced in the population. Thus, for k = 0, 1, Ok = Lk since both Tk and Rk are deterministic and hence can be ignored. Suppose that Lk and Ak are binary variables (and so, therefore, are equation M137 and equation M138) and that the treatment regime g specifies that

equation M139

Assume that

equation M140
(10)

Assumption PO imposes two requirements,

equation M141
(11)

equation M142
(12)

Because, by the definition of regime g, equation M143, requirement (11) can be re-expressed as

equation M144

Since indicators can only take the values 0 or 1 and equation M145, l0 = 0, 1 (by assumption (10)), the preceding equality is equivalent to

equation M146

that is to say,

equation M147

By the definition of λ0 (·|·) (see (3) in ORR-I), the last display is equivalent to

equation M148
(13)

Likewise, because equation M149, and because equation M150 by the fact that equation M151, requirement (12) can be re-expressed as

equation M152

or, equivalently (again because the events equation M153 and equation M154 have the same probability by equation M155),

equation M156

Under the assumption (10), the last display is equivalent to

equation M157

which, by the definition of λ1 (·|·, ·, ·) in ((3), ORR-I), is, in turn, the same as

equation M158
(14)

We conclude that in this example, the assumption PO is equivalent to the conditions (13) and (14). We will now analyze what these conditions encode.

Condition (13) encodes two requirements:

  • i) the requirement that in the actual world there exist subjects with L0 = 1 and L0 = 0 (i.e. that the conditioning events L0 = 1 and L0 = 0 have positive probabilities), for otherwise at least one of the conditional probabilities in (13) would not be defined, and
  • ii) the requirement that in the actual world there be subjects with L0 = 0 that take treatment A0 = 1 and subjects with L0 = 1 that take treatment A0 = 0, for otherwise at least one of the conditional probabilities in (13) would be 0.

Condition i) is automatically satisfied, i.e. it does not impose a restriction on the law of L0, by the fact that equation M159 (since baseline covariates cannot be affected by interventions taking place after baseline) and the fact that we have assumed that equation M160, l0 = 0, 1.

Condition ii) is indeed a non-trivial requirement and coincides with the interpretation of the PO assumption given in section 3.1 for the case k = 0. Specifically, in the world in which g were to be implemented there would exist subjects with L0 = 0. In such a world the subjects with L0 = 0 would take treatment equation M161, so the PO assumption for k = 0 requires that in the actual world there also be subjects with L0 = 0 that at time 0 take treatment A0 = 1. Likewise the PO condition also requires that for k = 0 the same be true with 0 and 1 reversed in the right hand side of each of the equalities of the preceding sentence. A key point is that (11) does not require that in the observational world there be subjects with L0 = 0 that take A0 = 0, nor subjects with L0 = 1 that take A0 = 1. The intuition is clear. If we want to learn from data collected in the actual (observational) world what would happen in the hypothetical world in which everybody obeyed regime g, we must observe people in the study that obeyed the treatment at every level of L0, for otherwise if, say, nobody in the actual world with L0 = 0 obeyed regime g there would be no way to learn what the distribution of the outcomes for subjects in that stratum would be if g were enforced. However, we do not care that there be subjects with L0 = 0 that do not obey g, i.e. that take A0 = 0, because data from those subjects are not informative about the distribution of outcomes when g is enforced.

Condition (14) encodes two requirements:

  • iii) the requirement that in the actual world there be subjects in the four strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1), (L0 = 1, L1 = 0, A0 = 0) and (L0 = 1, L1 = 1, A0 = 0) (i.e. that the conditioning events in the display (14) have positive probabilities), for otherwise at least one of the conditional probabilities would not be defined, and
  • iv) the requirement that in the actual world there be subjects in every one of the strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1), (L0 = 1, L1 = 1, A0 = 0) that have A1 = 0 at time 1 and the requirement that there be subjects in stratum (L0 = 1, L1 = 0, A0 = 0) that have A1 = 1 at time 1, for otherwise at least one of the conditional probabilities in (14) would be 0.

Given condition ii) and the sequential randomization (SR) and consistency (C) assumptions, condition iii) is automatically satisfied, i.e. it does not impose a further restriction on the joint distribution of (L0, L1, A0). To see this, first note that by condition (ii) the strata (L0 = 0, A0 = 1) and (L0 = 1, A0 = 0) are non-empty. So condition (iii) is satisfied provided

equation M162

But

equation M163

and equation M164 by (10). An analogous argument shows that equation M165. Finally, condition (iv) is a formalization of our interpretation of assumption PO in section 3.1 for k = 1. In the world in which g were implemented there would exist subjects that would have all four combinations of values for equation M166. However, subjects with equation M167 will only have equation M168, so in this hypothetical world we will see at time 1 only four possible recorded histories, equation M169, equation M170, equation M171 and equation M172. In this hypothetical world subjects with any of the first three possible recorded histories will take equation M173 and subjects with the last one will take equation M174. Thus, in the actual world we must require that there be subjects in each of the strata (L0 = 0, L1 = 0, A0 = 1), (L0 = 0, L1 = 1, A0 = 1), (L0 = 1, L1 = 1, A0 = 0) that take A1 = 0 and subjects in the stratum (L0 = 1, L1 = 0, A0 = 0) that take A1 = 1. A point of note is that we do not make any requirement about the existence of subjects in strata other than the four mentioned in (iii) or about the treatment that subjects in these remaining strata take. The reason is that subjects that are not in the four strata of condition (iii) have already violated regime g at time 0, so they are uninformative about the outcome distribution under regime g.
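To make conditions (13) and (14) concrete, here is a small sketch of an empirical positivity check for this K = 1 binary example. It is illustrative only: the data are made up, the column names are ours, and the regime functions encode our reading of the example (treat at time 0 iff L0 = 0, and treat at time 1 iff L0 = 1 and L1 = 0, as the strata in condition iv) suggest).

```python
import pandas as pd

# Toy observed data for the K = 1 example: binary L0, A0, L1, A1.
df = pd.DataFrame({
    "L0": [0, 0, 1, 1, 0, 1, 0, 1],
    "A0": [1, 1, 0, 0, 0, 1, 1, 0],
    "L1": [0, 1, 0, 1, 1, 0, 0, 1],
    "A1": [0, 0, 1, 0, 0, 1, 0, 0],
})

# Regime g as read off the example: g0 depends only on L0, g1 only on (L0, L1).
def g0(l0):
    return 1 - l0

def g1(l0, l1):
    return int(l0 == 1 and l1 == 0)

# Condition (13): P(A0 = g0(l0) | L0 = l0) > 0 for l0 = 0, 1.
for l0 in (0, 1):
    sub = df[df.L0 == l0]
    p = (sub.A0 == g0(l0)).mean()
    print(f"P(A0 = g0 | L0 = {l0}) = {p:.2f}")

# Condition (14): among subjects who followed g at time 0, the probability of
# taking A1 = g1(L0, L1) must be positive in each of the four strata of iii).
followers = df[df.A0 == 1 - df.L0]
for (l0, l1), sub in followers.groupby(["L0", "L1"]):
    p = (sub.A1 == g1(l0, l1)).mean()
    print(f"P(A1 = g1 | L0 = {l0}, L1 = {l1}, A0 = {g0(l0)}) = {p:.2f}  (n = {len(sub)})")
```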

Footnotes

*This work was supported by NIH grant R01 GM48704.

References

  • Orellana L, Rotnitzky A, Robins JM (2010). Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: Main content. The International Journal of Biostatistics 6(2): Article 7.
