Stat Med. Author manuscript; available in PMC 2009 October 15.

Published in final edited form as:

Stat Med. 2008 October 15; 27(23): 4658–4677.

doi: 10.1002/sim.3113. PMCID: PMC2562926

NIHMSID: NIHMS46485

*Correspondence to: Department of Statistics, North Carolina State University, Raleigh, NC 27695-8203, U.S.A


There is considerable debate regarding whether and how covariate adjusted analyses should be used in the comparison of treatments in randomized clinical trials. Substantial baseline covariate information is routinely collected in such trials, and one goal of adjustment is to exploit covariates associated with outcome to increase precision of estimation of the treatment effect. However, concerns are routinely raised over the potential for bias when the covariates used are selected *post hoc*, and over the potential for adjustment based on a model of the relationship between outcome, covariates, and treatment to invite a "fishing expedition" for the model leading to the most dramatic effect estimate. By appealing to the theory of semiparametrics, we are led naturally to a characterization of all treatment effect estimators and to principled, practically-feasible methods for covariate adjustment that yield the desired gains in efficiency and that allow covariate relationships to be identified and exploited while circumventing the usual concerns. The methods and strategies for their implementation in practice are presented. Simulation studies and an application to data from an HIV clinical trial demonstrate the performance of the techniques relative to existing methods.

The primary objective of many randomized clinical trials is to evaluate the difference in mean outcome between two treatments. In typical moderate-to-large-scale trials, the setting addressed herein, in addition to the primary outcome, extensive baseline data are collected on each participant prior to treatment administration, such as baseline observations on the outcome and qualitative and quantitative variables reflecting demographics, prior medical and treatment history, and physiological status. Some of these baseline covariates may be related to the primary outcome and may exhibit chance imbalances between the two treatment groups.

A vast literature exists on whether or not and how to “adjust” the analysis of treatment difference for the effects of covariates in order to increase the precision of the estimator for this treatment effect, thereby increasing statistical power, and to take imbalances into account [1, 2, 3, 4, 5, 6]. Indeed, that many studies fail to meet their accrual goals and the desire to use the data from patient volunteers most efficiently are strong rationales for this practice. However, covariate adjustment has inspired considerable controversy among numerous authors [1, 7, 8, 9] and regulatory authorities [10, 11] because of the potential for biased estimation due to *post hoc* selection of covariates and, more ominously, the temptation for analysts to engage in a “fishing expedition” to find “the covariate model that best accentuates the estimate and/or statistical significance of the treatment difference” [1]. Thus, trialists and regulatory agencies have been reluctant to endorse adjusted analyses, and current guidelines assert strongly that, if adjustment is undertaken, only a few such covariates should be used, chosen based on prior knowledge of their prognostic value; and these should be prespecified in the protocol or analysis plan, as should be the form of the model relating covariates to outcome to be used for adjustment (e.g., [11, 12]). However, associations between covariates and outcome may not be appreciated at the design stage [1], particularly if such information was not collected systematically in previous studies, but may be evident only at the analysis stage, subsequent to unblinding. An unfortunate consequence of these recommendations may be that a critical opportunity to enhance efficiency and reveal important, real effects may be lost.

Clearly, approaches that seek to resolve the tension between the need to make the best use of the data and concerns over the properties of adjusted estimators and possible lack of objectivity are needed. Pocock et al. [1] strongly encourage research along these lines, arguing that covariate adjustment should be carried out whenever appropriate while simultaneously making “one’s statistical policy for covariate adjustment completely objective.” Some approaches in this spirit, such as that of Koch et al. [2], which does not require regression modeling of covariate effects, have been proposed. Nonetheless, to our knowledge, a general, practically-feasible strategy that achieves this goal has not been elucidated.

In this article, we consider covariate adjustment in estimation of treatment differences in randomized clinical trials from the formal point of view of semiparametric theory (e.g., [13]). This leads to characterization of all treatment effect estimators, facilitating comparisons among competing methods. Moreover, emerging elegantly from this perspective is principled adjustment methodology that supports objective incorporation of covariate effects while simultaneously exploiting covariate-outcome relationships to increase precision. Because the approach automatically separates modeling of these relationships from evaluation of the treatment effect, it obviates concerns over suspicious “data dredging” exercises.

In Section 2, we introduce a formal model framework and identify the parameter representing the treatment effect of interest. We present the semiparametric theory results in Section 3. In Section 4, based on the theory, we propose a practical strategy for adjusted analysis. The methods are applied to data from an HIV clinical trial in Section 5, and simulation studies demonstrating performance are summarized in Section 6.

Consider a clinical trial with *n* subjects sampled from a population of interest. Let *Y* denote the outcome on which the primary analysis will be based (continuous or discrete), and let *Z* = 1 or 0 with probabilities *δ* or 1 − *δ* indicating randomization to, e.g., experimental treatment or control. Let *X* (*p*×1) be a vector of baseline covariates; *X* may include a baseline measurement on *Y* and additional quantitative and qualitative characteristics recorded prior to treatment initiation. Randomization guarantees statistical independence of *Z* and *X*, written as *Z* ⊥ *X*, which is critical to our further developments. The observed data from the trial are (*Y*_{i}, *Z*_{i}, *X*_{i}), *i* = 1, …, *n*, independent and identically distributed (iid) across *i*.

Within this framework, we may identify unambiguously the “treatment effect” that is ordinarily targeted by the primary analysis, given by

$$\beta =E(Y|Z=1)-E(Y|Z=0),$$

(1)

i.e., the difference in mean outcome between the two treatments. This may be represented equivalently by *E*(*Y*|*Z*) = *μ*_{0} + *βZ*, where *μ*_{0} = *E*(*Y* | *Z* = 0); note that this is a model only for the mean outcome for each treatment, with no additional assumptions, such as normality or equal variances in the two groups, implied.

The usual treatment effect *β* in (1) is defined *unconditionally*; i.e., as the effect of treatment relative to control averaged across the population. An alternative measure of treatment effect is defined *conditional* on a subset of the population having the particular covariate values *X*,

$${\beta}_{x}=E(Y|Z=1,X=x)-E(Y|Z=0,X=x).$$

(2)

For continuous *Y*, a standard approach to estimate *β _{x}* is to postulate a linear regression model

$$E(Y|Z,X)={\gamma}_{0}+{\gamma}_{X}^{T}X+{\beta}_{Z}Z,$$

(3)

often referred to as the "analysis of covariance" (ANCOVA) model. Model (3) is a popular basis for "covariate adjustment," where *β*_{Z} is interpreted as the "treatment effect after adjusting for the covariates *X*." For binary outcome, the analogous adjusted analysis is often based on the logistic regression model

$$E(Y|Z,X)=\frac{exp({\gamma}_{0}+{\gamma}_{X}^{T}X+{\gamma}_{Z}Z)}{1+exp({\gamma}_{0}+{\gamma}_{X}^{T}X+{\gamma}_{Z}Z)},$$

(4)

where *γ*_{Z} denotes the log-odds ratio for treatment conditional on *X*, the conditional measure of treatment effect on this scale.

The unconditional treatment effect (1) is overwhelmingly the focus of the primary analysis in most randomized trials, with inference on conditional treatment effects as in (2) often specified as secondary analyses. However, this is a matter of some debate; some researchers advocate that the conditional treatment effect (2) is a more appropriate basis for primary inferences; e.g., Hauck et al. [12] “recommend that the primary analysis adjust for important prognostic covariates in order to come as close as possible to the clinically most relevant subject-specific measure of treatment effect.” Clearly, both unconditional and conditional treatment effects are of considerable and complementary importance in developing a comprehensive understanding of how treatments compare. The former provides a measure of overall effect useful for broad policy recommendations, which explains its role as the primary focus of regulatory authorities. Inference on the latter can reveal interactions between treatment and patient characteristics; qualitative such interactions (i.e, the direction of the effect changes depending on *x*) may have critical implications for use of the treatment in certain subpopulations.

With continuous outcome, this debate rarely receives explicit mention because, if (3) is an exactly correct representation of the relationship *E*(*Y* |*Z*, *X*), then *β* and *β*_{x} coincide. In fact, it is well-appreciated that, with binary outcome, no such coincidence occurs: even when (4) holds exactly, the conditional log-odds ratio *γ*_{Z} generally differs from the corresponding unconditional log-odds ratio, owing to the noncollapsibility of the odds ratio.

In this article, we do not enter into this debate. Rather, given the long-standing status of the unconditional treatment effect as the primary parameter of interest in most clinical trials, we focus henceforth on covariate adjustment in the context of inference on *β* in (1), with the goal of making this inference as precise as possible under very general conditions.

We consider estimation of *β* based on the iid data (*Y*_{i}, *Z*_{i}, *X*_{i}), *i* = 1, …, *n*, in the framework of Section 2.

Under these conditions, when one of the elements of *X* is a baseline observation on *Y*, Leon et al. [14] and Davidian et al. [15] derive the class of all consistent estimators for *β* by appealing directly to semiparametric theory [13] or by making an analogy to missing data problems and using the semiparametric missing-data theory of Robins et al. [16]. We comment on this “missing data” analogy below. Because a baseline outcome is just another baseline covariate, these results are immediately applicable here and lead to the following.

Let the numbers of subjects randomized to experimental treatment and control be
${n}_{1}={\sum}_{i=1}^{n}{Z}_{i}$ and
${n}_{0}={\sum}_{i=1}^{n}(1-{Z}_{i})$, *n* = *n*_{0} + *n*_{1}. Write the sample means of outcome in each group as
${\overline{Y}}^{(1)}={n}_{1}^{-1}{\sum}_{i=1}^{n}{Z}_{i}{Y}_{i}$ and
${\overline{Y}}^{(0)}={n}_{0}^{-1}{\sum}_{i=1}^{n}(1-{Z}_{i}){Y}_{i}$, with
$\overline{Z}={n}^{-1}{\sum}_{i=1}^{n}{Z}_{i}={n}_{1}/n$ the sample proportion randomized to treatment. Then it follows from References [14, 15] that all reasonable consistent and asymptotically normal estimators for *β* either can be written exactly as or are asymptotically equivalent to an expression of the form

$${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-\sum _{i=1}^{n}({Z}_{i}-\overline{Z})\left\{{n}_{0}^{-1}{h}^{(0)}({X}_{i})+{n}_{1}^{-1}{h}^{(1)}({X}_{i})\right\},$$

(5)

where *h*^{(}^{k}^{)}(*X*), *k* = 0, 1, are arbitrary scalar functions of *X*.
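As a concrete illustration of (5), the following sketch (illustrative code, not from the paper; the function name is ours) computes an estimator of this form given per-subject values of the arbitrary functions *h*^{(0)} and *h*^{(1)}. Note that adding a constant to both functions leaves the estimate unchanged, since Σ_{i} (*Z*_{i} − *Z̄*) = 0.

```python
import numpy as np

def augmented_estimator(y, z, h0, h1):
    """Estimator of form (5): the unadjusted difference in sample means
    minus an 'augmentation' term built from arbitrary functions of the
    baseline covariates, supplied here as per-subject vectors h0, h1."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    h0, h1 = np.asarray(h0, float), np.asarray(h1, float)
    n1, n0 = z.sum(), (1 - z).sum()
    zbar = z.mean()
    diff = y[z == 1].mean() - y[z == 0].mean()
    aug = np.sum((z - zbar) * (h0 / n0 + h1 / n1))
    return diff - aug
```

With `h0 = h1 = 0` this returns the unadjusted estimator, and it is invariant to shifting both functions by a constant, illustrating why an intercept in a fitted *h* costs nothing.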

When *h*^{(0)}(*X*_{i}) = *h*^{(1)}(*X*_{i}) ≡ 0, (5) reduces to the unadjusted estimator ${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}$, which is thus itself a member of this class.

As noted above, a popular adjusted estimator for *β* is the least squares estimator for *β*_{Z} in the ANCOVA model (3), which we denote as ${\widehat{\beta}}_{\mathit{ANCOVA}1}$; it is asymptotically equivalent to an estimator of form (5) with

$${h}^{(0)}({X}_{i})={h}^{(1)}({X}_{i})={\mathrm{\sum}}_{XY}^{T}{\mathrm{\sum}}_{XX}^{-1}{X}_{i},$$

(6)

where

$${\mathrm{\sum}}_{XY}=E[\{X-E(X)\}\{Y-E(Y)\}],\phantom{\rule{0.50001em}{0ex}}{\mathrm{\sum}}_{XX}=E[\{X-E(X)\}{\{X-E(X)\}}^{T}],$$

(7)

the covariance between *X* and *Y* and the covariance matrix of *X* in the overall population, respectively. Because ${\widehat{\beta}}_{\mathit{ANCOVA}1}$ is asymptotically equivalent to an estimator of form (5), we may conclude immediately that it is consistent for *β* and asymptotically normal under entirely unrestrictive conditions; normality of the outcome conditional on (*Z*, *X*), continuous outcome, or constancy of var(*Y* |*Z*, *X*) are not required. Indeed, the model (3) from which it is derived need not even be a correct representation of *E*(*Y* |*Z*, *X*) for these results to hold.

One could in fact use formulation (5) to estimate *β* directly by replacing Σ_{XY} and Σ_{XX} in (6) by their sample counterparts computed from all *n* observations.
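A minimal sketch of this direct plug-in (our illustrative code; sample covariances over all *n* subjects stand in for Σ_{XY} and Σ_{XX} in the linear *h* of (6)):

```python
import numpy as np

def beta_direct(y, z, X):
    """Plug-in version of (5)-(6): estimate Sigma_XY and Sigma_XX by their
    sample counterparts (pooled over both arms) and use the resulting
    common linear function h^(0) = h^(1)."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    X = np.asarray(X, float)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    Sxy = Xc.T @ yc / len(y)            # sample Sigma_XY, shape (p,)
    Sxx = Xc.T @ Xc / len(y)            # sample Sigma_XX, shape (p, p)
    h = X @ np.linalg.solve(Sxx, Sxy)   # h(X_i) as in (6), linear in X_i
    n1, n0, zbar = z.sum(), (1 - z).sum(), z.mean()
    diff = y[z == 1].mean() - y[z == 0].mean()
    return diff - np.sum((z - zbar) * (h / n0 + h / n1))
```

When the covariate happens to be exactly balanced between arms, the augmentation term vanishes and the result equals the unadjusted difference in means.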

From (6), the *h*^{(}^{k}^{)}(*X*_{i}), *k* = 0, 1, to which ${\widehat{\beta}}_{\mathit{ANCOVA}1}$ corresponds are identical and linear in *X*_{i}. Defining the treatment-specific covariances

$${\mathrm{\sum}}_{XY}^{(k)}=E[\{X-E(X)\}\{Y-E(Y)\}|Z=k],\phantom{\rule{0.38889em}{0ex}}k=0,1,$$

(8)

and noting that ${\mathrm{\sum}}_{XY}=(1-\delta ){\mathrm{\sum}}_{XY}^{(0)}+\delta {\mathrm{\sum}}_{XY}^{(1)}$, the common function in (6) may be written equivalently as

$${h}^{(0)}({X}_{i})={h}^{(1)}({X}_{i})={\{(1-\delta ){\mathrm{\sum}}_{XY}^{(0)}+\delta {\mathrm{\sum}}_{XY}^{(1)}\}}^{T}{\mathrm{\sum}}_{XX}^{-1}{X}_{i}.$$

(9)

Other familiar estimators may be shown to be asymptotically equivalent to estimators of form (5), with corresponding *h*^{(0)} = *h*^{(1)} that, while still linear in *X*_{i}, are possibly different from (6) and (9). Consider an ANCOVA model like (3) but also including an interaction term between *X* and *Z*, written in the centered form

$$E\{Y-E(Y)|Z,X\}={\gamma}_{X}^{T}\{X-E(X)\}+{\gamma}_{XZ}^{T}\{X-E(X)\}\{Z-E(Z)\}+{\beta}_{Z}\{Z-E(Z)\},$$

(10)

and fitted by least squares regression of *Y*_{i} − $\overline{Y}$ on *X*_{i} − $\overline{X}$, (*X*_{i} − $\overline{X}$)(*Z*_{i} − $\overline{Z}$), and *Z*_{i} − $\overline{Z}$. The resulting least squares estimator for *β*_{Z}, which we denote as ${\widehat{\beta}}_{\mathit{ANCOVA}2}$, is asymptotically equivalent to an estimator of form (5) with

$${h}^{(0)}({X}_{i})={h}^{(1)}({X}_{i})={\{\delta {\mathrm{\sum}}_{XY}^{(0)}+(1-\delta ){\mathrm{\sum}}_{XY}^{(1)}\}}^{T}{\mathrm{\sum}}_{XX}^{-1}{X}_{i}.$$

(11)

Thus, that ${\widehat{\beta}}_{\mathit{ANCOVA}2}$ is consistent for *β* and asymptotically normal under very general conditions is immediate and holds even if (10) is an incorrect representation of *E*(*Y* |*Z*, *X*).
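To make the construction concrete, here is a sketch (our code, not the authors') of fitting the centered interaction model (10) by least squares; the coefficient on *Z* − *Z̄* is the ANCOVA-with-interaction estimate of the treatment effect:

```python
import numpy as np

def beta_ancova2(y, z, X):
    """ANCOVA with treatment-covariate interaction, fitted in the centered
    form (10): regress Y - Ybar on X - Xbar, (X - Xbar)(Z - Zbar), and
    Z - Zbar; return the coefficient of Z - Zbar."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    X = np.asarray(X, float)
    Xc = X - X.mean(axis=0)
    zc = z - z.mean()
    D = np.column_stack([Xc, Xc * zc[:, None], zc])  # centered design
    coef, *_ = np.linalg.lstsq(D, y - y.mean(), rcond=None)
    return coef[-1]
```

Centering removes the intercept and makes the *Z* coefficient directly interpretable as the estimated unconditional treatment effect.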

Expressions (9) and (11) are identical if either *δ* = 0.5 or
${\mathrm{\sum}}_{XY}^{(0)}={\mathrm{\sum}}_{XY}^{(1)}$. Accordingly, under these conditions, ${\widehat{\beta}}_{\mathit{ANCOVA}1}$ and ${\widehat{\beta}}_{\mathit{ANCOVA}2}$ are asymptotically equivalent and hence equally precise (asymptotically). Otherwise, ${\widehat{\beta}}_{\mathit{ANCOVA}2}$ has smaller asymptotic variance than ${\widehat{\beta}}_{\mathit{ANCOVA}1}$; in fact, this variance is the smallest among all estimators of form (5) for which *h*^{(}^{k}^{)}(*X*_{i}), *k* = 0, 1, are restricted to be linear functions of *X*_{i}.

Koch et al. [2] propose an estimator for *β* given by

$${\widehat{\beta}}_{\mathit{KOCH}}={\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-{V}_{XY}^{T}{V}_{XX}^{-1}({\overline{X}}^{(1)}-{\overline{X}}^{(0)}),$$

(12)

where ${\overline{X}}^{(0)}={n}_{0}^{-1}{\sum}_{i=1}^{n}(1-{Z}_{i}){X}_{i};\phantom{\rule{0.38889em}{0ex}}{\overline{X}}^{(1)}={n}_{1}^{-1}{\sum}_{i=1}^{n}{Z}_{i}{X}_{i},$

$${V}_{XY}={n}_{0}^{-1}{\widehat{\mathrm{\sum}}}_{XY}^{(0)}+{n}_{1}^{-1}{\widehat{\mathrm{\sum}}}_{XY}^{(1)},\phantom{\rule{0.50001em}{0ex}}{V}_{XX}={n}_{0}^{-1}{\widehat{\mathrm{\sum}}}_{XX}^{(0)}+{n}_{1}^{-1}{\widehat{\mathrm{\sum}}}_{XX}^{(1)},$$

(13)

$$\begin{array}{c}{\widehat{\mathrm{\sum}}}_{XY}^{(k)}={({n}_{k}-1)}^{-1}\sum _{i=1}^{n}I({Z}_{i}=k)({Y}_{i}-{\overline{Y}}^{(k)})({X}_{i}-{\overline{X}}^{(k)}),\\ {\widehat{\mathrm{\sum}}}_{XX}^{(k)}={({n}_{k}-1)}^{-1}\sum _{i=1}^{n}I({Z}_{i}=k)({X}_{i}-{\overline{X}}^{(k)}){({X}_{i}-{\overline{X}}^{(k)})}^{T},\phantom{\rule{0.50001em}{0ex}}k=0,1,\end{array}$$

(14)

and *I*(·) is the indicator function. Noting that

$${\overline{X}}^{(1)}-{\overline{X}}^{(0)}=\frac{n}{{n}_{0}{n}_{1}}\sum _{i=1}^{n}({Z}_{i}-\overline{Z}){X}_{i},\phantom{\rule{0.50001em}{0ex}}\frac{n}{{n}_{0}{n}_{1}}={n}_{0}^{-1}+{n}_{1}^{-1},$$

(15)

it is easy to appreciate that ${\widehat{\beta}}_{\mathit{KOCH}}$ is asymptotically equivalent to an expression of form (5), where
${V}_{XY}^{T}{V}_{XX}^{-1}$ is replaced by its limit in probability, so that the common *h*^{(0)} = *h*^{(1)} coincides with (11) and ${\widehat{\beta}}_{\mathit{KOCH}}$ is asymptotically equivalent to ${\widehat{\beta}}_{\mathit{ANCOVA}2}$.
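For completeness, the Koch et al. estimator (12)–(14) can be computed directly; a sketch (function name ours) assuming *X* is supplied as an *n* × *p* array:

```python
import numpy as np

def beta_koch(y, z, X):
    """Koch et al. covariate-adjusted estimator (12), built from the
    treatment-specific sample covariances in (13)-(14)."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    X = np.asarray(X, float)
    Vxy, Vxx = 0.0, 0.0
    means = {}
    for k in (0, 1):
        sel = z == k
        nk = sel.sum()
        Xk, yk = X[sel], y[sel]
        means[k] = (yk.mean(), Xk.mean(axis=0))
        Xc, yc = Xk - Xk.mean(axis=0), yk - yk.mean()
        Vxy = Vxy + (Xc.T @ yc) / (nk - 1) / nk  # n_k^{-1} Sigma-hat_XY^{(k)}
        Vxx = Vxx + (Xc.T @ Xc) / (nk - 1) / nk  # n_k^{-1} Sigma-hat_XX^{(k)}
    dY = means[1][0] - means[0][0]
    dX = means[1][1] - means[0][1]
    return dY - Vxy @ np.linalg.solve(Vxx, dX)   # eq. (12)
```

When the arm-specific covariate means happen to coincide, the correction term vanishes and the estimate equals the unadjusted difference.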

Yang and Tsiatis [21] discuss an estimator that involves considering “response” vectors
${\mathcal{Y}}_{i}={({Y}_{i},{X}_{i}^{T})}^{T}$, *i* = 1, . . . , *n*, and fitting the model
$E(\mathcal{Y}|Z)={({\mu}_{0}+\beta Z,{\mu}_{X}^{T})}^{T}$ via solution of corresponding generalized estimating equations (GEEs), with separate unstructured working covariance matrices for each treatment group. Generalizing their results, it is possible to show that the resulting estimator for *β* is also asymptotically equivalent to ${\widehat{\beta}}_{\mathit{ANCOVA}2}$.

We have verified that several common estimators are members of the class of all consistent estimators for *β* and correspond to *h*^{(}^{k}^{)} in (5) that are *linear* in *X*_{i}. It is natural to wonder whether there are estimators with different *h*^{(}^{k}^{)} that achieve greater precision. Semiparametric theory [13] identifies the optimal choice of *h*^{(}^{k}^{)}, *k* = 0, 1, minimizing the asymptotic variance among all estimators of form (5), as

$${h}^{(k)}({X}_{i})=E({Y}_{i}|{Z}_{i}=k,{X}_{i}),\phantom{\rule{0.50001em}{0ex}}k=0,1;$$

(16)

an alternative, direct argument is given in the Appendix. That is, the “optimal” *h*^{(}^{k}^{)}, *k* = 0, 1, are the true regression relationships of *Y* on *X* for each treatment separately, which may neither be linear in *X* nor the same function of *X* for each treatment. Given (16), then, one way to view the estimators discussed above is that they are equivalent to postulating for these true regressions the same linear function for each *k* and will achieve the smallest possible variance in the event that the true regressions are both exactly equal to this linear function.

Result (16) suggests that better estimators for *β* may be constructed by positing separate models for the *E*(*Y* |*Z* = *k*, *X*), *k* = 0, 1, that come as close as possible to the true relationships and substituting resulting treatment-specific predicted values for each *i* into (5). Here, any parametric functional forms may be considered. As noted above, substitution of estimators for parameters in these models will lead to an estimator for *β* having the same asymptotic variance as if the functions of *X* represented by them were fully specified; see the Appendix. Thus, if the models do correspond to the true mean relationships for each treatment, then the resulting estimator for *β* will achieve the smallest asymptotic variance, and, as shown explicitly in the Appendix, improve over that of the unadjusted estimator ${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}$. However, failure to specify these models correctly will *not* affect consistency; the estimator will have larger variance than the "optimal," but will still be consistent and asymptotically normal by virtue of being in class (5). Indeed, estimators in class (5) are "semiparametric" because they are consistent and asymptotically normal under no assumptions about any aspect of the distribution of *Y* given (*Z*, *X*), including the form of *E*(*Y* |*Z* = *k*, *X*), *k* = 0, 1. Elegantly, if *h*^{(}^{k}^{)}, *k* = 0, 1, in (5) coincide with the true treatment-specific relationships, then the estimator will be "optimal." In fact, if one restricts the *h*^{(}^{k}^{)}, *k* = 0, 1, to be linear models in *X* with an intercept, even if the true *E*(*Y* |*Z* = *k*, *X*) are not linear, it may be shown (see the Appendix) that the asymptotic variance of the resulting estimator for *β* will improve over that of the unadjusted.
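The construction based on (16) can be sketched as follows (illustrative code, assuming linear working models with intercept fitted by least squares separately within each arm; names are ours):

```python
import numpy as np

def beta_optimal_linear(y, z, X):
    """Fit separate linear models (with intercept) for E(Y | Z=k, X),
    k = 0, 1, by least squares within each arm, then plug the
    treatment-specific predictions for ALL subjects into (5)."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    X = np.asarray(X, float)
    n = len(y)
    D = np.column_stack([np.ones(n), X])      # design with intercept
    preds = {}
    for k in (0, 1):
        sel = z == k
        coef, *_ = np.linalg.lstsq(D[sel], y[sel], rcond=None)
        preds[k] = D @ coef                   # predictions for all i
    n1, zbar = z.sum(), z.mean()
    n0 = n - n1
    diff = y[z == 1].mean() - y[z == 0].mean()
    aug = np.sum((z - zbar) * (preds[0] / n0 + preds[1] / n1))
    return diff - aug
```

In a noiseless example where the true arm-specific regressions are exactly linear, the estimator recovers the treatment effect exactly even when the covariate is imbalanced between arms.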

There is a further, key feature of this approach that makes it especially compelling in light of the concerns reviewed in Section 1. Covariate adjustment in practice is typically based on a model for the regression of *Y* on *both Z* and *X*, e.g., (3), where the effect of treatment is inextricably linked to that of the covariates, fueling suspicions regarding subjectivity due to ability to inspect the effect estimator during the modeling exercise. In contrast, the proposed estimator *decouples* evaluation of the treatment effect from regression modeling, as *E*(*Y* |*Z* = *k*, *X*), *k* = 0, 1, are postulated and fitted *separately* by treatment. This suggests an objective approach to covariate adjustment, as modeling may be carried out independently of reference to treatment effect, circumventing such bias. Simultaneously, the flexibility afforded by the opportunity to exploit freely modeling methods and expertise allows the covariate information to be best used to obtain as efficient an estimator for *β* as possible. On these grounds, we propose this approach for routine use in trial analysis, and in Section 4, we suggest a practical strategy for implementation.

An approximate sampling variance for the proposed semiparametric estimator obtained via separate model-building exercises as above may be specified by noting that $\widehat{\beta}$ may be re-cast as an M-estimator [13, Section 3.2], [23], whence the standard "sandwich" technique may be used to derive a variance estimator. We present a practical expression for an approximate sampling variance in Section 4, and in Section 6 we show that, for sample sizes under which we envision use of the proposed approach, it leads to reliable assessments of precision.

Like the method of Koch et al. [2], the proposed estimators provide a straightforward basis for covariate adjustment when the outcome is binary and interest focuses on the unconditional difference in proportions experiencing the event (e.g., [3]) rather than the log-odds ratio.

We close this section by touching on the “missing data” analogy. As we have indicated, one way to motivate the class of estimators (5) is to conceptualize inference on *β* as a “missing data problem;” see Reference [14] for fuller discussion. Ideally, if we could observe *Y* on each subject under *both* treatments, we would have complete sample information on treatment effect. Of course, this is usually impossible, but randomization still facilitates a valid treatment comparison, albeit using less information than the “ideal:” for subjects randomized to experimental treatment, we observe only their outcome under that treatment; the outcome they would have experienced under control is hence “missing,” and vice versa. Covariate adjustment may be viewed as an attempt to use covariates that are correlated with outcome to recover some of the “lost” information (relative to the “ideal”) due to this “missingness.” Notably, the form of estimators in the class (5) is exactly that encountered when semiparametric theory is used in “actual” missing data problems [13, 16].

We now outline a practical strategy for exploiting the foregoing developments in the analysis of randomized clinical trials. We envision the following series of steps:

- Partition the data into the two sets determined by the randomized treatment groups; denote the data for treatment *k* by ${\mathcal{D}}^{(k)}=\{({Y}_{i},{X}_{i}):{Z}_{i}=k\}$, *k* = 0, 1.
- Based on each of ${\mathcal{D}}^{(k)}$ separately, develop parametric models for *E*(*Y*|*Z* = *k*, *X*), *k* = 0, 1. Because for each *k* this uses only ${\mathcal{D}}^{(k)}$, advantage may be taken of any available techniques to achieve a model as close to the true *E*(*Y*|*Z* = *k*, *X*) as possible, yielding as good predictions as possible, without concerns over bias. One may inspect graphical evidence and entertain different functional forms and covariate transformations; in general, any sensible modeling strategies [27] may be used. One may also consider "automated" methods; e.g., for continuous outcome, one may focus on linear models involving an intercept; all elements *X*_{ℓ}, ℓ = 1, …, *p*, of *X*; all squared terms ${X}_{\ell}^{2}$, ℓ = 1, …, *p*; and all two-way interactions *X*_{ℓ}*X*_{m}, ℓ ≠ *m*. Model selection procedures may also be used. Forward, backward, or stepwise selection methods are a natural choice owing to their availability in standard software. Penalized methods, such as LASSO [24] or SCAD [25], which seek to minimize prediction error through selection of the penalty via some form of cross-validation, are also possibilities, as are other techniques [26]. The separate model development may be implemented in several ways in a cooperative group or pharmaceutical company setting. Modeling for each *k* may be carried out sequentially by the same analysts, who may or may not be members of the study team. Alternatively, two teams of analysts may be designated, with each provided only the data for its assigned treatment. For total transparency, the two analysis teams may be completely independent of the analysts who will prepare the final analysis; e.g., contracted from outside the group or sponsor solely for this purpose. The teams may be given flexibility to exploit resident expertise in their model development efforts. A more conservative approach would dictate the specific modeling techniques to be employed and guidelines on their use in the trial protocol.
- Denote the models so developed by *f*_{k}(*X*, *α*_{k}), *k* = 0, 1, and let ${\widehat{\alpha}}_{k}$, *k* = 0, 1, be the estimators for the parameters *α*_{k} (*p*_{k} × 1) in these models, obtained, for example, by least squares for linear models (including an intercept to ensure efficiency gain over ${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}$) or by logistic regression. For each *i* = 1, …, *n*, form predicted values ${\widehat{f}}_{0,i}={f}_{0}({X}_{i},{\widehat{\alpha}}_{0})$ and ${\widehat{f}}_{1,i}={f}_{1}({X}_{i},{\widehat{\alpha}}_{1})$ for *i* under each treatment. The analysis team(s) responsible for developing each model may provide the form of the fitted model to the analysts responsible for inference on the treatment effect, who may then calculate the predicted values directly.
- The estimator may then be calculated by the analysts responsible for the final analysis as

  $$\widehat{\beta}={\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-\sum _{i=1}^{n}({Z}_{i}-\overline{Z})\left({n}_{0}^{-1}{\widehat{f}}_{0,i}+{n}_{1}^{-1}{\widehat{f}}_{1,i}\right).$$

  (17)

  Using the "sandwich" technique, an estimator for the sampling variance of (17) may be obtained. Although semiparametric theory dictates that, asymptotically, there should be no effect of estimating the parameters in the postulated models *f*_{k}, *k* = 0, 1, the sandwich estimator can understate the true sampling variation for small *n*, likely due in part to second-order effects of this estimation. This phenomenon was noted for the variance estimator for ${\widehat{\beta}}_{\mathit{KOCH}}$ given by Koch et al. [2] by Lesaffre and Senn [4], who proposed a small-sample correction when *n*_{0} = *n*_{1}. Accordingly, we propose the variance estimator

  $$\widehat{\text{var}}(\widehat{\beta})=C\sum _{i=1}^{n}{\left[\{{n}_{1}^{-1}{Z}_{i}-{n}_{0}^{-1}(1-{Z}_{i})\}{Y}_{i}-{n}^{-1}\widehat{\beta}-({Z}_{i}-\overline{Z})\left({n}_{0}^{-1}{\widehat{f}}_{0,i}+{n}_{1}^{-1}{\widehat{f}}_{1,i}\right)-({Z}_{i}-\overline{Z})\left\{{n}_{0}^{-1}\left({\overline{Y}}^{(0)}-{\overline{f}}_{0}\right)+{n}_{1}^{-1}\left({\overline{Y}}^{(1)}-{\overline{f}}_{1}\right)\right\}\right]}^{2},$$

  (18)

  where ${\overline{f}}_{k}={n}_{k}^{-1}{\sum}_{i=1}^{n}I({Z}_{i}=k){\widehat{f}}_{k,i}$, *k* = 0, 1, and *C* is a small-sample "correction factor" (see the Appendix). When the models *f*_{k} are linear with intercept and fitted by treatment-specific least squares, the final term in braces in (18) is equal to zero.
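The final two computational steps above can be sketched numerically. This is an illustrative implementation (not the authors' software): it takes the arm-specific predicted values as given and sets the small-sample correction factor *C* to 1.

```python
import numpy as np

def adjusted_beta_and_variance(y, z, f0hat, f1hat, C=1.0):
    """Covariate-adjusted estimator (17) and sandwich-type variance
    estimator (18), from outcomes y, treatment indicators z, and
    per-subject predictions f0hat, f1hat from the treatment-specific
    models (no small-sample correction: C = 1 by default)."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    f0hat, f1hat = np.asarray(f0hat, float), np.asarray(f1hat, float)
    n = len(y)
    n1 = z.sum()
    n0 = n - n1
    zbar = n1 / n
    ybar1, ybar0 = y[z == 1].mean(), y[z == 0].mean()
    aug = (z - zbar) * (f0hat / n0 + f1hat / n1)
    beta = ybar1 - ybar0 - aug.sum()                      # eq. (17)
    f0bar = f0hat[z == 0].mean()                          # arm means of the
    f1bar = f1hat[z == 1].mean()                          # predicted values
    infl = ((z / n1 - (1 - z) / n0) * y - beta / n - aug
            - (z - zbar) * ((ybar0 - f0bar) / n0 + (ybar1 - f1bar) / n1))
    return beta, C * np.sum(infl ** 2)                    # eq. (18)
```

With all predictions set to zero, this collapses to the unadjusted difference in means with the usual sandwich variance $\sum_{Z_i=1}(Y_i-{\overline{Y}}^{(1)})^2/n_1^2+\sum_{Z_i=0}(Y_i-{\overline{Y}}^{(0)})^2/n_0^2$, a useful sanity check.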

Appealing to the asymptotic normality of $\widehat{\beta}$, one may construct Wald 100(1−*α*)% confidence intervals for the true treatment effect in the usual way as
$\widehat{\beta}\pm {z}_{\alpha /2}{\{\widehat{var}(\widehat{\beta})\}}^{1/2}$, where *z*_{α/2} is the (1 − *α*/2) quantile of the standard normal distribution.
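Given $\widehat{\beta}$ and $\widehat{var}(\widehat{\beta})$, the Wald interval is immediate; a small sketch (function name ours) using only the Python standard library:

```python
from statistics import NormalDist

def wald_ci(beta_hat, var_hat, alpha=0.05):
    """100(1 - alpha)% Wald confidence interval for the treatment effect."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}
    half = z * var_hat ** 0.5
    return beta_hat - half, beta_hat + half
```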

We demonstrate the proposed methods and contrast them to competing techniques by application to data from 2139 HIV-infected subjects enrolled in AIDS Clinical Trials Group Protocol 175 (ACTG 175), which randomized subjects to four different antiretroviral regimens in equal proportions: zidovudine (ZDV) monotherapy, ZDV+didanosine (ddI), ZDV+zalcitabine, and ddI monotherapy [28]. We follow References [14, 15] and consider two groups: ZDV monotherapy, with *n*_{0} = 532 subjects, and the other three groups combined, with *n*_{1} = 1607 subjects, so that *δ* = 0.75. We focus on analysis of the differences in mean CD4 count (cells/mm^{3}, *Y*) at 20 ± 5 weeks post-baseline between these two treatment groups. For potential use in covariate adjustment, we consider the following baseline variables: CD4 count (cells/mm^{3}), CD8 count (cells/mm^{3}), age (years), weight (kg), Karnofsky score (scale of 0-100), all of which are continuous measures; and indicator variables for hemophilia, homosexual activity, history of intravenous drug use, race (0=white, 1=non-white), gender (0=female), antiretroviral history (0=naive, 1=experienced), and symptomatic status (0=asymptomatic).

Because they often exhibit skewed distributions, CD4 count outcomes are routinely analyzed on a transformed scale (e.g., cube-root, fourth-root, or logarithmic). However, as long as the skewness is not severe, comparison of mean responses on their original scale is reasonable, more readily interpretable, and consistent with the way in which clinicians think about these measures in practice. Figure 1 shows histograms of CD4 at 20 ± 5 weeks for each treatment and suggests that this view is appropriate. Of course, because all of the usual estimators are semiparametric as members of class (5), they are consistent and asymptotically normal regardless of the true distributions of the data. We thus consider inference on *β* in (1).

Table I shows results for estimation of *β* using several methods, including the unadjusted estimator ${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}$ and, because one of the baseline covariates is CD4 count, the usual estimator based on "change scores,"
${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-({\overline{X}}_{CD4}^{(1)}-{\overline{X}}_{CD4}^{(0)})$, where
${\overline{X}}_{CD4}^{(k)}$ is mean baseline CD4 count in group *k* = 0, 1, which, using (15), may be written in the form (5). Also presented are ${\widehat{\beta}}_{\mathit{ANCOVA}1}$, ${\widehat{\beta}}_{\mathit{ANCOVA}2}$, ${\widehat{\beta}}_{\mathit{KOCH}}$, and the proposed estimator (17), with treatment-specific regression models developed by forward selection on linear terms only and by forward selection also allowing second-order terms ("Forward-2").

All methods indicate strong evidence of a treatment difference. All estimates are very similar with the exception of the unadjusted estimate, which is slightly lower due to a mild imbalance for baseline CD4 between groups. Baseline CD4 exhibits moderate association with CD4 at 20 ± 5 weeks, with correlation coefficients of roughly 0.6 in each treatment and a hint of curvature in the relationships; see Figure 1 of Reference [14]. Failure of the unadjusted estimator to take this relationship into account results in a much larger standard error than those of the other estimators; moreover, although the change score estimator offers substantial improvement, inclusion of additional covariate information yields further gains in precision. The proposed estimator with forward selection on linear terms and ${\widehat{\beta}}_{\mathit{KOCH}}$ are virtually identical; allowing second-order effects to enter in the forward selection for the treatment-specific regression models leads to very little additional reduction in estimated sampling variation; the resulting models include the square of baseline CD4, but because this effect is so mild, little gain is realized. Note also that the usual least squares standard errors reported by standard software for the ANCOVA estimators rest on assumptions, such as constant variance, that, as discussed in Section 3, are not required for the sandwich-based standard errors.

The fitted treatment-specific models selected by “Forward-2” are, in obvious notation,

$$\begin{array}{l}E(Y|Z=0,X)\approx -79.705+1.599(\text{CD}4)-0.0007{(\text{CD}4)}^{2}-0.107(\text{CD}4\times \text{HEMO})\\ -0.005(\text{CD}4\times \text{WT})+0.013(\text{WT}\times \text{KARN})-0.040(\text{CD}8\times \text{HIST})-23.199(\text{HOMO}\times \text{RACE})\end{array}$$

(20)

$$\begin{array}{l}E(Y|Z=1,X)\approx 95.445+1.100(\text{CD}4)-0.0005{(\text{CD}4)}^{2}-142.288(\text{HOMO})\\ -0.178(\text{CD}4\times \text{DRUG})-0.087(\text{CD}4\times \text{RACE})+0.033(\text{CD}8\times \text{HEMO})-0.014(\text{CD}8\times \text{HOMO})\\ -0.021(\text{CD}8\times \text{HIST})-0.720(\text{AGE}\times \text{HIST})-0.554(\text{AGE}\times \text{SYMP})-0.706(\text{WT}\times \text{HEMO})\\ +1.282(\text{WT}\times \text{DRUG})+1.688(\text{KARN}\times \text{HOMO})-28.321(\text{DRUG}\times \text{RACE})\\ -45.337(\text{DRUG}\times \text{SEX})+35.981(\text{DRUG}\times \text{HIST})+24.032(\text{RACE}\times \text{HIST})-3.602(\text{SEX}\times \text{HIST}),\end{array}$$

(21)

with treatment-specific variance estimates var(*Y* |*Z* = 0, *X*) ≈ (95.82)^{2} and var(*Y* |*Z* = 1, *X*) ≈ (115.63)^{2}, and coefficients of determination *R*^{2} = 0.50 and 0.38.

It is important to recognize that we are not interested in interpreting models (20) and (21) themselves. What is important is that they represent functions of *X* yielding predictions that come as close as possible to the values of the true treatment-specific regressions at the *X _{i}*. Thus, that these models do not, for example, include all main effect terms involving variables appearing in the interaction terms is of no consequence for the purpose of estimating *β*.
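Although the fitted models themselves are incidental, the way they enter the estimator is concrete. As a rough sketch (not the authors' software), using a toy one-covariate linear data-generating model and ordinary least squares for the treatment-specific working regressions, the estimator whose asymptotic expansion appears in (A.6) of the Appendix can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 4000, 2.0
X = rng.normal(size=n)                      # one baseline covariate (toy example)
Z = rng.binomial(1, 0.5, size=n)            # randomized treatment indicator
Y = beta_true * Z + 1.5 * X + rng.normal(size=n)

def ols_predict(x_fit, y_fit, x_all):
    # Fit a working model for E(Y | Z = k, X) by least squares within one arm,
    # then predict at ALL subjects' covariates.
    A = np.column_stack([np.ones_like(x_fit), x_fit])
    coef, *_ = np.linalg.lstsq(A, y_fit, rcond=None)
    return coef[0] + coef[1] * x_all

f0 = ols_predict(X[Z == 0], Y[Z == 0], X)   # control-arm working model
f1 = ols_predict(X[Z == 1], Y[Z == 1], X)   # treated-arm working model

n1 = Z.sum(); n0 = n - n1
unadjusted = Y[Z == 1].mean() - Y[Z == 0].mean()
# Augmentation term: predictions evaluated at every subject's X_i
augmentation = np.mean((Z - Z.mean()) * ((n / n0) * f0 + (n / n1) * f1))
beta_hat = unadjusted - augmentation
print(beta_hat)
```

The key point mirrors the text: each arm's fitted model is evaluated at *every* subject's covariates, and the augmentation term has mean zero by randomization, so misspecified working models cost efficiency but not consistency.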

We report on several simulation studies to demonstrate the performance of the proposed methods, each involving 5000 Monte Carlo data sets.

We consider first estimation of *β* under two scenarios. The initial scenario is based on the fit of the ACTG 175 data in Section 5. For each simulated data set, we generated for each of *n* subjects the continuous baseline covariates CD4 count, CD8 count, age, weight, and Karnofsky score from a multivariate normal distribution with the empirical mean and covariance matrix of these variables in the data. Independently, baseline binary indicators for hemophilia, homosexual activity, history of drug use, race, gender, antiretroviral history, and symptomatic status were generated for each subject from independent Bernoulli distributions using the observed data proportions for each. Treatment indicator *Z* was generated from Bernoulli(*δ*) for each subject, independently of all other variables. Finally, CD4 count at 20 ± 5 weeks for each subject was generated from a normal distribution with conditional mean (20) or (21) and conditional variance given after (21), depending on his/her treatment assignment and covariates. The true value of *β* = 54.203, with *R*^{2} = 0.50 and 0.39 for the treatment-specific regressions for *k* = 0, 1, consistent with the data. For each data set, *β* and standard errors were estimated using all methods in Table I. We also estimated *β* using (17), but with the fitted values replaced by the true treatment-specific regressions *E*(*Y* |*Z* = *k*, *X _{i}*) for each subject, providing the “Benchmark” of maximum achievable precision.
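A compressed sketch of this generation scheme, with illustrative moments standing in for the empirical ACTG 175 values (which are not reproduced here) and a simplified stand-in for the fitted conditional means (20)–(21):

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 400, 0.5

# Illustrative stand-ins for the empirical means/SDs of the continuous covariates
# (CD4, CD8, age, weight, Karnofsky); the paper uses the observed moments.
mu = np.array([350.0, 980.0, 35.0, 75.0, 95.0])
sd = np.array([120.0, 480.0, 9.0, 13.0, 6.0])
cov = np.diag(sd ** 2)          # the paper uses the full empirical covariance matrix
X_cont = rng.multivariate_normal(mu, cov, size=n)

# Seven binary indicators from independent Bernoullis; the proportions here
# are illustrative, not the observed ACTG 175 proportions.
props = np.array([0.08, 0.66, 0.13, 0.71, 0.18, 0.59, 0.17])
X_bin = (rng.uniform(size=(n, props.size)) < props).astype(int)

Z = rng.binomial(1, delta, size=n)          # treatment, independent of covariates

# Outcome from a normal with arm-specific conditional mean and variance;
# a crude stand-in for the fitted models (20)-(21) and their residual SDs.
mean_y = np.where(Z == 1, 95.0 + 1.1 * X_cont[:, 0], -80.0 + 1.6 * X_cont[:, 0])
sd_y = np.where(Z == 1, 115.6, 95.8)
Y = rng.normal(mean_y, sd_y)
```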

Table II shows results for two instances of this scenario: *n* = 2139 and *δ* = 0.75, as in ACTG 175; and *n* = 400 and *δ* = 0.5, representing a moderate-sized trial with the 1:1 randomization common in practice. As all estimators showed negligible bias, bias is not reported. For both cases, any form of adjustment yields considerable efficiency gain over the unadjusted estimator. Improvement over simple adjustment based on change scores is achieved by incorporating additional covariates. The proposed method, β̂_{ANCOVA1}, and β̂_{KOCH} show similar precision, likely because baseline CD4 has a strong linear but only mild quadratic relationship with outcome, and other, weaker covariate relationships are captured adequately by main effect terms in all estimators, a view supported by the results for the “Benchmark” estimator, which shows little additional gain in efficiency. All methods except ANCOVA with least squares standard errors, shown for comparison, yield Monte Carlo confidence interval coverage close to the nominal level.

To emphasize this, we considered a second scenario identical to the first except that a stronger quadratic effect in baseline CD4 was introduced in the true *E*(*Y* |*Z* = *k*, *X*), *k* = 0, 1, while maintaining *R*^{2} for these relationships at 0.50 and 0.39, and *β* = 54.203. This was accomplished by replacing the first three terms in (20) by −247.074 + 2.850(CD4) − 0.0026(CD4)^{2} and those in (21) by −82.931 + 2.400(CD4) − 0.0025(CD4)^{2}. Table III shows results for *n* = 400, *δ* = 0.5. “Forward-1,” which considers only linear terms in elements of *X*, and β̂_{ANCOVA1}, which likewise adjusts only linearly, cannot capture the quadratic relationship, whereas “Forward-2,” which allows second-order terms to enter, achieves a further gain in precision.

As noted above, confidence intervals based on β̂_{ANCOVA1} and the usual standard errors obtained from the output of the least squares fit of (3) achieve Monte Carlo coverage exceeding the nominal level in Table II. Comparison of the average of these estimated least squares standard errors to the Monte Carlo standard deviation shows that this is because the former tends to overstate the true sampling variation. If the ANCOVA model (3) is a correct representation of *E*(*Y* |*Z*, *X*), and if in truth var(*Y* |*Z*, *X*) is constant, then the least squares standard errors will be consistent for the true sampling standard deviation of β̂_{ANCOVA1}. However, if these assumptions are violated, then this need not be the case; indeed, these assumptions do not hold in our simulation scenarios. Valid standard errors and nominal coverage may be obtained using the “sandwich” formula (18), as shown in Tables II and III, because (18) is not predicated on these assumptions. Thus, if ANCOVA is the basis for adjustment, as is widely proposed, least squares standard errors should not be used in general.
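The phenomenon is easy to reproduce with generic OLS code (this is not the paper's formula (18), just the standard HC0 sandwich applied to an ANCOVA fit): under arm-dependent error variance and unequal allocation, the model-based standard error for the treatment coefficient overstates the sampling variation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, delta = 2000, 0.75
X = rng.normal(size=n)
Z = rng.binomial(1, delta, size=n)
# Arm-dependent error variance: the constant-variance OLS assumption fails.
Y = 2.0 * Z + 1.5 * X + rng.normal(size=n) * np.where(Z == 1, 3.0, 1.0)

M = np.column_stack([np.ones(n), Z, X])     # ANCOVA design: intercept, Z, X
coef, *_ = np.linalg.lstsq(M, Y, rcond=None)
resid = Y - M @ coef
bread = np.linalg.inv(M.T @ M)

# Model-based covariance: sigma^2 (M'M)^{-1}
sigma2 = resid @ resid / (n - M.shape[1])
se_ols = np.sqrt(sigma2 * np.diag(bread))[1]        # SE for the Z coefficient

# HC0 sandwich covariance: (M'M)^{-1} M' diag(resid^2) M (M'M)^{-1}
meat = M.T @ (M * resid[:, None] ** 2)
se_hc0 = np.sqrt(np.diag(bread @ meat @ bread))[1]
print(se_ols, se_hc0)       # model-based SE noticeably exceeds the sandwich SE here
```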

For each of the two scenarios with *n* = 400, *δ* = 0.5, we modified the intercept term in the true relationships *E*(*Y* |*Z* = *k*, *X*), *k* = 0, 1, so that the true value of *β* = 0, 15, and 30, and for each value of *β* we report in Table IV the proportion of 5000 Monte Carlo data sets for which a Wald test based on each estimator in Tables II and III rejected the null hypothesis *β* = 0 in favor of the one-sided alternative *β* > 0, where all tests were carried out at significance level 0.025. All tests exhibit the nominal level under the null hypothesis; under alternatives, the proposed methods achieve the highest power in both scenarios, notably in scenario 2.
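For illustration, this size and power calculation can be mimicked in a few lines; the sketch below uses only the unadjusted difference-in-means Wald statistic, a toy data model, and fewer replicates than the paper's 5000:

```python
import numpy as np

rng = np.random.default_rng(3)
n, delta, reps = 400, 0.5, 2000

def wald_rejects(beta_true):
    # One Monte Carlo data set; toy linear model, not the paper's scenarios.
    X = rng.normal(size=n)
    Z = rng.binomial(1, delta, size=n)
    Y = beta_true * Z + 1.5 * X + rng.normal(size=n)
    y1, y0 = Y[Z == 1], Y[Z == 0]
    beta_hat = y1.mean() - y0.mean()
    se = np.sqrt(y1.var(ddof=1) / y1.size + y0.var(ddof=1) / y0.size)
    return beta_hat / se > 1.96              # one-sided Wald test at level 0.025

size = np.mean([wald_rejects(0.0) for _ in range(reps)])    # should be near 0.025
power = np.mean([wald_rejects(0.5) for _ in range(reps)])
print(size, power)
```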

Proportion of 5000 Monte Carlo data sets for which the null hypothesis *β* = 0 is rejected in favor of the alternative *β* > 0 using the test statistic based on each estimator and level of significance 0.025, under each of the two simulation scenarios of Section 6.

As in any regression modeling context, there may be uncertainty associated with model development tasks for the *f _{k}*, including use of variable selection techniques such as forward selection, that is not taken into account by the usual standard error formulæ [29]. We advocate the proposed methods when the sample size is sufficiently large relative to the number of candidate covariates that such model-selection uncertainty is likely to be modest.

We have demonstrated that systematic consideration of the covariate adjustment problem from the perspective of semiparametric theory leads to characterization of all consistent and asymptotically normal estimators for the treatment mean difference. Properties of familiar estimators and correspondences among them may be established and the most precise estimator identified. The results suggest methods for principled analysis, where adjustment for covariate effects is carried out separately from estimation of the treatment effect.

The decision on whether to propose a covariate-adjusted analysis during trial planning must weigh possible benefits relative to the increased effort involved [3]. Our proposed strategy involves logistical and cost considerations, and whether these are worthwhile must be determined in the particular context. Associations between covariates and outcome must be sufficiently strong for adjustment to pay off, and such covariates may not always be available. When adjustment is deemed potentially fruitful, our proposed approach may offer practical resolution to the conflict over whether and how to exploit covariate information to enhance efficiency.

We have focused on parametric modeling of the treatment-specific regressions. One may wonder whether it is possible to use nonparametric approaches such as generalized additive models [31] or other multivariate smoothing methods to estimate these regressions, although such methods may be computationally prohibitive with more than a few covariates. As discussed by Leon et al. [14, sec. 4], because nonparametric estimators typically have large sample properties different from those of parametric estimators, such smoothing methods may be viable only in very large studies.

The methods presented in this article may be modified to accommodate outcome missing at random as shown in Davidian et al. [15]. As in the full data case considered here, models associated with both covariate adjustment and accounting for missing outcomes may be postulated and fitted independently of reference to the treatment effect, again supporting a principled analysis. Via application of semiparametric theory, the techniques for comparing two treatment means presented in this article may be extended to general measures of treatment effect, such as an odds ratio associated with a binary outcome, a hazards ratio associated with a censored time-to-event outcome, and so on, including accommodation of missing outcome and covariate information. We report on these developments elsewhere.

This research was supported by Grants R37 AI031789 from the National Institute of Allergy and Infectious Diseases and R01 CA085848 and R01 CA051962 from the National Cancer Institute. This paper benefited greatly from discussion following its presentation among participants at the October 2006 Workshop on Statistical Methods in HIV/AIDS and its Practical Application, organized by Dr. Misrak Gezmu, and from the comments of two reviewers.

Contract/grant sponsor: National Institutes of Health; contract/grant number: R37 AI031789, R01 CA085848, R01 CA051962

In this appendix, we sketch arguments supporting assertions made in the main text.

*Consistency of estimators in* (5). ${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}$ is consistent for *β*; by Slutsky’s theorem, (5) itself is consistent for *β* if its second term converges in probability to zero. Because
${n}_{1}/n\stackrel{p}{\to}\delta $, the second term is approximately equal to
$${n}^{-1}\sum _{i=1}^{n}({Z}_{i}-\delta )\{{(1-\delta )}^{-1}{h}^{(0)}({X}_{i})+{\delta}^{-1}{h}^{(1)}({X}_{i})\}\stackrel{p}{\to}E[(Z-\delta )\{{(1-\delta )}^{-1}{h}^{(0)}(X)+{\delta}^{-1}{h}^{(1)}(X)\}]=0$$
because *Z* is independent of *X*.

*Asymptotic equivalence of* β̂_{ANCOVA1} *to* (5) *with h*^{(*k*)}, *k* = 0, 1, *as in* (6). Straightforward algebra shows that the least squares estimator for *β _{Z}* in (3) is

$${\left\{1-\frac{{n}^{2}}{{n}_{0}{n}_{1}}{({n}^{-1}{d}_{1})}^{T}{\widehat{\mathrm{\sum}}}_{XX}^{-1}({n}^{-1}{d}_{1})\right\}}^{-1}\{{\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-\frac{n}{{n}_{0}{n}_{1}}\sum _{i=1}^{n}({Z}_{i}-\overline{Z}){\widehat{\mathrm{\sum}}}_{XY}^{T}{\widehat{\mathrm{\sum}}}_{XX}^{-1}{X}_{i}\},$$

(A.1)

where
${d}_{1}={\sum}_{i=1}^{n}({Z}_{i}-\overline{Z}){X}_{i},\phantom{\rule{0.16667em}{0ex}}{\widehat{\mathrm{\sum}}}_{XY}={n}^{-1}{\sum}_{i=1}^{n}({X}_{i}-\overline{X})({Y}_{i}-\overline{Y})$, and
${\widehat{\mathrm{\sum}}}_{XX}={n}^{-1}{\sum}_{i=1}^{n}({X}_{i}-\overline{X}){({X}_{i}-\overline{X})}^{T}$. Because ${\widehat{\mathrm{\sum}}}_{XY}$ and ${\widehat{\mathrm{\sum}}}_{XX}$ converge in probability to ${\mathrm{\sum}}_{XY}$ and ${\mathrm{\sum}}_{XX}$, while ${n}^{-1}{d}_{1}$ converges in probability to zero because *Z* is independent of *X*, the leading factor of (A.1) converges in probability to one, and (A.1) is asymptotically equivalent to (5) with *h*^{(*k*)}, *k* = 0, 1, as in (6).

*Asymptotic equivalence of* β̂_{ANCOVA2} *to* (5) *with h*^{(*k*)}, *k* = 0, 1, *as in* (11), *and to* β̂_{KOCH}. The least squares estimator for *β _{Z}* in this case is

$${\left\{1-\frac{{n}^{2}}{{n}_{0}{n}_{1}}{({n}^{-1}{d}_{2})}^{T}{D}^{-1}({n}^{-1}{d}_{2})\right\}}^{-1}\left\{{\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-\frac{n}{{n}_{0}{n}_{1}}{d}_{2}^{T}{D}^{-1}\left(\begin{array}{c}{\widehat{\mathrm{\sum}}}_{XY}\\ {\widehat{\mathrm{\sum}}}_{\mathit{XYZ}}\end{array}\right)\right\},$$

(A.2)

where ${d}_{2}={\{{d}_{1}^{T},{\sum}_{i=1}^{n}{({Z}_{i}-\overline{Z})}^{2}{({X}_{i}-\overline{X})}^{T}\}}^{T},\phantom{\rule{0.16667em}{0ex}}{\widehat{\mathrm{\sum}}}_{\mathit{XYZ}}={n}^{-1}{\sum}_{i=1}^{n}({X}_{i}-\overline{X})({Y}_{i}-\overline{Y})({Z}_{i}-\overline{Z})$, and

$$D=\left(\begin{array}{cc}{\widehat{\mathrm{\sum}}}_{XX}^{(0)}& {\widehat{\mathrm{\sum}}}_{XX}^{(1)}\\ {\widehat{\mathrm{\sum}}}_{XX}^{(1)}& {\widehat{\mathrm{\sum}}}_{XX}^{(2)}\end{array}\right),\phantom{\rule{1.16667em}{0ex}}{\widehat{\mathrm{\sum}}}_{XX}^{(\ell )}={n}^{-1}\sum _{i=1}^{n}{({Z}_{i}-\overline{Z})}^{\ell}({X}_{i}-\overline{X}){({X}_{i}-\overline{X})}^{T},$$

so that
${\widehat{\mathrm{\sum}}}_{XX}^{(0)}={\widehat{\mathrm{\sum}}}_{XX}$. Clearly, $D\stackrel{p}{\to}$ block diag$\{{\mathrm{\sum}}_{XX},\delta (1-\delta ){\mathrm{\sum}}_{XX}\}$ because *Z* is independent of *X*, and hence

$$\begin{array}{l}{d}_{2}^{T}{D}^{-1}\left(\begin{array}{c}{\widehat{\mathrm{\sum}}}_{XY}\\ {\widehat{\mathrm{\sum}}}_{\mathit{XYZ}}\end{array}\right)\approx \sum _{i=1}^{n}({Z}_{i}-\overline{Z}){[{\widehat{\mathrm{\sum}}}_{XY}+(1-2\delta ){\widehat{\mathrm{\sum}}}_{\mathit{XYZ}}/\{\delta (1-\delta )\}]}^{T}{\mathrm{\sum}}_{XX}^{-1}{X}_{i}\\ \approx \sum _{i=1}^{n}({Z}_{i}-\overline{Z}){\{\delta {\mathrm{\sum}}_{XY}^{(0)}+(1-\delta ){\mathrm{\sum}}_{XY}^{(1)}\}}^{T}{\mathrm{\sum}}_{XX}^{-1}{X}_{i},\end{array}$$

(A.3)

as required. To show the equivalence of β̂_{KOCH} to (5) with *h*^{(*k*)} as in (11), an entirely analogous argument applies.

We show that β̂_{ANCOVA2} has smallest asymptotic variance among all estimators of form (5) with *h*^{(*k*)}, *k* = 0, 1, linear in *X _{i}*; i.e.,
${h}^{(k)}({X}_{i})={\alpha}_{0k}+{\alpha}_{k}^{T}{X}_{i}$, say. It is straightforward to show that all such estimators satisfy

$$\begin{array}{l}{n}^{1/2}(\widehat{\beta}-\beta )={n}^{1/2}({\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-\beta )-{n}^{-1/2}\sum _{i=1}^{n}({Z}_{i}-\overline{Z})\left(\frac{n{\alpha}_{0}^{T}}{{n}_{0}}+\frac{n{\alpha}_{1}^{T}}{{n}_{1}}\right){X}_{i}\\ \approx {n}^{-1/2}\sum _{i=1}^{n}\left(\left\{\frac{{Z}_{i}}{\delta}-\frac{1-{Z}_{i}}{1-\delta}\right\}{Y}_{i}-\beta -\frac{({Z}_{i}-\delta )}{\delta (1-\delta )}[{\eta}_{0}+{\eta}^{T}\{{X}_{i}-E(X)\}]\right),\end{array}$$

(A.4)

where *η*_{0} = *δE*(*Y* |*Z* = 0) + (1 − *δ*)*E*(*Y* |*Z* = 1), and *η* = *δα*_{0} + (1 − *δ*)*α*_{1}. The estimator with smallest variance takes *η* to minimize the variance of the summand in (A.4), which is of the form *A* − *η*^{T}*B*, say. This is a least squares problem [14, p. 1050], which yields
${\eta}^{T}=\text{cov}(A,B){\{\text{var}(B)\}}^{-1}={\{\delta {\mathrm{\sum}}_{XY}^{(0)}+(1-\delta ){\mathrm{\sum}}_{XY}^{(1)}\}}^{T}{\mathrm{\sum}}_{XX}^{-1}$. Comparing to (11), the result follows.

Similar to (A.4), for arbitrary *h*^{(*k*)}, *k* = 0, 1, it is straightforward to show that
${n}^{1/2}(\widehat{\beta}-\beta )\approx {n}^{-1/2}{\sum}_{i=1}^{n}\phi ({Y}_{i},{Z}_{i},{X}_{i};{h}^{(0)},{h}^{(1)})$, where

$$\phi (Y,Z,X;{h}^{(0)},{h}^{(1)})=\left(\frac{Z}{\delta}-\frac{1-Z}{1-\delta}\right)Y-\beta -\frac{(Z-\delta )}{\delta (1-\delta )}\{{\eta}_{0}+\delta {h}_{c}^{(0)}(X)+(1-\delta ){h}_{c}^{(1)}(X)\},$$

where
${h}_{c}^{(k)}(X)={h}^{(k)}(X)-E\{{h}^{(k)}(X)|Z=k\}$, *k* = 0, 1. As *φ*(*Y*, *Z*, *X*; *h*^{(0)}, *h*^{(1)}) has mean zero because *Z* is independent of *X*, the choices of *h*^{(*k*)}, *k* = 0, 1, leading to the smallest variance asymptotically are those minimizing *E*{*φ*^{2}(*Y*, *Z*, *X*; *h*^{(0)}, *h*^{(1)})}. Letting
${h}_{\mathit{opt}}^{(k)}(X)=E(Y|Z=k,X)$, *k* = 0, 1, for brevity and writing
${g}_{\mathit{opt}}(X;{h}^{(0)},{h}^{(1)})=[{\eta}_{0}+\delta \{{h}_{c}^{(0)}(X)-{h}_{\mathit{opt}}^{(0)}(X)\}+(1-\delta )\{{h}_{c}^{(1)}(X)-{h}_{\mathit{opt}}^{(1)}(X)\}]/\{\delta (1-\delta )\}$, for any *h*^{(*k*)}, *k* = 0, 1, we have

$$\begin{array}{l}E\{{\phi}^{2}(Y,Z,X;{h}^{(0)},{h}^{(1)})\}=E[{\{\phi (Y,Z,X;{h}_{\mathit{opt}}^{(0)},{h}_{\mathit{opt}}^{(1)})-(Z-\delta ){g}_{\mathit{opt}}(X;{h}^{(0)},{h}^{(1)})\}}^{2}]\\ =E\{{\phi}^{2}(Y,Z,X;{h}_{\mathit{opt}}^{(0)},{h}_{\mathit{opt}}^{(1)})\}+\delta (1-\delta )E\{{g}_{\mathit{opt}}^{2}(X;{h}^{(0)},{h}^{(1)})\}\\ \ge E\{{\phi}^{2}(Y,Z,X;{h}_{\mathit{opt}}^{(0)},{h}_{\mathit{opt}}^{(1)})\},\end{array}$$

(A.5)

where (A.5) follows because the independence of *Z* and *X* implies that the cross product
$E\{\phi (Y,Z,X;{h}_{\mathit{opt}}^{(0)},{h}_{\mathit{opt}}^{(1)})(Z-\delta ){g}_{\mathit{opt}}(X;{h}^{(0)},{h}^{(1)})\}=0$, demonstrating (16). In fact, it is immediate from (A.5) that, by taking *h*^{(*k*)} = 0, *k* = 0, 1,
$\text{Avar}(\widehat{\beta})=\text{Avar}({\overline{Y}}^{(1)}-{\overline{Y}}^{(0)})-\delta (1-\delta )E\{{g}_{\mathit{opt}}^{2}(X;0,0)\}$, where “Avar” denotes “asymptotic variance,” showing that using the optimal choices in (16) is guaranteed to lead to a reduction in variance over the unadjusted estimator.

By a similar argument, one may in fact show that, if one restricts attention to representations for *h*^{(*k*)}(*X*) that are *linear* in *X*; i.e.,
${h}^{(k)}(X)={\alpha}_{0k}+{\alpha}_{k}^{T}X$, *k* = 0, 1, and fits this model by treatment-specific least squares, then the resulting estimator for *β* has asymptotic variance
$$\text{Avar}({\overline{Y}}^{(1)}-{\overline{Y}}^{(0)})-{\{\delta (1-\delta )\}}^{-1}{\{\delta {\mathrm{\sum}}_{XY}^{(0)}+(1-\delta ){\mathrm{\sum}}_{XY}^{(1)}\}}^{T}{\mathrm{\sum}}_{XX}^{-1}\{\delta {\mathrm{\sum}}_{XY}^{(0)}+(1-\delta ){\mathrm{\sum}}_{XY}^{(1)}\}.$$
This holds *regardless* of whether the true *E*(*Y* |*Z* = *k*, *X*) are linear. Thus, representing *h*^{(*k*)}, *k* = 0, 1, by linear functions is guaranteed to lead to a reduction in variance over the unadjusted estimator ${\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}$.
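This guarantee is straightforward to check by simulation: even when the true treatment-specific regressions are deliberately quadratic, augmentation based on linear working models fitted by treatment-specific least squares reduces the Monte Carlo variance relative to the unadjusted difference in means. A sketch under a toy model (not the paper's simulation design):

```python
import numpy as np

rng = np.random.default_rng(4)
n, delta, reps = 500, 0.5, 3000

def one_rep():
    X = rng.normal(size=n)
    Z = rng.binomial(1, delta, size=n)
    Y = 2.0 * Z + X + 0.8 * X ** 2 + rng.normal(size=n)   # truth is quadratic in X
    unadj = Y[Z == 1].mean() - Y[Z == 0].mean()
    preds = []
    for k in (0, 1):
        m = Z == k
        A = np.column_stack([np.ones(m.sum()), X[m]])     # LINEAR working model only
        c, *_ = np.linalg.lstsq(A, Y[m], rcond=None)
        preds.append(c[0] + c[1] * X)
    n1 = Z.sum(); n0 = n - n1
    aug = np.mean((Z - Z.mean()) * ((n / n0) * preds[0] + (n / n1) * preds[1]))
    return unadj, unadj - aug

draws = np.array([one_rep() for _ in range(reps)])
var_unadj, var_adj = draws.var(axis=0, ddof=1)
print(var_unadj, var_adj)       # linearly adjusted variance is smaller
```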

*Effect of parameter estimation in postulated models for E*(*Y* |*Z* = *k*, *X*), *k* = 0, 1. As in Section 4, suppose we specify regression models *f _{k}*(*X*, *α _{k}*), *k* = 0, 1, and obtain estimators ${\widehat{\alpha}}_{k}$ converging in probability to limiting values ${\alpha}_{k}^{\ast}$. Then

$$\begin{array}{l}{n}^{1/2}(\widehat{\beta}-\beta )={n}^{1/2}({\overline{Y}}^{(1)}-{\overline{Y}}^{(0)}-\beta )-{n}^{-1/2}\sum _{i=1}^{n}({Z}_{i}-\overline{Z})\left\{\frac{n}{{n}_{0}}{f}_{0}({X}_{i},{\widehat{\alpha}}_{0})+\frac{n}{{n}_{1}}{f}_{1}({X}_{i},{\widehat{\alpha}}_{1})\right\}\\ \approx {n}^{-1/2}\sum _{i=1}^{n}\left[\left\{\frac{{Z}_{i}}{\delta}-\frac{1-{Z}_{i}}{1-\delta}\right\}{Y}_{i}-\beta -\frac{({Z}_{i}-\delta )}{\delta (1-\delta )}\{{\eta}_{0}+\delta {f}_{0}^{c}({X}_{i},{\alpha}_{0}^{\ast})+(1-\delta ){f}_{1}^{c}({X}_{i},{\alpha}_{1}^{\ast})\}\right]\end{array}$$

(A.6)

$$+\sum _{k=0}^{1}{\delta}^{-k}{(1-\delta )}^{k-1}\left\{{n}^{-1}\sum _{i=1}^{n}({Z}_{i}-\delta ){f}_{k,\alpha}({X}_{i},{\alpha}_{k}^{\ast})\right\}{n}^{1/2}({\widehat{\alpha}}_{k}-{\alpha}_{k}^{\ast}),$$

(A.7)

where
${f}_{k}^{c}({X}_{i},{\alpha}_{k}^{\ast})={f}_{k}({X}_{i},{\alpha}_{k}^{\ast})-E\{{f}_{k}({X}_{i},{\alpha}_{k}^{\ast})\}$, *k* = 0, 1. The term in (A.7) converges in probability to zero because *Z* is independent of *X*. Thus, *n*^{1/2}(β̂ − *β*) has the same limit in distribution as (A.6), which depends on the
${f}_{k}(X,{\alpha}_{k}^{\ast})$, which are fully specified as functions of *X* given
${\alpha}_{k}^{\ast}$. The smallest achievable large sample variance is that of the limit in distribution of (A.6) when
${f}_{k}(X,{\alpha}_{k}^{\ast})=E(Y|Z=k,X)$, *k* = 0, 1; i.e., when the *f _{k}* coincide with the true regression relationships.

*Estimator for sampling variance of* β̂. The summand in (A.6) is the influence function [13] for the proposed estimator β̂. Applying the sandwich technique and replacing this summand by an empirical version yields the sum in (18). We take δ̂ = *n*_{1}/*n* and substitute sample analogues for the remaining unknown quantities in doing so.

To obtain an alternative estimator for the sampling variance using the bootstrap, at step (i) of Section 4, *B* bootstrap data sets could be obtained, each by resampling *n* subjects with replacement from the original data. Each could be partitioned into two sets, i.e.,
${\mathcal{D}}_{b}^{(k)\ast}$, *k* = 0, 1, for *b* = 1, …, *B*. In step (ii) of Section 4, the modeling strategy used on the actual data
${\mathcal{D}}^{(k)}$ for each *k* would also be replicated by the analysts responsible for each
${\mathcal{D}}_{b}^{(k)\ast}$, *b* = 1, …, *B*. The fitted models so obtained for each *b* = 1, …, *B* would then be used to compute a bootstrap estimate of *β* from the *b*th bootstrap data set, and the sample variance of these *B* estimates taken as the estimate of sampling variance.
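A minimal sketch of this bootstrap scheme, with toy data and simple linear working models; the essential point is that the model fitting (including any selection steps) is redone within each bootstrap sample:

```python
import numpy as np

rng = np.random.default_rng(5)
n, B = 300, 500
X = rng.normal(size=n)
Z = rng.binomial(1, 0.5, size=n)
Y = 2.0 * Z + 1.5 * X + rng.normal(size=n)

def estimate(Yb, Zb, Xb):
    # Refit the treatment-specific working models on the given (bootstrap) sample.
    preds = []
    for k in (0, 1):
        m = Zb == k
        A = np.column_stack([np.ones(m.sum()), Xb[m]])
        c, *_ = np.linalg.lstsq(A, Yb[m], rcond=None)
        preds.append(c[0] + c[1] * Xb)
    nb = Zb.size; n1 = Zb.sum(); n0 = nb - n1
    unadj = Yb[Zb == 1].mean() - Yb[Zb == 0].mean()
    aug = np.mean((Zb - Zb.mean()) * ((nb / n0) * preds[0] + (nb / n1) * preds[1]))
    return unadj - aug

boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)        # resample n subjects with replacement
    boot[b] = estimate(Y[idx], Z[idx], X[idx])
se_boot = boot.std(ddof=1)                  # bootstrap standard error for beta-hat
print(se_boot)
```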

1. Pocock SJ, Assmann SE, Enos LE, Kasten LE. Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Statistics in Medicine. 2002;21:2917–2930. [PubMed]

2. Koch GG, Tangen CM, Jung JW, Amara IA. Issues for covariance analysis of dichotomous and ordered categorical data from randomized clinical trials and non-parametric strategies for addressing them. Statistics in Medicine. 1998;17:1863–1892. [PubMed]

3. Lesaffre E, Bogaerts K, Li X, Bluhmki E. On the variability of covariance adjustment: experience with Koch’s method for evaluating the absolute difference in proportions in randomized clinical trials. Controlled Clinical Trials. 2002;23:127–142. [PubMed]

4. Lesaffre E, Senn S. A note on non-parametric ANCOVA for covariate adjustment in randomized clinical trials. Statistics in Medicine. 2003;22:3586–3596. [PubMed]

5. Senn S. Covariate imbalance and random allocation in clinical trials. Statistics in Medicine. 1989;8:467–475. [PubMed]

6. Altman DG. Adjustment for covariate imbalance. In: Armitage P, Colton T, editors. Encyclopedia of Biostatistics. 2. Wiley: Chichester; 2005. pp. 1273–1278.

7. Assmann SF, Pocock SJ, Enos LE, Kasten LE. Subgroup analysis and other (mis)uses of baseline data in clinical trials. The Lancet. 2000;355:1064–1069. [PubMed]

8. Raab GM, Day S, Sales J. How to select covariates to include in the analysis of a clinical trial. Controlled Clinical Trials. 2000;21:330–342. [PubMed]

9. Senn S. Consensus and controversy in pharmaceutical statistics. The Statistician. 2000;49:135–176.

10. Lewis JA. Statistical principles for clinical trials (ICH E9): an introductory note on an international guideline. Statistics in Medicine. 1999;18:1903–1904. [PubMed]

11. Grouin JM, Day S, Lewis J. Adjustment for baseline covariates: an introductory note. Statistics in Medicine. 2004;23:697–699. [PubMed]

12. Hauck WW, Anderson S, Marcus SM. Should we adjust for covariates in nonlinear regression analyses of randomized trials? Controlled Clinical Trials. 1998;19:249–256. [PubMed]

13. Tsiatis AA. Semiparametric Theory and Missing Data. Springer; New York: 2006.

14. Leon S, Tsiatis AA, Davidian M. Semiparametric estimation of treatment effect in a pretest-posttest study. Biometrics. 2003;59:1048–1057. [PubMed]

15. Davidian M, Tsiatis AA, Leon S. Semiparametric estimation of treatment effect in a pretest-posttest study with missing data (with Discussion) Statistical Science. 2005;20:261–301. [PMC free article] [PubMed]

16. Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association. 1994;89:846–866.

17. Robins JM. ASA Proceedings of the Bayesian Statistical Science Section. American Statistical Association; Alexandria, Virginia: 2000. Robust estimation in sequentially ignorable missing data and causal inference models; pp. 6–10.

18. Cassel CM, Sarndal CE, Wretman JH. Some results on generalized difference estimation and generalized regression estimation for finite populations. Biometrika. 1976;63:615–620.

19. Cochran WG. Sampling Techniques. 3. Wiley; New York: 1977.

20. Sarndal CE, Swensson B, Wretman J. Model Assisted Survey Sampling. Springer; New York: 1992.

21. Yang L, Tsiatis AA. Efficiency study for a treatment effect in a pretest-posttest trial. The American Statistician. 2001;55:314–321.

22. Korsholm L, Vach W. Covariate adjustment in clinical trials - A semiparametric view (meeting abstract 41) Controlled Clinical Trials. 2003;24:62S–63S.

23. Stefanski LA, Boos DD. The calculus of M-estimation. The American Statistician. 2002;56:29–38.

24. Tibshirani R. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B. 1996;58:267–288.

25. Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle property. Journal of the American Statistical Association. 2001;96:1348–1360.

26. Wu T, Boos DD, Stefanski LA. Controlling variable selection by the addition of pseudo variables. Journal of the American Statistical Association. 2007;102:235–243.

27. Harrell FE. Regression Modeling Strategies, With Applications to Linear Models, Logistic Regression, and Survival Analysis. Springer; New York: 2001.

28. Hammer SM, Katzenstein DA, Hughes MD, Gundaker H, Schooley RT, Haubrich RH, Henry WK, Lederman MM, Phair JP, Niu M, Hirsch MS, Merigan TC. for the AIDS Clinical Trials Group Study 175 Study Team. A trial comparing nucleoside monotherapy with combination therapy in HIV-infected adults with CD4 cell counts from 200 to 500 per cubic millimeter. New England Journal of Medicine. 1996;335:1081–1089. [PubMed]

29. Shen X, Huang HC, Ye J. Inference after model selection. Journal of the American Statistical Association. 2004;99:751–762.

30. Brookhart MA, van der Laan MJ. A semiparametric model selection criterion with applications to the marginal structural model. Computational Statistics and Data Analysis. 2006;50:475–498.

31. Hastie TJ, Tibshirani RJ. Generalized Additive Models. Chapman and Hall; London: 1990.

32. Carroll RJ, Ruppert D, Stefanski LA, Crainiceanu CM. Measurement Error in Nonlinear Models: A Modern Perspective. 2. Chapman and Hall/CRC; New York: 2006.
