Biometrics. Author manuscript; available in PMC 2008 October 28.
PMCID: PMC2574960

Improving efficiency of inferences in randomized clinical trials using auxiliary covariates


The primary goal of a randomized clinical trial is to make comparisons among two or more treatments. For example, in a two-arm trial with continuous response, the focus may be on the difference in treatment means; with more than two treatments, the comparison may be based on pairwise differences. With binary outcomes, pairwise odds ratios or log-odds ratios may be used. In general, comparisons may be based on meaningful parameters in a relevant statistical model. Standard analyses for estimation and testing in this context typically are based on the data collected on response and treatment assignment only. In many trials, auxiliary baseline covariate information may also be available, and it is of interest to exploit these data to improve the efficiency of inferences. Taking a semiparametric theory perspective, we propose a broadly applicable approach to adjustment for auxiliary covariates to achieve more efficient estimators and tests for treatment parameters in the analysis of randomized clinical trials. Simulations and applications demonstrate the performance of the methods.

Keywords: Covariate adjustment, Hypothesis test, k-arm trial, Kruskal-Wallis test, Log-odds ratio, Longitudinal data, Semiparametric theory

1. Introduction

In randomized clinical trials, the primary objective is to compare two or more treatments on the basis of an outcome of interest. Along with treatment assignment and outcome, baseline auxiliary covariates may be recorded on each subject, including demographic and physiological characteristics, prior medical history, and baseline measures of the outcome. For example, the international Platelet Glycoprotein IIb/IIIa in Unstable Angina: Receptor Suppression Using Integrilin Therapy (PURSUIT) study (Harrington, 1998) in subjects with acute coronary syndromes compared the anti-coagulant Integrilin plus heparin and aspirin to heparin and aspirin alone (control) on the basis of the binary endpoint death or myocardial infarction at 30 days. Similarly, AIDS Clinical Trials Group (ACTG) 175 (Hammer et al., 1996) randomized HIV-infected subjects to four antiretroviral regimens with equal probabilities, and an objective was to compare measures of immunological status under the three newer treatments to those under standard zidovudine (ZDV) monotherapy. In both studies, in addition to the endpoint, substantial auxiliary baseline information was collected.

Ordinarily, the primary analysis is based only on the data on outcome and treatment assignment. However, if some of the auxiliary covariates are associated with outcome, precision may be improved by “adjusting” for these relationships (e.g., Pocock et al., 2002), and there is an extensive literature on such covariate adjustment (e.g., Senn, 1989; Hauck, Anderson, and Marcus, 1998; Koch et al., 1998; Tangen and Koch, 1999; Lesaffre and Senn, 2003; Grouin, Day, and Lewis, 2004). Much of this work focuses on inference on the difference of two means and/or on adjustment via a regression model for mean outcome as a function of treatment assignment and covariates. In the special case of the difference of two treatment means, Tsiatis et al. (2007) proposed an adjustment method that follows from application of the theory of semiparametrics (e.g., van der Laan and Robins, 2003; Tsiatis, 2006) by Leon, Tsiatis, and Davidian (2003) to the related problem of “pretest-posttest” analysis, from which the form of the “optimal” (most precise) estimator for the treatment mean difference, adjusting for covariates, emerges readily. This approach separates estimation of the treatment difference from the adjustment itself, which may lessen concerns that regression-based adjustment invites bias through the ability to inspect treatment effect estimates obtained with different combinations of covariates and “to focus on the covariate model that best accentuates the estimate” (Pocock et al., 2002, p. 2925).

In this paper, we expand on this idea by developing a broad framework for covariate adjustment in settings with two or more treatments and general outcome summary measures (e.g., log-odds ratios) by appealing to the theory of semiparametrics. The resulting methods seek to use the available data as efficiently as possible while making as few assumptions as possible. In Section 2, we present a semiparametric model framework involving parameters relevant to making general treatment comparisons. Using the theory of semiparametrics, we derive the class of estimating functions for these parameters in Section 3 and in Section 4 demonstrate how these results lead to practical estimators. This development suggests a general approach to adjusting any test statistic for making treatment comparisons to increase efficiency, described in Section 5. Performance of the proposed methods is evaluated in simulation studies in Section 6 and is shown in representative applications in Section 7.

2. Semiparametric Model Framework

Denote the data from a k-arm randomized trial, k ≥ 2, as (Yi, Xi, Zi), i = 1, . . . , n, independent and identically distributed (iid) across i, where, for subject i, Yi is outcome; Xi is the vector of all available auxiliary baseline covariates; and Zi = g indicates assignment to treatment group g with known randomization probabilities P(Z = g) = πg, g = 1, . . . , k, ∑g=1k πg = 1. Randomization ensures that Z ⊥ X, where “⊥” means “independent of.”

Let β denote a vector of parameters involved in making treatment comparisons under a specified statistical model. For example, in a two-arm trial, for a continuous real-valued response Y, a natural basis for comparison is the difference in treatment means, E(Y | Z = 2) − E(Y | Z = 1), represented directly as β2 in the model

E(Y | Z) = β1 + β2I(Z = 2).   (1)

In a three-arm trial, we may consider the model

E(Y | Z) = β1I(Z = 1) + β2I(Z = 2) + β3I(Z = 3).   (2)

In contrast to (1), we have parameterized (2) equivalently in terms of the three treatment means rather than differences relative to a reference treatment, and treatment comparisons may be based on pairwise contrasts among elements of β. For binary outcome Y = 0 or 1, where Y = 1 indicates the event of interest, we may consider for a k-arm trial

logit{P(Y = 1 | Z)} = β1 + β2I(Z = 2) + · · · + βkI(Z = k),   (3)

where logit(p) = log{p/(1 − p)}; β = (β1, . . . , βk)T ; and the log-odds ratio for treatment g relative to treatment 1 is βg, g = 2, . . . , k.

If Yi is a vector of continuous longitudinal responses Yij, j = 1, . . . , mi, at times ti1, . . . , timi, response-time profiles in a two-arm trial might be described by the simple linear mixed model

Yij = α + ai + {β1 + β2I(Zi = 2) + bi}tij + eij,  (ai, bi)T ~ N(0, D),  eij ~ N(0, σe2),   (4)

where β = (β1, β2)T, and β2 is the difference in mean slope between the two treatments; extension to k > 2 treatment groups is straightforward. Alternatively, instead of considering the fully parametric model (4), one might make no assumption beyond

E(Yij | Zi) = α + {β1 + β2I(Zi = 2)}tij,   (5)

leaving remaining features of the distribution of Y given Z unspecified. For binary Yij, the marginal model logit{E(Yij | Zi)} = α + {β1 + β2I(Zi = 2)}tij might be adopted.

In all of (1)-(5), β (p × 1) is a parameter involved in making treatment comparisons in a model describing aspects of the conditional distribution of Y given Z and is of central interest. In addition to β, models like (4) and (5) depend on a vector of parameters γ, say; e.g., in (4), γ = {α, vech(D)T, σe2}T; and γ = α in (5). In general, we define θ = (βT, γT)T (r × 1), recognizing that models like (1)-(3) do not involve an additional γ, so that θ = β.

For these and similar models, consistent, asymptotically normal estimators for θ, and hence for β and functions of its elements reflecting treatment comparisons, based on the data (Yi, Zi), i = 1, . . . , n, only and thus “unadjusted” for covariates, are readily available. Unadjusted, large-sample tests of null hypotheses of “no treatment effects” are also well-established. The difference of sample means is the obvious such estimator for β2 in (1) and is efficient (i.e., has smallest asymptotic variance) among estimators depending only on these data, and a test of H0 : β2 = 0 may be based on the usual t statistic. Similarly, the maximum likelihood estimator (MLE) for β2 in (4) and associated tests may be obtained from standard mixed model software. For k > 2, pairwise and global comparisons are possible; e.g., in (2), the sample means are efficient estimators for each element of β, and a test of H0 : β1 = β2 = β3 may be based on the corresponding two-degree-of-freedom Wald statistic.
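The unadjusted analyses just described are easy to compute directly. The sketch below (hypothetical simulated data under H0; numpy assumed) obtains the sample-mean estimates of β in (2) and the corresponding two-degree-of-freedom Wald statistic for H0 : β1 = β2 = β3:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 900
# hypothetical three-arm trial as in model (2), equal randomization
Z = rng.integers(1, 4, size=n)
mu = {1: 0.0, 2: 0.0, 3: 0.0}            # under H0: beta1 = beta2 = beta3
Y = np.array([mu[z] for z in Z]) + rng.normal(size=n)

# unadjusted estimators: sample means per arm (efficient given (Y, Z) only)
beta_hat = np.array([Y[Z == g].mean() for g in (1, 2, 3)])
var_hat = np.array([Y[Z == g].var(ddof=1) / (Z == g).sum() for g in (1, 2, 3)])

# two-df Wald test of H0 with contrast rows (1, -1, 0) and (1, 0, -1)
C = np.array([[1.0, -1.0, 0.0], [1.0, 0.0, -1.0]])
Sigma = np.diag(var_hat)                  # estimated covariance of beta_hat
d = C @ beta_hat
T = d @ np.linalg.solve(C @ Sigma @ C.T, d)
# compare T to a chi-square(2) distribution; 0.95 quantile is about 5.99
```

Pairwise comparisons follow the same pattern with a single contrast row and a one-degree-of-freedom reference distribution.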

As noted in Section 1, the standard approach in practice for covariate adjustment, thus using all of (Yi, Xi, Zi), i = 1, . . . , n, is based on regression models for mean outcome as a function of X and Z. E.g., for k = 2 and continuous Y, a popular such estimator for β2 in (1) is the ordinary least squares (OLS) estimator for φ in the analysis of covariance model

Y = α0 + αXTX + φI(Z = 2) + e;   (6)

extension to k > 2 treatments is immediate. See Tsiatis et al. (2007, Section 3) for discussion of related estimators for β2 in the particular case of (1). If (6) is the correct model for E(Y | X, Z), then φ and β2 in (1) coincide, and, moreover, the OLS estimator for φ in (6) is a consistent estimator for β2 that is generally more precise than the usual unadjusted estimator, even if (6) is not correct (e.g., Yang and Tsiatis, 2001). For binary Y, covariate adjustment is often carried out based on the logistic regression model

logit{P(Y = 1 | X, Z)} = α0 + αXTX + φI(Z = 2),   (7)

where the MLE of φ is taken as the adjusted estimator for the log-odds ratio β2 in (3) with k = 2. In (7), φ is the log-odds ratio conditional on X, assuming this quantity is constant for all X. This assumption may or may not be correct; even if it were, φ is generally different from β2 in (3). Tsiatis et al. (2007, Section 2) discuss this point in more detail.
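The distinction between the conditional φ and the unconditional β2 can be seen numerically. In this sketch (illustrative hypothetical coefficients; numpy assumed), the unconditional log-odds ratio implied by a logistic conditional model with X ~ N(0, 1) is attenuated toward zero relative to the conditional φ, the familiar non-collapsibility of the odds ratio:

```python
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

# hypothetical conditional model: logit P(Y=1|X,Z) = a0 + a1*X + phi*I(Z=2)
a0, a1, phi = -0.5, 1.5, 1.0          # phi: conditional log-odds ratio
x = np.linspace(-8.0, 8.0, 200001)    # fine grid for X ~ N(0, 1)
w = np.exp(-x**2 / 2.0)
w /= w.sum()                          # normalized standard-normal weights

p1 = np.sum(w * expit(a0 + a1 * x))            # marginal P(Y=1 | Z=1)
p2 = np.sum(w * expit(a0 + a1 * x + phi))      # marginal P(Y=1 | Z=2)
beta2 = np.log(p2 / (1 - p2)) - np.log(p1 / (1 - p1))
# beta2, the unconditional log-odds ratio in (3), lies strictly between 0 and phi
```

Even with the conditional model exactly correct, φ and β2 answer different questions; the methods below target β2 directly.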

To derive alternative methods, we begin by describing our assumed semiparametric statistical model for the full data (Y, X, Z), which is a characterization of the class of all joint densities for (Y, X, Z) that could have generated the data. We seek methods that perform well over as large a class as possible; thus, we assume that densities in this class involve no restrictions beyond the facts that Z ⊥ X, guaranteed by randomization; that πg = P(Z = g), g = 1, . . . , k, are known; and that β is defined through a specification on the conditional distribution of Y given Z as in (1)-(5). We thus first describe the conditional density of Y given Z. Under (3) and (4), this density is completely specified in terms of θ, while (5) describes only one aspect of the conditional distribution, the mean, in terms of θ, and (1) and (2) make no restrictions on the conditional distribution of Y given Z. To represent all such situations, we assume that this conditional density may be written as pY|Z(y | z; θ, η), where η is an additional nuisance parameter possibly needed to describe the density fully. For (3) and (4), η is null, as the density is already entirely characterized. For (1), (2), and (5), η is infinite-dimensional, as these specifications do not impose any additional constraints on what the density might be, so any density consistent with these models is possible.

Under the above conditions, we assume that all joint densities for (Y, X, Z) may be written, in obvious notation, as pY,X,Z(y, x, z; θ, η, ψ, π) = pY,X|Z(y, x | z; θ, η, ψ)pZ(z; π), where pZ(z; π) is completely specified, as π = (π1, . . . , πk)T is known, and satisfy the constraints

(i) ∫ pY,X|Z(y, x | z; θ, η, ψ) dν(x) = pY|Z(y | z; θ, η),   (8)
(ii) ∫ pY,X|Z(y, x | z; θ, η, ψ) dν(y) = pX(x)  for all z.   (9)

The joint density involves an additional, possibly infinite-dimensional nuisance parameter ψ, needed to include in the class all joint densities satisfying (i) and (ii). Here, pX(x) is any arbitrary marginal density for the covariates, and (ii) follows because Z ⊥ X. In Web Appendix A, we demonstrate that a rich class of joint distributions for (Y, X, Z) may be identified in which X is correlated with Y and Z ⊥ X [condition (ii)] and condition (i) also holds. Because the joint density involves both finite (parametric) and infinite-dimensional components, it represents a semiparametric statistical model (see Tsiatis, 2006, Section 1.2).

3. Estimating Functions for Treatment Parameters Using Auxiliary Covariates

We now derive consistent, asymptotically normal estimators for θ, and hence β, in a given pY|Z(y | z; θ, η), using the iid data (Yi, Xi, Zi), i = 1, . . . , n, under the semiparametric framework satisfying (8) and (9). To do this, we identify the class of all estimating functions for θ based on (Y, X, Z) leading to all estimators for θ that are consistent and asymptotically normal under this framework. An estimating function is a function of a single observation and parameters used to construct estimating equations yielding an estimator for the parameters.

When the data on auxiliary covariates X are not taken into account, estimating functions for θ based only on (Y, Z) in models like those in (1)-(5) leading to consistent, asymptotically normal estimators are well known. For example, the OLS estimator for θ = β in the linear regression model (1) may be obtained by considering the estimating function

m(Y, Z; θ) = {1, I(Z = 2)}T{Y − β1 − β2I(Z = 2)}   (10)

and solving the estimating equation ∑i=1n m(Yi, Zi; θ) = 0 in θ. The OLS estimator for β2 so obtained equals the usual difference in sample means. Likewise, with θ = β = (β1, . . . , βk)T and expit(u) = exp(u)/{1 + exp(u)}, the usual logistic regression MLE for β in (3) is obtained by solving ∑i=1n m(Yi, Zi; θ) = 0, where the estimating function m(Y, Z; θ) is equal to

m(Y, Z; θ) = {1, I(Z = 2), . . . , I(Z = k)}T[Y − expit{β1 + β2I(Z = 2) + · · · + βkI(Z = k)}].   (11)

The estimating functions (10) and (11) are unbiased; i.e., have mean zero assuming that (1) and (3), respectively, are correct. Under regularity conditions, unbiased estimating functions lead to consistent, asymptotically normal estimators (e.g., Carroll et al., 2006, Section A.6).

Our key result is that, given a semiparametric model pY,X,Z(y, x, z; θ, η, ψ, π) based on a specific pY|Z(y|z; θ, η) and satisfying (8) and (9), and given a fixed unbiased estimating function m(Y, Z; θ) (r × 1) for θ, such as (10) or (11), members of the class of all unbiased estimating functions for θ, and hence β, using all of (Y, X, Z) may be written as

m(Y, Z; θ) − ∑g=1k {I(Z = g) − πg}ag(X),   (12)

where ag(X), g = 1, . . . , k, are arbitrary r-dimensional functions of X. Because Z ⊥ X, the second term in (12) has mean zero; thus, (12) is an unbiased estimating function based on (Y, X, Z). When ag(X) ≡ 0, g = 1, . . . , k, (12) reduces to the original estimating function, which does not take account of auxiliary covariates, and solving ∑i=1n m(Yi, Zi; θ) = 0 leads to the unadjusted estimator θ̂ to which it corresponds. Otherwise, (12) “augments” m(Y, Z; θ) by the second term. With appropriate choice of the ag(X), the augmentation term exploits correlations between Y and elements of X to yield an estimator for θ, solving the estimating equation defined by (12), that is relatively more efficient than θ̂. The proof of (12) is based on applying principles of semiparametric theory and is given in Web Appendix B.
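That the augmentation term is unbiased is easy to check numerically. In this sketch (arbitrary illustrative choices of a2(X) and π2; numpy assumed), the sample average of {I(Z = 2) − π2}a2(X) is near zero because Z is generated independently of X:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
X = rng.normal(size=n)
Z = rng.integers(1, 3, size=n)      # pi_1 = pi_2 = 0.5, drawn independently of X
a = X**2 + 3.0 * X                  # arbitrary function a_2(X)
aug = ((Z == 2) - 0.5) * a          # {I(Z = 2) - pi_2} a_2(X)
# by randomization, E[aug] = 0 regardless of the choice of a_2(X)
```

The same calculation holds for any ag(X), which is exactly why adding the augmentation term cannot bias the estimating equation.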

Full advantage of this result may be taken by identifying the optimal estimating function within class (12), that for which the elements of the corresponding estimator for θ have smallest asymptotic variance. This estimator for β thus yields the greatest efficiency gain over the unadjusted θ̂ among all estimators with estimating functions in class (12) and hence more efficient inferences on treatment comparisons. By standard arguments for M-estimators (e.g., Stefanski and Boos, 2002), an estimator for θ corresponding to an estimating function of form (12) is consistent and asymptotically normal with asymptotic covariance matrix

A−1Γ(A−1)T,   (13)

where θ0 is the true value of θ, A = E{∂m(Y, Z; θ0)/∂θT}, and Γ = E(UUT) with U = m(Y, Z; θ0) − ∑g=1k {I(Z = g) − πg}ag(X). Thus, to find the optimal estimating function, one need only consider Γ in (13) and determine ag(X), g = 1, . . . , k, leading to Γopt, say, such that Γ − Γopt is nonnegative definite. For given m(Y, Z; θ), it is shown in Web Appendix C that Γopt takes ag(X) = E{m(Y, Z; θ) | X, Z = g}, g = 1, . . . , k. Thus, in general, the optimal estimator in class (12) is the solution to

∑i=1n [ m(Yi, Zi; θ) − ∑g=1k {I(Zi = g) − πg}E{m(Y, Z; θ) | Xi, Z = g} ] = 0.   (14)

In the case of β2 in (1), (14) yields the optimal estimator in (16) of Tsiatis et al. (2007).

4. Implementation of Improved Estimators

The optimal estimator in class (12) solving (14) depends on the conditional expectations E{m(Y, Z; θ) | Xi, Z = g}, g = 1, . . . , k, the forms of which are of course unknown. Thus, to obtain practical estimators, we first consider a general adaptive strategy based on postulating regression models for these conditional expectations, which involves the following steps:

(1) Solve the original estimating equation ∑i=1n m(Yi, Zi; θ) = 0 to obtain the unadjusted estimator θ̂. For each subject i, obtain the values m(Yi, g; θ̂) for each g = 1, . . . , k.

(2) Note that the m(Yi, g; θ̂) are (r × 1). For each treatment group g = 1, . . . , k separately, based on the r-variate “data” m(Yi, g; θ̂) for i in group g, develop a parametric regression model qg(X, ζg) = {qg1(X, ζg1), . . . , qgr(X, ζgr)}T for E{m(Y, Z; θ) | X, Z = g}; i.e., such that qgu(X, ζgu), u = 1, . . . , r, are regression models for each component of m(Yi, g; θ̂). We recommend an approach analogous to that in Leon et al. (2003, Section 4) where the qgu(X, ζgu) are represented as ζguTcgu(X), u = 1, . . . , r, and cgu(X) are vectors of basis functions in X that may include polynomial terms in elements of X, interaction terms, splines, and so on. This offers considerable latitude for achieving representations that can approximate the true conditional expectations, and hence predictions derived from them, well. We also recommend obtaining estimates ζ̂gu via OLS separately for each u = 1, . . . , r, as, by a generalization of the argument in Leon et al. (2003, Section 4), this will yield the most efficient estimator for θ in step (3) below when the qg(X, ζg) are of this form. For each subject i = 1, . . . , n, form predicted values qg(Xi, ζ̂g) for each g = 1, . . . , k.

(3) Using the predicted values from step (2), form the augmented estimating equation

∑i=1n [ m(Yi, Zi; θ) − ∑g=1k {I(Zi = g) − π̂g}qg(Xi, ζ̂g) ] = 0,   (15)

and solve for θ to obtain the final, adjusted estimator θ̂aug. We recommend substituting π̂g = ng/n, where ng is the number of subjects in group g, for πg in (15).
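The three steps can be sketched for the simplest case, model (1) with k = 2 (hypothetical simulated data; the basis cg(X) = (1, X)T is an assumption of this sketch; numpy assumed). Here the augmented equation admits a closed-form solution in which each treatment mean receives an augmented, AIPW-type estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
# hypothetical two-arm trial, model (1), with X associated with Y
X = rng.normal(size=n)
Z = rng.integers(1, 3, size=n)                        # pi_1 = pi_2 = 0.5
Y = 2.0 + 1.0*(Z == 2) + 1.5*X + rng.normal(size=n)   # true beta2 = 1.0

pi_hat = np.array([(Z == g).mean() for g in (1, 2)])

# step (2): per-arm OLS regression q_g(X) on basis (1, X), predicted for ALL i
def fit_predict(g):
    c = np.column_stack([np.ones(n), X])              # basis functions c_g(X)
    zeta = np.linalg.lstsq(c[Z == g], Y[Z == g], rcond=None)[0]
    return c @ zeta

q = {g: fit_predict(g) for g in (1, 2)}

# step (3): augmented estimate of each arm mean, then the treatment difference
def mu_aug(g):
    I = (Z == g).astype(float)
    pg = pi_hat[g - 1]
    return np.mean(I*Y/pg - (I - pg)/pg * q[g])

beta2_unadj = Y[Z == 2].mean() - Y[Z == 1].mean()     # step (1): unadjusted
beta2_aug = mu_aug(2) - mu_aug(1)                     # adjusted estimator
```

Across Monte Carlo replications, beta2_aug is noticeably less variable than beta2_unadj when X is predictive of Y, while both remain consistent.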

The foregoing three-step algorithm applies to very general m(Y, Z; θ). Often,

m(Y, Z; θ) = A(Z, θ){Y − f(Z, θ)}   (16)

for some A(Z, θ) with r rows and some f(Z, θ), as in (10) and (11). Here, a simpler, “direct” implementation strategy is possible. Note that E{m(Y, Z; θ) | X, Z = g} = A(g, θ){E(Y | X, Z = g) − f(g, θ)}; thus, for each g = 1, . . . , k, based on the data (Yi, Xi) for i in group g, we may postulate parametric regression models ζgTcg(X) for E(Y | X, Z = g), for a vector of basis functions cg(X), and obtain OLS estimators ζ̂g. Then form for each i = 1, . . . , n the corresponding predicted values for E{m(Y, Z; θ) | X, Z = g} as q̂g(Xi, θ) = A(g, θ){ζ̂gTcg(Xi) − f(g, θ)}, where we emphasize that, here, the q̂g(Xi, θ), g = 1, . . . , k, are functions of θ. Substituting the q̂g(Xi, θ) for the qg(Xi, ζ̂g) in (15), solve the resulting equation in θ directly to obtain θ̂aug.
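A sketch of the direct strategy for the two-arm log-odds ratio in (3) follows (hypothetical simulated data; a linear qg fitted by OLS is used for E(Y | X, Z = g), one option noted in the text; numpy assumed). Solving the augmented equation in this case reduces to augmenting each arm's event proportion and transforming to the logit scale:

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(3)
n = 40000
X = rng.normal(size=n)
Z = rng.integers(1, 3, size=n)                       # pi_1 = pi_2 = 0.5
# hypothetical conditional model generating binary Y
p_true = expit(-0.5 + 1.0*X + 0.8*(Z == 2))
Y = (rng.random(n) < p_true).astype(float)

pi_hat = {g: (Z == g).mean() for g in (1, 2)}
c = np.column_stack([np.ones(n), X])                 # basis c_g(X) = (1, X)

def p_aug(g):
    # per-arm OLS fit of E(Y | X, Z=g), then the augmented arm proportion
    zeta = np.linalg.lstsq(c[Z == g], Y[Z == g], rcond=None)[0]
    q = c @ zeta
    I = (Z == g).astype(float)
    return np.mean(I*Y - (I - pi_hat[g])*q) / pi_hat[g]

beta2_unadj = logit(Y[Z == 2].mean()) - logit(Y[Z == 1].mean())
beta2_aug = logit(p_aug(2)) - logit(p_aug(1))        # adjusted log-odds ratio
```

Both estimators target the unconditional β2 in (3); the adjusted version simply removes chance covariate imbalance between the arms.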

Several observations follow from semiparametric theory. Although we advocate representing E{m(Y, Z; θ) | X, Z = g} or E(Y | X, Z = g), g = 1, . . . , k, by parametric models, consistency and asymptotic normality of θ̂aug hold regardless of whether or not these models are correct specifications of the true E{m(Y, Z; θ) | X, Z = g} or E(Y | X, Z = g). Thus, the proposed methods are not parametric, as their validity does not depend on parametric assumptions. The theory also shows that, in either implementation strategy, if the qg are specified and fitted via OLS as described above, then, by an argument similar to that in Leon et al. (2003, Section 4), θ̂aug is guaranteed to be relatively more efficient than the corresponding unadjusted estimator. Moreover, under these conditions, although ζg and πg, g = 1, . . . , k, are estimated, θ̂aug will have the same properties asymptotically as the estimator that could be obtained if the limits in probability of the ζ̂g were known and substituted in (14) and if the true πg were substituted, regardless of whether the qg are correct or not. In the direct strategy, if Y is discrete, it is natural to instead posit the models for E(Y | X, Z = g) as generalized linear models, e.g., logistic regression for binary Y, and fit these using iteratively reweighted least squares (IRWLS). Although the previous statements do not necessarily hold exactly in this case, in our experience, they hold approximately. Regardless of whether or not the qg are represented by parametric linear models and fitted by OLS, if the representation chosen contains the true form of E{m(Y, Z; θ) | X, Z = g} or E(Y | X, Z = g), respectively, then θ̂aug is asymptotically equivalent to the optimal estimator solving (14). In general, the closer the predictions from these models are to the true functions of X, the closer θ̂aug will come to achieving the precision of the optimal estimator. Because β is contained in θ, all of these results apply equally to β̂aug.

Because in either implementation strategy θ̂aug solving (15) is an M-estimator, the sandwich method (e.g., Stefanski and Boos, 2002) may be used to obtain a sampling covariance matrix for θ̂aug, from which standard errors for functions of β̂aug may be derived. This matrix is of form (13), with expectations replaced by sample averages evaluated at the estimates and ag(X) replaced by the predicted values using the qg, g = 1, . . . , k.
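Continuing the mean-difference example (hypothetical simulated data; numpy assumed), the sandwich formula specializes in this linear case to the empirical variance of the estimated per-subject influence contributions, and the resulting standard error is smaller than the unadjusted one when X is predictive of Y:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000
X = rng.normal(size=n)
Z = rng.integers(1, 3, size=n)
Y = 2.0 + 1.0*(Z == 2) + 1.5*X + rng.normal(size=n)

c = np.column_stack([np.ones(n), X])                 # basis c_g(X) = (1, X)
psi = np.zeros(n)
for g, sign in ((1, -1.0), (2, 1.0)):
    I = (Z == g).astype(float)
    pg = I.mean()
    zeta = np.linalg.lstsq(c[Z == g], Y[Z == g], rcond=None)[0]
    q = c @ zeta
    psi += sign * (I*Y - (I - pg)*q) / pg            # per-arm influence piece

beta2_aug = psi.mean()                               # adjusted estimate
se_aug = np.sqrt(np.mean((psi - beta2_aug)**2) / n)  # sandwich-type SE
se_unadj = np.sqrt(Y[Z == 1].var(ddof=1)/(Z == 1).sum()
                   + Y[Z == 2].var(ddof=1)/(Z == 2).sum())
```

The ratio se_unadj/se_aug reflects how much of the outcome variance the covariate regressions explain.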

The regression models qg in either implementation, which are the mechanism by which covariate adjustment is incorporated, are determined separately by treatment group and are developed independently of reference to the adjusted estimator θ̂aug. Thus, estimation of β could be carried out by a generalization of the “principled” strategy proposed by Tsiatis et al. (2007, Section 4) in the context of a two-arm trial and inference on β2 in (1), where development of the models qg would be undertaken by analysts different from those responsible for obtaining β̂aug, to lessen concerns over possible bias, as discussed in Section 1.

5. Improved Hypothesis Tests

The principles in Section 3 may be used to construct more powerful tests of null hypotheses of no treatment effects by exploiting auxiliary covariates. The key is that, under a general null hypothesis H0 involving s degrees of freedom, a usual test statistic Tn, say, based on the data (Yi, Zi), i = 1, . . . , n, only is asymptotically equivalent to a quadratic form; i.e.,

Tn = {n−1/2 ∑i=1n h(Yi, Zi)}T Σ−1 {n−1/2 ∑i=1n h(Yi, Zi)} + op(1),   (17)

where h(Y, Z) is an (s × 1) function of (Y, Z), discussed further below, such that E0{h(Y, Z)} = 0, with E0 denoting expectation under H0; and Σ = E0{h(Y, Z)h(Y, Z)T}.

When the notion of “treatment effects” may be formulated in terms of β in a model like (1)-(5), the null hypothesis is typically of the form H0 : Cβ = 0, where C is an (s × p) contrast matrix. E.g., in (2), C is (2 × 3) with rows (1, −1, 0) and (1, 0, −1). Inference on H0 may then be based on a Wald test of the form Tn = (Cβ̂)T(CΣ̂βCT)−1(Cβ̂), where β̂ is the unadjusted estimator corresponding to an estimating function m(Y, Z; θ), and Σ̂β is an estimator for the covariance matrix of β̂; this statistic is asymptotically equivalent to (17) with h(Y, Z) = CBm(Y, Z; θ0). Here, B is the (p × r) matrix equal to the first p rows of [E{∂m(Y, Z; θ0)/∂θT}]−1, and θ0 is the value of θ under H0.

In other situations, the null hypothesis may not refer to a parameter like β in a given model. For example, the null hypothesis for a k-arm trial may be H0 : S1(u) = · · · = Sk(u) = S(u), where Sg(u) = 1 − P(Y ≤ u | Z = g), and S(u) = 1 − P(Y ≤ u). A popular test in this setting is the Kruskal-Wallis test, which reduces to the Wilcoxon rank sum test for k = 2. Letting ng = ∑i=1n I(Zi = g) and R̄g be the average of the overall ranks for subjects in group g, the test statistic is Tn = [12/{n(n + 1)}] ∑g=1k ng{R̄g − (n + 1)/2}2. By results in van der Vaart (1998, Section 12.2), it may be shown that Tn is asymptotically equivalent to a statistic of the form (17), where h(Y, Z) is {(k − 1) × 1} with gth element {I(Z = g) − πg}{S(Y) − 1/2}, g = 1, . . . , k − 1.

To motivate the proposed more powerful tests, we consider the behavior of Tn in (17) under a sequence of local alternatives H1n converging to H0 at rate n−1/2. Typically, under suitable regularity conditions, n−1/2 ∑i=1n h(Yi, Zi) in (17) converges under the sequence H1n to a N(τ, Σ) random vector for some τ, so that Tn has asymptotically a noncentral χs2 distribution with noncentrality parameter τTΣ−1τ. To obtain a more powerful test, then, we wish to construct a test statistic with noncentrality parameter as large as possible. Based on the developments in Section 3, we consider test statistics of the form

Tn* = {n−1/2 ∑i=1n h*(Yi, Xi, Zi)}T Σ*−1 {n−1/2 ∑i=1n h*(Yi, Xi, Zi)},   (18)
h*(Y, X, Z) = h(Y, Z) − ∑g=1k {I(Z = g) − πg}ag(X),   (19)

where Σ* = E0{h*(Y, X, Z)h*(Y, X, Z)T}. The second term in (19) has mean zero by randomization under H0 or any alternative. Accordingly, it follows under the sequence of alternatives H1n that n−1/2 ∑i=1n h*(Yi, Xi, Zi) converges in distribution to a N(τ, Σ*) random vector, so that Tn* in (18) has an asymptotic noncentral χs2 distribution with noncentrality parameter τTΣ*−1τ.

These results suggest that, to maximize the noncentrality parameter and thus power, we wish to find the particular Σ*, Σopt* say, that makes Σopt*−1 − Σ*−1 nonnegative definite for all Σ*, which is equivalent to making Σ* − Σopt* nonnegative definite for all Σ*. This corresponds to finding the optimal choice of ag(X), g = 1, . . . , k, in (19). By an argument similar to that leading to (14), the optimal choice is ag(X) = E{h(Y, Z) | X, Z = g}.

These developments suggest an implementation strategy analogous to that in Section 4:

(1) For the test statistic Tn, determine h(Y, Z) and substitute sample quantities for any unknown parameters to obtain ĥ(Y, Z). E.g., for H0 : Cβ = 0 in model (2), with C (2 × 3) as above, m(Y, Z; θ) = {I(Z = 1), I(Z = 2), I(Z = 3)}T{Y − β1I(Z = 1) − β2I(Z = 2) − β3I(Z = 3)}, θ = (β1, β2, β3)T. Under H0, θ0 = (μ, μ, μ)T, say, so that m(Y, Z; θ0) = {I(Z = 1), I(Z = 2), I(Z = 3)}T(Y − μ), and

h(Y, Z) = C{I(Z = 1)/π1, I(Z = 2)/π2, I(Z = 3)/π3}T(Y − μ).   (20)

As μ is unknown, ĥ(Y, Z) is obtained by substituting the overall sample mean Ȳ for μ. We recommend substituting π̂g = ng/n for πg, g = 1, 2, 3, in (20), as above. Similarly, for the Kruskal-Wallis test, ĥ(Y, Z) is obtained by substituting π̂g for πg and the empirical estimate Ŝ, based on the overall ranks, for S.

(2) For each treatment group g = 1, . . . , k separately, treating the ĥ(Yi, Zi) for subjects i in group g as s-variate “data,” develop a regression model qg(X, ζg) by representing each component qgu(X, ζgu), u = 1, . . . , s, by the parametric “basis function” approach in Section 4; estimate each ζgu by OLS to obtain ζ̂gu; and form predicted values qg(Xi, ζ̂g), i = 1, . . . , n.

(3) Using the predicted values from step (2), form

ĥ*(Yi, Xi, Zi) = ĥ(Yi, Zi) − ∑g=1k {I(Zi = g) − π̂g}qg(Xi, ζ̂g),  i = 1, . . . , n,   (21)

and substitute these values into (18). Estimate Σ* in (18) by Σ̂* = n−1 ∑i=1n ĥ*(Yi, Xi, Zi)ĥ*(Yi, Xi, Zi)T.

Compare the resulting test statistic Tn* to the χs2 distribution. As in Section 4, there is no effect asymptotically of estimating ζg and πg, g = 1, . . . , k, so that Tn* will achieve the same power asymptotically as if the limits in probability of the ζ̂g and the true πg were substituted. Notably, the test based on Tn* will be asymptotically more powerful than the corresponding unadjusted test against any sequence of alternatives.
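The procedure above can be sketched end to end for the Kruskal-Wallis case (hypothetical simulated data under an alternative; the basis (1, X)T and the rank-based estimate of S are assumptions of this sketch; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 600, 3
X = rng.normal(size=n)
Z = rng.integers(1, k + 1, size=n)                  # pi_g = 1/3
shift = np.where(Z == 1, 0.0, np.where(Z == 2, 0.6, 1.0))
Y = shift + 1.0*X + rng.normal(size=n)              # X strongly associated with Y

# step (1): h-hat from the Kruskal-Wallis representation, S estimated by ranks
ranks = np.argsort(np.argsort(Y)) + 1               # overall ranks 1..n (no ties)
S_hat = 1.0 - (ranks - 0.5) / n                     # empirical survival at Y_i
pi_hat = np.array([(Z == g).mean() for g in range(1, k + 1)])
h = np.column_stack([((Z == g) - pi_hat[g - 1]) * (S_hat - 0.5) for g in (1, 2)])

# step (2): per-arm OLS models q_g for E{h | X, Z = g}, basis (1, X)
c = np.column_stack([np.ones(n), X])
h_star = h.copy()
for g in range(1, k + 1):
    zeta = np.linalg.lstsq(c[Z == g], h[Z == g], rcond=None)[0]
    h_star -= ((Z == g) - pi_hat[g - 1])[:, None] * (c @ zeta)

# step (3): quadratic form (18) with Sigma* estimated by the sample second moment
u = h_star.sum(axis=0) / np.sqrt(n)
Sigma_star = h_star.T @ h_star / n
T_star = u @ np.linalg.solve(Sigma_star, u)
# compare to chi-square(k - 1) = chi-square(2); 0.95 quantile is about 5.99
```

Under H0 the same code yields a statistic with the nominal chi-square(2) reference distribution; under the shifted alternative used here it is far out in the rejection region.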

The approach of Tangen and Koch (1999) to modifying the Wilcoxon test for two treatments is in a similar spirit to this general approach.

6. Simulation Studies

6.1 Estimation

We report results of several simulations, each based on 5000 Monte Carlo data sets. Tsiatis et al. (2007, Section 6) carried out extensive simulations in the particular case of (1); thus, we focus here on estimation of quantities other than differences of treatment means.

In the first set of simulations, we considered k = 2, a binary response Y, and

logit{P(Y = 1 | Z)} = β1 + β2I(Z = 2),   (22)

so that β2 is the log-odds ratio for treatment 2 relative to treatment 1, the parameter of interest; and θ = β = (β1, β2)T. For each scenario, we generated Z as Bernoulli with P(Z = 1) = P(Z = 2) = 0.5 and covariates X = (X1, . . . , X8)T such that X1, X3, X8 ~ N(0, 1); X4 and X6 were Bernoulli with P(X4 = 1) = 0.3 and P(X6 = 1) = 0.5; and X2 = 0.2X1 + 0.98U1, X5 = 0.1X1 + 0.2X3 + 0.97U2, and X7 = 0.1X3 + 0.99U3, where Uj ~ N(0, 1), j = 1, 2, 3, independently. We then generated Y as Bernoulli according to logit{P(Y = 1 | X, Z = g)} = α0g + αgTX, g = 1, 2, with α0g and αg chosen to yield mild, moderate, and strong association between Y and X within each treatment, as follows. Using the coefficient of determination R2 to measure the strength of association, R2 = (0.18, 0.16) for treatments (1, 2) in the “mild” scenario, with (α01, α02) = (0.25, −0.8), α1 = (0.8, 0.5, 0, 0, 0, 0, 0, 0)T, and α2 = (0.3, 0.7, 0.3, 0.8, 0, 0, 0, 0)T; R2 = (0.32, 0.33) in the “moderate” scenario, with (α01, α02) = (0.38, −0.8), α1 = (1.2, 1.0, 0, 0, 0, 0, 0, 0)T, and α2 = (0.5, 1.3, 0.5, 1.5, 0, 0, 0, 0)T; and R2 = (0.43, 0.41) in the “strong” scenario, with (α01, α02) = (0.8, −0.8), α1 = (1.5, 1.8, 0, 0, 0, 0, 0, 0)T, and α2 = (1.0, 1.3, 0.8, 2.5, 0, 0, 0, 0)T. Thus, in all cases, X1, . . . , X4 are covariates “important” for adjustment while X5, . . . , X8 are “unimportant.” For each data set, n = 600, and we fitted (22) by IRWLS to (Yi, Zi), i = 1, . . . , n, to obtain the unadjusted estimate of β. We also estimated β by the proposed methods using the direct implementation strategy, where the models qg(X, ζg) for E(Y | X, Z = g) for each g = 1, 2 in the augmentation term were developed six ways:

  • Aug. 1: qg(X, ζg) = ζgTcg(X) with “true” cg(X), fit by OLS
  • Aug. 2: qg(X, ζg) = ζgTcg(X) with cg(X) = (1, X1, . . . , X8)T, fit by OLS
  • Aug. 3: qg(X, ζg) = expit{ζgTcg(X)} with “true” cg(X), fit by IRWLS
  • Aug. 4: qg(X, ζg) = expit{ζgTcg(X)} with cg(X) = (1, X1, . . . , X8)T, fit by IRWLS
  • Aug. 5: qg(X, ζg) = ζgTcg(X), fit by OLS with forward selection
  • Aug. 6: qg(X, ζg) = expit{ζgTcg(X)}, fit by IRWLS with forward selection

where “true” means that cg(X) contained only those Xu for which the corresponding element of αg was not zero (i.e., using the “true important covariates” for each g); and in Aug. 5 and 6 forward selection from linear terms in X1, . . . , X8 for linear or logistic regression, respectively, was used to determine each cg(X), with entry criterion 0.05. Aug. 3, 4, and 6 demonstrate performance when nonlinear models and methods other than OLS are used. We also estimated β2 by estimating φ in (7) via IRWLS two ways: Usual 1, where only the “important” covariates X1, . . . , X4 were included in the model; and Usual 2, where the subset of X1, . . . , X8 to include was identified via forward selection with entry criterion 0.05.

Table 1 shows modest to considerable gains in efficiency for the proposed estimators, depending on the strength of the association. The estimators are unbiased, and associated confidence intervals achieve the nominal level. In contrast, the usual adjustment based on (7) leads to biased estimation of β2, considerable efficiency loss, and unreliable intervals. This is a consequence of the fact that β2 is an unconditional measure of treatment effect while φ is defined conditional on X; this distinction does not matter when the model for Y is linear but is important when it is nonlinear, as is (7) (see, e.g., Robinson et al., 1998).

Table 1
Simulation results for estimation of the log-odds ratio β2 for treatment Z = 2 relative to Z = 1 in (22) based on 5,000 Monte Carlo data sets. “Unadjusted” refers to the unadjusted estimator based on the data on (Y, Z) only, “Aug. ...

In the second set of simulations, we again took k = 2 and focused on β2, the difference in treatment slopes in the linear mixed model (4). In each scenario, we generated for each i = 1, . . . , n = 200 Zi as Bernoulli with P(Z = 1) = P(Z = 2) = 0.5; X1i, X2i, X3i as above; and subject-specific intercept β0i = 0.5 + 0.2X1i + 0.5X2i + b0i and slope β1i = α0g + α1gX1i2 + α2gX2i + α3gX3i + b1i for Zi = g, where (α01, α02) = (1.0, 1.3) and (b0i, b1i)T ~ N(0, D), with D11 = 1, D12 = 0.2, and D22 = 0.4, so that corr(b0i, b1i) = 0.5. We generated mi = 9, 10, 11 with equal probabilities; took tij = 2(j − 1) for j = 1, . . . , mi; and generated Yij = β0i + β1itij + eij, j = 1, . . . , mi, where eij ~ N(0, σe2). Writing αg = (α1g, α2g, α3g)T, we took α1 = (0.2, 0.2, 0)T and α2 = (0.2, 0, 0.2)T, yielding R2 values between subject-specific slopes and covariates of (0.11, 0.14) in the two groups, for “mild” association; α1 = (0.13, 0.1, 0)T and α2 = (0.13, 0, 0.15)T, R2 = (0.24, 0.24), for “moderate” association; and α1 = (0.28, 0.25, 0)T and α2 = (0.28, 0, 0.25)T, R2 = (0.36, 0.36), for “strong” association. For each data set, we obtained the unadjusted estimate for θ by fitting (4) using SAS proc mixed (SAS Institute, 2006). For (4), m(Y, Z; θ) has components of form (16) for α and β and more complicated components quadratic in Y for D and σe2. For simplicity, because the estimators for (α, β) and (D, σe2) are uncorrelated, we fixed D and σe2 at the unadjusted analysis estimates in the components of m(Y, Z; θ) for (α, β), as asymptotically this will not impact precision of the estimators for (α, β), and used the direct implementation strategy based on the components for (α, β) only.
We considered three variants of the proposed methods, all with each element of the working models qg(X, ζg) fitted by OLS: Aug. 1, taking cg(X) = (1, X12, X2, X3)T, corresponding to the form of the true relationship; Aug. 2, with cg(X) = (1, X1, X2, X3)T, so not exploiting the quadratic relationship in X1; and Aug. 3, with cg(X) = (1, X1, X12, X2, X3)T, including an unneeded linear effect of X1. Writing now Xi = (X1i, X2i, X3i)T, we also estimated β2 by the estimate of φ from fitting via proc mixed a linear mixed model with random intercepts and slopes, linear fixed effects of Xi on each, and treatment-by-time interaction coefficient φ, denoted as “Usual”; such a model, with linear covariate effects only, might be prespecified in a trial protocol (e.g., Grouin et al., 2004). Table 2 shows that the proposed methods lead to relatively more efficient estimators when quadratic terms in X1 are included in cg(X).
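One replicate of this longitudinal design can be sketched as follows. This is a minimal, assumption-laden illustration: the covariate distributions (defined with the paper's first simulation set, not reproduced in this excerpt), the residual standard deviation, and the exact quadratic form of the slope model are assumptions here, flagged in the comments.

```python
# Sketch of one simulated data set for the longitudinal design.
# ASSUMED (not given in this excerpt): X1 ~ N(0,1); X2, X3 ~ Bernoulli(0.5);
# sigma_e = 1; slope depends on X1 through X1^2.
import numpy as np

rng = np.random.default_rng(1)
n = 200

Z = rng.integers(1, 3, size=n)                  # P(Z=1) = P(Z=2) = 0.5
X1 = rng.normal(size=n)
X2 = rng.integers(0, 2, size=n)
X3 = rng.integers(0, 2, size=n)

D = np.array([[1.0, 0.2], [0.2, 0.4]])          # D11, D12, D22 as in the text
b = rng.multivariate_normal([0.0, 0.0], D, size=n)   # (b0i, b1i)

alpha0 = np.where(Z == 1, 1.0, 1.3)             # (alpha_01, alpha_02)
alpha = {1: np.array([0.2, 0.2, 0.0]),          # "mild" association scenario
         2: np.array([0.2, 0.0, 0.2])}

beta0 = 0.5 + 0.2 * X1 + 0.5 * X2 + b[:, 0]
beta1 = np.array([alpha0[i] + alpha[Z[i]] @ np.array([X1[i] ** 2, X2[i], X3[i]]) + b[i, 1]
                  for i in range(n)])

data = []
for i in range(n):
    m_i = rng.integers(9, 12)                   # m_i in {9, 10, 11}, equal probs
    t = 2.0 * np.arange(m_i)                    # t_ij = 2(j - 1)
    e = rng.normal(scale=1.0, size=m_i)         # assumed sigma_e = 1
    data.append((i, t, beta0[i] + beta1[i] * t + e))

print(len(data), data[0][1][:3])
```

The paper fits model (4) to such data with SAS proc mixed; the sketch above only generates the raw longitudinal records.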

Table 2
Simulation results for estimation of β2 in the linear mixed model (4) using the usual unadjusted method, the proposed augmented methods denoted by “Aug. a” for a=1,2,3, and the “Usual” method, as described in the ...

6.2 Testing

We carried out simulations based on 10,000 Monte Carlo data sets involving k = 3 and the Kruskal-Wallis test. For each data set, we generated, for each of n = 200 or 400 subjects, Z with P(Z = g) = 1/3, g = 1, 2, 3, and (Y, X) with the joint distribution of (Y, X) given Z bivariate normal with mean {β1I(Z = 1) + β2I(Z = 2), 0}T, unit variances, and correlation ρ, where ρ = 0.25, 0.50, 0.75 corresponds to mild, moderate, and strong association between covariate and response. Under the null hypothesis, we set β1 = β2 = 0; simulations under the alternative involved β1 = 0.25, β2 = 0.4. For each data set, we calculated the unadjusted Kruskal-Wallis test statistic Tn and the proposed statistic Tn* using the strategy in Section 5, with each component of the s = 2-dimensional models qg(X, ζg) in (21) represented as a linear function of X. Each statistic was compared to the 0.95 quantile of the χ2 distribution with 2 degrees of freedom. Table 3 shows that the proposed procedure yields greater power than the unadjusted test while achieving the nominal level, where the extent of improvement depends on the strength of the association between Y and X, as expected.
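The unadjusted half of this comparison can be sketched directly: generate one replicate under the alternative and compute the Kruskal-Wallis statistic from rank averages (no tie correction is needed since Y is continuous). The adjusted statistic Tn* requires the augmentation machinery of Section 5 and is not reproduced here.

```python
# One replicate of the testing simulation and the unadjusted
# Kruskal-Wallis statistic, computed from first principles.
import numpy as np

rng = np.random.default_rng(2)
n, rho = 400, 0.5                                 # "moderate" association
Z = rng.integers(1, 4, size=n)                    # P(Z=g) = 1/3, g = 1, 2, 3
beta = {1: 0.25, 2: 0.4, 3: 0.0}                  # alternative from the text
mean_y = np.array([beta[z] for z in Z])
cov = np.array([[1.0, rho], [rho, 1.0]])
YX = mean_y[:, None] * np.array([1.0, 0.0]) + rng.multivariate_normal([0, 0], cov, size=n)
Y, X = YX[:, 0], YX[:, 1]

ranks = np.argsort(np.argsort(Y)) + 1             # ranks 1..n (no ties, a.s.)
T = 0.0
for g in (1, 2, 3):
    rg = ranks[Z == g]
    T += len(rg) * (rg.mean() - (n + 1) / 2) ** 2
T *= 12 / (n * (n + 1))

crit = 5.991                                      # 0.95 quantile of chi-square(2)
print(round(T, 2), T > crit)
```

Repeating this over 10,000 replicates and tallying rejections against the χ2(2) critical value reproduces the "unadjusted" columns of Table 3.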

Table 3
Empirical size and power of the usual Kruskal-Wallis test Tn (unadjusted) and the proposed adjusted test Tn* based on 10,000 Monte Carlo replications. Each entry in the columns labeled Tn and Tn* is the number of times out of 10,000 that each test rejected the null ...

7. Applications

7.1 PURSUIT Clinical Trial

We consider data from 5,710 patients in the PURSUIT trial introduced in Section 1 and focus on the log-odds ratio for Integrilin relative to control. The 35 baseline auxiliary covariates are listed in Web Appendix D.

The unadjusted estimate of the log-odds ratio based on (22) is −0.174, with standard error 0.073. To calculate the augmented estimator based on (22), we used the direct implementation strategy and took each element of the working models qg(X, ζg) linear in cg(X), g = 1, 2, with cg(X) including main effects of all 35 covariates, and fitted the models by OLS. The resulting augmented estimate has standard error 0.071. For these data, the relative efficiency of the proposed estimator to the unadjusted one, computed as the square of the ratio of the estimated standard errors, is 1.06. For a binary response, substantial increases in efficiency via covariate adjustment are unlikely; thus, this admittedly modest improvement is encouraging.
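The quoted relative efficiency is just the squared ratio of the two estimated standard errors:

```python
# Relative efficiency of the augmented vs. unadjusted estimator,
# computed as the square of the ratio of estimated standard errors.
se_unadj, se_aug = 0.073, 0.071
rel_eff = (se_unadj / se_aug) ** 2
print(round(rel_eff, 2))   # 1.06
```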

7.2 AIDS Clinical Trials Group Protocol 175

We consider data on 2,139 subjects from ACTG 175, discussed in Section 1, where the k = 4 treatments were zidovudine (ZDV) monotherapy (g = 1), ZDV+didanosine (ddI) (g = 2), ZDV+zalcitabine (g = 3), and ddI monotherapy (g = 4). The continuous response Y is CD4 count (cells/mm3) at 20 ± 5 weeks, and we focus on the four treatment means, with the same 12 auxiliary covariates considered by Tsiatis et al. (2007, Section 5).

We consider the extension of model (2) to k = 4 treatments, so that θ = β = (β1, . . . , β4)T, with βg = E(Y|Z = g), g = 1, . . . , 4. The standard unadjusted estimator for β is the vector of arm-specific sample averages; these are (336.14, 403.17, 372.04, 374.32)T for g = (1, 2, 3, 4), with standard errors (5.68, 6.84, 5.90, 6.22)T. Using the direct implementation strategy, with each element of the working models qg(X, ζg) represented using cg(X) containing all linear terms in the 12 covariates, the proposed methods yield the estimate (333.85, 403.83, 370.43, 376.45)T, with standard errors obtained via the sandwich method of (4.61, 5.93, 4.89, 5.11)T. This is of course a single realization of the data; however, it is noteworthy that the standard errors for the proposed estimator correspond to relative efficiencies of 1.51, 1.33, 1.46, and 1.48, respectively.
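For the special case of treatment means, the augmented estimator takes a familiar AIPW-type form: the arm-g sample average is replaced by an average that borrows strength from X through an arm-specific OLS working model. The sketch below illustrates this form on synthetic data (not the ACTG 175 data, which we cannot reproduce here); the covariate effects and arm means are invented.

```python
# Sketch of the augmentation idea for treatment means on SYNTHETIC data.
# beta_tilde_g = mean over i of:
#   I(Z_i = g) Y_i / pi_hat_g  -  {I(Z_i = g) - pi_hat_g} / pi_hat_g * q_hat_g(X_i)
# where q_hat_g is an OLS fit of Y on c_g(X) within arm g.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
Z = rng.integers(1, 5, size=n)                 # four arms, equal probabilities
X = rng.normal(size=(n, 3))                    # stand-in baseline covariates
Y = 350 + 40 * (Z == 2) + X @ np.array([20.0, -15.0, 10.0]) + rng.normal(0, 30, n)

def augmented_mean(g):
    ind = (Z == g).astype(float)
    pi_hat = ind.mean()
    # OLS working model for E(Y | X, Z = g), fitted within arm g
    C = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(C[ind == 1], Y[ind == 1], rcond=None)
    q_hat = C @ coef
    # Augmented average; reduces to the plain arm mean when q_hat is constant
    return np.mean(ind * Y / pi_hat - (ind - pi_hat) / pi_hat * q_hat)

print([round(augmented_mean(g), 1) for g in (1, 2, 3, 4)])
```

The efficiency gain comes from the augmentation term absorbing the chance imbalance of X across arms; the sandwich standard errors reported in the text account for the estimation of the working models.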

We also carried out the standard unadjusted three-degree-of-freedom Wald test for H0 : β1 = β2 = β3 = β4 and the Kruskal-Wallis test for H0 : S1(u) = · · · = S4(u) = S(u), as well as their adjusted counterparts using cgu(X) containing linear and quadratic terms in the continuous components of X and linear terms in the binary elements. The unadjusted and adjusted Wald statistics are 59.40 and 109.58, respectively; the unadjusted and adjusted Kruskal-Wallis statistics are 49.04 and 100.53; all are to be compared to χ2 critical values with 3 degrees of freedom. Although the evidence against the null hypotheses is overwhelming even without adjustment, the proposed test statistics are considerably larger.

See Web Appendix D for further results for these data.

8. Discussion

We have proposed a general approach, based on semiparametric theory, to using auxiliary baseline covariates to improve the precision of estimators and tests for general measures of treatment effect and general null hypotheses in the analysis of randomized clinical trials.

We identify the optimal estimating function involving covariates within the class of such estimating functions based on a given m(Y, Z; θ). For differences of treatment means or measures of treatment effect for binary outcomes, this estimating function in fact leads to the efficient estimator of the treatment effect. In more complicated models, e.g., repeated measures models, we do not identify the optimal estimating function among all possible choices; however, our experience in other problems suggests that gains over the methods here would be modest.

The use of model selection techniques, such as forward selection in our simulations, to determine covariates to include in the augmentation term models should have no effect asymptotically on the properties of the estimators for θ. However, such effects may be evident in smaller samples, requiring a “correction” to account for failure of the asymptotic theory to represent faithfully the uncertainty due to model selection. Investigation of how approaches to inference after model selection (e.g., Hjort and Claeskens, 2003; Shen, Huang and Ye, 2004) may be adapted to this setting would be a fruitful area for future research.


Acknowledgements

This work was supported by NIH grants R37 AI031789, R01 CA051962, and R01 CA085848.

Supplementary Materials

Web Appendices A–D, referenced in Sections 2, 3, and 7, are available under the Paper Information link at the Biometrics website.


References

  • Carroll RJ, Ruppert D, Stefanski LA, Crainiceanu CM. Measurement Error in Nonlinear Models: A Modern Perspective, Second Edition. Chapman and Hall/CRC; Boca Raton: 2006.
  • Grouin JM, Day S, Lewis J. Adjustment for baseline covariates: An introductory note. Statistics in Medicine. 2004;23:697–699. [PubMed]
  • Hammer SM, Katzenstein DA, Hughes MD, Gundaker H, Schooley RT, Haubrich RH, Henry WK, Lederman MM, Phair JP, Niu M, Hirsch MS, Merigan TC, AIDS Clinical Trials Group Study 175 Study Team A trial comparing nucleoside monotherapy with combination therapy in HIV-infected adults with CD4 cell counts from 200 to 500 per cubic millimeter. New England Journal of Medicine. 1996;335:1081–1089. [PubMed]
  • Harrington RA, PURSUIT Investigators Inhibition of platelet glycoprotein IIb/IIIa with eptifibatide in patients with acute coronary syndromes without persistent ST-segment elevation. New England Journal of Medicine. 1998;339:436–443. [PubMed]
  • Hauck WW, Anderson S, Marcus SM. Should we adjust for covariates in nonlinear regression analyses of randomized trials? Controlled Clinical Trials. 1998;19:249– 256. [PubMed]
  • Hjort NL, Claeskens G. Frequentist model average estimators. Journal of the American Statistical Association. 2003;98:879–899.
  • Koch GG, Tangen CM, Jung JW, Amara IA. Issues for covariance analysis of dichotomous and ordered categorical data from randomized clinical trials and non-parametric strategies for addressing them. Statistics in Medicine. 1998;17:1863–1892. [PubMed]
  • Leon S, Tsiatis AA, Davidian M. Semiparametric efficient estimation of treatment effect in a pretest-posttest study. Biometrics. 2003;59:1046–1055. [PubMed]
  • Lesaffre E, Senn S. A note on non-parametric ANCOVA for covariate adjustment in randomized clinical trials. Statistics in Medicine. 2003;22:3586–3596. [PubMed]
  • Pocock SJ, Assmann SE, Enos LE, Kasten LE. Subgroup analysis, covariate adjustment, and baseline comparisons in clinical trial reporting: Current practice and problems. Statistics in Medicine. 2002;21:2917–2930. [PubMed]
  • Robinson LD, Dorroh JR, Lein D, Tiku ML. The effects of covariate adjustment in generalized linear models. Communications in Statistics, Theory and Methods. 1998;27:1653–1675.
  • SAS Institute, Inc. SAS Online Doc 9.1.3. SAS Institute, Inc.; Cary, NC: 2006.
  • Senn S. Covariate imbalance and random allocation in clinical trials. Statistics in Medicine. 1989;8:467–475. [PubMed]
  • Shen X, Huang HC, Ye J. Inference after model selection. Journal of the American Statistical Association. 2004;99:751–762.
  • Stefanski LA, Boos DD. The calculus of M-estimation. The American Statistician. 2002;56:29–38.
  • Tangen CM, Koch GG. Nonparametric analysis of covariance for hypothesis testing with logrank and Wilcoxon scores and survival-rate estimation in a randomized clinical trial. Journal of Biopharmaceutical Statistics. 1999;9:307–338. [PubMed]
  • Tsiatis AA. Semiparametric Theory and Missing Data. Springer; New York: 2006.
  • Tsiatis AA, Davidian M, Zhang M, Lu X. Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: A principled yet flexible approach. Statistics in Medicine. 2007 in press. [PMC free article] [PubMed]
  • van der Laan MJ, Robins JM. Unified Methods for Censored Longitudinal Data and Causality. Springer; New York: 2003.
  • van der Vaart AW. Asymptotic Statistics. Cambridge University Press; Cambridge: 1998.
  • Yang L, Tsiatis AA. Efficiency study of estimators for a treatment effect in a pretest-posttest trial. The American Statistician. 2001;55:314–321.