For longitudinal binary data with non-monotone non-ignorably missing outcomes over time, a full likelihood approach is complicated algebraically, and with many follow-up times, maximum likelihood estimation can be computationally prohibitive. As alternatives, two pseudo-likelihood approaches have been proposed that use minimal parametric assumptions. One formulation requires specification of the marginal distributions of the outcome and missing data mechanism at each time point, but uses an “independence working assumption,” i.e., an assumption that observations are independent over time. Another method avoids having to estimate the missing data mechanism by formulating a “protective estimator.” In simulations, these two estimators can be very inefficient, both for estimating time trends in the first case and for estimating both time-varying and time-stationary effects in the second. In this paper, we propose use of the optimal weighted combination of these two estimators, and in simulations we show that the optimal weighted combination can be much more efficient than either estimator alone. Finally, the proposed method is used to analyze data from two longitudinal clinical trials of HIV-infected patients.
Longitudinal studies in which each subject is to be observed at a fixed number of time points have become very popular in social science and medical applications. For example, longitudinal data are often collected in AIDS, cardiovascular, and cancer clinical trials and observational studies. We focus on the case where the response variable over time is binary (e.g., success or failure) and are interested in modeling the marginal means or success probabilities; this setting has been well-described [1, 2, 3, 4, 5]. Such modeling is often complicated by the fact that in longitudinal studies, the outcome is not always observed at all assessment times. In addition, these missing data are often non-ignorable, since the probability that an outcome is missing at a given time can depend on the potentially missing value of the outcome at that time. The missing outcome data must be properly accounted for in the analysis, and numerous approaches have been proposed [7, 8, 9, 10, 11]. In clinical trials, an individual’s response is often missing at one follow-up time but observed at the next follow-up time, resulting in a large class of distinct missingness patterns, often called “non-monotone” missingness. We will, however, assume that all subjects have the outcome measured at the first time point; e.g., to be part of the study, the subject must be seen at baseline.
An example of a data set with this structure comes from two longitudinal clinical trials of HIV-infected patients sponsored by the AIDS Clinical Trials Group (ACTG): ACTG 116A  and 116B/117 . The two studies were randomized phase III double-blind trials, designed to compare two treatments, zidovudine (AZT) and didanosine (ddI); they differed with respect to the length of prior treatment with AZT and have been used in several combined analyses . The response of interest is normal CD4 cell count (> 200 cells per cubic millimeter) versus abnormal CD4 cell count (≤ 200) measured at baseline (week 0) and every week for up to 5 weeks from baseline. The cutoff of 200 was initially chosen because of its strong predictive value for development of opportunistic infections and has been adopted as a standard threshold of clinical importance . Previously, we analyzed these data for HIV patients with and without AIDS ; here we consider only the 431 patients with AIDS at baseline. The main question of scientific interest is the effect of treatment on changes in CD4 cell count sufficiency over time. As with most longitudinal studies, missing outcome data over time complicate the analysis. For example, fewer than 50% of the patients (202/431= 46.9%) have outcomes measured at all 6 occasions.
Table I shows the number of subjects seen at each of the six possible occasions. In Table I, we see that 383 of the 431 patients (88.9%) had a measurement at week 1; the percentage of patients seen slowly drops until 285 (66.1%) of the 431 patients are seen at week 5. A majority of the missing data is due to patients who drop out, i.e., once the patient misses a scheduled visit, no further measurements of the response variable are obtained. However, there are 109 (25.3%) patients who missed at least one measurement, but returned for a later measurement. In this setting, it is quite plausible that patients with high or normal CD4 counts are more likely to miss the scheduled study visits. If this is true, then missingness depends on the unobserved outcome of interest and is nonignorable. Indeed, some have argued that the only plausible non-monotone missing at random mechanisms are those that derive from randomized monotone missingness processes [17, 18]. In the longitudinal data setting, these processes require that missingness at an assessment depends on the prior assessment if and only if the prior assessment is observed; such processes are thus highly implausible here.
To formulate a full likelihood for non-ignorable non-monotone missing outcomes over time, one must specify a joint distribution for the T repeated binary outcomes of interest, of dimension 2T, and a model for the missingness mechanism. To estimate the parameters, a full likelihood approach has many nuisance parameters and is complicated algebraically; furthermore, estimation can be computationally prohibitive, especially when the number of times is large. As alternatives to a full likelihood procedure, two pseudo-likelihood [19, 20] procedures have been proposed by Troxel et al.  and Fitzmaurice et al.  under minimal parametric assumptions.
First, Troxel et al.  proposed a pseudo-likelihood that is formed by an “independence working assumption,” i.e., assuming for the purpose of estimation that the longitudinal binary measurements are independent over time and ultimately applying a robust “sandwich” variance estimate  to achieve proper inference. Specifically, their pseudo-likelihood first assumes a marginal logistic regression model for the outcome at each time point; it also assumes that the missingness probability at a given time depends only on the possibly missing response at that time and the covariates (the covariates are assumed to be fully observed). The chief attraction of this pseudo-likelihood approach is that it substantially eases the numerical complexities of the full likelihood approach by reducing high-dimensional sums to sums of a single dimension. Further, it alleviates the need to specify and estimate many nuisance parameters that are needed in a full likelihood approach. In addition, asymptotically unbiased estimators of the regression parameters and missingness parameters can be obtained. However, by assuming independence of repeated measures across measurement occasions, the method can be highly inefficient for estimating the regression parameters. For example, results from Table 1 of Troxel et al.  indicate that their pseudo-likelihood method can be very inefficient compared to the MLE, and in particular in estimating time trends.
Alternatively, Fitzmaurice et al.  proposed a pseudo-likelihood based on the idea of formulating a “protective estimator”  without having to estimate the missing data mechanism. Specifically, they assume that the baseline response is fully observed and that the probability that a response is missing at any future occasion is conditionally independent of the baseline response given the response at that occasion. This assumption ensures that the conditional distributions of the outcome at time 1 given the outcome at any future time are fully identifiable. These conditional distributions are functions only of the parameters of primary scientific interest (the regression parameters), and not the parameters of the missing data mechanism. Their pseudo-likelihood is based on the conditional distributions of the baseline response, given the response at each future occasion, for estimation of the regression parameters. The pseudo-likelihood requires only specification of the bivariate distribution of the outcome at time 1 and any future time, and is thus computationally much more feasible than maximum likelihood. The resulting parameter estimates are asymptotically unbiased when the identifying assumption holds. However, the results of their simulation study showed that, with high correlation, the “protective estimator” can be highly efficient for estimating time trends, but inefficient for estimating the effects of time-stationary covariates.
Since the estimate from Troxel et al.’s  approach is very inefficient for estimating time trends, and the estimate from Fitzmaurice et al.’s  approach is very inefficient for estimating time-stationary effects, this suggests formulating a new estimator of the marginal regression parameters that is a combination of these two estimators. In this paper, we propose forming a new estimator that is the asymptotic minimum variance linear combination of the two estimators [25, 26]. The new estimator is basically a weighted least squares estimate, where the weight matrix is the inverse of the estimated asymptotic covariance matrix of the vector formed from concatenating the Troxel et al. and Fitzmaurice et al. estimates. This estimated asymptotic covariance matrix is obtained using the “sandwich” variance estimator of White .
The remainder of the paper is organized as follows. In Section 2, we describe the underlying data models and introduce the necessary notation. In Section 3, we review the pseudo-likelihoods of Troxel et al. and Fitzmaurice et al., and our proposed weighted combination of the two. Section 4 illustrates the methods with the AIDS example. In Section 5, we present results from our simulation study, showing that our proposed estimator produces much more efficient estimates of the time trends than the Troxel et al. estimator, and much more efficient estimates of the time-stationary effects than the Fitzmaurice et al. estimator.
We assume that n independent subjects are to be observed at a fixed set of T occasions, t = 1, …, T. For the ith individual (i = 1, …, n), we can form a T × 1 vector, Yi = [Yi1, …, YiT]′, where the binary random variable Yit equals 1 if the ith individual has response 1 (e.g., “success”) at time t, and 0 otherwise. Each individual also has a J × 1 covariate vector xit; we assume that all covariates are fully observed. The main interest here is in the marginal model for each binary outcome Yit, which we assume follows a logistic regression. The marginal distribution of Yit is Bernoulli with success probability

pit = pr(Yit = 1|xit) = exp(x′itβ) / {1 + exp(x′itβ)}.   (1)
In a marginal model, the goal is to make inferences about the marginal regression parameters β, whereas the within-subject association among the repeated responses is regarded as a nuisance characteristic of the data. Although the association model is not even specified in the pseudo-likelihood of Troxel et al., the pairwise associations between the outcome at time 1 and the follow-up times must be correctly specified in the protective pseudo-likelihood of Fitzmaurice et al. to obtain consistent estimates. Thus, we briefly discuss the association model here.
The association between a pair of binary outcomes is typically measured in terms of marginal odds ratios  or marginal correlations . For ease of exposition, as well as to be compatible with the original protective pseudo-likelihood of Fitzmaurice et al., here we discuss marginal correlations. In general, we propose a generalized autoregressive(1)-type correlation structure. For two different points in time s ≠ t, the generalized autoregressive(1) model states that

ρist = Corr(Yis, Yit) = ρ^(|t − s|^θ),

where −1 < ρ < 1 and −∞ < θ < ∞. Note that if θ = 0, this correlation reduces to an exchangeable correlation, ρist = ρ, and if θ = 1, this correlation reduces to the usual autoregressive(1), ρist = ρ^|t − s|. Depending on the context, sometimes θ will be estimated, and sometimes it will be specified as 0 or 1. Notation-wise, we generally let α represent the parameter vector of the correlation model.
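As a quick illustration, the generalized AR(1)-type structure can be coded directly. The closed form below, ρ raised to the power |t − s|^θ, is a reconstruction consistent with the two limiting cases described in the text (θ = 0 exchangeable, θ = 1 AR(1)); it is a sketch, not the paper's verbatim formula.

```python
import math

def corr(rho, theta, s, t):
    """Generalized AR(1)-type correlation between occasions s and t (s != t).

    Assumed form: rho_ist = rho ** (|t - s| ** theta), so that
    theta = 0 gives the exchangeable structure (constant rho) and
    theta = 1 gives the usual AR(1) decay rho ** |t - s|.
    """
    return rho ** (abs(t - s) ** theta)

# theta = 0: exchangeable -- every pair of occasions has correlation rho
assert corr(0.5, 0.0, 1, 4) == 0.5
# theta = 1: AR(1) -- correlation decays geometrically in the time lag
assert math.isclose(corr(0.5, 1.0, 1, 4), 0.5 ** 3)
```

Intermediate values of θ interpolate between the two structures, which is why the text allows θ to be estimated rather than fixed.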
In many longitudinal studies, individuals are not observed at all T occasions on account of some stochastic missing data mechanism. Here, we assume that all subjects are observed at baseline (t = 1). However, subjects can be missing at any follow-up time. It is convenient then to introduce (T − 1) random variables, Rit, (t = 2, …, T), that equal 1 if Yit is observed and 0 if Yit is missing. As we discuss briefly in the following section, under the protective assumption used in the pseudo-likelihood of Fitzmaurice et al., a model for Rit does not even need to be specified. However, when using the pseudo-likelihood of Troxel et al., the marginal model for Rit given Yit and xit does need to be correctly specified. Since Rit is binary, the marginal model for Rit involves specifying the probability of being observed (Rit = 1). This probability is assumed to follow a logistic regression,

πit = pr(Rit = 1|yit, xit) = exp(γ0 + γ1yit + x′itγ2) / {1 + exp(γ0 + γ1yit + x′itγ2)}.   (2)
In this marginal model, if γ1 ≠ 0, then the missing data mechanism is non-ignorable, since the probability of being missing depends on possibly unobserved data Yit. In the next section, we briefly discuss the pseudo-likelihoods of Troxel et al. and Fitzmaurice et al., and we describe our proposed estimator.
In this section we review the pseudo-likelihood approach proposed by Troxel et al.  that uses an “independence working assumption,” i.e., assumes that observations are independent over time. The resulting pseudo-likelihood is a product of simple marginal terms and can be used to estimate the marginal regression parameters β and the marginal missingness parameters γ, but not the association parameters α. To describe this pseudo-likelihood, we let f(yit, rit|xit, β, γ) denote the marginal distribution of (Yit, Rit) at time t. We can write this distribution as

f(yit, rit|xit, β, γ) = f(yit|xit, β) f(rit|yit, xit, γ),

where f(yit|xit, β) is Bernoulli with success probability given in (1), and f(rit|yit, xit, γ) is Bernoulli with probability of being observed as given in (2). If we consider only the data at time t, then our observed data likelihood would be

f(yit|xit, β) πit

if Yit were observed, and would be

Σyit=0,1 f(yit|xit, β) (1 − πit)

if Yit were missing.
The pseudo-likelihood, then, which treats the observations at different times as independent, is

PLind(β, γ) = Πi=1..n Πt=1..T [f(yit|xit, β) πit]^rit [Σyit=0,1 f(yit|xit, β)(1 − πit)]^(1 − rit),

where ri1 ≡ 1 since all subjects are observed at baseline.
This pseudo-likelihood is simply a product of terms at each measurement occasion: when an observation is present, the Bernoulli probability function f(yit|xit, β) is multiplied by the probability of being observed (πit), and when the observation is missing, the product of f(yit|xit, β) and the missingness probability (1 − πit) is summed over the range of the possible values of Yit. Note that these marginal distributions are not a function of the association parameter α.
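The construction just described can be made concrete with a minimal sketch of one subject-time contribution to the independence pseudo-likelihood. The logistic forms and the specific linear predictors below are illustrative assumptions, not the paper's fitted models: an observed outcome contributes f(y|x)·π, a missing outcome contributes the sum of f(y|x)·(1 − π) over both possible values of y.

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def pl_contribution(y, r, x, beta, gamma):
    """One (subject, time) term of the independence pseudo-likelihood.

    Assumed illustrative models:
      f(y|x, beta): Bernoulli with success probability expit(beta0 + beta1*x)
      pi = pr(R = 1|y, x): expit(gamma0 + gamma1*y + gamma2*x)
    r = 1: term is f(y|x)*pi evaluated at the observed y.
    r = 0: term sums f(y|x)*(1 - pi) over the unobserved y in {0, 1}.
    """
    def f(yv):
        p = expit(beta[0] + beta[1] * x)
        return p if yv == 1 else 1.0 - p

    def pi(yv):
        return expit(gamma[0] + gamma[1] * yv + gamma[2] * x)

    if r == 1:
        return f(y) * pi(y)
    return sum(f(yv) * (1.0 - pi(yv)) for yv in (0, 1))

# Sanity check: the two observed-outcome terms plus the missing-outcome term
# exhaust the probability space for (Y, R) at one occasion, so they sum to 1.
beta, gamma = (0.1, 0.3), (0.2, 1.0, -0.4)
total = (pl_contribution(0, 1, 1.0, beta, gamma)
         + pl_contribution(1, 1, 1.0, beta, gamma)
         + pl_contribution(None, 0, 1.0, beta, gamma))
assert abs(total - 1.0) < 1e-12
```

Because γ1 multiplies the possibly missing y inside π, the missing-outcome term does not factor out of the likelihood, which is exactly what makes the mechanism non-ignorable.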
The maximum pseudo-likelihood estimate of Troxel et al.  under independence maximizes the log pseudo-likelihood, which can be obtained by setting the first derivative of the log pseudo-likelihood, i.e., the pseudo-score vector,

Sind(β, γ) = Σi=1..n Si,ind(β, γ),

where Si,ind(β, γ) is subject i's contribution to the derivative of the log pseudo-likelihood,
equal to 0 and solving for (β̂ind, γ̂ind). Using method of moments ideas, the pseudo-likelihood estimator (β̂ind, γ̂ind) can be shown to be consistent and asymptotically normal if the marginal bivariate distribution f(yit, rit|xit, β, γ) is correctly specified. The maximum pseudo-likelihood estimate can be obtained using a Newton-Raphson algorithm, or the same EM-algorithm  that would be used if the (Yit, Rit) pairs were truly independent. Finally, we note that the negative second derivative of the log pseudo-likelihood will not provide a consistent estimator of the asymptotic variance; instead, the so-called “robust” or “sandwich” variance estimator can be used .
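A generic sandwich calculation of the kind referred to here can be sketched in a few lines; the function name and array layout below are our own illustrative choices, not the paper's notation. The "bread" is the summed negative derivative of the pseudo-score, the "meat" is the outer product of per-subject scores.

```python
import numpy as np

def sandwich_variance(scores, bread):
    """Robust "sandwich" variance estimate A^{-1} B A^{-T}.

    scores: (n, p) array of per-subject score vectors s_i.
    bread:  (p, p) summed negative derivative of the pseudo-score (A).
    B = sum_i s_i s_i' is formed from the per-subject scores (the "meat").
    """
    meat = scores.T @ scores          # B = sum_i s_i s_i'
    ainv = np.linalg.inv(bread)
    return ainv @ meat @ ainv.T

# Toy check: with bread = identity, the sandwich reduces to the meat alone.
s = np.array([[1.0, 0.0], [0.0, 2.0]])
v = sandwich_variance(s, np.eye(2))
assert np.allclose(v, s.T @ s)
```

When the working independence assumption is wrong, the bread (naive information) alone understates or overstates the variance; the meat corrects it using the empirical variability of the per-subject scores.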
The pseudo-likelihood of Fitzmaurice et al.  under the protective assumption is a product of (T − 1) simple conditional distributions and the marginal distribution of the outcomes at time 1. Recall that we assume Yi1 is observed for all subjects in the dataset, as in the AIDS study. The marginal distribution at time 1 for all subjects is the product of Bernoulli distributions over the n subjects, denoted by

L1(β) = Πi=1..n f(yi1|xi1, β).   (3)
Note that since no data are missing at time 1, one could obtain a consistent, albeit inefficient, estimate of β (excluding time effects or interactions with time) from (3).
Next, consider the conditional probability of the outcome at time 1 given the outcome at time t (t > 1), and given that Yit is observed,

f(yi1|yit, xit, rit = 1, β, α, γ).

Now suppose the conditional probability pr(Rit = 1|yi1, yit, xit, γ) does not depend on yi1, i.e.,

pr(Rit = 1|yi1, yit, xit, γ) = pr(Rit = 1|yit, xit, γ).   (4)

This implies that the probability of being missing at a time point can be predicted by all (or some combination) of the data at that time. Under (4),

f(yi1|yit, xit, rit = 1, β, α, γ) = f(yi1|yit, xit, β, α).   (5)
This implies that the conditional distribution of Yi1 given (yit, xit) for those observed at time t (Rit = 1), equals the population conditional distribution, f(yi1|yit, xit, β, α), which is a function of the parameters of interest β (as well as α). Since yit will be observed for all subjects with Rit = 1, one can get a consistent estimate of (β, α) by maximizing the following pseudo-likelihood,

PLprot(β, α) = Πi=1..n f(yi1|xi1, β) Πt=2..T f(yi1|yit, xit, β, α)^rit,
which includes all subjects at time 1 and f(yi1|yit, xit, β, α) when yit is observed. The marginal parameters are identified using arguments analogous to those presented by Brown for the normal case ; the means at the baseline assessment are clearly identified by the complete data at time 1, and subsequent means and correlation parameters are identified using combinations of the marginal distribution at time 1 and the conditional distributions involving the later assessments.
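The conditional distributions that drive the protective pseudo-likelihood can be illustrated with a bivariate Bernoulli parameterized by two marginal means and a correlation. This is one standard parameterization chosen for illustration; the paper's exact bivariate model may differ.

```python
import math

def cond_y1_given_yt(p1, pt, rho, yt):
    """pr(Y1 = 1 | Yt = yt) under a bivariate Bernoulli with marginal means
    p1 and pt and correlation rho -- an illustrative stand-in for the
    conditional distribution f(y_i1 | y_it, x_it; beta, alpha).
    """
    # Joint probability pr(Y1 = 1, Yt = 1) under this parameterization.
    p11 = p1 * pt + rho * math.sqrt(p1 * (1 - p1) * pt * (1 - pt))
    p10 = p1 - p11                      # pr(Y1 = 1, Yt = 0)
    return p11 / pt if yt == 1 else p10 / (1 - pt)

# With rho = 0, Y1 and Yt are independent: the conditional is just p1.
assert math.isclose(cond_y1_given_yt(0.6, 0.4, 0.0, 1), 0.6)
# Positive correlation pulls pr(Y1 = 1 | Yt = 1) above the marginal mean.
assert cond_y1_given_yt(0.6, 0.4, 0.3, 1) > 0.6
```

This also makes visible why the protective estimator struggles when the correlation is near zero: the conditional distribution then barely depends on yt, so the terms for t > 1 carry almost no information about the later means.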
The maximum pseudo-likelihood can again be obtained by setting the pseudo-score vector,

Sprot(β, α) = Σi=1..n Si,prot(β, α),

where Si,prot(β, α) is subject i's contribution to the derivative of the log pseudo-likelihood,
equal to 0 and solving for (β̂prot, α̂prot). Using method of moments ideas, the pseudo-likelihood estimator (β̂prot, α̂prot) can be shown to be consistent and asymptotically normal if f(yi1|yit, xit, β, α) is correctly specified and (5) holds. The maximum pseudo-likelihood estimate can again be obtained using a Newton-Raphson algorithm. Again, the so-called “sandwich” variance estimator  must be used to obtain a consistent estimate of the variance.
The two approaches described above require different but in each case non-trivial assumptions related to the missing data mechanism. The independence approach of Troxel et al. requires correct specification of a missingness model in which the missingness probabilities at a given time may depend only on outcomes at that time. The protective approach of Fitzmaurice et al. requires that the missingness probabilities at a given time may depend on outcomes at that time but must not depend on outcomes observed at baseline, an assumption that obviates the need to specify the missing data model directly. While the protective assumption in (4) is not the most general non-ignorable missing data mechanism, it is still non-ignorable due to dependence of Rit on the outcomes at time t. This assumption is often quite reasonable, since for many nonignorable missing data mechanisms, missingness depends primarily on the unobserved data at time t, Yit, and possibly the covariates xit, but, conditional on Yit (and xit), missingness is independent of Yi1. However, even though the missing data mechanism does not have to be estimated, this protective assumption is, in a sense, stronger than the missingness assumptions made in the pseudo-likelihood proposed by Troxel et al. The Troxel et al. approach does not make any assumptions about the missingness probabilities at one time given data at that time and another time, but only makes assumptions about the missingness probability at one time given data at that time. On the other hand, the pseudo-likelihood proposed by Troxel et al. does require correct specification of the model for the missingness probability given in (2), which is not required by the Fitzmaurice et al. approach.
There are numerous scenarios in which both models would appear to be reasonable, for example, repeated assessments of highly correlated indicators of symptom occurrence such as tingling of the hands and feet in cancer patients receiving chemotherapy. One can plausibly hypothesize that the current occurrence of the symptom almost entirely determines the patient’s ability to attend the clinic and thus have the symptom measured; one can equally plausibly be confident of modeling correctly (or nearly correctly) the predictors of missingness, including the symptom value itself but also numerous other known complicating factors such as patient age, presence of family members to assist, treatment with anthracycline-based chemotherapy, etc.
Of greater interest are scenarios in which one set of assumptions holds but not the other. Consider, for example, a setting in which the likelihood of missingness depends on both the current assessment and the baseline value. This is plausible in the setting of quality of life where difficulty coping at baseline is often indicative of later missingness, but difficulty coping at later assessments also increases the likelihood of being missing; in addition, repeated assessments of quality of life tend to be highly variable and poorly correlated. In this setting, the protective assumption is violated, and the low correlation means that the protective estimator will not be robust to the violation; while the missingness model using the independence approach will also be misspecified by not including the baseline values, it will still correctly capture the nonignorability and thus suffer minimal bias. On the other hand, there are many scenarios in which the protective assumption is satisfied, but the missingness model in the independence approach is so badly misspecified that the subsequent estimates will be biased. In the coping example above, we might specify a model in which those with difficulty coping are more likely to be missing. In reality, however, it may be that both those with very poor coping and very high levels of coping may be equally likely to be missing, the former because they can’t manage their disease and the latter because they see no need for follow-up care. Subjects with missing values will be a mixture of these two populations; the monotone model for missingness that links difficulty coping with higher rates of missingness will fail to capture the higher rates of missingness among a subset of those who are coping well, resulting in biased estimates.
As shown in the previous subsections, under the protective assumption given in (4), and assuming the missingness probability given in (2) is correctly modeled, both the estimate β̂ind from Troxel et al.’s pseudo-likelihood and the estimate β̂prot from Fitzmaurice et al.’s pseudo-likelihood will be consistent. Compared to the assumptions required for consistency of the full likelihood, namely correct specification of the full joint distribution of the outcome Yit and the missingness indicators Rit, these are still weak assumptions. Both pseudo-likelihood approaches are particularly attractive in this setting since the full likelihood can be far more complicated algebraically. In addition, ML estimation is computationally very demanding for T > 4, due to the additional nuisance parameters induced by the specification of the full joint distribution mentioned above. Note, however, that because the association among the repeated measures is not used at all in the pseudo-likelihood of Troxel et al., their estimate can be very inefficient for estimating time trends. Further, when using Fitzmaurice et al.’s pseudo-likelihood, if the repeated measures were truly independent, then their pseudo-likelihood would simply reduce to the likelihood at time 1 (since the estimated correlations would be close to 0, and the pseudo-likelihood would mainly be a function of data at time 1). In this case, β̂prot would be inefficient for estimating both time-stationary effects, since it only uses data at time 1, and time trends; in fact, there may be very little information in the pseudo-likelihood for estimating time trends in this case. In the simulations given in Section 5, with high correlations, we have found β̂ind to be inefficient for estimating time trends and β̂prot to be inefficient for estimating time-stationary effects. This suggests formulating a new estimator of the marginal regression parameters that is the optimal combination of β̂ind and β̂prot.
First, note that under the protective assumption, the joint asymptotic distribution of (β̂ind, γ̂ind, β̂prot, α̂prot) is multivariate normal with mean vector (β, γ, β, α) and variance-covariance matrix of sandwich form,

V = A^(−1) B A^(−1)′,

where A is the block-diagonal matrix of the expected derivatives of the two pseudo-score vectors Si,ind(β, γ) and Si,prot(β, α), and B is the covariance matrix of the stacked per-subject pseudo-scores [S′i,ind(β, γ), S′i,prot(β, α)]′.
The variance estimate is obtained by replacing (β, γ) in Si,ind(β, γ) with (β̂ind, γ̂ind) and (β, α) in Si,prot(β, α) with (β̂prot, α̂prot). We denote the submatrix of this estimated variance-covariance matrix corresponding to Vβ = Var(β̂ind, β̂prot) by V̂β.
Under the protective assumption, both estimators are consistent for β, so

E[(β̂′ind, β̂′prot)′] ≈ Xβ, with X = [IJ, IJ]′,

where IJ is a (J × J) identity matrix, and J is the dimension of xi.
We propose forming a new estimator that is the asymptotic minimum variance linear combination of β̂ind and β̂prot [25, 26]. The new estimator is basically a weighted least squares estimate, where the weight matrix is the inverse of V̂β, the estimate of Vβ = Var(β̂ind, β̂prot). In particular, with X = [IJ, IJ]′, our proposed estimate is

β̂wls = (X′ V̂β^(−1) X)^(−1) X′ V̂β^(−1) (β̂′ind, β̂′prot)′,

which has asymptotic variance estimated by

V̂(β̂wls) = (X′ V̂β^(−1) X)^(−1).
The estimate β̂wls has the minimum asymptotic variance of any linear combination of β̂ind and β̂prot, including both β̂ind and β̂prot. Thus, with large n, β̂wls will have smaller variance than both β̂ind and β̂prot. The decrease in variance of β̂wls with respect to β̂ind and β̂prot will depend on the configuration of the data, which we explore in simulations in Section 5.
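The optimal combination itself is only a few lines of linear algebra. The sketch below assumes the stacked design X = [I_J, I_J]′ and some estimated covariance matrix for the stacked estimators; the function name is our own.

```python
import numpy as np

def wls_combine(b_ind, b_prot, v):
    """Minimum-variance linear combination of two estimators of the same beta.

    theta_hat = (b_ind', b_prot')' has estimated covariance v (2J x 2J).
    With design X = [I_J, I_J]', the WLS estimate is
        b_wls = (X' v^{-1} X)^{-1} X' v^{-1} theta_hat,
    with estimated variance (X' v^{-1} X)^{-1}.
    """
    j = len(b_ind)
    x = np.vstack([np.eye(j), np.eye(j)])
    vinv = np.linalg.inv(v)
    var_wls = np.linalg.inv(x.T @ vinv @ x)
    theta = np.concatenate([b_ind, b_prot])
    return var_wls @ x.T @ vinv @ theta, var_wls

# Scalar sanity check: two uncorrelated estimates with variances 1 and 3
# receive the classical inverse-variance weights 3/4 and 1/4.
b, var = wls_combine(np.array([2.0]), np.array([6.0]), np.diag([1.0, 3.0]))
assert np.isclose(b[0], 0.75 * 2.0 + 0.25 * 6.0)
assert np.isclose(var[0, 0], 0.75)
```

Because the weight matrix is a full 2J × 2J inverse covariance, each element of the combined estimate borrows strength from every element of both input vectors; this is why, as noted in the data analysis, the WLS estimate of a single coefficient need not lie between the two input estimates of that coefficient.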
We present an analysis of the CD4 count data from the AIDS clinical trials described in the Introduction. The parameters are estimated using the protective pseudo-likelihood, the non-ignorable pseudo-likelihood under independence, WLS, and generalized estimating equations (GEE)  under ignorable assumptions, described below. The two AIDS clinical trials were randomized phase III double-blind trials, designed to compare two therapeutic treatments, zidovudine (AZT) and didanosine (ddI); the dataset contains records on n = 431 patients diagnosed with AIDS or AIDS-related complex. The response of interest at time (week) t = 0, 1, …, 5 is the patient’s CD4 count sufficiency, with Yit = 1 if the CD4 count exceeds 200 and 0 otherwise. As discussed in the Introduction and given in Table I, CD4 count data are missing for 11% to 44% of patients at the five follow-up occasions; moreover, the missing data patterns are non-monotone.
To describe the treatment effect, we form the indicator variable AZTi, which equals 1 if patient i was randomized to AZT and 0 if randomized to ddI.
Because of the stratified randomization, to control for baseline age we define the indicator variable
We model the logit of pit = pr(Yit = 1|xit), the probability that CD4 count > 200 at a given time, as a function of treatment, time and baseline age,

logit(pit) = β0 + β1AGEi + β2t + β3(AZTi × t),
for t = 0, 1, …, 5. Note the exclusion of a main effect of treatment (AZTi). The main effect of AZT corresponds to the baseline (t = 0) treatment effect, and, because of randomization, there is no treatment effect at baseline, i.e., the main effect of AZT equals 0. For the ignorable GEE approach, we used the glimmix macro in SAS, which uses a linearization approach and allows incorporation of a random effect to accommodate the assumption of MAR data . We assume a compound symmetry correlation structure for simplicity; results were robust to various other choices.
Recall that the protective pseudo-likelihood requires specification of the correlations, ρ1t. We estimated the parameters under both AR(1) and exchangeable correlations; the results were so similar that for simplicity, we present results under an exchangeable assumption. Further, recall that for the non-ignorable pseudo-likelihood under independence, we must model the probability of being observed at each time point. It was conjectured that CD4 count is nonignorably missing since sicker patients may not come in for a further GP visit, e.g., sicker patients may have been hospitalized. We considered the following missing data mechanism:

logit{pr(Rit = 1|yit, xit, γ)} = γ0 + γ1yit + γ2t + γ3AZTi + γ4(yit × t) + γ5(yit × AZTi),   (6)
for t > 0. Using the pseudo-likelihood approach in (6), both the Yit × AZTi and Yit × t interactions are significant at the 0.1% level. In general, the non-ignorable models suggest that subjects with normal CD4 counts and on AZT are less likely to be seen over time.
Table II displays estimates and standard errors for the parameters β for all models and methods. Note that the WLS estimator of the kth element of β is not just a weighted combination of the kth elements of the independence and protective estimators, β̂ind,k and β̂prot,k, but rather a weighted combination of the full vectors β̂ind and β̂prot, since the weight matrix is not diagonal; thus the WLS estimates do not always fall between the protective and pseudo-likelihood under independence estimates. From Table II, we see that the estimates from the WLS, protective approach, and pseudo-likelihood approach under independence are all similar, but WLS has smaller standard errors than the other two non-ignorable approaches. For example, for the time-stationary age effect, the estimated relative efficiency (ratio of estimated variances) is 44% for protective versus WLS and 87% for the pseudo-likelihood under independence versus WLS. For the AZT*TIME interaction, the estimated relative efficiency (ratio of estimated variances) is 46% for protective versus WLS and 15% for the pseudo-likelihood under independence versus WLS. The estimated exchangeable correlation is 0.54, indicating high correlation among the repeated responses, and we show in the simulation section that very substantial efficiency gains over the protective and pseudo-likelihood under independence approaches can be made using the WLS estimator when the correlation is high. This highlights the efficiency that can be gained using the WLS approach. However, this is just one example. To examine the finite sample properties of these approaches, we conducted a simulation study, presented in the next section.
From Table II, we see that, among the non-ignorable approaches, the estimates are similar, except for the AGE effect using the protective estimate, which is over 50% smaller (and, as discussed above, also over 50% more variable). Comparing GEE to the non-ignorable approaches, we see that the GEE estimate of the time by treatment interaction is much smaller than the estimate using the non-ignorable approaches. This also highlights how different assumptions about the missing data mechanism can produce discernibly different, and possibly conflicting, estimates of effects.
We compared the WLS estimator, the protective estimator, the pseudo-likelihood estimator under independence, the ML estimator using the correct non-ignorable missingness mechanism, and GEE under an ignorable missing data mechanism. To ensure feasibility of the simulation study, we restricted the number of occasions to T = 3 and considered a simple two-group study design configuration (e.g., evenly randomized between active treatment and placebo).
Let xi = 0, 1 indicate treatment group membership. The binary outcomes, denoted by (Yi1, Yi2, Yi3), are assumed to follow a Bahadur model , with joint probabilities

f(yi1, yi2, yi3|xi, β, α) = [Πt=1..3 pit^yit (1 − pit)^(1 − yit)] × [1 + Σs<t ρst eis eit + ρ123 ei1 ei2 ei3],

where eit = (yit − pit)/√{pit(1 − pit)} and the marginal means follow the logistic regression

logit(pit) = β0 + βx xi + βt t,

for t = 1, 2, 3. We collect the correlation parameters in α = [ρ12, ρ13, ρ23, ρ123]′. For the simulation study, we choose β0 = −0.25, βx = 0.5, and βt = 0.20. A variety of different correlation structures were examined and the same overall pattern of results was obtained. For reasons of parsimony, we present the results from an exchangeable correlation with ρist = ρ.
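For T = 3 the Bahadur probabilities are easy to compute directly. The sketch below uses the standard Bahadur representation (standardized residuals e_t with second- and third-order correlation terms); the parameter values are small illustrative choices that keep the pmf nonnegative, a known constraint of the Bahadur model.

```python
import itertools, math

def bahadur_pmf(y, p, rho2, rho3):
    """Bahadur joint pmf for three binary outcomes.

    y:    (y1, y2, y3) in {0, 1}^3
    p:    marginal means (p1, p2, p3)
    rho2: pairwise correlations {(0, 1): r12, (0, 2): r13, (1, 2): r23}
    rho3: third-order term rho_123
    """
    base = math.prod(p[t] ** y[t] * (1 - p[t]) ** (1 - y[t]) for t in range(3))
    e = [(y[t] - p[t]) / math.sqrt(p[t] * (1 - p[t])) for t in range(3)]
    corr = 1.0 + sum(r * e[s] * e[t] for (s, t), r in rho2.items())
    corr += rho3 * e[0] * e[1] * e[2]
    return base * corr

# Illustrative exchangeable structure with rho = 0.1.
p = (0.45, 0.5, 0.55)
rho2 = {(0, 1): 0.1, (0, 2): 0.1, (1, 2): 0.1}
total = sum(bahadur_pmf(y, p, rho2, 0.0)
            for y in itertools.product((0, 1), repeat=3))
# The correction terms have mean zero, so the 8 probabilities sum to 1.
assert abs(total - 1.0) < 1e-12
```

The same identity (correction terms averaging to zero under the independence base measure) is what makes the Bahadur model convenient for simulating correlated binary outcomes with specified marginal means.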
We performed simulations with the following true non-ignorable missingness mechanism,

logit{pr(Rit = 1|yit, xi, γ)} = γ0 + γ1yit + γ2t + γ3xi,   (7)

for t > 1, and we let the missingness indicators be independent at the three occasions. For the simulation study, the true model parameters in (7) are γ0 = −0.5, γ1 = 1.0, γ2 = 0.2, and γ3 = 1.0. Here, missingness at a given occasion depends upon group membership, time, and the possibly missing outcome at that occasion. In this mechanism, non-monotone missingness can occur in that an outcome can be missing at time s (Ris = 0), but observed at a future time t (Rit = 1 for t > s). Given the true γ parameters, the percentage missing is approximately 34% at time 2 and 30% at time 3. By independence of the missingness indicators at the three occasions, the full distribution f(ri|yi, xi, γ) is

f(ri|yi, xi, γ) = Πt=2..3 πit^rit (1 − πit)^(1 − rit),

where πit = pr(Rit = 1|yit, xi, γ).
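The missingness mechanism of the simulation study can be sketched as follows. The assignment of γ1, γ2, γ3 to the outcome, time, and group terms in the linear predictor is our assumption about the lost display; the γ values are the ones stated in the text.

```python
import math, random

def simulate_missingness(y, x, gamma, rng):
    """Simulate non-monotone, non-ignorable missingness indicators R_2, R_3.

    Assumed logistic form (coefficient-to-covariate mapping is an assumption):
      logit pr(R_t = 1 | y_t, x) = g0 + g1*y_t + g2*t + g3*x,  t = 2, 3,
    with indicators independent across occasions, so an outcome can be
    missing at time 2 yet observed at time 3 (non-monotone missingness).
    """
    g0, g1, g2, g3 = gamma
    r = []
    for t in (2, 3):
        pi = 1.0 / (1.0 + math.exp(-(g0 + g1 * y[t - 1] + g2 * t + g3 * x)))
        r.append(1 if rng.random() < pi else 0)
    return r

rng = random.Random(1)
gamma = (-0.5, 1.0, 0.2, 1.0)        # values stated in the simulation study
draws = [simulate_missingness((1, 0, 1), 1, gamma, rng) for _ in range(2000)]
# Non-monotone patterns (missing at time 2, observed at time 3) do occur.
assert any(r == [0, 1] for r in draws)
```

Because γ1 > 0, subjects with yit = 1 are more likely to be observed at occasion t, so missingness depends on the possibly unobserved outcome, i.e., the mechanism is non-ignorable.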
In the simulations reported in Table III, all of the non-ignorable methods are approximately unbiased, whereas GEE is clearly biased. The main interest of this simulation is to explore the efficiency gains of WLS over the protective estimator and the pseudo-likelihood estimator under independence. We provide both the average of the estimated variance and the empirical simulation variance; in general they match closely, except for the protective estimator when the correlation is low, in which case the variance is poorly estimated. The WLS estimator displays considerable gains in efficiency over the protective estimator for both βt and βx at all correlations and sample sizes. In fact, the protective estimator is never more than 65% efficient relative to the WLS estimator, and it is usually considerably less efficient. When the correlation is weak (e.g., ρ = 0.1), the pseudo-likelihood under independence is nearly as efficient as the MLE (and thus also the WLS), since in this case the independence working assumption is nearly correct. The variance of the WLS estimator is always smaller than that of the pseudo-likelihood under independence, although the advantage is modest when the correlation is low. In general, for the group effect, the pseudo-likelihood under independence is at least 90% as efficient as WLS. For the time effect, however, it can be substantially less efficient when the correlation is moderate to high: when n = 450 and ρ = .25, the pseudo-likelihood under independence is only 64% as efficient as WLS for the time effect, and when n = 450 and ρ = .4, only 48% as efficient. Comparing WLS to maximum likelihood, WLS is at least 90% efficient in every configuration except for the group effect when n = 300 and ρ = .4, where it is 85% efficient.
We have proposed a weighted least squares (WLS) estimator that is an optimal combination of the pseudo-likelihood estimator under independence and the protective estimator. The WLS estimator is appropriate for the estimation of marginal models for longitudinal binary data with non-monotone, non-ignorably missing outcomes. Unlike the full likelihood, WLS requires specification only of the bivariate distribution of the data at time 1 given all future times on the same subject (for the protective estimator) and the marginal missingness model at each time point. Further, whereas maximum likelihood requires the full likelihood to be correctly specified in order to obtain consistent estimates, the pseudo-likelihood estimates are consistent as long as the protective assumption holds and the marginal missingness model is correctly specified. We have discussed above some of the scenarios in which the protective assumption should hold and the missingness model can be specified with a fair degree of confidence. In such cases, the WLS estimator has the benefit of added efficiency compared to both of its components. In some scenarios, however, the analyst may be unwilling to adopt additional assumptions in order to achieve this efficiency. As many other authors have noted, sensitivity analyses are a crucial component of any analysis involving potentially non-ignorable missing data [6, 31, 32, 33]. Comparisons of results obtained using models such as those described here, which allow for non-ignorable missing data, with models assuming MAR data are extremely instructive; in addition, comparisons of results from models making different assumptions about the mechanisms of non-ignorability, as in the various approaches discussed here, can help elucidate the missing data mechanism.
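The optimal combination underlying the WLS estimator is a generalized least squares pooling of the two component estimators. A minimal sketch in Python follows (the function name and toy inputs are hypothetical; in practice, cov_joint would be the estimated joint covariance matrix of the stacked component estimators):

```python
import numpy as np

def wls_combine(beta_ind, beta_prot, cov_joint):
    """Optimally combine two consistent estimators of the same p-vector
    via generalized (weighted) least squares.

    beta_ind, beta_prot : (p,) estimates from the two pseudo-likelihood fits
    cov_joint           : (2p, 2p) joint covariance of the stacked estimators
                          (block-diagonal if the two are uncorrelated)
    Returns the combined estimate and its covariance.
    """
    p = len(beta_ind)
    stacked = np.concatenate([beta_ind, beta_prot])   # (2p,) stacked estimates
    A = np.vstack([np.eye(p), np.eye(p)])             # (2p, p): both blocks estimate beta
    W = np.linalg.inv(cov_joint)                      # GLS weight matrix
    cov_wls = np.linalg.inv(A.T @ W @ A)              # (p, p) covariance of combination
    beta_wls = cov_wls @ (A.T @ W @ stacked)
    return beta_wls, cov_wls

# Toy scalar check: independent components with equal variances
# reduce to inverse-variance weighting, i.e., the simple average.
b, V = wls_combine(np.array([1.0]), np.array([3.0]), np.diag([1.0, 1.0]))
# b[0] == 2.0, V[0, 0] == 0.5
```

In the scalar case with uncorrelated components, this reproduces the familiar inverse-variance-weighted average; with a full (non-block-diagonal) cov_joint, it accounts for the correlation between the two estimators, which is what makes the combination optimal.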
Because of the broad range of possible missing data configurations and underlying probability distributions generating the data, it is difficult to draw definitive conclusions from simulation studies, and we can make only general suggestions. Our simulation studies do show, however, that one can take two relatively inefficient estimators (the protective estimator and the pseudo-likelihood under independence) and combine them into a highly efficient WLS estimator.
The authors are grateful for constructive comments from two reviewers, and for the support provided by the following grants from the US National Institutes of Health: AI 60373, GM 29745, CA 74015, CA 70101, MH 054693, and CA 68484. Andrea Troxel gratefully acknowledges support from the Columbia University Institute for Scholars at Reid Hall, Paris. Geert Molenberghs gratefully acknowledges financial support from the Belgian Science Policy IAP research network #P6/03.