In longitudinal clinical trials, a common objective is to compare the rates of change in an outcome variable between two treatment groups. The generalized estimating equation (GEE) method has been widely used to test whether the rates of change differ significantly between treatment groups because of its robustness to misspecification of the true correlation structure and to randomly missing data. The GEE sample size formula for repeated outcomes rests on the assumption of missing completely at random and on a large-sample approximation. A simulation study is conducted to investigate the performance of the GEE sample size formula with small sample sizes, the damped exponential family of correlation structures and non-ignorable missing data.
In controlled clinical trials, subjects are often evaluated at baseline and at intervals across a treatment period. A common objective of longitudinal clinical trials is to investigate whether the rates of change in an outcome variable differ significantly between two treatment groups. The generalized estimating equation (GEE) method has been widely used to compare the rates of change between treatment arms because of its robustness to misspecification of the true correlation structure and to randomly missing data.
Sample size calculation is an important issue in the design of longitudinal clinical trials that compare the rates of change between two treatment groups, and several authors present sample size estimation for repeated measurements studies using GEE. Liu and Liang propose a general sample size formula for studies with correlated observations using GEE; it applies to both clustered and longitudinal data. Their method is based on the generalized score test, and the resulting score test statistic asymptotically follows a non-central χ2 distribution under the alternative hypothesis; no closed-form sample size formula is available except in some special cases. Rochon proposes a sample size formula using a non-central version of the Wald χ2 test statistic that accommodates monotone missing data patterns. Jung and Ahn derive an explicit closed-form sample size formula using GEE for comparing the rates of change between two treatment groups; their formula handles both independent and monotone missing data patterns and general correlation structures.
The sample size formula of Jung and Ahn is based on asymptotic theory and the assumption of missing completely at random (MCAR); see Rubin. In this paper, we investigate the performance of the sample size formula with small sample sizes, the damped exponential family of correlation structures and non-ignorable missing data.
For subject i (1 ≤ i ≤ n), let yij denote a continuous response variable at measurement time tij (j=1,…,Ki). We consider the linear model

yij = β1 + β2ri + β3tij + β4ritij + εij,     (1)

where serial correlation may exist among the error terms εi1, …, εiKi, which have mean zero and variance σ2. Here, ri is the treatment indicator, taking the value 0 for the control group and 1 for the treatment group, so that β4 represents the difference in the rates of change between the two groups. Observations are assumed to be MCAR.
Suppose that each subject is scheduled to be assessed at K time points t1, …, tK. Let δij = 1 if subject i is observed at tj and δij = 0 otherwise. Let σ2 = var(εij), and let ρjj′ = corr(εij, εij′) for j ≠ j′ with ρjj = 1. Let pj = E(δij) be the proportion of subjects with an assessment at tj, and pjj′ = E(δijδij′) the proportion of subjects with assessments at both tj and tj′ (pjj = pj).
With two-sided significance level α and power 1 − γ, the required sample size to test H0: β4 = 0 versus Ha: β4 = β40 is given by

n = (z1−α/2 + z1−γ)2 σ2 ΣjΣj′ pjj′ρjj′(tj − t̄)(tj′ − t̄) / [r̄(1 − r̄)β402 {Σj pj(tj − t̄)2}2],     (2)

where t̄ = Σj pjtj / Σj pj, r̄ = E(ri) is the allocation proportion for the treatment group and zc denotes the 100c-th percentile of the standard normal distribution. See Jung and Ahn for details.
The adequacy of the sample size formula is assessed through simulation for large effect sizes and non-ignorable missing data. We estimate the required number of subjects (n) from equation (2), and then generate 5000 simulated samples, each of n subjects, from a multivariate normal distribution with damped exponential correlations and the specified missing data patterns. We consider two missing patterns: independent missing, where pjj′ = pjpj′, and monotone missing, where pjj′ = pj′ for j < j′ (p1 ≥ ··· ≥ pK). We set the maximum number of measurements per subject at K=6, and use the following vectors for the probability of assessment at each time point:
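The two missing patterns above determine the pairwise observation probabilities pjj′ entirely from the marginal probabilities pj. A minimal sketch, illustrated with the vector P1 = (1, 0.82, 0.79, 0.76, 0.73, 0.7) reported later for the 30% dropout scenarios (the function name is ours):

```python
import numpy as np

def pairwise_probs(p, pattern):
    """Pairwise observation probabilities p_jj' from marginal probabilities p_j.

    pattern='independent': p_jj' = p_j * p_j'
    pattern='monotone'   : p_jj' = p_max(j,j')  (dropout is permanent)
    The diagonal is p_jj = p_j in both cases.
    """
    p = np.asarray(p, dtype=float)
    if pattern == "independent":
        P = np.outer(p, p)
    elif pattern == "monotone":
        # p_j is non-increasing over time, so p_max(j,j') = min(p_j, p_j')
        P = np.minimum.outer(p, p)
    else:
        raise ValueError(pattern)
    np.fill_diagonal(P, p)
    return P

P1 = [1.0, 0.82, 0.79, 0.76, 0.73, 0.70]
ind = pairwise_probs(P1, "independent")
mon = pairwise_probs(P1, "monotone")
```

Under monotone missingness every pairwise probability is at least as large as under independent missingness, which is why the two patterns yield different sample sizes in the tables below.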
Note that P1, P2 and P3 describe scenarios in which an increasing number of subjects miss visits over time, with the dropout rate at the end of the study equal to 0.3; P4 denotes no missing data. In sample size estimation, a common practice is to use n = n0/(1 − q), where n0 is the sample size estimate under no dropout and q is the expected dropout rate. Following this procedure, the final sample size estimates under P1, P2 and P3 would all be n = n0/0.7, even though different amounts of information have been lost in the three scenarios. Simply adjusting the sample size estimate by n = n0/(1 − q) is a crude and conservative method: as we demonstrate in the simulation study, the sample sizes estimated by (2) under P1, P2 and P3 are usually much smaller than the P4 estimate inflated by the factor 1/0.7.
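The naive adjustment just described is a one-line computation; a sketch (the function name is ours):

```python
import math

def naive_adjust(n0, q):
    """Inflate a complete-data sample size n0 for an expected dropout rate q,
    i.e. n = n0 / (1 - q), rounded up to a whole number of subjects."""
    return math.ceil(n0 / (1.0 - q))
```

For a 30% end-of-study dropout rate this returns ceil(n0/0.7) no matter when during follow-up the dropout occurs, whereas formula (2) uses the full assessment-probability vector and so credits the partial data that dropouts contribute.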
We estimate the sample size using formula (2) with the prespecified missing data patterns, missing probabilities, damped exponential correlation structures, σ2 = 1, type I error α = 0.05, power 1 − γ = 0.8, β40 = 0.1 and r̄ = 0.5 (equal allocation). Once the sample size (n) is estimated, we generate 5000 replicated samples of n subjects using β = (0.3, 0.3, 0.5, 0.1), with correlated measurement errors generated from the multivariate normal distribution. Note that the sample size estimate does not depend on the values of β1, β2 and β3.
We examine the effect of correlation using the damped exponential family of correlation structures proposed by Munoz et al. The correlation between two observations separated by s units of time is modeled by ρj,j+s = ρc, where c = sθ. Here, ρ is the correlation between observations separated by one unit of time and θ is a damping parameter. The damped exponential family provides a rich class of correlation structures: compound symmetry (CS) and first-order autoregressive (AR(1)) structures are obtained by setting θ = 0 and θ = 1, respectively, and the correlation structure changes from CS to AR(1) in graded steps as θ increases from 0 to 1. We investigate the effect of the correlation structure on the sample size estimate using values of θ between 0 and 1, and ρ = 0.1, 0.25 and 0.5.
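The damped exponential family and the resulting sample size calculation can be sketched as follows. The sample size function assumes the sandwich-variance form for a two-group slope comparison — our reading of the Jung and Ahn formula, not a verbatim transcription of equation (2) — and the function names are ours:

```python
import numpy as np
from statistics import NormalDist

def damped_exp_corr(K, rho, theta):
    """K x K correlation matrix with corr(e_j, e_{j+s}) = rho ** (s ** theta);
    theta = 0 gives compound symmetry, theta = 1 gives AR(1)."""
    s = np.abs(np.subtract.outer(np.arange(K), np.arange(K))).astype(float)
    return np.where(s > 0, rho ** (s ** theta), 1.0)

def gee_slope_n(beta40, t, p, P, R, sigma2=1.0, rbar=0.5, alpha=0.05, power=0.8):
    """Sample size to test H0: beta4 = 0 against beta4 = beta40, assuming
        n = (z_{1-alpha/2} + z_power)^2 * sigma2 * B / (rbar*(1-rbar)*beta40^2 * A^2),
    where tbar is the p-weighted mean time, A = sum_j p_j (t_j - tbar)^2 and
    B = sum_{j,j'} p_{jj'} rho_{jj'} (t_j - tbar)(t_{j'} - tbar)."""
    t, p = np.asarray(t, float), np.asarray(p, float)
    d = t - np.sum(p * t) / np.sum(p)        # centered measurement times
    A = np.sum(p * d ** 2)
    B = d @ (P * R) @ d                      # elementwise p_{jj'} * rho_{jj'}
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return int(np.ceil(z ** 2 * sigma2 * B / (rbar * (1 - rbar) * beta40 ** 2 * A ** 2)))
```

With K = 6 equally spaced visits and no missing data, CS gives B = (1 − ρ)A, so n falls as ρ rises, while moving θ from 0 toward 1 (AR(1)) increases B and hence n — the trends reported in Table I.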
Table I presents the sample size estimates required to attain 80% power, along with the empirical powers computed from the simulated samples under independent and monotone missing data patterns, respectively. As θ increases, the sample size increases, since the dependency decays as two measurement times get farther apart. The estimated sample size decreases as ρ increases. The empirical power obtained using the estimated sample size is generally close to the nominal value of 80%. Furthermore, for given ρ and θ, the estimated sample sizes decrease in the order from P1 to P4. Comparing the estimated sample sizes under independent and monotone missingness, we find that the difference increases with ρ. For example, under P2 and the CS correlation structure, the difference in sample sizes is 199 − 197 = 2 when ρ = 0.1, 175 − 169 = 6 when ρ = 0.25 and 135 − 124 = 11 when ρ = 0.5. Table I also shows that adjusting for missing data by n0/(1 − q) is too conservative. For example, under the monotone missing pattern with ρ = 0.25 and θ = 1, the estimated sample sizes under P1, P2 and P3 are 270, 266 and 262, respectively, whereas the naive adjustment method would require n0/0.7 subjects, a considerably larger number.
To better understand the impact of dropout on sample size and power, we conduct a simulation study with the dropout rate at the end of the study equal to 0.5. The vectors of assessment probability are defined as follows:
The comparison of Tables I and II indicates that the dropout rate has a great impact on the sample size. For a given missing pattern, ρ and θ, a higher dropout rate leads to a much larger sample size, and the differences between independent and monotone missing patterns are more pronounced under the higher dropout rate. For example, under the CS correlation structure, the difference in sample sizes is 240 − 234 = 6 when ρ = 0.1, 220 − 206 = 14 when ρ = 0.25 and 187 − 159 = 28 when ρ = 0.5.
The GEE method is based on a large-sample approximation, and a large value of β40 leads to a small sample size estimate. We therefore investigate the performance of the sample size formula when the estimated sample size is small. We set β40 = 0.2, and estimate the required sample size n to achieve 80% power using the assessment probability vectors given in (3) and the damped exponential correlation structures, under independent and monotone missing data patterns. Five thousand samples of repeated measurements are generated. Table III reports the estimated sample sizes and the corresponding empirical powers calculated from the simulated samples under independent and monotone missing data patterns. According to formula (2), the sample size estimate is proportional to 1/β402; thus, the estimated sample sizes in Table III are about one-fourth of those in Table I. The empirical powers are generally close to the nominal 80% power.
The sample size formula given in equation (2) is constructed under the MCAR assumption, which has been maintained in Sections 3.1 and 3.2. In this section, we investigate the performance of the sample size formula under non-ignorable missingness, where the missing probability depends on unobserved outcomes. Specifically, we assume that a higher outcome value leads to a lower chance of being observed. Let μ0j = β1 + β3tj, j = 1, …, K, be the mean response at time j for the control arm. The probability of follow-up for individual i at time j is

qij = pjvij,     (5)

with vij = 1 − a(Φ(yij; μ0j, 1) − 0.5). Here Φ(·; μ, s2) is the normal cumulative distribution function with mean μ and variance s2, pj is the overall assessment probability at time j and a is a tuning parameter controlling the impact of the outcome on follow-up. For example, when a = 0, we have qij = pj for i = 1, …, n. Model (5) clearly shows the dependence of qij on yij: when a > 0, larger values of yij lead to smaller values of vij and hence a lower follow-up rate qij. For the control group, yij centers around μ0j, so vij fluctuates around 1. On the other hand, with β2 = 0.3 and β4 = 0.1, the treatment group on average has outcomes greater than μ0j, so on average vij < 1 for a positive value of a. As a result, the treatment group tends to have a lower follow-up rate. We consider four values of the tuning parameter, a = 0.5, 0.75, 1 and 1.25, and the same four vectors of overall follow-up probability specified in (3).
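The mechanism just described couples each subject's follow-up probability to the current outcome. A minimal sketch under the stated model qij = pjvij (the function name and the clipping safeguard are ours):

```python
from statistics import NormalDist

def followup_prob(y, mu0, p_j, a):
    """Follow-up probability q_ij = p_j * v_ij under model (5), with
    v_ij = 1 - a * (Phi(y; mu0, 1) - 0.5): outcomes above the control-arm
    mean mu0 push v_ij below 1 and reduce the chance of being observed."""
    v = 1.0 - a * (NormalDist(mu=mu0, sigma=1.0).cdf(y) - 0.5)
    # clip to a valid probability (our safeguard; the paper does not state one)
    return min(1.0, max(0.0, p_j * v))
```

At y = μ0j the normal CDF equals 0.5, so vij = 1 and qij = pj, recovering MCAR; with a > 0, treated subjects, whose means exceed μ0j, have vij below 1 on average, which produces the group disparity shown in Figure 1.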
We generate the simulated data by imposing non-ignorable missingness and evaluate the performance of the sample size formula in terms of empirical power when the MCAR assumption is violated. We compute the empirical power as the proportion of samples rejecting H0: β4 = 0 among the 5000 simulated samples. Table IV gives the sample size estimates and empirical powers under monotone missing data patterns; the results under independent missing data patterns are similar, and the corresponding table is omitted. Under both monotone and independent missing data patterns, we observe no obvious trend in the effects of θ, ρ or the assessment probability. Table IV shows the impact of non-ignorable missingness: many of the empirical powers fall below the nominal value of 0.8, especially for a = 1.25, and larger values of a lead to greater loss in power. Note that the missing mechanism in (5) produces higher dropout among subjects with above-average measurements; such a mechanism causes an underestimation of β40 and an empirical power below the nominal level. Figure 1 summarizes the assessment probabilities under non-ignorable missingness with the independent missing pattern, averaged over ρ and θ, for the baseline assessment probability P1 = (1, 0.82, 0.79, 0.76, 0.73, 0.7). Under the assumed missing mechanism (5), the treatment group suffers higher dropout than the control group, so each value of a corresponds to two curves: the upper curve for the control group and the lower curve for the treatment group. This group disparity grows as a increases.
The GEE method has been widely used for the analysis of longitudinal clinical trial data since it does not depend on a restrictive symmetry assumption and does not require complete data for all subjects. The GEE sample size formula is based on the missing completely at random (MCAR) assumption and a large-sample approximation. We evaluate the performance of the sample size estimate under the damped exponential family of correlation structures, small sample sizes and non-ignorable missing mechanisms. AR(1) and CS have been the correlation structures most often used for sample size (power) estimation in longitudinal clinical trials because of their simplicity and the distinctness of their correlation patterns. We investigate the effect of correlation structure on sample size estimates using the damped exponential family, of which AR(1) and CS are special cases. Simulation results suggest that the sample size formula in equation (2) yields empirical powers close to nominal levels for various correlation structures. When information about the appropriate correlation pattern is absent, a conservative approach for testing the rates of change between treatment groups is to adopt the AR(1) model rather than the CS model. The sample size formula also yields empirical powers close to the nominal level when the estimated total sample size ranges from 23 to 75, that is, when the estimated sample size in each group ranges from 12 to 38. Because the formula assumes MCAR, we also conduct a simulation study under non-ignorable missingness: the empirical powers remain reasonably close to nominal levels, although larger values of the tuning parameter a reduce the power of the study.
However, a more extensive simulation study is needed to evaluate the performance of the GEE sample size formula under other non-ignorable missing mechanisms. Overall, the GEE sample size formula yields empirical powers close to the nominal levels under damped exponential correlation structures and small sample sizes. The computer program to estimate the sample size can be obtained from the author.
This work was supported in part by NIH grant UL1 RR024892. We thank two anonymous reviewers for their constructive comments and helpful suggestions.