Cronbach Coefficient Alpha (CCA) is a classic measure of item internal consistency of an instrument and is used in a wide range of behavioral, biomedical, psychosocial, and health-care related research. Methods are available for making inference about one CCA or multiple CCAs from correlated outcomes. However, none of the existing approaches effectively address missing data. As longitudinal study designs become increasingly popular and complex in modern-day clinical studies, missing data has become a serious issue, and the lack of methods to systematically address this problem has hampered the progress of research in the aforementioned fields. In this paper, we develop a novel approach to tackle the complexities involved in addressing missing data (at the instrument level due to subject dropout) within a longitudinal data setting. The approach is illustrated with both clinical and simulated data.
Measurement error models play a particularly important role in psychosocial research, as many quantities of interest are latent constructs and not directly observable. Outcomes for measuring such constructs are often derived based on instruments (questionnaires) that delve into multifaceted dimensions of an individual’s mental and physical health. For most instruments, items are scored on a Likert scale such as 0, 1, 2, 3, 4 and then totaled either across all items or a subset of items from a subscale to provide an assessment of the latent construct or a sub-domain of interest. An elegant measure to ensure that the different items in an instrument or a sub-scale capture a large amount of variability of either a uni-dimensional construct or multi-dimensional constructs sharing a common source is the Cronbach Coefficient Alpha (CCA). CCA ranges between 0 and 1, with larger values indicating higher degrees of consistency across the different items in terms of capturing a common source of variation [1, 2].
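In sample form, CCA is computed as α̂ = K/(K − 1) · (1 − Σk σ̂k²/σ̂total²), where σ̂k² is the sample variance of the kth item and σ̂total² that of the total score. The following is a minimal pure-Python sketch of this textbook formula (the function name is ours; it is not the U-statistics estimator developed below, nor the paper's R package):

```python
def cronbach_alpha(scores):
    """Sample Cronbach alpha from a list of subjects' item-score vectors."""
    K = len(scores[0])  # number of items
    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([s[k] for s in scores]) for k in range(K)]
    total_var = var([sum(s) for s in scores])
    return K / (K - 1) * (1 - sum(item_vars) / total_var)
```

With perfectly correlated items the estimate attains its maximum of 1, reflecting complete internal consistency.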
Methods are available for making inference about a single CCA and multiple dependent CCAs, with the latter based on data from the same subjects. However, as these methods are derived based on joint multivariate normal distributions of item scores, they are sensitive to departures from such parametric distributional assumptions. Since item scores from most instruments in psychosocial research are based on Likert-type scales, they are intrinsically discrete, and modeling them using parametric distributions for continuous outcomes such as the multivariate normal is theoretically flawed, especially when modeling subscales of an instrument with a very limited numerical range. Further, as these methods for comparing multiple CCAs, whether from several instruments applied to a common group of subjects or from subscales of a single instrument, are developed based on statistics for omnibus tests of overall homogeneity involving all the CCAs, they do not provide explicit asymptotic joint distributions of estimates of individual CCAs. As a result, they cannot be applied to test the equality of some combination of the CCAs, which is particularly common in longitudinal studies. For example, we may want to know whether the CCA for an instrument has dropped by 30% from baseline to some follow-up visit in a longitudinal study. None of these methods can be applied to address such a hypothesis.
As longitudinal study designs become increasingly popular and complex in modern-day studies, missing data are inevitable due to loss to follow up, making inference more challenging for analysis of longitudinal CCAs. Although a variety of methods are available for addressing missing data, none applies directly to the current setting involving inference for comparing multiple CCAs within a longitudinal data setting.
In this paper, we develop a novel approach to tackle the complexities involved in addressing missing data within a longitudinal and clustered data setting. The approach is illustrated with both clinical and simulated data.
Consider an instrument consisting of K items, which is administered to n subjects in a longitudinal study with T assessments. Let yitk denote the score on the kth item from the ith subject at time t and let
where 1 denotes a K × 1 column vector of 1’s. The Cronbach Coefficient Alpha α can be expressed as:
Then, it is readily checked that
is a one-sample, vector-valued U-statistic and E(θ̂) = θ [11, chap. 3]. By applying the theory of multivariate U-statistics [12, chap. 5] and the Delta method, α̂ = f(θ̂) is a consistent and asymptotically normal estimate of α. We summarize the asymptotic properties of α̂ in a theorem below.
Let Σθ = 4Var(E(hij | yi)). Then, under mild regularity conditions, we have:
The proof of Theorem 1 is not provided since these conclusions follow as a special case from the more general results concerning the missing data case to be discussed in Section 2.2.
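To fix ideas about the U-statistic structure underlying θ̂, recall that a one-sample U-statistic averages a symmetric kernel over all distinct pairs of observations; with the kernel h(x, y) = (x − y)²/2 it reproduces the unbiased sample variance. A toy sketch of this pairwise-averaging structure (pure Python, names ours):

```python
from itertools import combinations

def u_statistic(data, kernel):
    """Average a symmetric kernel over all distinct pairs of observations."""
    pairs = list(combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# With kernel h(x, y) = (x - y)^2 / 2, this is the unbiased sample variance.
var_kernel = lambda x, y: (x - y) ** 2 / 2
```

The vector-valued θ̂ of the paper has the same form, with yi in place of scalar observations and a vector-valued kernel.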
To find a consistent estimate of Σθ, first note that:
Thus, a consistent estimate of Σθ is given by:
where Ê(hij | yi) denotes E(hij | yi) given in (4) with θ̂ and μ̂ substituted in place of θ and μ.
We can use Theorem 1 to test linear contrasts of the form:
where K is some p × T full-rank matrix of known constants. For inference, we can use the Wald statistic, which has an asymptotic central χ² distribution with p degrees of freedom under the null H0.
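For a single contrast (p = 1), the Wald statistic reduces to a scalar and its p-value follows from the χ²(1) survival function. The sketch below assumes a consistent estimate sigma_hat of the asymptotic covariance of α̂ is available; the function and argument names are ours:

```python
import math

def wald_test(alpha_hat, sigma_hat, n, contrast):
    """Wald test of H0: c' alpha = 0 for a single contrast vector c (p = 1)."""
    c_alpha = sum(c * a for c, a in zip(contrast, alpha_hat))
    # c' Sigma c: asymptotic variance of the contrast
    var_c = sum(contrast[i] * sigma_hat[i][j] * contrast[j]
                for i in range(len(contrast)) for j in range(len(contrast)))
    w = n * c_alpha ** 2 / var_c
    p_value = math.erfc(math.sqrt(w / 2))  # chi-square(1) survival function
    return w, p_value
```

For example, the contrast c = (1, −1) tests whether two CCAs from the same subjects are equal.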
In the presence of missing data, one approach is to apply the complete-data approach above to the subsample consisting of those subjects with complete data. However, in most longitudinal studies, missing data do not occur completely at random, as they often result from subjects’ deteriorated/improved health and other conditions related to the treatment. In such cases, missing data are predicted by observed responses and, as a result, the missing completely at random (MCAR) assumption does not apply. Thus, such a complete-data approach not only reduces power, but also likely yields biased estimates. A better alternative is to include all available data and address the inherent missing data problem.
To this end, we assume that missing data only occurs at the instrument level and define a vector of binary variables for indicating missing (or rather observed) instrument-level response as follows:
Define a new estimate of α of the same form as before, α̂ = f(θ̂), but with θ̂ redefined by:
Note that although hijt in (7) is not defined if either yit or yjt is missing, θ̂t is still well-defined, since hijt can be assigned any value in such cases without affecting θ̂t. The estimate θ̂t in (7) may be viewed as a generalization, to the U-statistics setting, of the classic inverse probability weighted (IPW) estimate used for distribution-free inference about standard statistical models such as the weighted generalized estimating equations (WGEE).
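The IPW idea itself can be seen in a much simpler setting: each observed value is weighted by the inverse of its probability of being observed, so that the observed cases stand in for the full sample. A toy sketch of such a weighted mean (ours, not the WGEE machinery):

```python
def ipw_mean(values, observed, probs):
    """Inverse probability weighted mean.

    values:   outcomes for all n subjects
    observed: 1 if the subject's outcome was observed, else 0
    probs:    probability of being observed for each subject
    """
    # each observed value is up-weighted by 1 / prob; missing values drop out
    return sum(r * v / p for v, r, p in zip(values, observed, probs)) / len(values)
```

When all outcomes are observed with probability 1, the estimate reduces to the ordinary sample mean.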
First, assume that Δi is known. Then, it can be shown that θ̂ is both consistent and asymptotically normal. Thus, by the Delta method, α̂ = f(θ̂) is a consistent and asymptotically normal estimate of α. We summarize these results in a theorem below, with a proof sketched in Appendix A1.
This case with known Δi may arise if data are missing by design, as in some multi-stage trials in which we may want to over-sample subjects with larger outcomes at a prior stage to participate in the current stage. For example, in a two-stage study with n subjects at stage one, we can model the selection probability Δi2 for stage two using the following logistic regression:
where ri2 denotes the selection indicator (1 for selected) and yi1 the outcome from the ith subject in the first stage. In the above model, β0 and β1 (≥ 0) determine the probability that subjects with outcome yi1 are selected to participate in the second stage.
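As a sketch of this selection model, the stage-two selection probability is simply the logistic function of the stage-one outcome (the β values used below are illustrative choices of ours):

```python
import math

def selection_prob(y1, beta0, beta1):
    """Pr(r_i2 = 1 | y_i1) under the logistic selection model."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * y1)))
```

With β1 > 0, subjects with larger stage-one outcomes are more likely to be selected into stage two, which is the over-sampling behavior described above.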
In most applications, Δi is unknown and must be estimated. Under MCAR, ri is independent of yi. Consequently, Δi is functionally independent of yi and is readily estimated by the respective sample moments. When Δi becomes dependent on yi, it is necessary to model Δi as a function of yi. However, it is difficult to model such a relationship without imposing some additional assumptions on the relationship between the occurrence of missing data and the outcome. As in the literature, we focus on the missing at random (MAR) assumption, under which the occurrence of missing data depends only on the observed responses.
Let yi(obs) denote the observed responses for the ith subject. Then, under MAR,
As yi(obs) is the sub-vector of yi containing the non-missing responses, Δit above is essentially a function of the missing data patterns across the subjects. Since there are potentially a total of 2^T different patterns, it is generally not feasible to model and estimate Δit in most real studies unless there is a certain structure in the patterns. Fortunately, in most longitudinal trials, missing data often occur as the result of subject dropout due to deteriorated/improved health and other related conditions, exhibiting the so-called monotone missing data pattern (MMDP). The structured MMDP reduces the number of missing data patterns to T, making it possible to model such a dependence with limited data in most studies.
We assume no missing data at baseline t = 1 so that ri1 = 1. Under MMDP, if yit is observed at time t, then yis is observed at all earlier times s < t. Let yi(1:t−1) denote the sub-vector of yi containing responses up to and including time t − 1 (2 ≤ t ≤ T). Then, under MAR, it follows from (12) that
To estimate Δit, we first model the one-step transition probability of the occurrence of missing data using logistic regression:
In many applications, the limited data may prevent us from reliably estimating β, especially with a large K. In such cases, we may simplify the above model by substituting the mean item score in place of yis, yielding:
Now by invoking MMDP and the assumption of no missing data at t = 1, we obtain:
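In code, the cumulative observation probability under MMDP is just a running product of the one-step retention probabilities, with Δi1 = 1 since there is no missing data at baseline (a minimal sketch; names are ours):

```python
def cumulative_obs_prob(one_step):
    """Delta_it as the product of one-step retention probabilities.

    one_step: [p_i2, ..., p_iT], the probabilities of remaining observed
    at each post-baseline time given observation at the previous time.
    Returns [Delta_i1, Delta_i2, ..., Delta_iT] with Delta_i1 = 1.
    """
    out = [1.0]
    for p in one_step:
        out.append(out[-1] * p)
    return out
```

For example, one-step retention probabilities of 0.9 and 0.8 give Δi3 = 0.9 × 0.8 = 0.72.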
When Δi is estimated by the logistic regression model in (14), Δi(β̂) becomes subject to the sampling variability of β̂. Since Theorem 2 treats Δi as a known constant, we must account for this extra variability for correct inference about α.
Suppose that β in the logistic model in (14) is estimated by maximum likelihood. Then, β̂ is the solution to the following score equations:
When using the estimated β̂, the estimate θ̂ defined in (7) becomes
Thus, we must also include the variability of β̂ in the asymptotic variance of θ̂(β̂). By utilizing the properties of score equations and the projection-based asymptotic expansion of U-statistics, we can derive the asymptotic variance to take this extra variability into account. We summarize the results below with a justification given in Appendix A2.
Then, under the assumptions of Theorem 2,
It is seen from (20) that when the weight function is estimated, the asymptotic variance of α̂ has an additional component Φ, which reflects the sampling variability in β̂.
Although derived from a longitudinal data setting, the results in the theorems can also be applied to cross-sectional study data. For example, most of the available methods have been developed to compare whether several CCAs, either from different instruments or from multiple domains or subscales of an instrument, are identical. By identifying T as the number of instruments or domains of an instrument, yitk then represents the score from the ith subject in response to the kth item within the tth domain. We can use Theorem 1 for inference about such an omnibus hypothesis if there is no missing data. If missing data do occur at the domain level, we can still model the missing data using the above approaches. Although MCAR presents no additional problem, applications under MAR to the current context are generally more difficult, since it may not be possible to identify a simple structure to model the missing data indicators in (6) such as the MMDP for longitudinal data.
We illustrate the approach with both clinical and simulated data. We first present a clinical application and then investigate the performance of the approach under small to moderate samples by simulation. In all of the examples, we set the statistical significance level at α = 0.05. All analyses are carried out using a package we have developed for implementing the proposed approach on the R software platform (R Development Core Team (2009). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org).
A study on the benefits of total knee arthroplasty (TKA) was conducted at the Hospital for Special Surgery in 2008 (“Early clinical and radiographic results of the standard posterior stabilized and high flex polyethylene inserts in the genesis II total knee replacement system: A pilot study”, approved by the Institutional Review Board of the Hospital for Special Surgery). A modified version of the Knee Society Rating System (KSS) was used to report results for patients who underwent TKA at three visits: pre-operation, 6 weeks post-operation, and 4 months post-operation. For illustration purposes, we focused on a total of 117 patients who had observed data at pre-op in the patient-reported portion of the KSS, which consists of 5 items measuring knee function in walking (a 6-level ordinal scale), going up stairs (a 4-level ordinal scale), going down stairs (a 4-level ordinal scale), getting out of a chair (a 4-level ordinal scale), and the necessity of support for walking (a 3-level ordinal scale).
In this sample, 20 of the 117 patients were lost to follow-up, yielding a monotone missing data pattern. Let yit = (yit1, yit2,…, yit5) denote the 5 item scores at time t, with t = 1 denoting pre-op, and t = 2, 3 corresponding to the two post-op assessments at 6 weeks and 4 months, respectively. The missingness at time t was modeled using the logistic regression model in (15), with the predictor ỹit = ȳi(t−1), the mean score of yi(t−1), and baseline covariates including age, sex, and body mass index (BMI):
Shown in Table 1 are the estimates of parameters from the above model, their standard errors, and corresponding p-values. Missingness at 4 months post-op was dependent on the observed KSS at 6 weeks post-op (p-value=0.043). There was no evidence indicating the dependence of missingness at 6 weeks on the observed KSS at pre-op. The occurrence of missing data was not highly associated with any of the baseline covariates. Given these results, we proceeded with all subsequent analyses under MAR with inference based on Theorem 3.
Shown in Table 2 are the estimated αt’s and their standard errors over time, along with the Wald test statistic and associated p-value, computed under MAR, for the null H0 of equal CCAs over all assessments. The estimates suggest a drop in CCA at the first post-operative visit at 6 weeks, but the Wald test indicates that this drop is not statistically significant (p-value = 0.54). The early postoperative course of patients who underwent a TKA generally differs across patients, with considerable variation in symptoms, range of motion, and function. These clinical differences are likely responsible for the drop of CCA at the first postoperative visit. As most patients are fully recovered 4 months after the surgery, the variability in such outcomes reduces substantially, making the CCA bounce back to its level at the pre-operative visit. Thus, despite the large variation in patients’ clinical outcomes over the course of surgery, the KSS questionnaire seems to remain internally consistent, so the total score can be used as a reliable measure to assist objective evaluation of postoperative total knee arthroplasty.
A limited simulation study was conducted to examine the empirical type I error rate for testing the null of equal CCAs over time, based on a 5-item, Likert-scale questionnaire under a longitudinal study design with three assessments, for four sample sizes (30, 50, 75, and 100), under complete data and under missing data with MCAR and MAR. For each sample size, we generated item scores over time, yit = (yit1, yit2,…,yit5), under the null, H0: αt = α, with αt denoting the CCA at time t (1 ≤ t ≤ 3). We tested the null using the Wald statistic and estimated the type I error rate based on the empirical distribution of the test statistic obtained from 1,000 Monte Carlo (MC) replications.
We created the data yit over the three assessments in two steps. First, we generated continuous outcomes over time, zit = (zit1, zit2,…,zit5), by simulating a 15 × 1 vector from a 15-variate normal distribution with mean 0 and variance matrix Σ = Σ2 ⊗ Σ1, where ⊗ denotes the Kronecker product and the Σk are defined by:
In the above, Σ1 denotes the within-visit, between-item correlation matrix, ARm(1, ρbi) represents an m × m first-order autoregressive correlation structure with auto-correlation ρbi, Σ2 is the within-subject correlation matrix, and Cm(ρbt) denotes an m × m compound symmetry correlation matrix with correlation ρbt. For the simulation study, we set ρbi = 0.6 and ρbt = 0.5, yielding the true CCAs over time, αt = α = 0.742 (1 ≤ t ≤ 3). Next, we transformed each continuous score zitk into a 5-level categorical response by:
where Φ−1 denotes the inverse of the CDF of the standard normal.
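The two simulation steps can be sketched as follows; the helper names are ours, and the equal-probability cut points in to_likert are an assumption on our part (the study's actual thresholds may have been chosen to induce skew):

```python
from statistics import NormalDist

def ar1(m, rho):
    """AR_m(1, rho): first-order autoregressive correlation matrix."""
    return [[rho ** abs(i - j) for j in range(m)] for i in range(m)]

def cs(m, rho):
    """C_m(rho): compound-symmetry (exchangeable) correlation matrix."""
    return [[1.0 if i == j else rho for j in range(m)] for i in range(m)]

def kron(A, B):
    """Kronecker product of two square matrices, A (x) B."""
    mb = len(B)
    n = len(A) * mb
    return [[A[i // mb][j // mb] * B[i % mb][j % mb] for j in range(n)]
            for i in range(n)]

def to_likert(z, levels=5):
    """Discretize a standard normal score at equal-probability quantiles."""
    cuts = [NormalDist().inv_cdf(j / levels) for j in range(1, levels)]
    return sum(z > c for c in cuts)  # returns an integer in 0 .. levels - 1

# 15 x 15 simulation covariance: Sigma2 (x) Sigma1 for T = 3 visits, K = 5 items
Sigma = kron(cs(3, 0.5), ar1(5, 0.6))
```

Continuous outcomes drawn from a multivariate normal with covariance Sigma are then passed through to_likert item by item to produce the categorical responses.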
For a given sample size n, let W(j) denote the Wald statistic from the jth MC replication and G0.95 the 95th percentile of the χ² distribution with 2 degrees of freedom. The empirical size is calculated as the proportion of the 1,000 replications with W(j) > G0.95.
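The empirical size computation itself is a one-line proportion (G0.95 ≈ 5.991 for the χ² distribution with 2 degrees of freedom):

```python
def empirical_size(wald_stats, crit):
    """Fraction of Monte Carlo replications whose statistic exceeds crit."""
    return sum(w > crit for w in wald_stats) / len(wald_stats)

G95 = 5.991  # approximate 95th percentile of chi-square with 2 df
```

Under a valid test, this proportion should converge to the nominal 0.05 as the sample size grows.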
For the missing data case, we assumed no missing data at baseline t = 1 and simulated the missing responses under both an MCAR and a MAR model. For MAR, we considered the MMDP model and simulated the missing data indicators at each time t, rit, from a Bernoulli distribution with success probability pit (t = 2, 3). We specified the one-step transition probabilities pit at time t according to the logistic model in (15) under the Markov assumption, which in this case had the following form:
where ȳi(t−1) denotes the mean of yi(t−1). We fixed βt1 = 0.3 (t = 2, 3) and solved the following equations for β20 and β30 to generate missing response rates of approximately 10% and 15% at times 2 and 3, respectively:
To ensure that the missing data indicators rit followed the MMDP, we further replaced ri3 with ri3 × ri2 (1 ≤ i ≤ n). For MCAR, the same approach was used except for setting βt1 = 0 in (22) (t = 2, 3).
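Putting the pieces together, monotone dropout can be simulated by drawing each one-step retention indicator from the logistic model and forcing all later indicators to zero once a subject drops out. A sketch under the stated design (parameter and function names are ours):

```python
import math
import random

def simulate_dropout(y_means, beta0, beta1, rng):
    """Simulate monotone missing data indicators r_i1, ..., r_iT.

    y_means: prior-visit mean item scores feeding the retention model
    beta0:   per-time intercepts (beta0[t] used at time t)
    beta1:   slope on the previous visit's mean score
    """
    r = [1]  # no missing data at baseline
    for t in range(1, len(y_means)):
        p = 1.0 / (1.0 + math.exp(-(beta0[t] + beta1 * y_means[t - 1])))
        # multiplying by r[-1] enforces MMDP: once missing, always missing
        r.append(r[-1] * (1 if rng.random() < p else 0))
    return r
```

Setting beta1 = 0 yields MCAR; a positive beta1 makes retention depend on the observed prior response, i.e., MAR.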
In addition to examining type I error rates, a power analysis for testing the null of equal CCAs over time was conducted in a second simulation study within the same longitudinal data setting under complete data, MCAR, and MAR. The true alphas were set at α1 = 0.7, α2 = 0.803, and α3 = 0.898 for times 1, 2, and 3, respectively. The power was estimated by the proportion of the 1,000 MC replications in which the Wald statistic exceeded the critical value G0.95.
We computed the estimate α̂ of α = (α1, α2, α3) and estimated its asymptotic variance using Theorem 3. Shown in Table 3 are the estimates α̂ averaged over 1,000 Monte Carlo (MC) replications, along with the empirical variance Var(α̂). The type I error rates were obtained as the proportion of replications with the Wald statistic exceeding G0.95. Under complete data, MCAR, and MAR, the averaged α̂ showed some downward bias, especially for the small samples n = 30 and 50 under missing data. The asymptotic standard errors based on the estimated asymptotic variance were almost identical to their empirical counterparts, Var(α̂), across sample sizes under both complete and missing data. Although the empirical type I error rates were slightly higher than the nominal value 0.05 at n = 30, the upward bias rapidly disappeared as the sample size increased.
Shown in Table 4 are the estimates α̂ averaged over 1,000 Monte Carlo (MC) replications, the empirical variance Var(α̂), the model-based asymptotic variance, and the estimated power. Similar patterns in power were found across complete data, MCAR, and MAR. Although the estimated power was low at n = 30, it increased rapidly as the sample size increased.
We repeated the simulation for a larger number of items, K = 10. Compared to the results for K = 5 items, the type I error rates were closer to the nominal 0.05 even for the small sample size n = 30. Power also improved significantly, reaching 60% at n = 30 and jumping to 85% for complete data and MCAR, and 83% for MAR, at n = 50.
We developed a U-statistics based approach to concurrently address missing data and stringent distributional assumptions when modeling the Cronbach Coefficient Alpha (CCA) within a longitudinal data setting. Unlike available methods, this approach requires no parametric distributional assumption on the item scores and thus provides robust inference regardless of the data distribution. As longitudinal studies have become increasingly popular in today’s research, the proposed approach fills a critical gap in the extant literature to address this timely data-analytic issue.
To demonstrate the robustness feature, we also conducted a simulation study to compare the U-statistic based approach with a popular method developed by Feldt et al. in a cross-sectional setting, by considering the hypothesis H0: α = α0 vs. Ha: α ≠ α0 under various sample sizes (n = 30, 50, 100, and 200). Feldt et al. showed that the transformed alpha follows an F distribution under the assumed ANOVA model, in which the item-by-subject interaction effects and the residual errors are independent and normally distributed with homogeneous variance. To examine its performance under Likert-type item scores, we generated skewed Likert-type item scores with true alpha α0 = 0.693 and K = 5 items. Shown in Table 5 are the estimates of CCA and type I error rates for the proposed U-statistic based approach and their F-distribution based counterparts. Although the estimates of alpha from the two approaches, α̂U (U-statistics based) and α̂F (F-distribution based), are identical and close to the true alpha, the type I error rates are quite different. The type I error rate for the U-statistic based approach had some upward bias when the sample size was small, but rapidly decreased to the nominal level 0.05 as the sample size increased. In contrast, the type I error rate for the F-distribution based method remained upwardly biased despite the increase of the sample size. In fact, the type I error will remain upwardly biased even as the sample size approaches infinity, since the F distribution is not the asymptotic distribution of the estimate. The observed difference in the behavior of the type I error rates reflects the fact that the U-statistic based approach is a distribution-free procedure, yielding not only consistent estimates of CCA but also valid inference regardless of the distribution of the item scores, whereas the inference of the F-distribution based method rests on the normal distribution and is not robust against departures from this underlying assumption.
Modeling CCA presents a major challenge under the popular mean response based regression paradigm as typified by the weighted generalized estimating equations (WGEE) due to the complexity in the analytic expression of this coefficient involving second-order moments. Although GEE II may be used to model the required second-order moments [12, 16, 17], it leads to more cumbersome algebra. Further, as it requires modeling more parameters, GEE II is generally not as efficient as the proposed U-statistics based approach.
The proposed IPW-based estimate may not be fully efficient, since information from the subjects with missing data is only used to estimate the weight function. In many situations, reliable models may also exist for directly modeling the relationship between the (missing) response and observed covariates or other auxiliary information. In such cases, we may combine the proposed IPW with this extra model to develop a doubly robust estimate, which not only ensures consistency if just one of the two models (the model for missing data or the one for the auxiliary information) is correct, but also improves efficiency [14, 18].
In addition to CCA, the proposed approach can be similarly applied to modeling other measures of agreement for continuous outcomes, such as the closely related intraclass correlation coefficient (ICC). In addition, missing data may also occur at the item level, which can become complicated depending on the nature of the question and on the design of the study. For example, person-to-person interviews often result in more missing data and/or unreliable responses compared to questionnaires, as interviewees may feel uncomfortable and refuse to answer certain types of questions in a face-to-face meeting. As missing responses arising from such sensitive questions likely follow a non-ignorable missing data mechanism rather than the missing at random (MAR) mechanism common in longitudinal studies, and as the former mechanism is more difficult to model than MAR, they are best addressed up front by study design. Work is currently underway to generalize the proposed approach to address these limitations as well as to develop doubly robust estimates within a longitudinal setting.
This research is supported in part by NIH grant U54 RR023480. Dr. Ma was partially supported by the following grants: Center for Education and Research in Therapeutics (CERTs) (AHRQ RFA-HS-05-14) and Clinical Translational Science Center (CTSC) (UL1-RR024996). We sincerely thank Ms. Cheryl Bliss-Clark at the University of Rochester, an Editor and two anonymous reviewers for their constructive comments to improve the presentation of the manuscript.
We first establish a lemma.
where vijt are defined in (8). Then, E (Un) = 0 and
It is readily checked by the iterated conditional expectation that [12, chap. 1]
It follows that E (Un) = 0. Since vkjt = vjkt (j ≠ k), we have
Let . It then follows that
Thus, the projection of Ut,n is given by [12, chap. 5]
Since Ũt,n is a sum of independently and identically distributed random variables, it follows from the central limit theorem (CLT) that
By the theory of U-statistics (e.g., [12, chap. 5]), Ũt,n and Ut,n have the same asymptotic distribution and thus
The lemma follows by applying a similar argument to the vector Un.
By an argument similar to (24), we have E(gijt) = θt. It then follows from the theory of U-statistics that
Thus, by Slutsky’s theorem, α̂t = f(θ̂t) is consistent. Further, by applying the Lemma and Slutsky’s theorem, we obtain the asymptotic distribution of α̂t:
Similarly, by considering the vector , we obtain
Theorem 2(a) follows by applying the Delta method to α̂ = f(θ̂).
To show (10), first note that
Further, we have
Thus, Theorem 2(b) follows by taking the covariance between the components at times t and t′, with μt substituted by its consistent estimate in (10).
As noted in Section 2.2, β̂ is the solution to the score equations in (17). From the properties of score equations, we have: