Increasingly, multiple outcomes are collected to characterize treatment effectiveness or to evaluate the impact of large policy initiatives. Often the multiple outcomes are non-commensurate, e.g., measured on different scales. The common approach to inference is to model each outcome separately, ignoring the potential correlation among the responses. We describe and contrast several full-likelihood and quasi-likelihood multivariate methods for non-commensurate outcomes. We present a new multivariate model to analyze binary and continuous correlated outcomes using a latent variable. We study the efficiency gains of the multivariate methods relative to the univariate approach. For complete data, all approaches yield consistent parameter estimates. When the mean structure of all outcomes depends on the same set of covariates, efficiency gains from adopting a multivariate approach are negligible. In contrast, when the mean outcomes depend on different covariate sets, large efficiency gains are realized. Three real examples illustrate the different approaches.
Often multiple outcomes are collected in health-related studies in order to characterize treatment effectiveness or associations with covariates. This observation is particularly true in psychiatric studies where the primary outcome is an abstract construct that cannot be measured directly. Instead, several variables are measured as proxies of the underlying outcome of interest. For example, in evaluating the effectiveness of a new anti-psychotic, researchers will examine several outcomes such as the positive and negative syndrome scale (PANSS) score, symptom relapse, and quality of life.
Typically the multiple outcomes are non-commensurate, i.e., they are measured on different scales such as continuous and binary responses. Although there has been some development of multivariate methods for non-commensurate outcomes, the usual modeling strategy is to consider each outcome separately in a univariate framework. This strategy is less efficient in the sense that such an approach ignores the extra information contained in the correlation among the outcomes. Other advantages of a multivariate setting include better control over the type I error rates in multiple tests and the ability to answer intrinsically multivariate questions. For example, we might be interested in assessing the impact of a policy change on the quality of care (underlying outcome) rather than its impact on each outcome measured as a proxy of quality of care.
The challenge for multivariate methods is the nonexistence of obvious multivariate distributions for non-commensurate variables. Two general likelihood-based multivariate approaches have been proposed to avoid the direct specification of the joint distribution of the outcomes: factorizing the joint distribution of the outcomes and introducing an unobserved (latent) variable to model the correlation among the multiple outcomes.
The main idea of the factorization method is to write the likelihood as the product of the marginal distribution of one outcome and the conditional distribution of the second outcome given the previous outcome. Cox and Wermuth  discussed two possible factorizations for modeling a continuous and a binary outcome as functions of covariates. Fitzmaurice and Laird , and Catalano and Ryan  extended this approach to situations of clustered data.
Several models using latent variables have been proposed to analyze multiple non-commensurate outcomes as functions of covariates. Sammel et al.  discussed a model where the outcomes are assumed to be physical manifestations of a latent variable and, conditional on this latent variable, the observed outcomes follow a one-parameter exponential family model. The observed outcomes are modeled as functions of fixed covariates and a subject-specific latent variable. A drawback of this model, later addressed by the authors, is its non-robustness to misspecification of the covariance, because the mean parameters depend heavily on the covariance parameters. For example, if the outcomes are not correlated, the estimates of the covariate effects may be biased .
Arminger and Kusters  considered each outcome as a manifestation of an underlying continuous latent variable that is normally distributed. Dunson  extended this approach to accommodate non-normal latent variables, clustered data, non-linear relationships between the observed outcome and the underlying variables, multiple latent variables for each outcome type, and covariate-dependent modifications of the relationship between the latent and underlying variables. Similar approaches were presented in the context of toxicity studies where longitudinal measurements are taken on multiple outcomes [8; 9]. Although very general, Dunson's approach  produces a non-identifiable model in the bivariate case of one binary and one continuous outcome. This fact is well known in factor analysis (see for example Reilly ), where each factor needs to be a combination of three or more outcomes for the model to be identifiable; otherwise the parameter space has to be reduced. Often this is achieved by fixing some parameters at a constant. However, in Dunson's model it is not clear how to constrain the parameters to make the model identifiable without misspecifying the model for the mean or covariance. Lin et al.  addressed a similar identifiability problem in the context of models for multiple continuous outcomes by scaling the outcomes to have the same variance.
A quasi-likelihood approach was also proposed for non-commensurate outcomes. The generalized estimating equations (GEE) described by Liang and Zeger  were extended by Prentice and Zhao , and Zhao et al.  for mixtures of continuous and discrete outcomes. In their approach separate equations are used for each outcome and a working correlation matrix is used to induce the correlation among the outcomes. A sandwich-type variance can then be computed for the model parameters which is robust to the misspecification of the working correlation matrix. Despite the attractive properties of this approach, we are unaware of its use in practice.
In this paper we review the different approaches to model a binary and a continuous outcome. We introduce a new latent variable model by constraining the parameters of the latent model proposed by Dunson  for identifiability without restrictions on the correlation. We show that this latent model is equivalent to the factorization model presented by Catalano and Ryan  by demonstrating that they are only different parameterizations of the same model. We also implement the GEE approach proposed by Prentice and Zhao . Simulation studies are used to compare consistency, efficiency, and coverage of the multivariate approach with the univariate approach. Section 2 describes the usual univariate approach, and both likelihood-based and quasi-likelihood multivariate methods to model a continuous and a binary outcome. In Section 3 we compare estimates obtained from the latent variable model, the factorization model, and the GEE with those from the univariate approach in terms of bias and efficiency. Finally in Section 4 three real data sets illustrate our methods.
Let ybi denote a binary outcome and yci a continuous outcome for the ith of n patients, and let xbi and xci denote rb × 1 and rc × 1 vectors of covariates associated with each outcome. We use the subscript k to denote a particular covariate, xbk or xck. We use a probit link for the binary outcome and the identity link for the continuous outcome. In some models these link functions arise naturally from the construction; in other models the links are used for illustration only, and other links could be used.
One common approach to model multiple outcomes as functions of covariates is to ignore the correlation between the outcomes and fit a separate model to each response variable. In this setting we use a probit regression for the binary response and a linear regression for the continuous response,
where βb = (βb1, … , βbrb), βc = (βc1, … , βcrc), and the error term for the continuous outcome is εci ~ N(0, σc²). The interpretation of the regression parameters for these models is the usual one in univariate generalized linear regression models: βbk is the change in the probit of the expected value of ybi for a one-unit increase in the covariate xbk, and βck is the change in the expected value of yci for a one-unit increase in the kth covariate. Estimates for the regression parameters can be obtained by maximizing the likelihood.
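To make the univariate baseline concrete, here is an illustrative sketch in Python rather than the SAS code actually used in the paper; the function and variable names (`fit_univariate`, `Xb`, `Xc`) are our own.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_univariate(yb, yc, Xb, Xc):
    """Univariate approach: probit regression for the binary outcome and
    ordinary least squares for the continuous outcome, fitted separately."""
    # OLS for the continuous outcome has a closed-form solution
    beta_c, *_ = np.linalg.lstsq(Xc, yc, rcond=None)

    # Probit MLE for the binary outcome
    def negloglik(b):
        p = norm.cdf(Xb @ b).clip(1e-10, 1 - 1e-10)
        return -(yb * np.log(p) + (1 - yb) * np.log(1 - p)).sum()

    res = minimize(negloglik, np.zeros(Xb.shape[1]), method="BFGS")
    return res.x, beta_c
```

Because the two fits share no parameters, any correlation between yb and yc is simply ignored, which is exactly the information the multivariate models below try to exploit.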
Fitzmaurice and Laird  proposed a model for a correlated binary and a continuous outcome based on the factorization of the joint distribution of the outcomes, f(yb, yc) = f(yb)f(yc | yb). The expected values of the outcomes are related to the covariates xb and xc, for example,
where μbi = E(ybi | xbi) and τ is the parameter for the regression of yci on ybi. Large absolute values of τ indicate a strong correlation between the two outcomes. If τ = 0 the two outcomes are independent given the covariate(s). The correlation that results from this model is Corr(ybi, yci) = τ√(μbi(1 − μbi)) / √(σ² + τ²μbi(1 − μbi)).
This factorization of the joint distribution has the convenient property that the model parameters have a marginal interpretation. βbk is the change in the probit of the expected value of ybi for a one-unit increase in the kth covariate, and because the term (ybi − μbi) has mean 0, βck is the change in the expected value of yci | xci, xbi for a one-unit increase in the covariate xck. Another characteristic that distinguishes this model from other approaches is the assumption regarding the distribution of yci. Conditional on ybi and the covariates, yci is assumed to be normally distributed, implying that the marginal distribution of yci is a mixture of two normals. For a high correlation between the two response variables, the marginal distribution of yci | xci, xbi will in fact be bimodal. Therefore the variance of yci | xci, xbi depends on xbi, i.e., Var(yci | xci, xbi) = σ² + τ²μbi(1 − μbi).
Maximum likelihood estimates for the regression parameters of the factorization method can be obtained with commonly used algorithms for maximizing the likelihood. The log-likelihood function under the factorization model (2) is
where μbi = Φ(xbiβb), and Φ(·) represents the cdf of the standard normal distribution.
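As an illustrative sketch of this log-likelihood (our own Python, not the paper's implementation; the parameter packing θ = (βb, βc, τ, log σ) is an assumption made for the example):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def factorization_negloglik(theta, yb, yc, Xb, Xc):
    """Negative log-likelihood for f(yb) f(yc | yb): probit for yb, and
    yc | yb ~ N(xc*beta_c + tau*(yb - mu_b), sigma^2)."""
    pb, pc = Xb.shape[1], Xc.shape[1]
    beta_b, beta_c = theta[:pb], theta[pb:pb + pc]
    tau, sigma = theta[pb + pc], np.exp(theta[pb + pc + 1])
    mu_b = norm.cdf(Xb @ beta_b).clip(1e-10, 1 - 1e-10)
    ll_b = yb * np.log(mu_b) + (1 - yb) * np.log(1 - mu_b)
    mu_c = Xc @ beta_c + tau * (yb - mu_b)
    ll_c = norm.logpdf(yc, loc=mu_c, scale=sigma)
    return -(ll_b + ll_c).sum()
```

Minimizing this function with a general-purpose optimizer gives the maximum likelihood estimates of βb, βc, τ and σ.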
The factorization of the joint distribution of ybi and yci can also be considered in the reverse order: f(yb, yc) = f(yc)f(yb | yc). The model for the two outcomes is written as:
where τ′ is the parameter for the regression of ybi on yci. In this case the interpretation of the regression parameters for the binary outcome is conditional on the continuous outcome. To obtain the marginal effects we have to average over yci; the marginal effect of the covariates on the binary outcome is then obtained from this average.
Sammel et al.  presented a latent variable model where the observed outcomes are assumed to be physical manifestations of a latent variable. Conditional on this latent variable, the outcomes are assumed to be independent, and are modeled as functions of fixed covariates and a subject-specific latent variable. The effect of the covariate(s) is modeled through the latent variable. Let ui denote the latent variable and xik the covariate of interest, such as treatment. Then, given xik, ui = γxik + δi, with δi ~ N(0, 1). The parameter γ represents the association between the covariate and the unobserved latent variable. The outcomes are modeled as functions of the latent variable:
Here, βb2 and βc2 indicate the strength of the association between the observed outcomes and the latent variable. Conceptually this model is very appealing because it captures the idea that the outcomes are measuring an underlying construct. A drawback, however, is that some covariance parameters are also present in the mean: because the loadings βb2 and βc2 enter both the mean and the covariance of the outcomes, the model is very sensitive to misspecification of the correlation structure .
Another approach based on latent variables was proposed by Dunson . A major difference between this approach and Sammel’s approach relates to the association between the responses and the covariates. In Dunson’s approach the covariates are not included in the model through the latent variable but rather introduced separately. For the case of a binary and a continuous outcome, Dunson’s model would be written as
where ui ~ N(0, σu²) is a subject-specific latent variable. The means and the covariance structure are modeled through different parameters. The latent variable shared by both outcomes induces the correlation, and it is assumed that, given the latent variable, the two outcomes are independent. However, λb, λc, σu and σc are not identifiable. Fixing these parameters at arbitrary constants will result in a misspecification of the correlation between the outcomes. To better understand this argument, consider a similar model for two correlated continuous outcomes, y1 and y2,
where ui ~ N(0, σu²) and the error terms eji ~ N(0, σj²), j = 1, 2. The parameters associated with the variance components of the outcomes (λ1, λ2, σu, σ1 and σ2) are not identifiable: there are five parameters to be estimated but information only from Var(y1), Var(y2) and Cov(y1, y2). We have to restrict at least two parameters to obtain an identifiable model. The correlation induced by the model is λ1λ2σu² / √((λ1²σu² + σ1²)(λ2²σu² + σ2²)). If we constrain the parameters λ1 and λ2 to be 1, for example, the correlation becomes σu² / √((σu² + σ1²)(σu² + σ2²)). It is easy to build a case where such a model fails to induce the correct correlation. Suppose that the conditional variances of the two outcomes differ and that Corr(y1, y2 | x1i, x2i) = .8. Under this constraint, σu² must equal Cov(y1, y2) = .8√(Var(y1)Var(y2)), which exceeds the smaller of the two variances whenever their ratio is large enough, forcing a negative error variance; the constrained model therefore cannot induce the correct correlation. Fixing the variances of the error terms or of the latent variable will lead to similar inconsistencies. A similar argument can be given for the model in (7). Although it has one parameter fewer than (8), there is less information to estimate the parameters because Var(yb | xbi) is fully determined by E(yb | xbi).
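The counting argument can be verified numerically: distinct parameter vectors for the unconstrained latent model imply exactly the same observable second moments, so the data cannot distinguish them (a sketch; `implied_moments` is our own helper).

```python
import numpy as np

def implied_moments(lam1, lam2, s_u, s_1, s_2):
    """Observable variance/covariance implied by the latent model
    y_j = x_j beta_j + lam_j * u + e_j, u ~ N(0, s_u^2), e_j ~ N(0, s_j^2)."""
    v1 = lam1**2 * s_u**2 + s_1**2
    v2 = lam2**2 * s_u**2 + s_2**2
    cov = lam1 * lam2 * s_u**2
    return np.array([v1, v2, cov])

# Two different parameter vectors with identical implied moments:
# the likelihood cannot tell them apart, so the model is not identifiable.
a = implied_moments(1.0, 1.0, 1.0, 1.0, 1.0)
b = implied_moments(np.sqrt(2), np.sqrt(2), np.sqrt(0.5), 1.0, 1.0)
```

Both calls return the same (Var(y1), Var(y2), Cov(y1, y2)) up to floating point, illustrating the flat direction in the parameter space.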
To determine appropriate constraints for the parameters in (7) we use an idea similar to the scaled multivariate mixed model proposed by Lin and Ryan . Let y1 and y2 be two continuous, normally distributed outcomes associated with covariates x1 and x2, respectively. Given the covariates, we assume that the two outcomes are correlated. We define the scaled outcomes y1/σ1 and y2/σ2, where σ1 and σ2 are scaling parameters such that
where ui is a latent variable that induces the correlation between the two scaled variables. We can rewrite (9) and obtain the final expression for a latent model for two continuous outcomes:
where ui ~ N(0, σu²) and the error terms have unit variance. The correlation between the two outcomes induced by the model is σu²/(1 + σu²), so the range of correlations that we can model is [0, 1), which requires that the outcomes be positively correlated. In many practical situations the researcher can anticipate the sign of the correlation. If the outcomes are expected to be negatively correlated, a possible solution is to invert the coding of the binary outcome or to multiply the continuous outcome by −1, thereby reversing the sign of the correlation.
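With the loadings and error variances fixed at 1, the induced correlation is governed by σu² alone; the closed form σu²/(1 + σu²) (our reading of the constrained model) can be checked and inverted directly:

```python
def induced_corr(s_u2):
    """Correlation induced by the scaled latent model with unit error variances."""
    return s_u2 / (1.0 + s_u2)

def required_su2(rho):
    """Invert the map: latent variance needed for a target correlation in [0, 1)."""
    return rho / (1.0 - rho)
```

Since σu² ≥ 0, `induced_corr` ranges over [0, 1), matching the restriction to non-negative correlations noted above.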
These considerations motivate the constraints for the model (7) as follows. Let yb and yc be a binary and a continuous variable associated with covariates xb and xc, respectively. We want to develop a multivariate model that takes into account the potential correlation between yb and yc. The variable yc is assumed to be normally distributed given the covariates xc. Suppose there is an underlying variable, normally distributed given the covariates xbi, that is associated with the binary outcome ybi in the following way:
Define the scaled outcome yci/σc, where σc is a scale parameter for the continuous outcome. The regression equations for the two variables can be written as:
with ui ~ N(0, σu²). The variances of the error terms are fixed at 1 by design; this is just a convenient standardization to obtain a common variance and does not represent a restriction of the model, as any other standardization would work as well. The latent variable ui is introduced in both equations to induce the correlation between the outcomes. It is assumed that, given ui, the underlying variable and the scaled continuous outcome are independent, and consequently ybi and yci are also independent given ui.
Because the continuous outcome was scaled by σc, we can write its regression equation back on the original scale. The correlation between the underlying variable and yci is a function only of σu and is given by σu²/(1 + σu²). However, the underlying variable is not observed, so we write the regression equation in terms of the observed binary outcome ybi. The final model is then,
The correlation between ybi and yci that results from this model can be calculated as:
where ϕ(·) is the standard normal density.
The parameters in (13) are interpreted conditional on ui. Given ui, βbk is the change in the probit of the expected value of ybi for a one-unit increase in the covariate xbk. For this reason the parameters of the latent model cannot be directly compared with the regression parameters of marginal models such as (1) and (2). To obtain marginal effects that can be compared with the other models, we have to average over the ui's (see eq. 15).
So, βb/√(1 + σu²) are the marginal effects associated with the covariates for the binary outcome. For the continuous outcome, βc can be interpreted as either a conditional or a marginal effect of the covariates, because ui enters that equation additively with mean 0.
The log likelihood for the model is written as:
where μci = xciβc. Estimates for the marginal effects are obtained by plugging the maximum likelihood estimates into the marginal-effect expression βb/√(1 + σu²). The estimated standard errors for the marginal effects can be approximated using the delta method.
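The delta-method step might look as follows (an illustrative sketch: the marginal-effect form βbk/√(1 + σu²) comes from marginalizing the probit over ui ~ N(0, σu²), and the 2 × 2 covariance matrix `cov` of (β̂bk, σ̂u) is assumed to be available from the fitted model):

```python
import numpy as np

def marginal_effect(beta_bk, sigma_u):
    """Marginal probit coefficient after integrating out u ~ N(0, sigma_u^2)."""
    return beta_bk / np.sqrt(1.0 + sigma_u**2)

def marginal_effect_se(beta_bk, sigma_u, cov):
    """Delta-method SE: gradient of b / sqrt(1 + s^2) with respect to (b, s)."""
    d = np.sqrt(1.0 + sigma_u**2)
    grad = np.array([1.0 / d, -beta_bk * sigma_u / d**3])
    return np.sqrt(grad @ cov @ grad)
```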
The properties of the probit link allow a simplification of the likelihood for the latent variable model. The integral in (15) has a closed-form solution and solving this integral (Appendix 1) we obtain the same model as the reverse factorization (5) but with a different parameterization:
where the parameters of the reverse factorization are explicit functions of βb, βc, σu and σc; the mapping is given in Appendix 1.
Liang and Zeger  introduced the methodology of generalized estimating equations (GEE) in the context of longitudinal data. In this methodology, the correlation among measurements on the same individual (or in the same cluster) is treated as a nuisance parameter. A 'working' correlation matrix is plugged into the equations to obtain estimates for the regression parameters. These estimators are consistent even if the 'working' correlation matrix is misspecified. The variances of the parameter estimators are obtained by correcting for the 'working' correlation matrix, resulting in what became known as the sandwich estimator. The main advantage of the GEE method is this robustness to misspecification of the covariance. Prentice and Zhao, and Zhao et al.  also proposed estimation approaches for mixed continuous and discrete outcomes using the quadratic exponential and the partly exponential families, respectively. For the case of a binary and a continuous outcome, if we assume the following model for the means of the two outcomes,
then the estimating equation
has a solution that is a consistent and asymptotically normal estimator for βb and βc  with sandwich variance Γ−1ΩΓ−1, where Vi is a 'working' covariance matrix for ybi and yci, ρ is the correlation between the outcomes, and
Typically, Di is a block-diagonal matrix because the equations for each outcome do not share regression parameters. The solution of the estimating equation is a consistent estimator of βb and βc even if Vi is misspecified. There are several strategies to obtain estimates for the parameters in the covariance matrix. A simple solution is to use the method of moments to estimate σb, σc and ρ,
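One way this moment step might be coded (a sketch with our own residual definitions; the paper gives the estimators only in display form):

```python
import numpy as np

def moment_estimates(yb, yc, mu_b, mu_c):
    """Method-of-moments estimates of sigma_c and the working correlation rho,
    given fitted means mu_b = P(yb = 1) and mu_c = E(yc)."""
    sigma_c = np.sqrt(np.mean((yc - mu_c) ** 2))
    rb = (yb - mu_b) / np.sqrt(mu_b * (1 - mu_b))  # standardized binary residual
    rc = (yc - mu_c) / sigma_c                      # standardized continuous residual
    rho = np.mean(rb * rc)
    return sigma_c, rho
```

These moment estimates can then be plugged into the working covariance Vi at each iteration of the estimating-equation solver.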
We performed a Monte Carlo simulation study to investigate consistency, efficiency and coverage of 95% confidence intervals for estimates obtained by the univariate model, factorization model, latent variable model and GEE. Two different sets of simulations were considered. In the first set, two outcomes associated with a common covariate (exposure) were simulated. Different effect sizes of the covariate on the outcomes were used to simulate no effect, small effect and large effect. Data were generated from a bivariate normal distribution,
with xi generated from a Bernoulli(.5). In the first simulation, the vector of coefficients associated with the covariate was chosen as (βb1, βc1) = (0, 2), representing no effect of x on yb and a small effect (defined as 1/3 of a standard deviation) on yc. For the second simulation, the vector of coefficients was chosen as (βb1, βc1) = (0.2, 2), representing a small effect on both outcomes (1/5 and 1/3 of a standard deviation, respectively). Finally, (βb1, βc1) = (1, 6) represented a large effect (1 standard deviation) of x on both yb and yc.
In the second set of simulations, a different covariate was added to each outcome and data were generated from
with xi generated from a Bernoulli(.5), xbi generated from a N(0,1) and xci generated from a N(0,4). For this set of simulations, the vector of coefficients (βb1, βc1) was chosen as in the first set, combining different situations of no effect, small effect and large effect of the covariate x on the two outcomes.
For each simulation, we used (11) to create the binary variable ybi from the underlying variable. The covariates xi, xbi and xci were chosen so that the simulation would include binary and continuous covariates with some ad hoc distributions. The estimation of the parameters that define the mean structure is expected to be identical across the different models used. Hence, the key parameter for our simulation study is the correlation ρ between the underlying variable and yci, because it is the parameter that should affect the standard errors of the estimates obtained by the different approaches. We thus generated datasets with different levels of correlation (ρ = 0, .3, .6, .9). For each level of correlation we generated 1000 independent samples with 200 subjects each. Note, however, that the correlation between the observed outcomes ybi and yci depends on the covariate values: for xi = 0 (and for xbi = xci = 0), the correlations between ybi and yci corresponding to ρ = 0, .3, .6, .9 are 0, .2, .5, .7, respectively.
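A sketch of this data-generation scheme for the first set of simulations (our own code; reading the effect sizes in the text, yc is given standard deviation 6, the underlying variable for yb unit variance, and ybi is created by thresholding at 0):

```python
import numpy as np

def simulate(n, rho, beta_b1=0.2, beta_c1=2.0, sd_c=6.0, seed=0):
    """Generate one dataset: the underlying variable for yb and yc share
    correlation rho; yb is obtained by thresholding the underlying variable."""
    rng = np.random.default_rng(seed)
    x = rng.binomial(1, 0.5, n)
    # correlated standard-normal errors via a Cholesky factor
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    e = rng.standard_normal((n, 2)) @ L.T
    yb_star = beta_b1 * x + e[:, 0]
    yb = (yb_star > 0).astype(float)
    yc = beta_c1 * x + sd_c * e[:, 1]
    return x, yb, yc
```

For ρ = .6, the correlation between the observed ybi and yci within the xi = 0 group comes out near .5, matching the attenuation described above.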
The data generated from (22) were modeled using the following:
For data generated from (21) similar models were used but without the terms associated with xbi and xci. For the latent variable model, the parameters , and corresponding to the marginal effects were computed and used for comparison with the regression parameters in the other models. The latent variable model (25) is the correct model given our data generation process. Both the univariate and factorization models have the correct structure for the means but not for the covariance (except the univariate models when ρ = 0). The univariate approach (23) assumes the outcomes are independent and the factorization model (24) assumes that the variance of yci | xi, xci, xbi depends on the covariates xi and xbi.
The models (23), (24), and (26) were fitted using PROC NLMIXED from SAS to ensure that the same numerical algorithms were used to maximize the likelihoods. An example of the SAS code to fit the latent variable model is presented in the Appendix. The 95% confidence intervals for the parameter estimates were computed as θ̂ ± 1.96 × SE(θ̂), where θ̂ represents the maximum likelihood estimate of the parameter of interest. The GEE were solved using a program written in PROC IML from SAS. The Nelder-Mead simplex optimization algorithm implemented in PROC IML was used because it was the most successful in converging to the solutions in the simulated datasets. Estimates of the parameters of the covariance matrix were obtained using the probit and linear regressions from (23). The same estimates were used as initial values for the optimization algorithm.
All the settings produced identical point estimates (MLEs) of the parameters, regardless of the model used to fit the data, the effect size of the covariate, or the correlation level between the two outcomes. This indicates that all the models produce consistent estimates of the regression parameters. Coverage of the confidence intervals was also close to the nominal value (95%) in all simulations. The only difference observed between the models was in the standard errors of the estimates for some settings.
Because the MLEs were identical across all models, the differences in the mean square errors (MSE) observed in some settings are mostly due to differences in the standard errors of the estimates. Tables I, II and III present the ratio of the MSE of the multivariate models (factorization model, latent variable model and GEE) to that of the univariate model in different settings, depending on the effect size of the shared covariate and the correlation level between the outcomes.
The results are summarized as follows. For the estimates of the parameters associated with the covariate shared by the two outcomes, the multivariate models produced estimates with MSE identical to the univariate model. The only exceptions were observed for the βb1 estimates when the correlation between the outcomes was large. In this case the multivariate models had lower MSE than the univariate model, with the latent variable model producing the estimates with the lowest MSE. When the true model involved different covariates associated with each outcome, the estimates of the parameters associated with the unshared covariates had lower MSE under the multivariate models if the outcomes were correlated. For example, for a high correlation between the outcomes, the latent variable model produced some estimates with approximately half the MSE of the univariate model.
Example 4.1 illustrates the similar performance of the approaches when the outcomes share the same covariates and the correlation between the outcomes is low. Example 4.2 illustrates a situation similar to Example 4.1 but with a strong correlation between the outcomes. Example 4.3 illustrates how inferences can change with a multivariate approach when the outcomes are associated with different covariates.
Dickey et al.  conducted a prospective observational study of 420 adults with schizophrenia who sought care for a psychiatric crisis. The main study objective was to compare care for patients who were and were not enrolled in managed care. Advocates for those with mental illness worried that patients who had their care managed might receive worse care than those who did not. Two outcomes, one binary (whether the patient was prescribed an atypical anti-psychotic medication) and one continuous (self-reported quality of interpersonal interactions between patient and clinician), were measured for the 197 patients who had their care managed and the 223 patients whose care was not managed. Higher values for the self-reported quality represent higher quality. The mean (SD) age of patients was 40 (8.5) in the managed care group and 41 (7.9) in the not managed care group. Seventy-one percent of the patients in the managed care group received atypical anti-psychotic medication versus 68% in the not managed care group. The mean (SD) self-reported quality of interpersonal interactions between patient and clinician appeared similar: 3.20 (0.67) for the managed care group and 3.21 (0.65) for the not managed group. We used the univariate models (1), the factorization model (2), the latent variable model (13) and the GEE (as described in Section 2.4) to estimate the marginal association of managed care with the outcomes. No other covariates were used in the models. Only patients with complete data were included (n = 394).
The managed care estimates and the corresponding standard errors for patient/clinician relationship and anti-psychotic prescription were identical for all the models considered (Table IV). In this example, the marginal correlation between the two outcomes was low, 0.06. With the multivariate models, it is easy to test simultaneously for an overall effect of managed care exposure on the outcomes, i.e., H0 : βb = βc = 0. This can be accomplished using a likelihood ratio test. The result of this test obtained through the latent variable model was p-value = 0.97, indicating no evidence of a managed care effect on quality of care as measured by the two outcomes.
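The simultaneous test is a standard likelihood ratio comparison of the fitted model with and without the managed care terms; a minimal sketch (the log-likelihood values in the example call are placeholders, not the study's):

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_null, df=2):
    """Likelihood ratio test of H0: beta_b = beta_c = 0 (df = number of
    restricted parameters; 2 for one shared covariate here)."""
    stat = 2.0 * (loglik_full - loglik_null)
    return stat, chi2.sf(stat, df)

# hypothetical log-likelihoods from the full and restricted fits
stat, p = lr_test(-100.0, -101.5)
```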
The data arise from a randomized multi-center clinical trial comparing an experimental treatment (interferon-α) to a corresponding placebo in the treatment of patients with age-related macular degeneration. We focus on the comparison between placebo and the highest dose (6 million units daily) of interferon-α (Z). The full results of this trial have been reported elsewhere . Patients with macular degeneration progressively lose vision. In the trial, a patient's visual acuity was assessed at different time points through their ability to read lines of letters on standardized vision charts. These charts display lines of letters of decreasing size, which the patient must read from top (largest letters) to bottom (smallest letters). Each line with at least four letters correctly read is called one line of vision. The patient's visual acuity is the total number of letters correctly read. The primary endpoint of the trial was a binary outcome defined as the loss of at least three lines of vision at 1 year compared with baseline performance. We also consider the difference between visual acuity at 6 months and at baseline as a secondary endpoint (continuous outcome). We used the univariate models, the factorization model, the latent variable model and the GEE to estimate the marginal effect of interferon-α treatment on visual performance. Treatment was the only covariate included in the models.
A total of 190 patients (87 in the treatment arm and 103 in the placebo arm) completed the study. The correlation between the two outcomes was 0.63. Among patients who received the treatment, 54% lost at least three lines of vision at 1 year versus 38% in the placebo group. The mean (SD) loss of visual acuity at 6 months was 8.4 (11.9) letters for the treatment arm and 5.5 (13.7) for the placebo arm. The results of all approaches were nearly identical despite the high correlation between the outcomes; however, the estimate of the treatment effect for the binary outcome, loss of at least three lines of vision at 1 year, was smaller in the latent variable model (Table V). All models led to the same conclusion regarding the poor performance of interferon-α. The overall effect of treatment (H0 : βb = βc = 0) obtained by the latent model was not statistically significant (p-value = 0.14).
Coronary disease results from lesions of fatty plaque that build up within the arterial wall. These plaque lesions may either rupture, causing a heart attack, or gradually obstruct blood flow, causing angina. Coronary stents are thin expandable metallic tubes that are delivered within the coronary artery by a catheter and are then expanded precisely at the site of an obstructive lesion. Typically up to two primary endpoints (measures of restenosis) are measured after coronary stenting. We use data from one arm of a non-inferiority randomized trial of bare-metal coronary stents. The first endpoint, obtained from all patients, is the incidence of clinically-driven repeat revascularization, denoted the target lesion revascularization (TLR) rate (binary outcome). TLR is designated by a clinical events committee that has access to clinical and angiographic laboratory data. The second endpoint, proportion diameter stenosis (PDS), is the degree of vessel re-narrowing and is quantified by a computer-based system (continuous outcome). The PDS is obtained on a small randomly selected subset of patients. Both TLR and PDS are measured 9 months after coronary stenting. The goal is to estimate restenosis for diabetic patients taking into account potential confounders.
Of the 313 patients, 105 had both PDS and TLR measured and were included in the analysis. The overall rate of TLR was 14% and the mean (SD) PDS was 0.43 (0.17). The correlation between the two outcomes was 0.58. Fourteen patients were diabetic. The overall mean (SD) length of lesion was 12.8 (5.2). Using a univariate approach, only history of diabetes mellitus (diabetes) was significantly associated with the outcomes. In the latent model, lesion length was also associated with TLR but not with PDS (Table VI). Note that inference for diabetic patients is the same as in the univariate approach. If lesion length is included in the equation for the outcome TLR, then both outcomes share the same covariates and the results from the latent model become identical to those of the univariate model (the association of lesion length would not be significant in the latent model). This is in agreement with the simulations, where efficiency gains were realized for estimates of the parameters associated with the 'non-shared' covariates.
We presented different approaches to model correlated binary and continuous outcomes. We proposed a new multivariate latent variable model that overcomes the identifiability problems of Dunson's model and the sensitivity to misspecification of the covariance matrix of Sammel's model. We also implemented a quasi-likelihood approach based on a GEE. Simulation results suggest that the four approaches lead to consistent estimates of the regression parameters. Three findings are noteworthy. First, we demonstrated that if the two outcomes share the same covariates, the results of a multivariate approach are identical to those of a univariate approach that ignores the correlation between the outcomes. Although counterintuitive, this result is consistent with other situations of multivariate data. In the setting of seemingly unrelated regressions with normally distributed outcomes, and for the particular case of a common set of covariates associated with the outcomes, the ordinary least squares estimate is still the best linear unbiased estimator (see, for example, Zellner and Rotnitzky), despite the correlation between the outcomes.
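This property of seemingly unrelated regressions is easy to check numerically: when every equation shares exactly the same design matrix, the stacked GLS estimator collapses algebraically to equation-by-equation OLS, whatever the error correlation. Below is a minimal sketch in Python; the sample size, coefficients, and correlation are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Common design matrix for both equations: intercept + two covariates.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])

# Errors correlated across equations (rho = 0.6); coefficients are illustrative.
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])
E = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)
Y = np.column_stack([X @ np.array([1.0, 2.0, -1.0]) + E[:, 0],
                     X @ np.array([0.5, -1.0, 3.0]) + E[:, 1]])

# Equation-by-equation OLS (the "univariate" analysis).
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]          # shape (3, 2)

# Stacked GLS with known error covariance: Omega = Sigma (x) I_n.
Xs = np.kron(np.eye(2), X)                               # block-diagonal design
ys = Y.T.ravel()                                         # y1 stacked over y2
Om_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
beta_gls = np.linalg.solve(Xs.T @ Om_inv @ Xs, Xs.T @ Om_inv @ ys)

# With identical design matrices, GLS reduces exactly to OLS (Zellner).
print(np.allclose(beta_gls, beta_ols.T.ravel()))         # True
```

Once the design matrices differ across equations, the two estimators no longer coincide, which mirrors the efficiency gains reported below for non-shared covariates.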
Second, we know that for binary outcomes jointly modeled with the same covariates, there is a small gain in efficiency from taking the correlation into account. This gain only occurs if the outcomes are strongly associated. Our result for non-commensurate outcomes is a combination of these two properties. The estimates of the parameters associated with the continuous outcome have the same standard errors as in the univariate approach. The estimates of the parameters associated with the binary outcome show a small gain in efficiency when compared with the univariate approach, but only for high correlation between the outcomes.
Third, the efficiency gain is larger when the outcomes depend on different sets of covariates and when the correlation between the outcomes is higher. This suggests that if one anticipates that different covariates may be associated with the outcomes, the multivariate approach offers some advantages. Fitzmaurice and Laird have previously shown larger gains in efficiency relative to the univariate approach than those shown here. However, the efficiency gains observed by those authors were inflated as a consequence of heteroscedasticity in the data. If data are generated under the factorization model, the variance depends on the covariate. In this case the univariate approach, assuming homoscedasticity, will lead to less efficient estimates due to misspecification of the variance.
The better performance of the latent variable model over the factorization model in our simulation study was expected because the data were generated from the latent model. Nonetheless, the factorization model was sometimes superior and never inferior to the univariate approach. This suggests that misspecifying the correlation between the outcomes will do no worse than assuming independence. In contrast to the factorization approach, the latent variable model presented here is easily extended to several continuous and/or several binary outcomes by including additional latent variables, as long as the outcomes are positively correlated. However, some of the assumptions of the model, such as the distribution of the latent variables, are not easily assessed. In the presence of missing observations in one of the outcomes, the factorization approach either uses only the complete cases or requires the EM algorithm to include all the cases in the analysis. This is not the case with the latent model: if the missing data are missing at random or missing completely at random, this situation is easily accommodated due to the conditional independence of the outcomes given the latent variable. Furthermore, the latent variable model is easily fitted using standard software.
We focused on comparing the univariate and multivariate approaches using common operational characteristics such as MSE and coverage of the confidence intervals. We note that these characteristics may not fully capture the benefits of the multivariate models. Research to understand the advantage of adopting a multivariate model for joint inference of the parameters is an important next step. For example, when the outcomes represent an underlying construct and the primary research question relates to an exposure effect, joint inference may be a key task. Such situations occur in clinical trials with more than one primary endpoint or when there is simultaneous concern with safety and efficacy.
This work was supported by Grant R01-MH54693 (Teixeira-Pinto and Normand) and R01-MH61434 (Normand), both from the National Institute of Mental Health. The schizophrenia managed care data were generously provided through the efforts of Barbara Dickey, Ph.D., Harvard Medical School, Boston, MA; the bare-metal stent data by Laura Mauri, M.D., M.Sc., Harvard Clinical Research Institute, Boston, MA; and the macular degeneration data by Geert Molenberghs, Ph.D., Hasselt University, Belgium.
Letting and we get
This likelihood is identical to the likelihood of the reverse factorization model (5), i.e., the two approaches are different parameterizations of the same model.
The SAS code below illustrates how to use PROC NLMIXED in SAS to fit the latent variable model (13) for a binary outcome (y1) and a continuous outcome (y2) associated with a common covariate (x1).
proc nlmixed data=datasetname;
   /* starting values: sigmab is the variance of the latent variable u,
      sigma2 the residual SD of the continuous outcome */
   parms a1=1 b1=1 a2=1 b2=1 sigmab=1 sigma2=1;
   bounds sigma2>0, sigmab>0;
   /* joint log-likelihood: probit for y1 given u, normal for y2 given u */
   ll = y1*log(PROBNORM(a1+b1*x1+u))
      + (1-y1)*log(PROBNORM(-a1-b1*x1-u))
      - log(sigma2)
      - 0.5*(1/(sigma2**2))*(y2-a2-b2*x1-u*sigma2)**2;
   model y1 ~ general(ll);
   random u ~ normal(0,sigmab) subject=id;
   /* marginal (population-averaged) probit effect of x1 on y1 */
   estimate 'marginal effect of x1' b1/sqrt(1+sigmab);
run;
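For readers without access to SAS, the same marginal likelihood can be maximized in, for example, Python by integrating the latent variable out with Gauss-Hermite quadrature, much as PROC NLMIXED does internally. The sketch below simulates data from a model of the same form as (13) and fits it by maximum likelihood; the sample size, true parameter values, and variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulate from a latent variable model of the form of (13): u is a shared
# latent variable; y1 is probit given u, y2 is normal given u.
rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
u = rng.normal(0.0, np.sqrt(0.8), n)                        # Var(u) = 0.8
y1 = (rng.random(n) < norm.cdf(0.5 + 1.0 * x1 + u)).astype(float)
y2 = 1.0 + 0.5 * x1 + 1.0 * u + rng.normal(0.0, 1.0, n)     # sigma2 = 1

nodes, wts = np.polynomial.hermite.hermgauss(20)            # quadrature rule

def negloglik(theta):
    a1, b1, a2, b2, log_sb, log_s2 = theta
    sb, s2 = np.exp(log_sb), np.exp(log_s2)                 # SD of u, residual SD
    lik = np.zeros(n)
    for z, w in zip(nodes, wts):                            # integrate u out
        uk = np.sqrt(2.0) * sb * z                          # node for u ~ N(0, sb^2)
        p1 = norm.cdf(a1 + b1 * x1 + uk)
        f1 = np.where(y1 == 1.0, p1, 1.0 - p1)
        f2 = norm.pdf(y2, a2 + b2 * x1 + s2 * uk, s2)
        lik += (w / np.sqrt(np.pi)) * f1 * f2
    return -np.sum(np.log(lik + 1e-300))                    # floor avoids log(0)

fit = minimize(negloglik, np.zeros(6), method="BFGS")
a1, b1, a2, b2, log_sb, log_s2 = fit.x
print(round(b1, 2), round(b2, 2))                           # close to the true 1.0, 0.5
```

As in the SAS program, the latent variable enters the continuous outcome scaled by sigma2, and the marginal probit effect of x1 on y1 is recovered as b1 divided by the square root of one plus the latent variance (here, 1 + np.exp(log_sb)**2).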