
Stat Med. Author manuscript; available in PMC 2009 September 21.

Published in final edited form as:

Stat Med. 2003 May 30; 22(10): 1631–1647.

doi: 10.1002/sim.1470

PMCID: PMC2747645

NIHMSID: NIHMS143245


SUMMARY

Random effects (mixed) models are a common class of models for longitudinal data. In these models, the random effects covariance matrix is typically assumed constant across subjects. However, in many situations this matrix may differ by measured covariates. In this paper, we propose an approach to model the random effects covariance matrix by using a special Cholesky decomposition of the matrix. In particular, we will allow the parameters that result from this decomposition to depend on subject-specific covariates and also explore ways to parsimoniously model these parameters. An advantage of this parameterization is that there is no concern about the positive definiteness of the resulting estimator of the covariance matrix. In addition, the parameters resulting from this decomposition have a sensible interpretation. We propose fully Bayesian modelling for which a simple Gibbs sampler can be implemented to sample from the posterior distribution of the parameters. We illustrate these models on data from depression studies and examine the impact of heterogeneity in the covariance matrix on estimation of both fixed and random effects.

1. INTRODUCTION

Random effects (mixed) models are a class of models used frequently to model longitudinal data. They offer many advantages including the ability to handle different observation times across subjects and to allow non-stationary covariance structures. In these models, little time is typically spent on modelling the random effects covariance matrix. In particular, examining whether this matrix is the same for all subjects or whether it differs depending on subject-specific characteristics is often neglected in the modelling. For discrete longitudinal data modelled using generalized linear mixed models, ignoring this heterogeneity can result in biased estimates of the fixed effects [1]. For continuous longitudinal data, which will be our focus here, the standard errors for the fixed and random effects, and consequently, inferences, will be incorrect, the random effects will not be shrunk correctly, and prediction of subject-specific trajectories can be poor. In addition, incorrectly modelling the covariance structure in the presence of missing data can result in biased estimates of fixed effects.

Accounting for heterogeneity in covariance matrices has recently been discussed by several authors. In marginal models, Chiu *et al.* [2] model the covariance matrix using a log matrix parameterization and obtain estimates using estimating equations. In non-linear mixed models, heterogeneous covariance structures are frequently used, but typically with the variance a function of the mean and correlations constant across subjects [3]. Pourahmadi and Daniels [4] develop a class of models they call dynamic conditionally linear mixed models in which the marginal covariance matrix is allowed to vary across individuals, but they consider the random effects covariance matrix to be constant across subjects. In the context of linear mixed models, Lin *et al.* [5] examined heterogeneity in the within-individual variances, and Zhang and Weiss [6] discussed heterogeneity in the random effects covariance matrix but mainly considered models that allow the entire matrix to differ by a multiplicative factor. Little work has been done on modelling the entire random effects covariance matrix.

Here, we propose an approach that allows all the parameters of the random effects covariance matrix to be modelled, not just the variances. Specifically, we will model the parameters that result from a modified Cholesky decomposition of the covariance matrix (this decomposition has also been called a modified Gaussian [7]). This decomposition has been used previously to model the marginal covariance matrix of longitudinal observations on a subject [4, 8–10]. This decomposition results in parameters that can be easily modelled without concern of the resulting estimator being positive definite, have nice interpretations, and allow for relatively easy model fitting (sampling from the posterior distribution). Some discussion of interpretation in the context of modelling the random effects covariance matrix will be given in Section 2.

We reanalyse a data set previously discussed in Pourahmadi and Daniels [4]. The data were from a series of five depression studies conducted in Pittsburgh from 1982 to 1992 [11]. Patients were assigned to one of two active treatments and measured at baseline and then weekly for 16 weeks. Here, we examine the rate of improvement of, and dependence in, weekly depression scores over this 16-week period for the 549 subjects with no missing baseline covariates. Previous work investigated the time to recovery from depression [11] and examined the rate of improvement in depression scores with a different class of models than those proposed here [4].

Preliminary analyses suggested that the random effects covariance matrix was not the same for subjects with different combinations of treatment (two levels: drug and psychotherapy versus psychotherapy only) and initial severity of depression (two levels: high and low). This motivated our modelling framework, in which we will model the parameters resulting from the modified Cholesky decomposition as a function of both drug (treatment) and severity.

In Section 2, we discuss the modified Cholesky decomposition of the covariance matrix, the interpretation of the parameters in this setting, and its use in modelling the random effects covariance matrix. We build models for the random effects covariance matrix on the depression data and discuss our results in Section 3. Conclusions and discussion comprise Section 4.

2. A MODEL FOR THE RANDOM EFFECTS COVARIANCE MATRIX

For each subject *i*, we observe an *n _{i}* × 1 vector of responses, *Y _{i}*, which we model as

$$Y_i = X_i\beta + Z_i b_i + \epsilon_i, \qquad b_i \sim \mathrm{N}(0, \Sigma_i), \qquad \epsilon_i \sim \mathrm{N}(0, \tau^2 I)$$

(1)

where β is a *p* × 1 vector of fixed effects, *X _{i}* is the *n _{i}* × *p* design matrix for the fixed effects, *Z _{i}* is the *n _{i}* × *q* design matrix for the *q* × 1 vector of random effects *b _{i}*, and Σ_{i} is the random effects covariance matrix, which may depend on subject-specific covariates.

We decompose the random effects covariance matrix using the approach described in Pourahmadi [8]. This decomposition results in a set of dependence parameters, generalized autoregressive parameters (GARP) and a set of variance parameters, the innovation variances (IV). These parameters can be understood in the context of the dynamic model described below. First, we regress each random effect *b _{ik}* on its predecessors *b _{i1}*, …, *b _{i,k−1}*, writing

$$b_{ik} = \sum_{j=1}^{k-1} \varphi_{i,kj} b_{ij} + e_{ik}, \qquad k = 1, \ldots, q$$

(2)

The modified Cholesky decomposition of Σ_{i} is defined as ${T}_{i}{\mathrm{\Sigma}}_{i}{T}_{i}^{\prime}={D}_{i}$, where ${D}_{i}=\text{diag}({\sigma}_{i1}^{2},\dots ,{\sigma}_{iq}^{2})$ and *T _{i}* is the unit lower triangular matrix with −ϕ_{i,kj} in the (*k*, *j*)th position for *j* < *k*.
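As a concrete check, the GARP/IV parameters can be computed numerically from a standard Cholesky factor of Σ_{i}; a minimal numpy sketch (the example matrix is arbitrary, not from the paper):

```python
import numpy as np

def garp_iv(Sigma):
    """Modified Cholesky: T @ Sigma @ T.T = D, with T unit lower triangular.
    Returns (T, iv), where iv are the innovation variances (diagonal of D)
    and -T[k, j] for j < k are the GARP phi_{kj}."""
    C = np.linalg.cholesky(Sigma)   # Sigma = C C'
    d = np.diag(C)                  # square roots of the innovation variances
    L = C / d                       # unit lower triangular factor: C = L diag(d)
    T = np.linalg.inv(L)
    return T, d**2

# Round trip: rebuild Sigma from the GARP/IV parameters
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])
T, iv = garp_iv(Sigma)
Tinv = np.linalg.inv(T)
Sigma_back = Tinv @ np.diag(iv) @ Tinv.T
assert np.allclose(Sigma, Sigma_back)
```

Any choice of GARP and positive IV maps back to a valid (positive definite) covariance matrix, which is the unconstrained-parameterization property exploited throughout the paper.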

The GARP parameters characterize the dependence of the random effects and the IV parameters the variance. For example, consider using orthogonal polynomials up to the second order (quadratic) as we use in the example in Section 3. Define *b*_{i1} to be the random effect corresponding to the subject-specific deviation from the overall average, *b*_{i2} to be the random effect corresponding to the deviation from the overall linear trend, and *b*_{i3} to be the random effect corresponding to the deviation from the overall quadratic trend. Then (2) can be written in three parts:

$$\begin{aligned} b_{i1} &= e_{i1} \\ b_{i2} &= \varphi_{i,(2,1)} b_{i1} + e_{i2} \\ b_{i3} &= \varphi_{i,(3,1)} b_{i1} + \varphi_{i,(3,2)} b_{i2} + e_{i3} \end{aligned}$$

(3)

where $\text{var}({e}_{ik})={\sigma}_{ik}^{2}$, *k* = 1, …, 3. The first equation corresponds to the marginal distribution of the random effect corresponding to the deviation from the overall average for subject *i*; the second to the conditional distribution of the linear random effect, given the overall average; and the third to the conditional distribution of the quadratic random effect, given the linear random effect and the overall average random effect. The GARP are clearly modelling the dependence between the different components of the orthogonal polynomial as we add higher order components one at a time, and the IV are modelling their variability as we add more complexity (higher order terms) to the model (${\sigma}_{i3}^{2}$ quantifies how well we can predict the deviation from the overall quadratic trend, given the deviations from the overall linear trend and the overall average).
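The sequential structure in (3) can be verified by simulation: generate the random effects one component at a time and compare their empirical covariance with the matrix implied by the decomposition, $T_i^{-1} D_i T_i^{-\mathrm{T}}$. A numpy sketch with illustrative (made-up) GARP/IV values:

```python
import numpy as np

rng = np.random.default_rng(1)
phi21, phi31, phi32 = 0.5, 0.2, 0.3    # GARP (illustrative values)
iv = np.array([1.0, 0.8, 0.5])         # innovation variances (illustrative)

# Generate the random effects one component at a time, as in (3)
N = 200_000
e = rng.normal(size=(N, 3)) * np.sqrt(iv)
b1 = e[:, 0]
b2 = phi21 * b1 + e[:, 1]
b3 = phi31 * b1 + phi32 * b2 + e[:, 2]
B = np.column_stack([b1, b2, b3])

# The covariance implied by the decomposition: T Sigma T' = D
T = np.array([[1.0, 0.0, 0.0],
              [-phi21, 1.0, 0.0],
              [-phi31, -phi32, 1.0]])
Tinv = np.linalg.inv(T)
Sigma = Tinv @ np.diag(iv) @ Tinv.T
assert np.allclose(np.cov(B, rowvar=False), Sigma, atol=0.05)
```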

We model the GARP/IV parameters using linear and log link functions

$${\varphi}_{i,\mathit{\text{kj}}}={W}_{i,\mathit{\text{kj}}}\gamma $$

(4)

$$\text{log}({\sigma}_{\mathit{\text{ik}}}^{2})={H}_{\mathit{\text{ik}}}\lambda $$

(5)

where *W _{i, kj}* and *H _{ik}* are covariate vectors (for example, indicators for treatment and severity) and γ and λ are the corresponding vectors of regression coefficients.
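To make (4) and (5) concrete, the following numpy sketch builds a subject's Σ_{i} from binary drug and severity covariates under a hypothetical additive specification; the design vectors and coefficient values are illustrative only, not the paper's estimates. By construction the result is symmetric positive definite for any γ and λ.

```python
import numpy as np

def sigma_i(drug, sev, gamma, lam):
    """Random effects covariance for one subject under models (4)-(5), q = 3.
    gamma[m] holds (intercept, drug, sev) coefficients for the m-th GARP,
    ordered (2,1), (3,1), (3,2); lam[k] does the same for the k-th log-IV.
    All coefficient values below are illustrative, not fitted estimates."""
    w = np.array([1.0, drug, sev])         # additive design: intercept + drug + sev
    phi = gamma @ w                        # phi_{21}, phi_{31}, phi_{32}  (eq. 4)
    iv = np.exp(lam @ w)                   # sigma^2_{1..3}, via log link  (eq. 5)
    T = np.array([[1.0, 0.0, 0.0],
                  [-phi[0], 1.0, 0.0],
                  [-phi[1], -phi[2], 1.0]])
    Tinv = np.linalg.inv(T)
    return Tinv @ np.diag(iv) @ Tinv.T     # T Sigma T' = D  =>  Sigma = T^{-1} D T^{-T}

gamma = np.array([[0.5, -0.2, 0.0],
                  [0.3, -0.1, 0.0],
                  [0.2,  0.0, 0.0]])
lam = np.array([[ 1.0, 0.3, 0.5],
                [ 0.0, 0.0, 0.4],
                [-0.5, 0.0, 0.3]])
S = sigma_i(drug=1, sev=1, gamma=gamma, lam=lam)
assert np.allclose(S, S.T) and np.all(np.linalg.eigvalsh(S) > 0)
```

Setting the drug and severity columns of `gamma` to zero while keeping them in `lam` reproduces the structure of the Drug + Sev* model discussed in Section 3, where only the IV depend on severity.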

We use a Gibbs sampler with Metropolis–Hastings steps to sample from the posterior distribution of (β, *b _{i}*, τ^{2}, γ, λ); the prior specifications and full conditional distributions are given in the Appendix.

The GARP/IV parameterization provides parameters that can easily be modelled without the concern of the estimator being positive definite, that have a sensible interpretation, and that allow for simple computations. However, there are other parameterizations that can be considered. The log matrix [16] and the eigenvalue/Givens angle [17] parameterizations also pose no problems in relation to positive definiteness, but the resulting parameters can be difficult to interpret and model fitting (sampling from the posterior distribution) can be challenging. Parameterizing the matrix using the variances and correlations [18] provides easily interpretable parameters, but is impeded by the restrictions on the variances and correlations for the matrix to be positive definite. This can be troublesome when trying to model these parameters parsimoniously and/or as a function of covariates.

3. MODEL BUILDING AND RESULTS

There were a total of 549 subjects in the five depression studies. About 30 per cent (2840) of the possible measurements for the subjects (baseline measure + 16 weeks per subject) were missing, mostly intermittently. This can be seen in Table I, where the sample sizes at each time (out of 549) are given. As in Pourahmadi and Daniels [4], we assume the data are missing at random (MAR). Previous analyses of similar studies and communication with the investigators involved in the study provided support for a MAR assumption. This assumption implies that missingness is explained by measured covariates in the model and/or observed responses on the subject prior to and/or after the missing visit. In addition, some of the missingness was by design as some of the studies only measured patients every other week.

Weekly sample means (sample sizes) of HRSD broken down by treatment status (drug) and severity (high versus low).

The scale used to quantify depression in these studies was the Hamilton Rating Scale for Depression (HRSD), which was measured at baseline and then weekly for 16 weeks; higher scores on this scale indicate more severe depression. For a detailed description of the psychotherapy treatment and the drugs used in these studies, we refer the reader to Thase *et al*. [11]. The sample means in Table I supported a quadratic trend for the HRSD trajectory over time.

The main questions of interest in this analysis were:

- Do patients show more improvement on the combination drug/psychotherapy treatment than on psychotherapy alone?
- How well does the initial severity of the patient predict improvement?
- Is there an interaction between treatment and initial severity as measured by the rate of improvement?

The components of the fixed effects design vector were motivated by these questions and the apparent quadratic trend over time. For the *i*th patient at time *j*, *x _{ij}* is the 14 × 1 design vector, consisting of an intercept, linear and quadratic (orthogonal polynomial) time terms, severity, drug, sex, age, the drug × severity interaction, and the interactions of drug, severity, and drug × severity with the linear and quadratic time terms (see the SAS model statement in the Appendix).

Based on preliminary exploration of the data, we considered models for Σ_{i} which included drug and severity. In particular, we considered the models specified in Table II. The simple model is a standard random effects model. The Drug and Sev models allow the entire random effects covariance matrix to differ by drug and severity, respectively. The Drug × Sev model includes four separate random effects covariance matrices for each combination of the binary covariates drug and severity. These four models could be fit in SAS proc mixed using likelihood methods [20]; see the Appendix for the syntax. The Drug + Sev and Drug + Sev^{*} models are more parsimonious models that involve modelling some or all of the parameters of the modified Cholesky decomposition of the covariance matrix as an additive function of both drug and severity and cannot be fit in SAS.

To select the best models, we use a combination of examining 95 per cent credible intervals for the relevant parameters and computation of an overall statistic of model fit. For the latter, we compute the deviance information criterion (DIC) [19]. The DIC has a form similar to the AIC: a goodness-of-fit term, the deviance evaluated at the posterior mean of the parameters, and a penalty term, two times the effective number of parameters, computed as the mean deviance minus the deviance evaluated at the posterior mean. Thus

$$\text{DIC}=\text{Dev}(\overline{\theta})+2{p}_{D}$$

(6)

where $\overline{\theta}$ is the posterior mean of θ and ${p}_{D}=\overline{\text{Dev}}-\text{Dev}(\overline{\theta})$, where $\overline{\text{Dev}}$ is the posterior mean of the deviance. We define the deviance and use the same parameterization as in Pourahmadi and Daniels [4]

$$\text{Dev}(\theta )=-2\phantom{\rule{thinmathspace}{0ex}}\times \phantom{\rule{thinmathspace}{0ex}}\text{loglik}(\theta |y)$$

where
$\theta =(\beta ,{A}_{i}^{-1}(\gamma ,\lambda ,{\tau}^{2}))$, where
${A}_{i}^{-1}={({\tau}^{2}I+{Z}_{i}{\mathrm{\Sigma}}_{i}(\gamma ,\lambda ){Z}_{i}^{\mathrm{T}})}^{-1}$. Based on the form of the DIC, we prefer models with small values. The DIC has the advantage of being easy to compute using output from a Gibbs sampler. We refer the reader to the paper by Spiegelhalter *et al.* [19] for a rigorous justification of this criterion.
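Given Gibbs output, (6) is straightforward to compute; a minimal sketch (the deviance values are made up for illustration):

```python
import numpy as np

def dic(dev_draws, dev_at_mean):
    """Deviance information criterion, equation (6):
    DIC = Dev(theta_bar) + 2 * p_D, where p_D is the mean of the deviance
    draws minus the deviance evaluated at the posterior mean theta_bar."""
    p_d = np.mean(dev_draws) - dev_at_mean
    return dev_at_mean + 2.0 * p_d, p_d

# Toy example: deviance evaluated at each Gibbs draw, and at the
# posterior mean of the parameters (numbers are illustrative only)
draws = np.array([210.0, 214.0, 208.0, 212.0])
value, p_d = dic(draws, dev_at_mean=205.0)
assert np.isclose(p_d, 6.0)      # mean(draws) = 211, so p_D = 211 - 205
assert np.isclose(value, 217.0)  # 205 + 2 * 6
```

Since both terms come directly from the sampler output, no extra model fitting is needed to compare candidate covariance models.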

Posterior means and 95 per cent credible intervals for (γ, λ) appear in Table III and Table IV. It is clear that drug and severity are important explanatory factors for the covariance structure. In particular, the additive model for drug and severity appears to fit best. This is confirmed by the table of DIC (Table V) and examination of the 95 per cent credible intervals in Table III and Table IV (that is, whether the intervals for the parameters corresponding to the effects of drug and severity cover zero). The model with the smallest DIC was the one in which drug and severity enter the model additively. In addition, from the credible intervals for this model, it appears that the GARP did not depend on severity. We therefore fit a simplified version of the additive model, the model Drug + Sev^{*} given in Table II, in which the GARP depend only on drug; this model had the smallest DIC. Given that ϕ_{i} and ${\sigma}_{i}^{2}$ were each only three-dimensional here, and there was no apparent structure, we did not pursue more parsimonious models.

The effective number of parameters, *p _{D}*, was roughly what we would expect for all the models. For example, for the simple random effects model,

The estimates of the GARP/IV parameters appear in Table III and Table IV. To examine the covariance structure of the random effects, we first focus on the simple model. The log(IV) decrease as we add more complexity to the model (linear, quadratic), but not in a linear fashion. The three GARP (dependence parameters) correspond to the regression parameters from regressing the linear random effect on the overall average random effect (ϕ_{21}) and from regressing the quadratic random effects on the linear and overall average (ϕ_{31} and ϕ_{32}, respectively) (see the sequence of equations in (3)). The posterior mean of ϕ_{21} is positive and the credible interval is above zero which implies a significant positive relationship (as the overall average increases, the linear trend increases (gets less steep since the slopes are negative)). We see a similar relationship between the quadratic trend and the overall average, after adjusting for the linear trend.

For the final model, the marginal variance of the deviation from the overall average significantly increases for an individual on drug and psychotherapy and for individuals with high initial severity. In addition, the conditional variances ${\sigma}_{2}^{2}\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}{\sigma}_{3}^{2}$ significantly increase for those with high initial severity. For the GARP, the effect of being on drug and psychotherapy versus only psychotherapy indicates a marginally significant weaker association between the deviations from the linear trend and the overall average and between the deviations from the quadratic and the overall average when adjusting for the deviation from the linear trend.

Figure 1 contains the posterior mean of the subject-specific trajectories for eight subjects in the study for all the models in Table II. The first four are subjects with small *n _{i}* (three or four weekly observations) and the last four are subjects with large *n _{i}*.

Posterior means and 95 per cent credible intervals for β appear in Table VII for the simple random effects model and the best fitting random effects model. Given the large sample size, *n* = 549, it is not too surprising that there are only small differences in the posterior means. However, the 95 per cent credible intervals for most of the β in the Drug + Sev^{*} model are narrower. Thus, we can see the potential increase in efficiency from properly modelling the covariance structure here. In particular, we observe the decreased width of the credible intervals for the main effects of drug and severity. In addition, age is significant (credible interval does not cover 0) in the final model, but not in the simple random effects model. (The Gibbs sampler was run for enough iterations so that differences in credible intervals are not due to Monte Carlo error.)

In this section, we provide a comparison of the final model chosen by the DIC criterion with the Drug × Sev model which can be fit in SAS proc mixed [20]. The third column of Table VI and Table VII contain the estimated random effects and fixed effects under the Drug × Sev model fit in SAS. In Table VI, there are some differences in the point estimates of the random effects and slight differences in the confidence intervals. For the estimated fixed effects, given in Table VII, the effect of age goes from significant to marginally significant when using the model fit in SAS and severity by drug interaction becomes significant in the SAS model. Clearly, the overall conclusions changed very little, but specific inferences, as mentioned above, were affected.

There was a significant non-linear decrease in depression scores over the 16 week treatment period (Figure 2). We will first address the first question of Section 3.1. Table VIII contains the posterior means for the week 16 score and the change from baseline for the four combinations of drug and severity. The worst group at week 16 (highest HRSD) was observed in the psychotherapy only/high severity arm and the best in the drug/low severity arm and these were significantly different (95 per cent credible intervals for the difference excluded zero). In terms of change from baseline, the largest change was observed in the drug/high severity arm and the smallest change was seen in the psychotherapy only/low severity arm (and these were significantly different). If we focus on the effect of drug among low and high severity patients, for the outcome at week 16, those on drug were slightly better off (lower HRSD) and this difference was significant among high severity patients. For the change from baseline outcome, those on drug and psychotherapy had much larger improvements than those only on psychotherapy within both the high and low severity patients (and these were significant).

Posterior means and 95 per cent credible intervals for depression score at week 16 and the change from baseline for the best model.

The second question of interest stated in Section 3.1 addresses the impact of initial severity of depression. Individuals with more severe depression at the start of the study did worse on average than those with less severe depression. However, they improved at a quicker rate; there was a larger change from baseline (Table VIII), which was significant, and there was a significant interaction with the linear trend (Table VII) representing a steeper slope for the high severity subjects (also, see Figure 2).

There was not a significant interaction between drug and severity on the rate of improvement as the 95 per cent credible intervals for these two interactions covered zero (Table VII); this addresses the third question of interest given in Section 3.1.

4. DISCUSSION

We have proposed a simple way to model the random effects covariance matrix via structure [4] and/or subject-specific covariates. This approach is computationally attractive and provides parameters to model which have a nice interpretation for modelling trajectories over time; in particular, the covariance parameters correspond to characterizing the dependence and variability of the random effects as we increase the complexity of the subject-specific curves. For the depression example, it is clear that this matrix differs by severity and drug and it affects inferences in terms of both the fixed effects (smaller credible intervals) and prediction of subject-specific trajectories (Figure 1). In our example, the random effects covariance matrix only depended on categorical covariates. If all the parameters in the covariance matrix differ on all possible combinations of these categorical covariates, then a separate matrix can be fit for each combination and the models can be fit in SAS or using a ‘standard’ Gibbs sampler for Bayesian random effects models. However, if this is not the case, or there are continuous covariates, the models developed here would be necessary and useful.

Clearly, when the main interest in a longitudinal data analysis is the covariance structure or subject-specific predictions, care must be taken in how the covariance structure is modelled. However, even when these are not of direct interest, taking care is important. For example, when the true random effects matrix is a function of subject-specific covariates, but not modelled that way, the standard errors and confidence intervals for the fixed (and random) effects will be incorrect, which can result in incorrect inferences (for example, a parameter being significantly different from zero). In addition, in cases with unbalanced longitudinal data where the missing data is MAR, incorrectly modelling the covariance structure can even result in biased estimates of the fixed effects [21].

We can compare the DIC of the models in this paper to those in Pourahmadi and Daniels [4]. The best-fitting models in their paper were superior to the best ones here. The reason is that there is considerable non-i.i.d. variability after including the random effects; that is, the residuals in (1), ε_{i}, conditional on the random effects, *b _{i}*, were not independent with constant variance. The models in this paper could be extended to allow a more general form for the covariance of these residuals by replacing the first level covariance matrix τ^{2}*I* with a more general structured matrix.

Other parameterizations can be used to model the random effects covariance matrix. The GARP/IV parameterization has the advantages of computational attractiveness and a logical interpretation of parameters. This parameterization implies a non-linear model on the variances and correlations. For example, in the simple 2 × 2 case, the covariance matrix expressed in terms of the model for the GARP/IV parameters given in (4) and (5) is

$$\left(\begin{array}{cc} \text{exp}({H}_{1}\lambda ) & \text{exp}({H}_{1}\lambda ){W}_{21}\gamma \\ \text{exp}({H}_{1}\lambda ){W}_{21}\gamma & \text{exp}({H}_{2}\lambda )+\text{exp}({H}_{1}\lambda ){({W}_{21}\gamma )}^{2} \end{array}\right)$$
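This expression is easy to verify numerically: build *T* and *D* from arbitrary GARP/IV values and compare $T^{-1} D T^{-\mathrm{T}}$ with the 2 × 2 matrix above (the numerical values below are arbitrary).

```python
import numpy as np

# With sigma_k^2 = exp(H_k lambda) and phi_21 = W_21 gamma,
# Sigma = T^{-1} D T^{-T} should equal
# [[s1, s1*phi], [s1*phi, s2 + s1*phi^2]].
s1, s2, phi = np.exp(0.4), np.exp(-0.3), 0.7
T = np.array([[1.0, 0.0],
              [-phi, 1.0]])
D = np.diag([s1, s2])
Tinv = np.linalg.inv(T)
Sigma = Tinv @ D @ Tinv.T
expected = np.array([[s1, s1 * phi],
                     [s1 * phi, s2 + s1 * phi**2]])
assert np.allclose(Sigma, expected)
```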

One issue with modelling the covariance matrix is choosing the link function; that is, the vector function which transforms the parameters of the original covariance matrix, the variances and covariances, to a new set of parameters on which the regression parameters are linear. For the specific form of the link function in our model, see Pourahmadi [8]. The DIC can be used to choose between various link functions for these models.

Finally, we are exploring extending these models to discrete data (for which bias in the fixed effects is a major issue) and to multivariate longitudinal data.

Contract/grant sponsor: NIH; contract/grant number: CA85295-01A1.

The work of the first author was supported by NIH grant CA85295-01A1. The authors would like to thank Zhongqi Zhang who did some of the initial work on deriving the full conditional distributions for the Gibbs sampling algorithm and Professors Joe Hogan and Hal Stern for helping to clarify the presentation of the example and the model. The authors would also like to thank two referees for comments which improved the manuscript.

APPENDIX

In this Appendix, we provide details on the forms of the prior distributions and full conditional distributions for the parameters used in the Gibbs sampling algorithm. We also provide the code to fit the Drug × Sev model in SAS.

We assume the prior distributions are

$$\lambda \sim \mathrm{N}({\lambda}^{*},{\mathrm{\Omega}}_{\lambda}),\qquad \beta \sim \mathrm{N}({\beta}^{*},{\mathrm{\Omega}}_{\beta}),\qquad \gamma \sim \mathrm{N}({\gamma}^{*},{\mathrm{\Omega}}_{\gamma}),\qquad {\tau}^{2}\sim \text{IG}(a,b)$$

where IG is an inverse gamma prior, parameterized to have mean 1/(*b*(*a* − 1)). In general, we recommend specifying these priors to be non-informative. In our example, we chose λ*, β*, γ* to all be vectors of zeros, with the respective Ω's identity matrices multiplied by a scalar which we set equal to 10 000. Specifying non-informative priors for variance components can be tricky. Here, we specified *a* = 3 and *b* = 9 for the inverse gamma prior on τ^{2}, based on some preliminary exploration of the data. Owing to the large sample size, however, the posterior distribution for τ^{2} was insensitive to this choice.

The full conditional distributions for *b _{i}*, β, τ^{2} and γ are standard (multivariate normal or inverse gamma) and are given by

$$\begin{aligned}
b_i \mid Y, \beta, \gamma, \tau^2, \lambda &\sim \mathrm{N}\left(\left(\frac{Z_i^{\mathrm{T}} Z_i}{\tau^2} + \Sigma_i^{-1}\right)^{-1} \frac{Z_i^{\mathrm{T}}(Y_i - X_i\beta)}{\tau^2},\ \left(\frac{Z_i^{\mathrm{T}} Z_i}{\tau^2} + \Sigma_i^{-1}\right)^{-1}\right) \\
\beta \mid Y, b_i, \gamma, \tau^2, \lambda &\sim \mathrm{N}\left(\left(\frac{\sum_i X_i^{\mathrm{T}} X_i}{\tau^2} + \Omega_\beta^{-1}\right)^{-1}\left(\frac{\sum_i X_i^{\mathrm{T}}(Y_i - Z_i b_i)}{\tau^2} + \Omega_\beta^{-1}\beta^*\right),\ \left(\frac{\sum_i X_i^{\mathrm{T}} X_i}{\tau^2} + \Omega_\beta^{-1}\right)^{-1}\right) \\
\tau^2 \mid Y, \beta, b_i, \gamma, \lambda &\sim \text{IG}\left(\frac{\sum_i n_i}{2} + a,\ \left(2^{-1}\sum_i (Y_i - X_i\beta - Z_i b_i)^{\mathrm{T}}(Y_i - X_i\beta - Z_i b_i) + b^{-1}\right)^{-1}\right) \\
\gamma \mid Y, \beta, b_i, \tau^2, \lambda &\sim \mathrm{N}\left(\left(\sum_i G_i^{\mathrm{T}} D_i^{-1} G_i + \Omega_\gamma^{-1}\right)^{-1}\left(\sum_i G_i^{\mathrm{T}} D_i^{-1} b_i + \Omega_\gamma^{-1}\gamma^*\right),\ \left(\sum_i G_i^{\mathrm{T}} D_i^{-1} G_i + \Omega_\gamma^{-1}\right)^{-1}\right)
\end{aligned}$$

where *G _{i}* is a *q* × dim(γ) matrix whose *k*th row is ${G}_{ik}={\sum}_{j=1}^{k-1}{b}_{ij}{W}_{i,kj}$, so that, combining (2) with (4), ${b}_{i}={G}_{i}\gamma +{e}_{i}$ with ${e}_{i}\sim \mathrm{N}(0,{D}_{i})$.

The full conditional distribution of λ does not have a standard form. It is proportional to the following:

$$\text{exp}\phantom{\rule{thinmathspace}{0ex}}\{-(1/2)\phantom{\rule{thinmathspace}{0ex}}\times \phantom{\rule{thinmathspace}{0ex}}\left[{\displaystyle \sum _{i,k}({H}_{\mathit{\text{ik}}}\lambda +{({b}_{\mathit{\text{ik}}}-{G}_{\mathit{\text{ik}}}\gamma )}^{2}\text{exp}(-{H}_{\mathit{\text{ik}}}\lambda ))+{(\lambda -{\lambda}^{*})}^{\mathrm{T}}{\mathrm{\Omega}}_{\lambda}^{-1}(\lambda -{\lambda}^{*})}\right]\phantom{\rule{thinmathspace}{0ex}}\}$$

To sample from this full conditional, we use a Metropolis–Hastings step with a normal approximation to the full conditional as the candidate distribution. For details, see Daniels and Pourahmadi [10]. In our example, the acceptance rate was very high.
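The normal full conditionals above translate directly into Gibbs updates; as an illustration, a numpy sketch of the draw for *b _{i}* (dimensions and data are toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_b_i(Y_i, X_i, Z_i, beta, tau2, Sigma_i):
    """One Gibbs draw of b_i from its normal full conditional:
    precision Z'Z/tau^2 + Sigma_i^{-1}, mean V Z'(Y_i - X_i beta)/tau^2."""
    prec = Z_i.T @ Z_i / tau2 + np.linalg.inv(Sigma_i)
    V = np.linalg.inv(prec)
    m = V @ (Z_i.T @ (Y_i - X_i @ beta)) / tau2
    return rng.multivariate_normal(m, V)

# Toy data for one subject (values are arbitrary)
n, p, q = 5, 2, 2
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
Y = rng.normal(size=n)
b = draw_b_i(Y, X, Z, beta=np.zeros(p), tau2=1.0, Sigma_i=np.eye(q))
assert b.shape == (q,)
```

The draws for β, τ^{2} and γ follow the same pattern from their conditionals above; only λ requires the Metropolis–Hastings step just described.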

The convergence of the Gibbs sampler was monitored by examining time series plots of the parameters over iteration and the Gelman and Rubin approach of using multiple chains [22]. For all the models fit, convergence to the posterior distribution was quick and the mixing was good.

The SAS code for fitting the Drug × Sev model follows:

```sas
proc mixed data=dep method=reml;
  class subject dsc;
  model y = linear quad sev drug sex age drug*sev drug*linear drug*quad
            sev*linear sev*quad drug*sev*linear drug*sev*quad / s;
  random intercept linear quad / type=un sub=subject grp=dsc;
run;
```

The ‘grp=’ statement allows the random effects covariance matrix to differ by a categorical covariate. Here, ‘dsc’ is a covariate that takes four levels, one for each of the drug by severity combinations.

REFERENCES

1. Heagerty PJ, Kurland BF. Misspecified maximum likelihood estimates and generalised linear mixed models. Biometrika. 2001;88:973–986.

2. Chiu TYM, Leonard T, Tsui K-W. The matrix-logarithmic covariance model. Journal of the American Statistical Association. 1996;91:198–210.

3. Davidian M, Giltinan DM. Nonlinear Models for Repeated Measurement Data. Chapman and Hall; 1995.

4. Pourahmadi M, Daniels MJ. Dynamic conditional linear mixed models for longitudinal data. Biometrics. 2002;58:225–231. [PMC free article] [PubMed]

5. Lin X, Raz J, Harlow SD. Linear mixed models with heterogeneous within-cluster variances. Biometrics. 1997;53:910–923. [PubMed]

6. Zhang F, Weiss RE. Diagnosing explainable heterogeneity of variance in random effects models. Canadian Journal of Statistics. 2000;28:3–18.

7. Bock RD. Multivariate Statistical Methods in Behavioral Research. Vol. 54. New York: McGraw Hill; 1975.

8. Pourahmadi M. Joint mean-covariance models with applications to longitudinal data: unconstrained parameterization. Biometrika. 1999;86:677–690.

9. Pourahmadi M. Maximum likelihood estimation of generalized linear models for multivariate normal covariance matrix. Biometrika. 2000;87:425–435.

10. Daniels MJ, Pourahmadi M. Bayesian analysis of covariance matrices and dynamic models for longitudinal data. Biometrika. 2002;89:553–566.

11. Thase ME, Greenhouse JB, Frank E, Reynolds CF, 3rd, Pilkonis PA, Hurley K, Grochocinski V, Kupfer DJ. Treatment of major depression with psychotherapy or psychotherapy-pharmacotherapy combinations. Archives of General Psychiatry. 1997;54:1009–1015. [PubMed]

12. Shi M, Weiss RE, Taylor JMG. An analysis of paediatric CD4 counts for acquired immune deficiency syndrome using flexible random curves. Applied Statistics. 1996;45:151–163.

13. Rice JA, Wu CO. Nonparametric mixed effects models for unequally sampled noisy curves. Biometrics. 2001;57:253–259. [PubMed]

14. Hedeker D, Gibbons RD. MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors. Computer Methods and Programs in Biomedicine. 1996;49:229–252. [PubMed]

15. Daniels MJ, Kass RE. Shrinkage estimators for covariance matrices. Biometrics. 2001;57:1173–1184. [PMC free article] [PubMed]

16. Leonard T, Hsu JSJ. Bayesian inference for a covariance matrix. Annals of Statistics. 1992;20:1669–1696.

17. Daniels MJ, Kass RE. Nonconjugate Bayesian estimation of covariance matrices and its use in hierarchical models. Journal of the American Statistical Association. 1999;94:1254–1263.

18. Barnard J, McCulloch R, Meng X. A natural strategy for modelling covariance matrices with application to shrinkage. Statistica Sinica. 2000;10:1281–1311.

19. Spiegelhalter DJ, Best NG, Carlin BP, van der Linde A. Bayesian measures of model complexity and fit (with discussion). Journal of the Royal Statistical Society, Series B. 2002;64:583–639.

20. SAS Institute Inc. SAS/STAT Software: Changes and Enhancements through Release 6.12. Cary, NC: SAS Institute Inc.; 1997.

21. Little RJA, Rubin DB. Statistical Analysis With Missing Data. New York: Wiley; 1987.

22. Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences (with discussion). Statistical Science. 1992;7:457–511.
