
Stat Med. Author manuscript; available in PMC 2010 October 12.

Published in final edited form as:

Stat Med. 2009 May 15; 28(11): 1620–1635.

doi: 10.1002/sim.3563. PMCID: PMC2952887

NIHMSID: NIHMS219175


**SUMMARY**

Estimating a multivariate regression model from multiple individual studies is challenging when the individual studies provide only univariate or incomplete multivariate regression information. Samsa *et al.* (*J. Biomed. Biotechnol.* 2005; **2**:113–123) proposed a simple method to combine coefficients from univariate linear regression models into a multivariate linear regression model, a method known as synthesis analysis. However, the validity of this method relies on the normality assumption of the data, and it does not provide variance estimates. In this paper we propose a new synthesis method that improves on the existing synthesis method by eliminating the normality assumption, reducing bias, and allowing for the variance estimation of the estimated parameters.

**1. INTRODUCTION**

Meta-analysis is a statistical technique for amalgamating, summarizing, and reviewing previous quantitative research. A typical meta-analysis summarizes all the research results on one topic and discusses the reliability of this summary. It is based on the condition that each individual study reports the same finding for the same research question. The potential advantage of meta-analysis is the increased sample size and the improved validity of statistical inference. It is difficult, however, to apply meta-analysis methodologies when individual studies provide only partial findings.

In a practical example, meta-analysis could be used to build a comprehensive, multivariate prediction model for the risk of chronic diseases such as coronary heart disease (CHD). A wide range of CHD risk factors have been reported in the literature, but a comprehensive multivariate CHD prediction model has yet to be developed. The Framingham CHD model is widely considered the most comprehensive model, although many well-known CHD risk factors, such as body mass index (BMI), family history of CHD, and C-reactive protein, are not included in the model [1–3].

We propose a new process to solve several of the problems presented above. This novel multivariate meta-analysis modeling method is called synthesis analysis. Using multiple study results reported in the scientific and medical literature, the objective of our synthesis analysis is to estimate the multivariate relations between multiple predictors (*X*s) and an outcome variable (*Y*) from the univariate relation of each *X* with *Y* and the two-way correlations between each pair of *X*s. All the inputs may come from various studies in the literature, while a cross-sectional population survey may provide correlations of all *X*s. We reported the first method of synthesis analysis (the Samsa-Hu-Root or SHR method) in which the partial regression coefficients were calculated using the following matrix equation:

$$B=({R}^{-1}(Bu\#S))/S$$

where *B* is the vector of partial regression coefficients (excluding the intercept, *B*_{0}), *Bu* is the vector of univariate regression coefficients, *R* is the matrix of Pearson correlation coefficients among all independent variables, *S* is the vector of standard deviations of the independent variables, # stands for element-wise multiplication, and / stands for element-wise division. The intercept, *B*_{0}, can be calculated using the resulting multivariate formula, the means of the predictors and outcome, and the newly calculated partial regression coefficient for each predictor.
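As a concrete illustration, the SHR equation is a few lines of linear algebra. The sketch below uses hypothetical slopes, standard deviations, correlations, and means chosen only to show the mechanics, not values from any study:

```python
import numpy as np

# Hypothetical inputs for two predictors (illustrative values only).
Bu = np.array([5.0, 3.0])          # univariate slopes of Y on X1 and on X2
S = np.array([1.0, 2.0])           # standard deviations of X1, X2
R = np.array([[1.0, 0.5],          # correlation matrix of the predictors
              [0.5, 1.0]])

# SHR synthesis: B = (R^{-1}(Bu # S)) / S, with # and / element-wise.
B = (np.linalg.inv(R) @ (Bu * S)) / S

# Intercept from hypothetical means of the outcome and the predictors.
mean_Y, mean_X = 10.0, np.array([1.0, 2.0])
B0 = mean_Y - B @ mean_X
```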

In the present study, we propose an improvement to the existing synthesis analysis. Compared with the previous method, this method has at least two advantages: (1) it includes a method to compute the variances for predicted outcomes and estimated regression coefficients and (2) the estimates of predicted outcomes and regression coefficients can be more robust when the independent variables are not normally distributed.

Our paper is organized as follows. In Section 2, we describe our new method. In Section 3, we report a simulation study on finite-sample performance of the proposed method in comparison with the existing synthesis method. In Section 4, we illustrate the use of the proposed method in a real-life example from the 1999–2000 National Health and Nutritional Examination Survey. Finally, in Section 5, we conclude our paper with a discussion on some extensions.

**2. NEW METHOD FOR SYNTHESIS ANALYSIS**

Suppose that we know the individual relationships between an outcome *Y* and each of *p* risk factors, *X*_{1}, *X*_{2}, …, and *X _{p}*, which are given as follows:

$$E[Y\mid {X}_{i}]={\gamma}_{0}^{i}+{\gamma}_{1}^{i}{X}_{i}$$

(1)

where *i* = 1,2, …, *p*. In addition, we assume that we know the mean relationships between any two pairs among the *p* risk factors:

$$E[{X}_{j}\mid {X}_{i}]={\alpha}_{0}^{ij}+{\alpha}_{1}^{ij}{X}_{i}$$

(2)

where *i*, *j* = 1,2, …, *p* and *i* ≠ *j*.

The goal of synthesis analysis is to determine the multivariate linear regression model between *Y* and the *p* risk factors:

$$E(Y\mid {X}_{1},\dots ,{X}_{p})={\beta}_{0}+\sum _{i=1}^{p}{\beta}_{i}{X}_{i}$$

(3)

Note that the linear regression assumption (1) automatically holds under assumptions (2) and (3).

Taking the conditional expectation of both sides of (3) given *X _{i}*, we obtain the following equation:

$$E(Y\mid {X}_{i}=x)={\beta}_{0}+{\beta}_{1}E({X}_{1}\mid {X}_{i}=x)+\cdots +{\beta}_{i-1}E({X}_{i-1}\mid {X}_{i}=x)+{\beta}_{i}x+\cdots +{\beta}_{p}E({X}_{p}\mid {X}_{i}=x)$$

(4)

for *i* = 1, …, *p*. Combining (1), (2), and (4), we obtain the following result:

$${\gamma}_{0}^{i}+{\gamma}_{1}^{i}x={\beta}_{0}+({\beta}_{1}{\alpha}_{0}^{i1}+\cdots +{\beta}_{i-1}{\alpha}_{0}^{i(i-1)}+{\beta}_{i+1}{\alpha}_{0}^{i(i+1)}+\cdots +{\beta}_{p}{\alpha}_{0}^{ip})+({\beta}_{1}{\alpha}_{1}^{i1}+\cdots +{\beta}_{i-1}{\alpha}_{1}^{i(i-1)}+{\beta}_{i}+{\beta}_{i+1}{\alpha}_{1}^{i(i+1)}+\cdots +{\beta}_{p}{\alpha}_{1}^{ip})x$$

for all *x*, where *i =* 1*,* …, *p*. Therefore, we obtain the following two sets of equations:

$$\begin{array}{l}{\gamma}_{0}^{1}={\beta}_{0}+({\beta}_{2}{\alpha}_{0}^{12}+\cdots +{\beta}_{p}{\alpha}_{0}^{1p})\\ {\gamma}_{0}^{i}={\beta}_{0}+({\beta}_{1}{\alpha}_{0}^{i1}+\cdots +{\beta}_{i-1}{\alpha}_{0}^{i(i-1)}+{\beta}_{i+1}{\alpha}_{0}^{i(i+1)}+\cdots +{\beta}_{p}{\alpha}_{0}^{ip})\end{array}$$

(5)

for *i =* 2, …, *p*; and

$$\begin{array}{l}{\gamma}_{1}^{1}={\beta}_{1}+{\beta}_{2}{\alpha}_{1}^{12}+\cdots +{\beta}_{p}{\alpha}_{1}^{1p}\\ {\gamma}_{1}^{i}={\beta}_{1}{\alpha}_{1}^{i1}+\cdots +{\beta}_{i-1}{\alpha}_{1}^{i(i-1)}+{\beta}_{i}+{\beta}_{i+1}{\alpha}_{1}^{i(i+1)}+\cdots +{\beta}_{p}{\alpha}_{1}^{ip}\end{array}$$

(6)

for *i* = 2, …, *p*.

Let **M** be the *p* × *p* matrix with diagonal elements 1 and (*i*, *j*)th off-diagonal element ${\alpha}_{1}^{ij}$ for *i* ≠ *j*; let **β** = (*β*_{1}, …, *β _{p}*)^{T} and ${\mathbf{\gamma}}_{1}={({\gamma}_{1}^{1},\dots ,{\gamma}_{1}^{p})}^{\text{T}}$. Then the set of equations (6) can be written in matrix form as

$$\mathbf{M}\mathbf{\beta}={\mathbf{\gamma}}_{1}$$

(7)

Using Cramer’s rule, we can easily solve the above *p* simultaneous linear equations. Let us define the following determinants:

$$\begin{array}{l}\mathbf{D}=\left|\begin{array}{ccccc}1& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}& \dots & {\alpha}_{1}^{1p}\\ {\alpha}_{1}^{21}& 1& {\alpha}_{1}^{23}& \dots & {\alpha}_{1}^{2p}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ {\alpha}_{1}^{p1}& {\alpha}_{1}^{p2}& {\alpha}_{1}^{p3}& \dots & 1\end{array}\right|\\ {\mathbf{D}}_{\mathbf{1}}=\left|\begin{array}{ccccc}{\gamma}_{1}^{1}& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}& \dots & {\alpha}_{1}^{1p}\\ {\gamma}_{1}^{2}& 1& {\alpha}_{1}^{23}& \dots & {\alpha}_{1}^{2p}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ {\gamma}_{1}^{p}& {\alpha}_{1}^{p2}& {\alpha}_{1}^{p3}& \dots & 1\end{array}\right|\quad \text{and}\\ {\mathbf{D}}_{\mathbf{p}}=\left|\begin{array}{ccccc}1& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}& \dots & {\gamma}_{1}^{1}\\ {\alpha}_{1}^{21}& 1& {\alpha}_{1}^{23}& \dots & {\gamma}_{1}^{2}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ {\alpha}_{1}^{p1}& {\alpha}_{1}^{p2}& {\alpha}_{1}^{p3}& \dots & {\gamma}_{1}^{p}\end{array}\right|\end{array}$$

Cramer’s rule gives us the following unique solution to the system of equations (7), provided that **D** ≠ 0:

$${\beta}_{k}=\frac{{\mathbf{D}}_{\mathbf{k}}}{\mathbf{D}}$$

(8)

where *k* = 1, …, *p*.
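The Cramer's-rule solution can be checked numerically. The sketch below uses hypothetical slope values for *p* = 3 (illustrative only) and confirms that β_k = **D**_k/**D** agrees with solving **M**β = **γ**_1 directly:

```python
import numpy as np

# Hypothetical slopes for p = 3 (illustrative values only).
# M[i, j] = alpha_1^{(i+1)(j+1)}, the slope of X_{j+1} on X_{i+1}; diagonal 1.
M = np.array([[1.0, 0.5, 0.4],
              [0.5, 1.0, 0.3],
              [0.4, 0.3, 1.0]])
gamma1 = np.array([2.0, 1.5, 1.2])     # univariate slopes of Y on each X_i

D = np.linalg.det(M)
beta = np.empty(3)
for k in range(3):
    Dk = M.copy()
    Dk[:, k] = gamma1                  # replace column k by gamma_1
    beta[k] = np.linalg.det(Dk) / D    # Cramer's rule, equation (8)

beta_solve = np.linalg.solve(M, gamma1)  # direct solution of M beta = gamma_1
```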

After obtaining estimates of the vector of slope parameters, *β*, we can derive an estimate for the intercept parameter, *β*_{0}, using any one of the *p* equations given in (5). Hence, we have the following *p* equations for the unknown intercept parameter *β*_{0}:

$$\begin{array}{l}{\beta}_{0}+0+{\alpha}_{0}^{12}{\beta}_{2}+{\alpha}_{0}^{13}{\beta}_{3}+\cdots +{\alpha}_{0}^{1(p-1)}{\beta}_{p-1}+{\alpha}_{0}^{1p}{\beta}_{p}={\gamma}_{0}^{1}\\ {\beta}_{0}+{\alpha}_{0}^{12}{\beta}_{1}+0+{\alpha}_{0}^{23}{\beta}_{3}+\cdots +{\alpha}_{0}^{2(p-1)}{\beta}_{p-1}+{\alpha}_{0}^{2p}{\beta}_{p}={\gamma}_{0}^{2}\\ \vdots \\ {\beta}_{0}+{\alpha}_{0}^{p1}{\beta}_{1}+{\alpha}_{0}^{p2}{\beta}_{2}+{\alpha}_{0}^{p3}{\beta}_{3}+\cdots +{\alpha}_{0}^{p(p-1)}{\beta}_{p-1}+0={\gamma}_{0}^{p}\end{array}$$

Although there are *p* equations for the parameter *β*_{0}, we show that the solution of *β*_{0} is unique in Appendix A. We give a detailed description of our solution for the two-covariate case in Appendix B, and in Appendix C, we give an explicit formula for our synthesized parameters in cases with three and four covariates.
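As a concrete check of this uniqueness property, the following illustrative simulation (the model, seed, and all numbers are ours, chosen only for demonstration) fits every univariate regression on one simulated data set, solves for the slopes, and confirms that all *p* candidate intercepts coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 3

# Simulate one data set (illustrative model only); when all univariate
# regressions are fit on the same data, the p intercept solutions coincide.
X = rng.multivariate_normal(np.zeros(p), 0.5 * np.eye(p) + 0.5, size=n)
Y = -5.0 + X @ np.array([5.0, 3.0, 1.0]) + rng.standard_normal(n)

def fit_line(x, y):
    """Intercept and slope of the simple regression of y on x."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return y.mean() - slope * x.mean(), slope

gamma0, gamma1 = np.empty(p), np.empty(p)
alpha0, alpha1 = np.zeros((p, p)), np.eye(p)
for i in range(p):
    gamma0[i], gamma1[i] = fit_line(X[:, i], Y)
    for j in range(p):
        if i != j:
            # alpha^{ij}: regression of X_j on X_i, as in equation (2)
            alpha0[i, j], alpha1[i, j] = fit_line(X[:, i], X[:, j])

beta = np.linalg.solve(alpha1, gamma1)      # slopes from M beta = gamma_1
beta0_candidates = gamma0 - alpha0 @ beta   # one intercept per equation in (5)
```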

The variance can be estimated using the delta method by assuming that the univariate parameter estimates ${\gamma}_{0}^{(i)}$ and ${\gamma}_{1}^{(i)}(i=1,\dots ,p)$ from individual univariate linear regression models, given by (1), are independent of each other [4]. Let $\alpha =({\alpha}_{0}^{(ij)},{\alpha}_{1}^{(ij)},i,j=1,\dots ,p)$ and $\gamma =({\gamma}_{0}^{(k)},{\gamma}_{1}^{(k)},k=1,\dots ,p)$.

By the well-known result from simple linear regression, we know:

$${n}^{1/2}[{(\mathbf{\alpha},\mathbf{\gamma})}^{\text{T}}-{({\mathbf{\alpha}}_{0},{\mathbf{\gamma}}_{0})}^{\text{T}}]{\to}_{d}\mathbf{N}(\mathbf{0},\mathbf{\Sigma})$$

where *α*_{0} and *γ*_{0} are the true expected values of *α* and *γ*, and

$$\mathbf{\Sigma}=\left(\begin{array}{cc}{\mathbf{\Sigma}}_{\alpha}& 0\\ 0& {\mathbf{\Sigma}}_{\gamma}\end{array}\right)$$

Here

$${\mathbf{\Sigma}}_{\alpha}=({\sigma}_{{\alpha}_{i}^{kl}{\alpha}_{j}^{{k}^{\prime}{l}^{\prime}}},i,j=0,1;k,l,{k}^{\prime},{l}^{\prime}=1,2,\dots ,p)$$

where ${\sigma}_{{\alpha}_{i}^{kl}{\alpha}_{j}^{{k}^{\prime}{l}^{\prime}}}(i,j=0,1;k,l,{k}^{\prime},{l}^{\prime}=1,2,\dots ,p)$ is the covariance between ${\alpha}_{i}^{(kl)}$ and ${\alpha}_{j}^{({k}^{\prime}{l}^{\prime})}$, and

$${\mathbf{\Sigma}}_{\mathbf{\gamma}}=\left(\begin{array}{cccc}{\sigma}_{{\gamma}_{0}^{1}{\gamma}_{0}^{1}}& 0& \dots & 0\\ 0& {\sigma}_{{\gamma}_{1}^{1}{\gamma}_{1}^{1}}& \dots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \dots & {\sigma}_{{\gamma}_{1}^{p}{\gamma}_{1}^{p}}\end{array}\right)$$

is the covariance matrix of the estimated parameters **γ**.

The synthesized parameter estimates **β** = (*β*_{0}, *β*_{1}, …, *β _{p}*)^{T} can be written as a function of **α** and **γ**:

$$\mathbf{\beta}=\mathbf{g}(\mathbf{\alpha},\mathbf{\gamma})$$

If the function **g** is differentiable, then the delta method gives the asymptotic variance of *β* as follows:

$${\mathbf{\Sigma}}_{\beta}=\nabla \mathbf{g}{(\mathbf{\alpha},\mathbf{\gamma})}^{\text{T}}\mathbf{\Sigma}\nabla \mathbf{g}(\mathbf{\alpha},\mathbf{\gamma})$$

(9)

where ∇**g**(**α**, **γ**) is the matrix of partial derivatives of the function **g** with respect to (**α**, **γ**).
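A minimal sketch of the delta-method computation for the two-covariate case. The closed form in `g_two` follows from solving equations (5) and (6) for *p* = 2; the parameter values and covariance matrix below are hypothetical, and the gradient is approximated by central differences rather than by the explicit formula of Appendix B:

```python
import numpy as np

def numerical_jacobian(g, theta, eps=1e-6):
    """Central-difference Jacobian; row k holds the partials w.r.t. theta[k]."""
    theta = np.asarray(theta, dtype=float)
    m = len(np.asarray(g(theta)))
    J = np.empty((theta.size, m))
    for k in range(theta.size):
        step = np.zeros_like(theta)
        step[k] = eps
        J[k] = (np.asarray(g(theta + step)) - np.asarray(g(theta - step))) / (2 * eps)
    return J

def g_two(theta):
    """Synthesized (b0, b1, b2) for p = 2 from solving equations (5) and (6)."""
    a0_12, a1_12, a0_21, a1_21, g0_1, g1_1, g0_2, g1_2 = theta
    d = 1.0 - a1_12 * a1_21
    b1 = (g1_1 - a1_12 * g1_2) / d
    b2 = (g1_2 - a1_21 * g1_1) / d
    b0 = g0_1 - a0_12 * b2
    return np.array([b0, b1, b2])

# Hypothetical parameter values and covariance (illustrative only); theta is
# ordered (a0_12, a1_12, a0_21, a1_21, g0_1, g1_1, g0_2, g1_2).
theta = np.array([0.2, 0.5, 0.1, 0.4, 1.0, 2.0, 0.8, 1.5])
Sigma = 0.01 * np.eye(8)

# Delta method, equation (9): Sigma_beta = grad g^T Sigma grad g.
J = numerical_jacobian(g_two, theta)
Sigma_beta = J.T @ Sigma @ J
```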

Once the estimates of parameters and their variances have been derived, we can calculate the covariance matrix of predicted values as follows:

$$\text{Cov}(Y\mid X)=\text{Cov}({\mathbf{X}}^{\text{T}}\mathbf{\beta}\mid \mathbf{X})={\mathbf{X}}^{\text{T}}{\mathbf{\Sigma}}_{\beta}\mathbf{X}$$

where **X**^{T} is the transpose of the **X** matrix, and **Σ*** _{β}* is the covariance matrix of **β** given in (9).
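For a single design vector *x* (with a leading 1 for the intercept), the prediction variance is the quadratic form *x*^{T}**Σ**_{β}*x*. A small sketch with a hypothetical **Σ**_{β}:

```python
import numpy as np

# Hypothetical covariance of (beta_0, beta_1, beta_2), e.g. from equation (9).
Sigma_beta = np.array([[0.04, 0.01, 0.00],
                       [0.01, 0.03, 0.01],
                       [0.00, 0.01, 0.02]])
x = np.array([1.0, 65.0, 19.0])    # leading 1 corresponds to the intercept

var_pred = x @ Sigma_beta @ x      # x^T Sigma_beta x
se_pred = np.sqrt(var_pred)        # standard error of the predicted value
```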

The mean-squared error (MSE) of the predicted values is given by

$${\text{MSE}}_{\widehat{Y}}=\frac{{\sum}_{i=1}^{n}{({\widehat{Y}}_{i}-{Y}_{i})}^{2}}{n}$$

where *Ŷ _{i}* and *Y _{i}* are the predicted and observed values for the *i*th subject, respectively. The correlation between the predicted and observed values is given by

$$\rho =\frac{\text{Cov}({\widehat{Y}}_{i},{Y}_{i})}{\sqrt{\text{Var}({\widehat{Y}}_{i})\text{Var}({Y}_{i})}}$$

where Cov(*Ŷ _{i}*, *Y _{i}*) is the covariance between the predicted and observed values, and Var(·) denotes the variance.
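Both prediction criteria can be computed directly; the helper below and its inputs are illustrative only:

```python
import numpy as np

def prediction_metrics(y_hat, y):
    """MSE and Pearson correlation between predicted and observed values."""
    y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
    mse = np.mean((y_hat - y) ** 2)
    rho = np.corrcoef(y_hat, y)[0, 1]
    return mse, rho

# Tiny illustrative example with made-up predictions and observations.
mse, rho = prediction_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```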

**3. SIMULATION STUDY**

We conducted a simulation study to assess the performance of the proposed method in comparison with our previous method [5], denoted by SHR. We simulated data with two, three, and four predictor variables. For simplicity of presentation, we only reported the results for the two-predictor case here, because the results for the three-predictor and four-predictor cases are similar.

In each of these cases, we simulated independent variables from (1) a multivariate normal distribution, (2) a multivariate log-normal distribution, (3) a multivariate exponential distribution, and (4) a multivariate gamma distribution. We chose the variances of all the independent variables to be 1 and correlations for pairs of the independent variables to be 0.5. After simulating the independent variables *X*, we generated the dependent variable *Y* by adding random normal errors to the mean model:

$$Y={\beta}_{0}+\sum _{i=1}^{p}{\beta}_{i}{X}_{i}+\epsilon \phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}(p=2,3,4)$$

(10)

where *ε* is a random error following the standard normal distribution.

We set the true regression parameters as follows: (*β*_{0}, *β*_{1}, *β*_{2}) = (−5, 5, 3) for the two-variable setting, (*β*_{0}, *β*_{1}, *β*_{2}, *β*_{3}) = (−5, 1, 3, 5) for the three-variable setting, and (*β*_{0}, *β*_{1}, *β*_{2}, *β*_{3}, *β*_{4}) = (−5, 5, 4, 3, 1) for the four-variable setting. We divided each data set into ${C}_{2}^{p+1}$ (*p* = 2, 3, 4) subsets with equal sample sizes, where ${C}_{2}^{p+1}$ denotes the number of ways of choosing 2 items from (*p* + 1) items. In the simulated data, each subset contained only one pair of variables chosen from *Y*, *X*_{1}, …, *X _{p}*. The total sample sizes used in the simulation were 300 and 3000 (with equal size for each subset). For each of the above settings, we simulated a total of 1000 data sets. As the results for the data from the skewed log-normal distribution were similar to those from the other skewed distributions, we only reported the results for the normal and log-normal distributions. We reported the mean bias and MSE for the estimated parameters in Tables I and II.
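The data-splitting design above can be sketched as follows for the two-variable setting. This is an illustrative reimplementation, not the authors' code; the seed, sample size, and helper names are arbitrary:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
p, n = 2, 3000
beta_true = np.array([-5.0, 5.0, 3.0])   # (beta_0, beta_1, beta_2) as above

# Normal predictors with unit variances and correlation 0.5, plus model (10).
X = rng.multivariate_normal(np.zeros(p), np.array([[1.0, 0.5], [0.5, 1.0]]), size=n)
Y = beta_true[0] + X @ beta_true[1:] + rng.standard_normal(n)

# C(p+1, 2) disjoint subsets, each supplying exactly one pairwise regression.
data = np.column_stack([Y, X])            # columns: Y, X1, ..., Xp
pairs = list(combinations(range(p + 1), 2))
subsets = np.array_split(rng.permutation(n), len(pairs))

def fit_line(x, y):
    """Intercept and slope of the simple regression of y on x."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return y.mean() - slope * x.mean(), slope

gamma0, gamma1 = np.empty(p), np.empty(p)
alpha0, alpha1 = np.zeros((p, p)), np.eye(p)
for (a, b), idx in zip(pairs, subsets):
    if a == 0:                            # regression of Y on X_b
        gamma0[b - 1], gamma1[b - 1] = fit_line(data[idx, b], data[idx, 0])
    else:                                 # regressions between X_a and X_b
        alpha0[a - 1, b - 1], alpha1[a - 1, b - 1] = fit_line(data[idx, a], data[idx, b])
        alpha0[b - 1, a - 1], alpha1[b - 1, a - 1] = fit_line(data[idx, b], data[idx, a])

beta_hat = np.linalg.solve(alpha1, gamma1)      # synthesized slopes
beta0_hat = gamma0[0] - alpha0[0] @ beta_hat    # intercept from the first equation
```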

Mean bias and MSE of estimated regression parameters with two independent variables following a normal distribution.

Mean bias and MSE of estimated regression parameters with two independent variables following a log-normal distribution.

In order to evaluate the accuracy of predicted values using the new model, we simulated two data sets with equal sample sizes. One was used as the training set for model derivation, while the other was used as the validation data set. To evaluate prediction performance, we reported mean bias, MSE, and the mean of standard error estimates (SEEs) for predicted values in Tables III and IV. The SEEs were derived using the method developed in Sections 2.2 and 2.3. The correlations between predicted and observed values were also reported in the two tables.

Mean bias, MSE, correlation and SE for predicted values with two independent variables following a normal distribution.

Mean bias, MSE, correlation and SE for predicted values with two independent variables following a log-normal distribution.

Simulation results for the regression parameters showed that the mean bias and MSE of the estimated regression parameters using our new method were, in general, better than those using the SHR method, across all of the distributions and sample sizes considered here. The results also indicated that when the distributions of independent variables *X* were heavily skewed (log-normal distribution), the bias and MSE of the estimated regression parameters using both methods were large, especially when sample sizes were small. Nonetheless, the results from our new method were much better than those from the SHR method under this situation.

The results for predicted values indicated that both the new method and the SHR method had similar correlations between observed and predicted values across all sample sizes and distributions. However, mean bias and MSE for predicted values derived from our new method were much smaller than those from the SHR method.

**4. EXAMPLE**

In this section, we analyzed a real-world example and compared the results using our new synthesis method and the SHR method. The data came from the 1999–2000 National Health and Nutritional Examination Survey [6]. There were five variables in this data set: one outcome, *Y*, systolic blood pressure, and four predictors, *X*_{1}, *X*_{2}, *X*_{3}, and *X*_{4}, representing age, body mass index (BMI), serum total cholesterol level, and the natural log of serum triglycerides, respectively. First, we fitted a multivariate regression model to the full data set, which served as the gold standard for this analysis. Next, we randomly divided the data set into five mutually exclusive subsets with approximately equal sample sizes. The first four subsets included the outcome *Y* and each of the four covariates, *X*_{1}, *X*_{2}, *X*_{3}, and *X*_{4}, respectively. The last subset contained all four covariates and was used to derive the pairwise correlations among the covariates. We applied the two synthesis methods to these five subsets to obtain the estimated parameters of the multivariate regression model and reported the results in Table V. For comparison purposes, we also included in Table V the estimated parameters obtained from the gold standard model.

The estimated parameters and their standard errors (SEs) from the gold standard, our new method, and the SHR method are listed in Table V (SEs were not available for the SHR method). From these results, we observed that the new method produced coefficient estimates comparable to those derived using the gold standard. However, the estimates for the intercept and LOGTRIG from the SHR method differed somewhat from those derived using the gold standard method. As an illustration, the predicted value for a 65-year-old subject with a BMI of 19, a serum total cholesterol level of 190, and serum triglycerides of 160 would be 134, 135, and 136 using the gold standard method, the new method, and the SHR method, respectively.

**5. DISCUSSION**

In this paper, we provided several enhancements to the existing SHR synthesis analysis methodology. These improvements allow for more robust estimates of the regression parameters and predicted values when covariates are not normally distributed. Additionally, the new method allows for estimation of the variance of the resulting parameters and predicted outcomes.

Both the previously reported SHR method and our improved method allow for the building of multivariate regression models using univariate regression coefficients and two-way correlation coefficient data that are derived from different data sources. The underlying assumption is that each individual study is representative of the target population. However, the validity of the previously reported SHR synthesis analysis methodology relies on the normality assumption of the data. Although synthesis analysis is related to both meta-analysis and the analysis of missing data, it differs from these two traditional analyses in two important ways. First, while the goal of traditional meta-analysis is to combine multivariate regression models with the same covariates from different studies, the goal of synthesis analysis is to create a multivariate linear regression model from univariate linear regression models on different covariates. Second, although the statistical problem that synthesis analysis addresses may be considered one particular type of missing-data problem, synthesis analysis, unlike a traditional missing-data analysis, does not require individual-level data; it only requires coefficient estimates of the univariate linear regression models between the outcome and each covariate and between any two covariates.

Although the proposed method was developed to synthesize different univariate linear regression models with different covariates into a multivariate linear regression model, it can be easily extended to the setting in which several studies are available for some (or all) of the univariate regression models. In this case, there would be variation among the parameter estimates. For example, if there are five studies available for the linear model *E*(*Y* | *X*_{1}) and six studies for the linear model *E*(*X*_{1} | *X*_{2}), then we would have five sets of estimates for the intercept and slope of the linear model of *Y* on *X*_{1}, denoted by ${\gamma}_{0}^{j1}$ and ${\gamma}_{1}^{j1}$ for *j* = 1, …, 5, and six sets of estimates for the intercept and slope of the linear model of *X*_{1} on *X*_{2}, denoted by ${\alpha}_{0}^{k21}$ and ${\alpha}_{1}^{k21}$ for *k* = 1, …, 6.

In this case, we propose to first combine the results on the same univariate regression model from different studies into one univariate regression model, using the weighted mean of ${\alpha}_{i}^{jk}$ and ${\gamma}_{i}^{j}$ with weights proportional to the study sample sizes; that is,

$${\gamma}_{0}^{1}=\sum _{j=1}^{5}\frac{{N}_{j}}{N}{\gamma}_{0}^{j1},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\gamma}_{1}^{1}=\sum _{j=1}^{5}\frac{{N}_{j}}{N}{\gamma}_{1}^{j1}$$

where *N _{j}* is the sample size of the *j*th study and $N={\sum}_{j=1}^{5}{N}_{j}$ is the total sample size.
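The weighted-mean combination is a one-line computation; the study estimates and sample sizes below are hypothetical:

```python
import numpy as np

# Hypothetical estimates of the same slope from five studies, with sizes N_j.
gamma1_by_study = np.array([6.4, 6.6, 6.5, 6.7, 6.3])
sizes = np.array([120.0, 300.0, 180.0, 250.0, 150.0])

# Weighted mean with weights N_j / N.
gamma1_combined = (sizes / sizes.sum()) @ gamma1_by_study
```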

We performed a simulation study to assess the performance of the modified method in the two-independent-variable case, with one independent variable following a normal distribution and the other following a log-normal distribution. We also compared this modified method with other combining methods, including the mean, median, minimum, and maximum of multiple estimates for the same regression parameter. From these simulation results, we concluded that the parameter estimates using the weighted mean had the smallest bias and MSE, which were very close to the bias and MSE using the gold standard. In addition, the predicted values using the weighted mean had the smallest bias, MSE, and SEE. We give a detailed description of our simulation study and results in Appendix D. The computer software for implementing the proposed method is available at http://faculty.washington.edu/azhou.

We would like to thank Vicki Ding and Hua Chen for their help in preparing this manuscript. Xiao-Hua Zhou, PhD, is presently a Core Investigator and Biostatistics Unit Director at the Northwest HSR&D Center of Excellence, Department of Veterans Affairs Medical Center, Seattle, WA. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs. This study has been partially supported by NSFC 30728019.

Contract/grant sponsor: NSFC; contract/grant number: 30728019

**APPENDIX A**

Here we show that there is a unique solution for the intercept term *β*_{0} with the *p* equations (5); that is, we need to show that the following *p* solutions are equivalent:

$$\begin{array}{l}{\beta}_{0}^{(1)}={\gamma}_{0}^{1}-({\alpha}_{0}^{12}{\beta}_{2}+{\alpha}_{0}^{13}{\beta}_{3}+\cdots +{\alpha}_{0}^{1,p-1}{\beta}_{p-1}+{\alpha}_{0}^{1p}{\beta}_{p})\\ {\beta}_{0}^{(2)}={\gamma}_{0}^{2}-({\alpha}_{0}^{21}{\beta}_{1}+0+{\alpha}_{0}^{23}{\beta}_{3}+\cdots +{\alpha}_{0}^{2,p-1}{\beta}_{p-1}+{\alpha}_{0}^{2p}{\beta}_{p})\\ \vdots \\ {\beta}_{0}^{(p)}={\gamma}_{0}^{p}-({\alpha}_{0}^{p1}{\beta}_{1}+{\alpha}_{0}^{p2}{\beta}_{2}+{\alpha}_{0}^{p3}{\beta}_{3}+\cdots +{\alpha}_{0}^{p,p-1}{\beta}_{p-1}+0)\end{array}$$

Without loss of generality, we only show that the solutions of the first two equations are equal, that is, ${\beta}_{0}^{(1)}={\beta}_{0}^{(2)}$. The proof for the other solutions is similar.

In order to show

$${\gamma}_{0}^{1}-{\alpha}_{0}^{12}{\beta}_{2}-{\alpha}_{0}^{13}{\beta}_{3}-\cdots -{\alpha}_{0}^{1,p-1}{\beta}_{p-1}-{\alpha}_{0}^{1p}{\beta}_{p}={\gamma}_{0}^{2}-{\alpha}_{0}^{21}{\beta}_{1}-{\alpha}_{0}^{23}{\beta}_{3}-\cdots -{\alpha}_{0}^{2,p-1}{\beta}_{p-1}-{\alpha}_{0}^{2p}{\beta}_{p}$$

(A1)

we add *E*(*X*_{1})*β*_{1} + *E*(*X*_{2})*β*_{2} + ⋯ + *E*(*X _{p}*)*β _{p}* to both sides of (A1). The left side of (A1) then becomes

$${\gamma}_{0}^{1}+E({X}_{1}){\beta}_{1}+(E({X}_{2})-{\alpha}_{0}^{12}){\beta}_{2}+\cdots +(E({X}_{p-1})-{\alpha}_{0}^{1,p-1}){\beta}_{p-1}+(E({X}_{p})-{\alpha}_{0}^{1p}){\beta}_{p}$$

(A2)

Because $E({X}_{j}\mid {X}_{i})={\alpha}_{0}^{ij}+{\alpha}_{1}^{ij}{X}_{i}$, we can get the following result:

$$E({X}_{j})=E(E({X}_{j}\mid {X}_{i}))={\alpha}_{0}^{ij}+{\alpha}_{1}^{ij}E({X}_{i})$$

(A3)

Hence, we can replace ( $E({X}_{j})-{\alpha}_{0}^{1j}$) with ${\alpha}_{1}^{1j}E({X}_{1})$ in (A2) and obtain the following result:

$${\gamma}_{0}^{1}+E({X}_{1}){\beta}_{1}+{\alpha}_{1}^{12}{\beta}_{2}E({X}_{1})+\cdots +{\alpha}_{1}^{1,p-1}{\beta}_{p-1}E({X}_{1})+{\alpha}_{1}^{1p}{\beta}_{p}E({X}_{1})={\gamma}_{0}^{1}+({\beta}_{1}+{\alpha}_{1}^{12}{\beta}_{2}+\cdots +{\alpha}_{1}^{1p}{\beta}_{p})E({X}_{1})$$

(A4)

Because *β*_{1}, …, *β _{p}* are the solutions of the set of equations (6), we have

$${\beta}_{1}+{\alpha}_{1}^{12}{\beta}_{2}+\cdots +{\alpha}_{1}^{1p}{\beta}_{p}={\gamma}_{1}^{1}$$

(A5)

Hence, the right side of (A4) becomes ${\gamma}_{0}^{1}+{\gamma}_{1}^{1}E({X}_{1})$, which equals *E*(*Y*) because $E(Y)=E(E(Y\mid {X}_{1}))=E({\gamma}_{0}^{1}+{\gamma}_{1}^{1}{X}_{1})={\gamma}_{0}^{1}+{\gamma}_{1}^{1}E({X}_{1})$.

Similarly, we can show that the right side of (A1) plus *E*(*X*_{1})*β*_{1} + *E*(*X*_{2})*β*_{2} + ⋯ + *E*(*X _{p}*)*β _{p}* also equals *E*(*Y*). Hence, ${\beta}_{0}^{(1)}={\beta}_{0}^{(2)}$.

**APPENDIX B**

When *p* = 2, we also have an explicit formula for the matrix of derivatives of **β** = **g**(**α**, **γ**) with respect to **α** and **γ**, denoted by ∇**g**(**α**, **γ**), for the two-independent-variable case. Here, ∇**g**(**α**, **γ**) is used to calculate the variance of **β** and of the predicted values.

$$\begin{array}{l}\nabla \mathbf{g}(\mathbf{\alpha},\mathbf{\gamma})=\left(\begin{array}{ccc}\frac{\partial {\widehat{\beta}}_{0}}{\partial {\alpha}_{0}^{12}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\alpha}_{0}^{12}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\alpha}_{0}^{12}}\\ \frac{\partial {\widehat{\beta}}_{0}}{\partial {\alpha}_{1}^{12}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\alpha}_{1}^{12}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\alpha}_{1}^{12}}\\ \frac{\partial {\widehat{\beta}}_{0}}{\partial {\alpha}_{0}^{21}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\alpha}_{0}^{21}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\alpha}_{0}^{21}}\\ \frac{\partial {\widehat{\beta}}_{0}}{\partial {\alpha}_{1}^{21}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\alpha}_{1}^{21}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\alpha}_{1}^{21}}\\ \frac{\partial {\widehat{\beta}}_{0}}{\partial {\gamma}_{0}^{1}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\gamma}_{0}^{1}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\gamma}_{0}^{1}}\\ \frac{\partial {\widehat{\beta}}_{0}}{\partial {\gamma}_{1}^{1}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\gamma}_{1}^{1}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\gamma}_{1}^{1}}\\ \frac{\partial {\widehat{\beta}}_{0}}{\partial {\gamma}_{0}^{2}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\gamma}_{0}^{2}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\gamma}_{0}^{2}}\\ \frac{\partial {\widehat{\beta}}_{0}}{\partial {\gamma}_{1}^{2}}& \frac{\partial {\widehat{\beta}}_{1}}{\partial {\gamma}_{1}^{2}}& \frac{\partial {\widehat{\beta}}_{2}}{\partial {\gamma}_{1}^{2}}\end{array}\right)\\ =\left(\begin{array}{ccc}-\frac{{\gamma}_{1}^{2}-{\alpha}_{1}^{21}{\gamma}_{1}^{1}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}& 0& 0\\ -\frac{{\alpha}_{0}^{12}{\alpha}_{1}^{21}({\gamma}_{1}^{2}-{\alpha}_{1}^{21}{\gamma}_{1}^{1})}{{(1-{\alpha}_{1}^{12}{\alpha}_{1}^{21})}^{2}}& -\frac{{\gamma}_{1}^{2}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}+\frac{{\alpha}_{1}^{21}({\gamma}_{1}^{1}-{\alpha}_{1}^{12}{\gamma}_{1}^{2})}{{(1-{\alpha}_{1}^{12}{\alpha}_{1}^{21})}^{2}}& \frac{{\alpha}_{1}^{21}({\gamma}_{1}^{2}-{\alpha}_{1}^{21}{\gamma}_{1}^{1})}{{(1-{\alpha}_{1}^{12}{\alpha}_{1}^{21})}^{2}}\\ 0& 0& 0\\ -{\alpha}_{0}^{12}\left[-\frac{{\gamma}_{1}^{1}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}+\frac{{\alpha}_{1}^{12}({\gamma}_{1}^{2}-{\alpha}_{1}^{21}{\gamma}_{1}^{1})}{{(1-{\alpha}_{1}^{12}{\alpha}_{1}^{21})}^{2}}\right]& \frac{{\alpha}_{1}^{12}({\gamma}_{1}^{1}-{\alpha}_{1}^{12}{\gamma}_{1}^{2})}{{(1-{\alpha}_{1}^{12}{\alpha}_{1}^{21})}^{2}}& -\frac{{\gamma}_{1}^{1}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}+\frac{{\alpha}_{1}^{12}({\gamma}_{1}^{2}-{\alpha}_{1}^{21}{\gamma}_{1}^{1})}{{(1-{\alpha}_{1}^{12}{\alpha}_{1}^{21})}^{2}}\\ 1& 0& 0\\ \frac{{\alpha}_{0}^{12}{\alpha}_{1}^{21}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}& \frac{1}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}& -\frac{{\alpha}_{1}^{21}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}\\ 0& 0& 0\\ -\frac{{\alpha}_{0}^{12}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}& -\frac{{\alpha}_{1}^{12}}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}& \frac{1}{1-{\alpha}_{1}^{12}{\alpha}_{1}^{21}}\end{array}\right)\end{array}$$

**APPENDIX C**

When there are three predictors in the model, **D** and **D*** _{i}* (*i* = 1, 2, 3) in (8) are given as follows:

$$\begin{array}{l}\mathbf{D}=\left|\begin{array}{ccc}1& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}\\ {\alpha}_{1}^{21}& 1& {\alpha}_{1}^{23}\\ {\alpha}_{1}^{31}& {\alpha}_{1}^{32}& 1\end{array}\right|=(1+{\alpha}_{1}^{12}{\alpha}_{1}^{23}{\alpha}_{1}^{31}+{\alpha}_{1}^{13}{\alpha}_{1}^{21}{\alpha}_{1}^{32})-({\alpha}_{1}^{12}{\alpha}_{1}^{21}+{\alpha}_{1}^{13}{\alpha}_{1}^{31}+{\alpha}_{1}^{23}{\alpha}_{1}^{32})\\ {\mathbf{D}}_{1}=\left|\begin{array}{ccc}{\gamma}_{1}^{1}& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}\\ {\gamma}_{1}^{2}& 1& {\alpha}_{1}^{23}\\ {\gamma}_{1}^{3}& {\alpha}_{1}^{32}& 1\end{array}\right|=({\gamma}_{1}^{1}+{\alpha}_{1}^{12}{\alpha}_{1}^{23}{\gamma}_{1}^{3}+{\alpha}_{1}^{13}{\gamma}_{1}^{2}{\alpha}_{1}^{32})-({\alpha}_{1}^{13}{\gamma}_{1}^{3}+{\alpha}_{1}^{12}{\gamma}_{1}^{2}+{\gamma}_{1}^{1}{\alpha}_{1}^{23}{\alpha}_{1}^{32})\\ {\mathbf{D}}_{2}=\left|\begin{array}{ccc}1& {\gamma}_{1}^{1}& {\alpha}_{1}^{13}\\ {\alpha}_{1}^{21}& {\gamma}_{1}^{2}& {\alpha}_{1}^{23}\\ {\alpha}_{1}^{31}& {\gamma}_{1}^{3}& 1\end{array}\right|=({\gamma}_{1}^{2}+{\gamma}_{1}^{1}{\alpha}_{1}^{23}{\alpha}_{1}^{31}+{\alpha}_{1}^{13}{\alpha}_{1}^{21}{\gamma}_{1}^{3})-({\alpha}_{1}^{13}{\gamma}_{1}^{2}{\alpha}_{1}^{31}+{\gamma}_{1}^{1}{\alpha}_{1}^{21}+{\alpha}_{1}^{23}{\gamma}_{1}^{3})\end{array}$$

and

$${\mathbf{D}}_{3}=\left|\begin{array}{ccc}1& {\alpha}_{1}^{12}& {\gamma}_{1}^{1}\\ {\alpha}_{1}^{21}& 1& {\gamma}_{1}^{2}\\ {\alpha}_{1}^{31}& {\alpha}_{1}^{32}& {\gamma}_{1}^{3}\end{array}\right|=({\gamma}_{1}^{3}+{\alpha}_{1}^{12}{\gamma}_{1}^{2}{\alpha}_{1}^{31}+{\gamma}_{1}^{1}{\alpha}_{1}^{21}{\alpha}_{1}^{32})-({\gamma}_{1}^{1}{\alpha}_{1}^{31}+{\alpha}_{1}^{12}{\alpha}_{1}^{21}{\gamma}_{1}^{3}+{\gamma}_{1}^{2}{\alpha}_{1}^{32})$$
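The closed-form expansion of **D** can be checked against a direct determinant computation; the α values below are hypothetical:

```python
import numpy as np

# Hypothetical alpha_1 slopes for three predictors (illustrative values only).
a12, a13, a21, a23, a31, a32 = 0.5, 0.4, 0.5, 0.3, 0.4, 0.3

M = np.array([[1.0, a12, a13],
              [a21, 1.0, a23],
              [a31, a32, 1.0]])

# Closed-form expansion of D given above.
D_formula = (1 + a12 * a23 * a31 + a13 * a21 * a32) \
            - (a12 * a21 + a13 * a31 + a23 * a32)
```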

If there are four predictors in the regression model, **D** and **D**_{i} (*i* = 1, 2, 3, 4) can be expressed as

$$\begin{array}{ll}\mathbf{D}=& \left|\begin{array}{cccc}1& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}& {\alpha}_{1}^{14}\\ {\alpha}_{1}^{21}& 1& {\alpha}_{1}^{23}& {\alpha}_{1}^{24}\\ {\alpha}_{1}^{31}& {\alpha}_{1}^{32}& 1& {\alpha}_{1}^{34}\\ {\alpha}_{1}^{41}& {\alpha}_{1}^{42}& {\alpha}_{1}^{43}& 1\end{array}\right|=[(1+{\alpha}_{1}^{23}{\alpha}_{1}^{34}{\alpha}_{1}^{42}+{\alpha}_{1}^{24}{\alpha}_{1}^{32}{\alpha}_{1}^{43})-({\alpha}_{1}^{23}{\alpha}_{1}^{32}+{\alpha}_{1}^{24}{\alpha}_{1}^{42}+{\alpha}_{1}^{34}{\alpha}_{1}^{43})]\\ & -{\alpha}_{1}^{12}[({\alpha}_{1}^{21}+{\alpha}_{1}^{23}{\alpha}_{1}^{34}{\alpha}_{1}^{41}+{\alpha}_{1}^{24}{\alpha}_{1}^{31}{\alpha}_{1}^{43})-({\alpha}_{1}^{24}{\alpha}_{1}^{41}+{\alpha}_{1}^{23}{\alpha}_{1}^{31}+{\alpha}_{1}^{21}{\alpha}_{1}^{34}{\alpha}_{1}^{43})]\\ & +{\alpha}_{1}^{13}[({\alpha}_{1}^{21}{\alpha}_{1}^{32}+{\alpha}_{1}^{34}{\alpha}_{1}^{41}+{\alpha}_{1}^{24}{\alpha}_{1}^{31}{\alpha}_{1}^{42})-({\alpha}_{1}^{24}{\alpha}_{1}^{32}{\alpha}_{1}^{41}+{\alpha}_{1}^{21}{\alpha}_{1}^{34}{\alpha}_{1}^{42}+{\alpha}_{1}^{31})]\\ & -{\alpha}_{1}^{14}[({\alpha}_{1}^{21}{\alpha}_{1}^{32}{\alpha}_{1}^{43}+{\alpha}_{1}^{41}+{\alpha}_{1}^{23}{\alpha}_{1}^{31}{\alpha}_{1}^{42})-({\alpha}_{1}^{23}{\alpha}_{1}^{32}{\alpha}_{1}^{41}+{\alpha}_{1}^{31}{\alpha}_{1}^{43}+{\alpha}_{1}^{21}{\alpha}_{1}^{42})]\\ {\mathbf{D}}_{1}=& \left|\begin{array}{cccc}{\gamma}_{1}^{1}& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}& {\alpha}_{1}^{14}\\ {\gamma}_{1}^{2}& 1& {\alpha}_{1}^{23}& {\alpha}_{1}^{24}\\ {\gamma}_{1}^{3}& {\alpha}_{1}^{32}& 1& {\alpha}_{1}^{34}\\ {\gamma}_{1}^{4}& {\alpha}_{1}^{42}& {\alpha}_{1}^{43}& 1\end{array}\right|={\gamma}_{1}^{1}[(1+{\alpha}_{1}^{23}{\alpha}_{1}^{34}{\alpha}_{1}^{42}+{\alpha}_{1}^{24}{\alpha}_{1}^{32}{\alpha}_{1}^{43})-({\alpha}_{1}^{23}{\alpha}_{1}^{32}+{\alpha}_{1}^{24}{\alpha}_{1}^{42}+{\alpha}_{1}^{34}{\alpha}_{1}^{43})]\\ & 
-{\alpha}_{1}^{12}[({\gamma}_{1}^{2}+{\alpha}_{1}^{23}{\alpha}_{1}^{34}{\gamma}_{1}^{4}+{\alpha}_{1}^{24}{\gamma}_{1}^{3}{\alpha}_{1}^{43})-({\alpha}_{1}^{24}{\gamma}_{1}^{4}+{\alpha}_{1}^{23}{\gamma}_{1}^{3}+{\alpha}_{1}^{34}{\alpha}_{1}^{43}{\gamma}_{1}^{2})]\\ & +{\alpha}_{1}^{13}[({\gamma}_{1}^{2}{\alpha}_{1}^{32}+{\alpha}_{1}^{34}{\gamma}_{1}^{4}+{\alpha}_{1}^{24}{\gamma}_{1}^{3}{\alpha}_{1}^{42})-({\alpha}_{1}^{24}{\alpha}_{1}^{32}{\gamma}_{1}^{4}+{\gamma}_{1}^{3}+{\alpha}_{1}^{34}{\alpha}_{1}^{42}{\gamma}_{1}^{2})]\\ & -{\alpha}_{1}^{14}[({\gamma}_{1}^{2}{\alpha}_{1}^{32}{\alpha}_{1}^{43}+{\gamma}_{1}^{4}+{\alpha}_{1}^{23}{\gamma}_{1}^{3}{\alpha}_{1}^{42})-({\alpha}_{1}^{23}{\alpha}_{1}^{32}{\gamma}_{1}^{4}+{\alpha}_{1}^{43}{\gamma}_{1}^{3}+{\alpha}_{1}^{42}{\gamma}_{1}^{2})]\\ {\mathbf{D}}_{2}=& \left|\begin{array}{cccc}1& {\gamma}_{1}^{1}& {\alpha}_{1}^{13}& {\alpha}_{1}^{14}\\ {\alpha}_{1}^{21}& {\gamma}_{1}^{2}& {\alpha}_{1}^{23}& {\alpha}_{1}^{24}\\ {\alpha}_{1}^{31}& {\gamma}_{1}^{3}& 1& {\alpha}_{1}^{34}\\ {\alpha}_{1}^{41}& {\gamma}_{1}^{4}& {\alpha}_{1}^{43}& 1\end{array}\right|=[({\gamma}_{1}^{2}+{\alpha}_{1}^{23}{\alpha}_{1}^{34}{\gamma}_{1}^{4}+{\alpha}_{1}^{24}{\gamma}_{1}^{3}{\alpha}_{1}^{43})-({\alpha}_{1}^{24}{\gamma}_{1}^{4}+{\alpha}_{1}^{23}{\gamma}_{1}^{3}+{\alpha}_{1}^{34}{\alpha}_{1}^{43}{\gamma}_{1}^{2})]\\ & -{\gamma}_{1}^{1}[({\alpha}_{1}^{21}+{\alpha}_{1}^{23}{\alpha}_{1}^{34}{\alpha}_{1}^{41}+{\alpha}_{1}^{24}{\alpha}_{1}^{31}{\alpha}_{1}^{43})-({\alpha}_{1}^{24}{\alpha}_{1}^{41}+{\alpha}_{1}^{23}{\alpha}_{1}^{31}+{\alpha}_{1}^{21}{\alpha}_{1}^{34}{\alpha}_{1}^{43})]\\ & +{\alpha}_{1}^{13}[({\alpha}_{1}^{21}{\gamma}_{1}^{3}+{\gamma}_{1}^{2}{\alpha}_{1}^{34}{\alpha}_{1}^{41}+{\alpha}_{1}^{24}{\alpha}_{1}^{31}{\gamma}_{1}^{4})-({\alpha}_{1}^{24}{\gamma}_{1}^{3}{\alpha}_{1}^{41}+{\gamma}_{1}^{2}{\alpha}_{1}^{31}+{\alpha}_{1}^{21}{\alpha}_{1}^{34}{\gamma}_{1}^{4})]\\ & 
-{\alpha}_{1}^{14}[({\alpha}_{1}^{21}{\gamma}_{1}^{3}{\alpha}_{1}^{43}+{\gamma}_{1}^{2}{\alpha}_{1}^{41}+{\alpha}_{1}^{23}{\alpha}_{1}^{31}{\gamma}_{1}^{4})-({\alpha}_{1}^{23}{\gamma}_{1}^{3}{\alpha}_{1}^{41}+{\gamma}_{1}^{2}{\alpha}_{1}^{31}{\alpha}_{1}^{43}+{\alpha}_{1}^{21}{\gamma}_{1}^{4})]\\ {\mathbf{D}}_{3}=& \left|\begin{array}{cccc}1& {\alpha}_{1}^{12}& {\gamma}_{1}^{1}& {\alpha}_{1}^{14}\\ {\alpha}_{1}^{21}& 1& {\gamma}_{1}^{2}& {\alpha}_{1}^{24}\\ {\alpha}_{1}^{31}& {\alpha}_{1}^{32}& {\gamma}_{1}^{3}& {\alpha}_{1}^{34}\\ {\alpha}_{1}^{41}& {\alpha}_{1}^{42}& {\gamma}_{1}^{4}& 1\end{array}\right|=[({\gamma}_{1}^{3}+{\gamma}_{1}^{2}{\alpha}_{1}^{34}{\alpha}_{1}^{42}+{\alpha}_{1}^{24}{\alpha}_{1}^{32}{\gamma}_{1}^{4})-({\alpha}_{1}^{24}{\alpha}_{1}^{42}{\gamma}_{1}^{3}+{\gamma}_{1}^{2}{\alpha}_{1}^{32}+{\alpha}_{1}^{34}{\gamma}_{1}^{4})]\\ & -{\alpha}_{1}^{12}[({\alpha}_{1}^{21}{\gamma}_{1}^{3}+{\gamma}_{1}^{2}{\alpha}_{1}^{34}{\alpha}_{1}^{41}+{\alpha}_{1}^{24}{\alpha}_{1}^{31}{\gamma}_{1}^{4})-({\alpha}_{1}^{24}{\gamma}_{1}^{3}{\alpha}_{1}^{41}+{\gamma}_{1}^{2}{\alpha}_{1}^{31}+{\alpha}_{1}^{21}{\alpha}_{1}^{34}{\gamma}_{1}^{4})]\\ & +{\gamma}_{1}^{1}[({\alpha}_{1}^{21}{\alpha}_{1}^{32}+{\alpha}_{1}^{34}{\alpha}_{1}^{41}+{\alpha}_{1}^{24}{\alpha}_{1}^{31}{\alpha}_{1}^{42})-({\alpha}_{1}^{24}{\alpha}_{1}^{32}{\alpha}_{1}^{41}+{\alpha}_{1}^{31}+{\alpha}_{1}^{21}{\alpha}_{1}^{34}{\alpha}_{1}^{42})]\\ & -{\alpha}_{1}^{14}[({\alpha}_{1}^{21}{\alpha}_{1}^{32}{\gamma}_{1}^{4}+{\gamma}_{1}^{3}{\alpha}_{1}^{41}+{\gamma}_{1}^{2}{\alpha}_{1}^{31}{\alpha}_{1}^{42})-({\gamma}_{1}^{2}{\alpha}_{1}^{32}{\alpha}_{1}^{41}+{\alpha}_{1}^{31}{\gamma}_{1}^{4}+{\alpha}_{1}^{21}{\gamma}_{1}^{3}{\alpha}_{1}^{42})]\end{array}$$

and

$$\begin{array}{l}{\mathbf{D}}_{4}=\left|\begin{array}{cccc}1& {\alpha}_{1}^{12}& {\alpha}_{1}^{13}& {\gamma}_{1}^{1}\\ {\alpha}_{1}^{21}& 1& {\alpha}_{1}^{23}& {\gamma}_{1}^{2}\\ {\alpha}_{1}^{31}& {\alpha}_{1}^{32}& 1& {\gamma}_{1}^{3}\\ {\alpha}_{1}^{41}& {\alpha}_{1}^{42}& {\alpha}_{1}^{43}& {\gamma}_{1}^{4}\end{array}\right|=[({\gamma}_{1}^{4}+{\alpha}_{1}^{23}{\gamma}_{1}^{3}{\alpha}_{1}^{42}+{\gamma}_{1}^{2}{\alpha}_{1}^{32}{\alpha}_{1}^{43})-({\gamma}_{1}^{2}{\alpha}_{1}^{42}+{\alpha}_{1}^{23}{\alpha}_{1}^{32}{\gamma}_{1}^{4}+{\gamma}_{1}^{3}{\alpha}_{1}^{43})]\\ -{\alpha}_{1}^{12}[({\alpha}_{1}^{21}{\gamma}_{1}^{4}+{\alpha}_{1}^{23}{\gamma}_{1}^{3}{\alpha}_{1}^{41}+{\gamma}_{1}^{2}{\alpha}_{1}^{31}{\alpha}_{1}^{43})-({\gamma}_{1}^{2}{\alpha}_{1}^{41}+{\alpha}_{1}^{23}{\alpha}_{1}^{31}{\gamma}_{1}^{4}+{\alpha}_{1}^{21}{\gamma}_{1}^{3}{\alpha}_{1}^{43})]\\ +{\alpha}_{1}^{13}[({\alpha}_{1}^{21}{\alpha}_{1}^{32}{\gamma}_{1}^{4}+{\gamma}_{1}^{3}{\alpha}_{1}^{41}+{\gamma}_{1}^{2}{\alpha}_{1}^{31}{\alpha}_{1}^{42})-({\gamma}_{1}^{2}{\alpha}_{1}^{32}{\alpha}_{1}^{41}+{\alpha}_{1}^{31}{\gamma}_{1}^{4}+{\alpha}_{1}^{21}{\gamma}_{1}^{3}{\alpha}_{1}^{42})]\\ -{\gamma}_{1}^{1}[({\alpha}_{1}^{21}{\alpha}_{1}^{32}{\alpha}_{1}^{43}+{\alpha}_{1}^{41}+{\alpha}_{1}^{23}{\alpha}_{1}^{31}{\alpha}_{1}^{42})-({\alpha}_{1}^{23}{\alpha}_{1}^{32}{\alpha}_{1}^{41}+{\alpha}_{1}^{31}{\alpha}_{1}^{43}+{\alpha}_{1}^{21}{\alpha}_{1}^{42})]\end{array}$$
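The four-predictor expansions follow the same first-row cofactor pattern as the three-predictor case. As an illustration (with arbitrary values, not values from the paper), a sketch confirming that a first-row cofactor expansion of **D** reproduces the determinant:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(-0.3, 0.3, size=(4, 4))
np.fill_diagonal(A, 1.0)  # the diagonal entries of D are 1

def minor(M, i, j):
    """Minor obtained by deleting row i and column j."""
    return np.delete(np.delete(M, i, axis=0), j, axis=1)

# Cofactor expansion along the first row, mirroring the formulas above:
# D = M11 - a12*M12 + a13*M13 - a14*M14.
D = sum((-1) ** j * A[0, j] * np.linalg.det(minor(A, 0, j)) for j in range(4))

assert np.isclose(D, np.linalg.det(A))
```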

We performed a simulation study to assess the performance of the modified method, as described in the discussion section, for the two-independent-variable case when the vector of two covariates follows a bivariate normal distribution or a bivariate log-normal distribution. We also compared this modified method with other combining methods, including the mean, median, minimum, and maximum of multiple estimates of the same regression parameter. For each of the three univariate linear models, *E*(*Y | X*_{1}), *E*(*Y | X*_{2}), and *E*(*X*_{1} *| X*_{2}), estimates were available from five different studies. We set the sample sizes of the five studies for each univariate model to be either equal (1000 or 100) or unequal ((100, 200, 500, 1200, 3000) or (10, 20, 50, 120, 300)). We assessed the performance of the modified synthesis method using the weighted mean, mean, median, minimum, and maximum of the estimates combined from the five studies.
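The combining rules compared in the simulation can be sketched as follows, using hypothetical estimates and standard errors of one regression parameter from five studies (the numbers are illustrative only; the weighted mean here is the usual inverse-variance weighting):

```python
import numpy as np

# Hypothetical estimates of the same slope from five studies, with their
# standard errors (illustrative placeholders, not values from the paper).
est = np.array([0.52, 0.47, 0.55, 0.50, 0.49])
se = np.array([0.10, 0.08, 0.05, 0.04, 0.03])

# Inverse-variance weighted mean, alongside the simpler combining rules.
w = 1.0 / se**2
combined = {
    "weighted_mean": float(np.sum(w * est) / np.sum(w)),
    "mean": float(est.mean()),
    "median": float(np.median(est)),
    "min": float(est.min()),
    "max": float(est.max()),
}
print(combined)
```

The weighted mean leans toward the larger (lower-variance) studies, which is why it is examined separately from the unweighted summaries.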

Since our results on the simulated data from the bivariate normal distribution are similar to those on the simulated data from the bivariate log-normal distribution, we only report the results on the bivariate normal distribution case. Tables DI–DIV show the bias and MSE for each of the regression parameters *β*_{0}, *β*_{1}, *β*_{2} as well as the mean bias, MSE, correlation, and SEE (mean of SE estimates) for the predicted values.

1. Hackam DG, Anand SS. Emerging risk factors for atherosclerotic vascular disease: a critical review of the evidence. Journal of the American Medical Association. 2003;290:932–940.

2. Fruchart-Najib J, Bauge E, Niculescu LS, Pham T, Thomas B, Rommens C, Majd Z, Brewer B, Pennacchio LA, Fruchart JC. Mechanism of triglyceride lowering in mice expressing human apolipoprotein. Biochemical and Biophysical Research Communications. 2004;319:397–404.

3. Vasan RS. Biomarkers of cardiovascular disease: molecular basis and practical considerations. Circulation. 2006;113:2335–2362.

4. Casella G, Berger RL. Statistical Inference. 2nd ed. Thomson Learning; Pacific Grove, CA: 2002.

5. Samsa G, Hu G, Root M. Combining information from multiple data sources to create multivariable risk models: illustration and preliminary assessment of a new method. Journal of Biomedicine and Biotechnology. 2005;2:113–123.

6. National Center for Health Statistics. National Health and Nutrition Examination Survey (NHANES) 1999–2000. Available from: http://www.cdc.gov/nchs/about/major/nhanes/
