The third general approach is to test the significance of the intervening variable effect by dividing the estimate of the intervening variable effect, αβ, by its standard error and comparing this value to a standard normal distribution. There are several variants of the standard error formula, based on different assumptions and on the order of derivatives used in the approximations. These variants are summarized in the bottom part of .

The most commonly used standard error is the approximate formula derived by Sobel (1982) using the multivariate delta method based on a first-order Taylor series approximation:

σ_{αβ} = √(α²σ_{β}² + β²σ_{α}²). (Equation 9)

The intervening variable effect is divided by the standard error in Equation 9, and the resulting ratio is compared to a standard normal distribution to test for significance (*H*_{o}: αβ = 0). This standard error formula is used in covariance structure programs such as EQS (Bentler, 1997) and LISREL (Jöreskog & Sörbom, 1993).
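As a concrete sketch, the Sobel test can be computed as follows; the path estimates and standard errors below are made-up illustrative values, not results from any real dataset:

```python
import math

# Hypothetical path estimates and standard errors (illustrative values only)
alpha, se_alpha = 0.38, 0.10   # X -> I path
beta, se_beta = 0.45, 0.12     # I -> Y path, controlling for X

# First-order delta-method standard error of the product (Sobel, 1982)
se_sobel = math.sqrt(alpha**2 * se_beta**2 + beta**2 * se_alpha**2)

# Test statistic compared to the standard normal (H0: alpha*beta = 0)
z = (alpha * beta) / se_sobel
significant = abs(z) > 1.96    # two-tailed test at the .05 level
```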

The exact standard error, based on a first- and second-order Taylor series approximation (Aroian, 1944) of the product of α and β, is

σ_{αβ} = √(α²σ_{β}² + β²σ_{α}² + σ_{α}²σ_{β}²). (Equation 10)

The intervening variable effect is divided by the standard error in Equation 10, which is then compared to a standard normal distribution to test for significance (*H*_{o}: αβ = 0). Equation 9 excludes the product of the two variances, which is part of the exact standard error in Equation 10, although that term is typically very small.

Goodman (1960; Sampson & Breunig, 1971) derived the unbiased variance of the product of two normal variables, which subtracts the product of the variances, giving

σ_{αβ} = √(α²σ_{β}² + β²σ_{α}² − σ_{α}²σ_{β}²). (Equation 11)

A test of the intervening variable effect can be obtained by dividing αβ by the standard error in Equation 11, which is then compared to a standard normal distribution to test for significance.
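The relation among the three standard errors can be seen by computing all of them for the same estimates; they differ only in how they treat the product of the two variances. The estimates below are assumed, illustrative values:

```python
import math

alpha, se_alpha = 0.38, 0.10   # illustrative values, not from a real dataset
beta, se_beta = 0.45, 0.12

common = alpha**2 * se_beta**2 + beta**2 * se_alpha**2
var_product = se_alpha**2 * se_beta**2       # product of the two variances

se_sobel = math.sqrt(common)                 # Equation 9: term excluded
se_aroian = math.sqrt(common + var_product)  # Equation 10: term added (exact)
se_goodman = math.sqrt(common - var_product) # Equation 11: term subtracted (unbiased)
```

Because the variance-product term is typically very small, the three standard errors, and hence the three tests, are usually close in practice.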

MacKinnon, Lockwood, and Hoffman (1998) showed evidence that the αβ/σ_{αβ} methods to test the significance of the intervening variable effect have low power because the distribution of the product of the regression coefficients α and β is not normal but rather is often asymmetric with high kurtosis. Under the conditions of multivariate normality of *X*, *I*, and *Y*, the two paths represented by α and β are independent (MacKinnon, Warsi, & Dwyer, 1995; Sobel, 1982). On the basis of the statistical theory of the products of random variables (Craig, 1936; Meeker, Cornwell, & Aroian, 1981; Springer & Thompson, 1966), MacKinnon and colleagues (MacKinnon et al., 1998; MacKinnon & Lockwood, 2001) proposed three alternative variants (presented below) that theoretically should be more accurate: (a) the empirical distribution of αβ/σ_{αβ} (*H*_{o}: αβ/σ_{αβ} = 0), (b) the distribution of the product of two standard normal variables, *z*_{α}*z*_{β} (*H*_{o}: *z*_{α}*z*_{β} = 0), and (c) asymmetric confidence limits for the distribution of the product αβ (*H*_{o}: αβ = 0).

In the first variant, MacKinnon et al. (1998) conducted extensive simulations to estimate the empirical sampling distribution of αβ for a wide range of values of α and β. On the basis of these empirical sampling distributions, critical values for different significance levels were determined. These tables of critical values are available at http://www.public.asu.edu/~davidpm/ripl/methods.htm. For example, the empirical critical value is .97 for the .05 significance level, rather than 1.96 for the standard normal test of αβ = 0. We designate this test statistic *z*′ because it uses a different distribution than the normal distribution.
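A minimal sketch of this variant, using the .97 empirical critical value quoted above and assumed path estimates, shows how an effect can be detected by *z*′ yet missed by the normal-theory test:

```python
import math

alpha, se_alpha = 0.20, 0.10   # illustrative values only
beta, se_beta = 0.22, 0.11

se_ab = math.sqrt(alpha**2 * se_beta**2 + beta**2 * se_alpha**2)
z_prime = (alpha * beta) / se_ab

sig_empirical = abs(z_prime) > 0.97   # z' test, empirical .05 critical value
sig_normal = abs(z_prime) > 1.96      # conventional normal-theory test
```

Here the statistic falls between the two critical values, so the empirical test rejects the null hypothesis while the normal-theory test does not.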

A second variant of the test of the intervening variable effect involves the distribution of the product of two *z* statistics: one for the α parameter, *z*_{α} = α/σ_{α}, and another for the β parameter, *z*_{β} = β/σ_{β}. If α and β are assumed to be normal, the *z*_{α}*z*_{β} term can be directly tested for significance using critical values based on the theoretical distribution of the product of two normal random variables, *P* = *z*_{α}*z*_{β}. This test involves converting both the α and the β paths to *z* scores, multiplying the *z*s, and using a critical value based on the distribution of the product of random variables, *P* = *z*_{α}*z*_{β}, from Craig (1936; see also Meeker et al., 1981; Springer & Thompson, 1966) to determine significance. For example, the critical value to test αβ = 0 at the .05 significance level for the *P* = *z*_{α}*z*_{β} distribution is 2.18, rather than 1.96 for the normal distribution.
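A sketch of this second variant, again with assumed estimates: each path is converted to a *z* score, the *z*s are multiplied, and the product is compared to the 2.18 critical value given above:

```python
alpha, se_alpha = 0.30, 0.10   # illustrative values only
beta, se_beta = 0.33, 0.11

z_a = alpha / se_alpha   # z statistic for the alpha path
z_b = beta / se_beta     # z statistic for the beta path
p_stat = z_a * z_b       # P = z_a * z_b

significant = abs(p_stat) > 2.18   # critical value for the product distribution
```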

A third variant constructs asymmetric confidence limits to accommodate the nonnormal distribution of the intervening variable effect, based on the distribution of the product of random variables. Again, two *z* statistics are computed, *z*_{α} = α/σ_{α} and *z*_{β} = β/σ_{β}. These values are then used to find critical values for the product of two random variables from the tables in Meeker et al. (1981), yielding lower and upper significance levels. Those values are used to compute lower and upper confidence limits using the formula CL = αβ ± (critical value)σ_{αβ}. If the confidence interval does not include zero, the intervening variable effect is significant.
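This third variant can be sketched as follows. The lower and upper critical values differ because the product distribution is asymmetric; the specific critical values below are hypothetical stand-ins for entries one would look up in the Meeker et al. (1981) tables, and the estimates are assumed for illustration:

```python
import math

alpha, se_alpha = 0.38, 0.10   # illustrative values only
beta, se_beta = 0.45, 0.12

se_ab = math.sqrt(alpha**2 * se_beta**2 + beta**2 * se_alpha**2)

# Hypothetical table-based critical values (asymmetric around zero)
crit_lower, crit_upper = -2.42, 1.70

lower_cl = alpha * beta + crit_lower * se_ab
upper_cl = alpha * beta + crit_upper * se_ab

# Significant if the asymmetric confidence interval excludes zero
significant = not (lower_cl <= 0.0 <= upper_cl)
```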

Bobko and Rieck (1980) examined intervening variable effects in path analysis using regression coefficients from the analysis of standardized variables (*H*_{o}: α_{σ}β_{σ} = 0, where α_{σ} and β_{σ} are from the regression analysis of standardized variables). These researchers used the multivariate delta method to find an estimate of the variance of the intervening variable effect for standardized variables, based on the product of the correlation between *X* and *I* and the partial regression coefficient relating *I* and *Y*, controlling for *X*. The function of the product of these terms is

The partial derivatives of this function given in Bobko and Rieck (1980) are

The variance–covariance matrix of the correlation coefficients is pre- and postmultiplied by the vector of partial derivatives to calculate a standard error that can be used to test the significance of the intervening variable effect.
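The quadratic-form computation in this step can be sketched with placeholder numbers; both the gradient vector and the covariance matrix below are hypothetical, standing in for the vector of partial derivatives and the variance–covariance matrix of the correlation estimates:

```python
import numpy as np

# Hypothetical gradient (partial derivatives) and covariance matrix of the
# correlation estimates; illustrative numbers only
d = np.array([0.40, 0.25, -0.10])
Sigma = np.array([[0.004, 0.001, 0.000],
                  [0.001, 0.005, 0.001],
                  [0.000, 0.001, 0.006]])

# Delta-method variance: premultiply and postmultiply Sigma by the gradient
var_effect = d @ Sigma @ d
se_effect = float(np.sqrt(var_effect))
```

The resulting standard error is then used exactly as in the other product-of-coefficients tests: divide the estimated effect by `se_effect` and compare to a reference distribution.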

The product of coefficients methods provide estimates of the intervening variable effect and of its standard error. In addition, the underlying model follows directly from path analysis, in which the intervening variable effect is the product of coefficients hypothesized to measure causal relations. This logic extends directly to models incorporating multiple intervening variables (Bollen, 1987). However, as presented below, two problems arise in conducting these tests. First, the sampling distribution of these tests does not follow the normal distribution, as is typically assumed. Second, the form of the null hypothesis being tested is complex.