J Pediatr Nurs. Author manuscript; available in PMC 2010 July 19.
Published in final edited form as:
J Pediatr Nurs. 1994 December; 9(6): 409–413.
PMCID: PMC2905797
NIHMSID: NIHMS211820

A Consumer’s Guide to Causal Modeling: Part II

Causal models schematically represent a system of hypotheses that describe the expected causal structure of the phenomenon of interest (Long, 1983). A discussion of the conceptual basis for causal modeling and of methods for building the model was the focus of part I of this two-part series. Part II focuses on terms commonly used in causal modeling and on the evaluation of reports of model testing. An example from the author’s research (Youngblut, 1992) is provided to illustrate the concepts discussed.

There are several terms (Table 1) unique to causal modeling. An understanding of these terms is necessary for the consumer of nursing research to read and understand reports of research that uses causal modeling techniques. Latent variables are unobserved variables, the theoretical concepts or constructs about which we are interested in making inferences. Empirical indicators are the observed variables, the measurements actually obtained in the study. In drawings of a causal model, the latent variables are represented by circles and the empirical indicators by boxes. For example, in Figure 1, Maternal Employment and Family Functioning are latent variables. The empirical indicators for Maternal Employment are Hours Employed per Week and Months Employed Since Child’s Birth, and the indicators for Family Functioning are Dyadic Adjustment and the Feetham Family Functioning Survey. The arrows linking the indicators with the latent variables are depicted with the arrowhead pointing to the indicator, because responses on the indicators are considered to be “caused” by the underlying concept, similar to the factor analysis model.

Figure 1. Causal model with empirical indicators. a, coefficient for strength of tie between empirical indicator and latent variable; b, path coefficient for strength of effect of one latent variable on another; e, unexplained variance (error in measurement) in ...

Table 1. Glossary of causal modeling terms.

Coefficients, represented as a in Figure 1, are calculated to describe the strength of the links between latent variables and their empirical indicators. Every measured variable, including an empirical indicator, reflects both the subject’s “true” value for the concept and an amount of error (Nunnally, 1978). Whereas other types of analyses assume that observed variables are measured without error, causal models allow for imperfect measurement of the empirical indicators and account for this measurement error in the mathematical calculations. Thus, each indicator is represented as having an error term, denoted as e in Figure 1. The arrow linking the indicator and its error points toward the indicator.
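
In equation form, the tie between an indicator and its latent variable can be sketched as follows. This is a minimal illustration in standard structural equation notation, not taken from the original article; the symbols a and e follow the labeling in Figure 1, and ξ stands for the latent variable.

```latex
% Measurement model for one latent variable (xi) with two indicators,
% e.g., Maternal Employment measured by Hours Employed per Week (x_1)
% and Months Employed Since Child's Birth (x_2).
\begin{align}
  x_1 &= a_1 \xi + e_1 \\
  x_2 &= a_2 \xi + e_2
\end{align}
```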

Each arrow linking two latent variables is a hypothesis that is tested by causal modeling analysis techniques. The strength of this connection, or path, is represented by a path coefficient, denoted as b in Figure 1. A path coefficient is similar to a regression coefficient in that it represents the amount of change in the dependent variable that occurs with a unit change in the independent variable. It is important to note that any path omitted from the model represents the hypothesis that there is no relationship between the two latent variables. Causal models traditionally have been drawn so that the causal flow goes from the upper left corner to the lower right corner.
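
One stage of the structural part of the model therefore has the same form as a regression equation. The sketch below is illustrative only; b follows the path-coefficient labeling in Figure 1, η stands for an endogenous latent variable, ξ1 and ξ2 for its predictors, and E for its unexplained variance.

```latex
% One stage of the structural model: an endogenous latent variable (eta)
% regressed on two predictor latent variables (xi_1, xi_2).
\begin{equation}
  \eta = b_1 \xi_1 + b_2 \xi_2 + E
\end{equation}
% A one-unit change in xi_1 is associated with a change of b_1 units in eta,
% holding xi_2 constant; omitting a path fixes its b at zero, which is the
% hypothesis of no direct effect.
```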

In other analyses, variables are identified as either independent (predictor) or dependent (outcome). However, these terms are not descriptive in causal modeling, because most models have more than one stage. For example, in Figure 1, Mother-Child Interaction regressed on Maternal Employment and Consistency Between Employment Attitudes and Employment Status represents one stage, whereas Child Development regressed on Family Functioning, Mother-Child Interaction, and Mother’s Education represents a second stage of the model. Thus, some variables are dependent variables in one stage but independent variables in another stage.

To remedy this problem, the terms used to describe a variable’s function in a causal model are exogenous and endogenous. Exogenous variables are the latent variables whose causes are not represented by the model. In other words, exogenous variables only have arrows that lead away from them, not towards them. In Figure 1, Maternal Employment, Consistency, and Mother’s Education are exogenous variables. Endogenous variables, on the other hand, are latent variables whose causes are represented within the model. These variables have arrows that point toward them and possibly away from them. Family Functioning, Mother-Child Interaction, and Child Development are endogenous variables. Note that Family Functioning and Mother-Child Interaction act as both independent and dependent variables. Endogenous variables usually are not completely explained by the independent variables. Therefore, each endogenous variable has an error term that indicates the amount of variance that is not explained by the predictors. The arrows that connect these error terms to the endogenous variables are pointed toward the latent variable. The error terms for the endogenous variables are represented as E in Figure 1.

Finally, models are either recursive or nonrecursive. Recursive models allow the causal flow to be in one direction only; there are no feedback loops in recursive models. Nonrecursive models, on the other hand, allow for reciprocal causality and feedback loops. These loops often are found among the latent variables but may occur in some instances when the researcher allows the error term of an endogenous variable to be correlated with the error term of another variable in the model. Recursive models are considerably easier to analyze; nonrecursive models require additional assumptions that may be difficult to defend in many cases. If the researcher believes that feedback loops exist, it may be better to design a longitudinal study and represent nonrecursive effects with paths connecting variables measured at an earlier time point with variables measured at a later time point.
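
These structural distinctions can be checked mechanically from the list of hypothesized paths: a latent variable with no incoming arrows is exogenous, and a model whose path diagram contains no cycles is recursive. The sketch below uses a hypothetical set of paths loosely patterned on Figure 1; the exact arrows are an assumption for illustration only.

```python
from collections import defaultdict

# Hypothetical directed paths (cause -> effect) loosely based on Figure 1;
# the exact set of arrows is assumed for illustration, not taken from the article.
paths = [
    ("Maternal Employment", "Family Functioning"),
    ("Consistency", "Family Functioning"),
    ("Maternal Employment", "Mother-Child Interaction"),
    ("Consistency", "Mother-Child Interaction"),
    ("Family Functioning", "Child Development"),
    ("Mother-Child Interaction", "Child Development"),
    ("Mother's Education", "Child Development"),
]

variables = {v for pair in paths for v in pair}
incoming = defaultdict(set)
outgoing = defaultdict(set)
for cause, effect in paths:
    outgoing[cause].add(effect)
    incoming[effect].add(cause)

# Exogenous: no arrows point toward the variable; endogenous: at least one does.
exogenous = sorted(v for v in variables if not incoming[v])
endogenous = sorted(v for v in variables if incoming[v])
print("Exogenous:", exogenous)
print("Endogenous:", endogenous)

# Recursive model: the directed graph of paths contains no feedback loops (cycles).
def has_cycle(nodes, edges):
    state = {v: 0 for v in nodes}   # 0 = unvisited, 1 = on current path, 2 = done
    def visit(v):
        if state[v] == 1:
            return True             # returned to a node on the current path -> cycle
        if state[v] == 2:
            return False
        state[v] = 1
        if any(visit(w) for w in edges[v]):
            return True
        state[v] = 2
        return False
    return any(visit(v) for v in nodes if state[v] == 0)

print("Recursive (no feedback loops):", not has_cycle(variables, outgoing))
```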

SO WHAT DOES THIS MEAN?

There are several key points to consider when evaluating published reports of model testing. First, the reader evaluates the model in a global way by considering the theoretical soundness of the hypothesized relationships, adequacy of the sample for the analysis, and the fit of the model to the data. Theoretical soundness is evaluated in causal modeling as it is with any research study. The reader considers whether the relationships are plausible and whether there are variables or paths that should be added or dropped.

Evaluation of the adequacy of the sample size is based on the number of free parameters to be estimated by the statistical program. A parameter is a characteristic of the population (Sullivan & Feldman, 1979), such as a mean, a correlation, or a regression coefficient. Because the value for the population is very rarely known, sample data are used to estimate the population parameter. In causal modeling, parameters can be either “fixed” or “free.” If the researcher sets a parameter to a specific value (often zero), the parameter is said to be “fixed.” If the researcher does not set the parameter, it is said to be “free,” because it is free to vary in accordance with the sample data. If the authors do not state the number of free parameters, the reader can count them in the following manner.

Each empirical indicator (observed variable) has two free parameters associated with it. One is the coefficient on the arrow that links the empirical indicator to its latent variable (represented by a in Figure 1), and the other is the error term (represented by e). An exception would be a single empirical indicator used to represent the latent variable. In Figure 1, Years of Education is a single indicator of the latent variable Mother’s Education. Generally, the parameters for the single indicator are set by the researcher and thus are not estimated by the program. There are 13 indicators in Figure 1; one (Years of Education) is a single indicator, so there are 24 free parameters [(13 – 1) × 2] associated with the empirical indicators.

Next, count the number of arrows linking the latent variables (represented by b in Figure 1). In the example, there are seven arrows among the latent variables. The example does not contain any correlations between latent or observed variables. These correlations are generally represented by a curved two-headed arrow, and each correlation counts as one parameter. Finally, each endogenous variable has an error term (represented as E in Figure 1) associated with it. In the example, there are three endogenous variables, representing three more parameters. Thus, the model in Figure 1 has 24 + 7 + 3 = 34 parameters to be estimated. According to Bentler and Chou (1988), a minimum of 5 to 10 cases per parameter is necessary for model testing. For the example, 5 × 34 or 170 cases is considered the minimum sample size necessary to test the model. If the number of cases divided by the number of free parameters yields a number that is much smaller than 5, the results are unstable and thus less likely to hold up to future testing with a different sample.
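
The counting rule can be written out as simple arithmetic. The short sketch below (in Python, purely for illustration) reproduces the Figure 1 tally and the Bentler and Chou (1988) rule of thumb of 5 to 10 cases per free parameter; the sample size at the end is hypothetical.

```python
# Free-parameter count for the Figure 1 example.
indicators = 13            # observed variables (boxes)
single_indicators = 1      # Years of Education; its parameters are fixed, not estimated
paths = 7                  # arrows between latent variables (b)
correlations = 0           # curved two-headed arrows (none in the example)
endogenous_errors = 3      # one error term (E) per endogenous variable

free_parameters = (indicators - single_indicators) * 2 + paths + correlations + endogenous_errors
print(free_parameters)     # (13 - 1) * 2 + 7 + 0 + 3 = 34

# Bentler & Chou (1988): at least 5 to 10 cases per free parameter.
minimum_n = 5 * free_parameters
print(minimum_n)           # 170

# A rough adequacy check for a planned or reported sample size (hypothetical n).
n = 225
print(n / free_parameters) # ratios well below 5 suggest unstable estimates
```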

Whereas in most analyses statistical significance is evaluated before differences or relationships are interpreted, in causal modeling the fit of the model to the data is evaluated first. Both LISREL (Joreskog & Sorbom, 1986) and EQS (Bentler, 1989) provide a χ2 statistic that indicates the deviation of the model from the data; a nonsignificant χ2 indicates good fit. However, because the χ2 statistic is inflated by larger sample sizes (n ≥ 100), a nonsignificant value is difficult to obtain in most studies (Wheaton, 1987). A variety of other fit statistics are available, many of which are constructed to vary between 0 and 1 (Tanaka, 1993). For example, LISREL provides a Goodness-of-Fit Index, which indicates “the relative amount of variance and covariance jointly accounted for by the model” (Boyd, Frey, & Aaronson, 1988, p. 251). For these statistics, values of .90 and above indicate good fit of the model to the data.
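
As an illustration of these fit criteria, the fragment below converts a reported model χ2 and its degrees of freedom into a p value and applies the .90 rule of thumb to a 0-to-1 fit index. The numeric values are hypothetical, and the calculation simply uses the χ2 distribution from scipy rather than any SEM software.

```python
from scipy.stats import chi2

# Hypothetical values as they might appear in a published report.
model_chi_square = 98.4
degrees_of_freedom = 85
gfi = 0.93                 # e.g., a Goodness-of-Fit Index reported by LISREL

# A nonsignificant chi-square (p > .05) indicates that the model does not
# deviate from the data, but with n >= 100 the statistic is easily inflated.
p_value = chi2.sf(model_chi_square, degrees_of_freedom)
print(f"chi-square p = {p_value:.3f} -> {'good fit' if p_value > .05 else 'poor fit'}")

# For 0-to-1 fit indices, values of .90 and above conventionally indicate good fit.
print(f"GFI = {gfi:.2f} -> {'good fit' if gfi >= 0.90 else 'poor fit'}")
```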

During the analysis process, the researcher tests many models, beginning with the one implied by the theoretical model. The process of “trimming the model” involves deleting nonsignificant paths and adding significant paths to obtain a well-fitting model. Comparison of the theoretical model with the final model will give the reader an indication of the amount of trimming that was done. Although it is helpful if both the theoretical model and the final model are published in the report, the theoretical model may be omitted if there are space limitations. If the theoretical model is missing, the reader may be able to reconstruct the model by looking for relationships implied in the literature review section of the report (Fawcett & Downs, 1991). After a well-fitting model has been obtained, the individual parameters are examined.

The path coefficients are evaluated for significance with a t test, and the directions of the obtained relationships are compared with those expected. Even though the model fits the data, a large number of paths that are nonsignificant or that have opposite signs suggests that the relationships among the variables were not as theoretically specified. The coefficient on the arrow between a latent variable and its empirical indicator should be ≥ .50 but ≤ 1.0 for the indicator to be well tied to its latent variable. If this coefficient is greater than 1.0, there is generally a problem in the model. Finally, the amount of unexplained variance, the error, for each endogenous variable is evaluated. Even though the model fits the data well, large unexplained variances for the endogenous variables suggest that the predictors do not adequately explain the outcomes.
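
The parameter-level checks described above amount to a few lines of arithmetic: a path coefficient divided by its standard error gives a t value (compared here with 1.96 for a large-sample, two-tailed test at α = .05, an assumption of this sketch), and loadings outside the .50 to 1.0 band flag indicators that are weakly tied to their latent variable or signal an estimation problem. All estimates and standard errors below are hypothetical.

```python
# Hypothetical parameter estimates and standard errors for illustration only.
path_coefficients = {
    ("Maternal Employment", "Family Functioning"): (0.42, 0.15),
    ("Family Functioning", "Child Development"): (0.08, 0.11),
}
loadings = {
    "Dyadic Adjustment": 0.76,
    "Feetham Family Functioning Survey": 0.58,
    "Hours Employed per Week": 1.12,   # > 1.0 generally signals a problem in the model
}

# A path is taken as significant (two-tailed, alpha = .05, large sample) when |t| >= 1.96.
for (cause, effect), (estimate, se) in path_coefficients.items():
    t = estimate / se
    flag = "significant" if abs(t) >= 1.96 else "nonsignificant"
    print(f"{cause} -> {effect}: t = {t:.2f} ({flag})")

# Loadings should fall between .50 and 1.0 for an indicator to be well tied
# to its latent variable.
for indicator, loading in loadings.items():
    ok = 0.50 <= loading <= 1.0
    print(f"{indicator}: loading = {loading:.2f} ({'acceptable' if ok else 'questionable'})")
```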

SUMMARY

Causal modeling is a widely used technique for specifying a system of relationships among theoretical constructs. However, there are cautions or limitations that must be considered with this technique. Consumers of research that uses causal modeling techniques must evaluate the model testing results on several levels before using the findings in future research or in practice. A good fit between the model and the data does not necessarily mean that all the relationships are as posited. For the researcher intending to use causal modeling in an appropriate study, the most serious limitation of the technique is the large number of subjects required for the analysis, resulting in expensive and logistically complex studies. However, given the nature of many of the phenomena of interest to nursing, causal modeling often proves to be a highly useful technique for knowledge development.

Acknowledgments

The study cited and illustrated in Figure 1 was supported by a grant from the National Institute for Nursing Research (no. R01-NR02707) and by an administrative supplement from the Office of Research on Women’s Health through the National Institute for Nursing Research.

References

  • Bentler PM. EQS Structural Equations Program Manual. Los Angeles: BMDP Statistical Software; 1989.
  • Bentler PM, Chou C-P. Practical issues in structural modeling. In: Long JS, editor. Common Problems/Proper Solutions: Avoiding Error in Quantitative Research. Newbury Park, CA: Sage; 1988.
  • Boyd CJ, Frey MA, Aaronson LS. Structural equation models and nursing research: Part I. Nursing Research. 1988;37:249–252. [PubMed]
  • Fawcett J, Downs FS. The Relationship of Theory and Research. 2. Philadelphia: Davis; 1992.
  • Joreskog KG, Sorbom D. LISREL VI. Chicago: National Education Resources; 1986.
  • Long JS. Covariance Structure Models: An Introduction to LISREL. Beverly Hills: Sage; 1983.
  • Nunnally JC. Psychometric Theory. New York: McGraw-Hill; 1978.
  • Sullivan JL, Feldman S. Multiple indicators: An introduction. Beverly Hills: Sage; 1979.
  • Tanaka JS. Multifaceted conceptions of fit in structural equation models. In: Bollen KA, Long JS, editors. Testing structural equation models. Newbury Park, CA: Sage; 1993. pp. 10–39.
  • Wheaton B. Assessment of fit in overidentified models with latent variables. Sociological Methods and Research. 1987;16:118–154.
  • Youngblut JM. Maternal Employment and LRW Infant Outcomes (Grant No. R01-NR02707). National Institute for Nursing Research; 1992.