J Pediatr Psychol. Jun 2011; 36(5): 642–646.
Published online Feb 15, 2011. doi: 10.1093/jpepsy/jsr004
PMCID: PMC3131708
Commentary: Family Assessment in Pediatric Psychology
Grayson N. Holmbeck1 and Katie A. Devine2
1Loyola University Chicago and 2University of Rochester Medical Center
Corresponding author. All correspondence concerning this article should be addressed to Grayson N. Holmbeck, Department of Psychology, Loyola University Chicago, 1032 W. Sheridan Road, Chicago, IL 60660, USA. E-mail: gholmbe@luc.edu
Received November 16, 2010; Revised December 30, 2010; Accepted January 9, 2011.
The theoretical and research literatures on links between family functioning and pediatric chronic health conditions are extensive in both breadth and depth. On the other hand, the assessment of family relationships is no easy task, and there appear to be gaps between the quality of our family assessment methods and that of our theories, research methodologies, and clinical endeavors (Kazak, 2008). Why is the assessment of family functioning so challenging? First, when studying families, the focus of one’s research questions can be on individuals (e.g., the functioning of mothers, fathers, children), dyads (e.g., relations between mothers and children, relations within sibling pairs), the family system (e.g., the level of cohesiveness in the family as a whole), or any combination of these. Second, the same individual may serve different roles within the family (e.g., a mother could also be a spousal partner; an adolescent is a child but could also be a sibling). Third, there are different methods of assessment that can be employed with families (e.g., questionnaires, observational methods, interviews, daily diaries), and these methods often yield nonoverlapping or divergent data. Fourth, our assessment methods attempt to evaluate the functioning of families in which both the individuals themselves and their health status are changing over time. Finally, such research is particularly challenging in families with individuals who have chronic health conditions because the assessment of family functioning can be based on generic and/or illness-related family assessment methods.
Despite such challenges, empirical studies of families are among the most common types of research in the field of pediatric psychology and have been the basis for entire volumes and special issues of journals. In fact, one of the co-editors of the current special issue authored a review of 29 of the most commonly used family-based measures in pediatric psychology and concluded that the database for 19 of these measures had advanced to the point where they could be classified as “well-established” (Alderfer et al., 2008). Although this is an impressive number of high-quality measures, Alderfer et al. (2008) also advanced several recommendations for those who seek to further the quality of family assessment in the field of pediatric psychology. First, they maintained that many family measures were developed on general populations and that little is known about the psychometric quality of these measures in samples of individuals with chronic health conditions. Second, they suggested that we need more studies that focus on fathers and siblings and that the literature on the effects of factors such as family structure and ethnicity on family functioning in pediatric populations is less well developed. Finally, they suggested that we lack knowledge concerning the clinical utility and treatment sensitivity of our family-based measures (Alderfer et al., 2008).
Although one issue of a journal could not possibly address all of the challenges of family assessment or the recommendations of Alderfer et al. (2008), the current special issue moves the field forward by providing new measures (including the development of noncategorical and disease-specific measures of various aspects of family functioning), novel uses of previously developed measures, and new approaches to integrating across existing measures. The contributors are all to be commended for the value of their research in advancing the field of pediatric psychology. In this commentary, we first discuss the many strengths of the articles in this issue. Next, we evaluate the evidence base for the family-based instruments included in these papers by using the checklist for measure development and validation, which we published in the Journal of Pediatric Psychology in 2009 (Holmbeck & Devine, 2009). Finally, we discuss research that is needed to continue our progress in the area of family assessment.
One of the most notable contributions of the papers in this issue is the rigor and detail of the methods and statistical strategies employed in measure development and validation. Most of the papers addressed multiple aspects of reliability and validity, including internal consistency, test–retest reliability, content validity, convergent validity, discriminant validity, and criterion-related validity. One study examined the incremental validity of a disease-specific measure beyond a general measure of parenting (Barzel & Reid, 2011). Several papers used exploratory and/or confirmatory factor analyses to examine the structure of a measure or the relations among several measures, which indicates that researchers in this field are beginning to collect data on samples large enough to permit the use of fairly sophisticated data analytic strategies (Barzel & Reid, 2010; Benzies et al., 2010; Berlin, Davies, Silverman, & Rudolph, 2009; Knafl et al., 2009; Palmer et al., 2010). Further, some studies advanced the field by including fathers, who are often underrepresented in pediatric psychology research (Phares, Lopez, Fields, Kamboukos, & Duhig, 2005). Several studies incorporated multiple reporters (Barzel & Reid, 2010, 2011; Jastrowski Mano, Khan, Ladwig, & Weisman, 2009; Marsac & Alderfer, 2011; Palmer et al., 2010), and a minority of the studies incorporated multiple methods, such as the use of an observed inhaler technique in addition to self-report questionnaires in patients with asthma (Celano, Klinnert, Holsey, & McQuaid, 2009) and the use of both self-report measures and an assessment of HbA1c in patients with diabetes (Barzel & Reid, 2011; Palmer et al., 2010). Finally, a couple of studies demonstrated the feasibility of using alternative methodologies, such as computer-assisted telephone interviews, which could more broadly advance the quality of research methodology in pediatric psychology (Benzies et al., 2010; Knafl et al., 2009).
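For readers less familiar with these psychometric indices, the following is a minimal sketch of how two of them, internal consistency (Cronbach's alpha) and test–retest reliability, might be computed for a hypothetical eight-item family-functioning questionnaire. The data and item names are simulated for illustration only and are not drawn from any of the studies in this issue.

```python
# Minimal sketch: internal consistency and test-retest reliability for a
# hypothetical family-functioning questionnaire (simulated data).
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Simulated item-level responses (rows = families, columns = items) at two time points.
rng = np.random.default_rng(0)
time1 = pd.DataFrame(rng.integers(1, 6, size=(50, 8)),
                     columns=[f"item{i}" for i in range(1, 9)])
time2 = time1 + rng.integers(-1, 2, size=time1.shape)  # noisy retest

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

alpha = cronbach_alpha(time1)

# Test-retest reliability: correlation between total scores at the two assessments.
r_retest, p_value = pearsonr(time1.sum(axis=1), time2.sum(axis=1))

print(f"Cronbach's alpha = {alpha:.2f}, test-retest r = {r_retest:.2f} (p = {p_value:.3f})")
```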
Another notable contribution of these studies is the reliance on theory to drive hypotheses and methods. Researchers applied theories of general family functioning to families of children with medical conditions to inform measure development and validation. Further, researchers collected data on samples of individuals with pediatric conditions, examining the properties of measures originally designed for use with healthy populations and evaluating whether such measures perform in the intended manner in these specific samples (thus addressing one of the recommendations of Alderfer et al., 2008). In fact, in one case, the authors found that their target measure performed less well with a pediatric sample than had been found in studies of typical families (Marsac & Alderfer, 2011).
Together, these papers provide psychometrically sound and promising measures for use in pediatric populations. These works advance the field by disseminating important knowledge about measures to researchers and clinicians who can use this information to determine which measures would be best suited for their work. Further, several of these papers attempt to bridge the gap between research and clinical work by developing measures directly relevant to clinical problems, which could potentially be sensitive to change in a treatment context.
One way to evaluate the psychometric quality of a set of measures and the degree to which research studies in this special issue were comprehensive in their use of important measure validation techniques is to apply the same criteria to all papers. Thus, for each study, we completed our measure development and validation checklist (Holmbeck & Devine, 2009) and we report our findings here. The major sections of the checklist focus on reliability, item analysis, factor analytic strategies, various types of validity (content, construct, convergent, discriminant, concurrent, and predictive), and clinical utility. The importance of building content validity into a measure during the early stages of measure development is emphasized.
Two issues need to be raised before we review these findings. First, some of the studies in this special issue are reports of findings for previously developed measures; thus, the measure development portions of the checklist could not be completed (even though the validation portions of the checklist were relevant). Second, the first author of this article is a co-author on two of the papers in this special issue (Kaugars et al., 2010; Kelly, Holmbeck, & O’Mahar, 2010); thus, these papers were not evaluated in this review (i.e., only 10 of the 12 papers in the special issue are reviewed here).
Only 3 of the 10 papers are measure development papers (Barzel & Reid, 2010; Berlin et al., 2009; Knafl et al., 2009). Applying the measure development aspects of the checklist to these three papers, we find that all three established a scientific need for the target instrument and attended to content validation, although the degree to which content validation techniques were taken into account varies considerably across these studies. Content validation is a critical initial step in the measure development process; such validity is “built in” to a measure as the items for the measure are being generated and evaluated (Haynes, Richard, & Kubany, 1995; Holmbeck & Devine, 2009). Some authors generated items based on a review of past measures and the clinical experiences of their research team (Barzel & Reid, 2010), whereas others used a more elaborate approach. Berlin et al. (2009) relied on relevant theories, prior empirical literature, related instruments, and consultation with experts in developing their measure. Similarly, Knafl et al. (2009) not only relied on theories, consultation with experts, and focus groups with the target population, but also generated a satisfactory number of items for each of the hypothesized dimensions of the larger construct. This latter strategy is critical if one is to ensure that the dimensions of the construct of interest are covered adequately and that each dimension is represented to the same degree in the item pool. If one fails to apply this strategy, one is often left with subscales that include very few items (Barzel & Reid, 2010; Berlin et al., 2009).
Other measure development strategies could have been implemented by these investigators. For example, none of them refined their measures in stages by means of pilot testing or quantitative item analyses (Holmbeck & Devine, 2009). Only Berlin et al. (2009) conducted cross-validating factor analyses (i.e., one exploratory and two confirmatory factor analyses), and only Knafl et al. (2009) had members of the target population review an initial version of the measure. With respect to factor analyses, there seems to be some confusion over when to use exploratory versus confirmatory analyses. In these papers, we found examples of exploratory factor analyses being used even when the authors had advanced a clear theory or model, as well as confirmatory analyses being used in an exploratory fashion. If one seeks to examine the factor structure of a measure and, during the content validation stage of measure development, has generated items for specific subcomponents of the construct of interest (based on theory or past research), a confirmatory approach is recommended. With this strategy, it is important to keep in mind that one is examining the fit of a specific proposed model; such an analysis will not necessarily yield the best-fitting model. Only with an exploratory approach is one able to generate the best-fitting model. But, as indicated by its name, this strategy is highly exploratory, and solutions based on this method are not likely to be replicated with new samples.
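To make the distinction concrete, the sketch below contrasts an exploratory analysis, which lets the data suggest a structure, with a confirmatory test of a prespecified two-factor model for a hypothetical eight-item family measure. It assumes the third-party factor_analyzer and semopy packages; the data file, item names, and the two factors (Cohesion, Conflict) are illustrative placeholders, not taken from any study in this issue.

```python
# Sketch: exploratory vs. confirmatory factor analysis for a hypothetical
# 8-item family measure. Requires pandas, factor_analyzer, and semopy.
import pandas as pd
from factor_analyzer import FactorAnalyzer
import semopy

items = [f"item{i}" for i in range(1, 9)]
df = pd.read_csv("family_measure_items.csv")[items]  # hypothetical data file

# Exploratory approach: useful early in measure development, but the resulting
# solution capitalizes on chance and needs cross-validation in a new sample.
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(df)
print(pd.DataFrame(efa.loadings_, index=items, columns=["Factor1", "Factor2"]))

# Confirmatory approach: test the specific structure proposed during content
# validation (e.g., items 1-4 were written to tap Cohesion, items 5-8 Conflict).
# This evaluates the fit of *this* model; it does not search for a better one.
model_desc = """
Cohesion =~ item1 + item2 + item3 + item4
Conflict =~ item5 + item6 + item7 + item8
"""
cfa = semopy.Model(model_desc)
cfa.fit(df)
print(cfa.inspect())           # loadings and factor covariance
print(semopy.calc_stats(cfa))  # fit indices (e.g., CFI, RMSEA); exact API may vary by version
```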
Moving now to validation-related analyses, most of the 10 studies provided some form of validity evaluation. In general, the strategy used by most authors was to examine convergent validity with data from the same reporters who provided responses to the target assessment measure. In one case, the validity indices were taken from the same measure as the target assessment tool. Unfortunately, with these strategies, the investigator is unable to rule out common method variance as an explanation for high correlations between measures. Although the use of data from multiple reporters (e.g., parent and child) addresses some of the concerns with common method variance, only 3 of the 10 studies reviewed for this special issue employed extra-familial validation measures. We suspect that most investigators in this field seek to develop family-oriented measures that have a significant degree of predictive utility for behaviors exhibited outside of the family. Such extra-familial validity assessments could involve medical outcomes (such as those used by Barzel & Reid, 2011; Celano et al., 2009; Palmer et al., 2010), reports from teachers or peers, and/or coded observations. To evaluate the validity and utility of any measure, it is important to understand how self-reports relate to behavioral, physiological, and medical data, as well as to reports of other important individuals from nonfamily settings.
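As a concrete illustration of why same-reporter correlations can overstate validity, the brief sketch below compares a same-reporter convergent correlation with correlations based on a second reporter and an extra-familial medical criterion. The dataset and variable names (e.g., an HbA1c value) are hypothetical and are not drawn from any study in this issue.

```python
# Sketch: convergent validity checks that do and do not share method variance.
# Dataset and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("validation_sample.csv")  # hypothetical dataset, one row per family

# Same reporter, same method: this correlation is inflated by common method variance.
r_same_reporter = df["mother_family_conflict"].corr(df["mother_parenting_stress"])

# Cross-reporter: mother-reported family conflict vs. child-reported family conflict.
r_cross_reporter = df["mother_family_conflict"].corr(df["child_family_conflict"])

# Extra-familial criterion: association with a medical outcome (e.g., glycemic control).
r_medical = df["mother_family_conflict"].corr(df["hba1c"])

print(f"same reporter r = {r_same_reporter:.2f}, "
      f"cross-reporter r = {r_cross_reporter:.2f}, "
      f"medical criterion r = {r_medical:.2f}")
```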
Although these papers will contribute significantly to the field of family assessment in pediatric psychology, we offer several recommendations for future work. One consistent theme across the majority of the studies was the lack of diversity in the participant samples. Moreover, none of the measures used in these studies were translated into other languages, and no attempts were made to examine whether there were ethnic differences in how test items were interpreted by participants. At this point, we appear to be satisfied with merely reporting the percentage of participants who come from different ethnic populations; unfortunately, we have failed to examine whether our measures are similarly valid across these populations. The inclusion of diverse samples representative of the larger population of pediatric patients under study is imperative for advancing the field. Multisite studies, while challenging and expensive to conduct, are likely the best way to address this issue. Further, researchers are encouraged to consider the developmental level of the respondents and the developmental relevance of the items in each measure. Although several of the articles in this issue specify the age range for which the measure is intended, the developmental appropriateness of individual items was not discussed in most cases.
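One inexpensive first step toward the cross-group evaluation we are recommending is simply to compute key psychometric indices separately within each ethnic or language group before moving to formal tests of measurement invariance (e.g., multi-group confirmatory factor analysis). The sketch below illustrates such a screening step with hypothetical variable names; it is a crude check, not a substitute for formal invariance testing.

```python
# Sketch: a crude first screen for whether a measure behaves similarly across
# subgroups: compute reliability and a criterion correlation within each group.
# All column names are hypothetical; formal invariance testing would follow.
import pandas as pd

df = pd.read_csv("validation_sample.csv")
items = [f"item{i}" for i in range(1, 9)]

def cronbach_alpha(item_df: pd.DataFrame) -> float:
    k = item_df.shape[1]
    return (k / (k - 1)) * (1 - item_df.var(ddof=1).sum() / item_df.sum(axis=1).var(ddof=1))

for group, sub in df.groupby("ethnic_group"):
    alpha = cronbach_alpha(sub[items])
    validity_r = sub[items].sum(axis=1).corr(sub["criterion_score"])
    print(f"{group}: n = {len(sub)}, alpha = {alpha:.2f}, criterion r = {validity_r:.2f}")
```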
It is also important to consider the setting in which data are collected. Many studies use the clinic as the site of data collection even though this strategy is limited in certain respects. Specifically, family members may have negative affective responses during medical appointments that could influence self-report data collected in this setting. Also, clinic-based data collections occur in the context of distracting medically related activities, likely reducing the degree to which family members are able to focus on research-related activities. Indeed, Dunn et al. (2010) found that only 58% of their sample was willing to complete an observational task in a clinic setting. In our own research, we have found home-based data collections to be a useful approach to conducting research on families (Holmbeck et al., 2010). With such a strategy, one is much more likely to gain the involvement of more than one parent, and data collection proceeds without the interruptions that often occur during medical appointments in a healthcare setting. Such a strategy is maximally convenient for families and is, therefore, very useful when attempting to gain the long-term commitment of families in longitudinal research. With respect to families’ willingness to participate in observational sessions during home-based data collections, only a very small number of families have refused to participate in observational tasks.
Although the field of pediatric psychology now has access to numerous family-oriented measures, we lack data on the incremental validity of these measures. Is each of these measures associated with important outcomes beyond the variance accounted for by similar measures? Unfortunately, little attention was paid to incremental validity in this set of studies, but this type of validity clearly deserves attention in future work. Simply put, such validity examines whether a new measure “buys us anything” above and beyond established measures. In relation to the papers included in this special issue, one might be interested in whether a newly developed family-oriented measure has incremental validity beyond other existing family-oriented measures. Alternatively, one might be interested in the incremental validity of a family systems measure (e.g., a family environment scale) above and beyond a related, but non-family-systems-oriented, measure (e.g., a measure of parenting quality). Finally, with the exception of the paper by Marsac and Alderfer (2011), we tend not to critique the measures we use; we need to be more critical of these measures and begin systematically refining them across multiple studies.
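In practice, incremental validity is commonly tested with a hierarchical regression: the established measure is entered first, the new measure second, and the change in explained variance is evaluated. The sketch below illustrates this approach with the statsmodels package and hypothetical variable names (an established parenting-quality score, a new family-systems score, and a child adjustment outcome); it is an illustration of the general strategy, not a re-analysis of any study in this issue.

```python
# Sketch: testing incremental validity with hierarchical regression.
# Does a new family-systems measure explain variance in child adjustment
# beyond an established parenting-quality measure? Variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("validation_sample.csv")
y = df["child_adjustment"]

# Step 1: established measure only.
X1 = sm.add_constant(df[["parenting_quality"]])
step1 = sm.OLS(y, X1).fit()

# Step 2: add the new family-systems measure.
X2 = sm.add_constant(df[["parenting_quality", "family_systems_score"]])
step2 = sm.OLS(y, X2).fit()

delta_r2 = step2.rsquared - step1.rsquared
f_stat, p_value, df_diff = step2.compare_f_test(step1)  # F test for the R-squared change

print(f"R2 step 1 = {step1.rsquared:.3f}, R2 step 2 = {step2.rsquared:.3f}, "
      f"delta R2 = {delta_r2:.3f}, F = {f_stat:.2f}, p = {p_value:.3f}")
```

Reporting the change in R-squared alongside its significance test maps directly onto the question of whether the new measure “buys us anything” beyond established instruments.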
Many authors in this special issue made statements such as “this measure has never been used in this population”. Although it is true that most measures have not been employed with most populations, it is important to provide a compelling rationale for why a given measure might be useful or uniquely relevant for a given population. Similarly, none of the studies included here examined the ability of a measure to detect change in a treatment context (although it appears that Celano et al., 2009, will be able to conduct such analyses in the future).
Cost-effectiveness and dissemination of promising or established measures deserve greater attention. Dunn et al. (2010), as well as Celano et al. (2009), noted that observational and interview measures are often costly in terms of the time required for training and coding. While such data have considerable value, it will be important to examine the cost-effectiveness of our measures. Dissemination of promising or established measures is an important next step so that measures with the greatest evidence can be used more widely. A few of the researchers in this issue published copies of their measures in their articles (e.g., Barzel & Reid, 2010) or made the measure available on a website (Knafl et al., 2009). Other measures are available for purchase. Another way to provide access to newly developed measures is to include them as supplemental material on journals’ websites.
Finally, when evaluating family-oriented measures, we need to include extra-familial outcome measures in our data collection protocols. If the studies in this special issue are any indication, such a strategy is rarely used but will be critical if we are to evaluate the criterion-related validity of our family measures and learn whether these measures reliably predict behaviors of interest outside the family setting.
Final Comments
As noted earlier, it is impossible for one journal issue to address all of the needs of a particular area of assessment in pediatric psychology. On the other hand, many of these papers represent a leap forward for their respective literatures. It is our hope that this commentary provides some useful suggestions for how we can continue to move this field forward and develop measures that have high levels of relevance to the medical and nonmedical outcomes that are of interest to researchers and clinicians alike.
Funding
National Institute of Child Health and Human Development (R01-HD048629).
Conflicts of interest: None declared.
References
  • Alderfer M A, Fiese B, Gold J I, Cutuli J J, Holmbeck G, Goldbeck L, Chambers C, Abad M, Spetter D, Patterson J. Evidence-based assessment in pediatric psychology: Family measures. Journal of Pediatric Psychology. 2008;33:1046–1061.
  • Barzel M, Reid G. Assessing coparenting in families of children with type I diabetes: A preliminary examination of the psychometric properties of the Coparenting Questionnaire (CQ) and the Diabetes-Specific Coparenting Questionnaire (DCQ). Journal of Pediatric Psychology. 2010. Advance online publication. doi:10.1093/jpepsy/jsq103.
  • Barzel M, Reid G. Coparenting in relation to children’s psychosocial and diabetes-specific adjustment. Journal of Pediatric Psychology. 2011. Manuscript submitted for publication.
  • Benzies K M, Trute B, Worthington C, Reddon J, Keown L, Moore M. Assessing psychological well-being in mothers of children with disabilities: Evaluation of the Parenting Morale Index (PMI) and Family Impact of Childhood Disability (FICD) Scale. Journal of Pediatric Psychology. 2010. Advance online publication. doi:10.1093/jpepsy/jsq081.
  • Berlin K S, Davies W H, Silverman A H, Rudolph C D. Assessing family-based feeding strategies, strengths, and mealtime structure with the Feeding Strategies Questionnaire. Journal of Pediatric Psychology. 2009. Advance online publication. doi:10.1093/jpepsy/jsp107.
  • Celano M, Klinnert M D, Holsey C N, McQuaid E L. Validity of the Family Asthma Management System Scale with an urban African American sample. Journal of Pediatric Psychology. 2009. Advance online publication. doi:10.1093/jpepsy/jsp083.
  • Dunn M, Rodriguez E M, Miller K, Gerhardt C, Vannatta K, Saylor M, Schuele M, Compas B. Direct observation of mother–child communication in pediatric cancer: Assessment of verbal and non-verbal behavior and emotion. Journal of Pediatric Psychology. 2010. Advance online publication. doi:10.1093/jpepsy/jsq062.
  • Haynes S N, Richard D C S, Kubany E S. Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment. 1995;7:238–247.
  • Holmbeck G N, DeLucia C, Essner B, Kelly L, Zebracki K, Friedman D, Jandasek B. Trajectories of psychosocial adjustment in adolescents with spina bifida: A six-year four-wave longitudinal follow-up. Journal of Consulting and Clinical Psychology. 2010;78:511–525.
  • Holmbeck G N, Devine K A. Editorial: An author's checklist for measure development and validation manuscripts. Journal of Pediatric Psychology. 2009;34:691–696.
  • Jastrowski Mano J E, Anderson Khan K, Ladwig R J, Weisman S J. The impact of pediatric chronic pain on parents’ health-related quality of life and family functioning: Reliability and validity of the PedsQL Family Impact Module. Journal of Pediatric Psychology. 2009. Advance online publication. doi:10.1093/jpepsy/jsp099.
  • Kaugars A, Zebracki K, Kichler J, Fitzgerald C, Greenley R, Alemzadeh R, Holmbeck G N. Use of the Family Interaction Macro-coding System with families of adolescents: Psychometric properties among pediatric and healthy populations. Journal of Pediatric Psychology. 2010. Advance online publication. doi:10.1093/jpepsy/jsq106.
  • Kazak A E. Commentary: Progress and challenges in evidence-based family assessment in pediatric psychology. Journal of Pediatric Psychology. 2008;33:1062–1064.
  • Kelly L M, Holmbeck G N, O’Mahar K. Assessment of parental expressed emotion: Associations with adolescent depressive symptoms among youth with spina bifida. Journal of Pediatric Psychology. 2010. Advance online publication. doi:10.1093/jpepsy/jsq084.
  • Knafl K, Deatrick J A, Gallo A, Dixon J, Grey M, Knafl G, O’Malley J. Assessment of the psychometric properties of the Family Management Measure. Journal of Pediatric Psychology. 2009. Advance online publication. doi:10.1093/jpepsy/jsp034.
  • Marsac M L, Alderfer M A. Psychometric properties of the FACES-IV in families of pediatric oncology patients. Journal of Pediatric Psychology. 2011. Advance online publication. doi:10.1093/jpepsy/jsq003.
  • Palmer D L, Osborn P, King P S, Berg C A, Butler J, Butner J, Horton D, Wiebe D J. The structure of parental involvement and relations to disease management for youth with type 1 diabetes. Journal of Pediatric Psychology. 2010. Advance online publication. doi:10.1093/jpepsy/jsq019.
  • Phares V, Lopez E, Fields S, Kamboukos D, Duhig A M. Are fathers involved in pediatric psychology research and treatment? Journal of Pediatric Psychology. 2005;30:631–643.