Establishing the feasibility and validity of implementation fidelity monitoring strategies is an important methodological step in implementing evidence-based interventions on a large scale.
The objective of the study was to examine the reliability and validity of the Fidelity Checklist, a measure designed to assess group leader adherence and competence delivering a parent training intervention (the Chicago Parent Program) in child care centers serving low-income families.
The sample included 9 parent groups (12 group sessions each), 12 group leaders, and 103 parents. Independent raters reviewed 106 audiotaped parent group sessions and coded group leaders’ fidelity on the Adherence and Competence Scales of the Fidelity Checklist. Group leaders completed self-report adherence checklists and a measure of parent engagement in the intervention. Parents completed measures of consumer satisfaction and child behavior.
High interrater agreement (Adherence Scale = 94%, Competence Scale = 85%) and adequate intraclass correlation coefficients (Adherence Scale = .69, Competence Scale = .91) were achieved for the Fidelity Checklist. Group leader adherence changed over time, but competence remained stable. Agreement between group leader self-report and independent ratings on the Adherence Scale was 85%; disagreements were more frequently due to positive bias in group leader self-report. Positive correlations were found between group leader adherence and parent attendance and engagement in the intervention and between group leader competence and parent satisfaction. Although child behavior problems improved, improvements were not related to fidelity.
The results suggest that the Fidelity Checklist is a feasible, reliable, and valid measure of group leader implementation fidelity in a group-based parenting intervention. Future research will be focused on testing the Fidelity Checklist with diverse and larger samples and generalizing to other group-based interventions using a similar intervention model.
Few empirically supported parenting interventions have been adopted for use in community-based settings (Prinz & Sanders, 2007). This is due, in part, to the failure of program developers to create practical strategies for moving their interventions from controlled settings typical of clinical trials to community settings where larger scale adoption can take place. Implementation fidelity is a key component to building a scientific knowledge base related to the replication, dissemination, and implementation strategies of effective prevention programs (Elliott & Mihalic, 2004). Dusenbury, Brannigan, Falco, and Hansen (2003) recommended extensive measurement development of fidelity assessment for successful dissemination. The purpose of this study was to test the reliability and validity of an instrument for measuring implementation fidelity for a group-based prevention intervention targeting low-income parents of young children.
In this study, the reliability and validity of a tool to measure fidelity related to the delivery of an intervention were examined. Several terms are used in the literature for this concept, including implementation fidelity, treatment integrity, treatment fidelity, and intervention fidelity (Carroll et al., 2007; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Perepletchikova, Treat, & Kazdin, 2007; Resnick et al., 2005; Stein, Sargent, & Rafaels, 2007). In the cited reviews and studies, these terms share the broad definition of the intervention being delivered as intended by the program developers and consistent with the program model. In this study, the term implementation fidelity will be used to mean an assessment of the degree to which group leaders deliver the intervention competently and according to protocol. Implementation fidelity was chosen because it is the term most frequently used in prevention and community-based interventions (Carroll et al., 2007; Gottfredson et al., 2006; Lee et al., 2008).
There are two relevant dimensions for implementation fidelity assessment: adherence and competence. Adherence is the extent to which the interventionists’ behaviors conform to the intervention protocol, and competence relates to the skillfulness in the delivery of the intervention related to facilitation and process skills (Forgatch, Patterson, & DeGarmo, 2005; Perepletchikova & Kazdin, 2005; Stein et al., 2007).
In this study, implementation fidelity was measured using the Fidelity Checklist, a measure constructed to capture adherence to the protocol and the competent delivery of the intervention. The intervention studied was the Chicago Parent Program (CPP). The CPP is a 12-session group-based parenting program shown to improve parenting and reduce behavior problems in young children (Gross, Garvey, Julion, & Fogg, 2007; Gross, Garvey, et al., 2009). Research shows that evidence-based parent training is effective for early intervention and prevention in reducing behavioral risk among children from low-income families (Conduct Problems Prevention Research Group, 1999; Dumas, Prinz, Smith, & Laughlin, 1999; Gottfredson et al., 2006). To support dissemination efforts, feasible, reliable, and valid measures of fidelity are critical in ensuring successful implementation.
In a review of parent training studies published between 1980 and 1988, only 6% of studies reported fidelity related to implementation (Rogers Wiese, 1992); however, greater attention to implementation fidelity is emerging in the parenting research literature. Dumas, Lynch, Laughlin, Smith, and Prinz (2001) developed a fidelity plan and checklist specific to their parenting program targeting risk reduction among older children. The results of their study showed that it was possible to obtain high interrater agreement on fidelity ratings using audiotaped data. Forgatch et al. (2005) developed a similar measure using observations from video-recorded data. They found that higher fidelity ratings predicted improved parenting practices. However, their intervention is conducted with individual parents; therefore, their measure does not include leader competence in facilitating group processes.
Eames et al. (2008) developed a reliable leader observation tool to measure implementation fidelity of a group-based parenting program via live or video-recorded observations. The coding of the evaluation tool captures frequency of group leader behavior. Although frequency is important, facilitating group process incorporates complex skills implemented in response to group participants’ needs. Therefore, the Fidelity Checklist was designed to capture the frequency and presence of behavior deemed essential to the CPP intervention and the strategies taught for facilitating parent groups.
Development of psychometrically sound instruments to measure implementation fidelity is critical in dissemination of behavioral interventions. These instruments can be used to monitor intervention quality and investigate the relationship between quality and intervention outcomes. Research relating implementation fidelity and outcomes has shown variable results (Barber et al., 2006; Hogue et al., 2008; Huey, Henggeler, Brondino, & Pickrel, 2000), and little attention has been paid to fidelity over time (Zvoch, 2009). This study adds to the growing knowledge of implementation fidelity assessment and its implications for the dissemination of behavioral interventions.
The aims of the study were to (a) establish interrater reliability of the Fidelity Checklist; (b) assess agreement between group leader self-report and independent ratings of group leader adherence; (c) describe systematic changes in implementation fidelity over time and by session; (d) examine the relationship between adherence and competence scores on the Fidelity Checklist and parent attendance of, engagement in, and satisfaction with parent groups; and (e) test the relationship between adherence and competence and improvements in parent-rated child behavior problems. As a measure of criterion validity, it was hypothesized that group leader adherence and competence would be related positively to parent attendance, satisfaction, engagement, and improvements in child behavior problems.
This descriptive study was conducted with institutional review board approval. Informed consent was obtained from intervention participants and group leaders.
The sample included 103 parents or legal guardians of 2- to 5-year-old children enrolled in child care centers serving low-income families in Chicago and 12 group leaders who coled CPP parent groups. Nine CPP groups implemented in community-based day care settings were used in this study and were drawn from a larger study on the effectiveness of the CPP (Gross, Fogg, et al., 2009). Twenty-six CPP groups were conducted over a 2-year period. All day care settings were licensed by the State Department of Children and Family Services and predominantly served ethnic minority families. All English-language parent groups conducted during a 6-month period were used in the current study.
Inclusion criteria for parent participation in the larger study were (a) the parent or legal guardian of a child between the ages of 2 and 5 years attending day care at the participating centers; (b) agreement to participate in the CPP groups; and (c) agreement to having their parent group sessions audiotaped. In this study, 96% of the target population was African American or Latino. Group participants were the mother (80.6%), father (12.5%), or grandparent or aunt (6.9%) of the child. Mean parent age was 29.8 years (SD = 7.62 years).
All group leaders (n = 12) completed a CPP group leader-training workshop. The training workshop consisted of two 8-hour days of training on the content of the CPP and facilitation process of CPP groups. Group leaders were required to pass a written test on the content and principles of the CPP for completion of training. Novice group leaders were paired with more experienced group leaders for their first CPP group-leading experience. All groups were conducted in English. Demographic data for the group leaders are shown in Table 1.
The CPP is a 12-session community-based prevention intervention for parents of preschoolers designed to promote parenting competence and prevent behavior problems in young children (Gross et al., 2007). The CPP is grounded in social learning theory (Bandura, 1997) and targets coercive parent–child interactions known to reinforce child behavior problems (Patterson, 1982; Table 2).
During 2-hour weekly sessions, videotaped vignettes are shown to parents and used to stimulate discussion and problem solving related to child behavior and parenting skills. Group leaders facilitate discussions guided by a comprehensive group leader manual. The standardized manual includes the content of the vignettes, discussion questions, and commonly occurring questions related to the vignettes. Also included in the manual are home and group activities and role-play exercises to facilitate learning and application of the program principles.
Research has demonstrated the effectiveness of the CPP with ethnically and economically diverse families (Gross, Garvey, et al., 2009). Specifically, parents who participated in the CPP reported greater parenting self-efficacy, more consistent use of discipline, and reduced reliance on corporal punishment up to 1 year later. In addition, children of parents who participated in the program demonstrated reductions in behavior problems in the classroom and with their parents. For a full description of the CPP intervention, see Gross et al. (2007).
Outcome data were collected using multiple informants (group leaders, parents, and independent raters) and methods (questionnaires and audio recordings). Variables of interest in this study were implementation fidelity, participant attendance, satisfaction and engagement in the intervention, and parent reports of improvements in child behavior problems.
The Fidelity Checklist was constructed to measure group leader adherence and competence in delivering the CPP intervention. The following steps were used in developing the Fidelity Checklist: (a) identification of the essential elements of the CPP based on the theory underlying the intervention and facilitation model for group leaders, (b) construction of scale items related to adherence to content and competence in delivering the intervention, and (c) development of item scaling (Mowbray, Holter, Teague, & Bybee, 2003; Stein et al., 2007). Content validity was established by review of the checklist by developers of the CPP, experts in delivering parent interventions, and experts in fidelity monitoring. There was agreement that items on the checklist represented the contents of the CPP curriculum and required skills for competent delivery.
The Adherence and Competence Scales make up the Fidelity Checklist. Adherence to the defined CPP intervention protocol for each group session is measured with 12–16 items (session dependent), each coded as “yes” or “no,” depending on whether the group leader performed the expected action during that group session. Group leader competence in delivering the intervention, responding to group participants, and facilitating the group process was assessed using a 15-item questionnaire. These items were invariant across sessions and rated on a 3-point scale of 1 (skill rarely or never demonstrated), 2 (skill emerging, needs further development), or 3 (skill demonstrated and done well). When rating competence, raters note instances in which the group leader missed opportunities to perform the competencies outlined in the checklist. Missed opportunities are important because group leader competence is rated both on what was done well and on how effectively group leaders respond to the process, dynamics, and needs of the group members. Examples of adherence and competence items are presented in Table 3.
Group leaders completed a weekly Adherence Scale after each group session. The items on the Adherence Scale-Group Leader Report are parallel to the items on the Adherence Scale of the Fidelity Checklist. Group leaders report whether (“yes” or “no”) they performed activities for that week’s session.
Attendance was calculated for each group as the percentage of parents enrolled in the group who attended each session.
Parent satisfaction was measured through parent report on a weekly satisfaction survey. Parents were asked in the 5-item weekly survey to rate the quality of the content, vignettes, practice assignments, and group leaders’ facilitation for that session on a scale of 1 (not helpful) to 4 (very helpful). In this study, Cronbach’s alpha reliability for the weekly survey was .87.
Parent engagement was assessed through group leader report using the Engagement Form (EF; Garvey, Julion, Fogg, Kratovil, & Gross, 2006). The 7-item EF is used to assess the extent to which group attendees participate actively in the group sessions. Active participation was defined as the extent to which group participants attended to the videotaped scenes, participated in the discussion, were open and supportive toward other group participants, were not resistant to new ideas, and correctly applied the program principles. Items were scored on a scale of 1 (not at all) to 4 (most of the time) and summed for a total EF score. Validity of the EF is supported by significant associations with improvements in teachers’ and parents’ ratings of child behavior problems and reductions in parent depressive symptoms (p < .04; Garvey et al., 2006). Cronbach’s alpha reliability of the EF in this study was .92.
The Child Behavior Checklist 1½–5 (CBCL) is a measure of the frequency of problem behavior for children aged 1½–5 years on two scales: Externalizing (a measure of disruptive behavior problems, aggression, and hyperactivity) and Internalizing (a measure of anxiety, inhibition, depression, and social withdrawal) behavior problems (Achenbach & Rescorla, 2000). Parents or guardians answered 100 questions related to their child’s behavior now or within the past 2 months on a 3-point scale of 0 (not true as far as you know), 1 (somewhat or sometimes true), or 2 (very true or often true). The CBCL shows significant discrimination (p ≤ .01) between referred and nonreferred children (Keenan & Wakschlag, 2000) and validity across racial and ethnic populations and economically and linguistically diverse samples (Gross et al., 2006). Cronbach’s alpha reliabilities of the CBCL scales in this study were .88 (Internalizing Scale) and .92 (Externalizing Scale).
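The internal consistency statistic reported throughout (Cronbach's alpha) can be computed directly from a respondents-by-items score matrix. A minimal sketch in Python, using entirely hypothetical survey data (the study's item-level data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item, 4-point survey data for 4 respondents
scores = np.array([
    [4, 4, 3, 4, 4],
    [2, 2, 2, 1, 2],
    [3, 3, 3, 3, 2],
    [4, 3, 4, 4, 4],
])
alpha = cronbach_alpha(scores)
```

Alpha rises with the number of items and with the average inter-item correlation; values in the .87–.92 range, as reported for the instruments above, indicate strong internal consistency.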
Each CPP group session was recorded with a digital audio recorder by the group leader and submitted to the study team. Of the 108 parent group sessions, 2 were not recorded due to equipment malfunction. The 106 audiotaped group sessions were later reviewed and coded by independent raters using the Fidelity Checklist. Missing data for the two unrecorded sessions were imputed using the Hot Deck method (Rubin, 1987).
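Hot-deck imputation fills a missing value with an observed value drawn from a "donor" with similar characteristics. A simplified sketch of the idea, using a hypothetical groups-by-sessions matrix of fidelity scores; the study's actual donor-selection rules are not described, so the column-wise (same session number) donor pool below is an assumption for illustration:

```python
import random

def hot_deck_impute(scores, rng=random.Random(42)):
    """Hot-deck imputation: replace each missing (None) value with a
    randomly drawn observed value from the same column, i.e., the same
    session number in another group (assumed donor rule).
    `scores` is a list of rows: one row per group, one column per session."""
    n_sessions = len(scores[0])
    for col in range(n_sessions):
        donors = [row[col] for row in scores if row[col] is not None]
        for row in scores:
            if row[col] is None:
                row[col] = rng.choice(donors)  # draw a donor's observed value
    return scores
```

In practice, hot-deck variants differ in how donors are matched (e.g., by covariates rather than position); this sketch only conveys the basic mechanism.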
Audio recordings were reviewed in their entirety to capture overall adherence and competence. Although others have suggested and used samples of intervention sessions for fidelity assessment (Dumas et al., 2001; Forgatch et al., 2005; Hogue, Liddle, Singer, & Leckrone, 2005), all recordings were rated for two reasons. First, collecting all sessions would provide information on fidelity ratings over time and inform the frequency of measurement needed in the future to obtain a reliable group-level estimate of implementation fidelity. Second, coding full sessions captures all interactions and group leader behaviors, including critical incidents that may be missed when sampling sessions but are important for a comprehensive assessment of group leader adherence and competence.
For interrater reliability estimates, 30% of the CPP group session recordings (n = 32) were coded by two independent raters. Satisfaction, attendance, and group leader report on the Adherence Scale were collected weekly. Engagement data were collected between the 11th and 12th (booster) session. Parents completed the CBCL at baseline and postintervention.
Independent observers used for this study were knowledgeable about the CPP intervention and experienced in facilitating groups. A detailed coding manual describing each checklist item was used to guide the rating of the CPP groups. Rater training included (a) instruction on the philosophy of fidelity monitoring, (b) review of the theoretical underpinnings of the CPP intervention, and (c) thorough review of the Fidelity Checklist Coding Manual. Raters took notes throughout each recording and assigned codes at the end of the recording.
During pilot testing of the Fidelity Checklist and coding procedures, raters reviewed discrepancies in coding, clarified directions, and provided examples to guide item coding. During the coding period of this study, raters reviewed ongoing coding issues and discussed areas of agreement and disagreement.
Two estimates of interrater reliability for the Adherence and Competence Scales were conducted: percentage agreement between independent raters and intraclass correlation coefficients (ICCs). An internal consistency estimate (Cronbach’s alpha) was conducted for the Competence Scale. Internal consistency estimates were not conducted on the Adherence Scale because items varied by group session.
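Both reliability estimates are straightforward to compute from the raw ratings. The sketch below shows percentage agreement for two raters and a one-way random-effects ICC(1,1); the paper does not state which ICC form was used, so this particular form is an assumption chosen for illustration:

```python
import numpy as np

def percent_agreement(r1, r2) -> float:
    """Share of items on which two raters gave identical codes."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    return float((r1 == r2).mean())

def icc_oneway(ratings) -> float:
    """One-way random-effects ICC(1,1) for an (n_targets x k_raters)
    matrix, computed from the between- and within-target mean squares.
    (Illustrative; the study does not report which ICC form it used.)"""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)          # between targets
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))   # within targets
    return (msb - msw) / (msb + (k - 1) * msw)
```

Percentage agreement ignores chance agreement and rating variability, which is one reason a high agreement rate can coexist with a more modest ICC, as the Results section reports for the Adherence Scale.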
A repeated-measures analysis of variance was used to examine changes in Adherence and Competence Scale ratings. Improvements in parent report of child behavior were calculated as the difference between preintervention and postintervention CBCL scores. Correlations were used to assess the relationship of mean adherence and competence with mean group outcome measures (attendance, satisfaction, engagement, and improvements in child behavior problems). The p value for significance was set at .10 because of the small number of groups, which greatly reduced the power of the analysis, and the preliminary nature of the study (Burns & Grove, 2005).
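The linear and quadratic trend tests from the repeated-measures ANOVA correspond to orthogonal polynomial contrasts over the 12 sessions. A sketch of how such per-group contrast scores could be computed (illustrative only; the study's software and exact contrast coding are not reported):

```python
import numpy as np

def poly_contrasts(n_timepoints: int):
    """Orthogonal linear and quadratic contrast coefficients for equally
    spaced timepoints, built by Gram-Schmidt on centered t and t**2."""
    t = np.arange(n_timepoints, dtype=float)
    lin = t - t.mean()
    quad = t ** 2
    quad = quad - quad.mean()
    quad = quad - (quad @ lin) / (lin @ lin) * lin  # orthogonalize vs. linear
    return lin / np.linalg.norm(lin), quad / np.linalg.norm(quad)

def trend_scores(data):
    """Per-group linear and quadratic contrast scores. Testing each set of
    scores against zero (e.g., with a one-sample t test) mirrors the linear
    and quadratic trend tests of a repeated-measures ANOVA."""
    data = np.asarray(data, dtype=float)
    lin, quad = poly_contrasts(data.shape[1])
    return data @ lin, data @ quad
```

A single-peaked (inverted-U) trajectory like the adherence pattern reported below yields a near-zero linear score and a negative quadratic score for a group.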
Mean percentage agreement across two independent raters was 94% (range = 89%–100%) on the Adherence Scale and 85% (range = 75%–97%) on the Competence Scale. The ICC is used to assess rating reliability by comparing the variability of different ratings of the same participant (group) to the total variation across all ratings and all participants. The ICCs were .69 and .91 on the Adherence and Competence Scales, respectively. Cronbach’s alpha reliability was .70 for the Competence Scale.
Group-level data for the 108 CPP group sessions used in the study are presented in Table 4. Average attendance was 50% of group sessions (range = 38%–69%). Of the 103 enrolled participants, 11 (10%) parents enrolled but never attended a group session. Without these 11 nonattenders, mean group attendance increased to 60% (range = 38%–92%).
Mean score on the Adherence Scale was .89, indicating that group leaders adhered to the protocol 89% of the time (range = 74%–95%). Mean score on the Competence Scale was 2.62, suggesting that group leaders demonstrated a fairly high degree of competence across all group sessions (range = 2.30–2.86 on a 3-point scale). Overall, parent satisfaction with the quality of the intervention and group leaders’ facilitation across all sessions was high (M = 3.73, range = 3.46–4.00 on a 4-point scale). Mean parent engagement score was 3.28 (range = 2.75–3.79 on a 4-point scale).
Percentage agreement between group leader adherence self-report and independent ratings of audiotaped parent groups using the Adherence Scale was 85% (range = 70%–92%) across sessions. The item “post the ground rules” was removed from the analysis because rating this item required a visual cue and was not amenable to rating from audio recordings. Most (87%) disagreements were cases in which the independent rater coded a behavior as not occurring while the group leader reported that it had occurred.
Ratings of group leader adherence and competence were assessed over time and by session. For adherence, there was a significant single-peaked quadratic effect over time, F(1, 8) = 12.67, p < .01, and a significant linear effect, F(1, 8) = 4.85, p < .10. There was no significant linear or quadratic effect over time for competence. These findings suggest that (a) adherence initially improved over time, peaked at Session 7, and then declined below the mean by the 12th session, whereas (b) competence was stable across time.
Correlations among implementation fidelity and parent attendance, satisfaction, and engagement are presented in Table 5. Outcomes of these process variables were aggregated to the group level (n = 9). There was a positive relationship between mean adherence scores and parent engagement (r = .50, p < .10) and attendance (r = .45, p < .10). Mean competence scores were correlated significantly with parent satisfaction (r = .64, p < .05). There were no relationships between adherence and parent satisfaction or between competence and parent attendance or engagement.
Children’s behavior improved from baseline to postintervention. Mean improvements in parent report of child behavior problems were 2.47 (SD = 3.58) on the Internalizing Scale and 2.50 (SD = 3.97) on the Externalizing Scale. However, there were no significant correlations between mean adherence and competence scores and parent reports of improved child behavior problems.
The current study was conducted to establish the reliability and initial validity of the Fidelity Checklist measuring implementation fidelity of the CPP parenting program and to further understand the role of fidelity in implementing evidence-based interventions. As noted by Zvoch (2009), understanding the measurement and analysis of implementation fidelity data highlights the processes impeding or promoting successful implementation of interventions.
Overall scale ICCs on the Adherence and Competence Scales were adequate, and there was a high percentage of rater agreement on both scales. Although raters had high interrater agreement (94%) across sessions on the Adherence Scale, the ICC was relatively low (.69). Lack of item variability, dichotomous scaling, and changes in adherence items across sessions may account for the lower ICC.
Although there was substantial agreement between group leader self-report and independent ratings on the Adherence Scale, the results of this study suggest that group leaders tended to report higher adherence to the protocol than did independent raters. This suggests that self-report fidelity ratings may be inflated by positive bias. Another explanation is that group leaders do not fully appreciate the importance of their reports of adherence and report adherence to all components by rote at the end of a group session. Discussing with group leaders the need for transparency in reporting adherence, and the utility of this information, may decrease this bias. Nevertheless, the substantial agreement between group leader and independent rater reports of adherence suggests that self-report may provide a relatively good estimate of adherence to intervention protocol when no other method is available.
One strength of this study is the assessment of ratings on the Adherence and Competence Scales over time. In this study, group leader competence was highly stable over time. Although mean adherence scores remained high across all sessions, adherence did vary by time and session: adherence initially improved, peaked at Session 7, and then declined from the 7th to the 12th session.
There are several explanations for changes in adherence over time. First, because audio-recording procedures were initiated in this study, it is possible that adherence was higher at the beginning of the intervention because group leaders were aware of being recorded but later habituated to the presence of the recorder; later sessions may therefore provide a truer measure of adherence to protocol. This suggests that implementation fidelity methods should include audio recordings of all group sessions, with a random selection of recordings used to obtain accurate estimates of implementation fidelity.
A second explanation of changes in adherence over time is that, as group leaders become more confident in their conduct of CPP groups and more familiar with the parents in their group, they may adjust the protocol in response to what they believe the parents in their group most need. Although research relating adherence and outcomes has shown variable results (Barber et al., 2006; Hogue et al., 2008; Huey et al., 2000), moderate adherence may be a better predictor of outcome than strict adherence is (Barber et al., 2006). It is possible that adjusting to the needs of the individuals in the group while staying true to the theoretical underpinnings of the intervention may be more related to outcomes than strict adherence.
Finally, changes in adherence over time may be representative of group leader drift from the intervention protocol over time. In response, group leader supervisors or technical assistance personnel should discuss and promote adherence to intervention protocol midway through the intervention, with the goal of maintaining adherence through the remainder of intervention sessions.
Mean adherence scores were correlated positively with attendance and engagement in the intervention, whereas competence scores were correlated positively with group participant reports of satisfaction. Effect sizes from these associations ranged from .45 to .64. According to Cohen (1988), these effect sizes are considered moderate to large. In addition, adherence and competence were related to different outcomes, supporting the concept that adherence and competence are distinct components. However, what remains unknown is the potential role of group leader adherence and competence as mediators of process variables with more distal outcomes of the intervention (e.g., parenting practices and child behavior outcomes) and what factors (e.g., context or intervention environment) might converge with fidelity to influence intervention outcomes.
Child behavior problems improved substantially from baseline to postintervention. However, these improvements were not related significantly to group leader fidelity ratings. This may be due to insufficient power. It is also possible that 3 months (the length of the intervention) is too brief. Prior research indicates that the greatest improvements in parents’ ratings of child behavior problems occur over a longer time, after parents have had many opportunities to apply what they have learned and to see how the changes are influencing their children’s behavior (Gross, Garvey, et al., 2009). Future research will be focused on the relationship of fidelity to more distal outcomes from parent training.
The high adherence and competence rates found in this study support the efficacy of CPP group leader training and ongoing supervision of group leaders for achieving a relatively high level of fidelity. Group leaders in this study may be considered early adopters of the intervention and, as a result, may have been more motivated, perceived a greater benefit of the intervention, and supported the theoretical foundations of the intervention (Rogers, 2003). However, as dissemination efforts persist, changes in levels of investment may result in changes in implementation fidelity. The potential for this shift highlights the need for a reliable and valid measure of ongoing implementation fidelity. Further, fidelity data can provide timely clinical information to group leaders for supervision during dissemination and inform changes in group leader training protocols.
There were several limitations to this study worth noting. First, because the unit of analysis for fidelity is at the group level, aggregation of data significantly decreased the power to detect differences and find significant relationships. Future research will include a larger sample of parent groups for examining the relationship between fidelity and intervention outcomes. A second limitation is the lack of variability in overall mean scores of fidelity. Although this finding is positive in indicating overall good adherence and competence in delivering the CPP intervention, limited range and lack of variability in fidelity items limit the ability to understand the distinct items that influence outcomes. Further work will be focused on specific item analysis on the Fidelity Checklist. Finally, there is limited generalizability of these findings to other groups and settings. However, because the purpose of the study was to establish the reliability and validity of the checklist, in the future, generalizability can be assessed across settings and groups. Although the Adherence Scale is specific to the CPP intervention, the Competence Scale may be applied to other group-based interventions using a similar facilitation model.
This study adds to the growing body of knowledge related to fidelity measurement and assessment (Carroll et al., 2007; Eames et al., 2008; Lee et al., 2008). This study advances implementation fidelity research by utilizing multi-informant, multimethod assessments of fidelity and outcomes and assessing adherence and competence as discrete components. This study suggests that the Fidelity Checklist is a feasible, reliable, and valid measure of group leader implementation fidelity to the CPP and is an important methodological step in taking evidence-based interventions to scale. Assuring implementation fidelity provides confidence that the intervention is being delivered as intended to effect the desired change and improve the lives of parents and young children.
The authors thank Anil Chacko, PhD, assistant professor, State University of New York at Buffalo, and Wrenetha Julion, PhD, RN, associate professor, Rush University College of Nursing, for their comments during the development of the Fidelity Checklist, and the participating day care centers, parents, and group leaders for their support. This study was supported by grants from the National Institute for Nursing Research to D. Gross (Grant 5R01 NR004085), the Chicago Department of Children and Youth Services, a Sigma Theta Tau Research Award, and the Golden Lamp Society of Rush University College of Nursing Dissertation Award.
Susan M. Breitenstein, Rush University College of Nursing, Chicago, Illinois.
Louis Fogg, Rush University College of Nursing, Chicago, Illinois.
Christine Garvey, Rush University College of Nursing, Chicago, Illinois.
Carri Hill, Institute for Juvenile Research, Department of Psychiatry, University of Illinois at Chicago.
Barbara Resnick, University of Maryland, Baltimore.
Deborah Gross, Johns Hopkins University, Baltimore, Maryland.