Ambul Pediatr. Author manuscript; available in PMC 2009 January 1.
PMCID: PMC2245876

Development and Psychometric Assessment of the Collaborative Care for Attention Deficit Disorders Scale

James P. Guevara, MD, MPH,1,2 Paul E. Greenbaum, PhD,3 David Shera, ScD,4 Judy A. Shea, PhD,5 Laura Bauer, BS,1 and Donald F. Schwarz, MD, MPH2,6



Objective
To describe the development and assess the validity and reliability of the Collaborative Care for Attention Deficit Disorders Scale (CCADDS), a measure of collaborative care processes for children with ADHD who attend primary care practices.


Methods
Collaborative care was conceptualized as a multidimensional construct. The 41-item CCADDS was developed from an existing instrument, a review of the literature, focus groups, and an expert panel. The CCADDS was field tested in a national mail survey of 600 randomly selected practicing general pediatricians, stratified by geographic region. Psychometric analysis included assessments of factor structure, construct validity, and internal consistency.


Results
The overall response rate was 51%. The majority of respondents were male (56%), aged 46 years or older (59%), and white (69%). Common factor analysis identified 3 subscales: beliefs, collaborative activities, and connectedness. Internal consistency reliability (coefficient α) was 0.91 for the overall scale and ranged from 0.80 to 0.89 for the subscales. The CCADDS correlated with a validated measure of provider psychosocial orientation (r = −0.36, p < 0.001) and with self-reported frequency of mental health referrals or consultations (r = −0.24 to r = −0.42, p < 0.001). CCADDS scores were similar among physicians by race/ethnicity, gender, age group, and practice location.


Conclusions
Scores on the CCADDS were reliable for measuring collaborative care processes in this sample of primary care clinicians who treat children with ADHD. Evidence for the validity of the scores was limited. Future research is needed to confirm the instrument's psychometric properties and factor structure and to provide guidance on score interpretation.

Keywords: mental health, ADHD, primary health care, health care surveys


The current mental health system for children has been described as fragmented and inefficient.1,2 The President's New Freedom Commission on Mental Health has called for better coordination of mental health services.2 In this groundbreaking report, fragmentation in the mental health system was identified as one of three obstacles preventing Americans from receiving high-quality mental health care. The current system was viewed as inefficient and poorly integrated, and the commission called for a fundamental transformation in how mental health care is delivered. This proposed transformation would require better collaboration between the physical and mental health systems to bridge the current gap.

Collaborative care, which seeks to bridge the gap between systems, has been defined as primary care clinicians, specialists, nurses, and other professionals working with family members to develop a shared definition of a patient's problem, set targeted goals, develop a comprehensive treatment plan, and provide support and problem-solving to optimize adherence and follow-up.3 Collaborative care involves the coordination of services and resources among providers and families to maximize children's potential and provide optimal care.4 Unfortunately, few instruments exist to measure collaborative care, and most of these have limited use in primary care settings.5–7 First, they are based on a theoretical framework known as Systems of Care, which espouses a core set of values and principles deemed important in the publicly funded mental health system.7–9 This framework, however, does not routinely involve primary care in the treatment of children with serious emotional disorders.10 In addition, these instruments assume that collaborating agencies are publicly funded, and they include measures of interagency administrative and financial ties.5,6 However, most primary care clinicians work in private or hospital-based practices that do not have administrative or financial relationships with public agencies.

In this paper, we describe the development of a novel instrument, the Collaborative Care for Attention Deficit Disorders Scale (CCADDS), designed to measure collaborative care processes for children with attention-deficit/hyperactivity disorder (ADHD) who receive treatment in primary care. In addition, we assess the psychometric properties of the CCADDS using a national survey of primary care pediatricians. If evidence suggests that its scores are valid and reliable, the CCADDS may help clinicians and health care organizations measure collaboration for quality improvement initiatives in ADHD management.


Methods

Instrument Development

The CCADDS was adapted from the Interagency Collaboration Scale (IACS) Version 5.1, a provider self-report instrument developed to measure collaboration among community agencies that serve children with serious emotional disorders.5,6 The IACS was developed based on a literature review of Systems of Care principles; face-to-face interviews with community mental health service providers, case managers, and administrators; and review by an expert panel. The IACS has undergone field testing with samples of respondents from community mental health agencies. Based on the results of factor analyses, the IACS v.5.1 contains 31 items organized into 3 domains: values, activities, and connectedness. Responses to items are rated on a 5-point scale. Psychometric properties of the IACS and its subscales, including test-retest reliability (r = 0.78 to 0.86), internal consistency (α = 0.72 to 0.97), and concurrent validity with the Systems of Care Practice Review (r = 0.32 to 0.56), were adequate in study samples.

Items for the CCADDS were derived from a review of the literature on the coordination of care for children with ADHD11–22 and from focus groups conducted with primary care pediatricians, teachers and school staff, therapists, and parents of children with ADHD.23 Nineteen new items, drawn from the literature review and the focus group study, were added to the IACS to create a 50-item instrument with four proposed domains: beliefs, activities, individual connectedness, and group connectedness. One IACS domain, connectedness, was divided into individual and group connectedness, because focus group themes suggested the importance of collaboration at both the organizational and the individual provider level. Six existing IACS items that were similar to items identified from the literature review and focus groups were revised to reflect ADHD-specific content. Responses to all items were rated on a 5-point scale (1 = not at all, 2 = little, 3 = somewhat, 4 = much, 5 = very much).

The scale was pilot tested for readability, clarity, and content using a mail survey of 50 primary care pediatricians from a metropolitan community. Results of the pilot test (response rate 60%) were reviewed by a local expert panel consisting of a general pediatrician, a developmental/behavioral pediatrician, a child psychologist, a child psychiatrist, and an adolescent medicine physician. Individual items were examined for clarity, content, and appropriateness for primary care management. Based on group consensus, nine items were dropped because of poor content or lack of clarity, and several retained items were revised to improve clarity or content appropriateness. The revised CCADDS therefore consisted of 41 items organized into 4 domains for ease of completion, with higher scores reflecting more positive collaborative care attitudes and practices.


The CCADDS was field tested using a national mail survey of practicing general pediatricians. Pediatricians were identified using the American Medical Association's 2004 Directory of Physicians in the United States,24 which lists over 800,000 practicing and retired physicians in the U.S. In the directory, physicians report their medical specialty, address, and year of graduation. To be eligible for the survey, physicians must have self-identified as pediatricians and have had contact information (an address, with or without a telephone number) available in the directory. Pediatricians were excluded if they were retired or in specialty care practice.

Physicians in the eligible sample (N = 53,789) were stratified into 4 geographic regions to produce a nationally representative sample. Approximately 150 pediatricians from each region were randomly selected using computer-generated random numbers to achieve an initial sample of 600 (Figure 1). Eligible physicians were mailed a questionnaire, a recruitment letter, a declination card, a self-addressed stamped return envelope, and a $10 American Express gift certificate. If physicians did not respond, two additional mailings were sent and a telephone call was placed at approximately 2-week intervals. If a survey was returned unopened with no forwarding address, that subject was excluded as ineligible and an additional eligible physician from the same region was randomly selected as a replacement. We excluded and did not replace physicians who responded and indicated they were retired or in specialty care practice.
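To make the sampling procedure concrete, the sketch below illustrates stratified random selection with replacement of undeliverable surveys. It assumes the eligible frame is available in memory as lists of physician identifiers keyed by region; the function and variable names are illustrative and are not from the original study protocol.

```python
import random

def draw_stratified_sample(frame, per_region=150, seed=2004):
    """Draw a simple random sample of ~150 physicians from each geographic
    stratum, mirroring the 4-region design that yielded an initial sample
    of 600. `frame` maps region name -> list of eligible physician IDs
    (a stand-in for the AMA directory listing)."""
    rng = random.Random(seed)
    return {region: rng.sample(ids, per_region) for region, ids in frame.items()}

def replace_undeliverable(frame, sample, region, undeliverable_id, seed=None):
    """Swap a physician whose survey came back unopened (no forwarding
    address) for a newly drawn eligible physician from the same region."""
    rng = random.Random(seed)
    drawn = set(sample[region])
    pool = [pid for pid in frame[region] if pid not in drawn]
    sample[region].remove(undeliverable_id)
    sample[region].append(rng.choice(pool))
    return sample
```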

Figure 1
Flow diagram of mail survey


Pediatricians completed a questionnaire that contained questions on demographic characteristics, mental health activities, the modified Physician Belief Scale (PBS), and the CCADDS. Demographic characteristics included age, gender, race/ethnicity, years in practice, practice location, and the average number of patients seen per week. Mental health activities were assessed with four questions, developed ad hoc, regarding the frequency of co-located mental health providers in primary care practices, consultation with mental health providers, referral to mental health providers, and receipt of referral information from mental health providers. These questions were piloted for clarity and readability during the initial pilot testing of the CCADDS. The frequency of each activity was measured on a 4-point rating scale (always, often, sometimes, never), with lower scores indicating a greater frequency of self-reported activity. The modified PBS is a validated 14-item scale that measures physician psychosocial orientation.25 Scores range from 14 to 70, with lower scores reflecting stronger psychosocial orientation. The PBS contains two subscales, beliefs and burden. The beliefs subscale assesses a provider's attitudes toward mental health issues (e.g., "my patients and/or their caregivers do not want me to investigate psychosocial problems"). The burden subscale assesses a provider's sense of burden in providing mental health treatment (e.g., "one reason I do not consider information about psychosocial problems is the limited time I have available"). Physician psychosocial orientation, as measured by the modified PBS, and the frequency of mental health activities were included in the questionnaire to assess their correlation with collaborative care.
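As an illustration of how these questionnaire elements might be scored, the snippet below sums the 14 modified-PBS items (range 14 to 70, lower totals reflecting stronger psychosocial orientation) and codes the 4-point activity ratings so that lower values indicate more frequent activity. The item column names and function names are hypothetical placeholders, not the published item identifiers.

```python
# A minimal scoring sketch; item names are hypothetical placeholders.
PBS_ITEMS = [f"pbs_{i:02d}" for i in range(1, 15)]          # 14 five-point items
ACTIVITY_CODES = {"always": 1, "often": 2, "sometimes": 3, "never": 4}

def score_pbs(responses):
    """Sum the 14 modified-PBS items for one respondent.
    `responses` maps item name -> rating (1-5); totals range 14-70,
    with lower totals reflecting stronger psychosocial orientation."""
    return sum(responses[item] for item in PBS_ITEMS)

def code_activity_frequency(label):
    """Convert a mental health activity response label to its 1-4 code,
    where lower codes indicate a greater self-reported frequency."""
    return ACTIVITY_CODES[label.strip().lower()]
```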


Summary statistics including means, standard deviations, and proportions were ascertained for individual items and for overall and subscale scores. The percentage of missing data was determined for each item. In addition, the percentage of respondents with the lowest possible score (floor effect) and the highest possible score (ceiling effect) was determined for each subscale.
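A minimal sketch of these descriptive checks, assuming subscale scores and a respondents-by-items response matrix are held in NumPy arrays with NaN marking missing responses (the array and function names are illustrative, not from the original analysis code):

```python
import numpy as np

def missing_rate(responses):
    """Percentage of missing item responses in a respondents-by-items array
    (NaN marks a missing response)."""
    x = np.asarray(responses, dtype=float)
    return 100.0 * np.isnan(x).mean()

def floor_ceiling(subscale_scores, min_score, max_score):
    """Percentage of respondents at the lowest possible subscale score
    (floor effect) and at the highest possible score (ceiling effect)."""
    s = np.asarray(subscale_scores, dtype=float)
    s = s[~np.isnan(s)]
    return 100.0 * np.mean(s == min_score), 100.0 * np.mean(s == max_score)
```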

Factor Analysis

Exploratory factor analysis of the correlation matrix was conducted on the overall scale to determine how items grouped into factors, because the factor structure of the instrument had not been previously determined.26 Scree plots, which graph eigenvalues against factor number, were used to determine the number of factors visually.27 The inflection point where the graph turns horizontal was taken to reflect diminishing increases in eigenvalues for corresponding increases in factor number and thus to indicate the appropriate number of factors. Factor solutions were rotated orthogonally (Varimax) and obliquely (Promax), and we sought the rotation that produced the highest number of saliently loading items (λ ≥ 0.30) with the fewest multiply loading items, i.e., items with salient loadings on more than 1 factor. We also examined the hyperplane count, i.e., the number of loadings between +0.10 and −0.10, and sought the rotated solution with the highest hyperplane count. Non-salient items and multiply loading items were dropped from the scale to obtain a simple structure.
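The original factor analysis was run in SAS; as a rough illustration of the criteria described above, the NumPy sketch below computes the eigenvalues used for a scree plot and tallies salient loadings, cross-loadings, and the hyperplane count for a rotated loading matrix. The rotation itself is assumed to come from a factor analysis routine and is not reimplemented here.

```python
import numpy as np

def scree_eigenvalues(responses):
    """Eigenvalues of the item correlation matrix, sorted in descending
    order, for plotting eigenvalue vs. factor number (assumes a complete
    respondents-by-items array with no missing values)."""
    r = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    return np.sort(np.linalg.eigvalsh(r))[::-1]

def loading_summary(loadings, salient=0.30, hyperplane=0.10):
    """Summarize a rotated items-by-factors loading matrix: items with at
    least one salient loading (|loading| >= 0.30), items loading saliently
    on more than one factor, and the hyperplane count (loadings within
    +/- 0.10 of zero)."""
    lam = np.abs(np.asarray(loadings, dtype=float))
    salient_per_item = (lam >= salient).sum(axis=1)
    return {
        "salient_items": int((salient_per_item >= 1).sum()),
        "cross_loading_items": int((salient_per_item > 1).sum()),
        "hyperplane_count": int((lam <= hyperplane).sum()),
    }
```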


Internal consistency reliability of the CCADDS and its domains was estimated using Cronbach's α, a measure of the average split-half correlation among all possible combinations of items.28 An α coefficient of at least 0.70 has been recommended for group comparisons, and an α coefficient of at least 0.90 has been recommended for individual comparisons.29 Item-total correlations, which represent the correlation of an individual item with the sum of all other items in the scale, were also examined.30 Items with low item-total correlations (r < 0.20) were dropped from the corresponding subscales, since a low item-scale correlation suggests poor correlation with the other items in the scale. Intercorrelations among subscales were examined to see how closely the subscales related to each other.
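For reference, coefficient α and the corrected item-total correlations described here can be computed directly from the item responses. A short NumPy sketch follows, assuming complete responses in a respondents-by-items array (function names are illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items array of complete responses:
    (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

def corrected_item_total(items):
    """Correlation of each item with the sum of all *other* items in the scale;
    values below 0.20 flag items that fit poorly with the rest of the scale."""
    x = np.asarray(items, dtype=float)
    total = x.sum(axis=1)
    return np.array([np.corrcoef(x[:, j], total - x[:, j])[0, 1]
                     for j in range(x.shape[1])])
```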

Construct Validity

Construct validity (i.e., how well a scale's scores move in hypothesized directions) of the CCADDS and its subscales was examined by correlating its scores with PBS total and subscale scores and with measures of the frequency of mental health activities.30 We hypothesized that collaborative care would be associated with physician psychosocial orientation, since data suggest that greater psychosocial orientation may be associated with clinician decisions to jointly manage behavioral problems with mental health providers.31 We also hypothesized that collaborative care would be associated with the self-reported frequency of collaborative mental health activities such as referral and consultation. To assess construct validity, we correlated the CCADDS total and subscale scores with those of the modified PBS using Pearson correlation coefficients and with the frequency of reported mental health activities (co-location of services, consultation, referral, and receipt of information) using Spearman correlation coefficients. We adjusted p-values for multiple comparisons using Sidak's correction.32,33 We designated correlations as small (r = 0.10–0.29), medium (r = 0.30–0.49), or large (r ≥ 0.50).34
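A hedged sketch of this correlation analysis using SciPy (the original analyses were run in Stata, and the variable names below are illustrative): Pearson correlations for the continuous PBS scores, Spearman correlations for the ordinal activity ratings, and a Sidak adjustment of the resulting p-values.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def sidak_adjust(p_values):
    """Sidak-adjusted p-values for m simultaneous tests: 1 - (1 - p)**m."""
    p = np.asarray(p_values, dtype=float)
    return 1.0 - (1.0 - p) ** len(p)

def construct_validity(ccadds_scores, pbs_scores, activity_ratings):
    """Correlate a CCADDS score with the modified PBS (Pearson) and with
    ordinal mental health activity ratings (Spearman), then Sidak-adjust
    the p-values across all comparisons."""
    labels, coefs, raw_p = ["PBS total"], [], []
    r, p = pearsonr(ccadds_scores, pbs_scores)
    coefs.append(r); raw_p.append(p)
    for name, ratings in activity_ratings.items():
        rho, p = spearmanr(ccadds_scores, ratings)
        labels.append(name); coefs.append(rho); raw_p.append(p)
    return list(zip(labels, coefs, sidak_adjust(raw_p)))
```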

Discriminant Validity

Discriminant validity of the CCADDS was assessed by comparing scores among groups of subjects thought to be similar or different in their collaborative capabilities. We compared scores among groups stratified by race, sex, age group (≤45 years, >45 years), and practice location (urban, suburban, rural) using t-tests and one-way ANOVA. We adjusted for multiple comparisons involving scale and subscale scores using a Bonferroni correction, so that p < 0.003 was considered statistically significant.35 We hypothesized that scores would be lower for subjects who practiced in rural communities, since rural communities may have fewer mental health resources. We hypothesized that scores would not differ by sex, race, or age group, since there was no published information on differences by these characteristics. Factor analysis was performed using SAS version 9.12 (SAS Institute, Cary, NC), and the remaining statistical analyses were performed using Stata version 8.0 (Stata Corporation, College Station, TX).36,37 The study was approved by the Institutional Review Board of The Children's Hospital of Philadelphia.
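These group comparisons can be sketched as below, with SciPy standing in for the original Stata analyses. The Bonferroni threshold is written as alpha divided by the number of scale/subscale-by-characteristic comparisons; the exact count behind the reported p < 0.003 cutoff is our assumption (16 comparisons would give 0.05/16 ≈ 0.0031).

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway

def compare_groups(scores_by_group, alpha=0.05, n_comparisons=16):
    """Compare CCADDS scores across subgroups: a t-test when there are two
    groups (e.g., sex or age group) and one-way ANOVA for three or more
    (e.g., practice location), judged against a Bonferroni-corrected
    threshold of alpha / n_comparisons (n_comparisons is an assumption)."""
    groups = [np.asarray(g, dtype=float) for g in scores_by_group.values()]
    if len(groups) == 2:
        stat, p = ttest_ind(groups[0], groups[1])
    else:
        stat, p = f_oneway(*groups)
    threshold = alpha / n_comparisons
    return {"statistic": float(stat), "p_value": float(p),
            "threshold": threshold, "significant": p < threshold}
```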


Results

Of the initial mailing to 600 eligible pediatricians, 100 surveys were returned unopened without a forwarding address, and those subjects were replaced. The overall response rate was 50.5% (Figure 1). We excluded 61 respondents (10.2%) who reported being retired or in specialty care practice and two respondents who completed the demographics portion of the questionnaire but did not complete the CCADDS. Non-respondents included 43 (7.2%) who returned declination cards and 254 (42.3%) who never responded. Respondents did not differ (p > 0.05) from non-respondents with respect to gender, years in practice, or region of the country. Participation rates by region varied from 47.3% to 53.4%, but differences in these rates were not significant (p > 0.05).

Demographic characteristics of respondents are shown in Table 1. Respondents were predominantly male, older than 45 years, white, and practicing in suburban communities. There were few Hispanic or African-American respondents. Respondents reported an average of 16.6 (SD 9.6) years of practice experience and saw an average of 103.2 (SD 52.7) patients per week. Their mean PBS subscale scores were 13.1 for beliefs and 15.8 for burden, similar to those of pediatricians who participated in the Child Behavior Study, a large nationwide study of primary care clinicians (mean beliefs score 12.8, mean burden score 15.3).25

Table 1
Demographic Characteristics of Respondents

Factor Analysis

The scree plot generated from exploratory factor analysis suggested the presence of three or four factors; the addition of factors after the fourth resulted in a relatively horizontal, downward-sloping line. The 3- and 4-factor solutions were examined for plausibility. The 3-factor solution was selected because it yielded a plausible factor structure in which items corresponded to the beliefs, activities, and connectedness domains, as in the IACS. The 4-factor solution was not deemed plausible.

The 3-factor structure explained 34% of the total variance among items. The 3-factor solution was rotated orthogonally (Varimax) and obliquely (Promax). The oblique rotation generated a greater hyperplane count (59 vs. 33) and a simpler solution, with no cross-loading items, than the orthogonal rotation. We therefore selected the oblique solution and dropped the four items without salient loadings, yielding an overall scale with 37 items and 3 subscales (beliefs, activities, and connectedness). Items in the beliefs subscale reflect attitudes and beliefs regarding the importance of collaborating with schools and mental health agencies. Items in the activities subscale measure specific activities that involve collaboration, e.g., developing referral arrangements or coordinating treatment plans. Items in the connectedness subscale measure how well clinicians or practices collaborate with community organizations.


The rate of missing data was low (0.4%), with individual items having at most 3 missing responses out of 240 respondents (Table 2). Overall, only 15 subjects (6.3%) had any missing items. There were no differences between those with and without missing items with regard to age, race, sex, practice location, or years in practice (p > 0.05). We therefore assumed that missing items were missing completely at random and did not impute missing values. There were no floor effects. The beliefs domain had the highest ceiling effect, with 4% of respondents reporting the highest possible score. Ceiling effects for the other two subscales were low (0.4%).

Table 2
CCADDS Items and Internal Consistency*

The internal consistency of all three subscales (α = 0.80–0.89) and of the overall scale (α = 0.91) was good, suggesting that items within subscales were generally associated with each other. Corrected item-total correlations were in an appropriate range for all items (r > 0.20). Selective removal of individual items did not improve the internal consistency of any subscale.

Construct Validity

The CCADDS subscale scores were moderately correlated with each other; correlation coefficients ranged from 0.31 (beliefs and connectedness) to 0.51 (activities and connectedness), indicating medium intercorrelations. The CCADDS total and subscale scores had small to medium correlations with the PBS total and subscale scores in the expected direction, with two exceptions (Table 3): the beliefs subscale of the CCADDS did not correlate significantly with the burden subscale of the PBS, and the activities subscale of the CCADDS did not correlate significantly with the beliefs subscale of the PBS. In general, the CCADDS total score correlated best with the PBS total score and with the reported frequency of mental health consultation and receipt of referral information.

Table 3
Correlations between Collaborative Care for ADD Scales (CCADDS) and Physician Belief Scales (PBS) and Frequency of Mental Health Activities.

Intercorrelations between CCADDS total and subscale scores and the reported frequency of mental health activities were also examined (Table 3). CCADDS total and subscale scores correlated best with the reported frequency of mental health consultations (r = −0.17 to −0.42) and least with the reported frequency of on-site mental health providers (r = −0.05 to −0.22). Only the connectedness subscale scores and the total scores correlated significantly with all mental health activities. The beliefs subscale scores correlated most weakly with the mental health activities.

Discriminant Validity

Discriminant validity of the CCADDS total scores among subpopulations of the sample was examined. As hypothesized, CCADDS total scores were not significantly different by sex, age, or race-ethnicity. Although we had postulated differences in CCADDS scores by practice location, there were no significant differences after adjustment for multiple comparisons.


Discussion

The CCADDS was developed from a review of the literature, focus groups, and an expert panel and was adapted from a preexisting instrument designed to measure interagency collaboration. In this nationally representative sample, the CCADDS demonstrated evidence of score reliability and limited evidence of validity for measuring collaborative care processes. Exploratory factor analysis revealed the presence of three subscales. The low rate of missing data indicates that pediatric providers can understand and respond to the items. The absence of significant floor and ceiling effects indicates that the instrument captures the range of possible responses. The internal consistency reliability of the scale and subscales was strong and surpassed the 0.70 minimum standard for making group comparisons, and the overall scale's internal consistency surpassed the 0.90 minimum standard for making individual comparisons. The CCADDS correlated in expected directions with a validated measure of physician psychosocial orientation and with the self-reported frequency of mental health activities. Total and subscale scores were similar among physicians by age, gender, race/ethnicity, and practice location.

The CCADDS overall and subscale scores correlated modestly to moderately, in expected directions, with other measures of physician psychosocial orientation and mental health activities, with a few exceptions. First, the CCADDS beliefs subscale did not correlate well with the burden subscale of the PBS, the frequency of on-site mental health providers, or the frequency of receipt of referral information; it correlated best with the beliefs subscale of the PBS. Second, the CCADDS activities subscale did not correlate well with the beliefs subscale of the PBS or with the frequency of mental health referrals. These findings suggest that clinicians' attitudes toward collaborating with schools and mental health providers are related to their overall views toward mental health care, and that engaging in collaborative activities has less to do with their attitudes toward collaboration than with time pressures and other burdens they may experience. It is not clear why engaging in collaborative activities, which includes an item on developing referral arrangements, did not correlate better with the frequency of mental health referrals.

Several limitations to these findings warrant discussion. First, our overall response rate of 51% was low for mail surveys in general but consistent with mail surveys of physicians. Asch and colleagues reported an average response rate of 54% for physician mail surveys, similar to ours, versus 68% for non-physician mail surveys.38 In addition, we found no significant differences between respondents and non-respondents with regard to sex, years in practice, or region of the country, although it is not clear whether respondents differed from non-respondents in other important ways. Second, our absolute sample size was probably insufficient for analyses of important population subgroups, particularly minority physicians. However, we used a sampling scheme that allowed us to select a nationally representative pool of pediatricians and improve the generalizability of our findings. The study sample was similar to that of Pediatric Research in Office Settings (PROS), a national practice-based research network administered by the American Academy of Pediatrics, except that our sample had a greater proportion of physicians over age 45 years (59% vs. 41%).39

Despite these limitations, our data suggest that measuring collaborative care in primary care settings is feasible. Efforts to improve behavioral health care collaboration in primary care settings can be augmented with validated instruments that measure how well collaboration occurs. Such efforts can be seen as responses to federal calls to improve the coordination of mental health care for children.2 CCADDS scores demonstrated good reliability and preliminary evidence of validity in this nationally representative sample of practicing general pediatricians. The instrument may assist quality improvement efforts targeting ADHD treatment in primary care settings, particularly attempts to improve collaboration between practices, schools, and mental health providers. Further study, however, is needed to confirm its psychometric properties and factor structure and to provide guidance on score interpretation and responsiveness to change prior to widespread implementation.


Acknowledgments

This study was funded by a grant from the National Institute of Mental Health, K23 MH065696. We thank Snejana Nihtianova, M.S., for her help with the statistical analysis and survey mailings. We also thank Drs. Nathan Blum, Thomas Power, Josephine Elia, Donald Schwarz, and Flaura Winston for their participation on our expert panel and for offering suggestions for instrument development. Finally, we thank Katherine Bevans, Ph.D., for her critical review of the manuscript.




References

1. U.S. Public Health Service. Report of the Surgeon General's Conference on Children's Mental Health: a national action agenda. Washington, DC; 2000.
2. New Freedom Commission on Mental Health. Achieving the promise: transforming mental health care in America. [Accessed May 1, 2006].
3. Katon W, Von Korff M, Lin E, Simon G. Rethinking practitioner roles in chronic illness: the specialist, primary care physician, and the practice nurse. Gen Hosp Psychiatry. 2001;23:138–144. [PubMed]
4. American Academy of Pediatrics. Care coordination: integrating health and related systems of care for children with special health care needs. Pediatrics. 1999;104(4):978–981. [PubMed]
5. Greenbaum PE, Lipien L, Dedrick RF. An instrument to measure interagency collaboration among child-serving organizations. Paper presented at: American Psychological Association; Honolulu, HI. 2004.
6. Greenbaum PE, Dedrick RF. Multilevel analysis of interagency collaboration among children's mental health agencies. Paper presented at: A System of Care for Children's Mental Health: Expanding the Research Base; Tampa, FL. 2006.
7. Stroul BA, Friedman RM. A system of care for children and youth with severe emotional disturbances. Washington, DC: Georgetown University Child Development Center; 1986.
8. Bickman L, Noser K, Summerfelt WT. Long-term effects of a system of care on children and adolescents. J Behav Health Serv Res. 1999;26(2):185–202. [PubMed]
9. Bickman L, Lambert EW, Andrade AR, Penaloza RV. The Fort Bragg continuum of care for children and adolescents: mental health outcomes over 5 years. J Consult Clin Psychol. 2000;68(4):710–716. [PubMed]
10. Fisher WH. Introduction: children, adolescents, and mental health services research: an overview of emerging perspective. In: Fisher WH, editor. Research on community-based mental health services for children and adolescents. San Diego: JAI Press; 2007. pp. 1–12.
11. American Academy of Pediatrics. Clinical practice guideline: diagnosis and evaluation of the child with attention-deficit/hyperactivity disorder. Pediatrics. 2000;105(5):1158–1170. [PubMed]
12. American Academy of Pediatrics. Clinical practice guideline: treatment of the school-aged child with attention-deficit/hyperactivity disorder. Pediatrics. 2001;108(4):1033–1044. [PubMed]
13. American Academy of Pediatrics. Bright Futures.
14. Goldman L, Genel M, Bezman R, Slanetz P. Diagnosis and treatment of attention-deficit/hyperactivity disorder in children and adolescents. JAMA. 1998;279(14):1100–1107. [PubMed]
15. Leslie LK. The role of primary care physicians in attention-deficit/hyperactivity disorder. Pediatr Ann. 2002;31(8):475–484. [PMC free article] [PubMed]
16. National Initiative for Children's Healthcare Quality. ADHD Toolkit.
17. Reiff M, Banez G, Culbert T. Children who have attentional disorders: diagnosis and evaluation. Pediatr Rev. 1993;14(12):455–464. [PubMed]
18. Jadad A, Boyle M, Cunningham C, Kim M, Schachar R. Treatment of attention-deficit/hyperactivity disorder. Evidence report/technology assessment no. 11, AHRQ Publ. No. 00-E005. Rockville, MD: Agency for Healthcare Research and Quality; 1999.
19. American Academy of Child and Adolescent Psychiatry. Practice parameters for the assessment and treatment of children, adolescents, and adults with attention-deficit/hyperactivity disorder. J Am Acad Child Adolesc Psychiatry. 1997;36(10 Suppl):85S–121S. [PubMed]
20. Guevara JP, Stein MT. Evidence based management of attention deficit hyperactivity disorder. BMJ. 2001;323:1232–1235. [PMC free article] [PubMed]
21. Gephart HR. A Managed Care Approach to ADHD. Contemp Pediatrics. 1997;14(5):123–139.
22. Office of Special Education Programs. Children with ADD/ADHD- topic brief. Washington, DC: U.S. Department of Education; 1999.
23. Guevara J, Feudtner C, Romer D, et al. Fragmented care for inner-city minority children with attention-deficit/hyperactivity disorder. Pediatrics. 2005;116:e512–e517. [PMC free article] [PubMed]
24. American Medical Association. Directory of Physicians in the United States 2004. Chicago, IL: American Medical Association; 2004.
25. McLennan JD, Jansen-McWilliams L, Comer DM, Gardner WP, Kelleher KJ. The Physician Belief Scale and psychosocial problems in children: a report from the Pediatric Research in Office Settings and the Ambulatory Sentinel Practice Network. J Dev Behav Pediatr. 1999;20:24–30. [PubMed]
26. Gorsuch RL. Factor analysis. 2nd ed. Hillsdale, New Jersey: Lawrence Erlbaum Associates; 1983.
27. Cattell RB. The scientific use of factor analysis in behavioral and life sciences. New York, NY: Plenum Press; 1978.
28. Cronbach LJ. Coefficient Alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
29. Nunnally JC, Bernstein IR. Psychometric theory. 3rd ed. New York, NY: McGraw-Hill; 1994.
30. Aday LA. Designing and conducting health surveys: a comprehensive guide. San Francisco: Jossey-Bass Publishers; 1996.
31. Guevara JP, Rothbard A, Shera D, et al. Correlates of behavioral care management strategies used by primary care pediatric providers. Ambulatory Pediatrics. 2007;7:160–166. [PMC free article] [PubMed]
32. Sidak Z. Rectangular confidence regions for the means of multivariate normal distributions. J Am Stat Assoc. 1967;62:626–633.
33. Winer BJ, Brown DR, Michels KM. Statistical principles in experimental design. 3rd ed. New York: McGraw-Hill; 1991.
34. Cohen J. A power primer. Psychol Bull. 1992;112(1):155–159. [PubMed]
35. Rosner B. Fundamentals of biostatistics. 5th ed. Pacific Grove, CA: Duxbury; 2000.
36. SAS Institute. SAS statistical software. Cary, NC: SAS Institute; 2002.
37. StataCorp. Stata statistical software: release 8. College Station, TX: StataCorp LP; 2005.
38. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50(10):1129–1136. [PubMed]
39. Wasserman RC, Slora EJ, Bocian AB, et al. Pediatric Research in Office Settings (PROS): a national practice-based research network to improve children's health care. Pediatrics. 1998;102:1350–1357. [PubMed]