To describe the development and assess the validity and reliability of the Collaborative Care for Attention Deficit Disorders Scale (CCADDS), a measure of collaborative care processes for children with ADHD who attend primary care practices.
Collaborative care was conceptualized as a multidimensional construct. The 41-item CCADDS was developed from an existing instrument, review of the literature, focus groups, and an expert panel. The CCADDS was field tested in a national mail survey of 600 stratified and randomly selected practicing general pediatricians. Psychometric analysis included assessments of factor structure, construct validity, and internal consistency.
The overall response rate was 51%. The majority of respondents were male (56%), aged 46 years or older (59%), and white (69%). Common factor analysis identified 3 subscales: beliefs, collaborative activities, and connectedness. Internal consistency reliability (coefficient α) for the overall scale was 0.91, and subscale scores ranged from 0.80 to 0.89. The CCADDS correlated with a validated measure of provider psychosocial orientation (r =−0.36, p <0.001) and with self-reported frequency of mental health referrals or consultations (r =−0.24 to r =−0.42, p <0.001). CCADDS scores were similar among physicians by race/ethnicity, gender, age group, and practice location.
Scores on the CCADDS were reliable for measuring collaborative care processes in this sample of primary care clinicians who provide treatment for children with ADHD. Evidence for validity of scores was limited. Future research is needed to confirm its psychometric properties and factor structure and provide guidance on score interpretation.
The current mental health system for children has been described as fragmented and inefficient.1,2 The President’s New Freedom Commission on Mental Health has called for better coordination of mental health services.2 In this groundbreaking report, fragmentation in the mental health system was identified as one of three obstacles preventing Americans from receiving excellent quality mental health care. The current system was viewed as inefficient and poorly integrated, and the commission called for a fundamental transformation in how mental health care is delivered. This proposed transformation would need to involve better collaboration between the physical and mental health systems to bridge the current gap.
Collaborative care, which seeks to bridge the gap between systems, has been defined as primary care clinicians, specialists, nurses, and other professionals along with family members developing a shared definition of a patient’s problem, targeting goals, developing a comprehensive treatment plan, and supporting and problem-solving to optimize adherence and follow-up.3 Collaborative care involves the coordination of services and resources among providers and families to maximize children’s potential and provide optimal care.4 Unfortunately, few instruments exist to measure collaborative care, and most of these have limited use in primary care settings.5–7 First, they are based on a theoretical framework known as Systems of Care, which espouses a core set of values and principles deemed important in the publicly funded mental health system.7–9 This framework, however, does not routinely involve primary care in the treatment of children with serious emotional disorders.10 In addition, they assume that collaborating agencies are publicly funded and include measures of interagency administrative and financial ties.5,6 However, most primary care clinicians are employed by private or hospital-based practices that do not have administrative or financial relationships with public agencies.
In this paper, we describe the development of a novel instrument, the Collaborative Care for Attention Deficit Disorders Scale (CCADDS), which is designed to measure collaborative care processes for children with attention-deficit/hyperactivity disorder (ADHD) who receive treatment in primary care. In addition, we assess the psychometric properties of the CCADDS using a national survey of primary care pediatricians. If evidence suggests that scores are valid and reliable, the CCADDS may assist clinicians and health care organizations to measure collaboration for quality improvement initiatives in ADHD management.
The CCADDS was adapted from the Interagency Collaboration Scale (IACS) Version 5.1, a provider self-report instrument developed to measure collaboration among community agencies that serve children with serious emotional disorders.5,6 The IACS was developed based on a literature review of Systems of Care principles; face-to-face interviews with community mental health service providers, case managers, and administrators; and review by an expert panel. The IACS has undergone field testing with samples of respondents from community mental health agencies. Based on the results of factor analyses, the IACS v.5.1 contains 31 items organized into 3 domains: values, activities, and connectedness. Responses for items were scaled using a 5-point response rating. Psychometric properties including test-retest reliability (r = 0.78 to 0.86), internal consistency (α = 0.72 to 0.97), and concurrent validity with the Systems of Care Practice Review (r = 0.32 to 0.56) for the IACS and its subscales were adequate in study samples.
Items for the CCADDS were derived based on a review of the literature regarding the coordination of care for children with ADHD,11–22 and from information derived from focus groups conducted with primary care pediatricians, teachers and school staff, therapists, and parents of children with ADHD.23 Nineteen new items, drawn from the literature review and focus group study, were added to the IACS to create a 50-item instrument with four proposed domains: beliefs, activities, individual connectedness, and group connectedness. One IACS domain, connectedness, was divided into individual and group connectedness, since focus group themes suggested the importance of collaboration at the organizational and individual provider level. Six existing items from the IACS were similar to items identified from the literature review and focus groups but were revised to reflect ADHD-specific content. Responses for all items were scaled using a 5-point response rating (1 = not at all, 2 = little, 3 = somewhat, 4 = much, 5 = very much).
The scale was pilot tested for readability, clarity, and content using a mail survey of 50 primary care pediatricians from a metropolitan community. Results of the pilot test (response rate 60%) were reviewed by a local expert panel consisting of a general pediatrician, developmental/behavioral pediatrician, child psychologist, child psychiatrist, and an adolescent medicine physician. Individual items were examined for clarity, content, and appropriateness for primary care management. Based on group consensus, nine items were dropped due to poor content or lack of clarity. In addition, some items were retained but were revised to improve clarity or content appropriateness. The revised CCADDS therefore consisted of 41 items organized into 4 domains for ease of completion, with higher scores reflecting more positive collaborative care attitudes and practices.
The CCADDS was field-tested using a national mail survey of practicing general pediatricians. Pediatricians were identified using the American Medical Association’s 2004 Directory of Physicians in the United States.24 The directory lists more than 800,000 practicing and retired physicians in the U.S., along with each physician's self-reported medical specialty, address, and year of graduation. To be eligible for participation in the survey, physicians must have self-identified as pediatricians and had contact information (address, with or without a telephone number) available in the directory. Pediatricians were excluded if they were retired or in specialty care practice.
Physicians in the eligible sample (N = 53,789) were stratified into 4 geographic regions to produce a nationally representative sample. Approximately 150 pediatricians from each region were randomly selected using computer-generated random numbers to achieve an initial sample of 600 (Figure 1). Eligible physicians were mailed a questionnaire, recruitment letter, declination card, self-addressed stamped return envelope, and an American Express gift certificate ($10). If physicians did not respond, two additional mailings were sent and a follow-up telephone call was placed at approximately 2-week intervals. If surveys were returned unopened and no forwarding address was included, these subjects were excluded as ineligible and an additional eligible physician from the same region was randomly selected to replace them. We excluded and did not replace physicians who responded and indicated they were retired or in specialty care practice.
Pediatricians completed a questionnaire that contained questions on demographic characteristics, mental health activities, the modified Physician Belief Scale (PBS), and the CCADDS. Demographic characteristics sought included age, gender, race/ethnicity, years in practice, practice location, and the average number of patients seen per week. Mental health activities included four questions developed ad hoc regarding the frequency of co-located mental health providers in primary care practices, consultation with mental health providers, referral to mental health providers, and receipt of referral information from mental health providers. These questions were piloted for clarity and readability during the initial pilot testing of the CCADDS. The frequency of each of these activities was measured using a 4-point rating scale (always, often, sometimes, never) where lower scores indicate a greater frequency of self-reported activity. The modified PBS is a validated 14-item scale that measures physician psychosocial orientation.25 Scores range from 14 to 70 with lower scores reflecting better psychosocial orientation. The PBS contains two subscales, beliefs and burden. The beliefs subscale assesses a provider’s attitudes toward mental health issues, e.g. “my patients and/or their caregivers do not want me to investigate psychosocial problems”. The burden subscale assesses a provider’s sense of burden in providing mental health treatment, e.g. “one reason I do not consider information about psychosocial problems is the limited time I have available”. Physician psychosocial orientation as measured by the modified PBS and frequency of mental health activities were included in the questionnaire to assess their correlation with collaborative care.
Summary statistics including means, standard deviations, and proportions were ascertained for individual items and for overall and subscale scores. The percentage of missing data was determined for each item. In addition, the percentage of respondents with the lowest possible score (floor effect) and the highest possible score (ceiling effect) was determined for each subscale.
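Floor and ceiling percentages of the kind described above are straightforward to compute. The sketch below (Python; the score data and the 10-item subscale are hypothetical, not study values) assumes each subscale score is a sum of its items, so the minimum and maximum possible scores follow from the item count and the 1–5 response scale.

```python
import numpy as np

def floor_ceiling(scores, min_possible, max_possible):
    """Percent of respondents at the lowest possible score (floor)
    and the highest possible score (ceiling) of a subscale."""
    s = np.asarray(scores, dtype=float)
    floor_pct = float((s == min_possible).mean() * 100)
    ceiling_pct = float((s == max_possible).mean() * 100)
    return floor_pct, ceiling_pct

# Hypothetical example: a 10-item subscale rated 1-5 per item,
# so summed scores can range from 10 to 50.
scores = [50, 37, 42, 50, 18]
print(floor_ceiling(scores, min_possible=10, max_possible=50))  # (0.0, 40.0)
```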
Exploratory factor analysis of the correlation matrix was conducted on the overall scale to determine how items aggregated together as factors, since the factor structure of this instrument had not been previously determined.26 Scree plots, which graph eigenvalues versus factor number, were used to visually determine factor number.27 The inflection point where the graph turns horizontal was considered to reflect diminishing increases in eigenvalues for corresponding increases in factor number and represented the appropriate factor number. Factor solutions were rotated orthogonally (Varimax) and obliquely (Promax), and the rotation that produced the highest number of salient loading items (λ ≥ 0.30) with the fewest multiple loading items, i.e. items that have salient loadings on more than 1 factor, was sought. We also examined the hyperplane count, i.e. the number of loadings between +0.10 and −0.10, and sought the rotated solution with the highest hyperplane count. Non-salient items or multiple loading items were dropped from the scale to obtain a simple structure.
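The eigenvalue-versus-factor-number relationship behind a scree plot can be sketched as follows (Python; the correlation matrix is a contrived two-block example, not study data). The elbow rule below is a mechanical stand-in for the visual inspection of the inflection point that the authors describe, not their actual procedure.

```python
import numpy as np

def scree_eigenvalues(R):
    """Eigenvalues of a correlation matrix, largest first,
    as they would appear on a scree plot."""
    return np.linalg.eigvalsh(np.asarray(R, dtype=float))[::-1]

def n_factors_elbow(R):
    """Crude elbow heuristic: retain factors up to the largest drop
    between consecutive eigenvalues."""
    ev = scree_eigenvalues(R)
    drops = ev[:-1] - ev[1:]
    return int(np.argmax(drops)) + 1

# Hypothetical 6-item correlation matrix: two blocks of 3 items,
# r = 0.6 within a block, 0 across blocks -> two clear factors.
R = np.kron(np.eye(2), np.full((3, 3), 0.6)) + 0.4 * np.eye(6)
print(scree_eigenvalues(R).round(1))  # [2.2 2.2 0.4 0.4 0.4 0.4]
print(n_factors_elbow(R))             # 2
```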
Internal consistency reliability of the CCADDS and its domains was estimated using Cronbach’s Alpha (α), a measure of the average split half correlation among all possible combinations of items.28 An α coefficient of at least 0.70 has been recommended for group comparisons, and an α coefficient of at least 0.90 has been recommended for individual comparisons.29 Item-total correlations were examined and represent the correlation of individual items with the sum of all other items in the scale.30 Items with low item-total (r<.20) correlation were dropped from corresponding subscales, since low item-scale correlation suggests poor correlation with other items in the scale. Intercorrelations among subscales were examined to see how closely subscales related to each other.
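Coefficient α and corrected item-total correlations can be computed directly from a respondents-by-items matrix; a minimal sketch (Python, with made-up response data rather than study data) is:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n respondents x k items) matrix:
    k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def corrected_item_total(X):
    """Correlation of each item with the sum of the *other* items."""
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

# Made-up responses: three items that track each other closely.
X = np.array([[1, 2, 1],
              [3, 3, 2],
              [4, 5, 4],
              [5, 4, 5]])
print(round(cronbach_alpha(X), 2))  # 0.95
```

Items that fail the r < 0.20 screen described above would be flagged by `corrected_item_total` and dropped from the subscale.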
Construct validity, i.e. how well a scale’s scores move in hypothesized directions, of the CCADDS and its subscales was examined by correlating its scores with PBS total and subscale scores and with measures of the frequency of mental health activities.30 We hypothesized that collaborative care would be associated with physician psychosocial orientation, since data suggest that greater psychosocial orientation may be associated with clinician decisions to jointly manage behavioral problems with mental health providers.31 We also hypothesized that collaborative care would be associated with self-reported frequency of collaborative mental health activities such as referral and consultation. To assess construct validity, we correlated the CCADDS total and subscales scores with those of the modified PBS using Pearson correlation coefficients and with the frequency of reported mental health activities (co-location of services, consultation, referral, and information receipt) using Spearman correlation coefficients. We adjusted p-values for multiple comparisons using Sidak’s correction.32, 33 We designated correlations as small (r 0.10–0.29), medium (r 0.30–0.49), and large (r ≥ 0.50).34
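The magnitude labels and the Sidak adjustment used above reduce to simple formulas; a sketch (pure Python; function names are ours, not from the paper) is:

```python
def sidak_adjust(pvals):
    """Sidak correction for m comparisons: p' = 1 - (1 - p)^m."""
    m = len(pvals)
    return [1 - (1 - p) ** m for p in pvals]

def correlation_magnitude(r):
    """Cohen-style labels used in the text: small 0.10-0.29,
    medium 0.30-0.49, large >= 0.50 (by absolute value)."""
    a = abs(r)
    if a >= 0.50:
        return "large"
    if a >= 0.30:
        return "medium"
    if a >= 0.10:
        return "small"
    return "negligible"

print(correlation_magnitude(-0.36))  # medium (CCADDS vs. PBS total)
print(correlation_magnitude(-0.24))  # small
print([round(p, 4) for p in sidak_adjust([0.01, 0.02])])  # [0.0199, 0.0396]
```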
Discriminant validity of the CCADDS was assessed by comparing scores among groups of subjects thought to be similar or different in their collaborative capabilities. We compared scores among groups stratified by race, sex, age group (≤ 45 years, >45 years), and practice location (urban, suburban, rural) using t-tests and one-way ANOVA. We adjusted for multiple comparisons involving scale and subscale scores using a Bonferroni correction, with p<0.003 representing statistical significance.35 We hypothesized that scores would be lower for subjects who practiced in rural communities, since rural communities may have fewer mental health resources. We hypothesized that scores would not differ by sex, race, or age group, since there was no published information on differences by these characteristics. Factor analysis was performed using SAS version 9.12 (SAS Institute, Cary, NC), and remaining statistical analyses were performed using Stata version 8.0 (Stata Corporation, College Station, TX).36, 37 The study received approval by the Institutional Review Board at The Children’s Hospital of Philadelphia.
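The group comparisons can be sketched as follows (Python). The one-way ANOVA F statistic is built from between- and within-group sums of squares; the Bonferroni threshold reproduces a cutoff near the paper's p<0.003 if one assumes 16 total comparisons (e.g. 4 scores × 4 grouping variables, which is our assumption, not a figure stated in the text).

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of 1-D samples."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    pooled = np.concatenate(groups)
    grand_mean = pooled.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(pooled) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

def bonferroni_threshold(alpha, n_comparisons):
    """Per-comparison significance threshold under Bonferroni."""
    return alpha / n_comparisons

# Identical group means -> no between-group variance -> F = 0.
print(one_way_anova_f([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # 0.0
# 16 comparisons (an assumed count) gives roughly the paper's p < 0.003.
print(bonferroni_threshold(0.05, 16))  # 0.003125
```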
Of the initial mailing to 600 eligible pediatricians, 100 were returned unopened without a forwarding address and were replaced. The overall response rate was 50.5% (Figure 1). We excluded 61 respondents (10.2%) who reported being retired or in specialty care practice and two respondents who completed the demographics portion of the questionnaire but did not complete the CCADDS. Non-respondents included 43 (7.2%) who returned declination cards and 254 (42.3%) who never responded. Respondents did not differ (p>0.05) from non-respondents with respect to gender, years in practice, or region of the country. Participation rates by region varied from 47.3% to 53.4%, but differences in these rates were not significant (p>0.05).
Demographic characteristics of respondents are shown in Table 1. Respondents were predominantly male, older than 45 years, white, and practiced in suburban communities. There were few Hispanic or African-American respondents. They reported on average 16.6 (SD ± 9.6) years of practice experience and saw on average 103.2 (SD ± 52.7) patients a week. Their mean PBS subscale scores were 13.1 for beliefs and 15.8 for burden, which were similar to those of pediatricians who participated in the Child Behavior Study, a large nation-wide study of primary care clinicians (mean beliefs score 12.8, mean burden score 15.3).25
The Scree plot generated from exploratory factor analysis suggested the presence of three or four factors. The addition of factors after the fourth resulted in a relatively horizontal downward sloping line. The 3- and 4-factor solutions were examined for plausibility. The 3-factor solution was selected, since it resulted in a plausible factor structure, in which items corresponded to beliefs, activities, and connectedness domains as in the IACS. The 4-factor solution was not deemed plausible.
Thirty-four percent of the total variance among items was explained by the 3-factor structure. The 3-factor solution was rotated orthogonally (Varimax) and obliquely (Promax). The oblique rotation generated a greater hyperplane count (59 vs. 33) and a simpler solution, with no cross-loading items, than the orthogonal rotation. We therefore selected the oblique solution and dropped the four items without salient loadings to obtain an overall scale with 37 items and 3 subscales (beliefs, activities, and connectedness). Items in the beliefs subscale reflect attitudes and beliefs regarding the importance of collaborating with schools and mental health agencies. Items in the activities subscale measure specific activities that involve collaboration, e.g. developing referral arrangements or coordinating treatment plans. Items in the connectedness subscale measure how well clinicians or practices collaborate with community organizations.
The rate of missing data was low (0.4%) with individual items having at most 3 missing responses out of 240 respondents (Table 2). Overall, only 15 subjects (6.3%) had any missing items. There were no differences between those with and without missing items with regard to age, race, sex, practice location, and years in practice (p>0.05). We therefore assumed that missing items were missing completely at random and did not impute missing values. There were no floor effects. The beliefs domain had the highest ceiling effect with 4% of respondents reporting the highest possible score. Ceiling effects for the other two subscales were low (0.4%).
The internal consistency for all three subscales (α =0.80–0.89) and the overall scale (α =0.91) were good, suggesting that items within subscales generally were associated with each other. Corrected item-total correlations were in an appropriate range for all items (r >0.20). Selective removal of individual items did not improve the internal consistency of any subscales.
The CCADDS subscale scores were moderately correlated with each other. Correlation coefficients ranged from 0.31 for beliefs and connectedness to 0.51 for activities and connectedness, suggesting medium intercorrelations. The CCADDS total score and subscale scores had small to medium correlations with the PBS total and subscale scores in the expected direction with two exceptions (Table 3). The beliefs subscale of the CCADDS did not correlate significantly with the burden subscale of the PBS, and the activities subscale of the CCADDS did not correlate significantly with the beliefs subscale of the PBS. In general, the CCADDS total score correlated best with the PBS total score and with reported frequency of mental health consultation and receipt of referral information.
Intercorrelations between CCADDS total and subscale scores and reported frequency of mental health activities were examined (Table 3). CCADDS total and subscale scores correlated best with reported frequency of mental health consultations (r =−0.17 to −0.42) and least with reported frequency of on-site mental health providers (r =−0.05 to −0.22). Only the connectedness subscale scores and the total scores correlated significantly with all mental health activities. The beliefs subscale scores consistently correlated worse with mental health activities.
Discriminant validity of the CCADDS total scores among subpopulations of the sample was examined. As hypothesized, CCADDS total scores were not significantly different by sex, age, or race-ethnicity. Although we had postulated differences in CCADDS scores by practice location, there were no significant differences after adjustment for multiple comparisons.
The CCADDS was developed from a review of the literature, focus groups, and an expert panel and was adapted from a preexisting instrument designed to measure interagency collaboration. In this nationally representative sample, the CCADDS demonstrated evidence for score reliability and limited validity for measuring collaborative care processes. Exploratory factor analysis revealed the presence of three subscales. The low missing data rate indicates that pediatric providers can understand and respond to the items. The absence of significant floor and ceiling effects indicates that the instrument captures the range of possible responses. The internal consistency reliability of the scale and subscales was strong and surpassed the 0.70 minimum standard for making group comparisons, and the overall scale’s internal consistency surpassed the 0.90 minimum standard for making individual comparisons. The CCADDS correlated in expected directions with a validated measure of physician psychosocial orientation and with self-reported frequency of mental health activities. Scores and subscale scores were similar among physicians by age, gender, race/ethnicity, and practice location.
The CCADDS overall and subscale scores correlated modestly to moderately, in expected directions, with other measures of physician psychosocial orientation and mental health activities, with a few exceptions. First, the CCADDS beliefs subscale did not correlate well with the burden subscale of the PBS, the frequency of on-site mental health providers, or the frequency of receipt of referral information; it correlated best with the beliefs subscale of the PBS. This pattern suggests that clinicians’ attitudes toward collaborating with schools and mental health providers are related to their overall views of mental health care. Second, the CCADDS activities subscale did not correlate well with the beliefs subscale of the PBS or with the frequency of mental health referrals, suggesting that engaging in collaborative activities may have less to do with attitudes toward collaboration than with time pressures and other burdens clinicians experience. It is not clear why the activities subscale, which includes an item on developing referral arrangements, did not correlate better with the frequency of mental health referrals.
Several limitations to these findings warrant discussion. First, our overall response rate of 51% was low for mail surveys in general but consistent with mail surveys of physicians. Asch and colleagues reported average response rates from physician mail surveys of 54%, similar to ours, but average response rates from non-physician mail surveys of 68%.38 In addition, we found no significant differences between respondents and non-respondents with regard to sex, years in practice, or region of the country. However, it is not clear if respondents differed from non-respondents in other important ways. Second, our absolute sample size was probably insufficient to conduct analyses of important population subgroups, particularly minority physicians. However, we incorporated a sampling scheme that allowed us to select a nationally representative pool of pediatricians and improve the generalizability of our findings. The study sample was similar to that of the Pediatric Research in Office Settings (PROS) network, a national practice-based research network administered by the American Academy of Pediatrics, except that our sample had a greater proportion over age 45 years (59% vs. 41%).39
Despite these limitations, our data suggest that measuring collaborative care in primary care settings is feasible. Efforts to improve behavioral health care collaboration in primary care settings can be augmented with validated instruments that measure how well collaboration occurs. Such efforts can be seen as responses to federal calls to improve the coordination of mental health care for children.2 The CCADDS scores demonstrated good reliability and preliminary evidence of validity in this nationally representative sample of practicing general pediatricians. This instrument may be of assistance to quality improvement efforts targeted at ADHD treatment in primary care settings, particularly for attempts to improve collaboration between practices and schools and mental health providers. Further study, however, is needed to confirm its psychometric properties and factor structure and provide guidance on score interpretation and responsiveness to change prior to widespread implementation.
This study was funded by a grant from the National Institute of Mental Health, K23 MH065696. We would like to thank Snejana Nihtianova, M.S., for her help in the statistical analysis and survey mailings. We would also like to thank Drs. Nathan Blum, Thomas Power, Josephine Elia, Donald Schwarz, and Flaura Winston for their participation on our expert panel and for offering suggestions for instrument development. Finally, we would like to thank Katherine Bevans, Ph.D., for her critical review of the manuscript.