Recent Breakthrough Series Collaboratives have focused on improving chronic illness care, but few have included academic practices, and none have specifically targeted residency education in parallel with improving clinical care. Tools are available for assessing progress with clinical improvements, but no similar instruments exist for monitoring improvements in chronic care education.
To design a survey that helps teaching practices identify curricular gaps in chronic care education and monitor efforts to address those gaps.
During a national academic chronic care collaborative, we used an iterative method to develop and pilot test a survey instrument modeled after the Assessing Chronic Illness Care (ACIC). We implemented this instrument, the ACIC-Education, in a second collaborative and assessed the relationship of survey results with reported educational measures.
A combined 57 self-selected teams from 37 teaching hospitals enrolled in one of two collaboratives.
We used descriptive statistics to report mean ACIC-E scores and educational measurement results, and the Pearson correlation to assess the relationship between the final ACIC-E score and the reported educational measures.
A total of 29 teams from the national collaborative and 15 teams from the second collaborative in California completed the final ACIC-E. The instrument measured progress on all sub-scales of the Chronic Care Model. Fourteen California teams (70%) reported using two to six education measures (mean 4.3). The relationship between the final survey results and the number of educational measures reported was weak (R2=0.06, p=0.376), but improved when a single outlier was removed (R2=0.37, p=0.022).
The ACIC-E instrument proved feasible to complete. Participating teams, on average, recorded modest improvement in all areas measured by the instrument over the duration of the collaboratives. The relationship between the final ACIC-E score and the number of educational measures was weak. Further research on its utility and validity is required.
Calls for improvement in health care for patients with chronic illnesses abound1–5. Over the past decade, multiple improvement collaboratives have addressed the care gap in diverse practice settings6,7, but less is known about efforts to improve chronic illness care in academic practices where future physicians train8–10.
In 2005, the Institute for Improving Clinical Care of the Association of American Medical Colleges launched a national breakthrough series collaborative designed to train health care teams, including residents in training, to improve the quality of care for patients with chronic illnesses. The collaborative used the Chronic Care Model (CCM) to structure targets for improvement (Table 1)11. Teaching hospitals with affiliated primary care residency training programs (internal medicine, family medicine, or pediatrics) were invited to participate. Participation required forming inter-professional teams, including residents training in primary care disciplines, dedicated to re-designing their academic practice sites to improve both clinical care delivery and the educational program.
In breakthrough series collaboratives12, participating organizations commit to implementation of improvement strategies over a period of several months, alternating between ‘learning sessions’ where teams from participating organizations come together to learn about the chosen topic and to plan changes (e.g., optimizing care for chronic illnesses), and subsequent ‘action periods’ in which teams return to their organizations and test those changes in clinical settings. Most recently, this process has been applied to coordinate and accelerate efforts to improve chronic illness care. Multiple previous collaboratives have focused efforts on improving chronic illness care13, but few have included residency training practices in these efforts, and none have specifically addressed improving residency education in parallel with improving the delivery of clinical care.
Many teams participating in collaboratives use the Assessment of Chronic Illness Care (ACIC)14,15, an instrument developed and validated to help practices identify deficiencies in their systems of care for chronic conditions and direct improvement efforts to address those gaps8. The ACIC survey is typically self-administered as a team-based group exercise three times: at the start, the mid-point, and the end of the collaborative. Scores are expected to improve as changes in the practice environment are implemented.
As work to improve chronic illness care moves to residency training sites, residency programs also need tools to help multidisciplinary, clinical teams identify gaps in their educational programs and to assist with continuous improvement in chronic illness care education. As part of the national chronic care collaborative, we developed and pilot-tested an educational survey modeled after the ACIC. Subsequently, the California Healthcare Foundation funded a chronic care collaborative in California teaching hospitals, which provided the opportunity to further test this instrument. We report here on the development and preliminary results of the Assessment of Chronic Illness Care Education or ACIC-E.
Our aim was to develop a survey that could help residency education programs and teaching practices identify curricular gaps in chronic care education and design efforts to address those gaps. As such, the survey is a needs assessment tool that can be used to identify and periodically monitor improvement efforts. Like the ACIC, the ACIC-E survey is self-administered as a team-based exercise in practices where residents provide care to patients with chronic illnesses using the Chronic Care Model. We hypothesized that teams’ self-rated scores on the ACIC-E would increase over time as they focused on improving their educational programs.
Thirty-six self-selected quality improvement teams from 22 institutions participated in the national collaborative. Twenty-one self-selected teams from 15 institutions participated in the subsequent California collaborative; two teams from one institution combined for a final cohort of 20 teams. Individual teams generally consisted of a clinical practice champion, such as the medical director of the training practice, an education leader, such as the residency program director, residents in training at the practice site, and nurses, social workers, receptionists, medical assistants, and/or pharmacists from the practice. Most members of these teams attended the learning sessions to learn, plan changes, and assess progress. Team members in attendance at the learning sessions completed the ACIC-E survey as described below.
As the leadership group for the national collaborative, we reviewed the constructs of the ACIC instrument and discussed parallel constructs for process improvement in education. We developed performance statements related to the six components of the Chronic Care Model: organization of health care (four statements), community linkages (three), self-management support (five), delivery system design (six), decision support (four), and clinical information systems (five). Four anchors describing graduated levels of performance from “little or none” to “fully implemented” were developed to represent various stages of improving educational efforts related to chronic illness care. Examples from the ACIC instrument and parallel constructs from the ACIC-E are shown in Table 2. Some constructs were directly translated from the ACIC to apply to educational settings; others were developed to address unique educational challenges. As with the ACIC instrument, we used a 12-point numerical scale (0 to 11) for teams’ self-scoring; higher point values indicated that the actions described in the anchors were more fully implemented. The instrument was designed to be completed as a team discussion exercise in no more than 30 minutes.
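To make the scoring scheme concrete, the sketch below (in Python, with hypothetical ratings) computes component and overall ACIC-E scores from a team's 0–11 self-ratings. The function name and the convention of averaging sub-scale means into an overall score are our illustrative assumptions, not details specified by the instrument:

```python
# Statement counts per Chronic Care Model component, as described above.
COMPONENTS = {
    "organization of health care": 4,
    "community linkages": 3,
    "self-management support": 5,
    "delivery system design": 6,
    "decision support": 4,
    "clinical information systems": 5,
}

def score_acic_e(ratings):
    """ratings: dict mapping component name -> list of 0-11 self-ratings.

    Returns (component_means, overall_mean). Averaging sub-scale means
    into an overall score is an assumed convention for illustration.
    """
    for name, expected in COMPONENTS.items():
        values = ratings[name]
        if len(values) != expected:
            raise ValueError(f"{name}: expected {expected} ratings")
        if any(not (0 <= v <= 11) for v in values):
            raise ValueError(f"{name}: ratings must be on the 0-11 scale")
    means = {name: sum(v) / len(v) for name, v in ratings.items()}
    overall = sum(means.values()) / len(means)
    return means, overall

# Example: a team rating every statement 5 ("partially implemented" range)
example = {name: [5] * n for name, n in COMPONENTS.items()}
component_means, overall_mean = score_acic_e(example)
```

Validating the count and range of ratings mirrors the instrument's fixed structure of 27 statements across six components.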
An initial instrument was drafted, discussed among the leadership group, and edits incorporated. We repeated this process until consensus was reached. The instrument was then pilot tested with one team enrolled in the national collaborative, resulting in further refinement to clarify directions and concepts, and improve the rating scale descriptors.
In 2005, the ACIC-E was emailed to the education representative for each team enrolled in the national collaborative. Reminders were sent to maximize response. At the second learning session of the national collaborative, each team worked in small groups with other teams and a leadership group facilitator. Each team presented its completed ACIC-E report and explained its self-ratings. Teams revised their self-reported ratings as necessary based on the group discussion; not infrequently, ratings were adjusted downward. This normative calibration process was designed to provide at least one external check on the self-reported results and to improve shared understanding of the statements and rating scale. Revised surveys were collected and recorded as baseline team assessments. We asked teams to complete the instrument two more times, at the mid-point and at the end of the collaborative, using the same group discussion method. We encouraged feedback about statements that were unclear, and minor edits were made to the survey to improve readability.
In 2007, we implemented the ACIC-E instrument at the beginning of the California collaborative and collected baseline, mid-point, and end-of-collaborative self-ratings using the same process. In addition to completing the ACIC-E instrument, California teams were encouraged to improve their educational programs to better align residency curricula with their efforts to improve the delivery of chronic illness care, using educational measures developed and pilot tested in the national collaborative to monitor these educational improvements16. These educational measures, shown in Table 3, were designed for residency programs to have a simple way of knowing if a change in the curriculum resulted in improved resident exposure to, or performance of, learning objectives. Teams were asked to report monthly on the two required measures and as many optional measures as they desired, essentially building their own unique sets of education performance metrics.
We hypothesized that the number of educational measures utilized and reported would be indicative of the level of a team’s engagement in educational improvement with more measures suggesting a more advanced stage of curricular integration of education with chronic care practice re-design. In contrast, teams facing significant barriers to chronic care practice re-design and curricular reform would make fewer changes in their curriculum and would have fewer measures to report. Since the ACIC-E was designed to assess the level of improvement of chronic care education, we hypothesized that teams with higher final ACIC-E scores would show evidence of more curricular change as reflected in more robust sets of education performance metrics.
Mean ACIC-E scores at baseline, midpoint, and endpoint were calculated for the survey overall and for each component for both collaboratives. For the implementation collaborative (California), the numbers of team-specific monthly education reports were totaled, and the numbers of different education measures utilized by teams were counted. Teams were considered to have reported on a specific measure if they provided at least six monthly reports. The final team-specific ACIC-E scores were compared with the number of different educational measures utilized. All teams completing the final ACIC-E were included in the comparison. Pearson correlations were calculated for this comparison.
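As a concrete illustration of this comparison, the sketch below (Python, with entirely hypothetical team data, not the study's results) computes the Pearson correlation between final ACIC-E summary scores and the number of education measures reported, with and without a single outlying team:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: (final ACIC-E summary score, education measures reported).
# The last team mimics an outlier: many measures but a low self-rating.
scores   = [5.1, 6.0, 6.8, 7.2, 7.9, 3.3]
measures = [2,   3,   4,   4,   6,   6]

r_all = pearson_r(scores, measures)
r_trim = pearson_r(scores[:-1], measures[:-1])

# R^2 is the square of Pearson's r; with these invented numbers, dropping
# the outlier markedly strengthens the apparent relationship.
print(f"R^2 all teams: {r_all**2:.2f}; outlier removed: {r_trim**2:.2f}")
```

Reporting R² alongside r follows common practice for describing the strength of a linear relationship; the significance test used in the study is not reproduced here.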
A total of 29 teams from the national collaborative and 15 teams from the California collaborative completed the final ACIC-E. For the California collaborative, 11 of 20 participating teams (55%) completed the survey at all three points in time, 4 teams (20%) completed two surveys, and 5 teams (25%) completed only one survey (Fig. 1). Mean scores for each chronic care component sub-scale and the integration components of the ACIC-E for both collaboratives are shown in Figure 2. In all areas, teams reported moderate progress in implementing educational changes to improve chronic illness care. For both collaboratives, the lowest baseline rating overall was for the use of clinical information systems to teach and improve chronic care, but attention to this gap resulted in the greatest improvement over time. In contrast, the highest baseline rating was in decision support (the use of evidence-based medicine to teach and support clinical care decisions, and practice guideline utilization), yet this area showed little improvement over time.
The California collaborative teams’ use of education measures is shown in Table 3. Fourteen teams (70%) reported monthly results for the required education measures on use of registries and self-management support. In addition, a subset of teams utilized education measures for monitoring participation in planned visits, answering clinical questions and teaching others, assessing patients’ perceptions of chronic illness care in the practice, participation in improvement work, and inclusion on quality improvement teams (mean 4.3 measures per team).
Of the 15 California collaborative teams that completed the final ACIC-E, 11 reported from 2 to 6 different education measures (mean 2.44). Three other teams reported a mean of 5 measures but could not be included in the correlation analysis because they did not complete the final ACIC-E survey. Two teams completed neither reports nor the final survey. The relationship between the final ACIC-E summary score and the number of educational measures the 15 teams reported was weak when all teams were included (R2=0.06, p=0.376). A single team reported a high number of educational measures but self-rated poorly on the instrument (six measures, summary score 3.32). When this team was removed from the analysis, the correlation strengthened (R2=0.37, p=0.022) (Fig. 3).
This report describes our efforts to develop a self-rating instrument to assist teams with identifying and directing improvement efforts in their residency educational programs for chronic care. The instrument proved feasible to complete, and participating teams, on average, recorded improvement in almost all areas measured by the tool over the duration of the collaborative. Greatest improvement was seen in novel areas for traditional training programs such as using population disease registries to monitor the quality of care delivered or including patient self-management as a core component of health care delivery. As expected, most residency programs routinely use multiple methods to teach and reinforce use of evidence-based decision-making. Less progress was noted in this area.
In the California collaborative, little improvement was noted between the mid-point and final ACIC-E self-assessments. Because of prior health care improvement efforts in the state, several of these teams were better prepared to initiate improvement at the start of the collaborative and probably realized the more easily obtained changes by the mid-point. In contrast, the national collaborative included teams from across the US with considerable variation in readiness for change; progress for these teams was more incremental.
Over the 18-month time period of the collaborative, programs improved self-ratings to the mid-range of the ACIC-E 12-point scale for all components, leaving plenty of room for continual curricular monitoring and improvement. Further study is required to determine if the ACIC-E components represent valuable educational aims, if the changes in scores we observed represent educationally meaningful change, and if reaching scores in the highest range is related to superior quality of chronic illness care education.
The ACIC-E instrument uses generic language in reference to ‘learners.’ Although the purpose of the collaboratives was to facilitate educational change in primary care residency training programs where the targeted learners are residents, it can be used and studied in other health professions' training settings aiming to improve chronic illness care education.
The relationship between the final ACIC-E scores and the number of educational measures that teams from the California collaborative chose to report was weak and improved when a single outlier was removed from the analysis. This outlier completed two of three ACIC-E surveys with little change in low self-ratings (Fig. 3, Team F) yet reported the highest number of education measures (six) used by any team. This finding is unexplained. Future research should include qualitative investigation of such outliers to better understand how teams are interpreting both the ACIC-E and the education measures.
Our findings may be the result of the small number of participating teams, the relatively short observation period, teams’ varying interpretation of the anchors on the ACIC-E components, or teams’ lack of attention to optional reporting of educational measures. Although the educational measures were developed in direct relationship to the goal of developing, implementing, and improving curricula using the Chronic Care Model, the ACIC-E self-ratings may relate more to barriers in the educational environment than to the specific curricular design elements targeted by the educational measures. Further, the educational measures themselves may not be a valid reflection of educational improvement efforts.
Our study has several limitations. First, we relied on self-report, and teams likely interpreted their performance differently. We attempted to minimize these differences through referent-setting group discussion of each result at every collaborative meeting. It is possible that team members did not reach consensus in determining their survey responses, allowing more assertive team members to dominate decisions. Since teamwork is a vital component of improving chronic care and of working in collaboratives, we purposefully chose the team approach to completing surveys. Use of the ACIC-E was as much an intervention (engaging teams in setting priorities and working on specific changes) as a measurement tool, and the consensus discussion was essential for getting all team members on board with changing their education programs. Second, not all teams completed all three ACIC-E surveys; the results may represent what is possible in more highly functioning teams rather than a true average of participating teams. Third, we had a small sample of teams in the California collaborative for testing the relationship between the instrument and the reported educational measures; a larger sample may yield different results.
Further research is needed on the usefulness of the ACIC-E as a tool to assist improvement teams and educational programs in identifying chronic care curricular gaps and in monitoring improvement efforts to address those gaps.
The Robert Wood Johnson Foundation and the California Healthcare Foundation generously supported the academic chronic care collaboratives that served as the basis for this work.
Conflict of Interest None disclosed.