The patient-centered medical home (PCMH) has become a widely cited solution to the deficiencies in primary care delivery in the United States. To achieve the magnitude of change being called for in primary care, quality improvement interventions must focus on whole-system redesign, and not just isolated parts of medical practices.
Investigators participating in 9 different evaluations of PCMH implementation shared experiences, methodological strategies, and evaluation challenges for evaluating primary care practice redesign.
A year-long iterative process of sharing and reflecting on experiences produced consensus on 7 recommendations for future PCMH evaluations: (1) look critically at models being implemented and identify aspects requiring modification; (2) include embedded qualitative and quantitative data collection to detail the implementation process; (3) capture details concerning how different PCMH components interact with one another over time; (4) understand and describe how and why physician and staff roles do, or do not evolve; (5) identify the effectiveness of individual PCMH components and how they are used; (6) capture how primary care practices interface with other entities such as specialists, hospitals, and referral services; and (7) measure resources required for initiating and sustaining innovations.
Broad-based longitudinal, mixed-methods designs that provide for shared learning among practice participants, program implementers, and evaluators are necessary to evaluate the novelty and promise of the PCMH model. All PCMH evaluations should be as comprehensive as possible, and at a minimum should include a combination of brief observations and targeted qualitative interviews along with quantitative measures.
Primary health care has long been a cornerstone of a well-functioning health care system.1,2 Nevertheless, there is growing consensus that primary health care in the United States is in turmoil and badly in need of change.3–6 The patient-centered medical home (PCMH) has become a widely cited solution to the deficiencies in primary care.7–10 The PCMH combines core tenets of primary care (eg, first-contact care that is continuous, comprehensive, and coordinated across the care continuum) with recent practice innovations such as electronic information systems, population-based management of chronic illness, and continuous quality improvement.9,11,12 One cornerstone of the PCMH is meeting the needs and preferences of patients; another is payment reform that improves reimbursement to primary care practices and rewards high performance. PCMH Joint Principles were approved in 2007 by major primary care professional organizations,13 and the model has gained the endorsement of 17 specialty societies, nearly all the Fortune 500 companies, and all major national health plans.14
There are numerous pilot and demonstration projects underway to test the efficacy and effectiveness of PCMH models.14,15 The future of primary care is far too important for us to fail to learn everything possible from these varied experiments in implementing the PCMH. Given wide acknowledgment of the urgent need for health care reform, understanding the “how” of implementing change may be as, or even more, important than establishing if such changes are justifiable. Evaluations are critical for understanding on-the-ground developments as practices attempt to implement new care models such as the PCMH.16 Although detailed qualitative research into the context and mechanism of interventions is not new,17–19 opportunities to look inside the black box of practices’ actual experience with implementation present unique challenges and important opportunities to shape health care delivery.
Practitioners and researchers have embarked on a variety of PCMH demonstration and pilot projects. Some focus on implementing interventions targeting specific processes of care, patient populations, or particular components of practice delivery and operation. Others attempt to implement more comprehensive redesign of practice functioning and culture.14 Recent research suggests that there are core structural elements, technological and process components, and cultural shifts necessary for a practice to effectively transform along the lines of a medical home model.20 Initial demonstrations make it clear that transformation to a PCMH is challenging, but also has far-reaching implications for patients, providers, and payers.21,22 Beneath this sobering reality lies a wealth of lessons about the ingredients required to successfully redesign practices and the methods to engage physicians, practice members, and patients in envisioning and implementing new models of delivering care.22,23 It is vital that lessons from these varied practice experiences be captured and understood. Future evaluation efforts need to focus on the rich contextual relationships that are integral to the success or failure of experiments in practice change. The ability to record, reflect, and integrate ongoing data about the day-to-day transformation experience is critical for learning what it takes to create and sustain a PCMH.
This manuscript reflects on lessons from multiple PCMH demonstration projects being evaluated by the Commonwealth Fund and others to identify essential components and appropriate methodologies for conducting evaluations that optimize learning.
The Commonwealth Fund created a collaborative of PCMH evaluation teams and launched multiple workgroups to describe the range of PCMH pilot projects and their characteristics. Workgroups included efficiency, clinical quality, patient experience, physician/staff satisfaction, and process/implementation. Our workgroup focused on the process/implementation of PCMH interventions (Table 1 lists the projects represented in the workgroup). This workgroup began its collaboration in May 2009 with a series of conference calls to develop an initial logic model based on different members’ experiences evaluating PCMH pilots (Fig. 1). The logic model identified key assumptions to guide interventions and evaluations. Discussions of these assumptions then generated a core list of intervention activities and evaluation approaches that should be included in evaluations. These include assessments of practice capability, system/external environments, stakeholder and leadership motivation, teambuilding and collaboration, and the change model and assumptions underlying the practice improvement process. This logic model was discussed in a face-to-face meeting in June 2009. The workgroup then “met” via a series of conference calls to refine the logic model and reach consensus on recommendations. This iterative process resulted in 7 key recommendations for PCMH evaluation as summarized later in the text.
There is a continual interplay between abstract models of health care redesign and the actual experience of implementing these models in real-world health care settings. As insurance; payment policies; and national, regional, and local situations and environments evolve, delivery models must adapt to this ever-changing milieu.22 Therefore, diverse models of primary care need to be envisioned and constantly updated. This fluidity creates both research and intervention challenges. Evaluations must capture the coevolution of innovation models and real-world conditions over time. Practice redesign is a process, not an end-state. Evaluations must move iteratively between research and real-world change. Although an evaluation could passively watch from the sidelines as an intervention unfolds, our workgroup concluded that evaluations should include embedded process evaluations that provide ongoing feedback to practices participating in interventions so the model can adapt as practices learn what works and what does not.19,24,25 Such evaluations must understand and describe initial PCMH models while also capturing changes as they occur over time. And, because it is unlikely that the final model can be implemented elsewhere without further local adaptation, it is critical to understand how different evolutionary processes facilitate or hinder implementation.
Primary care transformation is only possible through long-term intervention plans that evolve over time. Just as the primary care delivery model evolves over time, the intervention strategy itself must also adapt as projects are implemented. Feedback from experiments in real-world practices will inevitably suggest possible changes in implementation strategies. For example, initial intervention plans might call for practice participants to attend a series of learning sessions.26 However, during the implementation it may be discovered that practice staff also require onsite facilitation or external consultants. As implementers change their strategies over time, evaluations must be sufficiently flexible to capture these changes along with rationales for making them.
Practice redesigns are “whole system” changes made up of multiple, interdependent components. The PCMH is explicitly a model of service delivery system reform that encompasses multiple layers and levers for improving primary care practice, ranging from improvements in chronic and preventive care to increased efficiency. Evaluations must be attentive to specific model elements that are changing in real-world contexts; however, they should also track changes to the overall gestalt—how different aspects of change combine to create a new patient experience and alternative forms of care delivery. Evaluations must avoid entering into a parts/whole fallacy. There are hazards in assuming that just because 1 component or feature works in 1 context, it will work equally well in other contexts and with other components. The complex interplay among components also increases the potential for unintended consequences; improvement in 1 aspect may reduce or promote the optimal functioning of another. For example, work efficiency may increase by stretching appointment intervals or replacing one-on-one visits with group visits. But this could adversely impact the clinician-patient relationship, thereby undercutting patient-centeredness. Similarly, adding new technologies, such as electronic chronic disease registries or e-prescribing, could initially add significant time delays as clinic teams learn how to use these tools and integrate them into the practice.
Integration of new practice features and components requires that stakeholder roles evolve. Stakeholders include the clinical and managerial teams, as well as those in the larger environment such as patients, payers, community members, specialists, and hospitals. Evaluations should identify stakeholder willingness and ability to adapt their roles to new innovations. Evaluations should also examine how roles of practice team members change as service delivery improvements take hold. For example, introducing a care management specialist might mean that care processes traditionally carried out by physicians in the examination room will now occur in different settings and with different providers. Long-established roles and patterns of care may, however, be difficult to overcome, leading to active or passive resistance that must be understood and documented. Patient roles and responsibilities will also change under most PCMH models. Patient responsibilities may include timely arrival for appointments, following treatment plans, self-managing illnesses, communicating via an internet patient portal, and collaborating with care coordination. Patients’ access to their own records and the use of new technologies through web portals and online scheduling may also redefine patient roles.
One key to implementation research and evaluation is decoding not just what people intend, but what they actually do. Using implementation of health information technology (HIT) as an example, we find innovations can unfold in a wide range of ways in the day-to-day life of a real-world practice, often in ways that are unanticipated.27 Clinical documentation, third-party requirements, and compliance audits take time and can potentially get in the way of more personal interaction, leading physicians to resist change. Additionally, while HIT allows other clinic team members to share in the care-related workload, this potential is still a long way from being realized in most programs, practices, and workflows.28 Evaluations must be able to separate innovations that “fail” due to problems in implementation from those with inherent problems that make any implementation problematic. For example, physicians may not use a newly purchased e-prescribing tool because the server goes down 10% of the time. Similarly, lack of examination room laptops or too few site licenses may make access too laborious.
Evaluations also need to address the fact that initial conditions within organizations may hamper successful implementation. A new technology may be sound, but the way it is implemented makes it appear unsuccessful. Layering complex HIT solutions atop a broken process, such as a medication refill system without protocols in place, will have limited impact. Clinic teams that clearly embrace change and commit to redesigned processes of care before incorporating HIT are more likely to succeed. Thus, evaluation teams should work closely with end-users to understand how processes of care change as clinic teams work to implement HIT tools.
Community-based care cannot be easily redesigned if the continuum of care is not linked with specialists, hospitals, a supportive payment system, and accessible referral sites for other services.29,30 For example, referrals and referral management are challenges that primary care practices cannot fix in isolation. To be successful, a redesigned practice must be well integrated into its so-called “medical neighborhood”—the larger local and regional context of interorganizational networks.30,31 Widespread adoption of new practice models within larger medical neighborhoods requires close collaboration and coordination with other health care professionals who may not initially understand or share these concepts. For example, many practices cannot support care managers or health educators, yet there are often viable avenues for establishing these supports within the local network. Evaluations must examine local and larger environments surrounding primary care practices to better understand implementation barriers and constraints.
Implementation of practice change is a resource intensive proposition and many practices lack the necessary resources. Most primary care in the United States occurs in small practices without the administrative capacity to implement widespread and continuous change.32 Small practices typically lack the resources to purchase advanced HIT systems, hire additional care coordinators, or integrate other resource or time-intensive innovations. Because practice transformation requires an amalgamation of improvement strategies, how any given practice implements the PCMH will depend on its resources, constraints, and the specifics of its situation. Whereas large “integrated” health systems may have more resources, they also have more layers of bureaucracy, institutional inertia, required sign-offs, and competition for resources compared with more flexible smaller practices.
Evaluations should attempt to independently define and measure the external resources required for implementations, including both start-up and continuing costs. Start-up resources facilitate team building and process improvement, for example, training teams in how to approach change. Practice members often need assistance in “learning how to learn,” especially if key terminology or criteria are new. To sustain change, practices require continuing resources to assess implementation and make adjustments, perhaps in the form of Plan-Do-Study-Act cycles. It is not unrealistic to expect a 3-day staff intensive to cost $30,000 in lost productivity and training costs. Such investments of time and personnel are substantial and require extra resources, while the full magnitude of benefit is often initially hidden. Comprehensive evaluations are needed to document these resource needs to illuminate the true costs to practices of transformative change and to guide future implementations.
Recent guidelines for reporting on practice change and quality improvement studies emphasize the need to go beyond traditional outcomes and include details of context, initial intervention, the implementation process, as well as changes to the intervention and implementation process.16,33 The consensus of our workgroup is that evaluations need to be mixed-methods qualitative and quantitative designs, particularly those that include innovative strategies for collecting data at the local, regional, and patient population levels of the organization and its environment.12 Mixed methods are essential. Patient outcomes may be best measured using clinical records, whereas patient experience may be optimally studied through surveys. Practices achieving status as an advanced PCMH should already have internal “sensing systems” that collect these data,20 so evaluators may wish to collaborate with practices to develop these measures. In contrast, capturing the day-to-day problems with implementing change typically requires more unobtrusive, qualitative observations and unstructured interviews.34,35 There are many specific quantitative measures for multiple aspects of primary care practice redesign,12 and an increasing collection of qualitative and mixed-methods strategies available for evaluations.35,36 Although it is not possible to provide an exhaustive list of recommendations, we do provide here some guidelines for a more comprehensive evaluation based on our experience in the PCMH Evaluator’s Collaborative.
To describe the variability and richness of contexts, settings, and outcomes of interventions, it is important to gather data on initial conditions of both practices and the larger medical neighborhoods in which they are situated. Often this information is partially available in grant applications and other documents available from practices, municipalities, and local health systems. Geographic information systems also provide capacities for defining service areas, identifying available local resources, and describing patient characteristics of primary care practices.37
Because invariably multiple intervention and improvement design modifications are required during implementation, it is important to capture these in some detail. A recently developed strategy for capturing these changes is the use of online diaries posted by intervention team members, practice clinicians, and practice staff.38 These online diaries have been shown to provide ongoing insights into what worked, why, and how. They also clarify what should have worked, but did not, and why.39 Frequent first-hand reflections of this kind can also document barriers and unanticipated consequences.
It has become clear that core strengths of a practice, such as care processes, finances, and operations, have tremendous impact on the uptake and sustainability of practice change.20 Empirical research suggests that “adaptive reserve,” which comprises leadership skills, communication patterns, relationships among stakeholders, improvisational skills, and sense-making, is also a key factor in successful practice change.20,36,40 Information on these characteristics can be collected through a combination of online diaries and clinician/staff surveys,12,36 although direct observation of organizational operations and targeted key informant interviews are likely to contribute insights not easily gleaned from other sources.41,42 These qualitative strategies are instrumental for understanding the interdependencies among different components of practice change models, how changing one impacts others, and how interventions influence and reshape roles of practice participants.
Although PCMH models are still evolving, quantitative measures of clinical and organizational outcomes have been proposed.12,36 These instruments often emphasize data that are easily obtained from sources like billing and medical records and may be less focused on capturing patient and clinician/staff experience.12 Fortunately, there are instruments to capture patient experiences,43 for example, the Ambulatory Care Experience Survey,44 the Patient Enablement Index45 and a consultation and relational empathy measure.46 Quantitative measures are also available to capture health care access, health promotion counseling, clinical team care, whole person care, and patient’s perception of time with doctor.12
The recently published Standards for Quality Improvement Reporting Excellence guidelines emphasize the need for reporting of quality improvement interventions to focus on understanding and describing the actual course of interventions, including how and why plans for practice change evolve.16 Thus, evaluations of PCMH demonstrations should be as comprehensive as possible. Comprehensive evaluations can be very expensive and may not always be feasible, particularly in state pilots with limited funds or given limitations of some federally funded grants. Nevertheless, some degree of comprehensiveness should be the standard to which we hold our field. Investigators should be cautious in interpreting results that are only partial in their inclusion of stakeholder groups and/or lack observation and measures of structure, process, and outcomes at multiple levels.
When resources limit the comprehensiveness of the evaluation, the minimum standard should include a combination of brief observation and targeted interviews at different points in time during implementation. If it is not possible to make site visits to all participating practices, then at least a purposeful sample should be selected, because direct observation can capture insights into day-to-day implementation that are hard to obtain through interviews. Borkan et al describe one such brief strategy for evaluating residencies.47 Online or written diaries kept by both the implementation team and a sample of practice participants can be used to check in and regularly track changes to the intervention or implementation.38
Figure 2 provides a more comprehensive evaluation framework that includes both an embedded process evaluation, which provides ongoing feedback to the implementation team, and an independent qualitative assessment team that captures details of the actual intervention over time. This innovative design allows the qualitative assessment team to track the actual course of the intervention as suggested in the Standards for Quality Improvement Reporting Excellence guidelines, while not contaminating the intervention, which also needs to be receiving real-time participatory feedback. The qualitative assessment team would use a range of qualitative data collection strategies,41,48 including direct observation, key informant interviews, focus groups, and individual in-depth interviews. This particular evaluation design also includes a separate quantitative outcomes team that can collect patient, clinician/staff, and practice-level outcomes data, as well as a national advisory group. Our workgroup recommends this type of design for any large-scale PCMH implementation.
The collection of this volume of diverse data at multiple levels does create some analysis challenges. Although it is not possible to provide analysis details in this manuscript, if the task is envisioned as creating multiple series of comparative case studies in which each series focuses on one or more of the recommendations,49 the task is much less daunting. Using the recently completed National Demonstration Project as an example, Nutting et al provide a series of case summaries that illustrate this process.23 Although somewhat time intensive, creating these case summaries is a well-described process that uses common mixed-methods analysis techniques.35,48,50
It has become readily apparent that practice redesign is exceedingly complex, that most interventions fail to meet expectations, and that unanticipated consequences are commonplace. Evaluations must be designed to reflect these realities by simultaneously focusing on both the parts and the whole across multiple units of analysis. This will require longitudinal, mixed-methods designs. Optimally, designs should incorporate characteristics of evaluation strategies that enable reciprocal learning among evaluators, program implementers, and practice participants, such as participatory or empowerment evaluation and action research.19,25,51 Because of the important policy implications of transforming and reforming primary care, learning how to do it right itself has to be done right.
Supported by a grant from the Commonwealth Fund.