Med Care. Author manuscript; available in PMC 2012 January 1.
PMCID: PMC3130251
NIHMSID: NIHMS306100

Evaluation of Patient Centered Medical Home Practice Transformation Initiatives

Benjamin F. Crabtree, PhD,* Sabrina M. Chase, PhD,* Christopher G. Wise, PhD, MHA, Gordon D. Schiff, MD,§ Laura A. Schmidt, PhD, MSW, MPH,|| Jeanette R. Goyzueta, MPH,** Rebecca A. Malouin, PhD, MPH,††‡‡ Susan M. C. Payne, PhD, MPH,§§ Michael T. Quinn, PhD,¶¶ Paul A. Nutting, MD, MSPH,||||*** William L. Miller, MD, MA,††† and Carlos Roberto Jaén, MD, PhD‡‡‡§§§

Abstract

Background

The patient-centered medical home (PCMH) has become a widely cited solution to the deficiencies in primary care delivery in the United States. To achieve the magnitude of change being called for in primary care, quality improvement interventions must focus on whole-system redesign, and not just isolated parts of medical practices.

Methods

Investigators participating in 9 different evaluations of Patient Centered Medical Home implementation shared experiences, methodological strategies, and challenges encountered in evaluating primary care practice redesign.

Results

A year-long iterative process of sharing and reflecting on experiences produced consensus on 7 recommendations for future PCMH evaluations: (1) look critically at models being implemented and identify aspects requiring modification; (2) include embedded qualitative and quantitative data collection to detail the implementation process; (3) capture details concerning how different PCMH components interact with one another over time; (4) understand and describe how and why physician and staff roles do, or do not evolve; (5) identify the effectiveness of individual PCMH components and how they are used; (6) capture how primary care practices interface with other entities such as specialists, hospitals, and referral services; and (7) measure resources required for initiating and sustaining innovations.

Conclusions

Broad-based longitudinal, mixed-methods designs that provide for shared learning among practice participants, program implementers, and evaluators are necessary to evaluate the novelty and promise of the PCMH model. All PCMH evaluations should be as comprehensive as possible, and at a minimum should include a combination of brief observations and targeted qualitative interviews along with quantitative measures.

Keywords: patient-centered medical home, PCMH, quality improvement, evaluation, primary care practice redesign, qualitative methods, implementation

Primary health care has long been a cornerstone of a well-functioning health care system.1,2 Nevertheless, there is growing consensus that primary health care in the United States is in turmoil and badly in need of change.3–6 The patient-centered medical home (PCMH) has become a widely cited solution to the deficiencies in primary care.7–10 The PCMH combines core tenets of primary care (eg, first-contact care that is continuous, comprehensive, and coordinated across the care continuum) with recent practice innovations such as electronic information systems, population-based management of chronic illness, and continuous quality improvement.9,11,12 One cornerstone of the PCMH is meeting the needs and preferences of patients; another is payment reform that improves reimbursement to primary care practices and rewards high performance. PCMH Joint Principles were approved in 2007 by major primary care professional organizations,13 and the model has gained the endorsement of 17 specialty societies, nearly all the Fortune 500 companies, and all major national health plans.14

There are numerous pilot and demonstration projects underway to test the efficacy and effectiveness of PCMH models.14,15 The future of primary care is far too important for us to fail to learn everything possible from these varied experiments in implementing the PCMH. Given wide acknowledgment of the urgent need for health care reform, understanding the “how” of implementing change may be as important as, or even more important than, establishing whether such changes are justified. Evaluations are critical for understanding on-the-ground developments as practices attempt to implement new care models such as the PCMH.16 Although detailed qualitative research into the context and mechanism of interventions is not new,17–19 the chance to look inside the black box of practices’ actual experience with implementation presents unique challenges and important opportunities to shape health care delivery.

Practitioners and researchers have embarked on a variety of PCMH demonstration and pilot projects. Some focus on implementing interventions targeting specific processes of care, patient populations, or particular components of practice delivery and operation. Others attempt to implement more comprehensive redesign of practice functioning and culture.14 Recent research suggests that there are core structural elements, technological and process components, and cultural shifts necessary for a practice to effectively transform along the lines of a medical home model.20 Initial demonstrations make it clear that transformation to a PCMH is challenging, but also has far-reaching implications for patients, providers, and payers.21,22 Beneath this sobering reality lies a wealth of lessons about the ingredients required to successfully redesign practices and the methods needed to engage physicians, practice members, and patients in envisioning and implementing new models of delivering care.22,23 It is vital that lessons from these varied practice experiences be captured and understood. Future evaluation efforts need to focus on the rich contextual relationships that are integral to the success or failure of experiments in practice change. The ability to record, reflect on, and integrate ongoing data about the day-to-day transformation experience is critical for learning what it takes to create and sustain a PCMH.

This manuscript reflects on lessons from multiple PCMH demonstration projects being evaluated by the Commonwealth Fund and others to identify essential components and appropriate methodologies for conducting evaluations that optimize learning.

METHODS

The Commonwealth Fund created a collaborative of PCMH evaluation teams and launched multiple workgroups to describe the range of PCMH pilot projects and their characteristics. Workgroups included efficiency, clinical quality, patient experience, physician/staff satisfaction, and process/implementation. Our workgroup focused on the process/implementation of PCMH interventions (Table 1 lists the projects represented in the workgroup). This workgroup began its collaboration in May 2009 with a series of conference calls to develop an initial logic model based on different members’ experiences evaluating PCMH pilots (Fig. 1). The logic model identified key assumptions to guide interventions and evaluations. Discussions of these assumptions then generated a core list of intervention activities and evaluation approaches that should be included in evaluations. These include assessments of practice capability, system/external environments, stakeholder and leadership motivation, teambuilding and collaboration, and the change model and assumptions underlying the practice improvement process. This logic model was discussed in a face-to-face meeting in June 2009. The workgroup then “met” via a series of conference calls to refine the logic model and reach consensus on recommendations. This iterative process resulted in 7 key recommendations for PCMH evaluation, as summarized later in the text.

FIGURE 1
Initial logic model guiding workgroup discussions.
TABLE 1
PCMH Demonstration Projects Being Evaluated by Workgroup Team Members

Fundamental Evaluation Requirements

1. Evaluations Should Critically Examine and Continually Reassess PCMH Models as They Unfold During Real-World Implementation

There is a continual interplay between abstract models of health care redesign and the actual experience of implementing these models in real-world health care settings. As insurance; payment policies; and national, regional, and local situations and environments evolve, delivery models must adapt to this ever-changing milieu.22 Therefore, diverse models of primary care need to be envisioned and constantly updated. This fluidity creates both research and intervention challenges. Evaluations must capture the coevolution of innovation models and real-world conditions over time. Practice redesign is a process, not an end-state. Evaluations must move iteratively between research and real-world change. Although an evaluation could passively watch from the sidelines as an intervention unfolds, our workgroup concluded that evaluations should include embedded process evaluations that provide ongoing feedback to practices participating in interventions so that the model can adapt as practices learn what works and what does not.19,24,25 Such evaluations must understand and describe initial PCMH models while also capturing changes as they occur over time. And, because it is unlikely that the final model can be implemented elsewhere without further local adaptation, it is critical to understand how different evolutionary processes facilitate or hinder implementation.

2. Evaluations Need Ongoing Embedded Data Collection Over an Extended Period of Time That Captures How and Why the Implementation Strategies Changed

Primary care transformation is only possible through long-term intervention plans that evolve over time. Just as the primary care delivery model evolves over time, the intervention strategy itself must also adapt as projects are implemented. Feedback from experiments in real-world practices will inevitably suggest possible changes in implementation strategies. For example, initial intervention plans might call for practice participants to attend a series of learning sessions.26 However, during the implementation it may be discovered that practice staff also require onsite facilitation or external consultants. As implementers change their strategies over time, evaluations must be sufficiently flexible to capture these changes along with rationales for making them.

3. Evaluations Must Not Only Capture Details of How Individual Components Are Implemented, But Also How They Interact With and Impact Each Other Over Time

Practice redesigns are “whole system” changes made up of multiple, interdependent components. The PCMH is explicitly a model of service delivery system reform that encompasses multiple layers and levers for improving primary care practice, ranging from improvements in chronic care to increased efficiency. Evaluations must be attentive to specific model elements that are changing in real-world contexts; however, they should also track changes to the overall gestalt: how different aspects of change combine to create a new patient experience and alternative forms of care delivery. Evaluations must avoid entering into a parts/whole fallacy. There are hazards in assuming that, just because 1 component or feature works in 1 context, it will work equally well in other contexts and with other components. The complex interplay among components also increases the potential for unintended consequences; improvement in 1 aspect may reduce or promote the optimal functioning of another. For example, work efficiency may increase by stretching appointment intervals or replacing one-on-one visits with group visits. But this could adversely impact the clinician-patient relationship, thereby undercutting patient-centeredness. Similarly, adding new technologies, such as electronic chronic disease registries or e-prescribing, could initially add significant time delays as clinic teams learn how to use these tools and integrate them into the practice.

4. Evaluations Need to Understand and Describe How and Why Roles Evolve, Why They Do Not, and What Future Training May be Indicated

Integration of new practice features and components requires that stakeholder roles evolve. Stakeholders include the clinical and managerial teams, as well as those in the larger environment such as patients, payers, community members, specialists, and hospitals. Evaluations should identify stakeholder willingness and ability to adapt their roles to new innovations. Evaluations should also examine how roles of practice team members change as service delivery improvements take hold. For example, introducing a care management specialist might mean that care processes traditionally carried out by physicians in the examination room will now occur in different settings and with different providers. Long-established roles and patterns of care may, however, be difficult to overcome, leading to active or passive resistance that must be understood and documented. Patient roles and responsibilities will also change under most PCMH models. Patient responsibilities may include timely arrival for appointments, following treatment plans, self-managing illnesses, communicating via an internet patient portal, and collaborating in care coordination. Patients’ access to their own records and the use of new technologies through web portals and online scheduling may also redefine patient roles.

5. Evaluations Need to Identify the Effectiveness of Individual Components (eg, Patient Registries, Physician Dashboards, Patient Portals, etc) and Also How They Are Actually Used as They Are Integrated Into the Practice

One key to implementation research and evaluation is decoding not just what people intend, but what they actually do. Using implementation of health information technology (HIT) as an example, we find that innovations can unfold in a wide range of ways in the day-to-day life of a real-world practice, often unanticipated ones.27 Clinical documentation, third-party requirements, and compliance audits take time and can potentially get in the way of more personal interaction, leading physicians to resist change. Additionally, while HIT allows other clinic team members to share in the care-related workload, this potential is still a long way from being realized in most programs, practices, and workflows.28 Evaluations must be able to separate innovations that “fail” due to problems in implementation from those with inherent problems that make any implementation problematic. For example, physicians may not use a newly purchased e-prescribing tool because the server goes down 10% of the time. Similarly, lack of examination room laptops or too few site licenses may make access too laborious.

Evaluations also need to address the fact that initial conditions within organizations may hamper successful implementation. A new technology may be sound, but the way it is implemented makes it appear unsuccessful. Layering complex HIT solutions atop a broken process, such as a medication refill system without protocols in place, will have limited impact. Clinic teams that clearly embrace change and commit to redesigned processes of care before incorporating HIT are more likely to succeed. Thus, evaluation teams should work closely with end-users to understand how processes of care change as clinic teams work to implement HIT tools.

6. Evaluations Must Capture Not Only Changes Within Individual Practices, But the Ways That Practices Interface With Other Entities Such as Specialists, Hospitals, and Referral Services

Community-based care cannot be easily redesigned if the continuum of care is not linked with specialists, hospitals, a supportive payment system, and accessible referral sites for other services.29,30 For example, referrals and referral management are challenges that primary care practices cannot fix in isolation. To be successful, a redesigned practice must be well integrated into its so-called “medical neighborhood,” the larger local and regional context of interorganizational networks.30,31 Widespread adoption of new practice models within larger medical neighborhoods requires close collaboration and coordination with other health care professionals who may not initially understand or share these concepts. For example, many practices cannot support care managers or health educators, yet there are often viable avenues for establishing these supports within the local network. Evaluations must examine the local and larger environments surrounding primary care practices to better understand implementation barriers and constraints.

7. Evaluations Must Contribute to Better Understanding What Resources Are Required to Change the Delivery of Care and Which Default Options Work Best When Resources Limit Implementation

Implementation of practice change is a resource-intensive proposition, and many practices lack the necessary resources. Most primary care in the United States occurs in small practices without the administrative capacity to implement widespread and continuous change.32 Small practices typically lack the resources to purchase advanced HIT systems, hire additional care coordinators, or integrate other resource- or time-intensive innovations. Because practice transformation requires an amalgamation of improvement strategies, how any given practice implements the PCMH will depend on its resources, constraints, and the specifics of its situation. Whereas large “integrated” health systems may have more resources, they also have more layers of bureaucracy, institutional inertia, required sign-offs, and competition for resources compared with more flexible smaller practices.

Evaluations should attempt to independently define and measure the external resources required for implementation, including both start-up and continuing costs. Start-up resources facilitate team building and process improvement, for example, training teams in how to approach change. Practice members often need assistance in “learning how to learn,” especially if key terminology or criteria are new. To sustain change, practices require continuing resources to assess implementation and make adjustments, perhaps in the form of Plan-Do-Study-Act cycles. It is not unrealistic to expect a 3-day staff intensive to cost $30,000 in lost productivity and training costs. Such investments of time and people are not insignificant and require extra resources, while the full magnitude of benefit is often initially hidden. Comprehensive evaluations are needed to document these resource needs, to illuminate the true costs to practices of transformative change, and to guide future implementations.
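To make the scale of such costs concrete, a purely hypothetical back-of-the-envelope calculation is sketched below; the practice size, blended hourly cost, and training expenses are illustrative assumptions rather than figures drawn from any of the demonstration projects:

$$
\underbrace{10\ \text{staff} \times 3\ \text{days} \times 8\ \text{h/day} \times \$75/\text{h}}_{\text{lost productivity}\ \approx\ \$18{,}000} \;+\; \underbrace{\$12{,}000}_{\text{facilitators, travel, materials (assumed)}} \;\approx\; \$30{,}000
$$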

Evaluation Design and Measurement Considerations

Recent guidelines for reporting on practice change and quality improvement studies emphasize the need to go beyond traditional outcomes and include details of context, the initial intervention, and the implementation process, as well as changes to the intervention and implementation process.16,33 The consensus of our workgroup is that evaluations need to be mixed-methods qualitative and quantitative designs, particularly those that include innovative strategies for collecting data at the local, regional, and patient population levels of the organization and its environment.12 Mixed methods are essential. Patient outcomes may be best measured using clinical records, whereas patient experience may be optimally studied through surveys. Practices achieving status as an advanced PCMH should already have internal “sensing systems” that collect these data,20 so evaluators may want to collaborate with practices to develop these measures. In contrast, capturing the day-to-day problems of implementing change typically requires more unobtrusive, qualitative observations and unstructured interviews.34,35 There are many specific quantitative measures for multiple aspects of primary care practice redesign,12 and an increasing collection of qualitative and mixed-methods strategies available for evaluations.35,36 Although it is not possible to provide an exhaustive list of recommendations, we do provide here some guidelines for a more comprehensive evaluation based on our experience in the PCMH Evaluator’s Collaborative.

What Qualitative/Process Data Are Needed to Evaluate a PCMH Intervention?

To describe the variability and richness of contexts, settings, and outcomes of interventions, it is important to gather data on the initial conditions of both practices and the larger medical neighborhoods in which they are situated. Often this information is partially available in grant applications and other documents obtained from practices, municipalities, and local health systems. Geographic information systems also provide the capacity to define service areas, identify available local resources, and describe the patient characteristics of primary care practices.37

Because multiple modifications to the intervention and improvement design are invariably required during implementation, it is important to capture these in some detail. A recently developed strategy for capturing these changes is the use of online diaries posted by intervention team members, practice clinicians, and practice staff.38 These online diaries have been shown to provide ongoing insights into what worked, why, and how. They also clarify what should have worked, but did not, and why.39 Frequent first-hand reflections of this kind can also document barriers and unanticipated consequences.

It has become clear that the core strengths of a practice, such as care processes, finances, and operations, have a tremendous impact on the uptake and sustainability of practice change.20 Empirical research suggests that “adaptive reserve,” composed of leadership skills, communication patterns, relationships among stakeholders, improvisational skills, and sense-making, is also a key factor in successful practice change.20,36,40 Information on these characteristics can be collected through a combination of online diaries and clinician/staff surveys,12,36 although direct observation of organizational operations and targeted key informant interviews are likely to contribute insights not easily gleaned from other sources.41,42 These qualitative strategies are instrumental for understanding the interdependencies among different components of practice change models, how changing one impacts others, and how interventions influence and reshape the roles of practice participants.

Quantitative Measures

Although PCMH models are still evolving, quantitative measures of clinical and organizational outcomes have been proposed.12,36 These instruments often emphasize data that are easily obtained from sources like billing and medical records and may be less focused on capturing patient and clinician/staff experience.12 Fortunately, there are instruments to capture patient experiences,43 for example, the Ambulatory Care Experience Survey,44 the Patient Enablement Index,45 and the consultation and relational empathy measure.46 Quantitative measures are also available to capture health care access, health promotion counseling, clinical team care, whole-person care, and patients’ perceptions of time spent with the doctor.12

Evaluation Design Options

The recently published Standards for Quality Improvement Reporting Excellence guidelines emphasize that reports of quality improvement interventions should focus on understanding and describing the actual course of the intervention, including how and why plans for practice change evolve.16 Thus, evaluations of PCMH demonstrations should be as comprehensive as possible. Comprehensive evaluations can be very expensive and may not always be feasible, particularly in state pilots with limited funds or given the limitations of some federally funded grants. Nevertheless, some degree of comprehensiveness should be the standard to which we hold our field. Investigators should be cautious in interpreting results from evaluations that include only some stakeholder groups or that lack observation and measurement of structure, process, and outcomes at multiple levels.

When resources limit the comprehensiveness of the evaluation, the minimum standard should include a combination of brief observation and targeted interviews at different points in time during implementation. If it is not possible to make site visits to all participating practices, then at least a purposeful sample should be selected, because direct observation yields insights into day-to-day implementation that are hard to capture through interviews. Borkan et al describe one such brief strategy for evaluating residencies.47 Online or written diaries kept by both the implementation team and a sample of practice participants can be used to check in and regularly track changes to the intervention or implementation.38

Figure 2 provides a more comprehensive evaluation framework that includes both an embedded process evaluation, which provides ongoing feedback to the implementation team, and a separate, independent qualitative assessment team that captures details of the actual intervention over time. This design allows the qualitative assessment team to track the actual course of the intervention, as suggested in the Standards for Quality Improvement Reporting Excellence guidelines, without contaminating the intervention, which still receives real-time participatory feedback from the embedded process evaluation. The qualitative assessment team would use a range of qualitative data collection strategies,41,48 including direct observation, key informant interviews, focus groups, and individual in-depth interviews. This particular evaluation design also includes a separate quantitative outcomes team that collects patient, clinician/staff, and practice-level outcomes data, as well as a national advisory group. Our workgroup recommends this type of design for any large-scale PCMH implementation.

FIGURE 2
Organization of a comprehensive PCMH evaluation.

The collection of this volume of diverse data at multiple levels does create some analysis challenges. Although it is not possible to provide analysis details in this manuscript, if the task is envisioned as creating multiple series of comparative case studies, in which each series focuses on one or more of the recommendations,49 it becomes much less daunting. Using the recently completed National Demonstration Project as an example, Nutting et al provide a series of case summaries that illustrate this process.23 Although somewhat time-intensive, creating these case summaries is a well-described process that relies on common mixed-methods analysis techniques.35,48,50

CONCLUSIONS

It has become readily apparent that practice redesign is exceedingly complex, that most interventions fail to meet expectations, and that unanticipated consequences are commonplace. Evaluations must be designed to reflect these realities by simultaneously focusing on both the parts and the whole across multiple units of analysis. This will require longitudinal, mixed-methods designs. Optimally, designs should incorporate characteristics of evaluation strategies that enable reciprocal learning among evaluators, program implementers, and practice participants, such as participatory or empowerment evaluation and action research.19,25,51 Because of the important policy implications of transforming and reforming primary care, learning how to do it right must itself be done right.

Acknowledgments

Supported by a grant from the Commonwealth Fund.

REFERENCES

1. Starfield B. Primary Care: Concept, Evaluation, and Policy. Oxford University Press; New York, NY: 1992.
2. Starfield B. The future of primary care: refocusing the system. N Engl J Med. 2008;359:2087–2091. [PubMed]
3. Institute of Medicine (US), Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press; Washington, DC: 2001.
4. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–2645. [PubMed]
5. Rosenthal TC. The medical home: growing evidence to support a new approach to primary care. J Am Board Fam Med. 2008;21:427–440. [PubMed]
6. Dentzer S. Reinventing primary care: a task that is far “too important to fail.” Health Aff (Millwood) 2010;29:757. [PubMed]
7. Arrow K, Auerbach A, Bertko J, et al. Toward a 21st-century health care system: recommendations for health care reform. Ann Intern Med. 2009;150:493–495. [PubMed]
8. Berenson RA, Hammons T, Gans DN, et al. A house is not a home: keeping patients at the center of practice redesign. Health Aff (Millwood) 2008;27:1219–1230. [PubMed]
9. Davis K, Schoenbaum SC, Audet AM. A 2020 vision of patient-centered primary care. J Gen Intern Med. 2005;20:953–957. [PMC free article] [PubMed]
10. Martin JC, Avant RF, Bowman MA, et al. The future of family medicine: a collaborative project of the family medicine community. Ann Fam Med. 2004;2(suppl 1):S3–S32. [PubMed]
11. Rittenhouse DR, Shortell SM. The patient-centered medical home: will it stand the test of health reform? JAMA. 2009;301:2038–2040. [PubMed]
12. Stange KC, Nutting PA, Miller WL, et al. Defining and measuring the patient-centered medical home. J Gen Intern Med. 2010;25:601–612. [PMC free article] [PubMed]
13. American Academy of Family Physicians (AAFP) American Academy of Pediatrics (AAP) American College of Physicians (ACP) American Osteopathic Association (AOA) [Accessed October 7, 2009];Joint principles of the patient-centered medical home. Available at: www.medicalhomeinfo.org/Joint%20Statement.pdf.
14. Patient-Centered Primary Care Collaborative [Accessed October 30, 2009];Proof in practice: a compilation of patient-centered medical home pilot and demonstration projects. Available at: http://pcpcc.net/pilot-guide.
15. Carrier E, Gourevitch MN, Shah NR. Medical homes: challenges in translating theory into practice. Med Care. 2009;47:714–722. [PMC free article] [PubMed]
16. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. BMJ. (Clinical Research Edition) 2009;338:a3152. [PMC free article] [PubMed]
17. Berwick DM. The stories beneath. Med Care. 2007;45:1123–1125. [PubMed]
18. Pawson R, Tilley N. Realistic Evaluation. Sage; London, United Kingdom, Thousand Oaks, CA: 1997.
19. Patton MQ. Qualitative Research and Evaluation Methods. Sage Publications; Thousand Oaks, CA: 2002.
20. Miller WL, Crabtree BF, Nutting PA, et al. Primary care practice development: a relationship-centered approach. Ann Fam Med. 2010;8(suppl 1):S68–S79. [PubMed]
21. Nutting PA, Miller WL, Crabtree BF, et al. Initial lessons from the first national demonstration project on practice transformation to a patient-centered medical home. Ann Fam Med. 2009;7:254–260. [PubMed]
22. Crabtree BF, Nutting PA, Miller WL, et al. Summary of the National Demonstration Project and recommendations for the patient-centered medical home. Ann Fam Med. 2010;8(suppl 1):S80–S90. [PubMed]
23. Nutting PA, Crabtree BF, Miller WL, et al. Journey to the patient-centered medical home: a qualitative analysis of the experiences of practices in the National Demonstration Project. Ann Fam Med. 2010;8(suppl 1):S45–S56. [PubMed]
24. Stewart EE, Nutting PA, Crabtree BF, et al. Implementing the patient-centered medical home: observation and description of the National Demonstration Project. Ann Fam Med. 2010;8(suppl 1):S21–S32. [PubMed]
25. Fetterman DM, Kaftarian SJ, Wandersman A. Empowerment Evaluation: Knowledge and Tools for Self-Assessment & Accountability. Sage; Thousand Oaks, CA: 1996.
26. Vos L, Duckers ML, Wagner C, et al. Applying the quality improvement collaborative method to process redesign: a multiple case study. Implement Sci. 2010;5:19. [PMC free article] [PubMed]
27. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005;116:1506–1512. [PubMed]
28. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362:1066–1069. [PubMed]
29. Pham HH, O’Malley AS, Bach PB, et al. Primary care physicians’ links to other physicians through Medicare patients: the scope of care coordination. Ann Intern Med. 2009;150:236–242. [PMC free article] [PubMed]
30. Pham HH. Good neighbors: how will the patient-centered medical home relate to the rest of the health-care delivery system? J Gen Intern Med. 2010;25:630–634. [PMC free article] [PubMed]
31. Fisher ES. Building a medical neighborhood for the medical home. N Engl J Med. 2008;359:1202–1205. [PMC free article] [PubMed]
32. Hing E, Burt CW. Characteristics of office-based physicians and their medical practices: United States, 2005–2006. Vital Health Stat 13. 2008:1–34. [PubMed]
33. Patient-Centered Primary Care Collaborative [Accessed April 2, 2010];Guidelines for patient centered medical home (PCMH) demonstration projects. Available at: http://www.pcpcc.net/files/pcmh_demo_guidelines.pdf.
34. Creswell JW, Fetters MD, Ivankova NV. Designing a mixed methods study in primary care. Ann Fam Med. 2004;2:7–12. [PubMed]
35. Curry LA, Nembhard IM, Bradley EH. Qualitative and mixed methods provide unique contributions to outcomes research. Circulation. 2009;119:1442–1452. [PubMed]
36. Jaen CR, Crabtree BF, Palmer RF, et al. Methods for evaluating practice change toward a patient-centered medical home. Ann Fam Med. 2010;8(suppl 1):S9–S20. [PubMed]
37. Bazemore A, Phillips RL, Miyoshi T. Harnessing geographic information systems (GIS) to enable community-oriented primary care. J Am Board Fam Med. 2010;23:22–31. [PubMed]
38. Cohen DJ, Leviton LC, Isaacson NF, et al. Online diaries for qualitative evaluation: gaining real-time insights. Am J Eval. 2006;27:163–184.
39. Cohen DJ, Crabtree BF, Etz RS, et al. Fidelity versus flexibility: translating evidence-based research into practice. Am J Prev Med. 2008;35:S381–S389. [PubMed]
40. Lanham HJ, McDaniel RR, Jr, Crabtree BF, et al. How improving practice relationships among clinicians and nonclinicians can improve quality in primary care. Jt Comm J Qual Patient Saf/Joint Commission Resources. 2009;35:457–466. [PMC free article] [PubMed]
41. Checkland K. Understanding general practice: a conceptual framework developed from case studies in the UK NHS. Br J Gen Pract. 2007;57:56–63. [PMC free article] [PubMed]
42. Crabtree BF, Miller WL, Stange KC. Understanding practice from the ground up. J Fam Pract. 2001;50:881–887. [PubMed]
43. Malouin RA, Starfield B, Sepulveda MJ. Evaluating the tools used to assess the medical home. Manag Care. 2009;18:44–48. [PubMed]
44. Safran DG, Karp M, Coltin K, et al. Measuring patients’ experiences with individual primary care physicians. Results of a statewide demonstration project. J Gen Intern Med. 2006;21:13–21. [PMC free article] [PubMed]
45. Howie JG, Heaney DJ, Maxwell M, et al. A comparison of a patient enablement instrument (PEI) against two established satisfaction scales as an outcome measure of primary care consultations. Fam Pract. 1998;15:165–171. [PubMed]
46. Mercer SW, McConnachie A, Maxwell M, et al. Relevance and practical use of the consultation and relational empathy (CARE) Measure in general practice. Fam Pract. 2005;22:328–334. [PubMed]
47. Borkan JM, Miller WL, Neher JO, et al. Evaluating family practice residencies: a new method for qualitative assessment. Fam Med. 1997;29:640–647. [PubMed]
48. Crabtree BF, Miller WL. Doing Qualitative Research. Sage Publications; Thousand Oaks, CA: 1999.
49. Anderson RA, Crabtree BF, Steele DJ, et al. Case study research: the view from complexity science. Qual Health Res. 2005;15:669–685. [PMC free article] [PubMed]
50. Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. Sage Publications; Thousand Oaks, CA: 2007.
51. Patton MQ. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press; New York, NY: 2010.