Background: In Massachusetts, physician groups’ performance on validated surveys of patient experience has been publicly reported since 2006. Groups also receive detailed reports of their own performance, but little is known about how physician groups have responded to these reports.
Objective: To examine whether and how physician groups are using patient experience data to improve patient care.
Design and Participants: During 2008, we conducted semi-structured interviews with the leaders of 72 participating physician groups (out of 117 groups receiving patient experience reports). Based on leaders’ responses, we identified three levels of engagement with patient experience reporting: no efforts to improve (level 1), efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and efforts to improve group-wide performance (level 3).
Main Measures: Groups’ level of engagement and specific efforts to improve patient care.
Key Results: Forty-four group leaders (61%) reported group-wide improvement efforts (level 3), 16 (22%) reported efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and 12 (17%) reported no performance improvement efforts (level 1). Level 3 groups were more likely than others to have an integrated medical group organizational model (84% vs. 31% at level 2 and 33% at level 1; P<0.005) and to employ the majority of their physicians (69% vs. 25% and 20%; P<0.05). Among level 3 groups, the most common targets for improvement were access, communication with patients, and customer service. The most commonly reported improvement initiatives were changing office workflow, providing additional training for nonclinical staff, and adopting or enhancing an electronic health record.
Conclusions: Despite statewide public reporting, physician groups’ use of patient experience data varied widely. Integrated organizational models were associated with greater engagement, and efforts to enhance clinicians’ interpersonal skills were uncommon, with groups predominantly focusing on office workflow and support staff.
The Institute of Medicine has recognized achieving patient-centered care as an essential component of efforts to improve the quality of U.S. health care.1 To assess whether care is patient-centered and to guide improvement efforts, public agencies and private sector organizations have developed valid and reliable methods for surveying patients about their health care experiences.2–4 These surveys have been used to evaluate patients’ experiences with health plans, hospitals, and most recently, with physicians and physician groups in the ambulatory setting.5–7
Since 2002, the Massachusetts Health Quality Partners (MHQP), a multistakeholder collaborative, has conducted a statewide patient experience survey of more than 200,000 patients enrolled in the five largest commercial health plans in the state.8 This survey assesses the care delivered by over 4,000 primary care physicians in nearly 500 primary care practice sites of approximately 120 physician groups. In order to inform patients’ choices when selecting providers, MHQP began publicly reporting the patient experience survey performance of primary care practice sites in 2006.9 With the intent of guiding groups’ quality improvement efforts, MHQP also provides each physician group a detailed report of its own performance.
Previous studies have assessed how patients use publicly reported provider performance data10–13, and how providers respond to performance reports on measures of technical quality and health outcomes.14–16 However, less is known about whether and how physicians and physician groups respond to performance reports of patient experience. In the context of national efforts such as the “medical home” movement, which emphasizes performance measurement and improvement by physician groups, such information may be especially salient.17 In this paper, we assess—in the context of a statewide public reporting effort—the use of confidential, detailed reports of patient experience by physician groups.
For the purposes of this study, we defined a physician group as a collection of physicians practicing at one or more office addresses (i.e., practice sites) who shared at least one group-level manager (defined as an individual who coordinated contacts with health plans or oversaw group performance). To identify physician groups, we used the 2007 MHQP statewide physician directory, which included all Massachusetts physician groups with at least 3 physicians who provided care to enrollees in any of the five largest commercial health plans in the state. The directory, which is updated annually via direct contact with physician groups, also identifies whether each group is affiliated with one of the nine large multi-group provider networks in the state.18
Specialist physician groups that did not provide primary care were not included in the study because reports of patients’ experiences with specialist care had not been released at the time of our interviews. We excluded pediatric-only groups from the study sample for two reasons: (1) there were fewer than 20 such groups in the state, and (2) pediatric and adult patient experience survey instruments differed, which may have led pediatric and adult primary care groups to respond differently. However, groups providing primary care for both adults and children were included in the study sample.
After excluding specialist-only and pediatric-only groups, the MHQP group roster contained all 133 physician groups in the state (of size ≥3 physicians) that provided primary care to adult patients. While approaching each of the 133 groups to participate in the survey, we discovered that 18 of these groups were ineligible because they lacked local medical management and thus would be better described as practice sites of other medical groups. These 18 sites were removed from the sample, and 5 physician groups that these sites identified as providing them with management were added to the sample in their stead. Another 10 groups from the original roster had recently reorganized into 7 “new” groups. After these refinements to the roster (removing 18 + 10 = 28 ineligible groups and adding 5 + 7 = 12 groups), the final study sample contained 117 physician groups. All groups had previously received detailed patient experience survey reports, although the manner of dissemination of these reports to the 12 new groups was uncertain.
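The roster refinement described above reduces to simple arithmetic, which can be checked in a few lines (all counts taken directly from the text):

```python
# Sample refinement arithmetic, with all counts as reported in the text
initial_roster = 133           # adult primary care groups with >= 3 physicians
practice_sites_removed = 18    # groups lacking local medical management
parent_groups_added = 5        # managing groups identified by those sites
reorganized_removed = 10       # groups that had recently reorganized
merged_groups_added = 7        # the "new" groups formed by the reorganization

final_sample = (initial_roster
                - practice_sites_removed + parent_groups_added
                - reorganized_removed + merged_groups_added)
print(final_sample)  # → 117
```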
Based on a review of the literature and prior surveys of physician group leaders about the quality of care19,20, we developed a guide for conducting semi-structured 30-minute interviews with physician group leaders. The guide’s questions were designed to assess group leaders’ use of patient experience reports, with emphasis on obtaining detailed descriptions of performance improvement activities, when such activities were present.
The guide first elicited performance improvement activities in an open format, asking respondents to list all activities. This open elicitation was followed by prompts about specific activities (e.g., changing office workflow, retraining physicians or other staff) if these were not previously mentioned. For each activity described by group leaders, the guide asked what aspect of patient experience the activity was intended to improve. These improvement targets were elicited in an open format, without follow-up prompts about specific targets. The guide also included queries about group size (number of physicians), organizational model (integrated medical group or independent practice association; IPA), employment of physicians, use of electronic health records (EHRs), and exposure to financial incentives based on clinical quality and patient experience.
The queries about organizational model were distinct from questions that assessed groups’ use of patient experience reports (i.e., a group’s use of reports did not influence its organizational model classification). For this study, integrated medical groups were defined as those in which most decisions about policies, staffing, and resources were made by a group manager or management team. In IPAs, by contrast, management decisions were made predominantly by individual practice sites.
We refined the interview guide based on input from selected colleagues, members of the MHQP staff and physician advisory council, and experts who served on a national advisory panel to the project. After the first few interviews, we made minor adjustments to the question sequence to streamline interview administration.
We conducted the semi-structured interviews via telephone between June and November 2008. An initial roster of medical group leaders was provided by MHQP and all leaders were invited by mail to participate. Non-respondents received up to two additional mailed invitations and four telephone calls. Each respondent was a medical director, administrator, or manager who was considered a leader of the group and who would be knowledgeable about the group’s performance improvement initiatives (if any). Each interview was conducted by at least two investigators, with a project manager taking notes. The interviews were recorded and transcribed, and a research assistant verified interview transcripts for accuracy by comparing them to the original audio recordings. The study was approved by Human Subjects Committees at RAND and the Harvard School of Public Health.
We analyzed the data using a three-step approach. First, researchers coded participants’ comments pertaining to their groups' patient experience improvement activities. The coding scheme was developed inductively using a variation of content analysis21, which allowed information obtained during the interviews to be coded into coherent concepts based on participants’ descriptions as opposed to a pre-established set of categories. In all cases, responses were coded according to participants’ detailed description of improvement activities rather than the specific question to which the participant was responding. For each patient experience improvement initiative reported by group leaders, the coding scheme allowed identification of the corresponding performance improvement target. The text of each interview transcript was independently coded by one author (MWF) and a research assistant experienced in qualitative research. Coding discrepancies were resolved via conversation between coders or by consultation with a second author (GKS or ECS), and consensus was reached in all cases.
Second, based on these initial codes, each interview was assigned an aggregate code describing the group's overall level of engagement. We identified three levels of physician group engagement with patient experience survey reports. Level 1 group leaders did not recall receiving patient experience survey reports or made no use of patient experience reports beyond distributing them to members of the group. Level 2 group leaders described taking one or more actions based on patient experience reports but focused these efforts on physicians or practice sites that were low performers. Level 3 group leaders described one or more group-wide initiatives to improve patient experience (including most or all physicians, staff, and practice sites in the group, regardless of achieved performance). We calculated the frequency with which Level 3 groups targeted each domain of patient experience and implemented each type of improvement initiative that was identified by the coding process.
Third, after coding was complete, we compared the characteristics of physician groups across the three levels of engagement. These characteristics included each group’s number of physicians, organizational model, and exposure to performance-based financial incentives. Because cell counts in the comparison tables of categorical data did not allow valid application of chi-square test statistics, we instead used Fisher’s exact tests to evaluate the statistical significance of associations between levels of engagement and group characteristics. Data management and statistical analyses were conducted using SAS software, version 9.2 (SAS Institute, Cary, North Carolina).
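Fisher’s exact test avoids the chi-square approximation by computing exact hypergeometric probabilities of contingency tables with the observed margins. As a minimal sketch (illustrative only — the study used SAS, and its comparisons involved tables larger than 2×2), a two-sided exact test for a 2×2 table can be written from first principles:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Returns the p-value: the total hypergeometric probability of all tables
    with the same margins that are as or less probable than the observed one.
    """
    r1, r2 = a + b, c + d        # row totals
    c1 = a + c                   # first-column total
    n = r1 + r2                  # grand total

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell, margins fixed
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)   # feasible top-left cell values
    # Sum over every table no more probable than the observed table
    # (small tolerance guards against floating-point ties)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

For r×c tables, such as the three-level comparisons reported here, the same principle applies but the enumeration is more involved; statistical packages handle the general case (in SAS, via PROC FREQ with an EXACT statement).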
Seventy-two group leaders responded (62% response rate). The median number of physicians per group was 15 (range, 3 to 244). Non-responding groups had fewer physicians (median, 10 physicians per group; P=0.02) but did not differ from responding groups on any other observable characteristic (number of practice sites or rate of network affiliation). Approximately half of groups had only one practice site (Table 1). Sixty-four percent of the groups were organized as integrated medical groups, 22% were IPAs, and 14% had a mixed organizational model (e.g., an IPA that contained a large, more integrated group as well as a collection of independent small practice sites). Fifty-one percent employed the majority of their physicians, and 63% were affiliated with a large multi-group physician network. Only 28% of groups reported being eligible for payment based on measures of patient experience, while 87% reported eligibility for payments based on measures of clinical quality (e.g., Healthcare Effectiveness Data and Information Set measures; HEDIS).
Table 2 shows that only 17% of group leaders reported that they were unaware of patient experience reports or not using them to improve performance (level 1), and 22% used patient experience data to improve the performance of only low-scoring physicians or practice sites (level 2). The majority of physician group leaders (61%) reported using patient experience results to undertake group-wide improvement initiatives (level 3).
Group organizational model was statistically significantly associated with level of engagement. Integrated medical groups comprised approximately one-third of level 1 and level 2 groups but accounted for 84% of groups in level 3 (P<0.005). Sixty-nine percent of level 3 groups employed the majority of their physicians, compared to 25% of level 2 groups and 20% of level 1 groups (P<0.05). Level 3 groups were more likely to be network-affiliated (75% vs. 38% in level 2, P<0.05). While 36% of level 3 groups were eligible for payment based on measures of patient experience, none of the groups in level 1 had such incentives (P<0.05). In contrast, approximately 90% of groups in all three levels were exposed to financial incentives based on measures of clinical quality.
Among the 44 level 3 groups, the most common patient experience targets for group-wide performance improvement were access (57% of groups), communication with patients (48%), and customer service (45%) (Table 3). Physicians’ interactions with patients, patient education, and the continuity and coordination of care were less commonly reported as areas targeted for improvement.
The most common improvement initiatives reported by the leaders of level 3 groups were to “change office workflow” (e.g., changing patient check-in procedures; 70% of groups), “provide training for non-clinicians” (e.g., classes for administrative assistants; 57%), “conduct EHR-based interventions” (e.g., installing a new EHR; 50%), and “reassign staff responsibilities” (e.g., non-clinicians performing a greater share of routine patient assessment and documentation tasks; 45%) (Table 4). Less common improvement strategies included improving communication systems other than EHRs, hiring or firing staff, training clinicians, improving appointment scheduling processes, expanding office hours, sharing “best practices” within the group, and performing physical plant upgrades.
The most common improvement initiatives reported by group leaders varied across improvement targets. To improve patients’ access to care, groups most commonly undertook improvements in appointment scheduling processes and changed office workflow (Table 5). When groups attempted to improve communication with patients, they most commonly invested in communication systems other than EHRs, reassigned staff responsibilities, and changed office workflow. In order to improve customer service, groups most commonly provided training to both clinicians and non-clinicians. When groups focused on improving primary care physicians’ interactions with patients, many initiatives were employed; however, training for physicians was not the most frequently chosen strategy.
Public reporting on patient experience, which has previously focused on health plans and hospitals5, 6, is increasingly being applied to ambulatory physician groups and practice sites.7 Patient experience surveys are intended to produce performance results that physicians can use to identify specific targets for quality improvement, that patients can use to compare and select providers, and that payers can use as a basis for setting incentive payments in pay-for-performance programs.2, 8, 11, 22 Despite substantial investment in these efforts and the potential salience of this information to practicing physicians, little is known about how physician groups have responded.
We found that the majority of Massachusetts physician groups are engaged in efforts to improve the patient experience. Physician groups engaged in these efforts were more likely than others to have an integrated medical group organizational model (as opposed to an IPA or mixed model), to employ the majority of their physicians, and to have financial incentives based on patient experience.
However, a substantial number of physician group leaders reported no efforts to improve patient experience, and others focused their efforts exclusively on low-performing physicians or practice sites. These groups had less integrated organizational models, suggesting that improvement efforts may require a managerial infrastructure capable of initiating and directing such activities. This finding is consistent with national data suggesting that medical groups may be more likely than IPAs to participate in general quality improvement activities.23 In addition, groups not engaged in improvement activities were more likely to lack payment incentives based on patient experience. This association between group-level financial incentives and engagement in patient experience improvement echoes similar findings at the physician level, where performance incentives that emphasize patient experience have been associated with improved performance.24
The most common areas of patient experience targeted by groups’ improvement efforts were access (e.g., waiting times for an appointment), communication with patients (e.g., triage of incoming phone calls), and customer service (e.g., staff courtesy). Groups were less likely to focus on the performance of physicians and other clinicians or on patient educational activities to enable self-management. Even though continuity of care has been strongly associated with patient satisfaction25, 26, very few groups reported efforts intended to improve the continuity and coordination of care (despite wide performance variation and relatively poorer statewide performance in these domains).9
Though improvements in physician communication skills are thought to be crucial to the provision of patient-centered care, physician groups rarely pursued strategies to train physicians.27 Instead, groups most commonly changed processes for managing interactions between patients and nonclinical staff, trained non-clinicians, and invested in structural capabilities such as EHRs.
It is notable, however, that even when attempting to improve physician communication with patients, groups predominantly pursued this goal by reassigning staff responsibilities and adopting or enhancing EHRs. A reluctance to directly intervene with individual physicians may reflect physicians’ skepticism regarding physician-level patient experience results as well as a sensitivity to low morale among primary care physicians—two explanations that were volunteered by some group leaders.28, 29 Further, groups’ general focus on EHRs as a means of improving patient experience may reflect a previously observed association: in a national sample of physician groups, the use of patient feedback to analyze and improve services has been associated with increased adoption of health information technology.23
The study has limitations. Physician groups’ use of patient experience survey reports was based on self-report by group leaders. Despite our efforts to minimize response bias, leaders may have over-reported their efforts to improve patient experience. Our statewide sample of physician groups was too small to allow meaningful multivariable modeling, so we could not assess the independent effects of organizational variables on groups’ level of engagement with patient experience reports. The observational study design limits our ability to infer causation from associations between groups’ characteristics and activities. Groups that did not respond to our survey may have been less likely than respondents to engage in improvement activities.
The study was limited to Massachusetts, and some findings may not generalize to other states. Two national surveys of physician groups have found associations between external incentives to improve patient experience and greater use of processes that may improve quality.20, 30 Because statewide public reporting of patient experience scores may constitute an external incentive, improvement efforts among physician groups in Massachusetts may exceed those in states without public reporting.
We lacked the data necessary to assess whether groups targeted their improvement efforts towards patient experience domains on which they had low performance. The extent to which groups’ reported improvement efforts will improve their scores on patient experience surveys is unknown. Finally, public reporting on patient experience had recently begun at the time of our study, and groups’ responses may evolve over time. Describing this evolution is a planned area of future research.
In a state that has publicly reported patient experience survey results for more than 2 years, we found that many physician groups have engaged in efforts to improve their performance. Groups with more integrated organizational models were especially likely to engage in group-wide improvement efforts, and all groups facing financial incentives based on patient experience reported improvement efforts of some kind. While patient experience surveys assess both provider-level and organization-level aspects of care, groups have predominantly focused their improvement efforts on organizational factors. If policy makers wish to motivate changes in the behavior of individual providers, new incentives that target specific, provider-focused domains of patient experience may be necessary.
Contributors: The authors thank Elizabeth Siteman, BA, for assistance in performing the group leader interviews, and Elizabeth Steiner, MPP, for assistance in coding the interview transcripts. The authors also thank the members of the study national advisory committee for constructive feedback on preliminary study results.
Funder: This study was sponsored by the Commonwealth Fund. No funder had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.
Prior Presentation: Preliminary results from this study were presented at the Annual Research Meeting of AcademyHealth in Chicago, Illinois on June 29, 2009.
Conflict of Interest: None disclosed.