Med Care. Author manuscript; available in PMC 2013 November 1.
PMCID: PMC3480665
NIHMSID: NIHMS402789

A Field Experiment on the Impact of Physician-Level Performance Data on Consumers’ Choice of Physician

Abstract

Background

In 2008, HealthPlus of Michigan introduced an online primary care provider (PCP) report that displays clinical quality data and patients’ ratings of their experiences with PCPs on a public website.

Design and Procedure

A randomized encouragement design was used to examine the impact of HealthPlus’s online physician quality report on new plan members’ choice of a PCP. Half of the new adult plan members in 2009–2010 who were required to select a PCP (N = 1347) were randomized to receive an added encouragement to use the report: a one-page letter, signed by the health plan’s chief medical officer, emphasizing the importance of the online report, followed by a brief reminder phone call. We examined use of the report and the quality of PCPs selected by participants.

Results

Twenty-eight percent of participants in the encouragement condition versus 22% in the control condition looked at the online report prior to selecting a PCP. Although participants in the encouragement condition selected PCPs with higher patient experience ratings than did control participants, this difference was not explained by their increased likelihood of accessing the online report.

Conclusions

Health plan members can be encouraged successfully to access physician-level quality data using an inexpensive letter and automated phone call. However, a large proportion of missing data in HealthPlus’s online report may have limited the influence of the physician-quality report on consumer choice.

Keywords: CAHPS, choice of PCP, encouragement design, physician-level quality data, physician quality report

INTRODUCTION

Public reporting of information on the comparative performance of healthcare providers has become increasingly common.1 One of the main objectives of providing this information to the public is to support consumers in choosing among providers.2–4 Until recently, most public reports on healthcare quality have focused on hospitals, health plans, and large physician groups,1 and few public data have been available to inform decision-making about individual physicians. This is not due to a lack of demand; studies consistently find that consumers are most interested in individual physician-level quality information.5–7

Momentum toward public reporting of data on the quality of individual physicians is starting to build. For example, the Centers for Medicare and Medicaid Services (CMS) is soliciting physician quality data as part of its Physician Compare Initiative,8 with plans to report these data online. The Physician Compare website will present quality of care and patient experience data to help Medicare and non-Medicare patients and their families assess the quality of providers. Consumer Reports has recently begun providing quality information on individual physicians for a fee through its ConsumerReportsHealth.org® website.9 Two medium-sized health plans have recently begun publishing web-based quality reports on individual physicians. The report issued by one of these plans, HealthPlus of Michigan, is the focus of our study.

Considering the significant interest in public reporting of physician-level performance data and the potential stakes for consumers and providers, it is critical to know if publicly displaying these data has the intended consequences on consumer choice. There are, however, few published evaluations of physician-level quality reports. A recent review of the evidence for publicly reported performance data as a means to affect decisions about individual providers identified seven studies on this issue.10 Together, these studies provide inconsistent evidence for an association between public reporting and selection of individual providers. It is difficult to generalize from these studies, however, as they all focused on a single reporting system—the New York State Cardiac Surgery Reporting System—which publishes clinical outcome data (e.g., mortality rates) for individual cardiac surgeons in New York. There are substantially more published evaluations of reporting at other levels (i.e., health plans and hospitals). Likewise, these studies provide little evidence regarding the effectiveness of quality information in driving consumers to select higher quality health plans and hospitals.3, 10–11 One cannot assume, however, that results for health plan and hospital choice will necessarily match those for individual provider choice, as the context of these decisions can be very different.11

In interpreting the results of studies that examine the impact of publicly reported quality data on provider choice, researchers must consider both whether the research design allows one to make causal inferences about the effects of reporting and how closely the decisions being studied resemble those made in the real world. Some researchers have used experimental designs to isolate the effects of quality information on consumer decision-making.12–14 Although experimental studies offer a high level of control over the many other factors that may influence provider choice, they are limited in their external validity. Because participants in these studies make only hypothetical choices, they may lack the incentives, emotions, and engagement of consumers making such choices in the real world. A handful of studies have used real-world “field” experiments to investigate the impact of healthcare quality data on consumer decision-making.15–17 In these studies, individuals needing to select a new health plan are randomly assigned to receive or not receive data on the set of plans under consideration. Consumers’ plan choices are then analyzed as a function of experimental condition. Although studies like these have enhanced external validity over studies conducted in the laboratory, they are impractical in situations in which exposure to quality data cannot be randomly assigned, e.g., when the data are already publicly available.

There has been significant progress in methods to rigorously evaluate treatment effects when participants cannot be randomly assigned.18 With nonrandom exposure to treatment, people who use healthcare quality information may be predisposed to selecting higher quality providers than those who do not seek out or use such information. One way around the potential self-selection bias is a randomized trial of an intervention that increases exposure to the quality data among some participants. In such a randomized encouragement design,19–21 a randomly selected group receives extra encouragement or incentives to undertake a treatment (in our case, to access and use physician-level performance data). Under certain assumptions that we discuss below,18–19 this design may enable unbiased estimates of the effect of comparative quality data on consumer choice of providers.

HealthPlus of Michigan is an independent, non-profit plan that contracts with over 900 primary care physicians (PCPs) to serve approximately 72,000 adult, commercial HMO members in central/east Michigan. In the current study, we used a randomized encouragement design to investigate the impact of an online physician performance report on new HMO members’ choice of HealthPlus PCPs. Specifically, we offered a randomly selected half of all adult new members who were required to select a PCP an encouragement beyond the standard information provided to all new members by the health plan to utilize the online physician performance report, and then we tracked use of the report and the quality of PCPs selected by all participants.

In keeping with the vast majority of current public reporting efforts,3 HealthPlus of Michigan disseminates its physician performance data via the Internet (on its publicly accessible website). Dissemination of comparative healthcare quality data via the Internet is an attractive option in that it is relatively inexpensive and can allow consumers to customize data displays to fit their preferences and concerns.22 At the same time, information provided via the Internet will not be seen by those who are unaware of its existence, and accessing and using this information may prove challenging for those who do not use computers regularly or do not have friends or family who are willing and able to assist them. Thus, in our study, we also examined participants’ Internet savvy as a potential moderating factor in the relationship of interest.

METHODS

Design

This study used a randomized encouragement design to investigate the influence of physician performance data on new health plan members’ choice of a PCP. By randomizing encouragement and tracking both exposure to the treatment and outcomes for all those who do and do not receive the encouragement, it is possible to obtain unbiased estimates of the effects of the encouragement and—under certain assumptions discussed below—of the treatment itself. The effect of encouragement on the outcome can be analyzed directly as randomized. If the encouragement’s only effect on the outcome is via increased uptake of the treatment, then the effect of the treatment can be estimated using the Wald estimator.23–24
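As a minimal sketch of the standard estimator cited above,23–24 with notation introduced here only for illustration (Z = random encouragement assignment, D = uptake of the treatment, Y = the outcome), the Wald estimator is the ratio of the encouragement’s effect on the outcome to its effect on treatment uptake:

```latex
% Wald (instrumental-variable) estimator of the treatment effect.
% Z = encouragement assignment, D = treatment uptake (e.g., directory use),
% Y = outcome (e.g., quality of the selected PCP).
\hat{\beta}_{\text{Wald}} =
  \frac{E[\,Y \mid Z = 1\,] - E[\,Y \mid Z = 0\,]}
       {E[\,D \mid Z = 1\,] - E[\,D \mid Z = 0\,]}
```

The numerator is the intention-to-treat effect of encouragement on the outcome; the denominator is the increase in treatment uptake attributable to encouragement.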

This variation of an experimental design is useful when it is impractical or unethical to exert full control over which study participants are exposed to a treatment. For example, randomized encouragement designs have been used to estimate the effectiveness of an influenza vaccine in reducing morbidity in high-risk adults25 and to study the effectiveness of physician adherence to treatment guidelines in improving the survival rate of patients with heart failure.26 In our case, random assignment to treatment (exposure to physician performance data) was both impractical (the physician performance report is publicly available to all who wish to access it) and unethical (it would have meant withholding from a subset of new health plan members data that could facilitate selection of a high-quality PCP).

Participants and Procedure

During its new-member enrollment period in the fall-winter of 2009–10, HealthPlus of Michigan informed all new commercial HMO members of the availability of physician performance data on its website. This notification was provided in the information packet mailed to all new-member households. Information about the online physician performance data was also disseminated more broadly through a press release issued by HealthPlus in the fall of 2009.

Each year during new-member enrollment, HealthPlus mails a notice to all new commercial HMO members who have not designated a PCP upon enrollment that they need to do so as soon as possible. When these mail contacts were made between October 2009 and January 2010, a randomly selected half of new members were given extra encouragement to access the online physician performance report and use the information it contained to select a PCP. (New enrollees were randomized in batches, via urn randomization, on a weekly basis throughout this 4-month enrollment period.) This extra encouragement was provided in part by a one-page letter, signed by the chief medical officer at the health plan, that explained how to access and use the online report and emphasized the importance of doing so. In particular, new members assigned to the encouragement condition were told that “choosing a primary care doctor who provides quality care and meets your specific needs is important to your health.” Furthermore, they were told that the online physician performance reports contain “important information about the quality of the doctors you are considering, including what patients like you say about their experiences with doctors and their staff.” Approximately one week after the mail contact, new members assigned to the encouragement condition additionally received a brief (54-second) automated phone call reminding them about the online physician performance data and further reinforcing their potential usefulness.

Approximately two weeks after the new-member enrollment period ended, in February 2010, all 1347 new commercial HMO members assigned to either the encouragement or control condition were mailed a survey that asked about their exposure to and use of the physician performance report in selecting their PCP. Reminder postcards were mailed a week later, and replacement surveys were mailed one month later to those who did not respond to the survey sent initially. In addition to information on participants’ use of the physician performance data, the mail survey elicited demographic data and information on participants’ use of the Internet. HealthPlus provided data on the quality of each PCP (as reported in the online provider directory) selected by new enrollees assigned to a condition of our study regardless of whether or not they completed a survey. Although the majority of plan members designated a PCP within the first few months of enrollment, some had not done so even after a full year. Those who did not select a PCP within a year of enrollment were excluded from our analyses of the effects of encouragement on the quality of PCP selected. All procedures for this study were approved by RAND’s Human Subjects Protection Committee.

Nature of the Treatment

Data on physician performance are embedded within HealthPlus of Michigan’s online provider directory.27 This directory shows, in a grid format, individual provider names, practice addresses and phone numbers, HealthPlus product affiliations (e.g., HMO or PPO), specialty, and scores on “overall clinical quality” and “member satisfaction.” In the online directory, a hyperlink is provided to information about the data underlying the clinical quality and member satisfaction scores. Users who click on this link are told that the overall member satisfaction score summarizes (through equal weighting) three composite measures (quality of doctor communication; courteous and helpful office staff; and timeliness of appointments, care, and information) and an overall rating of the doctor, all derived from the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Survey (CG-CAHPS). Further information is presented on each of these underlying measures, including their rating scales and the individual items in each of the composite measures. Likewise, users are told that the overall clinical quality score summarizes (through equal weighting) Healthcare Effectiveness Data and Information Set (HEDIS) indicators for asthma care; breast, cervical, and colorectal cancer screening; and diabetes management. Information is also provided on the benchmarks used to establish physician rankings on the overall member satisfaction and clinical quality scores that are displayed.
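Read literally, the equal weighting described above amounts to a simple average of the component measures. The following is only an illustrative reading in our own notation; the plan’s exact rescaling of the components is not specified here:

```latex
% Illustrative reading of the equal-weighted roll-up described above (our notation;
% the plan's exact component scaling is not specified in the text).
\text{MemberSatisfaction} = \tfrac{1}{4}\left(\text{Communication} + \text{OfficeStaff}
  + \text{Timeliness} + \text{OverallDoctorRating}\right)
```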

Users of the directory are able to restrict the number of providers displayed by limiting them to a certain geographic location, specialty, or product affiliation. The website also offers the capability to sort providers on any of the variables in the display, including the two physician performance scores. Each of the physician performance scores is presented using a row of one to five symbols, where the symbol is HealthPlus’s corporate logo, a variation of a “plus” symbol. A legend at the top of the data table explains that 5 “plus” symbols is the highest score a doctor can achieve and that 1 “plus” symbol is the lowest. The legend also explains several missing data indicators that may appear in the columns of the grid that present the performance scores. “N/A” indicates that data were available from too few plan members to calculate a reliable score for the physician. “New to network” indicates that data are not yet available for the physician as s/he only recently joined the health plan’s network of physicians. “Under review” indicates that data have only recently been collected on the provider and are not yet available for display.

Measures

Quality of provider chosen

Our main outcome variable is the quality of PCP chosen by plan members, as indicated by the provider’s member satisfaction and overall clinical quality scores displayed in the provider directory. As noted above, scores on each of these measures ranged from 1 to 5 (displayed as 1–5 “plus” symbols in the provider directory), with higher scores indicating higher quality. Table 1 shows the distribution of scores on these two measures among all PCPs listed in the provider directory at the time of our study. Scores on these two measures were uncorrelated, r (460) = 0.04, so we analyzed them separately.

TABLE 1
Distribution of Member Satisfaction (CAHPS) and Clinical Quality (HEDIS) Scores among PCPs Listed in the HealthPlus Provider Directory during Our Study*

Use of the physician performance data to select a PCP

As part of the mail survey, we asked participants to indicate whether they had accessed the online provider directory, and if so, whether they had accessed the directory prior to or after selecting a PCP.

Internet use

Participants reported how often they connected to the Internet (never to at least once/day) and whether in the past 12 months they used the Internet to (a) make travel plans, (b) find information on a hobby or favorite activity, (c) find information on health insurance, (d) find information on a possible purchase, (e) find information on doctors or hospitals, (f) find information on a health condition, and (g) find information about a medical treatment or procedure. An indicator variable classified participants as regular Internet users (78% of participants) if they reported accessing the Internet at least weekly, or at least monthly and having used it for at least four of the seven stated purposes in the past year. Those who are not regular Internet users may not be as affected by the encouragement to use Internet-based information, so the overall analyses are followed by analyses restricted to regular Internet users.
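As a minimal sketch of this classification rule (the function and field names below are hypothetical, introduced only for illustration):

```python
# Illustrative sketch (hypothetical field names): classify a survey respondent
# as a "regular Internet user" per the rule described above.
def is_regular_internet_user(internet_freq: str, purposes_used: int) -> bool:
    """internet_freq: one of 'never', 'less_than_monthly', 'monthly', 'weekly', 'daily'.
    purposes_used: count (0-7) of the seven stated purposes used in the past 12 months."""
    if internet_freq in ("weekly", "daily"):
        # Accessed the Internet at least weekly.
        return True
    if internet_freq == "monthly" and purposes_used >= 4:
        # Accessed at least monthly AND used it for at least four of the seven purposes.
        return True
    return False

# Example: a monthly user who used the Internet for 5 of the 7 purposes qualifies.
assert is_regular_internet_user("monthly", 5) is True
assert is_regular_internet_user("less_than_monthly", 7) is False
```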

Analysis Strategy

We began by comparing survey respondents and non-respondents to investigate the possibility of selection bias. Information on the latter group was limited to data on age and gender from the health plan’s administrative records. In addition, we used data from HealthPlus’s 2010 CAHPS survey to compare the racial/ethnic composition of the health plan’s total commercial HMO membership with that of our survey respondents (who provided data on race/ethnicity as part of this study’s mail survey). Finally, we compared survey respondents and non-respondents on whether or not they selected a PCP within a year of enrolling in the health plan, information that was also available through the health plan’s administrative records.

Next, we conducted a simple bivariate analysis to test whether participants in the encouragement condition were more likely than those in the control condition to access the provider directory prior to selecting a PCP. A key assumption underlying our study is that encouragement makes plan members more likely to access the online provider directory than they would have in the absence of encouragement. This assumption was testable only among survey respondents as the survey was our sole source of information about plan members’ use of the online provider directory.

In contrast, data on the quality of PCP selected by new enrollees were available for all plan members assigned to a condition of our study regardless of whether they returned a completed survey. Thus, we began our analysis of PCP selection by examining, among the entire study population, whether PCPs selected by plan members in the encouragement condition were of higher quality than PCPs selected by plan members in the control condition. Next, we used ordinary least-squares (OLS) regression to predict, separately, the member satisfaction and clinical quality scores of selected PCPs from encouragement condition, directory use (whether a participant reported looking at the online provider directory prior to selecting a PCP), and their interaction. In this model, which could only be tested among new enrollees who returned a survey, the encouragement coefficient estimates the effect of encouragement among those not accessing the directory, the directory use coefficient estimates the effect of directory use among those in the control condition, and the interaction coefficient estimates the amount by which the treatment (directory use) effect on quality is greater among those who were encouraged than among those who were not.
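An illustrative specification of this model (notation ours), for participant i:

```latex
% Illustrative specification of the OLS model described above, for participant i.
% Enc_i = 1 if assigned to the encouragement condition; Use_i = 1 if the participant
% reported viewing the online provider directory before selecting a PCP.
\text{Quality}_i = \beta_0 + \beta_1\,\text{Enc}_i + \beta_2\,\text{Use}_i
                 + \beta_3\,(\text{Enc}_i \times \text{Use}_i) + \varepsilon_i
```

Here β1, β2, and β3 correspond, respectively, to the encouragement, directory-use, and interaction coefficients described above.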

Wald estimation of the effects of treatment (directory use) on physician quality requires that all effects of encouragement on the outcome (physician quality) occur through increased uptake of the treatment. First, there must be no effect of encouragement on physician quality in the absence of directory use, and thus the encouragement coefficient in the model must not differ significantly from zero. Second, the effect of treatment (directory use) on quality must not differ as a function of encouragement, and thus the interaction term must not differ significantly from zero. If either of these assumptions is not met, there is evidence of direct effects of encouragement on quality that are not “fully mediated” by increased directory use.
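A minimal sketch of how these checks and the Wald estimate might be computed, assuming a pandas DataFrame with hypothetical columns encouraged (0/1), used_directory (0/1), and cahps_score (the variable names are ours, not the study’s):

```python
# Illustrative sketch: fit the interaction model described above and, if the
# encouragement main effect and the interaction are both near zero, form the
# Wald (instrumental-variable) estimate of the effect of directory use.
import pandas as pd
import statsmodels.formula.api as smf

def wald_analysis(df: pd.DataFrame) -> float:
    # Step 1: interaction model. The 'encouraged' and 'encouraged:used_directory'
    # coefficients should not differ significantly from zero for Wald estimation to apply.
    ols = smf.ols("cahps_score ~ encouraged * used_directory", data=df).fit()
    print(ols.summary())

    # Step 2: Wald estimate = intention-to-treat effect on the outcome divided by
    # the effect of encouragement on treatment uptake (directory use).
    itt_outcome = (df.loc[df.encouraged == 1, "cahps_score"].mean()
                   - df.loc[df.encouraged == 0, "cahps_score"].mean())
    itt_uptake = (df.loc[df.encouraged == 1, "used_directory"].mean()
                  - df.loc[df.encouraged == 0, "used_directory"].mean())
    return itt_outcome / itt_uptake
```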

RESULTS

Analysis of Survey Nonresponse

Of the 1,347 new plan members assigned to a study condition, 693 (337 control, 356 encouragement) returned a completed survey, a 51% participation rate (see Figure 1). The likelihood that a new plan member returned a completed survey did not differ by study condition, χ2 (1) = 1.13, p = 0.29. Compared with survey non-respondents, a greater proportion of survey respondents were female (57% vs. 50%, χ2 [1] = 6.88, p = 0.009) and age 55 years or older (23% vs. 11%, χ2 [1] = 35.14, p < 0.001), and a smaller proportion were age 34 or younger (27% vs. 44%, χ2 [1] = 44.45, p < 0.001). Compared with HealthPlus’s entire commercial HMO membership, similar proportions of survey respondents were non-Hispanic white (87% of survey respondents vs. 88% of total membership), non-Hispanic African American (6% vs. 7%), Hispanic (3% vs. 3%), and other race/ethnicity (4% vs. 2%). Of the 1,347 new plan members assigned to a study condition, 923 (69%) selected a PCP within a year of enrollment (see Figure 1). The likelihood of selecting a PCP within a year of enrollment differed between survey respondents and non-respondents: 81% of survey respondents selected a PCP within a year of enrollment vs. 56% of survey non-respondents (χ2 [1] = 87.95, p < 0.001), a point that we address in the Discussion.

Figure 1
Number (percentage) of new plan enrollees assigned to the control and encouragement conditions who returned a completed survey and the number (percentage) of survey respondents and non-respondents in each condition who selected a primary care provider ...

Effect of Encouragement on Exposure to the Physician Performance Data

Of the 693 survey respondents, 25 (4%) did not provide data on whether they accessed the provider directory prior to selecting a PCP. Missingness on this variable was unrelated to study condition, χ2 (1) = 0.22, p = 0.64. In the control condition, 22% of participants reported looking at the provider directory prior to selecting a PCP. In the encouragement condition, 28% of participants reported doing so, a 27% relative increase. A chi-square test of the difference between these two proportions produced a marginally significant result, χ2 (1) = 2.88, p (two-tailed) = 0.09, p (one-tailed) = 0.045.

Effect of Encouragement on Quality of Provider Selected: All New Plan Enrollees

At the time of our study, 38% of all commercial PCPs listed in the online provider directory had nonmissing data on at least one of the two physician quality measures (member satisfaction and clinical quality). Of the 923 new plan enrollees who selected a PCP, 517 (56%) selected a PCP for whom data on quality appeared in the online provider directory. Study condition was unrelated to whether a participant selected a PCP, χ2 (1) = 0.03, p = 0.84, and whether a participant chose a PCP for whom quality data appeared in the provider directory, χ2 (1) = 1.22, p = 0.27. Among new plan enrollees who picked a PCP for whom data on the member satisfaction (CAHPS) measure appeared in the provider directory, study condition was unrelated to the member satisfaction score of selected PCPs (M encouragement = 3.24 [SD = 0.89], M control = 3.22 [SD = 0.89], t (461) = 0.28, p = 0.78). Similarly, among new plan enrollees who picked a PCP for whom data on the clinical quality (HEDIS) measure appeared in the provider directory, study condition was unrelated to the clinical quality score of selected PCPs (M encouragement = 3.76 [SD = 0.87], M control = 3.78 [SD = 0.95], t (512) = −0.26, p = 0.80).

Effect of Encouragement on Quality of Provider Selected: Survey Respondents Only

The regression results shown in Table 2 are based on survey respondents (n = 315) who chose a PCP for whom performance data were reported in the online provider directory. As this table shows, among this subset of participants, there was no evidence of an effect of encouragement or exposure to physician performance data on the overall clinical quality scores of selected PCPs. There was also no evidence that participants who viewed physician performance data selected a PCP scoring higher on the measure of member satisfaction (p = 0.49). There was, however, a significant effect of encouragement (p = 0.04) on the member satisfaction scores of selected PCPs. In particular, survey respondents in the encouragement condition selected PCPs with higher scores on member satisfaction (M = 3.31, SD = 0.89) than did survey respondents in the control condition (M = 3.17, SD = 0.86; Cohen’s d = 0.16). As the non-significant (p = 0.83) interaction indicates, there is no evidence that the effect of encouragement differed between those who did versus those who did not access the physician performance data prior to selecting a PCP. In other words, among survey respondents, encouragement appears to have affected the quality of PCP selected, but that effect did not result from the hypothesized mechanisms of increased use of the physician directory among those encouraged and selection of higher quality PCPs among all users of the directory.

TABLE 2
Regression Models Predicting PCP Performance Scores from Study Condition and Exposure to Physician Performance Data (All Survey Respondents)

Table 3 shows these same regression models for the subset of survey respondents categorized as regular Internet users. This table shows the same pattern of results as seen among the entire sample of survey respondents. That is, there was no evidence of an effect of encouragement or exposure to the physician performance data on the overall clinical quality of PCPs selected. There was, however, a significant (p = 0.002) effect of encouragement on the member satisfaction scores of selected PCPs. This effect, which was more than twice as large as that observed among the entire sample of survey respondents (Cohen’s d = 0.36), did not result from the hypothesized mechanisms of increased use of the physician directory among those encouraged and selection of higher quality PCPs among all users of the directory.

TABLE 3
Regression Models Predicting PCP Performance Scores from Study Condition and Exposure to Physician Performance Data (Survey Respondents Who Were Regular Internet Users)

DISCUSSION

Our study was primarily intended to evaluate the effect of publicly reported physician quality data on PCP choice. We investigated this association in the real world among a population for whom data on individual provider quality were highly relevant—new health plan members who were required to select a PCP—and we controlled for possible selection effects by using a randomized encouragement design. Yet, our study produced no evidence that the physician performance report affected consumer decision-making. Although these findings could be read as evidence that comparative quality data have little utility for provider choice, such a conclusion may not be warranted.

A growing body of research suggests that data accessibility and display issues play a vital role in determining whether consumers use quality data once they are aware of them.14, 17, 28–31 Our encouragement messages directed new enrollees to the introductory page of the HealthPlus provider directory. The steps necessary to get from that page to an actual data display are not obvious, which may have frustrated many study participants. In particular, there are many input fields that users may complete (via pull-down menus) as a way of customizing their search for a provider. The response options in the pull-down menus are not always clear. Some of the fields are required and some are not; again, the distinction is not always clear. Despite these problems, which are not uncommon in online reports of provider quality,32 there is value in assessing the effect of HealthPlus’s physician performance report as it is one of very few examples of physician-level reporting to date.

A second issue is the amount of missing data on physician quality in the HealthPlus provider directory. Nearly two-thirds of the physicians listed in the directory at the time of our study did not have data on the performance measures reported there. The high rate of missing data may have eroded consumers’ confidence in the data as a whole, thereby limiting the effects of exposure to quality information on new members’ choice of a PCP. However, our study does provide evidence about how consumers behave in response to a report in which data are missing for a substantial proportion of health care options. Consumers’ responses to and use of physician ratings in the context of abundant missing data may differ from how they would respond to ratings in the context of more complete data.

The impact of missing information on consumer choice, especially when such information is missing in a large proportion of cases, is poorly understood and is an issue of significant policy concern that is just beginning to get the research attention that it deserves.33–34 Consumers facing such data may doubt the reliability or usefulness of the data as a whole and may ignore them even where they do exist. Consumers who do not have such a reaction must make the difficult inference about how to compare the quality of physicians with no data to the quality of physicians with data. They might, for example, assume that the absence of data for some providers means that those providers are of lower quality.33 Such an assumption would lead to an impact of reporting that is substantially different from the impact of a report with more complete data. In our study, a majority (55%) of participants who reported consulting the provider directory before choosing a PCP picked a provider with missing data. This suggests that many participants may have ignored the quality data when making their decision.

The most common reason for missing data in the HealthPlus provider directory is that there are a large number of providers in the health plan’s network for whom HealthPlus members constitute only a small fraction of their total pool of patients (and thus provide too little data from which to estimate reliable performance scores). Among physicians in HealthPlus’s “core service area,” in which the plan has long-standing membership and established physician networks, the problem of missing data is much less severe: 68% of physicians in the plan’s core service area had non-missing quality data in the online directory at the time of our study vs. 38% of all commercial primary care physicians. However, because physicians in the core service area are not reported separately from physicians in outlying areas, users of the online directory—including those looking for a physician in the core service area—must contend with large amounts of missing data when trying to compare the quality of PCPs.

The problem of missing data is likely to be common to all but the largest health plans, and it suggests an important challenge for publicly reporting quality data at the individual physician level. Small sample sizes are a problem in general in public reporting,35 and there is general agreement that it is better to indicate missing data in a report than to include unreliable data based on small sample sizes. To minimize the problem of having too few patients per provider for reliable reporting, it may be necessary to report individual provider-level data at a level other than the health plan (e.g., statewide or via all-payer claims databases).

Another possible explanation for why participants who visited the online directory were not swayed by the performance data is that they did not fully understand the measures of physician quality. In general, consumers have difficulty understanding comparative quality data.36–37 Though little is known about how consumers understand roll-up scores such as those reported in the HealthPlus physician performance report, there is reason to believe that consumers may have particular difficulty understanding these scores. A roll-up combines multiple measures that are not necessarily related conceptually into a single score. In principle, roll-up scores should make it easier to arrive at an overall evaluation by reducing the number of dimensions that people need to consider; however, consumers may have little understanding of the dimensions that are rolled up and thus little motivation to use roll-ups for decision-making.

Although our study does not provide evidence that publicly reported data on physician quality affect the quality of PCPs selected by consumers, it did identify a low-cost means of encouraging new plan members to access these data. This is an important finding given that one of the most important challenges in public reporting (at any level) has been promoting awareness of these data.17, 38–40 This finding also suggests the value of strategies directing consumers to physician quality data at a point when they are most likely to be interested in seeing it.41–42

Among those who responded to our survey, our encouragement manipulation seems to have done more than just draw them to the physician performance data—it may have led them to choose PCPs with higher ratings on member satisfaction. That encouragement appears to have influenced the quality of providers chosen by survey respondents, even when they did not access the data on physician quality, suggests that encouragement had an effect that was not dependent on viewing the intended source of quality information. It is plausible that the encouragement intervention enhanced the salience of physician quality, which might then have activated diffuse information-seeking behavior, such as asking friends, family members, or colleagues for recommendations, or consulting for-profit websites (e.g., vitals.com, Angie’s List) that present information on individual providers. If so, then it is also plausible that the experiences underlying those recommendations would be predictive of patient satisfaction but not of clinical measures of quality. The encouragement effect was strongest among regular Internet users, who would have had greater access to information on doctors besides what was presented in the HealthPlus directory. More research is needed to understand the mechanism or mechanisms underlying this effect.

That the effect of encouragement on the member satisfaction (CAHPS) scores of selected PCPs was not evident among the entire study population suggests that respondents to our survey may have been a select subgroup of new enrollees. While a 51% response rate is similar to those observed in surveys of outpatient and inpatient experiences,43–44 and response rates tend to be only weakly associated with nonresponse bias in well-conducted probability samples,45 the possibility of nonresponse bias remains. We observed some evidence of demographic non-representativeness, with women and older people being more likely to return a survey. These are standard patterns in survey response.46–47 We also observed that survey respondents were more likely than non-respondents to select a PCP within one year of enrolling in the health plan, suggesting that survey respondents may be a more activated group of health care consumers, in greater need of health care, or perhaps more conscientious generally. Thus, caution is warranted in making inferences about prevalence based on our sample data. Even so, our randomized encouragement design should protect us against bias in comparing across study conditions, and equal response rates across conditions suggest that there was no differential response by condition.

Our study had other important limitations. First, nearly a third of new plan members assigned to a condition of our study did not select a PCP within a year of enrollment. Though an interesting finding in its own right and not indicative of selection bias per se, it reduced the sample size available to test for effects of encouragement and exposure on quality of PCP selection. Thus, some caution is merited in interpreting non-significant results. Second, our study sample was necessarily limited to those new enrollees who did not pre-designate a PCP on their enrollment form. Many of those who pre-designate a PCP are switching plans and already have a doctor with whom they are satisfied and who is available through the new plan. Our study excludes this subset of new enrollees, though how this may affect our results is unclear. Third, information about exposure to the physician quality report was limited to participants’ self-reports about whether and when they accessed the provider directory. To the extent that participants in the encouragement condition felt compelled to report accessing the directory even if they had not, our results may be biased toward finding an effect of encouragement on this outcome. Other studies that use a similar design should consider more covert ways to collect this information. Finally, it is important to note that our model of PCP choice does not account for many of the reasons why people choose a particular PCP, including word-of-mouth reputation, availability, and location of the provider’s office.48 However, our randomized encouragement design should protect us against bias that might result from the exclusion of such factors.

Conclusion

A number of conditions must be met for healthcare quality reports to be effective. Consumers must know about the data, have access to them, and be in a state of readiness to make a decision.39, 42 The data must be understandable, seen as trustworthy, and relevant to their choice options.48 If any one of these conditions is not met, exposure to performance data is likely to have little or no (detectable) effect, even if it has an effect under optimal conditions. In this study, we were able to provide consumers with access to information at the time they needed it to make a decision, but the performance data covered only a fraction of their choice options. This highlights challenges in finding ways to minimize missing data and provide appropriate information about why data are missing so that consumers’ confidence in the data is not undermined.33 Our study identifies encouragement as a possible “salience intervention” that may have value independent of consumer reporting, and which, under ideal circumstances, may synergistically enhance the effects of reporting.

Acknowledgments

Disclosure of Funding: This research was supported by the Robert Wood Johnson Foundation (Grant #65447) and the Agency for Healthcare Research and Quality (U18 HS016980). Ron D. Hays was also supported in part by grants from the National Institute on Aging (P30-AG021684) and the National Institute on Minority Health and Health Disparities (2P20MD000182).

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Contributor Information

Steven C. Martino, RAND Corporation, Pittsburgh, PA.

David E. Kanouse, RAND Corporation, Santa Monica, CA.

Marc N. Elliott, RAND Corporation, Santa Monica, CA.

Stephanie S. Teleki, California HealthCare Foundation, Oakland, CA.

Ron D. Hays, Department of Medicine, University of California, Los Angeles, CA.

REFERENCES

1. O’Neil S, Schurrer J, Simon S. Environmental Scan of Public Reporting Programs and Analysis. Cambridge, MA: Mathematica Policy Research; 2010.
2. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care. 2003;41:I30–I38. [PubMed]
3. Harris KM, Buntin MB. Research Synthesis Report No. 14. Princeton, NJ: The Robert Wood Johnson Foundation; 2008. Choosing a Health Care Provider: The Role of Quality Information.
4. Marshall MN, Shekelle PG, Leatherman S, et al. The public release of performance data: What do we expect to gain? A review of the evidence. JAMA. 2000;283:1866–1874. [PubMed]
5. Hibbard JH, Jewett JJ. What type of quality information do consumers want in a health care report card? Med Care Res Rev. 1996;53:28–47. [PubMed]
6. Isaacs SL. Consumer’s information needs: Results of a national survey. Health Aff (Millwood) 1996;15:31–41. [PubMed]
7. Longo DR, Everet KD. Health care consumer reports: an evaluation of consumer perspectives. J Health Care Finance. 2003;30:65–71. [PubMed]
10. Fung CH, Lim YW, Mattke S, et al. Systematic review: The evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148:111–123. [PubMed]
11. Faber M, Bosch M, Wollersheim H, et al. Public reporting in health care: How do consumers use quality-of-care information? A systematic review. Med Care. 2009;47:1–8. [PubMed]
12. Schoenbaum M, Spranca M, Elliott M, et al. Health plan choice and information about out-of-pocket costs: an experimental analysis. Inquiry. 2001;38:35–48. [PubMed]
13. Spranca M, Kanouse DE, Elliott M, et al. Do consumer reports of health plan quality affect health plan selection? Health Serv Res. 2000;35:933–947. [PMC free article] [PubMed]
14. Uhrig JD, Short PF. Testing the effect of quality reports on the health plan choices of Medicare beneficiaries. Inquiry. 2002;39:355–371. [PubMed]
15. Farley DO, Short PF, Elliott MN, et al. Effects of CAHPS health plan performance information on plan choices by New Jersey Medicaid beneficiaries. Health Serv Res. 2002;37:985–1007. [PMC free article] [PubMed]
16. Farley DO, Elliott MN, Short PF, et al. Effect of CAHPS performance information on health plan choices by Iowa Medicaid beneficiaries. Med Care Res Rev. 2002;59:319–336. [PubMed]
17. Hibbard JH, Berkman N, McCormack LA, et al. The impact of a CAHPS report on employee knowledge, beliefs, and decisions. Med Care Res Rev. 2002;59:104–116. [PubMed]
18. West SG, Duan N, Pequegnat W, et al. Alternatives to the randomized controlled trial. Am J Public Health. 2008;98:1359–1366. [PubMed]
19. Bradlow ET. Encouragement designs: an approach to self-selected samples in an experimental design. Market Lett. 1998;9:383–391.
20. Hirano K, Imbens GW, Rubin DB, et al. Causal inference in encouragement designs with covariates. Biostatistics. 2000;1:69–88. [PubMed]
21. Holland PW. Causal inference, path analysis, and recursive structural equations models. Sociol Methodol. 1988;18:449–484.
22. Marshall M, McLoughlin V. How do patients use information on providers? BMJ. 2010;341:1255–1257. [PubMed]
23. Imbens GW, Angrist JD. Identification and estimation of local average treatment effects. Econometrica. 1994;62:467–475.
24. Imbens GW, Rubin DB. Bayesian inference for causal effects in randomized experiments with noncompliance. Ann Stat. 1997;25:305–327.
25. Zhou XH, Li SM. ITT analysis of randomized encouragement design studies with missing data. Statist Med. 2006;25:2737–2761. [PubMed]
26. Subramanian U, Fihn SD, Weinberger M. A controlled trial of including symptom data in computer-based care suggestions for managing patients with chronic heart failure. Am J Med. 2004;116:375–384. [PubMed]
28. Harris-Kojetin LD, McCormack LA, Jael EF, et al. Creating more effective health plan quality reports for consumers: Lessons learned from a synthesis of qualitative testing. Health Serv Res. 2001;36:447–476. [PMC free article] [PubMed]
29. Hibbard JH, Greene J, Daniel D. What Is Quality Anyway? Performance Reports That Clearly Communicate to Consumers the Meaning of Quality of Care. Med Care Res Rev. 2010;67:275–293. [PubMed]
30. Robinowitz DL, Dudley RA. Public reporting of provider performance: Can its impact be made greater? Annu Rev Public Health. 2006;27:517–536. [PubMed]
31. Uhrig JD, Harris-Kojetin LD, Bann C, et al. Do content and format affect older consumers’ use of comparative information in a Medicare health plan choice? Results from a controlled experiment. Med Care Res Rev. 2006;63:701–718. [PubMed]
33. American Institutes for Research. How to present missing data clearly. Princeton, NJ: The Robert Wood Johnson Foundation; 2012 Apr.
34. American Institutes for Research. Three reasons for missing data. Princeton, NJ: The Robert Wood Johnson Foundation; 2012 Apr.
35. Elliott MN, Zaslavsky AM, Cleary PD. Are finite population corrections appropriate when profiling institutions? Health Serv Outcomes Res Methodol. 2006;6:153–156.
36. Hibbard JH, Slovic P, Peters E, et al. Strategies for reporting health plan performance information to consumers: Evidence from controlled studies. Health Serv Res. 2002;37:291–313. [PMC free article] [PubMed]
37. Marshall MN, Romano PS, Davies HTO. How do we maximize the impact of the public reporting of quality of care? Int J Qual Health Care. 2004;16:I57–I63. [PubMed]
38. Schwartz LM, Woloshin S, Birkmeyer JD. How do elderly patients decide where to go for major surgery? Telephone interview survey. BMJ. 2005;331:821–824. [PMC free article] [PubMed]
39. Sofaer S. Commentary on “What is Quality Anyway?” Med Care Res Rev. 2010;67:297–300. [PubMed]
40. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293:1239–1244. [PubMed]
41. Shaller D, Kanouse D, Schlesinger M. Meeting Consumers Halfway: Context-Driven Strategies for Engaging Consumers to Use Public Reports on Health Care Providers. A paper commissioned for the AHRQ National Summit on Public Reporting; 2011 Mar.
42. Fanjiang G, von Glahn T, Chang H, et al. Providing patients web-based data to inform physician choice: If you build it, will they come? J Gen Intern Med. 2007;22:1463–1466. [PMC free article] [PubMed]
43. Goldstein E, Elliott MN, Lehrman WG, et al. Racial/ethnic differences in patients' perceptions of inpatient care using the HCAHPS survey. Med Care Res Rev. 2010;67:74–92. [PubMed]
44. Roland M, Elliott M, Lyratzopoulos G, et al. Reliability of patient responses in pay for performance schemes: analysis of national General Practitioner Patient Survey data in England. BMJ. 2009;339:1756–1833. [PMC free article] [PubMed]
45. Groves R, Peytcheva E. The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opin Quart. 2008;72:167–189.
46. Elliott MN, Edwards C, Angeles J, et al. Patterns of unit and item nonresponse in the CAHPS Hospital Survey. Health Serv Res. 2005;40:2096–2119. [PMC free article] [PubMed]
47. Klein DJ, Elliott MN, Haviland AM, et al. Understanding nonresponse to the 2007 Medicare CAHPS survey. Gerontologist. 2011;51:843–855. [PubMed]
48. Abraham J, Sick B, Anderson J, et al. Selecting a provider: What factors influence patients’ decision making? J Healthc Manag. 2011;56:99–114. [PubMed]
49. Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med. 2008;148:160–161. [PubMed]