Our study was primarily intended to evaluate the effect of publicly reported physician quality data on PCP choice. We investigated this association in the real world among a population for whom data on individual provider quality were highly relevant—new health plan members who were required to select a PCP—and we controlled for possible selection effects by using a randomized encouragement design. Yet our study produced no evidence that physician performance reports influence consumer decision-making. Although these findings could be interpreted simply as a failure to demonstrate the utility of comparative quality data for provider choice, such a conclusion may not be warranted.
A growing body of research suggests that data accessibility and display issues play a vital role in determining whether consumers use quality data once they are aware of them.14, 17, 28–31
Our encouragement messages directed new enrollees to the introductory page of the HealthPlus provider directory. The steps required to get from that page to an actual data display are not obvious, which may have frustrated many study participants. In particular, users may complete many input fields (via pull-down menus) to customize their search for a provider, but the response options in those menus are not always clear, and neither is the distinction between required and optional fields. Despite these problems, which are not uncommon in online reports of provider quality,32
there is value in assessing the effect of HealthPlus’s physician performance report as it is one of very few examples of physician-level reporting to date.
A second issue is the amount of missing data on physician quality in the HealthPlus provider directory. Nearly two-thirds of the physicians listed in the directory at the time of our study did not have data on the performance measures reported there. The high rate of missing data may have eroded consumers’ confidence in the data as a whole, thereby limiting the effects of exposure to quality information on new members’ choice of a PCP. However, our study does provide evidence about how consumers behave in response to a report in which data are missing for a substantial proportion of health care options. Consumers’ responses to and use of physician ratings amid abundant missing data may differ from how they would respond to ratings when data are more complete.
The impact of missing information on consumer choice, especially when such information is missing in a large proportion of cases, is poorly understood and is an issue of significant policy concern that is only beginning to receive the research attention it deserves.33–34
Consumers facing such data may doubt the reliability or usefulness of the data as a whole and may ignore them even where they do exist. Consumers who do not have such a reaction must make the difficult inference about how to compare the quality of physicians with no data to the quality of physicians with data. They might, for example, assume that the absence of data for some providers means that those providers are of lower quality.33
Such an assumption would lead to an impact of reporting that is substantially different from the impact of a report with more complete data. In our study, a majority (55%) of participants who reported consulting the provider directory before choosing a PCP picked a provider with missing data. This suggests that many participants may have ignored the quality data when making their decision.
The most common reason for missing data in the HealthPlus provider directory is that, for a large number of providers in the health plan’s network, HealthPlus members constitute only a small fraction of the total patient pool (and thus provide too little data from which to estimate reliable performance scores). Among physicians in HealthPlus’s “core service area,” in which the plan has long-standing membership and established physician networks, the problem of missing data is much less severe: 68% of physicians in the plan’s core service area had non-missing quality data in the online directory at the time of our study vs. 38% of all commercial primary care physicians. However, because physicians in the core service area are not reported separately from physicians in outlying areas, users of the online directory—including those looking for a physician in the core service area—must contend with large amounts of missing data when trying to compare the quality of PCPs.
The problem of missing data is likely to be common to all but the largest health plans, and it suggests an important challenge for publicly reporting quality data at the individual physician level. Small sample sizes are a general problem in public reporting,35
and there is broad agreement that it is better to indicate missing data in a report than to include unreliable data based on small samples. To minimize the problem of having too few patients per provider for reliable reporting, it may be necessary to report individual provider-level data at a level broader than the single health plan (e.g., statewide or via all-payer claims databases).
Another possible explanation for why participants who visited the online directory were not swayed by the performance data is that they did not fully understand the measures of physician quality. In general, consumers have difficulty understanding comparative quality data.36–37
Though little is known about how consumers understand roll-up scores such as those reported in the HealthPlus physician performance report, there is reason to believe that consumers may find these scores particularly difficult. A roll-up combines multiple measures that are not necessarily related conceptually into a single score. In principle, roll-up scores should make it easier to arrive at an overall evaluation by reducing the number of dimensions that people need to consider; in practice, however, consumers may have little understanding of the dimensions being rolled up and thus little motivation to use roll-ups for decision-making.
Although our study does not provide evidence that publicly reported data on physician quality affect the quality of PCPs selected by consumers, it did identify a low-cost means of encouraging new plan members to access these data. This is an important finding given that one of the most important challenges in public reporting (at any level) has been promoting awareness of these data.17, 38–40
This finding also suggests the value of strategies directing consumers to physician quality data at a point when they are most likely to be interested in seeing it.41–42
Among those who responded to our survey, our encouragement manipulation seems to have done more than just draw them to the physician performance data—it may have led them to choose PCPs with higher ratings on member satisfaction. That encouragement appears to have influenced the quality of providers chosen by survey respondents even when they did not access the data on physician quality suggests that encouragement had an effect that was not dependent on viewing the intended source of quality information. It is plausible that the encouragement intervention enhanced the salience of physician quality, which might then have activated diffuse information-seeking behavior, such as asking friends, family members, or colleagues for recommendations, or consulting for-profit websites (e.g., vitals.com, Angie’s List) that present information on individual providers. If so, then it is also plausible that the experiences underlying those recommendations would be predictive of patient satisfaction but not of clinical measures of quality. The encouragement effect was strongest among regular Internet users, who would have had greater access to information on doctors beyond what was presented in the HealthPlus directory. More research is needed to understand the mechanism or mechanisms underlying this effect.
That the effect of encouragement on the member satisfaction (CAHPS) scores of selected PCPs was not evident among the entire study population suggests that respondents to our survey may have been a select subgroup of new enrollees. While a 51% response rate is similar to rates observed in surveys of outpatient and inpatient experiences,43–44
and response rates tend to be only weakly associated with nonresponse bias in well-conducted probability samples,45
the possibility of nonresponse bias remains. We observed some evidence of demographic non-representativeness, with women and older people being more likely to return a survey; these are standard patterns in survey response.46–47
We also observed that survey respondents were more likely than non-respondents to select a PCP within one year of enrolling in the health plan, suggesting that survey respondents may be a more activated group of health care consumers, in greater need of health care, or perhaps more conscientious generally. Thus, caution is warranted in making inferences about prevalence based on our sample data. Even so, our randomized encouragement design should protect us against bias in comparing across study conditions, and equal response rates across conditions suggest that there was no differential response by condition.
Our study had other important limitations. First, nearly a third of new plan members assigned to a condition of our study did not select a PCP within a year of enrollment. Though interesting in its own right and not indicative of selection bias per se, this reduced the sample size available to test for effects of encouragement and exposure on quality of PCP selection. Thus, some caution is merited in interpreting non-significant results. Second, our study sample was necessarily limited to those new enrollees who did not pre-designate a PCP on their enrollment form. Many of those who pre-designate a PCP are switching plans and already have a doctor with whom they are satisfied and who is available through the new plan. Our study excludes this subset of new enrollees, though how this may affect our results is unclear. Third, information about exposure to the physician quality report was limited to participants’ self-reports about whether and when they accessed the provider directory. To the extent that participants in the encouragement condition felt compelled to report accessing the directory even if they had not, our results may be biased toward finding an effect of encouragement on this outcome. Other studies that use a similar design should consider less obtrusive ways to collect this information. Finally, it is important to note that our model of PCP choice does not account for many of the reasons why people choose a particular PCP, including word-of-mouth reputation, availability, and location of the provider’s office.48
However, our randomized encouragement design should protect us against bias that might result from the exclusion of such factors.