David E. Kanouse, PhD, RAND Corporation, 1776 Main Street, Santa Monica, CA 90401, Phone: (310) 393-0411 x6963, Fax: (310) 260-8175, kanouse@rand.org
Marc N. Elliott, PhD, RAND Corporation, 1776 Main Street, Santa Monica, CA 90401, Phone: (310) 393-0411 x7931, Fax: (310) 260-8160, elliott@rand.org
Stephanie S. Teleki, PhD, California HealthCare Foundation, 1438 Webster Street, Suite 400, Oakland, CA 94612, Phone: (510) 587-3172, Fax: (510) 587-0172, steleki@chcf.org
Ron D. Hays, PhD, Department of Medicine, UCLA, 911 Broxton Avenue, Los Angeles, CA 90095-1736, Phone: (310) 794-2294, Fax: (310) 794-0732, drhays@ucla.edu
In 2008, HealthPlus of Michigan introduced an online primary care provider (PCP) report that displays clinical quality data and patients’ ratings of their experiences with PCPs on a public website.
A randomized encouragement design was used to examine the impact of HealthPlus’s online physician quality report on new plan members’ choice of a PCP. This study evaluated the impact of an added encouragement to utilize the report by randomizing half of new adult plan members in 2009–2010 who were required to select a PCP (N = 1347) to receive a one-page letter signed by the health plan’s chief medical officer emphasizing the importance of the online report and a brief phone call reminder. We examined use of the report and the quality of PCPs selected by participants.
Twenty-eight percent of participants in the encouragement condition versus 22% in the control condition looked at the online report prior to selecting a PCP. Although participants in the encouragement condition selected PCPs with higher patient experience ratings than did control participants, this difference was not explained by their increased likelihood of accessing the online report.
Health plan members can be successfully encouraged to access physician-level quality data using an inexpensive letter and automated phone call. However, the large proportion of missing data in HealthPlus’s online report may have limited its influence on consumer choice of PCPs.
Public reporting of information on the comparative performance of healthcare providers has become increasingly common.1 One of the main objectives of providing this information to the public is to support consumers in choosing among providers.2–4 Until recently, most public reports on healthcare quality have focused on hospitals, health plans, and large physician groups,1 and few public data have been available to inform decision-making about individual physicians. This is not due to a lack of demand; studies consistently find that consumers are most interested in individual physician-level quality information.5–7
Momentum toward public reporting of data on the quality of individual physicians is starting to build. For example, the Centers for Medicare and Medicaid Services (CMS) is soliciting physician quality data as part of its Physician Compare Initiative,8 with plans to report these data online. The Physician Compare website will present quality of care and patient experience data to help Medicare and non-Medicare patients and their families assess the quality of providers. Consumer Reports has recently begun providing quality information on individual physicians for a fee through its ConsumerReportsHealth.org® website.9 Two medium-sized health plans have recently begun publishing web-based quality reports on individual physicians. The report issued by one of these plans, HealthPlus of Michigan, is the focus of our study.
Considering the significant interest in public reporting of physician-level performance data and the potential stakes for consumers and providers, it is critical to know if publicly displaying these data has the intended consequences on consumer choice. There are, however, few published evaluations of physician-level quality reports. A recent review of the evidence for publicly reported performance data as a means to affect decisions about individual providers identified seven studies on this issue.10 Together, these studies provide inconsistent evidence for an association between public reporting and selection of individual providers. It is difficult to generalize from these studies, however, as they all focused on a single reporting system—the New York State Cardiac Surgery Reporting System—which publishes clinical outcome data (e.g., mortality rates) for individual cardiac surgeons in New York. There are substantially more published evaluations of reporting at other levels (i.e., health plans and hospitals). Likewise, these studies provide little evidence regarding the effectiveness of quality information in driving consumers to select higher quality health plans and hospitals.3, 10–11 One cannot assume, however, that results for health plan and hospital choice will necessarily match those for individual provider choice, as the context of these decisions can be very different.11
In interpreting the results of studies that examine the impact of publicly reported quality data on provider choice, researchers must consider both whether the research design allows one to make causal inferences about the effects of reporting and how closely the decisions being studied resemble those made in the real world. Some researchers have used experimental designs to isolate the effects of quality information on consumer decision-making.12–14 Although experimental studies offer a high level of control over the many other factors that may influence provider choice, they are limited in their external validity. Because participants in these studies make only hypothetical choices, they may lack the incentives, emotions, and engagement of consumers making such choices in the real world. A handful of studies have used real-world “field” experiments to investigate the impact of healthcare quality data on consumer decision-making.15–17 In these studies, individuals needing to select a new health plan are randomly assigned to receive or not receive data on the set of plans under consideration. Consumers’ plan choices are then analyzed as a function of experimental condition. Although studies like these have enhanced external validity over studies conducted in the laboratory, they are impractical in situations in which exposure to quality data cannot be randomly assigned, e.g., when the data are already publicly available.
There has been significant progress in methods to rigorously evaluate treatment effects when participants cannot be randomly assigned.18 With nonrandom exposure to treatment, people who use healthcare quality information may be predisposed to selecting higher quality providers than those who do not seek out or use such information. One way around the potential self-selection bias is a randomized trial of an intervention that increases exposure to the quality data among some participants. In such a randomized encouragement design,19–21 a randomly selected group receives extra encouragement or incentives to undertake a treatment (in our case, to access and use physician-level performance data). Under certain assumptions that we discuss below,18–19 this design may enable unbiased estimates of the effect of comparative quality data on consumer choice of providers.
HealthPlus of Michigan is an independent, non-profit plan that contracts with over 900 primary care physicians (PCPs) to serve approximately 72,000 adult, commercial HMO members in central/east Michigan. In the current study, we used a randomized encouragement design to investigate the impact of an online physician performance report on new HMO members’ choice of HealthPlus PCPs. Specifically, we offered a randomly selected half of all adult new members who were required to select a PCP an encouragement beyond the standard information provided to all new members by the health plan to utilize the online physician performance report, and then we tracked use of the report and the quality of PCPs selected by all participants.
In keeping with the vast majority of current public reporting efforts,3 HealthPlus of Michigan disseminates its physician performance data via the Internet (on its publicly accessible website). Dissemination of comparative healthcare quality data via the Internet is an attractive option in that it is relatively inexpensive and can allow consumers to customize data displays to fit their preferences and concerns.22 At the same time, information provided via the Internet will not be seen by those who are unaware of its existence, and accessing and using this information may prove challenging for those who do not use computers regularly or do not have friends or family who are willing and able to assist them. Thus, in our study, we also examined participants’ Internet savvy as a potential moderating factor in the relationship of interest.
This study used a randomized encouragement design to investigate the influence of physician performance data on new health plan members’ choice of a PCP. By randomizing encouragement and tracking both exposure to the treatment and outcomes for all those who do and do not receive the encouragement, it is possible to obtain unbiased estimates of the effects of the encouragement and—under certain assumptions discussed below—of the treatment itself. The effect of encouragement on the outcome can be analyzed directly as randomized. If the encouragement’s only effect on the outcome is via increased uptake of the treatment, then the effect of the treatment can be estimated using the Wald estimator.23–24
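The Wald estimator referenced above can be written compactly. The notation here (Z for encouragement assignment, D for treatment uptake, Y for the outcome) is standard instrumental-variables shorthand, not the article’s own:

```latex
\hat{\beta}_{\text{Wald}} \;=\;
\frac{\mathbb{E}[Y \mid Z = 1] \;-\; \mathbb{E}[Y \mid Z = 0]}
     {\mathbb{E}[D \mid Z = 1] \;-\; \mathbb{E}[D \mid Z = 0]}
```

The numerator is the intent-to-treat effect of encouragement on the outcome (quality of the selected PCP), and the denominator is the effect of encouragement on uptake (accessing the report); the ratio recovers the treatment effect only under the exclusion restriction that encouragement affects the outcome solely through uptake.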
This variation of an experimental design is useful when it is impractical or unethical to exert full control over which study participants are exposed to a treatment. For example, randomized encouragement designs have been used to estimate the effectiveness of an influenza vaccine on reducing morbidity in high risk adults25 and to study the effectiveness of physician adherence to treatment guidelines on improving the survival rate of patients with heart failure.26 In our case, random assignment to treatment (exposure to physician performance data) was both impractical (the physician performance report is publicly available to all who wish to access it) and unethical (withholding from a subset of new health plan members data that could facilitate selection of a high-quality PCP).
During its new-member enrollment period in the fall-winter of 2009–10, HealthPlus of Michigan informed all new commercial HMO members of the availability of physician performance data on its website. This notification was provided in the information packet mailed to all new-member households. Information about the online physician performance data was also disseminated more broadly through a press release issued by HealthPlus in the fall of 2009.
Each year during new-member enrollment, HealthPlus mails a notice to all new commercial HMO members who have not designated a PCP upon enrollment that they need to do so as soon as possible. When these mail contacts were made between October 2009 and January 2010, a randomly selected half of new members were given extra encouragement to access the online physician performance report and use the information it contained to select a PCP. (New enrollees were randomized in batches, via urn randomization, on a weekly basis throughout this 4-month enrollment period.) This extra encouragement was provided in part by a one-page letter— signed by the chief medical officer at the health plan—that explained how to access and use the online report and emphasized the importance of doing so. In particular, new members assigned to the encouragement condition were told that, “choosing a primary care doctor who provides quality care and meets your specific needs is important to your health.” Furthermore, they were told that the online physician performance reports contain “important information about the quality of the doctors you are considering, including what patients like you say about their experiences with doctors and their staff.” Approximately one week after the mail contact, new members assigned to the encouragement condition additionally received a brief (54 seconds), automated phone call reminding them about the online physician performance data and further reinforcing their potential usefulness.
Approximately two weeks after the new-member enrollment period ended, in February 2010, all 1347 new commercial HMO members assigned to either the encouragement or control condition were mailed a survey that asked about their exposure to and use of the physician performance report in selecting their PCP. Reminder postcards were mailed a week later, and replacement surveys were mailed one month later to those who did not respond to the survey sent initially. In addition to information on participants’ use of the physician performance data, the mail survey elicited demographic data and information on participants’ use of the Internet. HealthPlus provided data on the quality of each PCP (as reported in the online provider directory) selected by new enrollees assigned to a condition of our study regardless of whether or not they completed a survey. Although the majority of plan members designated a PCP within the first few months of enrollment, some had not done so even after a full year. Those who did not select a PCP within a year of enrollment were excluded from our analyses of the effects of encouragement on the quality of PCP selected. All procedures for this study were approved by RAND’s Human Subjects Protection Committee.
Data on physician performance are embedded within HealthPlus of Michigan’s online provider directory.27 This directory shows, in a grid format, individual provider names, practice addresses and phone numbers, HealthPlus product affiliations (e.g., HMO or PPO), specialty, and scores on “overall clinical quality” and “member satisfaction.” In the online directory, a hyperlink is provided to information about the data underlying the clinical quality and member satisfaction scores. Users who click on this link are told that the overall member satisfaction score summarizes (through equal weighting) three composite measures (quality of doctor communication; courteous and helpful office staff; and timeliness of appointments, care, and information) and an overall rating of the doctor, all derived from the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Survey (CG-CAHPS). Further information is presented on each of these underlying measures, including their rating scales and the individual items in each of the composite measures. Likewise, users are told that the overall clinical quality score summarizes (through equal weighting) Healthcare Effectiveness Data and Information Set (HEDIS) indicators for asthma care; breast, cervical, and colorectal cancer screening; and diabetes management. Information is also provided on the benchmarks used to establish physician rankings on the overall member satisfaction and clinical quality scores that are displayed.
Users of the directory are able to restrict the number of providers displayed by limiting them to a certain geographic location, specialty, or product affiliation. The website also offers the capability to sort providers on any of the variables in the display, including the two physician performance scores. Each of the physician performance scores is presented using a row of one to five symbols, where the symbol is HealthPlus’s corporate logo, a variation of a “plus” symbol. A legend at the top of the data table explains that 5 “plus” symbols is the highest score a doctor can achieve and that 1 “plus” symbol is the lowest. The legend also explains several missing data indicators that may appear in the columns of the grid that present the performance scores. “N/A” indicates that data were available from too few plan members to calculate a reliable score for the physician. “New to network” indicates that data are not yet available for the physician as s/he only recently joined the health plan’s network of physicians. “Under review” indicates that data have only recently been collected on the provider and are not yet available for display.
Our main outcome variable is the quality of PCP chosen by plan members, as indicated by the provider’s member satisfaction and overall clinical quality scores displayed in the provider directory. As noted above, scores on each of these measures ranged from 1 to 5 (displayed as 1–5 “plus” symbols in the provider directory), with higher scores indicating higher quality. Table 1 shows the distribution of scores on these two measures among all PCPs listed in the provider directory at the time of our study. Scores on these two measures were uncorrelated, r (460) = 0.04, so we analyzed them separately.
As part of the mail survey, we asked participants to indicate whether they had accessed the online provider directory, and if so, whether they had accessed the directory prior to or after selecting a PCP.
Participants reported how often they connected to the Internet (never to at least once/day) and whether in the past 12 months they used the Internet to (a) make travel plans, (b) find information on a hobby or favorite activity, (c) find information on health insurance, (d) find information on a possible purchase, (e) find information on doctors or hospitals, (f) find information on a health condition, and (g) find information about a medical treatment or procedure. Participants who said they accessed the Internet at least weekly, or who said they accessed the Internet at least monthly and had used it for at least four of the seven stated purposes in the past year (78% of participants), were classified as regular Internet users (coded as an indicator variable). Those who are not regular Internet users may not be as affected by the encouragement to use Internet-based information, so the overall analyses are followed by analyses restricted to regular Internet users.
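The classification rule above can be sketched as a small function. The frequency labels used here are illustrative stand-ins, not the survey’s actual response options:

```python
def is_regular_internet_user(frequency, num_purposes):
    """Classify a respondent as a "regular Internet user" per the
    study's rule: at least weekly access, OR at least monthly access
    plus use of the Internet for >= 4 of the 7 surveyed purposes in
    the past 12 months. Frequency labels ("daily", "weekly",
    "monthly", "never") are illustrative, not the survey's wording.
    """
    if frequency in ("daily", "weekly"):
        return True
    return frequency == "monthly" and num_purposes >= 4
```

Note that a monthly user with heavy purpose-specific use qualifies, while even a seven-purpose user who reports never connecting does not.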
We began by comparing survey respondents and non-respondents to investigate the possibility of selection bias. Information on the latter group was limited to data on age and gender from the health plan’s administrative records. In addition, we used data from HealthPlus’s 2010 CAHPS survey to compare the racial/ethnic composition of the health plan’s total commercial HMO membership with that of our survey respondents (who provided data on race/ethnicity as part of this study’s mail survey). Finally, we compared survey respondents and non-respondents on whether or not they selected a PCP within a year of enrolling in the health plan, information that was also available through the health plan’s administrative records.
Next, we conducted a simple bivariate analysis to test whether participants in the encouragement condition were more likely than those in the control condition to access the provider directory prior to selecting a PCP. A key assumption underlying our study is that encouragement makes plan members more likely to access the online provider directory than they would have in the absence of encouragement. This assumption was testable only among survey respondents as the survey was our sole source of information about plan members’ use of the online provider directory.
In contrast, data on the quality of PCP selected by new enrollees was available for all plan members assigned to a condition of our study regardless of whether they returned a completed survey. Thus, we began our analysis of PCP selection by looking among the entire study population to see whether PCPs selected by plan members in the encouragement condition were of higher quality than PCPs selected by plan members in the control condition. Next, we used ordinary least-squares (OLS) regression to predict, separately, the member satisfaction and clinical quality scores of selected PCPs from encouragement condition, directory use (whether a participant reported looking at the online provider directory prior to selecting a PCP), and their interaction. In this model, which could only be tested among new enrollees who returned a survey, the encouragement coefficient estimates the effect of encouragement among those not accessing the directory, the directory use coefficient estimates the effect of directory use among those in the control condition, and the interaction coefficient estimates the amount by which the treatment (directory use) effect on quality is greater among those who were encouraged than among those who were not.
Wald estimation of the effects of treatment (directory use) on physician quality requires that all effects of encouragement on the outcome (physician quality) occur through increased uptake of the treatment. First, there must be no effect of encouragement on physician quality in the absence of directory use, and thus the encouragement coefficient in the model must not differ significantly from zero. Second, the effect of treatment (directory use) on quality must not differ as a function of encouragement, and thus the interaction term must not differ significantly from zero. If either of these assumptions is not met, there is evidence of direct effects of encouragement on quality that are not “fully mediated” by increased directory use.
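Under those two assumptions, the Wald estimate reduces to a ratio of arm-level differences. A minimal sketch follows; the function name and the numbers in the usage example are ours for illustration, not figures from the article:

```python
def wald_estimate(y_enc, y_ctl, d_enc, d_ctl):
    """Wald (instrumental-variable) estimate of the treatment effect.

    y_enc, y_ctl: mean outcome (e.g., PCP quality score) in the
        encouragement and control arms
    d_enc, d_ctl: treatment uptake rate (directory use) in each arm

    Valid only if encouragement affects the outcome solely through
    increased uptake of the treatment (the exclusion restriction).
    """
    itt = y_enc - y_ctl      # intent-to-treat effect on the outcome
    uptake = d_enc - d_ctl   # effect of encouragement on uptake
    if uptake == 0:
        raise ValueError("encouragement did not change uptake")
    return itt / uptake
```

For example, with uptake rates of 28% vs. 22% and a hypothetical 0.1-point difference in mean outcome, the implied treatment effect would be 0.1 / 0.06, roughly 1.67 points, illustrating how a small intent-to-treat difference scales up when uptake differences are small.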
Of the 1,347 new plan members assigned to a study condition, 693 (337 control, 356 encouragement) returned a completed survey, a 51% participation rate (see Figure 1). The likelihood that a new plan member returned a completed survey did not differ by study condition, χ2 (1) = 1.13, p = 0.29. Compared with survey non-respondents, a greater proportion of survey respondents were female (57% vs. 50%, χ2 (1) = 6.88, p = 0.009) and age 55 years or older (23% vs. 11%, χ2 (1) = 35.14, p < 0.001), and a smaller proportion were age 34 or younger (27% vs. 44%, χ2 (1) = 44.45, p < 0.001). Compared with HealthPlus’s entire commercial HMO membership, similar proportions of survey respondents were non-Hispanic white (87% of survey respondents vs. 88% of total membership), non-Hispanic African American (6% vs. 7%), Hispanic (3% vs. 3%), and other race/ethnicity (4% vs. 2%). Of the 1,347 new plan members assigned to a study condition, 923 (69%) selected a PCP within a year of enrollment (see Figure 1). The likelihood of selecting a PCP within a year of enrollment differed between survey respondents and non-respondents: 81% of survey respondents selected a PCP within a year of enrollment vs. 56% of survey non-respondents (χ2 (1) = 87.95, p < 0.001), a point that we address in the Discussion.
Of the 693 survey respondents, 25 (4%) did not provide data on whether they accessed the provider directory prior to selecting a PCP. Missingness on this variable was unrelated to study condition, χ2 (1) = 0.22, p = 0.64. In the control condition, 22% of participants reported looking at the provider directory prior to selecting a PCP. In the encouragement condition, 28% of participants reported doing so, a 27% relative increase. A chi-square test of the difference between these two proportions produced a marginally significant result, χ2 (1) = 2.88, p (two-tailed) = 0.09, p (one-tailed) = 0.045.
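For a 2×2 table such as condition-by-directory-use, the Pearson chi-square statistic reduces to a closed form. The sketch below uses illustrative cell counts, not the study’s actual counts:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for a 2x2 table laid out as:

                      used directory   did not use
        encouraged          a               b
        control             c               d
    """
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Illustrative table (not the study's data):
# chi_square_2x2(50, 150, 30, 170) -> 6.25
```

The resulting statistic is compared against the chi-square distribution with one degree of freedom to obtain the p-value.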
At the time of our study, 38% of all commercial PCPs listed in the online provider directory had nonmissing data on at least one of these two measures of physician quality. Of the 923 new plan enrollees who selected a PCP, 517 (56%) selected a PCP for whom data on quality appeared in the online provider directory. Study condition was unrelated to whether a participant selected a PCP, χ2 (1) = 0.03, p = 0.84, and whether a participant chose a PCP for whom quality data appeared in the provider directory, χ2 (1) = 1.22, p = 0.27. Among new plan enrollees who picked a PCP for whom data on the member satisfaction (CAHPS) measure appeared in the provider directory, study condition was unrelated to the member satisfaction score of selected PCPs (M encouragement = 3.24 [SD = 0.89], M control = 3.22 [SD = 0.89], t (461) = 0.28, p = 0.78). Similarly, among new plan enrollees who picked a PCP for whom data on the clinical quality (HEDIS) measure appeared in the provider directory, study condition was unrelated to the clinical quality score of selected PCPs (M encouragement = 3.76 [SD = 0.87], M control = 3.78 [SD = 0.95], t (512) = −0.26, p = 0.80).
The regression results shown in Table 2 are based on survey respondents (n = 315) who chose a PCP for whom performance data were reported in the online provider directory. As this table shows, among this subset of participants, there was no evidence of an effect of encouragement or exposure to physician performance data on the overall clinical quality scores of selected PCPs. There was also no evidence that participants who viewed physician performance data selected a PCP scoring higher on the measure of member satisfaction (p = 0.49). There was, however, a significant effect of encouragement (p = 0.04) on the member satisfaction scores of selected PCPs. In particular, survey respondents in the encouragement condition selected PCPs with higher scores on member satisfaction (M = 3.31, SD = 0.89) than did survey respondents in the control condition (M = 3.17, SD = 0.86; Cohen’s d = 0.16). As the non-significant (p = 0.83) interaction indicates, there is no evidence that the effect of encouragement differed between those who did versus those who did not access the physician performance data prior to selecting a PCP. In other words, among survey respondents, encouragement appears to have affected the quality of PCP selected, but that effect did not result from the hypothesized mechanisms of increased use of the physician directory among those encouraged and selection of higher quality PCPs among all users of the directory.
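As a consistency check, the reported effect size can be approximately reproduced from the means and standard deviations above. This sketch pools the SDs with equal weights, an approximation since per-arm sample sizes are not reported alongside these statistics:

```python
import math

def cohens_d(m1, m2, sd1, sd2):
    """Cohen's d with an equal-weight pooled SD (an approximation;
    exact pooling would weight each variance by its group's n - 1)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Member satisfaction scores from the text:
# encouragement M = 3.31, SD = 0.89; control M = 3.17, SD = 0.86
# cohens_d(3.31, 3.17, 0.89, 0.86) is approximately 0.16
```

The value of roughly 0.16 matches the Cohen’s d reported in the text, a small effect by conventional benchmarks.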
Table 3 shows these same regression models for the subset of survey respondents categorized as regular Internet users. This table shows the same pattern of results as seen among the entire sample of survey respondents. That is, there was no evidence of an effect of encouragement or exposure to the physician performance data on the overall clinical quality of PCPs selected. There was, however, a significant (p = .002) effect of encouragement on the member satisfaction scores of selected PCPs. This effect, which was more than twice as large as that observed among the entire sample of survey respondents (Cohen’s d = 0.36), did not result from the hypothesized mechanisms of increased use of the physician directory among those encouraged and selection of higher quality PCPs among all users of the directory.
Our study was primarily intended to evaluate the effect of publicly reported physician quality data on PCP choice. We investigated this association in the real world among a population for whom data on individual provider quality were highly relevant—new health plan members who were required to select a PCP—and we controlled for possible selection effects by using a randomized encouragement design. Yet, our study produced no evidence for the effectiveness of physician performance reports on consumer decision-making. Although these findings could be interpreted simply as a failure to demonstrate the utility of comparative quality data for provider choice, such a conclusion may not be warranted.
A growing body of research suggests that data accessibility and display issues play a vital role in determining whether consumers use quality data once they are aware of them.14, 17, 28–31 Our encouragement messages directed new enrollees to the introductory page of the HealthPlus provider directory. The steps necessary to get from that page to an actual data display are not obvious, which may have frustrated many study participants. In particular, there are many input fields that users may complete (via pull-down menus) as a way of customizing their search for a provider. The response options in the pull-down menus are not always clear. Some of the fields are required and some are not; again, the distinction is not always clear. Despite these problems, which are not uncommon in online reports of provider quality,32 there is value in assessing the effect of HealthPlus’s physician performance report as it is one of very few examples of physician-level reporting to date.
A second issue is the amount of missing data on physician quality in the HealthPlus provider directory. Nearly two-thirds of the physicians listed in the directory at the time of our study did not have data on the performance measures reported there. The high rate of missing data may have eroded consumers’ confidence in the data as a whole, thereby limiting the effects of exposure to quality information on new members’ choice of a PCP. However, our study does provide evidence about how consumers behave in response to a report in which data are missing for a substantial proportion of health care options. Consumers’ responses to and use of physician ratings in the context of abundant missing data may differ from how they would respond to ratings in the context of more complete data.
The impact of missing information on consumer choice, especially when such information is missing in a large proportion of cases, is poorly understood and is an issue of significant policy concern that is just beginning to get the research attention that it deserves.33–34 Consumers facing such data may doubt the reliability or usefulness of the data as a whole and may ignore them even where they do exist. Consumers who do not have such a reaction must make the difficult inference about how to compare the quality of physicians with no data to the quality of physicians with data. They might, for example, assume that the absence of data for some providers means that those providers are of lower quality.33 Such an assumption would lead to an impact of reporting that is substantially different from the impact of a report with more complete data. In our study, a majority (55%) of participants who reported consulting the provider directory before choosing a PCP picked a provider with missing data. This suggests that many participants may have ignored the quality data when making their decision.
The most common reason for missing data in the HealthPlus provider directory is that there are a large number of providers in the health plan’s network for whom HealthPlus members constitute only a small fraction of their total pool of patients (and thus provide too little data from which to estimate reliable performance scores). Among physicians in HealthPlus’s “core service area” in which the plan has long-standing membership and established physician networks, the problem of missing data is much less severe: 68% of physicians in the plan’s core service area had non-missing quality data in the online directory at the time of our study vs. 38% of all commercial primary care physicians. However, because physicians in the core service area are not reported separately from physicians in outlying areas, users of the online directory—including those looking for a physician in the core service area—must contend with large amounts of missing data when trying to compare the quality of PCPs.
The problem of missing data is likely to be common to all but the largest health plans, and suggests an important challenge for publicly reporting quality data at the individual physician level. Small sample sizes are a problem in general in public reporting,35 and there is general agreement that it is better to indicate missing data in a report than to include unreliable data based on small sample sizes. To minimize the problem of having too few patients per provider for reliable reporting, it may be necessary to report individual provider-level data at other than the health plan level (e.g., statewide or via all-payer claims databases).
Another possible explanation for why participants who visited the online directory were not swayed by the performance data is that they did not fully understand the measures of physician quality. In general, consumers have difficulty understanding comparative quality data.36–37 Though little is known about how consumers understand roll-up scores such as those reported in the HealthPlus physician performance report, there is reason to believe that consumers may have particular difficulty understanding these scores. A roll-up combines multiple measures that are not necessarily related conceptually into a single score. In principle, roll-up scores should make it easier to arrive at an overall evaluation by reducing the number of dimensions that people need to consider; however, consumers may have little understanding of the dimensions that are rolled up and thus little motivation to use roll-ups for decision-making.
Although our study does not provide evidence that publicly reported data on physician quality affect the quality of PCPs selected by consumers, it did identify a low-cost means of encouraging new plan members to access these data. This is an important finding given that one of the most important challenges in public reporting (at any level) has been promoting awareness of these data.17, 38–40 This finding also suggests the value of strategies directing consumers to physician quality data at a point when they are most likely to be interested in seeing it.41–42
Among those who responded to our survey, our encouragement manipulation seems to have done more than just draw them to the physician performance data—it may have led them to choose PCPs with higher ratings on member satisfaction. That encouragement appears to have influenced the quality of providers chosen by survey respondents, even when they did not access the data on physician quality, suggests that encouragement had an effect that was not dependent on viewing the intended source of quality information. It is plausible that the encouragement intervention enhanced the salience of physician quality, which might then have activated diffuse information-seeking behavior, such as asking friends, family members, or colleagues for recommendations, or consulting for-profit websites (e.g., vitals.com, Angie’s List) that present information on individual providers. If so, then it is also plausible that the experiences underlying those recommendations would be predictive of patient satisfaction but not of clinical measures of quality. The encouragement effect was strongest among regular Internet users, who would have had greater access to information on doctors besides what was presented in the HealthPlus directory. More research is needed to understand the mechanism or mechanisms underlying this effect.
That the effect of encouragement on the member satisfaction (CAHPS) scores of selected PCPs was not evident among the entire study population suggests that respondents to our survey may have been a select subgroup of new enrollees. While a 51% response rate is similar to those observed in surveys of outpatient and inpatient experiences,43–44 and response rates tend to be only weakly associated with nonresponse bias in well-conducted probability samples,45 the possibility of nonresponse bias remains. We observed some evidence of demographic non-representativeness, with women and older people being more likely to return a survey; these are standard patterns in survey response.46–47 We also observed that survey respondents were more likely than non-respondents to select a PCP within one year of enrolling in the health plan, suggesting that survey respondents may be a more activated group of health care consumers, in greater need of health care, or perhaps more conscientious generally. Thus, caution is warranted in making inferences about prevalence based on our sample data. Even so, our randomized encouragement design should protect us against bias in comparing across study conditions, and equal response rates across conditions suggest that there was no differential response by condition.
Our study had other important limitations. First, nearly a third of new plan members assigned to a condition of our study did not select a PCP within a year of enrollment. Though an interesting finding in its own right and not indicative of selection bias per se, this attrition reduced the sample size available to test for effects of encouragement and exposure on quality of PCP selection. Thus, some caution is merited in interpreting non-significant results. Second, our study sample was necessarily limited to those new enrollees who did not pre-designate a PCP on their enrollment form. Many of those who pre-designate a PCP are switching plans and already have a doctor with whom they are satisfied and who is available through the new plan. Our study excludes this subset of new enrollees, though how this may affect our results is unclear. Third, information about exposure to the physician quality report was limited to participants’ self-reports about whether and when they accessed the provider directory. To the extent that participants in the encouragement condition felt compelled to report accessing the directory even if they had not, our results may be biased toward finding an effect of encouragement on this outcome. Other studies that use a similar design should consider less obtrusive ways to collect this information. Finally, it is important to note that our model of PCP choice does not account for many of the reasons why people choose a particular PCP, including word-of-mouth reputation, availability, and location of the provider’s office.48 However, our randomized encouragement design should protect us against bias that might result from the exclusion of such factors.
A number of conditions must be met for healthcare quality reports to be effective. Consumers must know about the data, have access to them, and be in a state of readiness to make a decision.39, 42 The data must be understandable, seen as trustworthy, and relevant to their choice options.48 If any one of these conditions is not met, exposure to performance data is likely to have little or no (detectable) effect, even if it has an effect under optimal conditions. In this study, we were able to provide consumers with access to information at the time they needed it to make a decision, but the performance data covered only a fraction of their choice options. This highlights challenges in finding ways to minimize missing data and provide appropriate information about why data are missing so that consumers’ confidence in the data is not undermined.33 Our study identifies encouragement as a possible “salience intervention” that may have value independent of consumer reporting, and which, under ideal circumstances, may synergistically enhance the effects of reporting.
Disclosure of Funding: This research was supported by the Robert Wood Johnson Foundation (Grant #65447) and the Agency for Healthcare Research and Quality (U18 HS016980). Ron D. Hays was also supported in part by grants from the National Institute on Aging (P30-AG021684) and the National Institute on Minority Health and Health Disparities (2P20MD000182).
Steven C. Martino, RAND Corporation, Pittsburgh, PA.
David E. Kanouse, RAND Corporation, Santa Monica, CA.
Marc N. Elliott, RAND Corporation, Santa Monica, CA.
Stephanie S. Teleki, California HealthCare Foundation, Oakland, CA.
Ron D. Hays, Department of Medicine, University of California, Los Angeles, CA.