We find a significant change in patient sorting with respect to pain after public reporting was initiated: high-risk patients became more likely to go to high-scoring facilities and low-risk patients more likely to go to low-scoring facilities. We also find that the incidence of pain at admission decreases after the launch of public reporting in a way that is not predicted by other patient characteristics, suggesting that facilities were downcoding high-risk patients. Nonetheless, even after accounting for potential downcoding, significant evidence of patient sorting remains.
Although we find evidence of changes in patient sorting and in admission risk profiles with respect to pain, we find little evidence of either with respect to delirium or walking. There are plausible explanations for these discrepant findings. First, admission delirium and difficulty walking had very low and very high prevalence (7 and 96 percent, respectively), making it less likely that we would detect changes in patient sorting even if they occurred. Second, to the extent that improved matching is driven by patient behavior, the report card measure of pain control may be more salient, so patients (or their agents) may be more likely to respond to it. Because within-facility correlation between report card measures is low, improved matching on one measure will not necessarily spill over to an uncorrelated measure.
We also find that changes in patient sorting on pain waned over four quarters in both pilot and nonpilot states. One explanation for this waning might be that the effect of public reporting itself fades over time as fewer resources are devoted to promoting it, but a more compelling explanation involves the inadequate risk adjustment of the quality measures used in Nursing Home Compare (Arling et al. 2007; Mukamel et al. 2008a). If quality measures are poorly risk adjusted, then as high-scoring facilities attract high-risk patients their report card scores are bound to decline. Indeed, the four quarters of improved sorting that we document may reflect the delay built into the calculation of the report card scores: scores are calculated three quarters after the data are collected, and we include a one-quarter lag of the report card scores to allow consumers a chance to respond to the information. Thus, changes in patient illness severity occurring at the launch of Nursing Home Compare would take four quarters to appear in the report card score.
We make several important contributions to the existing literature. To our knowledge, we are the first to directly examine changes in patient sorting in response to Nursing Home Compare. Public reporting is usually assumed to improve quality of care by increasing the market share of high-quality providers and/or by giving providers an incentive to improve the quality of care they deliver. Changes in patient sorting suggest an alternative mechanism for improving quality of care: changes in the type of patients a provider sees, rather than or in addition to the number of patients a provider sees. Improved matching suggests that the quality effect of public reporting may be largest among the sickest patients.
While prior work has described a decline in the incidence of admission pain after Nursing Home Compare was launched (Mukamel et al. 2009), we affirm these findings using a robust methodological approach that controls for underlying secular trends. Even when controlling for trends in states where Nursing Home Compare was not simultaneously released, we find clinically meaningful declines in levels of admission pain. However, our analyses suggest that these declines are most consistent with downcoding rather than cream skimming. While it remains possible that nursing homes engage in cream skimming, particularly in ways that are unobservable to us in the data, we find that observable patient characteristics are predictive of higher, and stable, levels of admission pain after Nursing Home Compare was launched. Researchers have found evidence of downcoding in the presence of public reporting (Green and Wintfeld 1995). In addition, prior evidence suggests that the reliability of the pain measure may be low and varies with patient characteristics (Wu et al. 2005a). Despite the evidence of downcoding in this setting, downcoding does not substantially alter our estimates of sorting: even after controlling for changes in coding, we find significant changes in patient sorting associated with public reporting.
A few study limitations should be considered. First, the relationship we estimate between a facility's report card score and admission severity may be endogenous, particularly in the presence of inadequate risk adjustment, where the severity of admitted patients influences that facility's report card score. Although our lagged report card scores account in part for this potential source of endogeneity, it remains a possible source of bias. Second, we limit our analyses to postacute care patients in nursing homes, which limits the generalizability of our results. However, these results provide important information about the potential for public reporting to induce patient matching and downcoding in any health care sector. Third, our difference-in-differences model depends on the assumption that secular trends in pilot and nonpilot states are the same, and potential violations of this assumption would make causal attribution of changes in sorting and case mix to Nursing Home Compare impossible.
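The identifying assumption can be made concrete with a standard two-group difference-in-differences specification (a sketch only; the variable names, coefficient labels, and set of controls here are illustrative and not necessarily the paper's exact model):

```latex
% y_{ist}: outcome (e.g., admission pain) for patient i in state s at quarter t
% Pilot_s = 1 for pilot states; Post_t = 1 for quarters after the launch
% of Nursing Home Compare; X_{ist} is a vector of patient characteristics
y_{ist} = \alpha + \beta\,\mathrm{Pilot}_s + \gamma\,\mathrm{Post}_t
        + \delta\,(\mathrm{Pilot}_s \times \mathrm{Post}_t)
        + X_{ist}'\theta + \varepsilon_{ist}
```

In this sketch, $\delta$ is the effect attributed to public reporting; it is identified only if, absent the launch, pilot and nonpilot states would have followed the same trend in $y$. Any state-specific shock that coincides with the launch loads onto $\delta$, which is why violations of the parallel-trends assumption preclude causal attribution.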
Despite these limitations, our findings have important implications. Public reporting may have the largest impact on improving quality for the sickest patients. Thus, looking for changes in quality on average, rather than among subsets of patients, may lead to an underestimate of the effect of public reporting on quality of care. Although improved matching of patients to providers under public reporting is good news, it is accompanied by the possibility that public reporting may also induce downcoding by providers. Changes in coding may be a justified response to data inaccuracies that must be fixed to more accurately measure quality. However, these changes obfuscate true changes in quality in response to quality improvement incentives.