The CMS allows hospitals and their survey vendors to choose among four modes of survey administration so that they can implement HCAHPS using their preferred method. This flexible approach requires adjustment, using estimates derived from the mode experiment, to ensure that the resultant hospital-level scores are equitable and comparable, irrespective of a hospital's choice of mode.
A randomized mode experiment found evidence of substantial mode effects for outcomes of the HCAHPS Survey. In general, evaluations were more positive in the telephone and active IVR modes than in the mail mode, whereas mixed mode (mail with telephone follow-up) did not significantly differ from mail mode. These mode effects were large enough to substantially bias comparisons among hospitals choosing different modes unless mode adjustments are made, with errors corresponding to 30 or more percentile points possible for several outcomes. This pattern was largely insensitive to PMA. The small differences in who responded by randomized mode (mainly a slightly younger sample with fewer recent maternity cases in the telephone and active IVR modes than in the mail and mixed modes) explained little of the observed differences in responses across modes.
These results suggest that the observed total mode effect is primarily a function of how people respond (the response effect), rather than who responds (the compositional effect), with respect to observed patient characteristics, though this experiment cannot rule out differential selection by mode on the basis of unobserved characteristics.
Mode effects were considerably larger for the top-box scoring that will be publicly reported than with mean (linear) scoring of outcomes. One possible explanation is that top-box mode effects may reflect both social desirability bias affecting all response options and a “recency” effect that only applies strongly to the top-box response option. Because positive response options appear last on the HCAHPS survey, the positive telephone and active IVR effects may in part represent a cognitive effect known as the recency effect, meaning a tendency to pick the last option within a list with an auditory rather than visual presentation (Baddeley and Hitch 1977). Mode effects on full-scale outcome means may “dilute” the recency effects, as some of the variation for this scoring occurs among the lower response options, which are presented earlier.
One might have expected larger survey mode effects for ratings, which are thought to be more subjective, than for report items. For example, a recent study found greater effects of proxy respondents on CAHPS ratings than on CAHPS reports (Elliott et al. 2008). Nonetheless, there was no systematic difference in the magnitudes of mode effects for ratings and reports for the HCAHPS survey.
The PMA had small-to-moderate effects on hospital scores that were typically less consequential than mode adjustments, with self-rated health and educational attainment being the most important PMA variables. While there was evidence of differential nonresponse overall, and evidence that those with lower response propensity had less positive evaluations of care, there was no evidence that nonresponse weighting based on available data improved the accuracy of hospital scores beyond what could be achieved with PMA.
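The logic of covariate-based patient-mix adjustment can be illustrated with a minimal regression sketch. The data, variable names, and coefficients below are hypothetical, not the actual HCAHPS PMA model; the sketch only shows the general mechanism of moving each hospital's score to a common patient mix.

```python
import numpy as np

# Minimal sketch of patient-mix adjustment (PMA) via linear regression;
# covariates, coefficients, and data are hypothetical, not the HCAHPS model.

rng = np.random.default_rng(0)
n = 1000
health = rng.integers(1, 6, size=n).astype(float)      # self-rated health, 1-5
education = rng.integers(8, 21, size=n).astype(float)  # years of schooling
hospital = rng.integers(0, 10, size=n)                 # 10 hypothetical hospitals
# Hypothetical top-box outcome that depends partly on self-rated health.
score = (rng.random(n) < 0.5 + 0.04 * (health - 3)).astype(float)

# Fit a patient-level regression of the outcome on the mix variables.
X = np.column_stack([np.ones(n), health, education])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Move each hospital to the overall patient mix: subtract the predicted
# effect of its patients' deviation from the grand-mean covariates.
grand_mean_x = X.mean(axis=0)
adjusted = {}
for h in range(10):
    mask = hospital == h
    deviation = X[mask].mean(axis=0) - grand_mean_x
    adjusted[h] = score[mask].mean() - deviation @ beta
```

In this simplified form, a hospital whose patients match the overall mix receives no adjustment, while a hospital with, say, healthier-than-average patients has the predicted advantage removed.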
Before public reporting on the Hospital Compare Web site, the CMS will adjust HCAHPS results by first applying the PMA model described in this article and then applying a simple fixed-effects adjustment by survey mode, based on mode effect estimates that incorporated PMA. Given the existence of many significant mode effects, mode adjustment will be made for all reported outcomes, even those for which the mode effect is not statistically significant at p<.05. This uniform approach across outcomes is consistent with previous CAHPS practice (Agency for Healthcare Research and Quality 2007).
In making mode adjustments, choosing one mode as the reference point allows the interpretation of adjusted data from all modes as if each hospital's patients had been surveyed in the reference mode. Here, the mail mode is used as the reference mode of survey administration. Surveys conducted in the mail mode are not adjusted further for mode after PMA. Surveys conducted in any of the other three modes (telephone, mixed, active IVR) are further adjusted according to the difference in mode effects between that mode and the mail mode, as estimated through the HCAHPS Mode Experiment. This approach results in estimates for hospitals that correspond to the score the hospital would have received if it had seen the same patients as other hospitals and conducted the survey in the mail mode, regardless of actual patient mix or mode of administration. In research applications, significance testing comparing mode-adjusted hospitals to one another or to benchmarks would incorporate variance attributable to the estimation of mode effects into the standard errors of hospital estimates.
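The mail-referenced adjustment described above can be sketched as a simple subtraction of each mode's estimated effect relative to mail. The mode effect values and hospital score below are hypothetical, not the published HCAHPS estimates.

```python
# Illustrative mail-referenced mode adjustment for one top-box outcome.
# Effect sizes (in percentage points) are hypothetical, not HCAHPS estimates;
# they are each mode's expected score minus the mail-mode score, estimated
# after PMA.
MODE_EFFECT_VS_MAIL = {
    "mail": 0.0,        # reference mode: no further adjustment after PMA
    "mixed": 0.0,       # mixed mode did not differ significantly from mail
    "telephone": 4.5,   # hypothetical positive telephone effect
    "active_ivr": 6.0,  # hypothetical positive active IVR effect
}

def mode_adjust(pma_adjusted_score: float, mode: str) -> float:
    """Return the score as if the hospital had surveyed in the mail mode.

    `pma_adjusted_score` is the hospital's top-box percentage after
    patient-mix adjustment; the estimated mode effect relative to mail
    is then subtracted.
    """
    return pma_adjusted_score - MODE_EFFECT_VS_MAIL[mode]

# Under these hypothetical effects, a telephone hospital's PMA-adjusted
# 82% top-box score maps to 77.5% on the mail-mode scale.
print(mode_adjust(82.0, "telephone"))  # 77.5
print(mode_adjust(82.0, "mail"))       # 82.0
```

As the text notes, research applications comparing such adjusted scores would also need to propagate the sampling variance of the mode effect estimates into the standard errors, which this point-estimate sketch omits.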
There has been rapid and widespread adoption of the HCAHPS survey, which increases its immediate value to policymakers, researchers, and consumers. The survey mode and PMA described here make HCAHPS data more useful to all concerned with improving hospital quality, including the hospitals themselves, because changes in survey vendors, survey modes, or patient populations will not disrupt or distort the continuity of valid, comparable scores over time.