Health care systems throughout the world struggle with allocating scarce resources to achieve the best possible impact on patients and populations and with designing incentives that promote high-value care. In the United States, the Department of Health and Human Services (HHS) intends to move 90% of all Medicare fee-for-service payments to alternative payment models by 2018. These approaches can include bundled payments; Accountable Care Organizations (ACOs); and adjustments to fee-for-service payments based on measured quality, efficiency, patient experience, or harm.1 Because health outcomes and costs are driven by the collective decisions made by clinicians and patients, these trends at the health system level are also driving payment to individual health providers into alignment with similar value-oriented goals.
Structuring incentives for quality is not as simple as it seems. This is due, in part, to the challenges of measurement. Clinical quality is more easily defined and understood for populations than for individual patient encounters. For the individual, contextual factors—including patient preferences and the existence of comorbid conditions—are used by wise clinicians to modify the approach that a purely disease- or population-oriented perspective would dictate.2 Failing to acknowledge that the determination of “value” depends on whether the perspective is that of a population versus that of an individual patient only adds to the stress felt by conscientious clinicians.3
Further complicating the search for a useful and beneficent incentive structure is the incontrovertible fact that human behavior in response to measurement, feedback, and reward is not necessarily rational.4 Individuals subject to close scrutiny of their performance may behave consciously or unconsciously in ways contrary to a patient’s, or even an organization’s, best interests—behaviors collectively termed “gaming,” which, though not pervasive in health care settings, are nonetheless worrisome to clinicians and policymakers.5
For these reasons, it is not enough to know whether value-based incentives for clinicians make a difference. The aggregate experience across a variety of interventional and observational studies suggests they can. But structuring a feedback and incentive system that promotes positive change and minimizes negative consequences requires more. One equally needs to know how they make a difference: how incentives were designed and implemented; how clinicians were engaged (or disengaged) by the process; how improvements happened; and how long gains were sustained after incentives were withdrawn or changed.
In this issue, Kondo and colleagues6 start with the literature that has examined the effects of payments for quality on physician behavior and take the next step, deeply examining the specific methods as well as the context of implementation to enumerate those features that would be of greatest interest to a manager attempting to replicate their success. They do this using the Consolidated Framework for Implementation Research (CFIR), a model for thinking about what makes a program successful,7 including program design, action steps taken to implement the program, external context, internal (organizational) characteristics, provider characteristics, provider responses (cognitive, affective, and behavioral), and program outcomes (including changes in processes of care and patient outcomes). These are factors that are critical to the success of any complex organizational intervention involving competing demands for time, attention, and energy. They also are factors typically not reported in the biomedical literature or in case studies of organizational improvement. This prompted Kondo’s team to reach out directly to the authors of the major studies in order to learn through key informant interviews the important qualitative details of implementation that were not included in the published findings. Such qualitative aspects of implementation are far from “merely subjective” or “unscientific,” and arguably are the key to understanding what makes any initiative a success.8
Not every question the authors raise could be answered in the published literature or through informants; these gaps lead the authors to call for more research on performance pay, with a particular focus on implementation factors. That is a reasonable request, but not an excuse for failing to take action based on what we already know to be true. The findings of Kondo et al. are not only feasible and pragmatic but strikingly consistent with the relevant literature on quality audit and feedback9 and the general management literature on metrics, rewards, and motivation.10 We list some of the cross-cutting themes in Table 1. The parallels are remarkable, and suggest we already have sufficient and compelling empirical evidence—both qualitative and quantitative—to inform the implementation of clinical pay-for-performance programs.
Borrowing the language of Pritchard and Ashwood:10
As close observers of recent successes—as well as a highly publicized lapse—of performance measurement and incentives within VA, we believe the lessons from Kondo and colleagues are timely and relevant. It is now our challenge to put them to effective use, for the good of Veterans, the engagement of our clinicians and managers, and the trust of the American people.
The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States Government.