J Gen Intern Med. 2016 April; 31(Suppl 1): 6–7.
Published online 2016 March 7. doi: 10.1007/s11606-015-3574-1
PMCID: PMC4803683

Implementing Performance Pay in Health Care: Do We Know Enough to Do It Well?

Joseph Francis, MD, MPH (corresponding author) and Carolyn Clancy, MD, MPH

Health care systems throughout the world struggle with allocating scarce resources to achieve the best possible impact on patients and populations and with designing incentives that promote high-value care. In the United States, the Department of Health and Human Services (HHS) intends to move 90% of all Medicare fee-for-service payments to alternative payment models by 2018. These approaches can include bundled payments; Accountable Care Organizations (ACOs); and modifying fee-for-service payments based on measured quality, efficiency, patient experience, or harm.1 Because health outcomes and costs are driven by the collective decisions made by clinicians and patients, these trends at the health system level are also driving the alignment of payment to individual health providers toward similar value-oriented goals.

Structuring incentives for quality is not as simple as it seems. This is due, in part, to the challenges of measurement. Clinical quality is more easily defined and understood for populations than for individual patient encounters. For the individual, contextual factors—including patient preferences and the existence of comorbid conditions—are used by wise clinicians to modify the approach that a purely disease- or population-oriented perspective would dictate.2 Failing to acknowledge that the determination of “value” depends on whether the perspective is that of a population versus that of an individual patient only adds to the stress felt by conscientious clinicians.3

Further complicating the search for a useful and beneficent incentive structure is the incontrovertible fact that human behavior in response to measurement, feedback, and reward is not necessarily rational.4 Individuals subject to close scrutiny of their performance may behave consciously or unconsciously in ways contrary to a patient’s, or even an organization’s, best interests—behaviors collectively termed “gaming,” which, though not pervasive in health care settings, are nonetheless worrisome to clinicians and policymakers.5

For these reasons, it is not sufficient to understand whether value-based incentives for clinicians make a difference. The aggregate experience across a variety of interventional and observational studies suggests they can. But that is insufficient for structuring a feedback and incentive system that promotes positive change and minimizes negative consequences. One equally needs to know how they make a difference: how incentives were designed and implemented; how clinicians were engaged (or disengaged) by the process; how improvements happened; and how long gains were sustained after withdrawal or change of incentives.

In this issue, Kondo and colleagues6 start with the literature that has examined the effects of payments for quality on physician behavior and take the next step, deeply examining the specific methods as well as the context of implementation to enumerate those features that would be of greatest interest to a manager attempting to replicate their success. They do this using the Consolidated Framework for Implementation Research (CFIR), a model for thinking about what makes a program successful,7 including program design, action steps taken to implement the program, external context, internal (organizational) characteristics, provider characteristics, provider responses (cognitive, affective, and behavioral), and program outcomes (including changes in processes of care and patient outcomes). These are factors that are critical to the success of any complex organizational intervention involving competing demands for time, attention, and energy. They also are factors typically not reported in the biomedical literature or in case studies of organizational improvement. This prompted Kondo’s team to reach out directly to the authors of the major studies in order to learn through key informant interviews the important qualitative details of implementation that were not included in the published findings. Such qualitative aspects of implementation are far from “merely subjective” or “unscientific,” and arguably are the key to understanding what makes any initiative a success.8

Not every question the authors raise could be answered either in the published literature or through informants—these gaps lead the authors to call for more research on performance pay with a particular focus on implementation factors. This is a reasonable request, but not an excuse for failing to take action based on what we already know to be true. The findings of Kondo et al. are not only feasible and pragmatic, but strikingly consistent with relevant literature regarding quality audit and feedback,9 and the general management literature on metrics, rewards, and motivation.10 We list some of the cross-cutting themes in Table 1. The parallels are remarkable, and suggest we already have sufficient and compelling empirical evidence—both qualitative and quantitative—to inform the implementation of clinical pay-for-performance programs (Table 1).

Table 1
Cross-cutting Themes from Management Literature and Studies of Health Care Performance

Borrowing the language of Pritchard and Ashwood:10

  1. Clinical professionals have an intrinsic need to do a good job. Incentive payments are a way of signaling priorities, and monetary compensation is likely a secondary factor for many.
  2. Clinicians value a degree of autonomy, even if that autonomy is not absolute. Performance plans need to be participative in their design and engage clinicians in the actions that improve performance.
  3. Clinicians resent being held accountable for factors outside their control. Asking them to do what they believe is impossible will invite disengagement, or worse, manipulation and “gaming” the system.
  4. Feedback and evaluation are not the same: feedback is designed to improve performance, whereas evaluation ranks an individual’s performance against a reference standard. Relevant data presented in a timely and positive manner to improve performance will promote teamwork and confidence.
  5. Clinicians want to feel valued. Callous presentation of results, or an exclusive emphasis on productivity or efficiency, may work against the change that health systems are seeking.
  6. Clinicians do not want their time wasted. The measures should matter; those that don’t should sunset as soon as possible.

As close observers of recent successes—as well as a highly publicized lapse—of performance measurement and incentives within VA, we believe the lessons from Kondo and colleagues are timely and relevant. It is now our challenge to put them to effective use, for the good of Veterans, the engagement of our clinicians and managers, and the trust of the American people.

Footnotes

The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States Government.

REFERENCES

1. Burwell SM. Setting value-based payment goals—HHS efforts to improve U.S. health care. N Engl J Med. 2015;372:897–899. doi: 10.1056/NEJMp1500445.
2. Boyd CM, Darer J, Boult C, et al. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: implications for pay for performance. JAMA. 2005;294:716–724. doi: 10.1001/jama.294.6.716.
3. Moffatt-Bruce S, Hefner JL, McAlearney AS. Facing the tension between quality measures and patient satisfaction. Am J Med Qual. 2015;30:489–490. doi: 10.1177/1062860614557352.
4. Ariely D. Predictably irrational: the hidden forces that shape our decisions. New York: HarperCollins; 2010.
5. Bevan G, Hood C. What’s measured is what matters: targets and gaming in the English public health care system. Public Adm. 2006;84:517–538. doi: 10.1111/j.1467-9299.2006.00600.x.
6. Kondo KK, Damberg CL, Mendelson A, et al. Implementation processes and pay for performance in healthcare: a systematic review. J Gen Intern Med. 2015.
7. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50.
8. Pluye P, Nha Hong Q. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu Rev Public Health. 2014;35:29–45. doi: 10.1146/annurev-publhealth-032013-182440.
9. Ivers NM, Sales A, Colquhoun H, et al. No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. 2014;9:14. doi: 10.1186/1748-5908-9-14.
10. Pritchard RD, Ashwood EL. Managing motivation: a manager’s guide to diagnosing and improving motivation. London: Routledge; 2008.
