J Gen Intern Med. Apr 2012; 27(4): 405–412.
Published online October 13, 2011. doi: 10.1007/s11606-011-1906-3
PMCID: PMC3304045
Unintended Consequences of Implementing a National Performance Measurement System into Local Practice
Adam A. Powell, PhD,1,2 Katie M. White, EdD,3 Melissa R. Partin, PhD,1,2 Krysten Halek, MA,1 Jon B. Christianson, PhD,3 Brian Neil, MD,4 Sylvia J. Hysong, PhD,5,6 Edwin J. Zarling, MD,7 and Hanna E. Bloomfield, MD1,2
1Core Research Investigator, Center for Chronic Disease Outcomes Research (CCDOR), Minneapolis VA Health Care System, One Veterans Drive (111-0), Minneapolis, MN 55417 USA
2Department of Medicine, University of Minnesota, Minneapolis, MN USA
3School of Public Health, University of Minnesota, Minneapolis, MN USA
4VA Midwest Health Care Network, VISN 23, Minneapolis, MN USA
5Houston Center for Quality of Care and Utilization Studies, Michael E. DeBakey VA Medical Center, Houston, TX USA
6Baylor College of Medicine, Houston, TX USA
7North Chicago Captain James A. Lovell Federal Health Care Center, North Chicago, IL USA
Corresponding author: Adam A. Powell, Phone: +1-612-4674364, Fax: +1-612-7252118, Adam.Powell@va.gov.
Received June 3, 2011; Revised August 8, 2011; Accepted September 7, 2011.
BACKGROUND
Although the benefits of performance measurement (PM) systems have been well documented, there is little research on their negative unintended consequences in primary care. To optimize PM systems, a better understanding is needed of the types of negative unintended consequences that occur and of their causal antecedents.
OBJECTIVES
(1) Identify unintended negative consequences of PM systems for patients. (2) Develop a conceptual framework of hypothesized relationships between PM systems, facility-level variables (local implementation strategies, primary care staff attitudes and behaviors), and unintended negative effects on patients.
DESIGN, PARTICIPANTS, APPROACH
Qualitative study design using dissimilar cases sampling. A series of 59 in-person individual semi-structured interviews at four Veterans Health Administration (VHA) facilities was conducted between February and July 2009. Participants included members of primary care staff and facility leaders. Sites were selected to assure variability in the number of veterans served and facility scores on national VHA performance measures. Interviews were recorded, transcribed and content coded to identify thematic categories and relationships.
RESULTS
Participants noted both positive effects and negative unintended consequences of PM. We report three negative unintended consequences for patients. Performance measurement can (1) lead to inappropriate clinical care, (2) decrease provider focus on patient concerns and patient service, and (3) compromise patient education and autonomy. We also illustrate examples of negative consequences on primary care team dynamics. In many instances these problems originate from local implementation strategies developed in response to national PM definitions and policies.
CONCLUSIONS
Facility-level strategies undertaken to implement national PM systems may result in inappropriate clinical care, can distract providers from patient concerns, and may have a negative effect on patient education and autonomy. Further research is needed to ascertain how features of centralized PM systems influence whether measures are translated locally by facilities into more or less patient-centered policies and processes.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-011-1906-3) contains supplementary material, which is available to authorized users.
KEY WORDS: health care quality assessment, quality indicators, performance measurement, unintended consequences
“As we bustle from one well documented chart to the next, no one is counting whether we are still paying attention to the human beings.” (Dena Rifkin, New York Times, November 17, 2009, p. D5)
Performance measurement (PM) systems are central to many healthcare organizations’ quality assurance programs.1 Orienting quality management around quantitative measures has led to documented improvement in a variety of domains.2–4 In the Veterans Health Administration (VHA), implementation of a national clinical PM system was associated with dramatic improvements in clinical outcomes, effectively transforming VHA into a quality leader.5–7 However, in recent years, scores on many VHA measures have stabilized at high levels, indicating that the benefit of these measures has mostly been realized. Within VHA and other organizations with mature PM systems, there may be more potential for improvement by concentrating on minimizing negative effects of PM than by focusing on further improving already high scores.
Concerns have been expressed about negative unintended effects8 of PM on unmeasured aspects of care,9–12 on patient-centered care,6,13 and on provider morale and professionalism.8,9,14–19 Chassin et al. recently argued that only measures with “little or no chance of inducing unintended adverse consequences” should be used for accountability purposes.20 However, only a few empirical studies have delineated adverse effects such as decreased continuity of care,2 provider avoidance of sicker patients,21 and increased disparities.22 None of these studies have focused on the pathways by which PM programs produce these effects—information that is critical to designing system improvements. We conducted a qualitative study to explore possible relationships between a centralized clinical PM system, facility-level practices to implement the PM system into daily care, and unintended negative consequences for patients.
METHODS

Design, Setting and Participants
We conducted semi-structured, in-person, individual interviews of staff at four VHA facilities between February and July 2009. To capture diverse perspectives we identified facilities using a dissimilar cases sampling strategy.23 Facilities were chosen based on: 1) facility size (number of outpatients) and 2) performance on an index of primary care clinical PM scores created by averaging scores on cancer screening, cardiovascular health, endocrinology, infection control, and tobacco. After eliminating facilities in the middle two quartiles on either of the two measures, we selected one facility from each of four groups (large high-performers, small high-performers, large low-performers, small low-performers).
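To make the selection logic concrete, the following is a minimal sketch of the dissimilar cases sampling described above. It is illustrative only, not the authors' code: the column names, quartile cut points, and random tie-breaking are assumptions.

```python
import pandas as pd

def select_dissimilar_cases(facilities: pd.DataFrame) -> pd.DataFrame:
    """Pick one large/small x high/low performing facility (illustrative sketch only)."""
    domains = ["cancer_screening", "cardiovascular", "endocrinology",
               "infection_control", "tobacco"]
    df = facilities.copy()
    # Composite PM index: average of the five primary care domain scores.
    df["pm_index"] = df[domains].mean(axis=1)

    # Percentile rank on each sampling dimension.
    size_pct = df["n_outpatients"].rank(pct=True)
    perf_pct = df["pm_index"].rank(pct=True)

    # Drop facilities in the middle two quartiles on either dimension.
    extreme = ((size_pct <= 0.25) | (size_pct > 0.75)) & (
        (perf_pct <= 0.25) | (perf_pct > 0.75)
    )
    eligible = df[extreme].assign(
        size_group=(size_pct > 0.75).map({True: "large", False: "small"}),
        perf_group=(perf_pct > 0.75).map({True: "high", False: "low"}),
    )

    # One facility from each of the four groups (large/small x high/low).
    return eligible.groupby(["size_group", "perf_group"]).sample(n=1, random_state=0)
```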
At each site we interviewed primary care staff including clinic leaders (primary care clinic physician and nurse directors), physicians, non-physician practitioners (e.g. nurse practitioners), and intake nurses. To better understand the processes used to implement PMs we also interviewed facility leaders and quality/PM officers. We conducted 60 interviews, one of which was excluded because it was inadvertently not recorded. Table 1 shows facility and participant characteristics.
Table 1. Participating VHA Facilities
Data Collection
Interviews lasted 60 to 90 minutes and were led by one of the researchers using a guide developed through a review of the PM literature. A series of formative interviews (led by the first author) were conducted to refine the order of topics and phrasing of questions in the interview guide. Interviews focused on the effects of PM on patients, providers, clinics, and facilities. (Provider and leadership interview guides are available in the online Appendix.) Additional modifications to the guide were made as the interviews progressed to ensure that emerging topics were thoroughly explored. The lead interviewer directed the discussion to each topic in the guide, investigating how the PM system was implemented in primary care and its effects on the patient, provider, encounter, clinic, and facility. Probes were used to uncover participant perceptions of the processes by which PMs result in unintended consequences. Participants were given a list of VHA primary care clinical PMs as a discussion aid (Table 2). Interviews were recorded and transcribed.
Table 2. Interview Discussion Aid: 2009 VHA Primary Care Clinical Performance Measures
Measurement/Analysis
We first applied codes to quotes in interview transcripts using qualitative analysis software (NVivo 8.0). Initial “topcodes” were identified inductively by reviewing 15 interviews. As coding progressed, topcodes were refined to reflect emerging constructs. For each transcript, a second coder reviewed the initial topcoding. Discrepancies were resolved through discussion. (See Table 3 for a list of final topcodes.) Next, we applied more specific subcodes to quotes within each topcode. Subcodes were created by one reviewer and then applied by a second coder who also refined the list of subcodes. When subcodes were finalized, the first author created two new topcode categories — “positive consequences” and “negative consequences” — and assigned appropriate subcodes to one of these two topcodes. Subcodes assigned to these two topcodes were reviewed with the study team and refined. We conducted a thematic analysis24 on the “negative consequences” codes by running a set of queries on the dataset to identify potential relationships between codes (e.g. “extra time/work” frequently appeared near “technical issues”) and by re-reading passages from the “negative consequences” subcodes to (1) confirm that each passage represented the “negative consequence” it was coded with, (2) confirm that relationships identified through queries were represented in the data, and (3) identify candidate quotes that concisely and accurately conveyed thematic categories.
Table 3. List of Topcodes Applied to Transcripts
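The coding analysis itself was performed in NVivo 8.0; the sketch below is only a schematic illustration, with hypothetical transcript IDs and code labels, of the kind of co-occurrence query described above (how often two subcodes are applied within the same transcript).

```python
from collections import Counter
from itertools import combinations

# Each coded passage is (transcript_id, subcode); the values here are hypothetical.
coded_passages = [
    (1, "extra time/work"), (1, "technical issues"),
    (2, "extra time/work"), (2, "patient refusal"),
    (3, "technical issues"), (3, "extra time/work"),
]

# Collect the set of subcodes applied within each transcript.
codes_by_transcript = {}
for transcript_id, subcode in coded_passages:
    codes_by_transcript.setdefault(transcript_id, set()).add(subcode)

# Count how many transcripts contain each unordered pair of subcodes.
pair_counts = Counter()
for codes in codes_by_transcript.values():
    for pair in combinations(sorted(codes), 2):
        pair_counts[pair] += 1

# e.g. ('extra time/work', 'technical issues') co-occurs in 2 transcripts.
print(pair_counts.most_common(3))
```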
We report the most commonly mentioned negative unintended consequences as well as several unintended consequences that were mentioned by a smaller number of participants but were considered by the study team to be relevant to practitioners and researchers interested in further exploring PM systems. We also provide a framework of relationships between PMs, facility/clinic implementation practices, staff attitudes and behavior, and unintended consequences that was developed based on the relationships identified between codes (Fig. 1).
Figure 1. Pathways from national performance measurement systems to unintended effects on patient care.
We provide counts of the number of participants mentioning each unintended consequence in parentheses. Unless otherwise noted, each count includes at least one participant from each of the four sites. (Readers should note that our methodology was designed to uncover a range of unintended consequences of PM that may exist in primary care, not to quantify the frequency with which each unintended consequence occurs or to compare frequencies across facilities or respondent groups. The counts give some indication of the relative salience of each issue among participants; however, they should not be interpreted as a measure of frequency of occurrence or magnitude.) Example quotes illustrating each type of unintended consequence were chosen by team consensus and are provided in italics. Because of the small number of facilities, and because facility characteristics were conflated with the order of site visits, we do not stratify our findings by facility subgroups.
RESULTS

Attitudes toward PM varied from very positive to very negative. Many participants noted benefits of PM that extend beyond increasing the proportion of patients receiving the measured aspects of care. For example, feedback from the PM system helps some clinic staff feel more confident that their care is thorough; performance scores can be a source of pride and positive competition; and incorporating preventive health PMs into the clinical encounter sends the message to patients that VHA cares about more than just their immediate needs. However, because the primary objective of our research was to identify opportunities for improvement, we focus our analysis on three unintended negative effects of PM on patients.
Within VHA, PMs are defined and monitored at the national level, but facilities are given considerable latitude over the methods they establish to achieve high scores. Providers in our study tended to focus more on these local policies, processes, and strategies than on the nationally generated PM definitions and scores. As depicted in Figure 1, our analyses suggest several pathways by which local processes developed in response to national PM policies have unintended effects on provider attitudes and behaviors and, in turn, on patient care.
Inappropriate Clinical Care
Treating the Measure not the Patient (Figure Pathway D1→C1→B1, B2→A1). Although all four facilities have systems in place to generate provider-level performance scores, providers at three sites were not required to regularly review their scores and many indicated that they were unaware of their scores. One site, however, distributed weekly reports to each provider containing lists of patients who were not in compliance with performance measures. Providers were asked to indicate in writing reasons for non-compliance or actions that would be taken to achieve compliance. Many participants at this site expressed concerns that too much emphasis was placed on PM scores (Site 3 – 10 participants mentioned).
I think it has become a contest, the number game is just a big thing lately. (Registered Nurse – Site 3)
Pressure to achieve high performance scores can incentivize providers to take actions to improve scores even if these actions are not in the patients’ best interests, including overuse of medications (32 mentioned concerns about inappropriateness of PMs for some patients, 17 mentioned specific concerns about medications or procedures).
There are a lot of providers that are really, really pushed by these performance measures and just add on drugs, add on drugs to treat the performance measure. (Physician – Site 2)
They’ll have a patient that will come into clinic and their blood pressure is up and they give them an oral medication in clinic and have the nurse re-check it . . . Clonidine actually brings blood pressure down quickly but then you get a rebound hypertension afterwards if you don’t follow-up . . . I’ve heard of that happening with one provider who no longer works here. (Facility Leader – Site 4)
Delegation of Care to Intake Nurses (Figure Pathway D2, D3→B2→A1). Computerized clinical reminders (CCRs) are commonly used throughout VHA to integrate PMs into clinical practice. At all four facilities intake nurses are responsible for the CCRs associated with the majority of PMs. Nurses feel that they are expected to complete all active CCRs at each visit, and do so with few exceptions. Several nurses (7) and physicians (2) expressed concerns that this can lead to inappropriate use.
I have some very healthy people in their 20s who all of a sudden come up as supposedly needing pneumococcal vaccine and I'm like ‘no, they don't.’ Unfortunately . . . the nurses get that one and so they’ll just go ahead and give a pneumococcal vaccine. (Physician – Site 1)
Decreased Provider Focus on Patient Concerns and Patient Service
PM Improvement Strategies that Inconvenience/Frustrate Patients (Figure Pathway D4→B1→A2). Sites implemented a variety of strategies to improve PM scores. Most of these seem beneficial to patients, including creating specialized clinics to address PM issues in certain populations (e.g. diabetes clinic), increasing time allotted per patient visit (all four sites now have 30-minute appointments), and calling patients to remind them to complete home colorectal cancer screening tests. Strategies employed at two sites (Sites 2 and 3), however, appeared to unnecessarily inconvenience patients.
We’ve had days where it was towards the end of the month….and we have nurses calling and saying “Why don’t you come get your blood pressure here”. If we have somebody with 190, we’re not even going to call him, we’re going to call him to come in and take care of his blood pressure next month. But for purposes of (the performance measure) – we have patients that when they leave here, their blood pressure at that visit was 140 – if I bring him back in and we redo it, it may be lower or it may be higher – so we’ve done that. We’ve fixed a lot of them that way. (Facility Leader – Site 3)
As an emergency room nurse – they kept telling us . . . that we had to have all these (primary care) reminders done on every patient in the emergency room…we got told over and over … do the reminders, do the reminders … when people are really sick it’s just silly, it’s not the time to be doing all that teaching. (Intake Nurse – Site 2)
Repetition of Clinical Reminders (Figure Pathway D3,D5→B3→A2). Many CCRs at participating facilities are locally programmed to remain active even if a patient refuses a PM-related intervention. Although this is not mandated, CCR programmers are often instructed to match the CCR programming as closely as possible to national PM definitions, which do not exclude patient refusals from the score. As a consequence, many nurses (14) say they waste time asking patients the same questions at multiple visits.
There are some [patients] that are saying I’m just flat-out not doing it, don’t ask me anymore. And one of the nurses actually told me that somebody threw FOBT cards at her. (Intake Nurse – Site 1)
Although participants noted that many patients are used to the repetition of PM-related questions, a few primary care staff (3) at one site (Site 2) mentioned that some patients attempt to avoid it by refusing to respond or by adapting their responses.
A few of the guys…know exactly how many to answer (the alcohol screen) – it says like “less than 14 drinks a week” and they have that number memorized. How many did you have? ‘I had two a day, seven days a week, I had 14.’ (Intake Nurse – Site 2)
Patient Health Concerns Marginalized (Figure Pathway D3, D5, D6→C2, B3, B4→A2). Several inefficiencies in local PM documentation systems create time pressure during the clinical encounter for both intake nurses and providers. Although national VHA PMs are calculated through chart reviews, facilities in our study generated their own local performance scores through the CCR system. Therefore many providers (15) who generally enter the PM data in their progress notes believe they must also re-enter the data in the CCR system.
You have to go back into the reminder and click on these three boxes and then document that you talked with him about alcohol when I’ve already talked with him and I’ve documented it in my note . . . which is a bit frustrating. (Physician – Site 2)
A few providers (3) at two sites (Sites 2 and 3) indicated that even when they are not personally bogged down with PM-related CCRs, the time intake nurses spend with the patient on PM CCRs reduces the time available for provider interaction with the patient.
Sometimes the nurses will be spending up to 20–25 minutes with the patient to try to address all of the performance measures that she feels her job depends on. And then you get the patient 25 minutes late. (Physician – Site 2)
When time is limited, patient health concerns may be given lower priority than PM-related areas of care (23 mentioned).
Occasionally a patient will come in with a complaint and the providers will make sure all the alerts are answered rather than addressing the complaint per se. Because you have this and this to do and you don’t address the fact that they have low back pain because that’s not a performance measure or their ankle hurts or something. (Facility Leader – Site 4)
Our agenda is often guided in some way shape or form by these performance measures, which I think largely are pretty reasonable and pretty evidence-based practices but aren’t always recognized by patients as being the most important thing, and I think can potentially get us down this path of ‘OK, this is the work I need to get done now and I’ve got a lot of it and you need to sit and listen to me so I can turn off my reminders.’ (Physician – Site 2)
Compromises to Patient Education and Autonomy
Quality of Patient Education may Suffer (Figure Pathway D2, D3, D5→C2→B5→A3). There were mixed opinions on how PMs affect patient education. While some felt that PMs are beneficial because more topics are covered, several nurses (3) from one participating site (Site 3) indicated that the emphasis placed on completing all PM-related CCRs during each visit left little time for them to provide the quality of education necessary for patients to make informed decisions.
Every nurse in primary care complains that they don’t have enough time to educate the patient on anything…I think most people work in an assembly line; they just click the buttons to meet the criteria but really not doing justice to the patient….They go through the list of reminders and click, click, click and patient moves on to see the provider. Next patient, get the vitals, click, click, click, move on. (Intake Nurse – Site 3)
Patient’s Decision to Decline Care not Accepted (Figure Pathway D1, D3, D5→C1→B6→A3). Although both nurses and physicians felt patients have a right to refuse PM-related care, several previously described PM implementation strategies make it difficult for them to accept patient refusals, leading to pressure on patients to comply (33 mentioned).
The number one change I would make is when I‘m discussing something with a patient or I’m saying their A1c is too high or their cholesterol needs changing and they absolutely refuse to have any intervention - when I document that I have (had) this lengthy talk with them and told them what the possible consequences of not doing that would be - it should be reflected …as a positive. (Nurse Practitioner – Site 1)
The system does not allow you to refuse . . . It’s like you’re trying to break them down and eventually make them give in and say ‘oh okay, I’ll take the flu shot.’ (Intake Nurse – Site 3)
Adverse Consequences on Primary Care Team Dynamics
Participants described several additional unintended consequences of PMs on primary care team dynamics. Although participants did not directly link these to negative consequences for patients (and they are therefore not included in Fig. 1), they may have important implications for the quality of care provided. At two of the four facilities (Sites 2 and 3), nurses were asked to check on the providers to be sure that they completed and documented PM-related interventions. Several providers (4) and nurses (6) expressed concerns with this arrangement.
At the end of the day I have to go through and police all their charts to see if they’ve done (their PM documentation), but if they haven’t finished the chart or dictated on it until the next day, then I’m expected to … continue to nag the physician and that really irritates them. (Intake Nurse – Site 2)
Additionally, several nurses (3) at two sites (Sites 1 and 3) resented the fact that physicians receive bonuses based in part on PM scores but nurses do not.
I don’t get any monetary returns from my doing my part, and I’ve heard . . . that providers do. To me it’s like I’m helping the provider get a bonus – but it’s part of their job anyways, that’s not fair to me. (Registered Nurse – Site 3)
DISCUSSION

Primary care staff in our study described several ways in which PMs may lead to inappropriate care, may take the focus off of patient concerns and patient service, and may make it more difficult for patients to make informed value-consistent decisions (especially when the patient’s values conflict with interventions deemed appropriate by PMs). These problems have undoubtedly existed to some extent in all healthcare systems even before the advent of modern performance measurement systems. Indeed, PM systems have likely been responsible for improvements in some areas within these quality domains while simultaneously exacerbating problems in other areas (e.g. reducing inappropriate underuse of interventions while increasing inappropriate overuse). Although our research methodology does not allow us to identify which, if any, unintended consequences are inherent in all PM systems, it is noteworthy that few of the unintended negative consequences identified in our research appeared to be directly related to national PM policies. Instead, the influence of PM policy on patient care appeared to be mediated through facility-level efforts to implement these national policies. This finding suggests that greater collaboration may be needed between developers of PMs and those responsible for facility implementation so that PM developers understand the variation in how measures are adapted into practice and local implementers have a thorough understanding of the rationale and evidence underlying the measure.
As hypothesized elsewhere,9–11,22 we found some evidence that PMs can drive inappropriate care, such as polypharmacy. Prescribing drugs involves greater risk but is a more efficient way to meet the PMs than behavioral interventions, and providers may feel incentivized by the PMs to provide medications rather than behavioral health counseling. The administration of more drugs than is medically indicated may occur in over half of elderly patients25 and is a risk factor for morbidity and mortality.26–29 The examples of overtreatment provided by participants were sometimes reported second hand, and it is unknown whether they represent perception or reality. Even if more perceived than real, perceptions may shape norms and have unintended effects on provider behavior and satisfaction; or, to quote the sociologist W.I. Thomas: “If men define situations as real, they are real in their consequences.”30
This work contributes to a growing body of evidence suggesting that PM systems can have negative effects on provider-patient communication. As others have observed,14,31 we found that PMs increase providers’ workload during the clinical encounter, which can crowd out education and the discussion of issues that are of higher priority to patients (including, on occasion, the chief complaint). It is therefore perhaps not surprising that the establishment of a national pay-for-performance system in the UK failed to improve patient ratings of patient-provider communication even though this measure was incentivized.2 PMs that count patient refusals against PM scores may add to the communication problem by making it more difficult for providers to accept a patient’s choice to decline care.
Avenues for Improvement
This work does not provide sufficient evidence to conclude that the frequency and seriousness of the unintended negative consequences of PM warrant system-wide change. Our data do indicate, however, that PM systems carry some risk of negatively affecting the quality of patient care. Organizations wishing to take the lead in improving their PM systems may therefore be interested in exploring ways to minimize this risk. The following approaches may address several of the negative unintended consequences identified in this research:
  • Integrate the development of PMs and local implementation strategies through collaboration between PM system administrators and facility staff.
  • Develop PMs that monitor overtreatment to balance the current focus on undertreatment.
  • Modify PMs to credit appropriate provider treatment behavior as well as achievement of patient health goals. (See Kerr and colleagues’ work on “tightly linked” PMs.32,33)
  • Track the number of patients who refuse care and exclude them from measure denominators (a hypothetical numeric illustration follows this list). Although some gaming may occur,14,34,35 research suggests that widespread abuse is unlikely.35
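As a purely hypothetical illustration of the denominator change in the last recommendation (the numbers are invented, not study data):

```python
# Hypothetical illustration of excluding documented refusals from a PM denominator.
eligible_patients = 200      # patients who meet the measure's inclusion criteria
documented_refusals = 20     # patients who declined the intervention after counseling
received_intervention = 150  # patients who received the measured intervention

score_current = received_intervention / eligible_patients                    # 0.75
score_refusals_excluded = received_intervention / (eligible_patients - documented_refusals)  # ~0.83

print(f"Current scoring: {score_current:.0%}")
print(f"Refusals excluded from denominator: {score_refusals_excluded:.0%}")
```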
VHA is currently making several changes to the national PM system to incorporate insights from this and other research. A newly convened national clinical reminder standardization workgroup will develop reminders that facilitate patient-centered care decisions. New PMs are being considered that will reward clinically appropriate action, even if the patient has not achieved specific targets. Finally, more flexible incentive plans were recently introduced that hold facilities accountable for fewer measures, chosen to address areas with the greatest opportunity for local improvement.
Limitations
Our methodology allowed us to explore a range of unintended consequences of PM and identify clinic processes and provider actions that appear to lead to their occurrence. However, this methodology is not well suited for assessing the frequency with which the identified effects occur or evaluating the balance of positive and negative effects of PM. Additional quantitative studies are needed to address these issues. Additionally, our findings may not generalize to other healthcare systems. However, most of the VHA’s primary care PMs are similar or identical to HEDIS (Healthcare Effectiveness Data and Information Set) measures which are used by over 90% of U.S. health plans.36 Our data consisted of facility staff self-reports (and in some cases second-hand reports on the behavior of others). Participants may have incorrectly inferred negative effects on patients, may have been reluctant to share perceived deficiencies in personal or facility practices, or may have been more likely than non-participants to express the views summarized above.
CONCLUSIONS
Although quantitative performance measurement systems have important benefits for patients, they may also: (1) lead to inappropriate clinical care; (2) decrease providers’ focus on patient concerns and patient service; and (3) compromise patient education and autonomy. These negative unintended consequences are largely determined by facility-level strategies undertaken to improve PM scores. Additional study is needed to determine the prevalence of unintended effects and to better understand how to implement systems in ways that minimize them.
Electronic supplementary material
ESM 1 (PDF 222 KB)
Acknowledgements
We gratefully acknowledge Joe Francis (Veterans Health Affairs Chief Quality and Performance Officer) for providing comments on a previous draft of this manuscript.
Conflict of Interest
None disclosed.
Funding/Support
This work was supported by a Veterans Administration Health Services research grant (IIR-07-140).
Role of the Sponsors
The funding organizations had no role in the design and conduct of the study, in the collection, analysis, and interpretation of the data, or in the preparation, review, or approval of the manuscript.
References

1. Asch SM, McGlynn EA, Hogan MM, Hayward RA, Shekelle P, Rubenstein L, et al. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Intern Med. 2004;141(12):938–945.
2. Campbell SM, Reeves D, Kontopantelis E, Sibbald B, Roland M. Effects of pay for performance on the quality of primary care in England. N Engl J Med. 2009;361(4):368–378. doi: 10.1056/NEJMsa0807651.
3. Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486–496. doi: 10.1056/NEJMsa064964.
4. Rosenthal MB, Frank RG, Li Z, Epstein AM. Early experience with pay-for-performance: from concept to practice. JAMA. 2005;294(14):1788–1793. doi: 10.1001/jama.294.14.1788.
5. Davies M, Spears W, Pugh J. What VA providers really think about clinical practice guidelines. Federal Practitioner. 2004;15.
6. Hayward RA. Performance measurement in search of a path. N Engl J Med. 2007;356(9):951–953. doi: 10.1056/NEJMe068285.
7. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs Health Care System on the quality of care. N Engl J Med. 2003;348(22):2218–2227. doi: 10.1056/NEJMsa021899.
8. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999;341(15):1147–1150. doi: 10.1056/NEJM199910073411511.
9. Sheldon T. Promoting health care quality: what role performance indicators? Qual Health Care. 1998;7(Suppl):S45–S50.
10. Wachter RM, Flanders SA, Fee C, Pronovost PJ. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med. 2008;149(1):29–32.
11. Snyder L, Neubauer RL. Pay-for-performance principles that promote patient-centered care: an ethics manifesto. Ann Intern Med. 2007;147(11):792–794.
12. Eddy DM. Performance measurement: problems and solutions. Health Aff (Millwood). 1998;17(4):7–25. doi: 10.1377/hlthaff.17.4.7.
13. Boyd CM, Darer J, Boult C, Fried LP, Boult L, Wu AW. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: implications for pay for performance. JAMA. 2005;294(6):716–724. doi: 10.1001/jama.294.6.716.
14. McDonald R, White J, Marmor TR. Paying for performance in primary medical care: learning about and learning from "success" and "failure" in England and California. J Health Polit Policy Law. 2009;34(5):747–776. doi: 10.1215/03616878-2009-024.
15. Blumenthal D. The vital role of professionalism in health care reform. Health Aff (Millwood). 1994;13(1):252–256. doi: 10.1377/hlthaff.13.1.252.
16. Bokhour BG, Burgess JF Jr, Hook JM, White B, Berlowitz D, Guldin MR, et al. Incentive implementation in physician practices: a qualitative study of practice executive perspectives on pay for performance. Med Care Res Rev. 2006;63(1 Suppl):73S–95S. doi: 10.1177/1077558705283645.
17. Fisher ES. Paying for performance–risks and recommendations. N Engl J Med. 2006;355(18):1845–1847. doi: 10.1056/NEJMp068221.
18. Sittig DF, Krall MA, Dykstra RH, Russell A, Chin HL. A survey of factors affecting clinician acceptance of clinical decision support. BMC Med Inform Decis Mak. 2006;6:6. doi: 10.1186/1472-6947-6-6.
19. Bindman AB. Can physician profiles be trusted? JAMA. 1999;281(22):2142–2143. doi: 10.1001/jama.281.22.2142.
20. Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures–using measurement to promote quality improvement. N Engl J Med. 2010;363(7):683–688. doi: 10.1056/NEJMsb1002320.
21. Shen Y. Selection incentives in a performance-based contracting system. Health Serv Res. 2003;38(2):535–552. doi: 10.1111/1475-6773.00132.
22. Werner RM, Asch DA, Polsky D. Racial profiling: the unintended consequences of coronary artery bypass graft report cards. Circulation. 2005;111(10):1257–1263. doi: 10.1161/01.CIR.0000157729.59754.09.
23. Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage; 2002.
24. Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res. 2007;42(4):1758–1772. doi: 10.1111/j.1475-6773.2006.00684.x.
25. Lipton HL, Bero LA, Bird JA, McPhee SJ. The impact of clinical pharmacists' consultations on physicians' geriatric drug prescribing. A randomized controlled trial. Med Care. 1992;30(7):646–658. doi: 10.1097/00005650-199207000-00006.
26. Jyrkka J, Enlund H, Korhonen MJ, Sulkava R, Hartikainen S. Polypharmacy status as an indicator of mortality in an elderly population. Drugs Aging. 2009;26(12):1039–1048. doi: 10.2165/11319530-000000000-00000.
27. Flaherty JH, Perry HM III, Lynchard GS, Morley JE. Polypharmacy and hospitalization among older home care patients. J Gerontol A Biol Sci Med Sci. 2000;55(10):M554–M559. doi: 10.1093/gerona/55.10.M554.
28. Espino DV, Bazaldua OV, Palmer RF, Mouton CP, Parchman ML, Miles TP, et al. Suboptimal medication use and mortality in an older adult community-based cohort: results from the Hispanic EPESE Study. J Gerontol A Biol Sci Med Sci. 2006;61(2):170–175. doi: 10.1093/gerona/61.2.170.
29. Iwata M, Kuzuya M, Kitagawa Y, Suzuki Y, Iguchi A. Underappreciated predictors for postdischarge mortality in acute hospitalized oldest-old patients. Gerontology. 2006;52(2):92–98. doi: 10.1159/000090954.
30. Thomas WI, Thomas DS. The Child in America: Behavior Problems and Programs. New York: Knopf; 1928. pp. 571–572.
31. Wilkinson EK, McColl A, Exworthy M, Roderick P, Smith H, Moore M, et al. Reactions to the use of evidence-based performance indicators in primary care: a qualitative study. Qual Health Care. 2000;9(3):166–174. doi: 10.1136/qhc.9.3.166.
32. Kerr EA, Krein SL, Vijan S, Hofer TP, Hayward RA. Avoiding pitfalls in chronic disease quality measurement: a case for the next generation of technical quality measures. Am J Manag Care. 2001;7(11):1033–1043.
33. Kerr EA, Smith DM, Hogan MM, Hofer TP, Krein SL, Bermann M, et al. Building a better quality measure: are some patients with 'poor quality' actually getting good care? Med Care. 2003;41(10):1173–1182. doi: 10.1097/01.MLR.0000088453.57269.29.
34. Epstein AM. Paying for performance in the United States and abroad. N Engl J Med. 2006;355(4):406–408. doi: 10.1056/NEJMe068131.
35. Doran T, Fullwood C, Gravelle H, Reeves D, Kontopantelis E, Hiroeh U, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355(4):375–384. doi: 10.1056/NEJMsa055505.
36. National Committee for Quality Assurance (NCQA) Web site. Available at: http://www.ncqa.org/tabid/187/Default.aspx. Accessed February 21, 2011.