In the last half century, the USA has gone from defining quality, to measuring quality, to requiring providers to publicly report quality measures and, most recently, to beginning to hold providers accountable for those results. External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Our investment in required quality measures has served us well. First, it has stimulated the development of new quality measurement and improvement infrastructure within many health systems, infrastructure that was previously absent or underdeveloped.1
Second, it has helped to make good on the call for transparent results reporting that was a central feature of the Institute of Medicine Chasm Report.2
Consumers, providers, governing boards, employers, payers and accreditors can view comparative results and take action to enhance accountability.3
Third, publicly reported measures have been associated with improved levels of quality: the hospital core measures programme shows that evidence-based care for hospitalised patients has increased rapidly across the country;4
and the QUEST programme (in which over 150 hospitals are voluntarily seeking to improve performance on a small, standard set of measures) has demonstrated rapid improvement in death rates, evidence-based care and inpatient costs per discharge.5
Fourth, publicly reported measures have created new opportunities for researchers to conduct comparative-effectiveness studies and for medical educators to advance practice-based learning and improvement.
The number of quality measures that healthcare providers are required to report has skyrocketed over the past decade, and that trend is poised to continue. For example, the number of National Quality Forum approved measures has grown from fewer than 200 in 2005 to over 700 in 2011 (personal communication, Helen Burstin, MD, Senior Vice President for Performance Measures, National Quality Forum, 26 February 2011). In just the past year, the US Centers for Medicare and Medicaid Services recommended 65 quality performance measures to hold care organisations accountable and to make payments based on these performance metrics,6
and new measures are being introduced to ensure that providers are meaningfully using electronic health records.7
Unfortunately, the accelerated deployment of quality measures has had some unintended consequences. First, the need to invest in capturing required metrics and to improve performance on them enough to reach the top echelon has caused some providers to overinvest measurement resources and improvement dollars in these high-profile, high-visibility measures. This has led organisations to deplete their quality measurement budgets and neglect other important topics. To provide just one example, the Massachusetts General Hospital and Massachusetts General Physicians Organisation is required to report over 120 quality measures to regulators and payers, necessitating an infrastructure that consumes approximately 1% of net patient service revenue.8
Consequently, this organisation has little left in its measurement budget to pursue more important topics, such as patient-centred health outcomes and healthcare-associated harm. Second, different providers will have different areas that are most in need of improvement. The most productive improvement in quality for a specific organisation depends upon where it is in its quality journey (eg, going from a harm-event rate of 10⁻¹ needs different approaches than going from a rate of 10⁻³).
It may be better policy to have a small required set of quality metrics and large optional sets, so that organisations can target their improvements on the areas where they are most needed. Third, some providers appear to have made sham improvements (eg, distributing a smoking cessation leaflet to all heart failure patients at midnight to ensure 100% compliance with a particular core metric) that meet the measurement requirement but not the patient's need for a meaningful intervention. Fourth, many providers have reached high performance levels not by improving the efficient design of high-quality care but by hiring a heart failure or pneumonia nurse to plug the process holes before patient discharge, thereby scoring high but adding costs without improving the reliability of the underlying process. This 'whack-a-mole' approach to quality improvement is unsustainable and will produce only marginal benefit. Fifth, there are statistical considerations. When the underlying measure is imperfect, marginal improvements from 96% to 99% may reflect measurement error and denominator management more than genuinely near-perfect performance. Even with a more error-proofed measure, it is very difficult to rank providers (physicians or hospitals) accurately because of the problems of collecting reliable data across multiple sites, the challenges of attribution and the difficulties of forming comparable risk cohorts. Finally, a substantial number of studies have shown that there is often a weak association between high scores on process quality measures for given conditions (eg, acute myocardial infarction, heart failure) and the health outcomes that matter most to patients and payers.10
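The statistical point about 96% versus 99% can be made concrete with a back-of-the-envelope calculation. The sketch below, which assumes a purely illustrative denominator of 200 eligible patients per hospital (the figure is our assumption, not drawn from the programmes discussed above), shows that the confidence intervals around two such scores overlap, so the 3-point gap alone cannot reliably separate the two providers:

```python
import math

def normal_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a proportion.

    Near the 100% boundary this simple approximation can exceed 1.0;
    it is used here only to illustrate the width of the uncertainty.
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Two hypothetical hospitals, each with 200 eligible patients
lo_a, hi_a = normal_ci(0.96, 200)   # hospital scoring 96% on a core measure
lo_b, hi_b = normal_ci(0.99, 200)   # hospital scoring 99% on the same measure

print(f"96% performer: {lo_a:.3f} to {hi_a:.3f}")
print(f"99% performer: {lo_b:.3f} to {hi_b:.3f}")
# If the intervals overlap, the apparent 3-point gap may be sampling noise
print("intervals overlap:", hi_a > lo_b)
```

At these denominators the intervals do overlap, and that is before adding the attribution and risk-adjustment problems noted above, which widen the real uncertainty further.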
If the USA continues with the proliferation of required quality measures, we will go from hundreds of required metrics to thousands. There are thousands of diseases, injuries, clinical states and interventional procedures, each with a large and growing list of evidence-based care processes,11 and every special interest group could lobby for 'their' disease, injury or procedure to enjoy the 'legitimacy' and claim on resources associated with being designated as a required quality metric. At the same time, providers of care have a genuine need to develop internal mechanisms to continuously measure and improve the processes of care delivery (ie, what they do) and the outcomes and costs of the care that they provide.
We believe that if current trends in the growth of required quality measures continue, providers will need to invest so much money in reporting externally imposed measures that scant funds will remain to support the provider-specific internal measurement systems needed for monitoring and improving quality, for capturing longitudinal measures and for cascading them to major clinical programmes and front-line clinical microsystems. Unchecked growth in mandated quality measures will lead to a commensurate growth in the share of quality-metric budgets devoted to 'required metrics' and thereby leave few 'discretionary' dollars for internal quality measurement systems or for the results that matter most to end users. In short, the drive to increase the scope and depth of required measures to judge quality could have the unanticipated consequence of decreasing providers' ability to manage and improve quality and to meet our need for better quality, better outcomes and lower per capita costs.12
To summarise, the growth in the number of publicly reported and externally mandated quality measures has generated both positive and negative effects. We ask: has the time come to provide guidance and principles for the future development of the quality metrics that providers are required to produce? We think the answer is 'yes'. Even if we can shift some of the measurement burden to patients through greater reliance on patient-reported measures, a development we support, the exponential growth of other measures will overtake our limited resources.13
Thus, the resources that might be devoted to ‘end user’ value will be diverted to cover a plethora of quality-performance metrics that may have a limited impact on the things that patients and payers want and need (ie, better outcomes, better care and lower per capita costs).