
State-Sponsored Public Reporting Programs of Hospital Quality in the United States

Joseph S. Ross, MD, MHS,* Sameer D. Sheth, and Harlan M. Krumholz, MD, SM§

Abstract

The prevalence of state public reporting initiatives focused on hospital quality is not known. We systematically reviewed state-sponsored public reporting programs focused on clinical aspects of hospital quality and performance for adults, surveying the 50 U.S. states and the District of Columbia. We found that while identifying information about programs was frequently a challenge, programs were present in 25 states (49%) and provided hospital quality information that varied considerably from state to state, both in the conditions covered and in the process and outcome measures reported. We examine the implications of these findings for future state initiatives.

Keywords: Public Reporting, States, Centers for Medicare and Medicaid Services, Quality and Performance, Measurement

INTRODUCTION

Public reporting, the objective measurement and public disclosure of physician and hospital performance, is now a critical strategy among efforts to improve healthcare quality. The success of public reporting programs rests on their capacity to leverage three broad mechanistic pathways (1) to influence hospital quality improvement: regulation, professionalism, and market forces (2). First, public reporting establishes standards for practice by objectively measuring and reporting on care. Second, public reporting provides performance feedback that is expected to fuel the professional desire to improve care, whether out of concern for public image or in an effort to maintain professional norms and standards of self-governance. Finally, public reporting facilitates informed choices by health care consumers, including patients, insurers, and even physicians and hospitals, which can in turn drive quality improvement to increase (or maintain) market share (3, 4).

Despite expectations that public reporting could improve healthcare quality, prior research has shown that public and professional responses to report cards can range widely from functional, such that reduced information asymmetry leads to better healthcare choices and improved quality, to dysfunctional, such that report cards exacerbate already existing informational inequalities in care (5). Further, any such response depends on the report card’s validity, comprehensiveness, comprehensibility, relevance, reasonableness, and functionality (5). Others have suggested that public reporting efforts are only effective when the information becomes “embedded” in the everyday decision-making routines of users and disclosers (6). Most recently, it was suggested that for public reporting to have an impact, the system needs to have the potential to inflict reputational damage by producing information that is reliable, robust to criticism from the hospitals being assessed, understood in broad terms by the public, and published and widely disseminated (7).

Hospital performance profiling did not figure prominently in U.S. health care policy until 1986, when the Health Care Financing Administration (HCFA, now known as the Centers for Medicare and Medicaid Services, or CMS) began to publicly report hospital-specific mortality rates for numerous medical and surgical diagnoses (8). Ultimately, this program was discontinued, but in the late 1980s hospital outcome measurement was revisited as policy through the development of large clinical registry databases for cardiac surgery by two states, New York and Massachusetts (9, 10), and by the Society of Thoracic Surgeons (11). Similar initiatives followed in the Department of Veterans Affairs (12–16), Pennsylvania (17), Northeastern Ohio (Cleveland area) (18), and California (19). However, after a decade of measurement and reporting, programs remained focused predominantly on cardiac surgery patients, until a relatively recent shift toward examining quality of care for older adults. Beginning in the early 2000s, CMS began developing a large public reporting program, initially measuring process measures of quality of care for acute myocardial infarction (AMI), heart failure (HF), pneumonia, and general surgery; the program subsequently added measures of nursing home quality and now includes risk-standardized mortality and readmission rates for AMI, HF, and pneumonia hospitalizations.

While quality measurement and public reporting have grown increasingly focused on national efforts led by CMS, little attention has been paid to the continued growth and development of state-level initiatives. Accordingly, our objective was to systematically review and describe state-sponsored public reporting programs focused on hospital quality and performance, and to describe how accessible the information from these programs is.

METHODS

We surveyed the public health programs of the 50 U.S. states and the District of Columbia during July and August of 2009 to determine the existence and extent of any independent state government or affiliated agency programs designed to publicly report hospital quality for adult patients, apart from the information provided by CMS through its Hospital Compare program. To be included, a state program needed to be focused on at least one clinical aspect of hospital quality, specifically process and outcome measures of care. Our survey was conducted in two steps. First, we searched state government websites to identify public reporting programs, regardless of whether the information from such programs was reported directly on the Internet or as a printed report. If a state government website referred to a state-sponsored, -mandated, or -affiliated program operated through an outside agency, we searched that affiliated agency’s website for the same information. Next, we contacted state officials by telephone within each state government, calling the person or official identified on the state public reporting website as working on data collection and analysis. If no contact person was listed, we called the contact phone number from the website, explained the survey, and asked to be connected with someone who would be able to answer our questions, persisting until we reached a person able to provide the required information. If there was no state public reporting website, we called the State Department of Health official responsible for public health data and statistics and followed the above procedure. Telephone calls served either to confirm that no state public reporting program existed, when we could not find information about one on the state’s website, or to ask questions and seek clarifications about the state public reporting program we had identified from the website. For officials not immediately reachable by telephone, we followed up with a maximum of five telephone calls and emails.

We developed a standardized instrument to extract relevant information made available on state government or affiliated-agency websites by consulting with experts in systematic reviews, quality measurement, and public reporting; preparing an instrument for their review; and piloting the abstraction tool, making modifications as necessary. The following variables were extracted: report frequency, accessibility, and rating system; data source; analytic strategy, including use of risk-standardization and methods used for low-volume hospitals; number and types of conditions reported; and types of measures reported, including structural measures (e.g., volume), process measures (e.g., delivery of a specified treatment for a specific condition, such as rates of aspirin delivery to patients admitted for acute myocardial infarction [AMI]), and outcome measures (e.g., mortality or readmission). All extractions were confirmed by a second author, and disagreements in assessment and data extraction were resolved by consensus among all authors.
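
For illustration only, the sketch below (in Python, which was not used in the study) shows one way the abstracted variables could be organized as a structured record per state program; the field names and example values are assumptions for this example, not the authors’ actual abstraction instrument or data.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StateProgramRecord:
    """Hypothetical record of the variables abstracted for each state program."""
    state: str
    report_frequency: str               # e.g., "annual"
    report_format: str                  # e.g., "website" or "paper report"
    rating_system: Optional[str]        # e.g., "tiers", "numerals", "stars"
    data_source: str                    # e.g., "administrative", "chart-abstracted"
    risk_standardized: bool             # whether outcomes are risk-adjusted
    minimum_case_volume: Optional[int]  # minimum cases required to report, if any
    conditions_reported: List[str] = field(default_factory=list)
    process_measures: List[str] = field(default_factory=list)
    outcome_measures: List[str] = field(default_factory=list)

# Example entry (values are illustrative, not study data):
example = StateProgramRecord(
    state="Example State",
    report_frequency="annual",
    report_format="website",
    rating_system="tiers",
    data_source="administrative",
    risk_standardized=True,
    minimum_case_volume=30,
    conditions_reported=["AMI", "heart failure", "pneumonia"],
    process_measures=["aspirin at admission for AMI"],
    outcome_measures=["in-patient mortality"],
)
```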

Descriptive statistics were used to report on the frequency of state public reporting programs. All analyses were performed using JMP version 7 (SAS Institute, Inc., Cary, NC). Because we examined and collected factual information that was publicly available, our study was determined to pose no risk and the protocol was approved by the Yale institutional review board.
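
The descriptive analysis itself was performed in JMP; purely as an illustrative sketch, the hypothetical helper below reproduces the kind of frequency calculations reported in the Results, using figures taken from that section (it is not the authors’ code).

```python
def frequency(numerator: int, denominator: int) -> str:
    """Format a count with its percentage, as used in descriptive reporting."""
    return f"{numerator}/{denominator} ({100 * numerator / denominator:.0f}%)"

# Figures reported in the Results section:
print(frequency(25, 51))  # states with a public reporting program -> "25/51 (49%)"
print(frequency(18, 25))  # programs reporting directly on a website -> "18/25 (72%)"
print(frequency(21, 25))  # programs mandated by state law -> "21/25 (84%)"
```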

RESULTS

State Program Information Accessibility

Identifying information about state public reporting programs was frequently a challenge. Programs were rarely advertised, and the department responsible for the information differed from state to state; for instance, some states placed their program within the Department of Public Health, while others housed it in other government departments or on independent hospital guide websites. Moreover, obtaining hospital performance reports or reaching a website that allowed comparison of hospital performance frequently required at least a half-dozen steps through sequential Internet pages. Finally, nearly all states used graphics and tables, as opposed to text alone, to present hospital performance data.

State Public Reporting Programs

For all 51 jurisdictions (the 50 U.S. states and the District of Columbia), we reviewed websites and contacted a state official with knowledge of state public reporting initiatives. State public reporting programs were present in 25 states (49%; Exhibit 1). In addition, Illinois passed legislation in 2009 to initiate a program, Louisiana had legislation mandating a state public reporting program but no information was yet available, and Wisconsin had an active public reporting program operated by the Wisconsin Hospital Association, but it was neither mandated by nor affiliated with the state government. Although all programs updated their public reports at least annually, they otherwise varied in their format: 7 (28%) issued paper reports whereas 18 (72%) provided information directly on their websites, and many different reporting systems for outcome measures of care were used, including tiers (n=11), numerals (n=10), and stars (n=3). The majority of the state programs (n=21, 84%) were mandated by state law.

Exhibit 1
Characteristics of state public reporting programs that measure and report on clinical aspects of hospital quality and performance for adult patients.

State programs varied in their approach to collecting the data used for public reporting. Twelve states (48%) required data to be submitted to the state, 3 (12%) required data to be submitted to an affiliated agency, 6 (24%) collected the data independently from the hospitals, and 4 (16%) had an affiliated agency collect the data. Regardless of the approach used, three-quarters of state programs (n=19, 76%) used administrative data, with or without the additional use of chart-abstracted data or other clinical registry data collected by hospitals, 3 (12%) used only data abstracted from medical charts, and 3 (12%) used a case-finding approach that was specifically relevant only for the reporting of hospital infection rates. Seventeen of the states (68%) audited the data collected for public reporting.

Reporting of Processes of Care

Fewer than half of the states with a public reporting program focused on clinical aspects of hospital quality provided data on hospital processes of care (n=9, 36%; Exhibit 2). Among these 9, 8 provided information on processes of care for AMI, heart failure, and pneumonia hospitalizations, whereas 3 additionally provided information on surgical care, such as antibiotic prophylaxis. Only California reported on use of the internal mammary artery for coronary artery bypass graft (CABG) surgery, a surgical process of care not related to infection prophylaxis. Additional process of care measures reported included processes of care for stroke hospitalizations and hand hygiene and influenza vaccination rates among hospital staff, as well as composite measures integrating these individual process measures. Finally, all 8 states reporting on processes of care for AMI, HF, and pneumonia used the methodological approach developed by CMS and reported on the same processes of care, but used their independently collected data to broaden the reported population to all adults.

Exhibit 2
Clinical aspects of hospital quality and performance for adult patients that are measured and publicly reported by state public reporting programs.

Reporting of Clinical Outcomes

The vast majority of states with a public reporting program focused on clinical aspects of hospital quality provided data on hospital outcomes (n=24, 96%). Eleven states (44%) provided information on hospital-acquired infection rates, 4 (16%) on readmission rates, and 15 (60%) on hospital mortality rates (Exhibit 2); only 3 states, Florida, Pennsylvania and Virginia, publicly reported all three of these outcomes. In addition, Rhode Island provided information on hospital-acquired pressure ulcer rates. Among the 4 states publicly reporting hospital readmission information, the average number of conditions for which readmission was reported was 7.5 (SD=2.6) and the most commonly reported were readmission after hospitalization for heart failure (n=4), AMI (n=3), CABG surgery (n=3), stroke (n=3), pneumonia (n=3), hip fracture (n=3) and hip replacement surgery (n=3).

Among the 15 states publicly reporting hospital mortality information, the average number of conditions for which mortality was reported was 9.3 (Standard Deviation [SD]=3.9) and the most commonly reported were mortality after hospitalization for CABG surgery (n=14), percutaneous coronary intervention (PCI) (n=13), AMI (n=12), heart failure (n=12), stroke (n=11), pneumonia (n=11), and hip fracture (n=11). Among the 15 states publicly reporting mortality information, 12 provided information on in-patient mortality, 2 on 30-day mortality, and 1 on both in-patient and 30-day mortality, all of which risk-adjusted their estimate of hospital mortality rates for patient demographic and clinical characteristics. Finally, 11 of these 15 states (73%) used the methodological approach developed for the Inpatient Quality Indicators (IQI) program by the Agency for Healthcare Research and Quality (AHRQ) to calculate hospital mortality rates and 9 (60%) required a minimum volume of cases to report hospital mortality rates: 8 required 30 cases, 1 required 25.

Additional Observations

Although our survey of state public reporting programs was focused on clinical aspects of hospital quality for adult patients, specifically process and outcome measures of care, many states also publicly reported information on non-clinical aspects of hospital quality, including states that did not report on clinical aspects. Among the non-clinical aspects of hospital quality, the most commonly reported were hospital length of stay (n=21), volume (n=26), and costs (n=26), the latter including both hospital-wide costs (n=16) and condition-specific costs (n=19).

DISCUSSION

In our systematic review of state-level public reporting programs of hospital quality, specifically focused on clinical measures of care for adult patients, we found that just half of states were engaged in public reporting of hospital quality. These programs varied in content: only about a third provided data on hospital processes of care, whereas nearly all provided data on hospital outcomes, including hospital-acquired infection, readmission, and mortality rates. Moreover, these programs varied in clinical focus, with many reporting on care for cardiac surgery patients, as well as for adults hospitalized for AMI, heart failure, and pneumonia, whereas others variably reported on care for less common causes of hospitalization, including gastrointestinal hemorrhage, carotid endarterectomy, and craniotomy. Finally, it is important to note that identifying information about and from state programs was challenging, making it unclear how useful the information currently is to patients and communities.

As public reporting of quality and performance has become increasingly common, and national public reporting efforts by CMS are only expected to expand, it is important to note that the vast majority of state public reporting programs were found to provide hospital quality information that was complementary to, rather than redundant with, the information currently publicly reported by CMS. Although states providing data on hospital processes of care were all focused on the same clinical conditions currently reported on by CMS, specifically care processes during hospitalizations for AMI, heart failure, and pneumonia, reporting was not limited to Medicare fee-for-service beneficiaries aged 65 years or older and included younger adults as well as older adults insured through private plans and Medicare-affiliated health maintenance organizations. Similarly, whereas states providing data on hospital outcomes of care for the most part reported on AMI, heart failure, and pneumonia mortality, as CMS does currently, states predominantly reported in-patient mortality estimated using the AHRQ IQI methodology, as opposed to 30-day mortality, in addition to reporting on a variety of other causes of hospitalization. However, in this instance, CMS’s adherence to a standardized period for outcome assessment (i.e., 30-day mortality and readmission rates) is preferable to reporting on in-patient mortality, which has been shown to bias performance estimates in favor of hospitals with shorter lengths of stay (20).

Given the number and breadth of the state public reporting programs we identified, policy makers should consider three initiatives that may further improve and facilitate the availability of hospital quality information. First, state reporting efforts could improve accessibility by using a single, easily navigable Internet site that includes information from each state’s public reporting program. Ideally, this site would integrate, or at least include, information from the CMS public reporting program. Because information from state reporting programs was difficult to find, a single site for all state public reporting programs would facilitate information accessibility. However, such a site would face bureaucratic challenges with respect to negotiating responsibilities for site coordination, production and development, and payment. Second, given these likely challenges, state public reporting program administrators should increase efforts to meet, either in person or remotely, to share successes and failures in their programs. There were many similarities across state efforts, particularly in clinical focus, suggesting that states could have much to learn from one another’s experiences. Finally, the state public reporting programs require rigorous, systematic evaluation to ensure that the information being made available is being used by patients, physicians, or hospital administrators to inform healthcare decisions and that it is valid, comprehensive, comprehensible, relevant, reasonable, and functional (5). Furthermore, the impact of state public reporting programs on clinical outcomes should also be assessed. The measurement and reporting of quality information is a public good that promotes transparency and accountability, but programs are not without cost and should be evaluated to ensure they are of sufficient value to the community.

Our study has several limitations. First, we conducted our survey during the summer of 2009, but state public reporting initiatives are changing rapidly. For instance, we are aware of several states that implemented changes to their programs during 2010, including the launch of expanded programs in Maryland and Ohio, the inclusion of hospital-acquired infection rates in Oregon, and the reporting of the new surgical care improvement score, focused on antibiotic and venous thromboembolism prophylaxis before and after surgery, in states such as New Jersey. Second, we focused on state government or affiliated agency programs designed to publicly report hospital quality. We identified one other program not affiliated with a state government, in Wisconsin, that measured and reported similar information on hospital quality. It is possible that there were additional non-government state programs that we did not identify, as well as programs sponsored by insurance plans or other organizations that we did not capture in our review. In addition, despite our systematic approach, given the challenges we encountered in finding information on state public reporting programs, there remains the possibility that there were state public reporting programs we did not find. Third, we focused on programs measuring clinical aspects of hospital quality for adult patients. Several other states also reported non-clinical aspects of hospital quality, such as costs, volume, and length of stay. In addition, our review captures neither programs measuring clinical aspects of hospital quality for pediatric patients nor programs measuring clinical aspects of ambulatory care quality.

In conclusion, our systematic review of state public reporting programs focused on clinical aspects of hospital quality found that identifying information about state programs was challenging. However, while just half of states were engaged in public reporting of hospital quality, state public reporting programs provided hospital quality information that was complementary to, rather than redundant with, the information currently publicly reported by CMS. Nevertheless, there were clear state-to-state differences in investment in public reporting and no standardized approach to data collection, analysis, and presentation. Future research should focus on what public reporting efforts have achieved.

Acknowledgments

Funding/support and role of the sponsor: Dr. Ross is currently supported by the National Institute on Aging (K08 AG032886) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program.

Footnotes

Data access and responsibility: Dr. Ross had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Author contributions: All authors were responsible for the conception and design of this work, participated in the analysis and interpretation of the data and critically revised the manuscript for important intellectual content. Dr. Ross conducted the statistical analysis and drafted the manuscript. Dr. Krumholz provided supervision.

Conflicts of interest: Dr. Krumholz chairs a scientific advisory board for UnitedHealthcare and receives grant support from the Centers for Medicare and Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting.

References

1. Devers KJ, Pham HH, Liu G. What is driving hospitals’ patient-safety efforts? Health Aff (Millwood). 2004;23(2):103–15.
2. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000;283(14):1866–74.
3. Hibbard JH, Slovic P, Peters E, Finucane ML. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res. 2002;37(2):291–313.
4. Marshall M, Davies H. Public release of information on quality of care: how are health services and the public expected to respond? J Health Serv Res Policy. 2001;6(3):158–62.
5. Gormley WTJ, Weimer DL. Organizational Report Cards. Cambridge, MA: President and Fellows of Harvard College; 1999.
6. Weil D, Fung A, Graham M, Fagotto E. The Effectiveness of Regulatory Disclosure Policies. Journal of Policy Analysis and Management. 2006;25(1):155–81.
7. Bevan G. Performance Measurement of ‘Knights’ and ‘Knaves’: Differences in Approaches and Impacts in British Countries after Devolution. Journal of Comparative Policy Analysis: Research and Practice. 2010;12(1–2):33–56.
8. Vladeck BC, Goodwin EJ, Myers LP, Sinisi M. Consumers and hospital use: the HCFA “death list”. Health Aff (Millwood). 1988;7(1):122–5.
9. Massachusetts Data Analysis Center. MASS-DAC. 2006 [cited 2007 August 14]. Available from: http://massdac.org/index.htm.
10. New York State Department of Health. New York State Hospital Profile. 2006 [cited 2007 August 14]. Available from: http://hospitals.nyhealth.gov/
11. Society of Thoracic Surgeons. STS National Database. 2006 [cited 2007 November 25]. Available from: http://www.sts.org/sections/stsnationaldatabase/
12. Demakis JG, McQueen L, Kizer KW, Feussner JR. Quality Enhancement Research Initiative (QUERI): A collaboration between research and clinical practice. Med Care. 2000;38(6 Suppl 1):I17–25.
13. Doebbeling BN, Vaughn TE, Woolson RF, Peloso PM, Ward MM, Letuchy E, et al. Benchmarking Veterans Affairs Medical Centers in the delivery of preventive health services: comparison of methods. Med Care. 2002;40(6):540–54.
14. Feussner JR, Kizer KW, Demakis JG. The Quality Enhancement Research Initiative (QUERI): from evidence to action. Med Care. 2000;38(6 Suppl 1):I1–6.
15. Kizer KW. The “new VA”: a national laboratory for health care quality management. Am J Med Qual. 1999;14(1):3–20.
16. Kizer KW, Demakis JG, Feussner JR. Reinventing VA health care: systematizing quality improvement and quality innovation. Med Care. 2000;38(6 Suppl 1):I7–16.
17. Pennsylvania Health Care Cost Containment Council. PHC4: Pennsylvania Health Care Cost Containment Council. 2007 [cited 2007 November 25]. Available from: http://www.phc4.org/
18. Center for Studying Health System Change. Community Report: Cleveland, Ohio, Fall 2000. Washington, DC: The Center for Studying Health System Change; 2000.
19. California Healthcare Foundation. CalHospitalCompare, Rating Hospital Quality in California. 2007 [cited 2007 August 14]. Available from: http://www.calhospitalcompare.org/
20. Drye EE, Normand SLT, Wang Y, Ross JS, Schreiner GS, Krumholz HM. Comparison of Hospital Risk-Standardized Mortality Rates using Inpatient and 30-Day Models: Implications for hospital profiling. AcademyHealth Annual Research Meeting; 2009; Chicago, IL.