Stroke. Author manuscript; available in PMC Aug 10, 2009.
PMCID: PMC2723834
NIHMSID: NIHMS97496
Public Reporting of Quality Data for Stroke: Is it Measuring Quality?
Adam Kelly, MD, Joel P. Thompson, MPH, Deborah Tuttle, RN, MPS, Curtis Benesch, MD, MPH, and Robert G. Holloway, MD, MPH
Department of Neurology (Drs. Kelly, Benesch, and Holloway, and Mr. Thompson) and Office of Clinical Practice Evaluation (Dr. Holloway and Ms. Tuttle), University of Rochester, Rochester, New York
Address correspondence to: Robert G. Holloway, MD, University of Rochester Medical Center, Department of Neurology, 601 Elmwood Avenue, Box 673, Rochester, New York 14642, Phone (585) 275-1018, Fax (585) 244-2529, Email Robert_Holloway@urmc.rochester.edu
Background and Purpose
Public reporting of quality data is becoming more common and is increasingly used to inform the choices of patients, providers, and payers. We reviewed the scope and content of stroke data being reported to the public and how well these data capture the quality of stroke care.
Methods
We performed a cross-sectional survey of all report cards within the Agency for Healthcare Research and Quality (AHRQ) Report Card Compendium. Stroke quality data were categorized into one of five groups: structure, process, outcomes, utilization, and finances. We also determined the congruence of mortality ratings of New York hospitals provided by two different report cards.
Results
Of 221 available report cards, 19 (9%) reported quality information regarding stroke, and 17 specifically addressed the quality of hospital-based stroke care. The most frequently reported data were utilization measures (n = 15 report cards) and outcome measures (n = 14 report cards). Data regarding finances (n = 4), structure of care (n = 2), and process of care (n = 1) were reported infrequently. Ratings were incongruent in 61 of the 157 hospitals (39%), with the same hospital being rated below average on one report card and average on the other for 44 hospitals.
Conclusions
Publicly reported quality data pertaining to patients with stroke are incomplete, confusing, and inaccurate. Without further improvements and a better understanding of the needs and limitations of the many stakeholders, targeted transparency policies for stroke care may lead to worse quality and large economic losses.
Keywords: Quality of Care, Stroke, Mortality
Public reporting of health care quality data has become a common policy strategy to improve transparency, accountability, and quality. It is hoped that the power of information will increase trust and drive better choices by patients, referring physicians, and purchasers of health care. Public reporting of hospital quality data is incorporated into federal law and many states have a mandatory public reporting requirement.1,2 Transparency policies, however, can decrease quality and lead to large economic losses if the information provided is incomplete, confusing, inaccurate or distorted.3,4
Stroke care is of high priority on the national quality agenda. Outcome measures are readily available (using administrative data), process measures have been developed from research evidence, and the Joint Commission and state health departments have encouraged the adoption of primary stroke centers.5–11 Quality initiatives have also shown that improvement on some of the measures is achievable.12,13 Despite these observations, the scope and accuracy of publicly reported quality data regarding stroke care are unknown.
In November 2006, the Agency for Healthcare Research and Quality (AHRQ) released its Report Card Compendium, which assembled all publicly available websites reporting health quality data at a single internet site.14 We reviewed the individual report cards to assess the content and scope of publicly available information regarding quality of care after stroke. We also investigated the degree to which different sites provided similar estimates of quality regarding the same hospital.
Methods
All report cards in the Agency for Healthcare Research and Quality Report Card Compendium were accessed from a central summary webpage.14 From this central web page, links were followed and each website was reviewed for the presence of stroke-specific quality data. For each report card that contained stroke information, we identified the sponsor(s), the geographic scope of the quality report, the number of hospitals included in the report card, and the timeliness of the data. In addition, we identified the data sources used, how patients with stroke were defined, and the methods of risk adjustment, if used.15
Each quality measure was categorized as a measure of structure, process, outcome, utilization, or finance.16,17 Structural measures include the availability and quality of resources, management systems, and designation as a Primary Stroke Center. Process measures (also called performance or effectiveness measures) evaluate the activities of physicians and other health care providers to determine if evidence-based recommendations are followed. Outcome measures evaluate the end result of health care and include measures of mortality, readmissions, complications, and patient/caregiver satisfaction scores. Utilization measures include data pertaining to the frequency of service use, including length of stay. Finance measures include economic data pertaining to the provision of stroke care.
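To make this classification scheme concrete, the sketch below shows one way the five categories could be encoded and applied when tallying the content of a report card. The category names follow the definitions above; the example measures and the tallying function are illustrative assumptions, not part of the study protocol.

```python
from enum import Enum

class MeasureType(Enum):
    """Five categories used to classify each report-card quality measure."""
    STRUCTURE = "structure"      # resources, management systems, stroke-center designation
    PROCESS = "process"          # adherence to evidence-based recommendations
    OUTCOME = "outcome"          # mortality, readmissions, complications, satisfaction
    UTILIZATION = "utilization"  # frequency of service use, e.g. length of stay
    FINANCE = "finance"          # economic data, e.g. costs or charges

# Hypothetical tagging of a few measures a report card might list
example_measures = {
    "Primary Stroke Center designation": MeasureType.STRUCTURE,
    "Antithrombotic prescribed at discharge": MeasureType.PROCESS,
    "Risk-adjusted in-hospital mortality": MeasureType.OUTCOME,
    "Average length of stay": MeasureType.UTILIZATION,
    "Median charges per stroke admission": MeasureType.FINANCE,
}

def count_by_type(measures):
    """Tally how many measures fall into each category (as summarized in Table 2)."""
    counts = {t: 0 for t in MeasureType}
    for m_type in measures.values():
        counts[m_type] += 1
    return counts

if __name__ == "__main__":
    for m_type, n in count_by_type(example_measures).items():
        print(f"{m_type.value}: {n}")
```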
We identified the rating of all New York state hospitals as provided by two separate report card sites: the Niagara Health Quality Coalition (NHQC), a not-for-profit corporation, and Health Grades®, a for-profit, nation-wide evaluation program.18,19 Both sites report on inpatient stroke mortality, use ICD-9-CM diagnosis codes to define their population at risk, and exclude patients transferred to another acute care hospital. Minor differences in coding rules exist between the two reporting systems (eg, Health Grades® includes ICD-9 code 436 and excludes patients with a palliative care code, V66.7). Neither report card provided ratings of hospitals with low stroke volumes, which was defined as fewer than 30 cases per year. New York State was chosen as the source for this analysis due to the availability of two report cards, which separately evaluated nearly all hospitals statewide.
The NHQC accepts no advertising, consulting or other funding from the providers it grades and uses statewide hospital administrative data (age 18 years and older) for the calendar year 2005. NHQC uses software developed by the Agency for Healthcare Research and Quality to compare each hospital's inpatient mortality to a risk-adjusted national average (AHRQ Inpatient Quality Indicators).6 This measure is risk-adjusted using a linear model estimated from a nationwide data set and includes age, gender, and All-Payer Refined Diagnostic Related Groups (APR-DRGs) as developed by 3M.20 NHQC uses a 95% confidence interval to identify which hospitals are better (3-star rating), worse (1-star rating), or not significantly different from the state-wide average (2-star rating).
Health Grades® uses Medicare inpatient billing data (age 65 years and older) for the period 2003 to 2006 and applies a proprietary "disease-specific and outcome-specific" risk-adjustment methodology based on demographic characteristics, co-morbid diagnoses, and specific procedures. From this model, Health Grades® calculates a predicted mortality for each hospital, to which the actual hospital mortality is compared. The statistical methodology determining if the actual and predicted rates are significantly different is not readily available at the website. Hospitals whose mortality was significantly lower than predicted were assigned a 5-star rating, and hospitals whose mortality was significantly higher than predicted were assigned a 1-star rating. Hospitals whose actual performance was not significantly different from what was predicted received a 3-star rating (2-star and 4-star ratings are not assigned). According to methodology found on the company's website, for each diagnosis or procedure, approximately 70–80% of hospitals should receive 3-star (average) ratings, while 10–15% of hospitals should receive 1-star and 5-star ratings.
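Although the two systems differ in data sources and risk models, both follow the same basic logic: compare a hospital's observed mortality with an expected (risk-adjusted) reference rate, assign a categorical rating based on statistical significance, and leave low-volume hospitals unrated. The sketch below illustrates that logic with a simple normal-approximation confidence interval; the interval construction, threshold, and example numbers are assumptions for illustration only, not either organization's actual proprietary method.

```python
import math

def rate_ci(deaths, cases, z=1.96):
    """Approximate 95% CI for a mortality proportion (normal approximation)."""
    p = deaths / cases
    se = math.sqrt(p * (1 - p) / cases)
    return p, max(0.0, p - z * se), p + z * se

def star_rating(deaths, cases, expected_rate, min_cases=30):
    """Assign a categorical rating by comparing the observed-rate CI to an
    expected (risk-adjusted reference) rate. Hospitals below the volume
    threshold are left unrated, as both report cards did."""
    if cases < min_cases:
        return "not rated (low volume)"
    p, lo, hi = rate_ci(deaths, cases)
    if hi < expected_rate:
        return "better than expected"   # e.g. NHQC 3-star / Health Grades 5-star
    if lo > expected_rate:
        return "worse than expected"    # e.g. 1-star in either system
    return "as expected"                # e.g. NHQC 2-star / Health Grades 3-star

if __name__ == "__main__":
    # Hypothetical hospital: 18 deaths among 150 stroke discharges,
    # compared against a reference (expected) rate of 10%
    print(star_rating(deaths=18, cases=150, expected_rate=0.10))
```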
For each hospital in New York State that was included in both report cards, we assessed agreement between the two reporting systems for the following categorical ratings: 1) not significantly different from the state average or from what was predicted (i.e., 2-star rating from NHQC and 3-star rating from Health Grades®), 2) better than the state average or lower than predicted (i.e., 3-star rating from NHQC and 5-star rating from Health Grades®), and 3) worse than the state average or higher than predicted (i.e., 1-star rating from NHQC and 1-star rating from Health Grades®). We performed similar analyses for hospitals designated by the Joint Commission as Primary Stroke Centers and hospitals designated by New York State as Designated Stroke Centers.8,10 Weighted Kappa was used to assess level of agreement.
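A minimal sketch of that agreement calculation appears below, mapping both sites' star scales onto a common three-level ordinal rating and computing raw agreement plus a weighted Kappa. The example ratings are made up, and the use of linear weights is an assumption; the text specifies only that a weighted Kappa was used.

```python
from sklearn.metrics import cohen_kappa_score

# Ordinal categories: 0 = worse than average/expected, 1 = average/as expected,
# 2 = better than average/expected (mapped from each site's star scale).
# These ratings are hypothetical, not the study data.
nhqc_ratings = [1, 1, 0, 2, 1, 1, 0, 1, 1, 1]
healthgrades_ratings = [1, 0, 0, 1, 1, 0, 0, 1, 2, 1]

# Raw (percent) agreement between the two report cards
agreement = sum(a == b for a, b in zip(nhqc_ratings, healthgrades_ratings)) / len(nhqc_ratings)

# Weighted Kappa with linear weights (an assumption; the paper says only "weighted Kappa")
kappa = cohen_kappa_score(nhqc_ratings, healthgrades_ratings, weights="linear")

print(f"raw agreement: {agreement:.1%}")
print(f"weighted kappa: {kappa:.3f}")
```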
This study was exempt from review by the University of Rochester’s Research Subject Review Board.
Results
A total of 221 online quality reports were included in the AHRQ compendium as of December 1, 2007. From this total, 5 could not be found from the central AHRQ compendium site and 11 were proprietary or for health plan members only. From the remaining 205 report cards, 19 (9%) reported quality information regarding stroke care. Of these 19 report cards, 17 were hospital-based,18,19,21–35 one provided physician self-reported measures regarding stroke prevention care,36 and one indicated the presence of stroke management services for health plans.37 Of the 205 accessible reports, 76 contained data regarding hospital-based measures. Thus, 17 out of 76 hospital quality reports (24%) included stroke-specific quality measures.
The 17 hospital-based report cards are listed in Table 1. The report cards were produced by state health departments,26,29,30,32,34,35 independent research organizations or public-private partnerships,18,25,27 insurance companies,22,23,33 for-profit companies,19,21 hospital associations,24,28 and health systems.31 All sites used hospital administrative data that were 1 to 3 years old, and 13 sites used the Inpatient Quality Indicator risk-adjustment software provided free by the Agency for Healthcare Research and Quality.6 One site created separate reports for non-hemorrhagic and hemorrhagic strokes.34 All other sites grouped subarachnoid hemorrhages (SAH), intracerebral hemorrhages (ICH), and ischemic strokes into one measurement cohort. Two sites included supplemental survey information about the structure and process of care.25,26
Table 1
Summary of Hospital-Based Report Cards Including Stroke Quality Data
Table 2 shows the type of quality measures reported. The most frequently reported measures were utilization (n=15) and outcome (n=14) data. Risk-adjusted inpatient stroke mortality was the most commonly reported outcome measure. Two sites included quality indicators about the structure of stroke care services and only one site (from the UK) addressed the process of stroke care. Four sites included economic information with various methods of presentation (eg, costs, average charges, median charges).
Table 2
Specific Content of Quality Data included in Report Cards
The results of comparing inpatient mortality ratings for the same hospitals are summarized in Table 3. A total of 157 out of 214 New York State hospitals were evaluated by both the NHQC and Health Grades® (the majority of the 57 unrated hospitals were excluded because of low volume). Health Grades® rated 56 of the 157 hospitals below average, compared to only 16 rated below average by NHQC. The two sites provided congruent ratings (average/average, below average/below average, above average/above average) on 96 of these 157 hospitals (agreement 61.1%, weighted Kappa 0.163, slight agreement above that expected by chance). Ratings were incongruent in 61 out of 157 cases (38.9%), including one case where a hospital was rated above average by one site and below average by the other. The most common disagreement was an average rating by NHQC and a below average rating by Health Grades® (44 of the 157 hospitals, 28%). Only one hospital was rated as above average by both sites. Agreement was similarly low for the 10 Primary Stroke Centers (agreement 40%, weighted Kappa 0.118) and the 107 of the 116 Designated Stroke Centers evaluated by both sites (agreement 61.6%, weighted Kappa 0.173).
Table 3
New York State Hospital Inpatient Mortality Ratings Using Two Different Report Cards
Discussion
The science of quality measurement is maturing at a rapid pace. In evaluating health care delivery, good quality is no longer assumed. On the contrary, there is an increasing expectation that it should be measured, compared, and paid for if good results are to be achieved. Our study provides a window into what is currently being publicly reported regarding the quality of stroke care, though the results are limited to the report cards included in the AHRQ Report Card Compendium. Many more organizations, health systems, and hospitals are likely reporting stroke quality data on the internet, and the amount and content of data available in countries outside the U.S. remain uncertain. The results are concerning for several reasons.
First, the data are incomplete. Although well-established process measures exist for stroke, they were reported by only one site, which involved hospitals from the United Kingdom.8,9,11 Few sites reported on the structural elements of quality (stroke unit, accredited facility, designated stroke center), an easy potential addition given the published guidelines for establishing both primary and comprehensive stroke centers.38,39 No sites reported on the quality dimensions of patient-centeredness (eg, patient satisfaction) or health disparities. One reason for this focus on outcome and utilization reporting is the ready availability of administrative data, which do not generally include process or structure information.
Second, the data are poorly defined. The most commonly reported outcome measure is the risk-adjusted in-hospital mortality rate, but it is not clear what this rate is actually measuring. Short-term mortality correlates poorly with process measures and is likely related to unsafe care in fewer than 10% of all deaths.40,41 In fact, the majority of stroke deaths occur after deliberate decisions by patients and their families not to pursue unwanted life-prolonging treatments.42 Short-term mortality, therefore, may be more indicative of "good quality" deaths, particularly since more informed patients are more inclined to want less aggressive care (ie, better quality decision-making leading to higher short-term mortality).43 The tremendous variability in how mortality and other outcome data are reported only compounds the confusion. It is also unclear whether the average user knows how to interpret and use other measures that are frequently reported, such as utilization data (eg, length of stay) or financial information (eg, charges vs. costs).
Third, the data are unreliable. We found that two separate report cards provided disparate hospital ratings in 39% of comparisons. Disagreement was also observed among Primary and Designated Stroke Centers, a subset of hospitals selected for the capacity and quality of stroke care they provide. A recent study showed inconsistent ratings of hospitals among several sites for surgical procedures, but did not quantify the degree of disagreement.44 It is not clear why the report card ratings disagree so frequently. Potential reasons include different sample eligibility criteria, inconsistent methods of risk adjustment, and variable thresholds for defining statistically significant deviations from average or expected results. The potential for systematic bias should also be explored, particularly given the skew toward below average ratings found in one of the report cards and their deviation from a pre-defined distribution of outlier status.
Unreliable and invalid publicly reported stroke quality data may have unintended consequences.3,4,43,45,46 Patients may choose the wrong providers, payers may reward or punish providers inappropriately, providers may “game” to improve rankings, hospital leaders may divert resources from worthy improvement efforts, and intermediary companies may profit by stoking fears of losing reputation and market share among affected hospitals. In the end, the public loses trust.
We provide three recommendations. First, efforts are needed to develop a standardized "dossier" of stroke quality measures that meaningfully align with the six worthy aims of health care: effective, safe, patient-centered, equitable, timely, and efficient.17 This objective will include efforts to harmonize existing stroke process measures (which are in progress) and to develop consensus metrics for stroke outcomes that measure "good quality" deaths as well as unexpected "never ever" deaths, for which organizations should be held accountable.8,9,15,47,48 In addition, we need to develop and standardize new measures that focus on patient-centered, efficient, and equitable care. Collaborative public-private partnerships with the several organizations currently committed to providing stroke quality data for internal quality improvement could facilitate such efforts.9,49
Second, there should be more organized skepticism regarding the AHRQ stroke inpatient quality indicator as a primary measure of quality of care.6 The increasing appetite for health care quality data and the easy access to administrative data will likely guarantee the continued use of mortality as a marker of quality. In the short run, this will placate stakeholders. Fundamental questions remain, however, about the appropriateness of combining all types of stroke (SAH, ICH, ischemic) into this one indicator and about the impact that such measures may have on the delivery of high quality palliative care. The inpatient time horizon is confounded by hospital practice patterns and the capacity of non-hospital services, and it ignores the longitudinal accountability needed to improve the quality of a chronic condition. Finally, despite its "public access," the risk-adjustment methodology remains proprietary.20
Third, further national efforts are needed to develop standardized reporting requirements with explicit rules to reduce bias and to ensure a minimum standard for measuring and reporting conduct.50 Much can be learned from the transparency systems that help govern corporate financing, restaurant hygiene, and mortgage lending practices.51 As the quality field continues to mature, there will be increasing efforts to cherry-pick measures for marketing purposes. All measures, good or bad, should be reported; there is no substitute for playing by the rules and working with integrity. Discussion is also needed regarding mandatory vs. voluntary reporting, internal reporting with feedback vs. public reporting, and how to finance a sustainable and effective transparency system that is responsive, interactive, and customized to stakeholder preferences and public concerns.
Summary
The modest amount of stroke quality data currently available does not accurately portray the quality of care being provided, and inconsistencies in these data further undermine their effective use. The future of stroke quality measurement and reporting is uncertain, but broad improvements in the science and infrastructure are needed to realize its potential to mobilize choices and market forces to improve stroke care. Providers must take a leading role in these efforts and focus on the needs of our patients and the public at large.
Acknowledgments
Robert Holloway consults for Milliman Guidelines reviewing practice guidelines and Maximus, Inc.
References
1. Centers for Medicare and Medicaid Services. Reporting Hospital Quality Data for Annual Payment Update. [Accessed November 28, 2007]. http://www.cms.hhs.gov/HospitalQualityInits/20_HospitalRHQDAPU.asp.
2. Steinbrook R. Public report cards: cardiac surgery and beyond. N Engl J Med. 2006;355:1847–1849. [PubMed]
3. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293:1239–1244. [PubMed]
4. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet. 2004;363:1147–1154. [PubMed]
5. Iezzoni LI, Shwartz M, Ash AS, Mackiernan YD. Predicting in-hospital mortality for stroke patients: results differ across severity-measurement methods. Med Decis Making. 1996;16:348–356. [PubMed]
6. Agency for Healthcare Research and Quality. AHRQ Quality Indicators. [Accessed November 28, 2007]. http://www.qualityindicators.ahrq.gov/
7. Holloway RG, Vickrey BG, Benesch CG, Hinchey JA, Bieber J. Development of Performance Measures for Acute Ischemic Stroke. Stroke. 2001;32:2058–74. [PubMed]
8. The Joint Commission. Primary Stroke Center Certification. [Accessed October 10, 2007]. http://www.jointcommission.org/CertificationPrograms/PrimaryStrokeCenters.
9. American Heart Association/American Stroke Association. Get With the Guidelines –Stroke. [Accessed October 10, 2007]. http://www.strokeassociation.org/presenter.jhtml?identifier=3002728.
10. New York State Department of Health. NYS DOH Designated Stroke Centers. [Accessed November 28, 2007]. http://www.health.state.ny.us/nysdoh/ems/stroke/stroke.htm.
11. Centers for Medicare and Medicaid Services. Physician Quality Reporting Initiative (PQRI) [Accessed November 28, 2007]. http://www.cms.hhs.gov/pqri/
12. Jacobs BS, Baker PL, Roychoudhury C, Mehta RH, Levine SR. Improved quality of stroke care for hospitalized Medicare beneficiaries in Michigan. Stroke. 2005 Jun;36:1227–31. [PubMed]
13. LaBresh KA. Quality of acute stroke care improvement framework for the Paul Coverdell National Acute Stroke Registry: facilitating policy and system change at the hospital level. Am J Prev Med. 2006;31(6 Suppl 2):S246–50. [PubMed]
14. Agency for Healthcare Research and Quality. Report Card Compendium. [Accessed December 11, 2007]. http://www.talkingquality.gov/compendium.
15. Krumholz HM. for the American Heart Association Quality of Care and Outcomes Research Interdisciplinary Writing Group. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group. Circulation. 2006;113:456–62. [PubMed]
16. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260:1743–8. [PubMed]
17. Institute of Medicine, Committee on Health Care in America. Crossing the quality chasm: a new health system for the 21st Century. Washington (DC): National Academy Press; 2001.
18. Niagara Health Quality Coalition/Alliance for Quality Healthcare. 2007 New York State Hospital Report Card. [Accessed December 11, 2007]. http://www.myhealthfinder.com.
19. Health Grades® Free Hospital Rankings. [Accessed December 11, 2007]. http://www.healthgrades.com.
20. 3M. 3M APR-DRGs. [Accessed November 28, 2007]. http://www.3mhis2007.com/apr/
21. About.com: Health. UCompare Health Care. [Accessed December 11, 2007]. http://www.ucomparehealthcare.com/hospital_start.html.
22. Blue Cross/Blue Shield of Minnesota. Healthcare Facts. [Accessed December 11, 2007]. http://www.healthcarefacts.org.
23. Blue Cross/Blue Shield of Tennessee/Tennessee Hospital Association. Hospital Quality Comparison. [Accessed December 11, 2007]. http://www.bcbst.com/tools.
24. Colorado Health and Hospital Association Performance and Quality Group. Colorado Hospital Quality. [Accessed December 11, 2007]. http://www.hospitalquality.org.
25. Dr. Foster, Ltd. Hospital Guide. [Accessed December 11, 2007]. http://www.drfoster.com/guides.
26. Florida Agency for Healthcare Administration. Florida Compare Care. [Accessed December 11, 2007]. http://www.floridacomparecare.gov.
27. Fraser Institute. Hospital Report Card: Ontario 2006. [Accessed December 11, 2007]. http://www.hospitalreportcards.ca/
28. Kentucky Hospital Association. Kentucky Hospital Association Quality Data. [Accessed December 11, 2007]. http://www.kyha.com/QualityData/IQISite/default.htm.
29. Maryland Healthcare Commission. Hospital Guide. [Accessed December 11, 2007]. http://mhcc.maryland.gov.
30. Massachusetts Executive Office of Health and Human Services. Healthcare Quality and Cost Information. [Accessed December 11, 2007]. http://www.mass.gov/healthcareqc.
31. Norton Healthcare. Quality Report. [Accessed December 11, 2007]. http://www.nortonhealthcare.com/about/qualityreport.
32. Office for Oregon Health Policy and Research. Oregon Hospital Quality Indicators. [Accessed December 11, 2007]. http://egov.oregon.gov/DAS/OHPPR/HQ.
33. PacifiCare. Hospital Performance. [Accessed December 11, 2007]. http://pacificare.com.
34. Pennsylvania Healthcare Cost Containment Council. Interactive Hospital Performance Report. [Accessed December 11, 2007]. http://www.phc4.org/hpr.
35. Texas Healthcare Information Council/Texas Department of State Health Services. Indicators of Inpatient Care in Texas Hospitals. [Accessed December 11, 2007]. http://www.dshs.state.tx.us/thcic/publications/hospitals/HospitalReports.shtm.
36. National Committee for Quality Assurance. Recognized Physician Directory. [Accessed December 11, 2007]. http://recognition.ncqa.org/
37. Missouri Department of Health and Senior Services. Managed Care Performance Monitoring. [Accessed December 11, 2007]. http://www.dhss.mo.gov/ManagedCare/Publications.html.
38. Alberts MJ. for the Brain Attack Coalition. Recommendations for the establishment of primary stroke centers. Brain Attack Coalition. JAMA. 2000;283:3102–9. [PubMed]
39. Alberts MJ. for the Brain Attack Coalition. Recommendations for comprehensive stroke centers: a consensus statement from the Brain Attack Coalition. Stroke. 2005;36:1597–616. [PubMed]
40. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr, Pollack CV Jr, Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA. 2006;295:1912–1920. [PubMed]
41. Bradley EH, Herrin J, Elbel B, McNamara RL, Magid DJ, Nallamothu BK, Wang Y, Normand SL, Spertus JA, Krumholz HM. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296:72–78. [PubMed]
42. Jaren O, Selwa L. Causes of mortality on a university hospital neurology service. Neurologist. 2006;12:245–8. [PubMed]
43. Holloway RG, Quill TE. Mortality as a measure of quality; implications for palliative and end-of-life care. JAMA. 2007;298:802–4. [PubMed]
44. Leonardi MJ, McGory ML, Ko CY. Publicly available hospital comparison web sites. Arch Surg. 2007;142:863–96. [PubMed]
45. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data. What do we expect to gain? A review of the evidence. JAMA. 2000;283:1866–74. [PubMed]
46. Marshall MN, Romano PS, Davies HTO. How do we maximize the impact of the public reporting of quality of care? Int J Qual Health Care. 2004;16:i57–i63. [PubMed]
47. National Framework and Preferred Practices for Palliative and Hospice Care Quality. A Consensus Report. © 2006 National Quality Forum; Washington DC: [Accessed May 22, 2007]. www.qualityforum.org/pdf/reports/palliative/txPHreportPUBLIC01-29-07.pdf.
48. National Quality Forum. Serious Reportable Events in Healthcare 2006. [Accessed May 22, 2007]. http://www.qualityforum.org/projects/completed/sre/
49. University HealthSystem Consortium. [Accessed December 11, 2007]. http://www.uhc.edu/
50. Pronovost PJ, Miller M, Wachter RM. The GAAP in quality measurement and reporting. JAMA. 2007;298:1800–1802. [PubMed]
51. Fung A, Graham M, Weil D. Full Disclosure: The perils and promise of transparency. Cambridge University Press; New York: 2007.