The science of quality measurement is maturing rapidly. In evaluating health care delivery, good quality is no longer assumed; on the contrary, there is an increasing expectation that it be measured, compared, and paid for if good results are to be achieved. Our study provides a window into what is currently being publicly reported about the quality of stroke care, though the results are limited to the report cards included in the AHRQ Report Card Compendium. Many more organizations, health systems, and hospitals are likely reporting stroke quality data on the internet, and the amount and content of data available in countries outside the U.S. remain uncertain. The results are concerning for several reasons.
First, the data are incomplete. Despite well-established process measures for stroke, only one site reported them, and that site covered hospitals in the United Kingdom.8,9,11
Few sites reported on the structural elements of quality (stroke unit, accredited facility, designated stroke center), a straightforward potential addition given the published guidelines for establishing both primary and comprehensive stroke centers.38,39
No sites reported on the quality dimensions of patient-centeredness (eg, patient satisfaction) or health disparities. One reason for this focus on outcomes and utilization reporting is the availability of administrative data, which generally do not include process or structural data.
Second, the data are poorly defined. The most commonly reported outcome measure is the risk-adjusted in-hospital mortality rate, but it is not clear what this rate actually measures. Short-term mortality correlates poorly with process measures and is likely related to unsafe care in fewer than 10% of all deaths.40,41
In fact, the majority of stroke deaths occur after deliberate decisions by patients and their families not to pursue unwanted life-prolonging treatments.42
Short-term mortality, therefore, may be more indicative of “good quality” deaths, particularly since more informed patients are more inclined to want less aggressive care (ie, better quality decision-making leading to higher short-term mortality).43
The tremendous variability in how mortality and other outcome data are reported only compounds the confusion. It is also unclear if the average user knows how to interpret and use other measures that are frequently reported, such as utilization data (eg, length of stay) or financial information (eg, charges vs. costs).
Third, the data are unreliable. We found that two separate report cards provided disparate hospital ratings in 39% of comparisons. Disagreement was also observed among Primary and Designated Stroke Centers, a subset of hospitals selected for the capacity and quality of stroke care they provide. A recent study showed inconsistent ratings of hospitals among several sites for surgical procedures but did not quantify the degree of disagreement.44
It is not clear why the report card ratings disagree so frequently. Potential reasons include different sample eligibility criteria, inconsistent methods of risk adjustment, and variable thresholds for defining statistically significant deviations from average or expected results. The potential for systematic bias should also be explored, particularly given the skew toward below-average ratings found in one of the report cards and its deviation from a pre-defined distribution of outlier status.
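To illustrate how methodological choices alone can drive such disagreement, the sketch below (all numbers and thresholds invented for illustration; this is not the methodology of any actual report card) classifies the same hospital's mortality data as an outlier or not, depending solely on the significance threshold each hypothetical report card applies:

```python
# Hypothetical illustration: identical hospital data, two different outlier
# thresholds, two different ratings. All figures are invented.
from math import sqrt

def classify(observed_deaths, expected_deaths, n_cases, z_threshold):
    """Rate a hospital by how far its observed mortality rate deviates
    from its expected rate, using a simple z-score on proportions."""
    p_expected = expected_deaths / n_cases
    se = sqrt(p_expected * (1 - p_expected) / n_cases)  # binomial standard error
    z = (observed_deaths / n_cases - p_expected) / se
    if z > z_threshold:
        return "below average"   # significantly worse than expected
    if z < -z_threshold:
        return "above average"   # significantly better than expected
    return "average"

# Same hospital: 30 observed vs. 22 expected deaths among 400 cases.
# Report card A flags outliers at z > 1.645; report card B at z > 2.576.
rating_card_a = classify(30, 22, 400, z_threshold=1.645)  # -> "below average"
rating_card_b = classify(30, 22, 400, z_threshold=2.576)  # -> "average"
```

With these invented figures the hospital's z-score falls between the two thresholds, so one card labels it a poor performer while the other calls it average, even though the underlying data are identical.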
Unreliable and invalid publicly reported stroke quality data may have unintended consequences.3,4,43,45,46
Patients may choose the wrong providers, payers may reward or punish providers inappropriately, providers may “game” to improve rankings, hospital leaders may divert resources from worthy improvement efforts, and intermediary companies may profit by stoking fears of losing reputation and market share among affected hospitals. In the end, the public loses trust.
We provide three recommendations. First, efforts are needed to develop a standardized “dossier” of stroke quality measures that meaningfully align with the six worthy aims of health care: effective, safe, patient-centered, equitable, timely, and efficient.17 This objective will include efforts to harmonize existing stroke process measures (which are in progress) and to develop consensus metrics for stroke outcomes that capture “good quality” deaths as well as unexpected “never ever” deaths, for which organizations should be held accountable.8,9,15,47,48
In addition, we need to develop and standardize new measures that focus on patient-centered, efficient, and equitable care. Collaborative public-private partnerships with several organizations that are currently committed to providing stroke quality data for internal quality improvement efforts could facilitate such efforts.9,49
Second, there should be more organized skepticism focused on the AHRQ stroke inpatient quality indicator as a primary measure of quality of care.6
The increasing appetite for health care quality data and the easy access to administrative data will likely guarantee the continued use of mortality as a marker of quality. In the short run, this will placate stakeholders. Fundamental questions remain, however, about the appropriateness of combining all types of stroke (SAH, ICH, ischemic) into this one indicator and about the impact such measures may have on the delivery of high-quality palliative care. The inpatient time horizon is confounded by hospital practice patterns and the capacity of non-hospital services, and it ignores the longitudinal accountability needed to improve the quality of a chronic condition. Finally, despite its “public access,” the risk-adjustment methodology remains proprietary.20
Third, further national efforts are needed to develop standardized reporting requirements with explicit rules to reduce bias and to ensure a minimum standard for measuring and reporting conduct.50
Much can be learned from the transparency systems that help govern corporate financing, restaurant hygiene, and mortgage lending practices.51
As the quality field continues to mature, there will be increasing efforts to cherry-pick measures for marketing purposes. All measures should be reported, good or bad; there is no substitute for playing by the rules and working with integrity. Discussion is also needed regarding mandatory vs. voluntary reporting, internal reporting with feedback vs. public reporting, and how to finance a sustainable and effective transparency system that is responsive, interactive, and customized to stakeholder preferences and public concerns.