Health Aff (Millwood). Author manuscript; available in PMC Oct 27, 2009.
PMCID: PMC2768577
NIHMSID: NIHMS133194
Hospital Quality And Intensity Of Spending: Is There An Association?
Laura Yasaitis, Elliott S. Fisher, Jonathan S. Skinner, and Amitabh Chandra (corresponding author)
Laura Yasaitis is a student in the joint MD/PhD program at Dartmouth Medical School, in Hanover, New Hampshire. Elliott Fisher is director of the Center for Healthcare Research and Reform, Dartmouth Institute for Health Policy and Clinical Practice. Jon Skinner is a professor at Dartmouth College and at the Dartmouth Institute. Amitabh Chandra (amitabh_chandra/at/Harvard.edu) is a professor in the Kennedy School of Government, Harvard University, in Cambridge, Massachusetts, and a fellow at the Dartmouth Institute.
Abstract
Numerous studies in the United States have examined the association between quality and spending at the regional level. In this paper we evaluate this relationship at the level of individual hospitals, which are a more natural unit of analysis for public reporting and for efforts to improve accountability. For all of the quality indicators studied, the association with spending is either nil or negative. The absence of positive correlations suggests that some institutions achieve exemplary performance on quality measures in settings that feature lower intensity of care. This finding highlights the need for reporting information on both quality and spending.
Wide variations in both spending and quality of care have been documented throughout the U.S. health care system. Many studies have found no positive correlation between spending and quality; others have noted a negative relationship at the level of states and Hospital Referral Regions (HRRs).1 In these studies, higher spending is associated with care of higher intensity, involving greater use of the hospital and intensive care unit (ICU) and more specialists, tests, and minor procedures, but lower quality as measured by performance on process-of-care measures, which record the percentage of patients who receive appropriate care for specific conditions.2 Some have attributed this negative association to the specialist-oriented patterns of practice present in different regions.3
Previous analyses at the regional level provide an informative picture of variation in spending and quality across the country; however, efforts to improve the quality of inpatient care or reduce unnecessary spending are unlikely to be designed at the level of regions because hospitals and physicians in them are not formally linked. We examined correlations between spending intensity and quality at the level of hospitals, which, unlike states or HRRs, provide the organizational context within which Medicare patients receive most of their care.4 We used measures of process-of-care quality from the Centers for Medicare and Medicaid Services (CMS) Hospital Compare database. We compared performance on these measures to hospital-level end-of-life spending based on spending for chronically ill patients from the Medicare over-sixty-five population. We examined whether the observed relationship between quality and spending at the hospital level is mediated by the influence of geography: is the lack of association between spending and quality explained by geography, or does it also occur within narrowly defined geographic regions?
Data: quality
The Hospital Quality Alliance (HQA), a public-private collaboration between the CMS and several hospital organizations, began reporting individual hospitals’ performance on process-of-care measures through a Web site, Hospital Compare, on 1 April 2005.5 The measures focus on three major conditions for which treatments are supported by solid evidence: acute myocardial infarction (AMI), pneumonia, and congestive heart failure (CHF).6 The measures are the percentage of appropriate patients receiving a specific, often low-cost, evidence-based therapy. We analyzed data from 2004–2007.
We retained only those measures for which a majority of hospitals reported at least twenty-five observations in 2004—the first year for which data were available. This cut-off has been used in previous work to ensure sufficient statistical precision.7 Eleven process measures yielded at least twenty-five observations for a majority of hospitals: aspirin at arrival and at discharge and beta-blocker prescription at arrival and at discharge (for AMI); assessment of left ventricular function, the provision of discharge instructions, and angiotensin-converting enzyme (ACE) inhibitor or angiotensin-receptor blocker (ARB) prescription for patients with left ventricular systolic dysfunction (LVSD) (for CHF); and blood culture performed before receiving the first antibiotic in the hospital, first dose of antibiotic within four hours of admission, initial antibiotic selected appropriately, and assessment of arterial oxygenation within twenty-four hours of arrival (for pneumonia).
Hospital quality measures
Following other work that emphasizes relative differences in hospital quality, we focused on percentile differences in quality scores.8 Because simple hospital ratings may fluctuate from year to year, we pooled data across all four available years (2004–2007).9 Before doing so, we assessed the correlation of performance on each measure across years. Although the correlations between consecutive years were higher than those between more distant years, these correlations were all positive and exceeded 0.80 (p < 0.0001).
We first calculated the percentage of patients who received appropriate care for each condition to yield a summary score for that condition. If a hospital did not report adequate data for a given measure, that measure was not included in the hospital’s summary score. Next we created a composite score that used information for all three conditions by taking the mean of summary scores across conditions for each hospital. (We also used factor analysis to combine the three composite measures for each hospital, but the correlation between the factor index and the simple average was 0.98, so we used the average.) We then assigned percentile rankings to hospitals based on their performance on these measures. We analyzed percentile rankings for overall performance, as well as for each clinical condition.
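The two-step construction described above (condition-level summary scores, then a composite and a percentile ranking) can be sketched in Python. The hospital names and patient counts below are invented for illustration; this is not the paper's code or data.

```python
import numpy as np

# Hypothetical per-hospital quality data: for each condition, the number of
# patients receiving appropriate care and the number eligible, pooled over
# 2004-2007. All names and numbers are illustrative only.
hospitals = {
    "A": {"ami": (180, 200), "chf": (90, 100), "pneumonia": (140, 150)},
    "B": {"ami": (150, 200), "chf": (70, 100), "pneumonia": (100, 150)},
    "C": {"ami": (190, 200), "chf": (None, None), "pneumonia": (120, 150)},  # CHF not reported
}

def summary_scores(conditions):
    """Percentage of eligible patients receiving appropriate care, per condition."""
    return {c: 100.0 * num / den
            for c, (num, den) in conditions.items() if num is not None}

def composite(conditions):
    """Mean of the condition summary scores, skipping unreported measures."""
    s = summary_scores(conditions)
    return sum(s.values()) / len(s)

scores = {h: composite(c) for h, c in hospitals.items()}

# Percentile ranking: rank each hospital's composite within the sample.
ranked = sorted(scores, key=scores.get)
percentiles = {h: 100.0 * i / (len(ranked) - 1) for i, h in enumerate(ranked)}
```

Note that hospital C's missing CHF measure is simply dropped from its composite, mirroring the rule that unreported measures do not enter a hospital's summary score.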
Medicare spending data
Medicare spending on in-hospital services reflects the severity of disease, reimbursements due to graduate medical education (GME) and Medicare disproportionate-share hospital (DSH) payments, geographical price adjustments, and differences in practice patterns. To isolate the latter factor, we obtained data on hospital inpatient spending based on the intensity of inpatient care at the end of life (EOL) using spending in the two years preceding death. We used data from patients with chronic diseases, who would likely have experienced considerable contact with their hospitals in the two years before death. By focusing on variation in the treatment of patients with identical life expectancy, the EOL spending measure better reflects the portion of spending that is attributable to differences in practice patterns, not differences in severity of illness.10 EOL spending has been shown to be highly correlated with both total Medicare spending and spending for specific disease cohorts.11 In contrast to disease-specific cohorts, the sample size for the EOL measures is larger and thus allows more accurate measures of overall Medicare costs by hospital.
To construct our measure of EOL spending, we used Medicare Parts A and B spending and utilization data for hospital and physician services for chronically ill beneficiaries who died during 1999–2003.12 At death, each patient was assigned to the hospital in which he or she had received the majority of care in the previous two years. All data on spending and use from that patient’s claims were then assigned to that hospital as well. The vast majority of patients’ care occurred at the assigned hospital; the average percentage of inpatient days spent at the assigned hospital was 89.7 percent. Spending data were adjusted for differences in age, sex, race, and the relative frequency of chronic illness.
To remove the influence of varying reimbursements attributable to GME and Medicare DSH payments and geographical price adjustments, we constructed a measure of spending that reflects only the use of the services that explain a large amount of hospital spending: number of hospital days, total physician visits, ICU days, and ratio of specialist to primary care physician visits at the end of life. We regressed the hospitals’ total spending on the quantities of services provided, and we analyzed only the proportion of spending explained by the provision of these services. Our measure of EOL spending, which is adjusted for GME, DSH, and geographic price adjustments, has a correlation of 0.75 with the unadjusted measure (p < 0.001). Hospitals were then categorized into five quintiles of this “price-adjusted” EOL spending. Of the hospitals whose data are reported in the Dartmouth Atlas, 2,712 had adequate data for the calculation of this measure of spending and performance on quality indicators.
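The price-adjustment step described above can be illustrated with a small simulation. The coefficients, sample size, and noise level below are invented; the sketch only shows the mechanics of regressing spending on service quantities and retaining the portion of spending explained by utilization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical hospitals

# Illustrative utilization measures per hospital (not the paper's data):
hospital_days = rng.normal(60, 10, n)
physician_visits = rng.normal(80, 15, n)
icu_days = rng.normal(12, 4, n)
specialist_ratio = rng.normal(1.2, 0.3, n)

# Observed EOL spending mixes utilization-driven costs with payment
# adjustments (GME, DSH, local prices), simulated here as noise.
spending = (200 * hospital_days + 50 * physician_visits
            + 800 * icu_days + 2000 * specialist_ratio
            + rng.normal(0, 2000, n))

# Regress total spending on the service quantities; the fitted values are
# the portion of spending explained by utilization alone.
X = np.column_stack([np.ones(n), hospital_days, physician_visits,
                     icu_days, specialist_ratio])
beta, *_ = np.linalg.lstsq(X, spending, rcond=None)
price_adjusted = X @ beta

# Categorize hospitals into quintiles of "price-adjusted" EOL spending.
quintile = np.digitize(price_adjusted,
                       np.quantile(price_adjusted, [0.2, 0.4, 0.6, 0.8]))
```

The fitted values strip out, by construction, any spending variation uncorrelated with the four utilization measures, which is the intent of the adjustment.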
Analysis
We compared percentile scores of quality, using both condition-specific scores and composite scores, across the five quintiles of EOL spending, and we performed tests for trend across these quintiles. In a secondary analysis, we separated academic medical centers (AMCs) from other hospitals to examine whether the relationship between spending and quality was different for them. We also examined the relationship between spending and quality at the hospital level, after adjusting for geographic region (HRR). This fixed-effects analysis accounts for all factors that are fixed within HRRs. To illustrate the potential of this approach, we plotted the overall quality performance and spending for hospitals from two large metropolitan areas (New York and Los Angeles).
A regression model was used to estimate the association between a hospital’s percentile score and a $10,000 increase in the EOL spending measure (roughly the difference between the middle and the highest quintile of spending). Each hospital was weighted by the number of observations on which its score was based.
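A weighted regression of this form might be sketched as follows, with simulated spending, quality percentiles, and observation counts; all values and the built-in negative slope are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical hospitals

# Simulated inputs (illustrative only): EOL spending in dollars, quality
# percentile, and the number of observations behind each hospital's score.
spending = rng.uniform(15000, 35000, n)
quality_pct = np.clip(60 - 0.5 * (spending - 25000) / 1000
                      + rng.normal(0, 15, n), 0, 100)
weights = rng.integers(25, 500, n)

# Weighted least squares: multiply rows by sqrt(weight) so each hospital
# counts in proportion to its number of observations.
X = np.column_stack([np.ones(n), spending])
w = np.sqrt(weights)
beta, *_ = np.linalg.lstsq(X * w[:, None], quality_pct * w, rcond=None)

# Slope per dollar, scaled to a $10,000 increase in EOL spending.
change_per_10k = beta[1] * 10000
```

Because the simulation builds in a decline of 5 percentile points per $10,000, the recovered slope should be negative and of roughly that magnitude.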
We categorized the 2,712 U.S. hospitals with complete data on utilization, spending, and quality performance by quintile of EOL spending; mean EOL spending ranged from $16,059 in the lowest quintile to $34,742 in the highest quintile (Exhibit 1). Among AMCs, almost half of those reporting adequate data were in the top quintile of spending. The correlations between the condition-specific scores are somewhat weak: 0.59 for AMI and CHF, 0.32 for AMI and pneumonia, and 0.40 for CHF and pneumonia, a result found previously.13 There were significant negative relationships for performance on AMI, pneumonia, and overall quality scores (p < 0.001 for all; Exhibit 2). There was no association between performance on quality measures and spending for heart failure.
EXHIBIT 1
Number Of Hospitals Reporting Sufficient Data For Each Clinical Condition, By Quintile Of End-Of-Life (EOL) Spending And Medical Condition, 2004–2007
EXHIBIT 2
Percentile Of Quality, By Quintile Of Spending, All Hospitals, 2004–2007
For AMCs (Exhibit 3), the only significantly negative association was between performance on AMI quality measures and spending (p = 0.009). This association accounts for the negative trend seen in overall performance (p = 0.066); there is no significant relationship between performance on heart failure or pneumonia measures and spending.
EXHIBIT 3
Percentile Of Quality, By Quintile Of Spending, Academic Medical Centers, 2004–2007
To assess the impact of geographical differences in care intensity, we next repeated the national analysis after accounting for HRRs (Exhibit 4). Some of the association between quality and spending is mediated by geographical differences in care intensity. In this analysis, the relationship between performance on pneumonia measures and spending remains strong (p < 0.001) and largely accounts for the relationship between spending and overall quality performance (p = 0.015).
EXHIBIT 4
Performance Across Quintiles Of Spending, Adjusting For Hospital Referral Regions (HRRs), 2004–2007
For individual hospitals from two major metropolitan areas (Exhibit 5), there was no significant relationship between spending and quality within either region; both regions show wide variability on spending and quality.
EXHIBIT 5
Percentile Ranking And Spending For Individual Hospitals In New York (Manhattan And The Bronx) And Los Angeles, 2004–2007
To quantify the magnitude of the associations noted in the above exhibits, we performed an analysis of the change in percentile ranking associated with a $10,000 increase in EOL spending. A change of this magnitude in spending would move a hospital from the middle to the highest quintile of spending. For the entire sample (AMC and non-AMC hospitals), the associations were −5.3 percentile points for overall quality (p < 0.001), −5.2 percentile points for AMI (p < 0.001), −9.2 percentile points for pneumonia (p < 0.001), and −0.3 percentile points (p = 0.687) for CHF.
Examining hospital performance across quintiles of spending intensity reveals the lack of positive association between quality and spending. Indeed, in our analysis the relationship was often negative—even within regions. This implies that the prevalence of hospitals with inefficient, fragmented care is not isolated to a few regions of the country.
Study limitations
Our study is not without limitations. First, the quality measures we used may penalize hospitals that treat sicker patients. Although this remains a possibility, we chose the measures in part because they are not sensitive to the ability to perform detailed risk adjustment. Moreover, recent work has found that patients with more comorbidities are more likely to receive higher-quality medical care.14
Second, we examined process-of-care measures, not outcome measures. Policymakers have focused on process measures because they require no risk adjustment, but such measures have been criticized for their weak correlation with health outcomes, including mortality after AMI and CHF.15 More recent work has demonstrated an inverse (but weak) relationship between performance on these measures and risk-adjusted mortality rates for each of the three conditions under investigation.16
A third concern is the EOL measure: a person treated in a high-intensity hospital may survive and thus not end up in the EOL sample. Presumably, this person experienced above-average spending; thus, excluding him or her from the sample would attenuate any measured differences in spending. Peter Bach and colleagues have also noted that regions with more “low cost” diseases will appear to experience lower spending in EOL cohorts.17 We adjusted our spending data for the relative frequency of diseases in each hospital’s patient population, to alleviate some of this concern.
Finally, our study used Medicare fee-for-service data to calculate measures of EOL spending. Other research has found that patients in this population are treated similarly to patients with other sources of coverage, such as Medicare health maintenance organizations (HMOs) and private insurers.18
Implications
Although previous studies have examined the efficiency of the U.S. health care system at the regional level, our work is one of the first nationwide analyses of quality and spending at the individual hospital level. Our analysis suggests that hospitals achieve exemplary performance across wide ranges of care intensity, while higher- or lower-spending hospitals do not score uniformly well or poorly on quality indicators. If the purpose of quality reporting is to inform consumers, insurers, and providers about quality and to encourage selective referrals or competitive forces to improve quality, then the additional reporting of spending should strengthen these efforts.
Better reporting on these aspects of hospital performance may also allow us to understand how care is translated into performance on quality indicators. Some have suggested that intensive medical care (which is correlated with spending) can crowd out the provision of simpler, proven medical interventions.19 For example, in areas with more intensive management of heart attacks (that is, treatments such as angioplasty), AMI patients were found to be less likely to be treated with simple treatments such as beta-blockers and aspirin.20 With improved public reporting on quality and spending, it may be possible to understand how some providers can deliver outstanding care without raising costs.
Acknowledgments
This research was funded by the National Institute on Aging, Grant no. NIA P01 AG19783-02, and by the Taubman Center at Harvard University.
Notes
1. Fisher ES, et al. The Implications of Regional Variations in Medicare Spending, Part 1: The Content, Quality, and Accessibility of Care. Annals of Internal Medicine. 2003;138(4):273–287; Fisher ES, et al. The Implications of Regional Variations in Medicare Spending, Part 2: Health Outcomes and Satisfaction with Care. Annals of Internal Medicine. 2003;138(4):288–298; Baicker K, Chandra A. Medicare Spending, the Physician Workforce, and Beneficiaries’ Quality of Care. Health Affairs. 2004;23:w184–w197. doi: 10.1377/hlthaff.w4.184 (published online 7 April 2004); and Chandra A, Staiger DO. Productivity Spillovers in Healthcare: Evidence from the Treatment of Heart Attacks. Journal of Political Economy. 2007;115:103–140.
2. Fisher et al., “The Implications of Regional Variations in Medicare Spending, Part 1”; and Baicker and Chandra, “Medicare Spending.”
3. Starfield B, et al. Variability in Physician Referral Decisions. Journal of the American Board of Family Practice. 2002;15(6):473–480.
4. Fisher ES, et al. Creating Accountable Care Organizations: The Extended Hospital Medical Staff. Health Affairs. 2007;26(1):w44–w57. doi: 10.1377/hlthaff.26.1.w44 (published online 5 December 2006).
5. Centers for Medicare and Medicaid Services. Download Database—Hospital Compare. August 2008 [accessed 5 May 2009]. http://www.hospitalcompare.hhs.gov/Download/DownloadDB.asp.
6. Bradley EH, et al. Hospital Quality for Acute Myocardial Infarction: Correlation among Process Measures and Relationship with Short-Term Mortality. Journal of the American Medical Association. 2006;296(1):72–78; Werner RM, Bradlow ET. Relationship between Medicare’s Hospital Compare Performance Measures and Mortality Rates. Journal of the American Medical Association. 2006;296(22):2694–2702; Jha AK, et al. Care in U.S. Hospitals—The Hospital Quality Alliance Program. New England Journal of Medicine. 2005;353(3):265–274; and Jha AK, et al. The Inverse Relationship between Mortality Rates and Performance in the Hospital Quality Alliance Measures. Health Affairs. 2007;26(4):1104–1110.
7. Jha et al., “Care in U.S. Hospitals”; and Jha et al., “The Inverse Relationship.”
8. Ibid.
9. McClellan MB, Staiger DO. Comparing the Quality of Health Care Providers. In: Frontiers in Health Policy Research. Vol. 3. Cambridge, Mass.: MIT Press; 2000. pp. 113–136.
10. Fisher et al., “The Implications of Regional Variations in Medicare Spending, Part 1”; Fisher et al., “The Implications of Regional Variations in Medicare Spending, Part 2”; and Skinner JS, Fisher ES, Wennberg JE. The Efficiency of Medicare. In: Wise DA, editor. Analyses in the Economics of Aging. Chicago: University of Chicago Press; 2005.
11. Fisher ES, et al. Variations in the Longitudinal Efficiency of Academic Medical Centers. Health Affairs. 2004;23:VAR-19–VAR-32. doi: 10.1377/hlthaff.var.19 (published online 7 October 2004).
12. Wennberg JE, et al. The Care of Patients with Severe Chronic Illness: An Online Report on the Medicare Program by the Dartmouth Atlas Project. 2006 [accessed 15 May 2009]. http://www.dartmouthatlas.org/atlases/2006_Chronic_Care_atlas.pdf.
13. Jha et al., “Care in U.S. Hospitals.”
14. Higashi T, et al. Relationship between Number of Medical Conditions and Quality of Care. New England Journal of Medicine. 2007;356(24):2496–2504.
15. Bradley et al., “Hospital Quality”; and Werner and Bradlow, “Relationship between Medicare’s Hospital Compare Performance Measures and Mortality Rates.”
16. Jha et al., “The Inverse Relationship.”
17. Bach PB, Schrag D, Begg CB. Resurrecting Treatment Histories of Dead Patients: A Study Design That Should Be Laid to Rest. Journal of the American Medical Association. 2004;292(22):2765–2770.
18. Baker LC, Fisher ES, Wennberg JE. Variations in Hospital Resource Use for Medicare and Privately Insured Populations in California. Health Affairs. 2008;27(2):w123–w134. doi: 10.1377/hlthaff.27.2.w123 (published online 12 February 2008).
19. Baicker K, Chandra A. The Productivity of Physician Specialization: Evidence from the Medicare Program. American Economic Review. 2004;94(2):357–361.
20. Chandra and Staiger, “Productivity Spillovers”; and Skinner JS, Staiger DO, Fisher ES. Is Technological Change in Medicine Always Worth It? The Case of Acute Myocardial Infarction. Health Affairs. 2006;25(2):w34–w47. doi: 10.1377/hlthaff.25.w34 (published online 7 February 2006).