To examine outcomes of Medicare enrollees who underwent primary total knee arthroplasty (TKA) in top-ranked orthopedic hospitals identified through the U.S. News & World Report hospital rankings and 2 comparison groups of hospitals.
We used Medicare Part A data to identify patients who underwent primary TKA between January 1, 2006, and December 31, 2006, in 3 groups of hospitals: (1) top-ranked according to U.S. News & World Report rankings; (2) not top-ranked, but eligible for ranking; and (3) not eligible for ranking by U.S. News & World Report. We compared the demographics and comorbidity of patients treated in the 3 hospital groups. We examined rates of postoperative adverse outcomes—a composite consisting of sepsis, hemorrhage, pulmonary embolism, deep vein thrombosis, wound infection, myocardial infarction, or mortality within 30 days of surgery. We also compared 30-day all-cause readmission rates and hospital length of stay (LOS) across groups.
Our cohort consisted of 48 top-ranked hospitals (performing 10,477 primary TKAs), 288 eligible non–top-ranked hospitals (28,938 TKAs), and 481 hospitals not eligible for ranking (25,297 TKAs). Unadjusted rates of the composite outcome were modestly higher for top-ranked hospitals (4.3%, 455 patients) as compared with non–top-ranked hospitals (4.1%, 1191 patients) and hospitals ineligible for ranking (3.3%, 843 patients) (P<.001), but these differences were no longer significant after accounting for differences in patient complexity. Likewise, there were no significant differences in readmission rates or LOS across groups.
Rates of postoperative complications and readmission and hospital LOS were similar for Medicare patients who underwent primary TKA in top-ranked and non–top-ranked hospitals.
The U.S. News & World Report annual rankings of “America's Best Hospitals” are coveted by hospital leadership for marketing purposes and are used by patients in choosing where to seek medical care.1-3 Despite widespread public awareness of the rankings, there are relatively few empirical studies comparing outcomes and quality in top-ranked and non–top-ranked hospitals. Moreover, such comparisons have focused exclusively on cardiovascular disease.4-8 For example, Chen et al4 reported that patients admitted with acute myocardial infarction to hospitals ranked highly for cardiovascular care had significantly lower 30-day risk-adjusted mortality than patients admitted to other hospitals. More recent studies similarly showed improved short-term mortality for patients admitted to top-ranked hospitals for myocardial infarction,7 heart failure,5 and cardiovascular surgical procedures.6 Nevertheless, it is uncertain whether top-ranked hospitals confer benefits outside the area of cardiovascular disease, including in orthopedic surgery.
Our objective was to compare outcomes of patients who underwent primary total knee arthroplasty (TKA) in top-ranked and non–top-ranked orthopedic surgery programs. Specifically, we were interested in comparing a number of outcomes available in Medicare administrative data, including postoperative complications, readmission rates, hospital length of stay (LOS), and hospital costs in top-ranked and non–top-ranked hospitals located within the same health care markets.
The primary source of patient data was the 2006 and 2007 Medicare Provider Analysis and Review (MedPAR) data obtained from the Centers for Medicare and Medicaid Services (CMS). The MedPAR data contain information on all hospitalizations for fee-for-service Medicare beneficiaries and have been used extensively in prior studies of joint arthroplasty.9-13 Key data elements include patient demographics, primary and secondary diagnoses recorded by the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes, primary and secondary procedures recorded by ICD-9-CM procedure codes, diagnosis-related groups, admission source, admission and discharge dates, LOS (number of days from admission to discharge), total charges for in-hospital stay, date of death up to 3 years after discharge, and unique patient and hospital identifiers.
We supplemented the 2006 MedPAR data with several other databases to obtain additional patient and hospital characteristics: (1) the 2007-2009 U.S. News & World Report annual hospital rankings for orthopedics; (2) the 2006 American Hospital Association annual hospital survey data for hospital characteristics; (3) the Dartmouth Atlas of Health Care hospital referral region (HRR) definition file for hospital markets; (4) the rural-urban commuting area data to define hospital geographic location14; and (5) the 2000 US Census data for patients' zip code–level sociodemographic data. The use of each of these databases is described in additional detail in the following sections.
We used MedPAR data to identify all fee-for-service Medicare enrollees who underwent primary TKA (ICD-9-CM procedure code 81.54) between January 1, 2006, and December 31, 2006. Patients were excluded from our analyses if they (1) were younger than 65 years; (2) had pathologic fracture, conversion of previous TKA, or infection of knee or thigh during admission (because these small groups of high-risk patients tend to have substantially higher rates of postoperative adverse outcome and hospital resource use than other regular TKA patients)10; or (3) were missing data on race or sex.
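The cohort-selection logic above can be sketched as a sequence of filters. The following is a minimal illustration (not the authors' SAS code) on a hypothetical MedPAR-like extract; all column names and values are invented for the example.

```python
import pandas as pd

# Hypothetical MedPAR-like extract; column names are illustrative only.
claims = pd.DataFrame({
    "patient_id":   [1, 2, 3, 4],
    "age":          [70, 63, 78, 81],
    "proc_code":    ["81.54", "81.54", "81.54", "81.55"],
    "race":         ["white", "white", None, "black"],
    "sex":          ["F", "M", "F", "M"],
    # pathologic fracture, conversion of previous TKA, or knee/thigh infection
    "high_risk_dx": [False, False, False, False],
})

# Step 1: restrict to primary TKA (ICD-9-CM procedure code 81.54).
cohort = claims[claims["proc_code"] == "81.54"]
# Step 2: apply the three exclusions described in the text.
cohort = cohort[cohort["age"] >= 65]          # exclude patients younger than 65
cohort = cohort[~cohort["high_risk_dx"]]      # exclude high-risk diagnoses
cohort = cohort.dropna(subset=["race", "sex"])  # exclude missing race or sex
print(cohort["patient_id"].tolist())  # [1]
```

In this toy extract, patient 4 fails the procedure-code filter, patient 2 the age filter, and patient 3 the missing-data filter, leaving patient 1.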
We stratified all hospitals performing TKA into 1 of 3 mutually exclusive categories: (1) top-ranked orthopedic hospitals according to the 2007-2009 U.S. News & World Report rankings; (2) non–top-ranked hospitals, that is, hospitals eligible for the U.S. News & World Report rankings but not considered to be top ranked; and (3) hospitals that did not qualify for inclusion in the U.S. News & World Report rankings for reasons described subsequently. Figure 1 shows the flowchart of hospital selection.
Top-ranked hospitals were defined as all hospitals that were ranked in the U.S. News & World Report 50 best orthopedic hospitals at least twice during the 3-year period between 2007 and 2009.15-17 We used 2007-2009 U.S. News & World Report data along with 2006 MedPAR data because the U.S. News & World Report rankings are based, in part, on hospital performance during the 2 to 3 years preceding the publication of the rankings. According to the U.S. News & World Report methodology,18 to be considered for ranking in orthopedics, acute care hospitals must satisfy the following criteria: (1) be a member of the Council of Teaching Hospitals and Health Systems, be affiliated with a medical school, or make available at least 6 of 13 important advanced technologies (eg, diagnostic radioisotope services or robotic surgery); and (2) have at least 294 total Medicare discharges defined as orthopedic diagnosis-related groups during the most recent 3 years. The quality of eligible hospitals was then measured and ranked by a composite score that comprises 3 equally weighted components: structure, process, and outcome. Details of the U.S. News & World Report rankings have been published previously.18
We defined non–top-ranked hospitals as all hospitals that (1) were located in the same market (ie, HRR) as 1 or more top-ranked hospitals; and (2) met all criteria for eligibility for the U.S. News & World Report rankings but were not designated as top-ranked hospitals. Hospital referral regions represent hospital geographic markets for tertiary care defined by the Dartmouth Atlas file using established zip code algorithms.19 We chose non–top-ranked and ineligible hospitals from the same HRR regions as the top-ranked hospitals for comparisons to avoid introducing potential geographic variations in hospital practice patterns, resource utilization, and outcomes that might confound our analysis. We defined ineligible hospitals as all hospitals that (1) performed 6 or more TKAs in 2006; and (2) were not considered eligible for the U.S. News & World Report rankings (ie, were neither top ranked nor non–top ranked). Thus, our final cohort consisted of all top-ranked, non–top-ranked, and ineligible hospitals and respective patients who received primary TKA in these hospitals.
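The 3-way hospital stratification described above can be summarized as a small classification routine. This is a sketch under stated assumptions, not the study's actual code; the record fields (`hrr`, `years_ranked`, `eligible`, `tka_volume`) are hypothetical stand-ins for the linked U.S. News & World Report, Dartmouth Atlas, and MedPAR data.

```python
# Hypothetical hospital-level records; field names are illustrative.
hospitals = [
    {"id": "A", "hrr": 7, "years_ranked": 3, "eligible": True,  "tka_volume": 310},
    {"id": "B", "hrr": 7, "years_ranked": 1, "eligible": True,  "tka_volume": 120},
    {"id": "C", "hrr": 7, "years_ranked": 0, "eligible": False, "tka_volume": 40},
    {"id": "D", "hrr": 9, "years_ranked": 0, "eligible": True,  "tka_volume": 80},
]

def classify(h, top_hrrs):
    """Assign a hospital to one of the 3 mutually exclusive study groups."""
    if h["years_ranked"] >= 2:                      # ranked at least twice, 2007-2009
        return "top_ranked"
    if h["hrr"] in top_hrrs and h["eligible"]:      # same market, rankings-eligible
        return "non_top_ranked"
    if h["hrr"] in top_hrrs and h["tka_volume"] >= 6:  # same market, >=6 TKAs in 2006
        return "ineligible"
    return "excluded"  # outside any market containing a top-ranked hospital

# Markets (HRRs) that contain at least one top-ranked hospital.
top_hrrs = {h["hrr"] for h in hospitals if h["years_ranked"] >= 2}
groups = {h["id"]: classify(h, top_hrrs) for h in hospitals}
print(groups)
# {'A': 'top_ranked', 'B': 'non_top_ranked', 'C': 'ineligible', 'D': 'excluded'}
```

Hospital D is dropped because its HRR contains no top-ranked hospital, mirroring the study's restriction of both comparison groups to the same markets as the top-ranked hospitals.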
We used the 2006 American Hospital Association annual survey file to obtain hospital-level data on number of beds, hospital teaching status (member of Council of Teaching Hospitals and Health Systems, yes or no), hospital ownership (categorized as not-for-profit, for-profit, and government), whether the hospital was affiliated with a medical school, and nurse staffing level (calculated as the number of full-time–equivalent nurses divided by adjusted patient days).20 We categorized each hospital's geographic location as rural or urban using the rural-urban commuting area codes. We calculated each hospital's Medicare primary TKA volume from the 2006 MedPAR file.
Although mortality is often considered a primary outcome in studies evaluating surgical procedures, in the case of primary TKA, mortality is typically less than 1%, making mortality a less useful outcome.9,10,21 Thus, we built on methods that we and others have used for examining joint arthroplasty outcomes.9,22-25
We examined 4 separate outcomes that can be evaluated using the MedPAR data for each hospital. These include a composite measure of patient 30-day postoperative adverse outcome, all-cause 30-day readmission rates, hospital LOS, and total in-hospital cost. The composite postoperative complication measure was developed using a method we and others have used previously.9,22-25 Specifically, we evaluated the occurrence of a composite representing the occurrence of one or more of 7 individual complications occurring within 30 days of TKA admission: sepsis, hemorrhage, pulmonary embolism (PE), deep vein thrombosis (DVT), wound infection requiring readmission, myocardial infarction, and mortality. Thirty-day readmission was defined as any readmission to an acute care hospital within 30 days of discharge for each TKA patient; all readmissions were “assigned” to the hospital that performed the index surgical procedure. Hospital LOS was calculated from the MedPAR data for all patients and compared among the 3 hospital groups. In-hospital cost for each patient was calculated as total charges for the TKA hospitalization multiplied by the hospital-wide cost-to-charge ratio found in the CMS Medicare cost report.26
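Two of these outcome definitions are directly computable. The sketch below, with hypothetical field names, implements the 7-component composite as a simple "any occurred" flag and the cost estimate as charges multiplied by the hospital-wide cost-to-charge ratio.

```python
# The 7 composite components follow the definition in the text.
COMPOSITE_COMPONENTS = [
    "sepsis", "hemorrhage", "pulmonary_embolism", "deep_vein_thrombosis",
    "wound_infection_readmit", "myocardial_infarction", "death_30d",
]

def composite_adverse_outcome(flags: dict) -> bool:
    """True if any of the 7 complications occurred within 30 days of TKA admission."""
    return any(flags.get(c, False) for c in COMPOSITE_COMPONENTS)

def in_hospital_cost(total_charges: float, cost_to_charge_ratio: float) -> float:
    """Total charges x hospital-wide cost-to-charge ratio (CMS cost report)."""
    return total_charges * cost_to_charge_ratio

# Illustrative admission: DVT within 30 days, $55,000 in charges, ratio 0.37.
print(composite_adverse_outcome({"deep_vein_thrombosis": True}))  # True
print(in_hospital_cost(55_000.00, 0.37))                          # 20350.0
```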
First, we used bivariate methods (χ2 tests for categorical variables and analysis of variance for continuous variables) to evaluate the differences in demographics and prevalence of key comorbid conditions for patients who underwent TKA in top-ranked, non–top-ranked, and ineligible hospitals. Comorbid illnesses were identified using algorithms developed for use with administrative data.27,28 We calculated the mean number of comorbid conditions for patients treated in each of the 3 groups of hospitals. We compared the socioeconomic status of patients treated in each group of hospitals (median household income and percentage of high school graduation) by linking the zip code of each patient's residence to data available from the 2000 US Census and then compared socioeconomic measures for patients treated in each of the 3 groups of hospitals. Second, we used similar methods to compare the characteristics of the 3 groups of hospitals. Specifically, we examined differences in teaching status, bed size, nurse staffing, and TKA volume across the 3 groups (top-ranked, non–top-ranked, and ineligible for ranking).
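The bivariate comparisons can be sketched with standard tests from scipy: a chi-square test on a contingency table for a categorical variable and a one-way ANOVA for a continuous one. The counts and samples below are invented for illustration only.

```python
import numpy as np
from scipy import stats

# Counts of a hypothetical comorbidity (yes/no) by hospital group.
table = np.array([[120,  880],    # top-ranked
                  [300, 2700],    # non-top-ranked
                  [260, 2240]])   # ineligible
chi2, p_cat, dof, _expected = stats.chi2_contingency(table)

# A continuous variable (e.g., age) by group; simulated samples.
rng = np.random.default_rng(0)
age_top  = rng.normal(74.5, 6, 200)
age_non  = rng.normal(74.0, 6, 200)
age_inel = rng.normal(73.8, 6, 200)
f_stat, p_cont = stats.f_oneway(age_top, age_non, age_inel)

print(f"chi-square P={p_cat:.3f}, ANOVA P={p_cont:.3f}")
```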
Third, we used a generalized estimating equation to examine outcomes among our 3 types of hospitals (top ranked, non–top ranked, and ineligible) after accounting for differences in patient characteristics and hospital factors. Ineligible hospitals were considered the reference category in the models. We assumed binomial distribution with a logit link function for binary outcomes (ie, occurrence of any 30-day postoperative adverse outcome and all-cause 30-day readmission) and Poisson distribution for continuous outcomes (LOS and cost).29,30 In addition, all models incorporated hospital- and HRR-level random effects to account for the hierarchical clustering of patients within hospitals, which in turn were clustered within HRR regions.
Finally, we performed sensitivity analyses in which, instead of limiting non–top-ranked and ineligible hospitals to those located in the same HRR markets as top-ranked hospitals, we included all hospitals of the nation (and their TKA patients) that performed at least 6 Medicare TKA procedures in 2006. We also repeated our analyses while looking at 90-day outcomes rather than 30-day outcomes to ensure the robustness of our findings.
All analyses were performed using SAS version 9.2.3 (SAS Institute, Cary, NC). This project was approved by the University of Iowa Institutional Review Board.
Our final sample was composed of 64,712 patients receiving primary TKA in 817 hospitals in 2006. Of these patients, 10,477 were treated in 48 top-ranked hospitals, 28,938 patients were treated in 288 non–top-ranked hospitals, and 25,297 patients were treated in 481 hospitals that were ineligible for inclusion in the U.S. News & World Report rankings (Table 1). Patients treated in top-ranked hospitals were slightly older, were less likely to be female, and had slightly higher rates of several comorbid conditions compared with patients treated in ineligible hospitals. Patients receiving TKA in top-ranked hospitals resided in zip codes with moderately higher incomes when compared with patients admitted to non–top-ranked and ineligible hospitals ($53,517 vs $50,103 vs $45,023, respectively; P<.001).
Top-ranked and non–top-ranked hospitals were more likely to be not-for-profit when compared with ineligible hospitals and were more likely to be major teaching hospitals and to be affiliated with a medical school (P<.001; Table 2). The top-ranked and non–top-ranked hospitals also tended to be larger than ineligible hospitals (mean number of beds, 637.6 and 330.2 in top-ranked and non–top-ranked hospitals, respectively, compared with 147.8 in ineligible hospitals), have higher nurse staffing levels (mean nurse staffing ratio, 3.8 and 3.0 in top-ranked and non–top-ranked hospitals, respectively, compared with 2.7 in ineligible hospitals), and have higher average Medicare TKA volume (269.8 and 122.4 in top-ranked and non–top-ranked hospitals, respectively, compared with 59.5 in ineligible hospitals).
The unadjusted rate of the composite adverse outcome within 30 days of TKA was 4.3% for top-ranked hospitals, 4.1% for non–top-ranked hospitals, and 3.3% for ineligible hospitals (P<.001; Table 3). The unadjusted 30-day readmission rates were 8.2%, 8.4%, and 8.1% in the 3 groups of hospitals, respectively (P=.38).
The mean in-hospital costs for patients undergoing primary TKA were $20,336, $20,811, and $27,069, respectively, for top-ranked, non–top-ranked, and ineligible hospitals (P<.001; Table 3).
In adjusted analyses accounting for differences in patient demographics and comorbidity (Table 4), odds of experiencing the composite outcome were similar across the 3 hospital groups. Similarly, in adjusted analyses we found no differences in 30-day all-cause readmission (Table 4).
The mean unadjusted LOS was 3.7 days, 3.8 days, and 3.7 days, respectively, for top-ranked, non–top-ranked, and ineligible hospitals (P<.001; Table 3). After multivariable adjustment for patient and hospital characteristics, hospital type was not significantly associated with LOS (Table 4).
In the sensitivity analyses, results were similar when we examined 90-day postoperative outcomes and when we included hospitals that had been excluded from our primary analysis.
In an analysis of Medicare administrative data, we found similar rates of postoperative complications and similar LOS for patients who underwent primary TKA in top-ranked and non–top-ranked hospitals and in hospitals that were not eligible for the U.S. News & World Report rankings. Specifically, we found no evidence of lower postoperative complication rates, reduced readmission rates, or reduced hospital LOS in top-ranked hospitals. In contrast, we found that top-ranked and non–top-ranked hospitals had lower costs for TKA than hospitals ineligible for the rankings. Our results are complex and warrant further discussion.
To our knowledge, this is the first study that evaluates outcomes and cost in “America's Best” orthopedic hospitals. While a number of prior studies have demonstrated that the reputations of top-ranked hospitals for cardiovascular diseases may be justified,4-8 it is far less certain what top rankings mean for other disciplines of medicine, including orthopedics. Despite a lack of evidence for improved outcomes, top-ranked programs in all disciplines heavily promote their top rankings for marketing purposes.31
There are 2 potential interpretations of our findings. First, it is possible that the U.S. News & World Report rankings truly do not capture a group of hospitals with improved orthopedic quality when it comes to TKA. Second, it is possible that the U.S. News & World Report rankings do capture a truly superior group of hospitals but that Medicare administrative data do not capture these improved outcomes. Thus, it is important to comment on the challenge of evaluating nationwide outcomes in the area of joint arthroplasty. More than 500,000 TKA procedures are performed annually in the United States at a cost of more than $8 billion.32,33 Despite the volume of procedures performed and the associated costs, our ability to assess outcomes (and quality) after joint arthroplasty remains relatively rudimentary when compared with other conditions, most notably cardiovascular disease. In particular, because mortality after elective TKA is uncommon and our ability to detect other outcomes (eg, DVT, PE, infection) reliably using administrative data is imperfect,10,21 physicians, payers, and patients are faced with a conundrum. Ideally we would have access to national TKA registries containing additional outcomes, including quality of life and functional status. Such registries are being developed but are not yet widely available.34,35 Thus, for the time being, we are left with administrative data that have clear value but notable limitations. Repeating our analyses with more detailed patient-level outcomes is a logical next step.
We also found that Medicare patients admitted to top-ranked hospitals resided in wealthier zip codes than patients admitted to non–top-ranked and ineligible hospitals, a finding that warrants further study. One possibility is that top-ranked hospitals attract wealthier patients because of the visibility provided by rankings such as those by U.S. News & World Report. We also note that top-ranked hospitals had modestly lower rates of DVT and PE but higher rates of hemorrhage; this pattern could be explained if top-ranked hospitals made greater use of pharmacologic thromboembolism prophylaxis.
Our findings with regard to TKA costs warrant brief mention. In particular, our finding that TKA costs were markedly higher in hospitals ineligible for the U.S. News & World Report rankings than in top-ranked and non–top-ranked hospitals is somewhat surprising. Prior studies have found that academic medical centers—a group of hospitals disproportionately represented in the U.S. News & World Report rankings—typically have higher costs when compared with other hospitals.36,37 However, at least some of the higher costs that have been observed in teaching hospitals seem to be related to the greater complexity of patient populations served by these hospitals.38 Thus, our finding of higher costs among the smaller hospitals that were ineligible for the U.S. News & World Report rankings is somewhat puzzling and requires confirmation. It is also important to recognize that our estimates of cost were derived from Medicare cost-to-charge ratios, a method that, while commonly used in health services research, is well known to have important limitations.39
This study has several limitations. First, our analyses were limited to Medicare fee-for-service patients, whose hospital claims are routinely available through the CMS; thus, our findings may not be generalizable to Medicare health maintenance organization or non-Medicare patients. Second, our analyses focused on in-hospital and short-term postoperative outcomes and did not examine other dimensions of TKA outcomes, such as longer-term functional status, pain reduction, and quality of life. Potential differences in these outcomes and their impact on long-term health care expenditures for TKA patients treated in different hospitals remain unknown. Third, our study shares the limitations of previous analyses based on administrative data. The ICD-9-CM codes in claims data are relatively insensitive for identifying comorbidities and complications and may at times misclassify one type of diagnosis as the other, which could bias our estimates of hospital quality and outcomes. However, there is no evidence that undercoding or miscoding of conditions is more substantial in either ranked or nonranked hospitals. Under the assumption that such coding issues are largely random across hospitals, the cost difference we observed between hospital groups is likely a conservative estimate of the true difference, although our findings of no difference in other outcomes could reflect downward bias. Finally, our estimates of cost depend largely on the Medicare cost-to-charge ratios. Previous comparative analyses of Medicare costing data reveal differences across hospitals in the reporting of both revenues and expenses and in the treatment of details such as charity care, bad debt, nonoperating income, and cash flow. Because of these limitations, it is possible that our analysis of hospital costs provides an inaccurate comparison across the 3 hospital categories.
That said, cost-to-charge ratios are commonly used in comparison of hospital costs and remain well established and ubiquitous in outcomes research.
We found similar rates of postoperative complications and similar hospital LOS for Medicare beneficiaries who received primary TKA in top-ranked and non–top-ranked hospitals. Our results suggest that the advantage of top-ranked orthopedic hospitals for patients undergoing primary TKA may be smaller than commonly assumed.
The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.
Grant Support: Dr Cram was supported by a K23 career development award (RR01997201) from the National Center for Research Resources at the National Institutes of Health (NIH) and the Robert Wood Johnson Physician Faculty Scholars Program. Drs Cram and Vaughan-Sarrazin are supported by the Department of Veterans Affairs. Dr Miller is supported by a T32 training grant (CA148062-01) from the National Cancer Institute at the NIH. The work is also funded by R01 HL085347-01A1 from the National Heart, Lung, and Blood Institute at the NIH.