We used the 2003 MedPAR Part A 100 percent file, which contains discharge data for all hospitalizations of Medicare beneficiaries enrolled in the fee-for-service program. We linked the MedPAR discharge dataset to the 2003 annual survey of the American Hospital Association and limited our analyses to enrollees age 65 or older admitted to hospitals that care for general medical or surgical patients. We used the risk-adjustment software developed by Elixhauser, available from the AHRQ, to define the presence or absence of 30 comorbidities in approximately 10.4 million discharges from 4,504 hospitals.11
We then applied the PSI software, Version 3.0, to the dataset. Although the AHRQ software calculates 20 different hospital-level PSIs, we excluded surgical and obstetric indicators and focused on the PSIs that relate to care for medical inpatients. The four PSIs examined were death in low mortality DRG, decubitus ulcer, failure to rescue, and selected infection due to medical care. Death in low mortality DRG refers to in-hospital deaths per 1,000 patients in DRGs with less than 0.5% mortality; decubitus ulcer refers to cases that developed during hospitalization per 1,000 discharges with a length of stay of 5 or more days; failure to rescue refers to deaths per 1,000 patients who developed specified complications of care during hospitalization (such as pneumonia, sepsis, or gastrointestinal bleed); and selected infection due to medical care refers primarily to infections due to intravenous lines and catheters. Further details about the PSI software, including the specific patient exclusions made by each indicator, are available from AHRQ.3
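Each of these indicators is expressed as an observed event rate per 1,000 patients at risk. As a minimal illustration of that arithmetic (the function and the numbers below are invented for exposition and are not part of the AHRQ software):

```python
def psi_rate_per_1000(events, eligible):
    """Observed PSI rate: adverse events per 1,000 patients at risk."""
    return 1000 * events / eligible

# Hypothetical hospital: 12 decubitus ulcer cases among 3,000
# discharges with a length of stay of 5 or more days
print(psi_rate_per_1000(12, 3000))  # 4.0 per 1,000 discharges
```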
We used the PSI software to calculate risk-adjusted PSI rates for each hospital. The risk adjustment accounts for baseline differences in patient age, gender, modified DRG, and comorbidities among hospitals.3
The AHRQ software does not perform risk adjustment for death in low mortality DRG rates, since all patients eligible for this event are considered low risk at baseline. Following AHRQ guidelines, we excluded hospitals with fewer than 30 cases in the denominator (patients at risk for a particular event) in order to create stable estimates.12
Relationship of PSIs and Process Metrics
We examined the relationship between hospital performance on the PSIs and performance on HQA quality metrics collected from July 2005 through June 2006.13
Led by CMS, HQA is a collaboration of leading oversight organizations that seeks to improve the quality of hospital care by publicly reporting performance data.14
CMS collects process measures and imposes financial penalties for non-participation. We examined the ten core measures that apply to three medical conditions: acute myocardial infarction (AMI), congestive heart failure (CHF), and pneumonia. We created a summary performance score in each of the three clinical conditions using methodology endorsed by The Joint Commission.15
Summary scores were calculated if a hospital reported at least 30 discharges for any one of the indicators pertaining to the condition. The summary score is the sum of the numerators divided by the sum of the denominators of all the indicators in the clinical condition. We examined the relationships between each HQA summary score and each PSI.
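The summary-score rule above can be sketched as follows; the reporting threshold is from the text, but the indicator values are invented for illustration and do not come from the HQA data:

```python
def summary_score(indicators, min_discharges=30):
    """Condition-level summary score from per-indicator
    (numerator, denominator) pairs.

    A score is computed only if at least one indicator for the
    condition has a denominator of `min_discharges` or more.
    """
    if not any(den >= min_discharges for _, den in indicators):
        return None  # hospital did not meet the reporting threshold
    total_num = sum(num for num, _ in indicators)
    total_den = sum(den for _, den in indicators)
    return total_num / total_den

# Hypothetical hospital: three AMI process indicators
ami = [(45, 50), (38, 40), (90, 100)]
print(summary_score(ami))  # (45 + 38 + 90) / (50 + 40 + 100)
```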
Relationship of PSIs and In-Hospital Mortality
We examined whether performance on PSIs was related to in-hospital mortality rates in the six medical conditions identified by AHRQ as conditions where mortality rate is a marker of quality. We used limited-license APR-DRG software to create modified DRGs and a severity index for each discharge in the MedPAR dataset.16
We then calculated risk-adjusted in-hospital mortality rates using the AHRQ Inpatient Quality Indicator software, Version 3.0, on the same MedPAR dataset. We examined mortality for six conditions: AMI, CHF, pneumonia, gastrointestinal bleed, stroke, and hip fracture. Further information regarding the ICD-9 codes and the inclusion and exclusion criteria used by the Inpatient Quality Indicators is available from AHRQ.17
We excluded all hospitals with fewer than 30 patients at risk in order to obtain more stable mortality estimates. We examined mortality rates for AMI with and without transfers; the results were qualitatively similar, and the results with transfers are shown.
PSI Performance of US News Hospitals
We examined whether hospitals selected by US News & World Report in 2004 as top-50 hospitals in cardiac disease and cardiac surgery or as top-50 hospitals in respiratory disorders performed better on the PSIs. US News & World Report measures comprehensive quality by equally weighting three factors: reputation among experts, structural characteristics such as availability of health technologies, and risk-adjusted mortality rates.18
We chose to examine the top performing cardiac and pulmonary hospitals since these conditions represent the bulk of inpatient medical care. We examined the performance on each PSI among the combined group of 75 hospitals.
Statistical Analyses
We used Spearman correlations to examine the relationship between individual PSIs and HQA summary scores and then assigned hospitals to categories based on performance on each PSI. For two PSIs, death in low mortality DRG and selected infection due to medical care, nearly half of the hospitals had zero events; for these two PSIs, we dichotomized hospitals into those with events and those without. We tested for variance equality and used the appropriate t-tests to compare the HQA performance of hospitals with events against those without. For failure to rescue and decubitus ulcer, we categorized hospitals into quartiles based on their complication rate and then calculated the average HQA summary score within each quartile. We tested for a linear trend in HQA performance across the PSI quartiles using analysis of variance.
We followed a similar approach when examining the relationship between hospital PSI rates and mortality rates, beginning with Spearman correlations and then calculating mean mortality rates in each PSI performance category described above. We tested for variance equality and used appropriate t-tests to compare the mortality rates between the two categories for death in low mortality DRG and selected infection due to medical care. For failure to rescue and decubitus ulcer, we tested for a linear trend in mortality rates across the PSI quartiles using analysis of variance. To facilitate interpretation, we calculated relative mortality rates by dividing the mortality rate in each category by the mortality rate in the worst PSI performance category.
For our analysis of US News hospitals, we used a non-parametric (Wilcoxon) test to compare the median performance of US News hospitals with that of non-US News hospitals.
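The quartile assignment and relative-rate step described above can be sketched as follows. This is an illustrative sketch only: the hospital rates are invented, the quartile cut-point method is an assumption, and the paper's analyses were conducted in SAS, not Python.

```python
import statistics

def assign_quartiles(rates):
    """Assign each hospital a quartile (1 = lowest complication rate)
    based on where its rate falls among the three quartile cut points."""
    cuts = statistics.quantiles(rates, n=4)  # three cut points
    return [1 + sum(r > c for c in cuts) for r in rates]

def relative_rates(mortality_by_quartile, worst_quartile=4):
    """Divide each quartile's mean mortality rate by the rate in the
    worst PSI performance quartile."""
    ref = mortality_by_quartile[worst_quartile]
    return {q: m / ref for q, m in mortality_by_quartile.items()}

# Eight hypothetical hospital complication rates per 1,000
rates = [1, 2, 3, 4, 5, 6, 7, 8]
print(assign_quartiles(rates))  # [1, 1, 2, 2, 3, 3, 4, 4]
```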
We chose a non-parametric approach because the number of US News hospitals was relatively small and the PSI data were not normally distributed.
In secondary analyses, we used linear regression to examine the PSIs' relationships with HQA process performance and mortality rates, accounting for baseline differences in the following hospital characteristics: bed size, teaching status (member of the Council of Teaching Hospitals or not), US region (Northeast, Midwest, South, West), urban or rural location, presence or absence of a medical intensive care unit, presence or absence of a cardiac care unit, percent Medicare patients, and percent Medicaid patients. We also examined the relationship between failure to rescue and in-hospital mortality after excluding deaths that were counted both as failure to rescue cases and as deaths in each of the six medical conditions. All analyses were conducted using SAS version 9.1 (SAS Institute, Cary, NC).