Patients and payers wish to identify hospitals with good surgical oncology outcomes. Our objective was to determine whether differences in outcomes explained by hospital structural characteristics are mitigated by differences in patient severity.
Using hospital administrative and cancer registry records in Pennsylvania, we identified 24,618 adults hospitalized for cancer-related operations. Colorectal, prostate, endometrial, ovarian, head and neck, lung, esophageal, and pancreatic cancers were studied. Outcome measures were 30-day mortality and failure to rescue (FTR) (30-day mortality preceded by a complication). After severity of illness adjustment, we estimated logistic regression models to predict the likelihood of both outcomes. In addition to American Hospital Association survey data, we externally verified hospitals with National Cancer Institute (NCI) cancer center or Commission on Cancer (COC) cancer program status.
Patients in hospitals with NCI cancer centers were significantly younger and less acutely ill on admission (p < .001). Patients in high volume hospitals were younger and had lower admission acuity, yet had more advanced cancer (p < .001). Unadjusted 30-day mortality rates were lower in NCI-designated hospitals (2.17% vs. 3.76% in other hospitals, p = .01). Risk-adjusted FTR rates were also significantly lower in NCI-designated hospitals (3.51% vs. 4.86%, p = .03). NCI center designation remained a significant predictor of 30-day mortality when patient and hospital characteristics were considered together (OR 0.68, 95% CI 0.47–0.97, p = .04). We did not find significant outcome effects based on COC cancer program approval.
Patient severity of illness varies significantly across hospitals, which may explain the outcome differences observed. Severity adjustment is crucial to understanding outcome differences. Outcomes were better than predicted for NCI-designated hospitals.
Oncology patients comprise a large proportion of hospital caseloads, and based on projections of cancer incidence, their presence is expected to increase. In addition, tumor-directed surgical procedures are being performed with increasing frequency on older patients with related comorbidities. Variations in outcomes from surgical oncology procedures are widely reported; the majority of these studies have focused on outcome differences by procedure volume,1–5 or receipt of care in a hospital recognized by the National Cancer Institute (NCI) cancer center program.6 The quality gap observed in surgical oncology outcomes might widen given the increased attention to providing anti-cancer therapies to older adults, many of whom have comorbidities (Trimble & Christian, 2006).
Based on research findings, stakeholder groups in the United States have suggested that rare or complex cancer operations be performed by physicians or hospitals achieving certain annual case volume targets.7 In 1992, Canadian provinces began regionalizing cardiac procedures in response to documented variations in outcome.8 Similar proposals might be considered for receipt of surgical oncology care in facilities achieving certain benchmarks, such as NCI cancer center or Commission on Cancer (COC) cancer program status.9 At the time of this study, NCI clinical cancer center designation required robust clinical and basic science research programs that underwent peer and site review. In addition, comprehensive cancer center designation required shared research resources, as well as a cancer control and population science research program. The COC program credential required: state-of-the-art clinical services spanning diagnosis through completion of treatment; a cancer committee leadership program; care conferences where patient cases are discussed and continuing education is provided; and an established cancer registry.9 These credentials were confirmed by a formal site visit conducted by COC members. Before options to redirect patients with cancer to credentialed facilities are considered, additional research is needed to ascertain if and why differences in quality exist, and to rigorously examine the outcome differences in multiple datasets.
As part of our team’s research program elucidating the relationship between nursing care and surgical patient outcomes,11,12 we studied outcomes in a sample of surgical oncology patients admitted to Pennsylvania hospitals in 1998–1999. Mortality outcomes were superior when patients received care in hospitals with better nurse staffing, more favorable nurse perceptions of their workplace, and nurses with higher educational preparation.13 One intriguing finding that we follow up in this study is that the only significant hospital characteristic associated with more favorable outcomes, aside from nursing factors, was NCI cancer center designation. This paper extends our previous research to examine more closely the array of hospital and patient characteristics and their relationship to patient outcomes using enhanced patient severity adjustment. Do certain types of hospitals, including those with cancer specialty designation, have better outcomes for surgical oncology patients? To what extent are differences in patient outcomes, if found, explained by patient characteristics? The findings are pertinent to clinical and payer practices that encourage referrals to hospitals with specific organizational characteristics.
After human subjects exempt review, we performed a secondary analysis of linked data created by merging inpatient claims from the Pennsylvania Health Care Cost Containment Council (PHC4), the Pennsylvania Cancer Registry, and American Hospital Association annual survey data. The list of the National Cancer Institute’s10 clinical and comprehensive cancer centers, available from the NCI’s website, and a list of approved cancer programs provided by the American College of Surgeons were used to identify hospitals in the sample with those designations in 1998–1999. Details of the linkage procedure have been reported elsewhere.13
Our analytic sample included 24,618 adults treated in 164 acute care hospitals in 1998–1999 with a diagnosis and surgical procedure for one of the following cancers: head and neck, esophagus, colon-rectum, pancreas, lung, ovary, prostate, and endometrium. Breast cancer patients were excluded from this analysis because of their significantly shorter lengths of hospital stay.
Whenever possible, existing definitions from the outcomes research literature on hospital characteristics were used. Hospital beds set up and staffed were categorized as 100 beds or fewer, 101–250 beds, or 251 beds or more.11 Hospitals that performed solid organ or open heart transplants in 1999 were coded as providers of “advanced procedures.”14 Prior studies have suggested that the provision of advanced technological resources may have spillover effects for other conditions.15 We used the ratio of medical residents or fellows to beds set up and staffed to categorize teaching status: non-teaching hospitals had no residents or fellows; minor teaching hospitals had a resident/fellow-to-bed ratio below 1:4; major teaching hospitals had at least one resident or fellow per 4 beds.16,17 We constructed quartiles of hospital procedure volume from the total number of procedures performed at each hospital on our set of ICD-9 diagnosis codes for the years 1998 and 1999.18 For example, hospitals received credit for all right hemicolectomies performed, regardless of whether the underlying diagnosis was a malignancy. Dichotomous variables were created to reflect whether a hospital had received cancer center or cancer program status from the NCI or COC, respectively.
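The teaching-status cutoffs described above can be expressed as a small helper. This is an illustrative sketch only; the function and parameter names are hypothetical and not part of the study's actual coding procedures.

```python
def teaching_status(residents_fellows: int, staffed_beds: int) -> str:
    """Categorize hospital teaching status from the resident/fellow-to-bed
    ratio, following the cutoffs described in the text (hypothetical helper;
    names are illustrative, not from the source dataset)."""
    if staffed_beds <= 0:
        raise ValueError("staffed_beds must be positive")
    if residents_fellows == 0:
        return "non-teaching"
    ratio = residents_fellows / staffed_beds
    # Major teaching: at least one resident/fellow per 4 beds (ratio >= 0.25).
    return "major teaching" if ratio >= 0.25 else "minor teaching"
```

For example, a 200-bed hospital with 10 residents (ratio 1:20) would be classified as minor teaching, while the same hospital with 50 residents (ratio 1:4) would be major teaching.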
Tumor registry data were combined with hospital claims to measure patients’ risk for poor outcomes. We then estimated logistic regression models to predict 30-day mortality and failure to rescue using a split-sample methodology. In a random fifty percent sample of the patients, 83 logistic regression models, each with a single covariate reflecting a patient characteristic, were estimated to predict 30-day mortality.19 Patient variables with coefficients significant at p ≤ .10 were retained in the severity model (a list of the final variables and coefficients is available from the author). The model was replicated in the remaining 50 percent of the sample, with no appreciable differences in coefficients or significance observed. The 25 retained variables reflected demographics, comorbidity, and cancer information. Model discrimination for the full sample, reflected by the C statistic, was 0.83 for mortality and 0.76 for failure to rescue.20 Age was measured as both a linear and a quadratic term. Non-white ethnicity was not a statistically significant variable in the severity model. While this may be partially explained by the low number of non-white patients in Pennsylvania, we chose to retain the variable in our models to account for unmeasured socioeconomic differences by race and ethnicity. Results did not change when this variable was excluded from the model. By state regulation, each hospital admission in Pennsylvania was routinely abstracted by trained medical records coders for key clinical findings to construct the Atlas™ (formerly known as MEDISGRPS) severity of illness score.21–23 In contrast to usual methods of measuring severity from diagnosis and procedure codes, the Atlas™ score uses data from the medical record to capture physiologic findings, such as unstable vital signs and abnormal laboratory, radiology, or diagnostic test results.
For each hospitalization, the resulting score is reported as a categorical variable (0 = no probability of inpatient mortality to 4 = greater than 0.5 probability of inpatient mortality). Based on an existing severity adjustment approach,19 we constructed an algorithm to detect comorbidities from claims data up to 90 days preceding the studied admission, and each comorbidity was treated as a dichotomous variable. Tumor type was treated as a categorical variable, length of cancer diagnosis (in months) as a continuous measure, and a dichotomous measure reflected distant or systemic cancer stage.
The two dichotomous outcomes were obtained by linking death records to the cancer registry and inpatient claims records. 30-day mortality is the occurrence of death within 30 days of hospital admission. Failure to rescue (FTR) is a death within 30 days of hospital admission in a patient who has also experienced a postoperative complication.24,25 A set of diagnosis and procedure codes (not coded in the 90 days prior to admission) forms the basis for the 40 complications considered. The empirical advantage of failure to rescue is that the measure does not “punish” a hospital when a patient experiences a complication, since complications are associated with case mix severity; it identifies whether the hospital successfully rescued the patient from the complication. Following established procedures,11,12 patients who died postoperatively were assumed to have experienced a complication, even if no complication was explicitly coded in the discharge abstract. Thus, FTR includes all patients who died within 30 days of hospital admission. The denominators for 30-day mortality and FTR differ: for the former, the denominator is all patients in the sample; for the latter, it is only patients who experienced a complication or died within 30 days of admission.
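As a concrete illustration of the two denominators, the rates can be computed as follows. This is a minimal sketch under the conventions stated above; the record structure and field names are hypothetical, not those of the study dataset.

```python
def outcome_rates(patients):
    """Compute the 30-day mortality and failure-to-rescue (FTR) rates,
    illustrating the different denominators described in the text.
    Each record is a dict with boolean fields 'died_30d' and 'complication'
    (illustrative field names, not from the source data). Per the paper's
    convention, every 30-day death counts as having had a complication,
    so all deaths enter the FTR denominator."""
    deaths = sum(p["died_30d"] for p in patients)
    # FTR denominator: patients with a coded complication OR a 30-day death.
    ftr_denom = sum(p["complication"] or p["died_30d"] for p in patients)
    mortality_rate = deaths / len(patients)
    ftr_rate = deaths / ftr_denom if ftr_denom else 0.0
    return mortality_rate, ftr_rate
```

In a hypothetical sample of 10 patients with 1 death and 3 additional survived complications, the mortality rate would be 1/10 while the FTR rate would be 1/4, since only the 4 patients with a complication or death form the FTR denominator.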
We tested bivariate relationships between clinical severity and hospital characteristics using the appropriate t, F, or chi-square test. We also calculated bivariate associations of hospital characteristics with unadjusted and risk-adjusted outcome rates for hospitals. The risk-adjusted rates were calculated as the ratio of observed events (deaths or failures) to the number of events predicted by the risk adjustment model, multiplied by the sample’s respective event rate. We ruled out multicollinearity among hospital and nursing characteristics by examining correlation matrices for high correlations and by confirming acceptable variance inflation factor and tolerance values. We then performed a patient-level analysis and estimated a series of logistic regression models to predict death and failure to rescue. First, models estimated the effect of each hospital characteristic without additional variables. Next, models included the 25 variables identified in the risk adjustment model. Our final models considered all patient and hospital characteristics simultaneously. Robust, clustered standard errors were specified in Stata version 10.0 (StataCorp, College Station, Texas) to account for the clustering of patients within hospitals.26,27 Coefficients were transformed to odds ratios, and 95 percent confidence intervals were calculated for all parameter estimates.
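The observed-to-expected adjustment described above reduces to a one-line calculation. This sketch restates the formula as given; the example inputs are illustrative, not study data.

```python
def risk_adjusted_rate(observed: float, expected: float, overall_rate: float) -> float:
    """Risk-adjusted outcome rate for one hospital, per the formula in the
    text: (observed events / events expected by the risk model) multiplied
    by the sample's overall event rate. Inputs are illustrative."""
    if expected <= 0:
        raise ValueError("expected events must be positive")
    return (observed / expected) * overall_rate
```

For instance, a hospital with 8 observed deaths against 10 expected (O/E = 0.8), in a sample with an overall 30-day mortality rate of 3.72%, would have a risk-adjusted rate of 0.8 × 3.72% ≈ 2.98%, i.e., better than predicted by case mix.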
The analyses reported here used a dichotomy of cancer program status; however, the COC reported separate categories based on volume and teaching status. A sensitivity analysis using the four categories revealed no differences in our results. Because our sample is quite heterogeneous in tumor type, we also performed an analysis stratified by volume-sensitive tumors (pancreas, esophagus, and lung versus all others). We also replicated our findings for 30-day mortality using a measure of 60-day mortality. Our results and conclusions did not change appreciably.
Table 1 presents differences in clinical severity and cancer severity by hospital characteristics (the clinical variables for the entire sample are presented in the first column). The mean age of the sample was 68.3 years, and approximately one third of study patients were below the age of 65. The majority of patients received colorectal or prostate resections.
Admission severity and cancer severity differed significantly by hospital characteristics. Patients in hospitals with NCI cancer centers were younger and had lower Atlas™ admission severity than patients in other hospitals. NCI hospitals cared for a larger proportion of ovarian, prostate, and pancreatic cancer patients than non-NCI hospitals (results not shown). The proportion of patients with distant metastases did not differ significantly across hospitals. Similarly, the average length of cancer diagnosis was 19.0 months and did not differ significantly by hospital characteristics (results not shown). Hospitals with COC cancer program status had younger patients, yet slightly more patients with metastatic cancer. When contrasted with lower volume hospitals, patients in hospitals in the highest quartile of procedure volume were younger, had fewer comorbidities, and had lower Atlas™ severity scores. Similar trends for age and Atlas™ severity were observed in larger hospitals, hospitals with greater teaching intensity, and hospitals performing advanced procedures.
Table 2 shows the unadjusted and risk-adjusted outcome rates by hospital characteristics. These are hospital-level outcome rates, with the adjusted rates calculated as the ratio of observed to expected events multiplied by the sample’s overall mortality or failure to rescue rate. The overall hospital-level unadjusted rates of 30-day mortality and failure to rescue were 3.72% and 10.5%, respectively. t tests were used to compare outcome rates across hospital characteristics with two strata, and F tests for characteristics with three or more strata. While outcomes were uniformly better in hospitals with NCI cancer center designation, the only significant differences were found when comparing unadjusted 30-day mortality rates (p < .01) and adjusted failure to rescue rates (p = .03). Hospitals performing advanced procedures, such as organ transplantation or coronary artery bypass graft operations, had significantly lower unadjusted death and FTR rates (both p = .03). These differences were no longer significant when outcome rates were adjusted for severity of illness. No significant differences in outcome rates were observed by COC cancer program approval, teaching status, or hospital procedure volume.
Table 3 shows the results of logistic regression models predicting 30-day mortality and failure to rescue from the patient-level data. Three series of models are presented for each outcome: the first column reports each hospital characteristic’s unique odds ratio for the outcome; the next column reports models that include all patient characteristics with each hospital characteristic separately; and the final column reflects all patient and hospital characteristics specified simultaneously in the model.
From the first series of models, significant predictors of 30-day mortality included high teaching intensity (OR 0.71, 95% CI 0.54–0.93), NCI cancer center (OR 0.60, 95% CI 0.50–0.72), advanced procedure hospitals (OR 0.80, 95% CI 0.68–0.96), and the highest quartile of procedure volume (OR 0.64, 95% CI 0.48–0.88). Models estimating failure to rescue found similar effects for high teaching intensity, NCI cancer centers, and highest procedure volume. In the results for Model II, where patient characteristics were modeled with each hospital characteristic, the only hospital characteristic that significantly predicted outcomes once patient severity was considered was NCI cancer center (30-day mortality OR 0.64, 95% CI 0.50–0.83; FTR OR 0.67, 95% CI 0.47–0.96). In Model III, the only variable to predict 30-day mortality when all patient and hospital characteristics were simultaneously considered was NCI cancer center (OR 0.68, 95% CI 0.47–0.97). No hospital characteristic significantly predicted the odds of failure to rescue when all characteristics were considered.
We report significant differences in clinical severity, cancer severity, and outcomes for surgical oncology patients by hospital characteristics. Contrary to what might be expected, severity of illness does not appear uniformly higher in NCI cancer centers. However, NCI cancer centers in our study achieved lower mortality rates than would be expected on the basis of case mix. In the other types of hospitals studied, more favorable mortality rates were found to be largely a product of less severely ill patients. The absence of outcome differences by COC status, either adjusted or unadjusted, suggests that Commission on Cancer standards in place at the time of the study did not convey a direct outcome benefit for patients in this study. It would be worthwhile to re-examine this question in additional datasets, as COC standards have changed over time. It is also possible that many hospitals could meet COC standards but elected not to obtain formal program approval. This would result in few actual differences between the COC and non-COC hospitals in our sample.
Patient and provider selection are two other explanations for these observations. Younger patients may feel compelled to travel outside their immediate area and seek facilities or providers based on reputation. In a study of chemotherapy outcomes, patients who traveled more than 15 miles for treatment had superior survival compared with patients treated locally.28 Alternately, physicians in hospitals with higher teaching intensity, advanced resources, and higher volumes may deem patients too frail to undergo operations and instead recommend less invasive management. Our data are from 1998–1999, a period determined by the unique linkage of datasets that are not routinely available to investigators. While the procedures studied were common operations for cancer at the time, confirmation of our results in more contemporary samples, coupled with the measurement of process of care variables, would be a useful addition to this area of research.
Our inability to detect significant outcome differences by hospital characteristics may be due to the coarseness of some measurements. For example, knowledge of individual physician characteristics such as provider volume, training, and board certification could refine our approach.5 Because our initial study was not designed to examine the volume-outcome relationship a priori, we have small numbers of the tumor types for which volume-outcome relationships have been previously documented. Thus, these findings should be interpreted with caution, yet the risk adjustment methods used in this study could be applied in the future to larger samples of these patients. Other important outcomes, such as recurrence, late survival, costs, and subsequent health care utilization, were not examined in this study because of data availability. While we had a large number of hospitals in our analysis, not all acute care hospitals in Pennsylvania were included because of missing claims or administrative data. We were unable to adjust our analysis for prior receipt of chemo- or radiotherapy, or to consider any care provided outside the Commonwealth of Pennsylvania. While only four hospitals with NCI status were in our sample, they accounted for seven percent of the patient sample. Confirmation of our findings in more hospitals with and without NCI status is suggested. However, our study contributes to the cancer outcomes research literature by extending the analysis outside the Medicare-eligible population. Compared with other cancer outcomes studies focused on hospital differences, we included both admission severity and cancer severity in our models. While most studies report adjustment for age, sex, and comorbidities, we have described our analytic approach and model discrimination statistics in greater detail.
Cancer severity variables and Atlas™ severity scores were among the strongest predictors of outcomes in our severity adjustment models; these measures are often not available in traditional claims-based analyses. Datasets that combine claims, tumor registry, and physiologic variables, such as the National Surgical Quality Improvement Program,29 are optimal targets for replication of our analyses. However, a challenge remains to study structure, process, and outcomes in hospitals that do not participate in voluntary data collection efforts.
Hospitals with high teaching intensity, capabilities to perform advanced procedures, and national credentials were not always caring for the sickest patients. After risk adjustment, few hospital characteristics were significantly associated with 30-day mortality or failure to rescue. Our report underscores the necessity of robust risk adjustment in cancer outcomes research and of explicit reporting of risk adjustment procedures in publications. From the management and policy perspectives, recommendations to reorganize surgical oncology care on the basis of these factors should await further confirmation. Our confirmation of favorable outcomes for patients who receive care in National Cancer Institute-designated cancer centers should prompt additional research into underlying differences in care processes at these institutions.
Funding: National Institute of Nursing Research R01-NR04513, American Cancer Society, DSCN-03-202-01-SCN, the Oncology Nursing Society via the Pennsylvania Tobacco Settlement Funds, and a predoctoral training grant from the National Institute of Nursing Research, T32-NR07104.