Herein, we demonstrate that risk-adjusted in-hospital mortality and morbidity following cardiac surgery in the United States do not depend upon operative time of year. In these analyses, academic quarter and cardiothoracic surgery teaching hospital status did not independently influence mortality or the composite incidence of postoperative complications for patients undergoing a variety of commonly performed cardiac operations. Consequently, operative time of year does not appear to be a predictor of mortality or morbidity at cardiothoracic surgery teaching hospitals. In addition, this study highlights important trends within United States hospitals, including higher risk-adjusted mortality for complex cardiac operations as well as regional differences in the performance of cardiac operations.
A “July phenomenon” has been postulated as an explanation for variations in patient outcomes following both medical and surgical hospitalizations at academic centers. Academic season, serving as a proxy measure of resident experience, has been investigated within various patient populations with mixed results.2–5, 10
Englesbe and colleagues (2007) from the University of Michigan reported a large, multi-institutional cohort study of 20,254 patients undergoing a variety of surgical operations using data from the American College of Surgeons-National Surgical Quality Improvement Program (NSQIP). In their study, 30-day mortality rates for patients undergoing operations at the beginning of an academic year (July 1–August 30) were 41% higher than those at the end of the year (April 15–June 15).10
However, that study did not provide details of surgical case mix, and it remains unclear how many cardiac or vascular operations were included in the analyses. A similar study by Rich and colleagues (1993) demonstrated a “July effect” on patient outcomes following internal medicine admissions.4
However, other published surgical series involving Medicare and trauma patients have failed to demonstrate a significant influence of operative time of year on surgical outcomes.3, 5
Our results demonstrate that operative time of year does not independently influence patient mortality following common cardiac operations at academic institutions. These findings are in general agreement with previous studies examining the influence of resident experience on cardiac surgical outcomes. Of the few published series, the largest to date was performed by Bakaeen and colleagues in 2009. In this retrospective review, the authors demonstrated similar risk-adjusted morbidity (OR=1.01 [0.96–1.07], P=0.67) and operative mortality (OR=0.99 [0.89–1.11], P=0.90) for cardiac surgical patients undergoing operations during early (July 1–August 31) and late (September 1–June 30) periods of an academic year using the Veterans Affairs (VA) Continuous Improvement in Cardiac Surgery Program (CICSP) database. Although limited by the inclusion of VA patients only, the representation of select affiliated academic medical centers, and a ten-year study period (10/1997–10/2007) that witnessed significant changes in resident work hour restrictions, their results concur with our findings. Two additional single-institution studies also support our results, finding no effect of cardiothoracic surgical resident turnover or experience as a function of academic season on the outcomes of cardiac procedures.11, 12
The findings of this study extend the examination of resident work experience and the investigation of academic season to reflect contemporary trends at academic medical centers operating within current resident training hour restrictions. In our analyses, we controlled for 45 potential patient-, hospital-, and operation-related confounding factors, including cardiothoracic surgery teaching hospital status and the surprisingly high prevalence (50.35%) of non-elective operative status, in each of our predictive models. Importantly, the joint influence of academic quarter and cardiothoracic surgery teaching hospital status was assessed through the inclusion of an interaction term between these two variables in each statistical model. Even after these adjustments, performance of cardiac surgical operations during a given academic quarter at cardiothoracic surgery teaching hospitals was not associated with patient mortality.
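The interaction-term adjustment described above can be sketched in code. This is an illustrative reconstruction, not the authors' actual model: the `design_row` helper, the dummy coding of academic quarter (first quarter of the academic year, July–September, taken as the reference level), and the binary teaching-hospital indicator are all assumptions for the sketch, and a real model would also include the 45 covariates noted in the text.

```python
# Illustrative sketch: a design-matrix row for a logistic model containing
# academic-quarter dummies, a teaching-hospital indicator, and their
# interaction terms. Names and coding are hypothetical, not from the study.

def design_row(quarter, teaching):
    """quarter in {1, 2, 3, 4} (quarter 1 = July-September, reference level);
    teaching in {0, 1} (cardiothoracic surgery teaching hospital indicator)."""
    q_dummies = [1 if quarter == q else 0 for q in (2, 3, 4)]  # quarter 1 omitted
    interactions = [d * teaching for d in q_dummies]           # quarter x teaching
    return [1] + q_dummies + [teaching] + interactions         # leading intercept

# Example: a third-quarter admission at a teaching hospital.
row = design_row(3, 1)
```

Testing whether the interaction coefficients jointly differ from zero is what assesses, as in the text, whether the effect of academic quarter differs between teaching and non-teaching hospitals.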
The additional contribution of system-related influences must be considered in our analysis of patient outcomes at academic medical centers. In our analysis, we used academic quarter as a proxy for the unmeasured influence of new surgical trainees on cardiothoracic surgery patient outcomes. Although the potential inexperience of new trainees during the early academic season would most likely exert the largest direct influence on patient outcomes, other system-related processes, including attending coverage, fluctuations in nursing staff schedules, transitions in allied health care personnel, team debriefing, and handoffs of patient care between surgical teams, may exert important influences during early academic seasons. The current study cannot account for these factors, as these variables are not prospectively collected and available in the NIS database. In addition, we were unable to statistically control for the influence of resident and attending surgeon experience with respect to the number of clinical years of training and/or surgical practice. Thus, it is possible that more senior cardiothoracic residents and/or attending surgeons performed a higher percentage of the more complex operations and/or the postoperative care during the study period.
This study has important clinical relevance as it provides a nationally representative and broadly generalizable sampling of an increasingly reported phenomenon within academic medical centers. To our knowledge, this study represents the most comprehensive investigative description of the influence of academic seasonality within a nationwide cardiac surgical patient population. As surgical subspecialty care often requires more specialized postoperative management, we believe that the lack of differences in patient outcomes at the beginning of an academic year compared to the end may reflect a higher level of supervision from more senior surgical trainees and attending surgeons, the presence of more mid-level providers (physician assistants and nurse practitioners), and the utilization of more protocolized postoperative treatment algorithms and/or pathways. Consequently, our results are hypothesis generating and provide a legitimate clinical context for future prospective studies. Additionally, we believe our data dispel the belief that cardiac surgery patient outcomes are adversely affected at teaching institutions during the early academic season; patients should be reassured of the safety of cardiac surgery operations at academic medical centers throughout the academic year.
This study has certain limitations. The retrospective study design and use of an administrative database introduce the possibility of selection bias, especially at the surgeon level, and the potential for errors due to any unrecognized miscoding of diagnostic, procedure, and complication codes. However, the strict randomization and annual validation of the NIS dataset reduce the likelihood of such bias. In addition, the NIS lacks data collection for several important factors, including affiliate teaching status for non-teaching hospitals, level of resident training, intraoperative role of trainees, and precise transition points in patient care at the beginning or end of resident service rotations. Constraints imposed by the collected data points and by the definitions of certain variables, such as renal failure and elective vs. non-elective status, should also be considered. Other cardiac-specific databases, such as the Society of Thoracic Surgeons (STS) Adult Cardiac Surgery Database, may provide for more specific adjustment of certain patient- and operation-related factors, and the present results should also be evaluated within such databases. The lack of long-term follow-up data may result in the underestimation of true mortality and morbidity rates. Additionally, the use of in-hospital mortality as reported within the NIS does not provide estimates of 30-day mortality, and this study did not analyze mortality related to hospital readmissions or include other high-risk cardiac operations such as thoracic aorta or aortic root procedures. Finally, we were unable to include adjustments for other well-established surgical risk factors, such as low preoperative albumin levels or poor nutritional status, or for the influence of an unmeasured confounder, which limits the ability to completely risk stratify patients. However, our statistical models proved resilient to the presence of a potentially unmeasured confounder on sensitivity analysis.
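The text does not specify which sensitivity-analysis method was used to probe robustness to an unmeasured confounder. One common approach in observational research is the E-value of VanderWeele and Ding, which quantifies how strongly an unmeasured confounder would need to be associated with both exposure and outcome to fully explain away an observed association. The sketch below is a generic illustration of that technique, not the study's own analysis; the `e_value` helper is hypothetical, and the example input reuses the 41% relative mortality increase (risk ratio 1.41) reported by Englesbe and colleagues.

```python
import math

# Hypothetical illustration: E-value for an observed risk ratio RR > 1.
# An unmeasured confounder would need associations of at least this
# magnitude with both exposure and outcome to explain away the effect.
def e_value(rr):
    return rr + math.sqrt(rr * (rr - 1))

# Example: the 41% higher early-season mortality (RR = 1.41) from the
# NSQIP study cited above.
print(round(e_value(1.41), 2))  # → 2.17
```

Larger E-values indicate findings that are harder to attribute to unmeasured confounding alone.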