Our analyses of over 48,000 patients admitted to ICUs in 5 major teaching hospitals, which used a validated method of adjusting for admission severity of illness, yield several important findings. First, in-hospital mortality and LOS were similar in patients admitted to intensive care units from July through September and during later months of the academic year. Moreover, results were consistent when July, August, and September were analyzed separately, and there was no discernible pattern of variation when outcomes for individual months were examined over the entire year. Furthermore, we were unable to detect differences when individual academic years, surgical and nonsurgical patients, and individual hospitals and ICUs were examined separately. Results were similarly null in analyses of roughly 108,000 patients admitted to minor teaching and nonteaching hospitals.
Although belief in a so-called July Phenomenon in teaching centers persists in popular perception, few prior studies have examined differences in mortality or other indicators of quality. Buchwald et al. sampled 2,703 medical and surgical patients admitted from 1982 to 1984 to a single hospital in Boston, and found no significant difference for patients admitted in July or August compared with those admitted in April and May.13
Rich et al. also found no difference in mortality in 21,679 medical patients admitted from 1980 to 1986 to a single hospital in St. Paul,14
and in a follow-up examination of 240,467 medical and surgical patients admitted to 3 teaching hospitals in the Minneapolis–St. Paul region between 1983 and 1987.15
These same studies found conflicting results in LOS outcomes. In the study by Buchwald et al., there was no significant difference in LOS.13
In contrast, the first study by Rich et al. found decreased LOS and decreased hospital charges as housestaff experience increased.14
However, the multicenter follow-up found no difference in LOS on the medical services, and that LOS actually increased over the academic year on the surgical services.15
The current findings add to these earlier studies in important ways. First, the study involved a more contemporary cohort of patients and may better reflect recent patterns of hospital utilization. Second, the current study adjusted for admission severity of illness using more sophisticated risk-adjustment models that were developed from clinical data abstracted from patients' medical records. Third, the current study was the first to analyze a cohort of critical care patients, who may be particularly susceptible to harm from errors in judgment. Fourth, the current study included a large cohort of patients in nonteaching hospitals for comparison. Finally, the study employed a large sample size and was powered to detect relatively small differences in severity-adjusted mortality. Examination of the confidence intervals around the odds ratios for major teaching hospitals () indicates that our analysis would have been able to detect a statistically significant difference if the odds of death were roughly 1.20 for July or roughly 1.10 for the first quarter of the academic year.
When interpreting our findings it is important to consider several limitations. First, although the sample included several teaching and nonteaching hospitals, our study was limited to evaluation of patients admitted in a single geographic region. The generalizability of our findings to other regions remains to be established. Moreover, the generalizability of our findings to non-ICU settings of care is uncertain. Although we chose to study critical care patients because of their higher acuity and the likelihood that they would be susceptible to initial errors in judgment, it is possible that ICU residents receive a greater level of supervision than their non-ICU counterparts, thus masking any evidence of the July Phenomenon.
Second, our findings are based on data from the early to mid 1990s and may not reflect the impact of recent changes in billing regulations in teaching hospitals. However, one would expect that these recent changes would increase oversight of trainees by attending physicians and further diminish the July Phenomenon.
Third, although our risk-adjustment model exhibited excellent discrimination, it is possible that variables not assessed by APACHE III may have contributed to variations in mortality. Factors such as functional status, mental health, social support, and health insurance status may confound the interpretation of our data with respect to the outcomes observed.25
Moreover, no information was collected regarding the goals of ICU treatment, patient and/or family preferences for specific ICU treatments, or resuscitation status.
In addition, our analysis was unable to exclude an effect of seasonal variation in hospital mortality that may mask or enhance variation due to housestaff inexperience. For example, an underlying increase in hospital mortality during winter months could mask the appearance of a July Phenomenon and give the impression of a constant risk across the calendar year. However, we did not see evidence of seasonal variation in mortality in nonteaching hospitals.
Fourth, because risk-adjustment models incorporated data from up to 24 hours after admission, it is possible that adverse effects during the first 24 hours of admission were included in the severity assessment. However, prior studies that compared severity-of-illness assessments at admission and at 24 hours have found little difference between the 2 timepoints.18,26
Furthermore, we could identify no significant variation in APACHE III Acute Physiology Scores over the academic year, indicating no evidence for a July Phenomenon in care delivered during the first 24 hours of ICU admission.
Fifth, protocols for collecting study data excluded patients who died within 1 hour of admission to the ICU. Such patients may have been particularly vulnerable to delays in diagnosis and therapy. However, analyses examining mortality occurring within 1 day and 3 days of ICU admission yielded findings nearly identical to those examining all deaths.
Finally, our analysis was limited to the outcomes of in-hospital mortality and LOS. Although these 2 outcome measures are widely used indicators, quality of care encompasses multiple dimensions. Thus, the implications of our study for other aspects of the quality of care, such as the processes of care, costs, functional outcomes, patient satisfaction, and long-term mortality, are uncertain.
The current findings contrast with a growing body of literature suggesting that physician experience contributes to patient outcomes.1–11
This could indicate that the overall impact of trainees on patient outcomes is small. Alternatively, these results could suggest that hospitals and residency training programs compensate for housestaff inexperience early in the academic year. Compensation might include selective scheduling of more senior residents and greater oversight by attending physicians, nurses, and critical care fellows. Our data did not allow for direct analyses of these organizational factors that might protect from a July Phenomenon.
In conclusion, we found no evidence to support the existence of a July Phenomenon in this cohort of patients admitted to teaching hospital ICUs. Further research should analyze other dimensions of quality of care and examine those organizational features that may help teaching hospitals compensate for the inexperience of housestaff early in the academic year.