Contributors: BJ conceived, initiated, and coordinated the study and is the study guarantor. All the authors were involved in collecting, analysing, and interpreting the data; in particular, AH provided statistical support, and SG, BA, BJ, SD, and AC analysed the hospital episode statistics and the other routine data. The paper was written jointly by BJ, BA, BH, and LI, with all other authors commenting and editing the drafts and approving the final version.
Objective: To ascertain hospital inpatient mortality in England and to determine which factors best explain variation in standardised hospital death ratios.
Design: Weighted linear regression analysis of routinely collected data over four years, with hospital standardised mortality ratios as the dependent variable.
Subjects: Eight million discharges from NHS hospitals when the primary diagnosis was one of the diagnoses accounting for 80% of inpatient deaths.
Main outcome measures: Hospital standardised mortality ratios and predictors of variations in these ratios.
Results: The four year crude death rates varied across hospitals from 3.4% to 13.6% (average for England 8.5%), and standardised hospital mortality ratios ranged from 53 to 137 (average for England 100). The percentage of cases that were emergency admissions (60% of total hospital admissions) was the best predictor of this variation in mortality, with the ratio of hospital doctors to beds and general practitioners to head of population the next best predictors. When analyses were restricted to emergency admissions (which covered 93% of all patient deaths analysed) the number of doctors per bed was the best predictor.
Conclusions: Analysis of hospital episode statistics reveals wide variation in standardised hospital mortality ratios in England. The percentage of total admissions classified as emergencies is the most powerful predictor of variation in mortality. The ratios of doctors to head of population served, both in hospital and in general practice, seem to be critical determinants of standardised hospital death rates; the higher these ratios, the lower the death rates in both cases.
Wide variations in English hospital inpatient death rates have been observed over many years,1–4 and concerns have been expressed that such variations could reflect important differences in the quality of medical care available in different hospitals.5,6 Hitherto, research has provided contradictory evidence about the relation of hospital mortality to quality of care.6–9 While differences in patients’ age and severity of illness may explain some of the variation in hospital death rates, adjustment for age, sex, and severity leaves a large amount of unexplained variation.10–15
Comparisons of hospital inpatient death rates, published annually in the United States as league tables, have resulted in lively discussion and debate about their compilation and usefulness.13,16–18 Meaningful comparison of hospital death rates requires adjustments for severity of illness, length of hospital stay, age, diagnosis, and type of admission. Suitably standardised hospital death rates are used both as indicators of quality of care and in the setting of standards in the United States.19–22
The NHS offers unique opportunities for examining the reasons for differences in hospital death rates because it provides a virtually closed system of care available to almost everyone in the country. Since 1987, data have been routinely collected nationally on every admission to hospital, providing a comprehensive database on all inpatient admissions. By linking other sources of routinely collected data to analyse inpatient hospital death rates, we attempted to ascertain differences in hospital mortality in England and to determine the main factors explaining the variation between hospitals over a four year period.
We obtained data from three main sources: the NHS hospital episode statistics data system,23 the national decennial census,24,25 and other routine NHS data such as hospital characteristics,26,27 hospital staffing levels, and general practitioner distribution over England.28 For 51 hospitals, the results of a patient centred survey were available.29
The hospital episode statistics database from 1991-2 to 1994-5 includes information on every inpatient spell in NHS hospitals in England. Each spell includes the following information: patient’s age, sex, postcode of residence, primary diagnosis and up to six additional subdiagnoses coded with the International Classification of Diseases, Ninth Revision (ICD-9), type of admission (emergency or elective), and length of stay.
We obtained census data from 1991 at the level of the 8595 English electoral wards (average population 5500 residents), which provided a range of socioeconomic indicators30–33 and the percentage of people with self reported, limiting, longstanding illness. The census data also contained information about the NHS facilities, hospices, and local authority or nursing home places available within each area.
We used other data sources, many routinely published by the NHS, such as numbers of hospital beds and indicators of staffing levels of hospital doctors and nurses and general practitioners per head of population.
NHS hospitals vary greatly in their size and purpose. Our goal was to compare roughly similar facilities, and we therefore selected the data using criteria based on type and size as well as on the quality of the data recorded in the hospital episode statistics database.
We looked at four years of data, from 1991-2 to 1994-5. We excluded community and specialty institutions, small hospitals (under 9000 admissions during the four years), and hospitals without accident and emergency units. We also excluded any hospital that had poor quality data (more than 30% of inpatient episodes without a valid discharge or more than 30% of primary diagnoses recorded as “unknown”) for at least one of the four years. We included discharge records only—that is, episodes which ended in discharge (alive or dead) from the hospital rather than transfer to the care of another consultant within the hospital. In this paper, we use the terms admission and discharge to refer to the same outcome measure, namely the number of alive or dead hospital discharges; the term hospital refers to hospital trusts, which may occupy more than one site.
Discharges were included in the analysis if the primary diagnosis was one of 85 primary diagnoses which accounted for 80% of deaths. We eliminated from the analyses all transfers between hospitals (2% of admissions and 3% of discharges). Data on deaths outside hospital were unavailable; it was therefore difficult to take account of differences in discharge practices that could affect comparisons of inpatient mortality. To address this situation, we recorded the availability of other NHS resources within each hospital health authority area, selected patients by lengths of stay of less than 14, 21, or 28 days, and used length of stay as a possible explanatory variable.
Several studies stress the importance of adjusting for severity of illness in hospital admissions when comparing quality of health care.34–41 Since hospital statistics of inpatient episodes do not include detailed data on clinical severity, in addition to standardising for primary diagnosis, we calculated several measures of comorbidity based on discharge diagnoses for each hospital: the number of bodily systems affected by disease, the percentage of patient admissions with one of the 15 most serious primary diagnoses (responsible for 50% of all deaths), and the percentage both of cases and of deaths with comorbidities (that is, subdiagnoses) in each of the 85 diagnoses that led to 80% of all deaths. We ranked subdiagnoses by their univariable correlation with hospital standardised mortality ratios and created a measure of comorbidity by combining the top two or three comorbidity diagnoses. We used each of these measures in our model as independent estimates of the severity of illness treated.
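The ranking step described above can be sketched as follows. This is a minimal illustration with invented per-hospital data; the function names, and the use of the mean prevalence of the top-ranked codes as the combined measure, are our illustrative choices rather than the study's exact procedure.

```python
import numpy as np

def rank_comorbidities(prevalence_by_code, smr):
    """Rank candidate comorbidity codes by the absolute univariable correlation
    between their per-hospital prevalence and the standardised mortality ratio."""
    strength = {code: abs(np.corrcoef(prev, smr)[0, 1])
                for code, prev in prevalence_by_code.items()}
    return sorted(strength, key=strength.get, reverse=True)

def combined_measure(prevalence_by_code, top_codes):
    """Combine the top-ranked comorbidities into one per-hospital measure;
    here the mean prevalence of the selected codes (our illustrative rule)."""
    return np.mean([prevalence_by_code[c] for c in top_codes], axis=0)

# Invented per-hospital prevalences (%) for three hospitals:
smr_values = np.array([90.0, 100.0, 110.0])
prevalence = {"bronchopneumonia": np.array([1.0, 2.0, 3.0]),
              "heart failure": np.array([5.0, 5.0, 4.0])}
top_two = rank_comorbidities(prevalence, smr_values)[:2]
measure = combined_measure(prevalence, top_two)
```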
Because initial findings suggested that the percentage of emergency admissions was the strongest predictor of hospital standardised mortality ratios, we built two models: the first (model A) included all admissions (both emergency and elective), and the second (model B) looked at mortality for emergency admissions only.
We conducted weighted multiple linear regressions that took account of the varying hospital volumes of cases (weights were defined as the reciprocal of the standard error squared where standard errors were derived using the normal approximation to a Poisson distribution of observed deaths). Each potential explanatory variable was used separately in a univariable regression model, and then multivariable analyses were performed. Backwards and forwards stepwise selection techniques were used, with a significance level of P=0.01. The adjusted R2 was derived—this is the percentage of variation explained by the model after adjustment for the number of variables in the model.
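The weighting scheme can be illustrated with a short sketch. The numbers are hypothetical, the function names are ours, and we assume SE(SMR) ≈ 100 × √observed / expected under the normal approximation to the Poisson distribution of observed deaths described above.

```python
import numpy as np

def poisson_weights(observed, expected):
    """Weight for each hospital: reciprocal of the squared standard error of
    its standardised mortality ratio, taking SE(SMR) ~ 100*sqrt(observed)/expected
    from the normal approximation to a Poisson count of deaths."""
    se = 100.0 * np.sqrt(observed) / expected
    return 1.0 / se**2

def weighted_linear_fit(x, y, w):
    """Weighted least squares fit of y = a + b*x via the normal equations."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return a, b

# Invented three-hospital illustration:
observed = np.array([400.0, 900.0, 1600.0])   # observed deaths
expected = np.array([500.0, 850.0, 1500.0])   # expected deaths
smr = 100.0 * observed / expected             # dependent variable
pct_emergency = np.array([50.0, 60.0, 70.0])  # candidate predictor
w = poisson_weights(observed, expected)
intercept, slope = weighted_linear_fit(pct_emergency, smr, w)
```

Larger hospitals have smaller standard errors and so receive proportionally more weight, which is the point of the scheme.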
The residuals were checked with standard diagnostic methods and were found to be satisfactory.42 The stability of the final model was checked by repeating the fitting procedure after removing observations with high influence. Fractional polynomials were also used to check for curvature in the explanatory variables, and no curvature was found.43
Dependent variable—Our dependent variable was the hospital indirectly standardised mortality ratio, which is defined as the ratio of actual number of deaths to expected deaths multiplied by 100. We calculated death rates for the four years studied stratified by age (using 10 year age groups), sex, and the 85 primary diagnoses. These were used to calculate the expected deaths for each hospital by multiplying the number of hospital inpatient admissions in each stratum of age, sex, and primary diagnosis by the stratum specific rates. We also calculated hospital standardised mortality ratios using direct standardisation, which produced similar results to those from indirect methods.
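Indirect standardisation as described can be sketched like this; the strata, rates, and counts below are invented for illustration.

```python
def expected_deaths(admissions_by_stratum, national_rate_by_stratum):
    """Sum over strata of hospital admissions in the stratum multiplied by
    the England-wide death rate for that stratum."""
    return sum(n * national_rate_by_stratum[s]
               for s, n in admissions_by_stratum.items())

def smr(observed, expected):
    """Indirectly standardised mortality ratio: 100 * actual / expected deaths."""
    return 100.0 * observed / expected

# Toy strata of (age band, sex, primary diagnosis) with invented national rates:
rates = {("75-84", "M", "pneumonia"): 0.20, ("65-74", "F", "stroke"): 0.10}
admissions = {("75-84", "M", "pneumonia"): 100, ("65-74", "F", "stroke"): 200}
expected = expected_deaths(admissions, rates)  # 100*0.20 + 200*0.10 = 40
ratio = smr(observed=50, expected=expected)    # 100 * 50/40 = 125.0
```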
Independent variables—The Appendix lists each of the independent variables considered in a univariable analysis. Three types of variables were used: aggregated discharge data such as the percentage of emergency cases, individual hospital data such as total number of beds, and community attributed data such as the percentage of patients with limiting longstanding illness. Aggregate discharge data were taken from the individual discharge records and aggregated across each hospital. Community data were taken from geographical areas (1991 electoral wards and 1995 health authorities), attributed from area of residence to each discharge (via postcode), and then averaged across discharges for each hospital.
We retained 183 acute general hospital trusts for analysis, roughly two hospitals per health authority in England. Over the four year study period, 7.7 million admissions were considered, of which 60% were classified as emergencies, accounting for 93% of all deaths considered (table 1). These 183 hospitals covered 85% of all admissions (88% of emergency admissions) in the England hospital episode statistics data for the 85 diagnoses.
Crude death rates varied between hospitals from 3.4% to 13.6%, with a mean mortality of 8.5%. The mean annual mortality fell from 9.2% to 7.6% over the four years. When annual death rates were standardised by age, sex, and primary diagnoses the mean hospital standardised mortality ratios fell from 104.9 to 97.0 (average annual fall of 2.6%).
Length of stay proved not to be significant, and table 2 shows results only for all lengths of stay. It shows the predictors associated with hospital standardised mortality ratios at the 1% significance level and their regression coefficients. Table 3 shows the univariable associations for these predictors.
For model A, based on emergency and elective admissions, the adjusted R2 was 0.65. For model B, based on emergency admissions only, the adjusted R2 was 0.50. The results show that in model A, after adjustment for the percentage of emergency admissions, the best predictors of hospital mortality were numbers of hospital doctors per 100 hospital beds and general practitioners per 100 000 population. The figure displays hospital standardised mortality ratios for three groups of hospitals and areas: low doctored (numbers of doctors per bed and general practitioners per head of population below mean values by at least ½ SD), with a mean hospital standardised mortality ratio of 112; high doctored (doctors per bed and general practitioners per head of population more than ½ SD above mean), with a mean hospital standardised mortality ratio of 88; and intermediate hospitals, with a mean hospital standardised mortality ratio of 99.
In model A higher hospital standardised mortality ratios were associated with higher percentages of emergency admissions, lower numbers of hospital doctors per hospital bed, and lower numbers of general practitioners per head of population. The numbers of hospital doctors of different grades were also considered as explanatory variables, but total doctors per bed was found to be the best predictor. Higher hospital standardised mortality ratios were also associated with four other factors: low standardised admissions ratios for the health authority where the hospital was located, higher percentages of live discharged patients who went home (that is, non-death discharges to normal residence), higher percentages of cases of comorbidities of bronchopneumonia or heart failure or fracture of neck of femur, and lower availability of NHS facilities per 100 000 population for the health authority where the hospital was located. At the 5% level of significance, only one other predictor entered the model—possession of a specialist renal unit, which was associated with lower hospital standardised mortality ratios. At the 1% level, only the proportion of emergency admissions, numbers of hospital doctors per bed, and numbers of general practitioners per head of population were significant.
For model B, the percentage of cases with comorbidities of bronchopneumonia or malignant neoplasm was a significant predictor, while the number of general practitioners per 100 000 population was no longer significant. At the 5% level of significance, two variables entered the model: the proportion of grade A nurses (auxiliary nurses in training) as a percentage of all hospital nurses and bed occupancy. High percentages of grade A nurses and high bed occupancy were associated with higher hospital standardised mortality ratios.
By removing the effect of factors directly beyond hospital control (that is, all except doctors per bed), it is possible to calculate a hospital standardised mortality ratio that is likely to be a more valid measure of hospital quality of care. When we did this the range of resulting hospital standardised mortality ratios narrowed to 79-125.
We have calculated hospital death ratios adjusted for age, sex, and diagnosis and looked at their association with factors likely, on clinical grounds, to be associated with quality of care. We focused on factors in the hospital and in the community surrounding the hospital that took account of financial and human resources, such as the number of doctors and nurses per hospital bed and the number of general practitioners per head of population from which hospital admissions were drawn.
The overall standardised death ratio in the 183 hospitals studied decreased on average by 2.6% a year between 1991-2 and 1994-5, but the variation between hospitals remained large. The associations we found between lower numbers of general practitioners per head of population and higher death rates suggest several possible explanations. When general practitioners are relatively overworked, the patients whom they send to hospital may be relatively sicker, and in these areas patients are more likely to be admitted as emergencies: a high percentage of emergency admissions was significantly correlated with low numbers of general practitioners per 100 000 population, that is, with high average list size (r=−0.35, P<0.001). In model A of our regression analysis a reduction of 5000 hospital deaths per year was associated with a 27% increase in hospital doctors (9000 more doctors) or an 8.7% increase in general practitioners (2300 more doctors). In other words, our results suggest that a 1% increase in the number of hospital doctors per bed (333 more hospital doctors if the number of beds remains unchanged) is associated with a 0.119% decrease in hospital standardised mortality ratios (186 fewer deaths), and a 1% increase in general practitioners per head of population (267 more general practitioners if the population is unchanged) is associated with a 0.368% decrease in hospital standardised mortality ratios (575 fewer deaths).
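As a rough consistency check on these figures (our arithmetic, using only the numbers quoted above), both routes imply a reduction of roughly 5000 deaths a year:

```python
# Figures quoted in the text for model A; variable names are ours.
deaths_per_1pct_hospital_doctors = 186  # fewer deaths per 1% rise in doctors per bed
deaths_per_1pct_gps = 575               # fewer deaths per 1% rise in GPs per head

hospital_route = 27 * deaths_per_1pct_hospital_doctors  # a 27% rise in hospital doctors
gp_route = 8.7 * deaths_per_1pct_gps                    # an 8.7% rise in GPs
# Both come out close to the 5000 fewer deaths a year cited in the text.
```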
In discussing risk and safety in hospital practice, Vincent puts heavy clinical workloads at the top of a list of conditions in which unsafe acts may occur.44 Compared with other countries in the Organisation for Economic Cooperation and Development (OECD), the United Kingdom has a low number of physicians per head of population,45 although it is planned to change this (Department of Health press release 98/337, 14 August 1998). In 1994 the United Kingdom had 1.6 physicians per 1000 population, which is more than one standard deviation below the mean of 2.7 for the 28 countries recorded by the OECD, the UK average amounting to only 59% of the OECD average for that year.
A higher percentage of live discharges to patients’ homes was also associated with higher hospital standardised mortality ratios. This probably reflects the fact that, where there are more NHS facilities, hospices, and local authority or nursing home places available, patients are more likely to be discharged to one of these for recovery and any deaths that follow would not be in hospital. The number of NHS facilities per head of population in the district surrounding the hospital is also a good predictor—the more facilities, the lower the hospital standardised mortality ratio. This effect may be similar to that of non-home discharges—that is, where these facilities do not exist patients are more likely to remain in hospital to die.
The age standardised admission ratio was also an important predictor, with higher admission rates being associated with lower mortality ratios—possibly indicating that some hospitals may admit relatively higher percentages of less sick patients because they have lower thresholds for admission.
At the 5% level of significance, hospitals with a specialist renal unit had lower hospital standardised mortality ratios—possession of a renal unit possibly being a marker of the quality of hospital care generally. Measures of social deprivation of the area of residence were not significantly related to mortality ratios. However, the percentage of hospital nurses graded A (the lowest grade, which indicates auxiliary nurses in training) was associated with higher hospital standardised mortality ratios: this result further reinforces the relation between staffing factors and outcomes.
Contrary to recent US data,46 teaching hospital status was significant at the univariable level, but, once adjusted for doctor:bed ratio in the multivariable regression, proved not to be significant. University teaching hospitals had 56% higher doctor:bed ratios than non-teaching hospitals (mean values 0.378 v 0.243 respectively).
Considerable care should be exercised in interpreting hospital mortality data. In view of the literature on case mix,9,13,16,34,36,47 it is surprising that only one of our measures of comorbidity was significant in the model (table 3), and this might be related to the lack of data on severity of illness. Data for individual hospitals could prove useful, especially if broken down by individual diagnoses or specialties, provided that the number of cases is sufficient to give narrow confidence intervals and the data adjustments described can be made.48–51 Results could prompt hospitals with high standardised mortality ratios to examine their care processes and staff ratios.
We have found an association between mortality rates and numbers of doctors (in hospital and in general practice). We know of no previous studies that have examined this association, and our findings need to be validated by further investigations. A matched pair study of patients admitted to hospitals with high and low standardised mortality ratios could help to elucidate these findings. In such an investigation detailed data would have to be collected to allow for accurate adjustment of case mix.
Most of the significant predictors in our two models are outside the direct influence of hospital policy (except doctor numbers per bed), and adjustment for these external factors narrows the range of mortality ratios. This finding indicates that variation in quality of hospital care is smaller than incompletely adjusted statistics suggest, and that our model may be used to produce more valid indicators of quality of care.
We thank Professor John Henry and Dr Paul Aylin for reading the paper, Debbie Hart for data preparation, and the BMJ’s referees for their comments.
Competing interest: Professor Jarman is the medical member of the Bristol Royal Infirmary inquiry, but this research was completed before he was appointed on 26 January 1999.

Appendix: independent variables considered in the univariable analyses, grouped as aggregate discharge data, individual hospital data, and community attributed data. Variables with high adjusted R2 from univariable regression were entered into the multivariable regression models. Community attributed data were based on the electoral ward of patient residence and averaged for all admissions (aggregate health authority of hospital data except where stated).