The author's reply: We thank Dr Ward for his response to our paper. He suggests that the EuroSCORE has some problems as a risk model, which we have already acknowledged in our paper. We do feel, however, that the EuroSCORE is “fit for purpose” as we have described. The suggestion that apparent improvements in outcome are purely due to surgeons manipulating the risk score is contradicted by the evidence presented: crude mortality was significantly lower after public disclosure despite increases in the mean age and the proportion of octogenarians (along with other risk factors). Both of these are objective numerical measures uploaded from hospital information systems; they are not open to “gaming” but are clearly related to increased operative risk. We have audited the quality of our risk scoring locally and have not seen evidence of “gaming”.
Interestingly, the author states that our study conclusions are not supported by experience from New York State, citing a single reference. We summarised data from multiple studies from several American states in our discussion, and put our findings into context. We did not overstate the case. We have already acknowledged that an ideal study into the implications of publicly reporting outcomes would include data on patients turned down for surgery, but we do not agree with the suggestion that public reporting has damaged surgical training, a claim made with no justifying evidence. Data from our hospital show that the proportions of cases performed by trainees in each year from 2003–4 to 2006–7 were 31%, 34%, 31% and 34%, respectively, despite named-surgeon data being published in 2005. Clearly, the number of cases done by trainees has not suffered, but it is easy to construct an argument that the quality of supervision has improved because outcomes are scrutinised, and it may be that this has contributed to the improvement in quality that we have shown.