Pressure is mounting for surgeons to demonstrate that they can operate well, maintain their performance, and deliver acceptable results. Improved data collection after the Bristol affair may provide more information on the performance of individual surgeons, but a large number of failures are needed before statistical significance is reached,1 and, for patients, this will be a case of shutting the stable door after the horse has bolted. We need to be able to measure operative skill, set standards, and assess surgeons before any damage is done.
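The statistical point can be made concrete with a standard sample-size calculation for comparing two proportions (one-sided normal approximation). The baseline and elevated failure rates below are illustrative assumptions, not audited figures from any surgical series.

```python
import math
from statistics import NormalDist

def cases_needed(p0, p1, alpha=0.05, power=0.80):
    """Approximate number of operations needed to detect a failure
    rate p1 against a baseline p0 with the given significance and
    power. p0 and p1 here are hypothetical illustrative rates."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # quantile for significance
    z_b = NormalDist().inv_cdf(power)      # quantile for power
    numerator = (z_a * math.sqrt(p0 * (1 - p0))
                 + z_b * math.sqrt(p1 * (1 - p1)))
    return math.ceil((numerator / (p1 - p0)) ** 2)

# Detecting a doubling of a 2% failure rate to 4% takes hundreds of cases:
n = cases_needed(0.02, 0.04)
```

Even a doubling of a low complication rate requires several hundred operations before the excess becomes statistically detectable, which is why outcome audit alone cannot protect patients prospectively.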
Although many factors influence surgical outcome, the skill of the surgeon in the operating theatre is very important. A skilfully performed operation is 75% decision making and 25% dexterity2; in some specialties, such as minimally invasive surgery, dexterity becomes more important. Surgeons must pass formal examinations of surgical knowledge, yet there is no equivalent requirement to demonstrate operative dexterity. Common sense suggests that technical skill affects outcome. However, despite variation in operative results between surgeons,3 it has so far been impossible to relate outcome to surgical dexterity, largely because we have no reliable way of assessing operative skill. This deficiency in assessment needs to be addressed.4
Investigators have observed surgeons in the operating theatre and in the skills laboratory using both objective and subjective criteria. Operative speed is one objective measure of technical skill and can be important: Robert Liston challenged observers with "Now gentlemen, time me," and 28 seconds later the amputated limb lay in the sawdust.5 More recently, time has been used to quantify skill in junior6 and experienced7 surgeons. Measuring competence merely by setting time targets for certain procedures is, however, crude and probably unacceptable: a fast surgeon is not necessarily a good surgeon. Counting the number of procedures performed has also been used to accredit surgeons8 but tells us nothing about how well the surgeon operates.
Finding objective criteria for judging good surgical technique is difficult, and most assessments are purely subjective. Lord Lister was observed to have “none of the dramatic dash and haste of the surgeon of previous times ... he proceeded calmly, deliberately, and carefully.” As he told his students, “Anaesthetics have abolished the need for operative speed and they allow time for careful procedure.”5 Junior surgeons have been ranked using global scores based on subjective criteria, but multiple observers are needed to obtain acceptable reliability.9 Gathering panels of experts to watch videos or attend theatre may be possible for a research project but is expensive in manpower and time, and this limits its feasibility in real life.
Assessment may be easier in the surgical skills training laboratory than in theatre. Surgeons may behave differently under simulated conditions, but if the tasks are designed carefully to reflect real surgical practice such tests could fulfil the essential requirements of feasibility, reliability, and validity.10 Abstract tests of manual dexterity have not stood up to validation11 and would appear to be so far removed from the act of surgery as to be unhelpful in selecting potential surgeons. Subjective methods using structured scoring systems have been shown to be reliable.12 Although multiple observers were used to rate candidates in terms of “economy” and “fluidity” of movement, it was difficult to validate these scores with subjective rankings of residents in the operating theatre.13
Recent work has tracked the movement of laparoscopic surgical instruments in the laboratory. Objective measurements of economy of motion and number of movements made are generated by the assessment device. These criteria have been validated for tasks in both reality and virtual reality.14,15 Devices that objectively and reliably quantify surgical dexterity could have advantages over traditional subjective evaluation, particularly as a screening tool.
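The metrics such tracking devices report can be sketched simply. The function below is a minimal illustration, assuming a hypothetical data format of sampled instrument-tip coordinates; it is not the method of any particular commercial system.

```python
import math

def motion_metrics(positions, ideal_distance):
    """Compute simple dexterity metrics from sampled instrument-tip
    positions (hypothetical format: a list of (x, y, z) tuples).
    ideal_distance is the shortest possible route between the task
    targets; counting discrete movements would additionally need a
    velocity threshold, omitted here for brevity."""
    # Total path length: sum of straight-line distances between samples.
    path = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    # "Economy of motion": ratio of the ideal route to the actual path
    # travelled (1.0 = perfectly economical, lower = more wasted motion).
    economy = ideal_distance / path if path > 0 else 0.0
    return path, economy

# A tip moving straight from (0,0,0) to (2,0,0) is perfectly economical;
# a detour through (1,1,0) lengthens the path and lowers the score.
straight = motion_metrics([(0, 0, 0), (1, 0, 0), (2, 0, 0)], ideal_distance=2.0)
detour = motion_metrics([(0, 0, 0), (1, 1, 0), (2, 0, 0)], ideal_distance=2.0)
```

The appeal of such measures is that they are generated automatically and identically for every candidate, removing the observer bias inherent in panel ratings.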
A system that can provide unbiased and objective measurement of surgical precision (rather than just speed) could help training, complement knowledge-based examinations, and provide a benchmark for certification. A specific and sensitive test of operative competence could also detect important problems and might improve surgical outcome. Revealing underperformance early would allow for further training or career guidance towards other, less practical specialties. The surgical profession needs a reliable and valid method of assessing the operative skill of its members. A driving test may not be a guarantee against accidents, but it makes it less likely that you career off the road. Surgeons, the public, and politicians need reassurance.