We found that use of an incident reporting system integrated into an EPR was very high among health care professionals: 85.1% of the procedures performed had an incident reporting form completed. Most of these forms (84.5%) reported that no incident had occurred; the remaining 15.5% reported one or several intraoperative incidents.
This level of reporting is significantly higher than the levels usually reported in the literature, which range from 4.1% to 23%.16-18
There are several reasons that may explain these differences. First, authors16,17 have found low levels of reporting for systems that rely on handwritten forms or verbal communication. These systems require reporters to spend extra time completing forms or contacting the risk manager or quality assurance administrator to report incidents. A recent hypothesis emerging from qualitative investigations suggests that time constraints could play a major role in the level of incident reporting, particularly by physicians.24,25
An incident reporting form integrated into the EPR that health care professionals already use on a day-to-day basis requires no extra time to complete, which greatly facilitates incident reporting. This may partly explain why we found a significantly higher level of reporting than other authors.16,17
Second, even when authors report the use of Web-based systems18,26 that also include predefined categories of events and should, in theory, be more convenient to use than paper-based forms, the level of reporting remains low (4.3% to 30.7%). This could be due to specific characteristics of the user interface design. Benson et al.26 implemented a computerized incident reporting form with 89 different predefined categories of events. However, it has been demonstrated that human short-term memory has processing limitations and often obeys the "seven plus or minus two" rule.27
A list of incidents that does not exceed seven lines, with seven incidents per line, is much more likely to be processed effectively and appropriately than a long list of categories of events displayed over several pages. In our system, we limited the categories of incidents to 16, with two additional categories for "no incident" and for events not listed (a narrative free-text field). Predefined categories could be selected from this short list with a single mouse click. Furthermore, the position of the reporting form in the EPR, immediately below the section recording the anesthetic technique used and the medication administered, may have acted as a constant reminder for users to complete it. This may explain why Sanborn et al.,18 despite using a short, computerized incident reporting form, found only 4.1% of adverse events reported: their form was located on a separate Web page.
The third reason for a higher level of reporting in our system when compared to current figures reported in the literature could be the strict control of the reporting system within professional boundaries. Our system was developed, implemented, and controlled by physician staff members of the department of anesthesia.
Although the system did not guarantee anonymity to reporters, almost none of the incidents reported were disclosed outside the department. Events were strictly managed within departmental boundaries by the anesthesia quality assurance officer and all staff members involved. Only exceptional sentinel events, such as wrong-side procedures or death in the operating theatre, required notification of hospital administrators and risk managers.28
This may decrease fear of litigation, a known barrier to incident reporting.14
The fourth reason could be the regular feedback provided to staff members on the use of the system and the discussion of selected incidents during the weekly departmental mortality-morbidity conferences. This may increase staff members' recognition of the utility and efficiency of the reporting system.
Finally, the impact of a positive reporting culture must also be considered. If staff members know that their reports will be used as a quality improvement tool and not as a punitive instrument, the level of voluntary reporting will increase significantly.29
In our department, reported incidents are regularly used to stimulate discussion during the weekly mortality-morbidity conferences and are regarded as a helpful tool to improve practice and learn from mistakes. This certainly contributes to enhancing the reporting culture in the department and the use of our system. However, the major benefit seemed to result from the design of the reporting system itself. Before its implementation, the number of incidents reported on traditional paper-based forms averaged seven per week. After 2002, and throughout the years following full implementation, the level of reporting was four times higher, averaging 30 incidents a week.
We also found that the integrated reporting system had a good level of relevance and reliability. Only 19.8% of events reported in the free-text section were unrelated to patient safety, and 80.6% of the predefined categories of incidents reported matched those identified by reviewers in medical charts. This shows that allowing part of the reporting to be performed on preselected categories of events does not compromise the reliability of the information provided. Several factors may explain this. First, major incidents occurring during anesthesia were defined through a large, comprehensive consensus process involving many staff members of the department. This allowed a wide range of potential incidents to be discussed and the most pertinent to be carefully selected, producing a comprehensive catalogue of anesthetic incidents from which the most relevant were listed. Second, the open-ended field allowed the remaining, non-listed events to be reported. Finally, regular feedback provided to staff members allowed categories to be modified, and others to be added or deleted, as required.
There are a number of limitations to this study. First, the EPR and its integrated reporting form were mainly used by anesthesiologists for patient management during the perioperative period. Because of confidentiality, interoperability, and portability issues, its use was limited to departmental boundaries. Its effectiveness and reliability at the hospital level are therefore unclear. However, there is no reason to suspect that the four-fold increase in reporting we observed in our department after implementation of the new system could not be reproduced in other hospital settings.
Second, to assess the level of adherence, we used all categories of incidents, including the "no incident" category. However, for various reasons, anesthesiologists may have ticked the "no incident" category even though an incident had actually occurred. This may have led to an ambiguous interpretation of the true level of adherence to reporting practices, which is why we also assessed the level of agreement between the reporting system and the medical charts. We found that a large proportion of the incidents documented in the medical records were actually reported in the EPR.
Third, to assess the level of agreement between events notified in the reporting system and the medical charts, we used a screening method to select the charts most likely to include undesirable events, choosing patients with an unplanned admission to the ICU. While this increased the efficiency of the chart review process, it may have introduced a selection bias toward charts containing the adverse events classified within the predefined categories of the reporting system. This may have falsely increased the level of matching between the two measurement methods, since a single incident in the reporting system matching one found by the reviewers was considered proof of agreement. To account for this potential bias, we restricted the definition of an incident in the medical charts to events identified by both reviewers.
Fourth, 19.4% of the incidents reported did not match those identified by reviewers in the medical charts, with a level of agreement beyond chance (kappa) of 0.50 (95% CI 0.44–0.56). This may be due to the use of chart review as the reference, or gold standard, for assessing the reliability of the reporting system. The limitations of this method are well known,30 particularly the low level of agreement between reviewers regarding the presence of adverse events and suboptimal care.31,32
Because we considered only incidents identified by both reviewers in the medical charts (full agreement), a number of events that truly occurred but were identified by only one of the two reviewers were excluded. This removed some incidents documented in the medical charts from the analysis, leading to a lower level of agreement between the two systems than might otherwise have been expected.
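For readers unfamiliar with the statistic, the chance-corrected agreement reported above is Cohen's kappa: observed agreement corrected for the agreement expected from each rater's marginal frequencies. A minimal sketch follows; the function and the binary judgements are illustrative only and do not reproduce the study data.

```python
# Minimal sketch of Cohen's kappa for two raters (here standing in for the
# EPR reporting system vs. chart review). Data below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equally long label lists."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgements: 1 = incident present, 0 = absent.
epr    = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
charts = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(epr, charts), 2))  # prints 0.4
```

A kappa of 0.50, as found here, is conventionally read as moderate agreement, noticeably above the chance-level value of 0.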