We used clinical surveillance to detect AEs on four hospital services. AE risk per encounter was 13%, ranging from 4% to 25% by service. Overall, 36% of AEs were preventable, though this proportion was highly variable between services, ranging from 13% to 56%. General Internal Medicine had the greatest number of AEs but also had the most patients and the highest burden of comorbidities. After controlling for length of stay and the number of patients observed, the risk of AEs, including those with the most severe consequences, was greatest in the two critical care areas. The types of AEs identified differed between services; however, therapeutic errors were common in all subgroups studied. Finally, clinical surveillance identified an important number of potential AEs.
These findings are important because they identify ample opportunities for quality improvement. We found a greater AE risk than most previous studies of AEs, which have tended to rely solely on chart review.1–7
We used a clinical observer to identify events within 24 h of their occurrence, and reviewed them within 7 days of their identification. The timely identification of problems would ultimately mean a more rapid response to rectify them. The reviews were performed by clinicians who knew the clinical setting and were well respected by their local peers. We believe that these factors resulted in classifications with a much greater degree of face validity than existing methods of identifying AEs (such as incident reports and chart reviews). These factors also led us to identify a significantly greater proportion of events attributed to diagnostic and therapeutic errors than prior studies did. Furthermore, the systematic nature of our data collection made our methods less prone to bias than morbidity and mortality reviews or closed-claims analysis.15 43 44
Our study supports the concept that improvement strategies must be locally developed and must target specific problems. We found that AE risk and type varied by service. This finding likely reflects differences in the frequency of the care processes performed within each area, which in turn create different opportunities for harm. Regardless of the cause, it follows that solutions will also necessarily vary. For example, in the Cardiac Surgery Intensive Care unit, interventions should be directed at preventing certain types of surgical complications. On the Medicine service, interventions that focus specifically on reducing errors in the medication administration process would be most beneficial. The conclusion that errors and their solutions vary by patient service is intuitively obvious but is often not incorporated in the 'top down' approaches to improving quality and safety taken by accreditation organisations or other regulators.
Our study also supports the conclusion that priority setting, as it pertains to safety improvement, must occur in a highly strategic manner. The greatest number of AEs occurred on the service with the frailest patients. Effective interventions to improve safety in these patients would produce the largest reduction in the institution's total number of events. However, after controlling for the number of patients and length of observation, these patients had a risk of events similar to that of the least frail patients. The highest event risk and the most severe events occurred on services that used very invasive therapies and whose patients were most acutely ill. Therefore, efforts to prevent injuries in these groups might be more efficient and associated with a greater overall impact in terms of reducing the economic burden of AEs. For all these reasons, organisations will have to be very clear about their goals when targeting the safety problems they hope to solve.
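The distinction drawn here, between the absolute number of events on a service and the exposure-adjusted risk per patient or patient-day, can be made concrete with a brief sketch. All counts below are hypothetical illustrations, not data from this study:

```python
# Illustrative only: hypothetical counts, NOT figures from this study.
# A service can have the most AEs in absolute terms while having a
# lower exposure-adjusted rate than a smaller, higher-acuity service.

services = {
    # service name: (AE count, total patient-days observed)
    "General Medicine": (120, 6400),     # many patients, long stays
    "Cardiac Surgery ICU": (45, 900),    # fewer patients, invasive care
}

def rate_per_1000_patient_days(aes, patient_days):
    """Exposure-adjusted AE rate: events per 1000 patient-days."""
    return 1000 * aes / patient_days

for name, (aes, days) in services.items():
    rate = rate_per_1000_patient_days(aes, days)
    print(f"{name}: {aes} AEs, {rate:.1f} per 1000 patient-days")
```

With these made-up numbers, General Medicine has nearly three times as many AEs (120 vs 45), yet the ICU's adjusted rate (50.0 per 1000 patient-days) far exceeds Medicine's (18.8), mirroring the trade-off between reducing total institutional events and targeting the highest-risk settings.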
Finally, this research has highlighted the feasibility of clinical surveillance. We used a comprehensive strategy involving the use of direct observation, voluntary and prompted reporting, as well as daily chart reviews. This combination of AE detection methods has been recommended by others15
and serves to increase the overall performance of the monitoring system. Our programme was well accepted on the four services studied and required very little financial support (the clinical observer was the only funded position). All other functions required to maintain the process were voluntary, and very little infrastructure support was required. There was no evidence that providers who were being observed were overtly or covertly averse to the programme or changed their behaviours. In fact, the methodology was so well received that our institution is currently taking steps to implement it on other services.
Our findings are important in that we used a reproducible method of case detection and case review, applying a standard method of classification to facilitate comparison. However, there are three important limitations we would like to highlight. First, we are uncertain of the programme's reliability. Threats to reliability include the observation and peer-review processes. We attempted to mitigate these concerns by standardising the process of observation using triggers and case report forms and, for the peer review, by using multiple reviewers. These approaches have been successful in improving reliability in prior settings. However, we recommend further research before our method is adopted by health systems to compare institutions. Second, the generalisability of our findings on feasibility is unknown, particularly because the programme was performed using a single observer. Even though we successfully implemented our programme within multiple settings in different facilities, other settings may present challenges. Further work is recommended to establish whether the programme can be successfully adapted within non-teaching hospitals and in different health systems. Third, the observer cannot be everywhere at once and may miss some events as they are happening. This means the programme cannot be used to intervene systematically and directly when care does not meet the standard. However, this is not the stated purpose of the programme. Furthermore, if cases are detected in which an immediate intervention is warranted, that response can easily be made. For example, we identified a few instances in which a critical laboratory abnormality had not been recognised and simply brought it to the attention of the treating team.
In summary, our study suggests that clinical surveillance is an effective means of detecting patient safety issues. Before making a general recommendation for wide adoption of this method, we recommend further evaluations to address four issues. First, we recommend comparing the findings of clinical surveillance with those of other methods of identifying patient safety events. Second, we recommend an economic analysis to determine the most efficient method of adverse event detection. Third, we recommend evaluations of this method in different hospitals. Finally, and most importantly, we recommend studies to determine whether adverse event detection leads to improvement in patient outcomes.