Objective To evaluate the performance of a routine incident reporting system in identifying patient safety incidents.
Design Two stage retrospective review of patients' case notes and analysis of data submitted to the routine incident reporting system on the same patients.
Setting A large NHS hospital in England.
Population 1006 hospital admissions between January and May 2004: surgery (n=311), general medicine (n=251), elderly care (n=184), orthopaedics (n=131), urology (n=61), and three other specialties (n=68).
Main outcome measures Proportion of admissions with at least one patient safety incident; proportion and type of patient safety incidents missed by routine incident reporting and case note review methods.
Results 324 patient safety incidents were identified in 230/1006 admissions (22.9%; 95% confidence interval 20.3% to 25.5%). 270 (83%) patient safety incidents were identified by case note review only, 21 (7%) by the routine reporting system only, and 33 (10%) by both methods. 110 admissions (10.9%; 9.0% to 12.8%) had at least one patient safety incident resulting in patient harm, all of which were detected by the case note review but only six (5%) by the reporting system.
Conclusion The routine incident reporting system may be poor at identifying patient safety incidents, particularly those resulting in harm. Structured case note review may have a useful role in surveillance of routine incident reporting and associated quality improvement programmes.
Patient safety incidents (defined as any unintended event caused by health care that either did or could have led to patient harm) have been shown to cause harm in between 3% and 17% of hospital inpatients.1 2 3 4 5
After the development of the national risk management standards in 1995, most NHS hospitals in England and Wales established reporting systems as part of their risk management programme.6 People involved in or witnessing a patient safety incident complete a form that is sent to the local reporting system, where the incident is classified and entered into a database.7 The National Patient Safety Agency developed a national reporting and learning system in 2003 to collate reports of patient safety incidents from local organisations.8 This system aims to help the NHS to learn from patient safety incidents and to identify trends and patterns relating to patient safety.8 9 The system should, therefore, be able to identify a representative sample of patient safety incidents and provide adequate data about the cause, contributory factors, preventability, and impact of these incidents.9 10 In this paper we evaluate the relative performance of a local routine incident reporting system that feeds into the national reporting and learning system, by comparing it with a well validated method of systematically reviewing case notes.1 2 3
We did the study in a large NHS hospital trust in England in 2005. We selected a stratified random sample of 1006 admissions (>24 hours' stay) between January and May 2004 from eight specialties: surgery; urology; orthopaedics; general medicine; medicine for the elderly; oncology; ear, nose, and throat; and ophthalmology. All data extracted were anonymised and kept confidential. The study consisted of using structured data extraction tools to do a two stage retrospective case note review of the sample admissions and reviewing the patient safety incidents reported by the routine hospital reporting system for the same admissions.
We used previously described methods to do the case note review.1 2 3 Five trained nurses screened patients' records by using 18 explicit criteria (box). We used one (or more) positive criterion as an indicator of a patient safety incident and scrutinised these medical records in stage two. One of the other nurses independently reviewed a 10% sample to assess inter-rater reliability. In addition, medical staff fully reviewed 10% of admissions for which no positive criteria were found, to detect false negatives (fig 1).
Abbreviations: ARF=acute renal failure; CVA=cerebral vascular accident; DVT=deep vein thrombosis; MI=myocardial infarction; PE=pulmonary embolism.
In stage two, three hospital doctors reviewed the records that had one positive criterion in stage one. The doctors were trained to use a structured review form to judge whether a patient safety incident had occurred and to assess its type and consequences. One of the other doctors independently reviewed 90 medical records to assess the inter-rater reliability (fig 1).
We inspected data on the routine adverse incident reporting system for the 1006 admissions in our sample to see if patient safety incidents had been reported. We calculated the number, percentage, and type of patient safety incidents identified by the case note review and routine reporting system and classified them into three groups according to the routine reporting system policy (table 1). We calculated the proportion of admissions with patient safety incidents identified by each method and the proportion of these incidents that were judged to have caused patient harm for each method, along with 95% confidence intervals. We used Cohen's κ to assess the inter-rater reliability.11
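The paper does not state which confidence interval method was used; the sketch below assumes the standard normal-approximation (Wald) interval for a proportion, which reproduces the headline estimate of 22.9% (95% CI 20.3% to 25.5%) for the 230/1006 admissions with at least one incident.

```python
import math

def wald_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Sample proportion with a normal-approximation (Wald) confidence interval.

    z = 1.96 gives an approximate 95% interval.
    """
    p = events / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Admissions with at least one patient safety incident: 230 of 1006
p, lo, hi = wald_ci(230, 1006)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # 22.9% (95% CI 20.3% to 25.5%)
```

With small rounding differences, the same calculation applied to the 110/1006 harmful-incident admissions yields the reported 10.9% (9.0% to 12.8%).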
Patient safety incidents—According to a combination of case note review and the reporting system, a total of 324 patient safety incidents were identified in 230 of the 1006 admissions (22.9%; 95% confidence interval 20.3% to 25.5%). Case note review identified 303 (94%) of the 324 incidents. The reporting system identified 54 (17%) of the total number of patient safety incidents, all of them of group I type (table 1).
Patient safety incidents causing harm to patients (adverse events)—Of the 1006 admissions, 110 (10.9%; 9.0% to 12.8%) had at least one patient safety incident resulting in harm to the patient (a total of 136 adverse events). In other words, 42% of patient safety incidents resulted in adverse events; all of these were detected by the case note review but only 6 (5%) by the reporting system. All 21 patient safety incidents missed by case note review were minor (fig 2), whereas 130 (44.7%) incidents missed by the reporting system led to patient harm.
We found that 23% of hospital admissions in eight specialties were associated with patient safety incidents and 11% with adverse events. These rates are consistent with those found in comparable studies in the United Kingdom (10.8%)1 and internationally (7.5% to 16.6%).2 3 4 5
The routine reporting system as implemented in this large hospital missed most patient safety incidents that were identified by case note review and detected only 5% of those incidents that resulted in patient harm. This suggests that the routine reporting system considerably under-reports the scale and severity of patient safety incidents.
Structured case note review, when carried out by trained professionals, has been shown to reliably detect adverse events.1 2 3 6 12 The reviewers in this study were specifically trained, and inter-rater reliability was good at both stages11: 84% between nurses in the first stage (κ=0.67) and 90% between doctors (κ=0.76) in the second stage (table 2).
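Cohen's κ measures agreement beyond chance from a cross-tabulation of the two reviewers' judgments. The sketch below is a minimal implementation; the 2×2 table shown is hypothetical for illustration (the study's underlying agreement tables are not reproduced here).

```python
def cohens_kappa(matrix: list[list[int]]) -> float:
    """Cohen's kappa for two raters from a square agreement matrix,
    where matrix[i][j] counts cases rater A put in category i and rater B in j."""
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    # Observed agreement: proportion of cases on the diagonal.
    observed = sum(matrix[i][i] for i in range(k)) / n
    # Expected chance agreement from the marginal totals.
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    expected = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical incident / no-incident judgments for 100 records:
table = [[45, 5],
         [5, 45]]
print(round(cohens_kappa(table), 2))  # 0.8: 90% observed vs 50% chance agreement
```

Note that the same percentage agreement can yield different κ values depending on the marginal distributions, which is why the paper reports both figures.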
This study is based on data from one large hospital, where the performance of the incident reporting system may differ from that in other hospitals. However, this trust is a high reporter to the national reporting and learning system and the distribution of the types of patient safety incidents detected in this study was similar to that found in a recent analysis of patient safety incidents from 230 NHS organisations.8 Some hospitals report hospital acquired infections to systems other than the adverse incident reporting system13; however, even excluding infections, the reporting system detected only 24% of all patient safety incidents and only 5% of those resulting in patient harm. This suggests that our results may be generalisable.
A recent report by the House of Commons Committee of Public Accounts14 was critical of the adequacy of the national reporting and learning system. Our study provides empirical evidence that the data collected by the system may be biased. This is unlikely to be caused by teething problems, as the national reporting system was designed to complement pre-existing local reporting arrangements.7 If the NHS is to gather accurate information on serious injuries and deaths resulting from patient safety incidents, as recommended by the Committee of Public Accounts,14 then relying on voluntary reporting may not be sufficient. Voluntary reporting systems may under-report incidents, owing to lack of feedback; time constraints; fear of shame, blame, litigation, or professional censure; and unsatisfactory processes.15 16 17 18 19
The results do not mean that the early themes emerging from the analysis of the national reporting and learning system data are not useful,8 but estimates of the type and severity of incidents are likely to be biased. More importantly, perhaps, the value of these data locally as a component of safety programmes is questionable.
More research is needed to help develop a reporting system capable of providing an accurate picture of the type, nature, and severity of incidents at reasonable cost. Even if detection is improved, this will not in itself result in improvements in patient safety. We need to develop and evaluate cost effective ways in which good data monitoring can be used as part of quality improvement.
The routine incident reporting system may not provide an accurate picture of the extent and severity of patient safety incidents, particularly those resulting in harm to patients. Healthcare organisations should consider routinely using structured case note review on samples of medical records as part of quality improvement.
We thank Alan Maynard, Mike White, and Michael Porte for their support and advice. We also thank for their advice Denis Smith, Carl Thompson, Fiona Fylan, Richard Lilford, Charles Vincent, Graham Neale, Maria Woloshynowych, Martin Bland, Jeremy Miles, Ian Woods, Ann McEvoy, Donald Richardson, Glen Miller, Caroline Mosely, Dawn Taylor, Mary Nannary, and Sally Grabham. We are also grateful to clinical and administrative staff of the host hospital for their support.
Contributors: AB-AS designed and managed the project, wrote the research proposal, collected and analysed data, and wrote the final paper. TAS supervised the project; commented on the protocol, data collection, and analysis; was responsible for the quality control; and assisted on the paper. AC and AT piloted the instruments and process, assisted with stage two case note review, provided advice, and commented on the paper. Celia Grant, Eileen Richardson, William Gray, Yvonne Dobson, and Lorraine Wright screened the medical records and collected data in stage one and discussed the findings. AB-AS is the guarantor.
Funding: AB-AS was supported by a scholarship from the Iranian Ministry of Health and now works at the School of Public Health, Teheran University of Medical Sciences. All the researchers are independent from the Iranian Ministry of Health.
Competing interest: None declared.
Ethical approval: Hospital research ethics committee (reference number 04/Q1108/7).