Attending physicians are well positioned to identify medical errors and understand their consequences. The spectrum of errors that can be detected by attending physicians in the course of their usual practice is currently unknown.
To determine the frequency, types, and consequences of errors that can be detected by attending hospitalist physicians in the care of their patients, and to compare the types of errors first discovered by attending hospitalists to those discovered by other providers.
Prospective identification of errors by attending physicians.
Two-hundred-bed academic hospital.
Five hundred twenty-eight patients admitted to the general medicine service from October 2000 to April 2001.
Errors, both near misses and adverse events, were identified during the course of routine, clinical care by 2 attending hospitalists. Errors first detected by other health care workers were also recorded.
Of the 528 patients admitted to the hospitalist service, 10.4% experienced at least 1 error: 6.2% a near miss and 4.2% an adverse event. Although differences did not achieve statistical significance, most of the errors first detected by house staff, nurses, and laboratory technicians were adverse events; most of the errors first detected by the attending hospitalists, pharmacists, and consultants were near misses. Drug errors were the most common type of error overall.
Attending physicians engaged in routine clinical care can detect a range of errors, and differences may exist in the types of errors detected by various health care providers.
Medical error reporting must be improved for patient safety to be enhanced. Efficient and reliable reporting systems would track errors, enabling providers to identify the scope, causes, and consequences of errors. Although this information is necessary to prevent errors, current reporting systems are under-utilized1 and fail to provide a complete and accurate record of medical errors. To date, much of the understanding of medical errors has been based on retrospective chart reviews.2–4 Those studies are limited by the reluctance of clinicians to document errors in the medical record and by the inability of an uninvolved reviewer to fully assess what actually happened in the clinical encounter. As a result, the actual rate of errors is not known, and a full understanding of the clinical factors giving rise to those errors has been beyond reach.
Although systems of prospective error reporting are critical, physicians do not actively participate in most of these systems. They are used primarily by pharmacists in medication error tracking and nurses in incident reporting. The few studies that have employed prospective, physician reporting of errors have used data collected by house staff,5–7 not attendings. Attendings are well positioned to identify errors: they are involved with day-to-day inpatient care, yet they possess the experience and clinical judgment that comes with being senior physicians.
The need for improvements in physician error reporting is underscored by recent reports from the Institute of Medicine (IOM)1 and the Joint Commission on Accreditation of Healthcare Organizations.8 These organizations are calling for voluntary error reporting by physicians to promote patient safety. In addition to answering the mandates from these organizations, physician error tracking will help foster awareness of current problems in health care delivery and engender an overall sense of the importance of patient safety.
The rate and types of errors that could be reported by attending physicians engaged in routine clinical care of patients are not known. It also is unclear whether differences exist in the types of errors detected by attending physicians compared to other health care providers. We sought to determine the incidence, types, and consequences of errors that could be detected prospectively by attending hospitalist physicians, and to compare the types of errors discovered by attendings with those discovered by other health care providers.
We conducted a prospective, observational study at a 200-bed, academic medical center from October 2000 to April 2001. The last 2 weeks of December 2000 and the month of March 2001 were excluded because the hospitalists were not present on the wards during these times. The medical center utilizes an inpatient hospitalist system. Approximately 80% of the patients admitted to the general medical service were under the care of one of the two hospitalists (SIC and KAO). The hospitalists were intimately involved in the care of the inpatients and were present on the wards all day (Monday through Friday), with the exception of one-half day per week spent in the clinic. These hospitalists were the primary investigators as well as the data collectors for the study. The Institutional Review Board of the hospital approved the protocol.
The physicians reported incidents that fit the definition of error used in the IOM report, “To Err is Human.” In that report, error is defined specifically as the “failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.”1 This definition was chosen because it is widely familiar and encompasses all errors, regardless of the actual outcome. Cases in which a bad outcome occurred (such as death or disability) without a preventable cause were not considered errors and therefore were not included.
Errors that resulted in an adverse outcome were categorized as “adverse events.” Errors that did not result in patient harm were categorized as “near misses.” Using the system employed in the Harvard Medical Practice Study (MPS), errors that fit the IOM definition were further classified as drug, diagnostic, therapeutic, procedure-related, prevention, or fall.3 Drug errors were those primarily related to drug dosing or contraindications. Diagnostic errors were those related to delay in or incorrect diagnosis, such as misinterpretation of a chest x-ray. Therapeutic errors were delays in or incorrect pharmacologic and non-pharmacologic treatments for illnesses. Procedure-related errors were mistakes made in the planning or execution of procedures. An error in prevention was one related to failure to take indicated preventive measures. Fall errors were those in which a fall was caused by improper patient management.
The 2 attending hospitalists responsible for the care of the patients collected the data prospectively. Once an error was identified, the consequence was determined. The hospitalists did not look for adverse outcomes and then retrospectively assess whether an error had occurred. These physicians did not undergo any formal training in error detection. All errors identified during the activities of routine, clinical care of patients were documented. Implicit criteria were used in deciding which events qualified for inclusion. Information about errors was drawn from rounds and informal discussions with the house staff and other health professionals (e.g., nurses, pharmacists), review of laboratory and ancillary data, as well as patient interviews and physical examinations.
Errors first discovered by the hospitalists were included, as were those brought to their attention (unsolicited) by other health care workers, including residents, nurses, pharmacists, consultants, and laboratory technicians. The other health care workers were not notified that the study was occurring. No special efforts, above and beyond routine care of each hospitalist's own patients, were employed to detect errors. Specifically, no extra testing, chart reviews, interviews with health care providers, or patient interviews were conducted.
Each potential error identified by one hospitalist was reviewed by the other. Only cases in which both hospitalists agreed that an error had occurred were included. When differences of opinion existed, a systematic adjudication process was followed. This process considered whether the case fit the study definition of error and whether a different action could reasonably have been taken at the time with the information available, or whether the “error” was obvious only in hindsight. Disagreement occurred in less than 10% of cases. The physicians discussed new errors at least 2 to 3 times per week, thus aiding vigilance in detection. The physicians recorded only errors occurring in their own patients; they did not look for errors in each other's patients.
While care rendered by members of the hospitalist team (i.e., house staff, attendings) in the emergency department was included for consideration, patients transferred to or from other services (e.g., intensive care unit, cardiology) were included only for the duration of their admission spent on the hospitalist service. This was done because the attending hospitalists had limited involvement with the patient care taking place on other services. Errors occurring before admission to the hospital were not included, but errors occurring during hospitalization and discovered only after discharge were included.
All information was coded on standardized forms. The forms were completed as soon as possible after an error was discovered, usually within 48 hours. Each physician independently recorded all information about the incident. This information was then entered verbatim into a database and updated weekly. These data were confidential.
The attending physicians classified the errors using the system employed in the Harvard MPS, as detailed above. They also provided a narrative account of the error. Neither the names of the health care providers nor the date of the incident was recorded.
Using the best clinical judgment of the attending physician caring for the patient, the outcome of the error was determined. Factors considered in this determination included the patient's clinical condition at the time of the error, comorbidities, the nature of the error, and the timing of the error in relation to any ensuing complication. Additionally, in determining whether an error was responsible for prolongation of the hospital stay, a determination was made of what the hospital course would have been like in the absence of an error. In the case of an adverse event, the attending physician recorded a descriptive summary of the consequences of the error for the patient. This included not only the immediate consequences, but any problems (including readmission) noted during the course of the hospitalization or that the attending became aware of after discharge. In the case of a near miss, the consequence was coded as “none.”
Information on who brought the errors to the attention of the hospitalists was collected in only three fourths of the cases. The decision to record the source of the error was made while the study was already in progress, thus accounting for the incomplete information. If one of the hospitalists first discovered the error on his/her own, this was coded as “attending.” If someone else first brought the error to the attention of the hospitalists, this person's title was recorded (e.g., pharmacist, resident, consultant, nurse).
Except where otherwise indicated, all errors that fit the IOM definition were included. To report the percentage of patients who experienced an error, however, we imposed a limit of 1 error per patient and divided the number of errors by the number of patients admitted. For those calculations, only the error with the most serious consequence for the patient was included, as has been done in previous work.4
Comparisons of categorical variables were made using the χ2 test. The Student t test was used to compare continuous variables. SAS software (Version 8.0; SAS Institute Inc., Cary, NC) was used for all statistical analyses.
The demographic characteristics and length of stay of the patients who experienced errors are compared with those of all other patients admitted to the general medical service during the same time period in Table 1. There were no statistically significant differences in the composition of the 2 groups with regard to age, race, or sex. Mean length of stay was greater in patients who experienced errors compared to patients who did not.
Among all patients admitted to the hospitalist service during the study period (N = 528), 10.4% (95% confidence interval [95% CI], 7.8% to 13%) experienced at least 1 error. A total of 63 errors were detected, with 8 patients experiencing 2 errors. No patient experienced more than 2 errors. Of the 63 errors detected, 62% were near misses and 38% were adverse events. The 2 hospitalists detected a similar number of errors: SC discovered 36 and KO discovered 27.
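The reported proportion and confidence interval can be checked directly from the counts given above (55 of 528 patients: 63 errors, with 8 patients experiencing 2 errors each). A minimal sketch, assuming a normal-approximation (Wald) 95% confidence interval, which is consistent with the reported 7.8% to 13%:

```python
import math

# Counts taken from the text: 55 of 528 patients experienced at least 1 error
# (63 errors total, with 8 patients experiencing 2 errors each).
n = 528
k = 55
p = k / n  # observed proportion of patients with >= 1 error

# Wald (normal-approximation) 95% confidence interval for a proportion.
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se

print(f"{p:.1%} (95% CI, {lo:.1%} to {hi:.1%})")  # → 10.4% (95% CI, 7.8% to 13.0%)
```

The Wald interval is only one of several ways to compute a binomial confidence interval (the paper does not state which was used), but it reproduces the published bounds here.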
To report the percentage of hospital admissions complicated by near misses and adverse events, only 1 error per patient-admission was considered. Of all admissions, 6.2% (n = 33) were complicated by a near miss and 4.2% (n = 22) were complicated by some type of adverse event. An adverse event that caused some morbidity to the patient, but without prolongation of the hospital stay and without causing readmission, occurred in 2.1% (n = 11) of all admissions. An adverse event that caused morbidity responsible for prolongation of the hospital stay occurred in 1.9% (n = 10) of all admissions. An adverse event that caused hospital readmission occurred in only 1 patient. No deaths could be attributed to adverse events.
The following results show differences in rates of error types and differences in the types of errors discovered by various health care providers. With a total of 63 errors, we were underpowered to detect modest differences. In this initial study, however, it is worth examining trends in the data as a source of hypothesis generation and as groundwork for future studies.
Table 2 shows the frequency of the types of errors. When all errors were considered, and when only near misses were considered, drug errors were the most common type. For adverse events, drug and therapeutic errors were equally common. No statistically significant differences were found comparing the rates of error class (i.e., drug, diagnostic, therapeutic, procedure, prevention, fall) between the categories of errors (all errors, near misses, and adverse events).
The 2 hospitalists discovered 17/47 (36%) of the errors on their own, and pharmacists, resident physicians, consulting physicians, nurses, and ancillary staff discovered the remaining 30/47 (64%). There were differences in the rates of error category (near miss and adverse event) detected by each source (Table 3). House staff, nurses, and laboratory technicians were more likely to detect adverse events than near misses, despite the fact that the near misses were much more frequent than the adverse events. Nine of the 14 total errors detected by house staff, nurses, and laboratory technicians were adverse events. The 2 attending physicians, pharmacists, and consultants were more likely to detect near misses. Seventeen of the 33 total errors detected by the 2 attending physicians, pharmacists, and consultants were near misses. These differences did not achieve statistical significance (P = .3).
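The group comparison above can be reconstructed from the counts in the text (9 adverse events among the 14 errors detected by house staff, nurses, and laboratory technicians; 17 near misses among the 33 errors detected by attendings, pharmacists, and consultants). A minimal stdlib sketch, assuming a Pearson χ2 test without continuity correction (the paper does not state which variant was used), which yields a P value consistent with the reported P = .3:

```python
import math

# Rows: provider group; columns: [adverse events, near misses].
# Row 1: house staff, nurses, laboratory technicians (9 adverse, 5 near miss).
# Row 2: attendings, pharmacists, consultants (16 adverse, 17 near miss).
observed = [[9, 5], [16, 17]]

row = [sum(r) for r in observed]
col = [sum(c) for c in zip(*observed)]
total = sum(row)

# Pearson chi-square statistic (no continuity correction):
# sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * column_total / grand_total.
chi2 = sum(
    (observed[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
    for i in range(2)
    for j in range(2)
)

# For a 2x2 table (1 degree of freedom), the p-value is erfc(sqrt(chi2 / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.2f}, P = {p_value:.2f}")
```

With these counts the statistic is about 0.99 and the P value about .32, in line with the nonsignificant difference reported.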
Comparing the errors detected first by attending physicians with those first detected by the house staff, differences in the rates of error class were observed. Whereas the attendings were more likely to detect therapeutic errors, house staff physicians were more likely to detect drug errors. House staff also detected procedural errors at a higher rate than did the attending hospitalists. None of these differences were statistically significant. All errors detected by the pharmacists were drug errors.
Our study documents the rate, nature, and consequences of all errors detected by attending hospitalist physicians caring for patients at a 200-bed academic hospital. All errors discovered by two hospitalists during routine clinical care of the patients were documented, and reports of errors from other health care providers were included. Using this method, 10.4% of all admissions were complicated by an error: 6.2% by a near miss and 4.2% by an adverse event. While the length of stay was greater in patients who experienced errors compared with patients who did not, this is not necessarily a result of the errors, but likely represents increased exposure to possible errors.
Lack of consensus regarding definitions of “error,” “adverse event,” and “near miss” makes direct comparisons of the rate of errors found in this study with those found in other studies difficult. The definition of “adverse event” used in this study differed somewhat from that used in the Harvard MPS. That definition included only events that prolonged hospital stay or caused disability at the time of discharge.2 Counting only errors that fit the MPS definition, an adverse event rate of 2.1% of all admissions was found in this study, slightly lower than the rate of 3.7% reported in the MPS. Similar to previous studies of errors among medical patients, drug errors were the most common type of error and near misses were more common than adverse events.3–6
Several important issues should be considered in the interpretation of this study. The study was conducted at only 1 site and by only 2 attending physicians, potentially limiting the generalizability of the results. The attendings had no formal training in error detection, but relied on the knowledge gained by review of the literature before commencing data collection.
There was no external review of the cases to verify the occurrence of errors, or to pick up on errors that may have escaped detection. The attendings also did not conduct chart reviews to investigate reports of errors by other team members and to detect errors that these team members may not have reported. Instead, they relied on information gathered during the course of routine clinical care. While these issues may have resulted in under-reporting of the total number of errors, the study was specifically designed to determine what practicing physicians could detect in the course of everyday clinical care.
The ability to detect errors is likely related to the amount of time spent on the wards and degree of involvement with daily patient care. The hospitalists in this study had limited presence on the wards during evenings and weekends, increasing the likelihood of missed errors during those times. Other, nonhospitalist attending physicians may have even less involvement with inpatient care. Despite concern over litigation that may result from reporting errors on their own patients, the hospitalists were highly motivated to detect errors, as they were the primary investigators for the study. Not all physicians are likely to be as motivated to detect and report errors, but this study demonstrates what can be accomplished.
Our study makes several contributions to the field of error reporting. First, the reliance on reporting by practicing physicians engaged in routine patient care, while potentially limiting detection, renders the errors that were detected highly clinically relevant. The study provides a perspective on the types and rates of errors that can be expected from self-reports if practicing attending physicians were to take an active role in monitoring health care quality.
Quality improvement initiatives are often criticized for being costly and difficult to implement. Faced with limited resources and rising costs, health care professionals are often reluctant to embark on such initiatives. The method of error detection we used is practical and could be readily adopted in clinical settings with minimal associated expense.
A recent study9 suggests that agreement is moderate in judgments about adverse events among reviewers of medical records. This does not address the issue of accuracy of these judgments, however. Indeed, one study7 showed that errors reported by house staff physicians (who were prompted by e-mail) were detected by independent, retrospective medical record review in fewer than 50% of cases. Prospective error reporting will improve our understanding of the spectrum of errors encountered in clinical practice. This understanding can enhance patient safety in the future.
While other studies have employed house staff detection and reporting of errors, no other study to date has examined the differences in the types of errors reported by various health care providers caring for the same patients. Despite the fact that the near misses were far more common than the adverse events, house staff, nurses, and ancillary staff were somewhat more likely to detect the adverse events. Attending physicians were somewhat more likely than house staff to detect therapeutic errors. While these differences did not achieve statistical significance, it is not surprising that the detection and reporting of errors is not uniform among providers, given their distinct roles in patient care. These differences highlight the importance of having as many different types of health care providers involved in error reporting as possible.
Few large-scale studies of near misses have been conducted by the medical community. Other industries, such as the nuclear power industry or aviation industry, actively study near misses and their relation to adverse events.1 Near misses occur at a higher frequency than adverse events, thus facilitating their study. These other industries have capitalized on the fact that the same systems issues can be involved with both near misses and adverse events, with the only distinguishing feature being recovery mechanisms. Indeed, there were no substantial differences in the types of errors (e.g., drug, therapeutic, etc.) that made up the near misses compared to the adverse events. Lessons learned from near misses should be applied to prevent actual adverse events.
Fear of legal repercussions is a major barrier to more open reporting of mistakes by health professionals. While the IOM report calls for increased protection from litigation to encourage more open reporting of adverse events, this protection has not yet come to fruition. Studying and reporting near misses obviates this problem to some extent, because errors not associated with ill effects for the patient are less likely to be used as the basis for malpractice litigation. Increased study of near misses may prove to be an important part of expanding our current knowledge of errors. Armed with this improved understanding, it will then be possible to move forward in implementing and evaluating systems changes so that medical care is safer in the future.
The authors are grateful to Dr. Eric Holmboe for his editorial review and comments.
Dr. Chaudhry is a Robert Wood Johnson Clinical Scholar and is supported by the Department of Veterans Affairs.