Abstract Objective: Errors of omission are a common cause of systems failures. Physicians often fail to order tests or treatments needed to monitor/ameliorate the effects of other tests or treatments. The authors hypothesized that automated, guideline-based reminders to physicians, provided as they wrote orders, could reduce these omissions.
Design: The study was performed on the inpatient general medicine ward of a public teaching hospital. Faculty and housestaff from the Indiana University School of Medicine, who used computer workstations to write orders, were randomized to intervention and control groups. As intervention physicians wrote orders for 1 of 87 selected tests or treatments, the computer suggested corollary orders needed to detect or ameliorate adverse reactions to the trigger orders. The physicians could accept or reject these suggestions.
Results: During the 6-month trial, reminders about corollary orders were presented to 48 intervention physicians and withheld from 41 control physicians. Intervention physicians ordered the suggested corollary orders in 46.3% of instances when they received a reminder, compared with 21.9% compliance by control physicians (p < 0.0001). Physicians discriminated in their acceptance of suggested orders, readily accepting some while rejecting others. Pharmacists initiated one third fewer interventions with physicians in the intervention group than in the control group.
Conclusion: This study demonstrates that physician workstations, linked to a comprehensive electronic medical record, can be an efficient means of decreasing errors of omission and improving adherence to practice guidelines.
Almost half of all industrial disasters have been reported to be errors of omission resulting from oversights and distractions.1,2 Physicians are also prone to such errors.3,4 Despite good intentions and adequate knowledge, they overlook new abnormalities,5,6,7 fail to perform preventive care,8 and do not appropriately monitor drug therapy.9 These errors are probably due to inherent human limitations in processing data rather than to correctable deficiencies.8
Certain medical decisions are simple and require primarily that the physician recognize that the decision needs to be made. Ordering gentamicin (the stimulus) should, with few exceptions, trigger a decision to order gentamicin levels. Many such drug-test and drug-drug decisions must be made: coumadin and prothrombin times; angiotensin converting enzyme (ACE) inhibitors and serum creatinine levels; intravenous theophylline and theophylline levels; and insulin and blood glucose monitoring. In each of these pairs of orders, the second follows from the first as a proposition to its corollary. Thus, we refer to the first as the trigger order and the second as the corollary order.
Although the decision to carry out the corollary order in the above case is simple, the need to make a decision may not be recognized.10 Physicians frequently fail to do pre-intervention testing (e.g., checking creatinine levels before ordering an intravenous pyelogram) or follow-up testing (e.g., ordering serum drug levels to monitor gentamicin treatments). Hospitals invest in drug utilization review programs, chart reviews, and educational efforts to reduce these types of mistakes, but with limited long-term success.
We and others have shown that computer-generated reminders can reduce mistakes in physicians' ordering practices; in particular, reminders reduce errors of omission in outpatient settings.11,12,13,14,15,16 These outpatient reminders were printed on paper reports and placed in the patient's chart before a clinic visit. Reminders delivered as the physician writes orders should be particularly effective, since informational interventions made at the decision point have greater influence than those delivered later.17 We hypothesized that reminders to make these decisions, presented as fully formed corollary orders at the time of inpatient order writing, would have an even greater effect on errors of omission.
Internal medicine physicians in our institution have been entering all of their patient orders directly into an electronic patient record system for more than 4 years.18 (At present, all physicians write all hospital orders through the computer.) The computer system can provide feedback to physicians as they enter orders. When a physician writes an order for certain drugs or tests, the system can suggest the orders that are the natural corollaries to the first. Such suggested orders are presented as fully formed orders that the physician can accept or reject with a single keystroke. These reminders reduce reliance on memory and provide standardization of care. Here, we report the results of a randomized, controlled clinical trial to determine whether suggesting corollary orders to physicians as they write orders could reduce errors of omission during inpatient stays.
We studied the inpatient general medicine wards of Wishard Memorial Hospital, an inner-city public teaching hospital. Patients are cared for by one of six independent services (Red service, Green service, and so on). A group of physicians consisting of a faculty internist (usually a generalist), a senior resident, and two interns (usually categorical medical housestaff) cover each service. A different set of physicians rotate onto the service every 6 weeks. We refer to a specific group of physicians who cover one service for one rotation as a team. During a year, eight different teams would have worked on one service. As described below, teams were randomly assigned to intervention or control services.
Patients were not formally randomized to services, but rather admitted to the services in sequence so that all six services received equal numbers of admissions over time. On average, a team admitted approximately 80-90 patients per rotation, and cared for an average of 16 patients at once. Prior analyses, however, have shown no significant difference in patient demographics, clinical characteristics, or severity of illness among the patients admitted to different services.18 Patients remained on the same service when the team of physicians staffing the service changed at the end of each 6-week rotation. When a patient had multiple admissions during the study, we only included data for the first admission.
The Regenstrief Medical Record System (RMRS) provides a nearly complete electronic patient record that integrates inpatient and outpatient data.19 The patient's electronic record includes demographic information, diagnoses and problem lists, inpatient and outpatient visits, admitting history and physical examination reports, discharge summaries, vital signs, immunizations given, nearly all diagnostic test results (including serologies, cervical cytology, and mammograms), procedures, and outpatient prescriptions. Data from the record are available to physicians as printed flowsheets, via “online” data retrieval terminals, and through the order entry workstations located throughout the hospital and associated clinics.
When this study began, all medicine physicians had been entering all inpatient orders directly into physician workstations for 12 months.18 At that time, providers had access to more than 70 personal computer (PC) workstations distributed around the hospital, emergency room, and clinics. The workstations are linked via a network to a central file server and a cluster of Digital Equipment Corporation's VAX computers. Since orders no longer have to be written in the paper chart, 75% of orders are now written from sites other than the patient's ward. Once orders are entered, the system sends them electronically to the nurses' workstation on the patient's home ward, and requisitions are printed at appropriate locations (e.g., pharmacy, radiology, or heart station). Less than 5% of orders are entered by nursing staff as verbal orders from physicians.
We used standard reference texts20 and drug package inserts supplemented by our knowledge of local practice to identify 87 target orders (76 drugs and 11 tests; see Table 1) that could be paired with one or more corollary orders; for example, aminoglycosides being paired with peak and trough aminoglycoside levels, or warfarin and prothrombin time. We chose target orders that are used frequently enough to produce usable data and for which there was some support for corollary orders. Three-quarters of these target order-corollary order pairs were already part of our hospital's armamentarium of drug utilization review criteria, which were developed independently by a hospital committee of staff physicians and clinical pharmacists. These criteria were always applied retrospectively, whereas the computer-based rules were designed to be prospective.
Each of the rows in Table 1 defines corollary orders for a drug or test, but a single row may represent an entire class of drugs. (Full details of the reminders are available from the authors.) The category “oral hypoglycemics,” for example, represents three different oral agents from our formulary. The first column identifies a trigger order; the second column identifies its corollary orders. The corollary order either prepares the patient to receive the item in the trigger order, prevents adverse effects of the trigger order, or monitors for adverse effects of the trigger order. The first row, for example, says that a heparin drip order requires a platelet count—the corollary order—before the heparin is started and again 24 hours later.
When suggesting orders, the computer took into account other factors, such as the status of the order (is it a new order or a revision of an old order?), the time elapsed since the last time the order being suggested was written, and whether any orders for a near equivalent item (e.g., a blood urea nitrogen level versus serum creatinine level) had already been written.
We made human-readable versions of the corollary order guidelines available to both study and control physicians. More than half of the guidelines were also being actively promoted through the hospital's drug utilization review (DUR) program. During the study, all medicine physicians wrote their orders using the computer order entry system. When a physician entered a trigger order (an order from the first column of Table 1) for a particular patient, a rule-based reminder program analyzed the data in that patient's electronic medical record. The program determined which, if any, of the corollary orders from Table 1 should be presented. For intervention physicians, the computer displayed the suggested corollary orders in a workstation window as shown in Figure 1. Notice that these orders are fully formed and that the physician can accept, reject, or modify them with a few keystrokes. When the computer suggested corollary orders to the physician, the physician was free to accept or reject them as he or she saw fit. For control physicians, the computer recorded the corollary orders for later analysis but did not inform the physician about them.
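The reminder logic described above can be sketched as a small rule-evaluation function. This is an illustrative reconstruction, not the Regenstrief system's actual code: the rule table, function name, and time windows here are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical rule table: trigger order -> list of
# (corollary order, hours before a repeat is needed, near-equivalent orders).
COROLLARY_RULES = {
    "heparin drip": [("platelet count", 24, set())],
    "gentamicin IV": [
        ("serum gentamicin level", 48, set()),
        # A recent BUN is treated as a near equivalent of creatinine.
        ("serum creatinine", 24, {"blood urea nitrogen"}),
    ],
}

def suggest_corollaries(trigger, recent_orders, now):
    """Return the corollary orders to suggest for a trigger order.

    recent_orders maps an order name to the datetime it was last written.
    A corollary is suppressed if it (or a near-equivalent item) was
    already ordered within the rule's time window.
    """
    suggestions = []
    for corollary, max_age_h, equivalents in COROLLARY_RULES.get(trigger, []):
        window_start = now - timedelta(hours=max_age_h)
        already_covered = any(
            name in recent_orders and recent_orders[name] >= window_start
            for name in {corollary} | equivalents
        )
        if not already_covered:
            suggestions.append(corollary)
    return suggestions

now = datetime(1992, 10, 15, 9, 0)
recent = {"serum creatinine": now - timedelta(hours=6)}
# Creatinine was checked 6 hours ago, so only the drug level is suggested.
print(suggest_corollaries("gentamicin IV", recent, now))
```

In the actual system, each surviving suggestion would then be displayed to an intervention physician as a fully formed order, or silently logged for a control physician.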
The study was a randomized, controlled trial conducted over 30 weeks, starting in October 1992. At the beginning of the study, three of the six services were randomly assigned to be intervention services, and the remaining three were assigned to be controls.
The Chief Medical Resident constructed teams of faculty, housestaff, and students based on scheduling issues, clinical skills, and personalities. The study biostatistician then randomly assigned the teams to services. Physicians assumed the study status of their assigned service throughout their rotation. Physicians on intervention services received reminders about suggested corollary orders. Those on control services did not.
The system assigned patients to intervention or control status based on the service to which they were admitted. Patients never changed study status during a hospital admission. If the patient's hospitalization crossed rotation periods, he or she remained on the same service and retained that service's study status, even though a different team of physicians was randomly assigned to provide the care.
Physicians care for patients from more than one service at night and on weekends. The Chief Medical Resident constructed the residents' evening coverage schedule to separate coverage based on the services' study status so that, barring coverage switches, control physicians provided overnight and weekend coverage only for control patients, and intervention physicians cared only for intervention patients. To avoid contamination that could occur when scheduling conflicts put intervention physicians in charge of control patients (and vice versa), the computer suggested corollary orders to intervention physicians only when they were writing orders for intervention patients, and suppressed the display when any physician wrote orders for control patients. Furthermore, the computer never displayed corollary orders to control physicians. Nurses and pharmacists could enter verbal orders from physicians, but the computer never suggested corollary orders during verbal order entry sessions.
Corollary orders were presented to medical students when they drafted orders for a physician's approval, but were not presented to the physician when he or she reviewed these orders prior to electronically signing them.
The order entry system's databases provided us with information about the trigger orders and suggested corollary orders. We obtained information about the physician compliance with the corollary orders from the ordering system, which carried records of all orders, and the RMRS, which contains all test results and drug administration records.
We obtained information about length of stay and hospital charges from our hospital discharge records and billing system, respectively. Pharmacists' interventions with physicians were extracted from a database maintained by the pharmacy for administrative purposes. To assess whether reminders about monitoring for renal failure had any effect on outcomes, we examined serum creatinine levels during the hospital stay, obtained from the RMRS.
We examined several outcome variables. The variable on which we expected the main effect was the per physician “compliance” with the automated guidelines about corollary orders: i.e., the number of times a physician ordered the suggested corollary orders divided by the total number of suggested corollary orders. We computed three different compliance rates: (1) immediate compliance: physicians wrote orders for the suggested corollary orders during the same ordering session in which they wrote the triggering order; (2) 24-hour compliance: the physician ordered the suggested corollary order within 24 hours of a trigger order; and (3) hospital stay compliance: the physician ordered the suggested corollary order any time during the hospital stay after the trigger order was entered.
The denominator for all three per-physician measures was the number of corollary orders suggested to the physician by the computer. The numerator for immediate compliance was the number of corollary orders that the physician wrote during the same ordering session as the triggering order for that corollary. A single order could trigger suggestions about more than one corollary order; e.g., an order for intravenous gentamicin would trigger suggestions to order both serum creatinine and serum gentamicin levels (see Fig. 1). If the physician ordered only one of these two corollary orders, the immediate compliance score for that triggering order would be 50%. A suggestion for the same corollary order could occur more than once during the hospital stay, whether the physician responded on the first occasion or not. For example, each change in dose of intravenous heparin would trigger a suggestion for another measure of the activated partial thromboplastin time (APTT). Each such order for heparin would count as a separate triggering event and would be associated with a separate compliance score. The physician's overall immediate compliance score was the arithmetic mean of the immediate compliance scores for each of the trigger events. In computing immediate compliance we did not distinguish between a physician accepting the computer's suggested orders and the physician independently writing the order during that same ordering session.
We computed the physician's 24-hour compliance by the same method used for the immediate compliance, except that the ordering of a suggested corollary order any time within 24 hours after the triggering order counted as compliance with the suggestion. We averaged the 24-hour compliance scores for all triggering events to obtain a physician's overall 24-hour compliance score. By definition, the 24-hour compliance was greater than or equal to the immediate compliance.
To calculate a physician's hospital stay compliance, we counted an order for the suggested item written any time after the triggering order, up to the end of the hospitalization, as a complying response. This is the most liberal definition of compliance: it ignores potential problems of timing (e.g., ordering a gentamicin level later than the fourth dose), and a single order for APTT written at discharge would count as compliance for all APTT suggestions made during the hospital stay.
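The per-physician immediate compliance defined above can be made concrete with a short sketch. The function and data below are illustrative, not the study's actual analysis code:

```python
def immediate_compliance(trigger_events):
    """Per-physician immediate compliance.

    trigger_events is a list of (n_suggested, n_ordered_same_session)
    pairs, one per triggering event. Each event gets a fractional score
    (corollaries ordered in the same session / corollaries suggested),
    and the physician's overall score is the arithmetic mean of these.
    """
    scores = [ordered / suggested for suggested, ordered in trigger_events]
    return sum(scores) / len(scores)

# Hypothetical example: one gentamicin order triggered two suggestions
# (creatinine and drug level) of which one was accepted (score 0.5),
# and one heparin dose change triggered one APTT suggestion, which was
# ordered (score 1.0). The physician's immediate compliance is 0.75.
events = [(2, 1), (1, 1)]
print(immediate_compliance(events))  # 0.75
```

The 24-hour and hospital stay scores differ only in the time window used to decide whether a suggested order counts as ordered, so their numerators can only grow relative to this one.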
Housestaff physicians were the target of the intervention, so they were the unit of analysis. When physicians served more than one rotation and could not be assigned to the same study status for all rotations, we excluded the data from all rotations after the physician's original study status changed. We also excluded data about suggested orders that occurred when physicians' and patients' study status differed—as could occur if a physician traded his or her night call with a physician who had a different study status.
Faculty are proscribed from writing orders (other than “do not resuscitate” orders) except during emergencies. Therefore, the analysis was limited to housestaff physicians. Because the physicians practice within teams, they are not fully independent units. Interns write orders independently, but they still might be influenced by the resident or staff leaders of their teams. Further complicating the association, some physicians served with different residents and/or interns on different rotations during the study. To allow for this clustering of physicians within teams, we used generalized estimating equations (GEEs). This method can account for the hierarchical relationships in the data set without the need to discard repeated observations within clusters.21 We analyzed the immediate, 24-hour, and hospital stay compliance using GEEs.
To complement the above analysis, with its complex hierarchical model, we also analyzed a subset of the data using a simpler approach. For this analysis, we considered only the physician's response to the first occurrence of each unique trigger-corollary order pair per patient. So, for example, if a patient had multiple changes in heparin drip rate and the computer suggested an APTT to follow up each of these dosage changes, we counted only the physician's response to the first suggestion. From these data, we computed the per-physician immediate, 24-hour, and hospital stay compliance as above, and we compared the intervention and control physicians' mean compliance scores by Student's t test. In this simpler analysis, we ignored possible interactions among physicians within teams.
We also examined several patient-specific “outcomes”: length of stay, hospital charges, number of pharmacist interventions, and average creatinine level during the hospital stay (a common suggested corollary order for evaluating potential nephrotoxicity of trigger drug orders). The distributions of length of stay and charges were highly skewed to the right, so we applied log transformations to these two variables to produce more normal distributions. For the measures of patient status (creatinine levels), we compared intervention patients with control patients using Student's t test, ignoring the clustering within physicians or physician teams.
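As a toy illustration of why the log transformation helps, the sketch below uses hypothetical charge values (not study data) and a crude mean-to-median ratio as a skewness indicator:

```python
import math

# Hypothetical right-skewed hospital charges: most stays are modest,
# a few are very expensive and pull the mean far above the median.
charges = [1200, 1500, 1800, 2100, 2600, 3400, 5200, 9800, 24000]
log_charges = [math.log(c) for c in charges]

def skew_ratio(xs):
    """Crude skewness indicator: mean/median ratio (>1 suggests right skew)."""
    xs = sorted(xs)
    median = xs[len(xs) // 2]
    return (sum(xs) / len(xs)) / median

print(round(skew_ratio(charges), 2))      # well above 1: strongly right-skewed
print(round(skew_ratio(log_charges), 2))  # close to 1 after the log transform
```

On the log scale the distribution is far more symmetric, which makes t tests on the transformed variable much better behaved.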
We examined intervention and control patients on some clinical and demographic variables to be sure that the two groups were comparable, using Student's t test and χ2 test to compare patient attributes between the study and control groups.
The randomized, controlled trial ran for 30 weeks, beginning in October 1992. There were five housestaff rotations during the 30-week period, with six teams of faculty and housestaff per rotation.
Six physicians were excluded from the study because they received fewer than five suggestions about corollary orders. This cutoff was chosen by inspection of the distribution of the number of suggested corollary orders. The excluded physicians were mostly off-service physicians who covered night calls for one or two nights but were not part of teams assigned to a service. A total of 86 housestaff physicians received at least five suggestions about corollary orders during the study: 45 intervention physicians and 41 control physicians. Nine physicians changed study status when they returned for a second rotation during the study. For these physicians, we included only the data for the rotations before their study status changed.
During the study, the intervention and control physicians cared for 2,181 different patients during 2,955 different admissions. Table 2 shows the demographic and clinical characteristics of these patients. No significant differences between intervention and control patients exist for any of these variables.
Of these 2,181 patients, 1,686 (77.3%) had at least 1 order written (814 intervention patients and 872 control patients) that would trigger a suggestion for a corollary order. In all, intervention and control physicians entered 7,394 trigger orders which resulted in 11,404 suggestions for corollary orders. On average, a trigger order generated suggestions for 1.5 corollary orders. Trigger orders made up 9.6% of all orders written for the 2,181 patients. Patients with at least 1 suggested corollary order per admission had an average of 6.8 such suggestions per admission.
The effect of the computer suggestions was very strong, whether measured as immediate, 24-hour, or hospital stay compliance. Intervention physicians ordered the corollary orders required by our guidelines twice as often as control physicians did, when measured by immediate compliance (46.3% versus 21.9%, p < 0.0001). Significant differences between intervention and control physicians also appear in 24-hour compliance (50.4% vs. 29.0%, p < 0.0001) and hospital-stay compliance (55.9% vs. 37.1%, p < 0.0001). Because corollary orders for saline lock had such a large effect and are the least significant clinically, we repeated the simple analyses excluding saline lock orders and found immediate compliance was 46.4% vs. 27.6% (p < 0.0001), 24-hour compliance was 50.9% vs. 35.3% (p < 0.0001), and hospital-stay compliance was 56.0% vs. 43.5% (p < 0.0001). The effects were almost identical whether measured on all of the data using a complicated GEE model or measured as first-occurrence compliance using a simple Student's t test. The mean immediate compliance to the first occurrence of a suggestion was 48% among intervention physicians and 23% among control physicians (p < 0.001). The other first-occurrence compliance scores and their significance levels were also very close to their GEE counterparts.
Figure 2 is a histogram comparing the 24 hour compliance of study and control physicians. There is little overlap between the study and control populations. Several control physicians had compliance rates below 20%, and no control physician reached a compliance rate greater than 50%. On the other hand, study physicians all maintained compliance rates of at least 30%, and some reached levels of 70%.
There is very little difference between the immediate and 24-hour compliance scores, indicating that corollary orders that are not written at the same time as their trigger order are unlikely to be written later during the same day.
The difference in the compliance scores of intervention and control physicians shrinks by almost one fifth from immediate to hospital stay compliance. This reflects a greater increase in control physicians' compliance over the longer time window. Nonetheless, a large difference (18 percentage points) separates the compliance scores of intervention and control physicians even when measured as hospital stay compliance.
Breakdowns of compliance by trigger and corollary order illustrate the kinds of items the intervention affected most extensively. Table 3 shows the 24-hour compliance scores for intervention and control physicians broken down by the 25 most common trigger orders. Table 4 shows comparable data broken down by the 25 most common corollary orders. In both cases, the top 25 orders account for more than 80% of the suggestions provided.
The effect of the intervention varied by specific trigger-corollary order pair. Computer reminders increased adherence to guidelines concerning many important corollary orders. For example, they increased 24-hour compliance for monitoring serum levels of gentamicin, vancomycin (though the value of monitoring is debatable), and theophylline by 9, 26, and 24 percentage points respectively. Differences persisted when hospital compliance was assessed. We were surprised by these results because we had assumed that most physicians were already complying fully with guidelines about antibiotic and theophylline level monitoring.
The reminders also caused large improvements in compliance with suggestions to order prothrombin times after coumadin dosage changes, APTT after heparin dose changes, baseline creatinines before vancomycin and aminoglycoside antibiotics, and radiographs to check for line placement and lung status during mechanical ventilation. The difference between intervention and control compliance rates for these suggestions was as much as 25 percentage points. On the other hand, computer suggestions to order baseline creatinine measurements before starting cimetidine or ranitidine had no effect. In retrospect, this may have been an appropriate response to a guideline with only a theoretical basis.
Pharmacists made 105 interventions with intervention physicians and 156 with control physicians (two-tailed p = 0.003) for errors considered to be life threatening, severe, or significant.
There was no difference in maximum serum creatinine levels between the groups (1.51 ± 1.25 for intervention patients versus 1.42 ± 0.88 for controls; p = 0.28).
Length of stay and total inpatient charges were not different for intervention patients compared with control patients. The average length of stay was 7.62 days for intervention patients and 8.12 days for control patients, a difference of 0.5 days (95% confidence interval of the difference, -0.17 to 1.19; p = 0.94). Average hospital charges were $8,073.52 for intervention patients and $8,589.47 for control patients, a difference of $515.95 (95% confidence interval of the difference, -$828.41 to $1,316.85; p = 0.68).
An increase in charges might have been expected, since the aim of all the reminders was to increase the utilization of the suggested order items. However, the variance and confidence intervals of charges and length of stay are too large to conclude anything from these results.
Computer suggestions about corollary orders had large effects on the adherence to our guidelines about corollary orders, especially when measured in terms of immediate or 24-hour compliance. Thus, they reduced errors of omission. That we observed smaller differences in compliance when measured over the entire hospital stay is not surprising. With a larger time window, there is more time for providers to remember to order the item, for other physicians (and consultants) to write (or induce) the order, and for other indications to arise for the order. Further, active institutional controls, such as pharmacokinetics consulting services, may have had more time to influence the ordering process. The interventions increased adherence to many guidelines that were being promoted by our Pharmacy and Therapeutics Committee, such as the requirement for APTT measures after each heparin dosage change and the follow-up of aminoglycoside therapy with measurements of serum levels. The reminders also significantly reduced the number of adverse or potentially adverse effects as measured by the pharmacy's intervention log. Pharmacists had to call physicians to ask about drug-related interventions one third less often for study patients than for control patients.
We did not see any effects on outcomes such as length of stay, serum creatinine, or charges. However, the guidelines touched only 9.6% of the orders written during this study, so we had not expected to see important outcome effects when the study affected such a small part of the overall care process.
The clinical importance of the suggestions about corollary orders varied. Some with low clinical significance, such as ordering a saline lock when intravenous fluids are discontinued, can have large economic impact because the service often cannot be billed without an order. The intervention had large effects on some practices that were already part of the pharmacy review process, such as recommendations about ordering peak and trough gentamicin levels to monitor for adverse effects of intravenous gentamicin, and APTTs to monitor the efficacy of heparin therapy. The reminders had little or no effect on some corollary orders: e.g., suggestions to measure creatinine levels before using cimetidine. Indeed, the rate of response to this common suggestion was less than 20% among both intervention and control physicians; when creatinine was ordered, it may have been for reasons other than the cimetidine order. The large variation in response rates across suggestions indicates that physicians did not blindly accept the suggested orders. In past studies, we have seen a similar phenomenon: computer reminders had their greatest effect on compliance when the physicians agreed with, and intended to comply with, the rules.12 Physicians may choose not to accept corollary orders for several reasons: (1) the order was not appropriate (i.e., the rule was in error); (2) the physician did not agree with the basis for the reminder (disagreement with the guideline); or (3) the physician chose not to deal with the suggestion and dismissed it (no time).
In previous studies, we have used the microcomputer workstations to discourage the ordering of unnecessary tests and treatments, with significant reductions in costs.22,23,24 Given that the current intervention only suggested ordering tests or drugs (it never suggested not testing or discouraged ordering a drug), we had expected an increase in resource use. However, there were no significant differences in the hospital charges for intervention and control patients. It is possible that, even though the guidelines suggested more resource use, the better care they promoted led to lower costs by avoiding complications. Given the high cost of drug-induced complications,37 avoiding even one complication could offset a substantial increase in resource use.
The study was done in a teaching hospital where only residents write orders. (Staff physicians guide the care process, but by policy they do not write orders.) However, in other studies both inside and outside of academic centers, reminders influenced residents and staff significantly, and reminders have influenced family practice physicians in private practice.16,25 We believe that similar effects would be observed in most settings.
Physicians forget to do baseline testing (e.g., measuring creatinine levels before ordering an intravenous pyelogram) or follow-up testing (e.g., using serum drug levels to monitor gentamicin treatment). These errors of omission, one type of what Reason calls latent errors, are difficult to prevent because the omission itself is difficult to identify.2 One way to improve physician compliance with such guidelines is face-to-face “reverse detailing” (though this has been applied to errors of commission, not omission), but these and other educational efforts are labor intensive and cannot always be scaled up to a large practice environment.
Errors in medical practice can have dire effects, yet errors commonly occur. Physicians have difficulty accepting that mistakes are inevitable, and they take responsibility for mistakes made by others caring for their patients.26,27,28 When errors are investigated, the immediate cause of the error is typically identified and corrected, but the root causes are not. The way to reduce errors is to design systems that will prevent or detect them. Leape3 outlines four mechanisms for redesigning health care systems to significantly reduce the chance of error: (1) reduce reliance on memory; (2) improve access to information; (3) standardize; and (4) train. Computer order entry systems provide easy access to patient and textbook-level information29; they provide standardization through preformed order sets; and they provide active “training” via patient-specific reminders.
We derived most of the guidelines about corollary orders from pharmacy rules for quality assurance: rules about monitoring therapy for therapeutic effects and about measuring renal function. Our Pharmacy and Therapeutics Committee has clear recommendations on these subjects. Pharmacy and Therapeutics Committees nationwide have long worked to improve compliance with such guidelines. They institute drug utilization review programs, chart reviews, and education efforts but have had little long-term success, despite the money and effort invested.30,31,32 These programs are difficult to sustain because they require ongoing investment and continuous renewal. Even when physicians are aware of the appropriate monitoring guidelines, they fail to carry them out.10
The use of computers to remind physicians about corollary orders as they write trigger orders can be sustained without significant ongoing costs, assuming physicians are already writing orders with computer workstations. Furthermore, analysis at another institution suggests that computer interventions during order entry have the potential to reduce adverse effects by 25-49%,33 and many hospitals are now introducing such systems. There are costs associated with writing and maintaining the guidelines, but those costs are not large. It took one of the authors (JMO) about 2 weeks to write the rules used in this study. (These same rules have been running untended since this study ended.)
Presentation of fully formed suggested orders has another, though lesser, benefit. If the physician already intended to write the order, this approach makes computer order entry more time-competitive with the alternative paper method.34 The execution of the rules and display of the suggested-order screen took less than half a second on 33-MHz, Intel 80486-based microcomputers, saving the physician the 10 to 20 seconds it might have taken to order the same tests manually. Physicians do not have to pause to think about ordering follow-up tests, find the test's name on a menu (or type it in), or enter the instructions related to the order; they accept the order with a single keystroke.
Many other opportunities exist to improve care and speed the order entry process using order feedback. For another study, we are now building computer guidelines that suggest orders for hypertension management according to the patient's blood pressure control, comorbidities, age, gender, and race. Without impeding the physician's goals, the computer can present the “preferred” approach and simplify the order entry process at the same time, while leaving the physician in ultimate control of the decision. As physician order entry systems become more common, this will be an efficient way to disseminate and implement guidelines.
These findings must be interpreted in light of limitations in the study. First, the data are from internal medicine housestaff at a single institution, and it may not be possible to generalize from them. Second, the design does not allow us to separate the effects of the intervention (corollary orders) and the guidelines on which they are based. Finally, the relatively small study size limits the ability to detect changes in patient outcomes.
Computer systems can definitely increase compliance with guidelines that reflect the current beliefs of the ordering physicians. While not universally available at present, such systems are available at leading academic centers and throughout the Veterans' Administration.35 By demonstrating how clinical decision support systems can decrease errors in physician practice, our results may stimulate more widespread implementation of these systems.36
We acknowledge the technical assistance provided by Burke Mamlin, Jeff Warvel, and Jill Warvel. We thank the physicians, students, nurses, and staff of Wishard Memorial Hospital, Indianapolis, Indiana, for their patience. In particular, we acknowledge the indispensable efforts of Terry Hogan, RN, Brenda Smith, RN, and Cheryl Wodniak, RN.
Financial support was provided by the Agency for Health Care Policy and Research (grants HS 05626 and HS 07719) and the National Library of Medicine (contract NO1-LM-3-3410).