J Am Med Inform Assoc. 1999 Nov-Dec; 6(6): 512–522.

Improving Response to Critical Laboratory Results with Automation

Results of a Randomized Controlled Trial


Objective: To evaluate the effect of an automatic alerting system on the time until treatment is ordered for patients with critical laboratory results.

Design: Prospective randomized controlled trial.

Intervention: A computer system to detect critical conditions and automatically notify the responsible physician via the hospital's paging system.

Patients: Medical and surgical inpatients at a large academic medical center. One two-month study period for each service.

Main outcomes: Interval from when a critical result was available for review until an appropriate treatment was ordered. Secondary outcomes were the time until the critical condition resolved and the frequency of adverse events.

Methods: The alerting system looked for 12 conditions involving laboratory results and medications. For intervention patients, the covering physician was automatically notified about the presence of the results. For control patients, no automatic notification was made. Chart review was performed to determine the outcomes.

Results: After exclusions, 192 alerting situations (94 interventions, 98 controls) were analyzed. The intervention group had a 38 percent shorter median time interval (1.0 hours vs. 1.6 hours, P = 0.003; mean, 4.1 vs. 4.6 hours, P = 0.003) until an appropriate treatment was ordered. The time until the alerting condition resolved was less in the intervention group (median, 8.4 hours vs. 8.9 hours, P = 0.11; mean, 14.4 hours vs. 20.2 hours, P = 0.11), although these results did not achieve statistical significance. The impact of the intervention was more pronounced for alerts that did not meet the laboratory's critical reporting criteria. There was no significant difference between the two groups in the number of adverse events.

Conclusion: An automatic alerting system reduced the time until an appropriate treatment was ordered for patients who had critical laboratory results. Information technologies that facilitate the transmission of important patient data can potentially improve the quality of care.

Errors and oversights occur in medical practice,1,2,3,4 which is not surprising given the volume of information clinicians must manage. Improving the systems within which clinicians practice is likely to be much more effective at improving care than chastising individual clinicians about individual errors.1 Increasingly, information technology is being used to improve systems of care by detecting and informing clinicians about key clinical events.5

It is important that clinicians respond in a timely and appropriate manner to critical laboratory results for their patients, but studies have shown that optimal response does not always occur. In a study of adverse events among inpatients,6 we found that faster and more appropriate response to critical laboratory results might have prevented 4.1 percent of adverse events, and another 5.5 percent of adverse events might have been prevented by improved communication of laboratory results that were important in the context of a patient's medication regimen. Also, in a study of “life-threatening” laboratory results, Tate7 found that only 50 percent were responded to appropriately. In a study done to evaluate the treatment of critical laboratory results at our institution, we found that in 27 percent of cases, more than five hours passed before an appropriate treatment was ordered.8 In an outpatient setting, McDonald9 found that computer reminders increased compliance with a variety of protocols (including treatment of critical laboratory results) from 22 to 51 percent. Other studies have shown similar impacts of reminder systems.10,11

The process by which clinicians receive and act on critical laboratory results is complex. Although regulations require hospital laboratories to communicate critical results for inpatients directly to the responsible provider,12,13,14 and most institutions have special reporting procedures in place for critical results,15 the identity of the provider responsible for a patient's care is rarely known to the laboratory personnel. Usually, the laboratory technologist telephones the patient's floor, and this sets in motion a chain of communication involving the unit secretary, nurses, and only eventually the responsible clinician. This complex interdisciplinary communication requires more coordination than is usually found in medical settings, and such procedures often fail.16 Also, many results that do not formally fall into the “critical value” category on the basis of simple threshold criteria and do not require special reporting procedures may be serious in the context of a patient's other data (for example, a rapidly falling but not yet critically low hematocrit).

Several recent efforts have used information technology to detect such important events and present them to providers in a timely manner. For example, Rind et al.17 demonstrated that e-mail reminders decreased from 96 to 72 hours the time until nephrotoxic or renally excreted medications were adjusted when inpatients had rising creatinine values. Also, Hripscak et al.18 and Jain et al.19 developed an event monitor that informs physicians by e-mail when radiographic evidence of tuberculosis is found in patients who are not in respiratory isolation.

We have developed a system that automatically detects critical laboratory results and pages the responsible provider.20,21 We defined critical conditions to be laboratory results that met our hospital's critical reporting criteria as well as results that met other, more complex criteria indicating that prompt attention may be necessary. We performed a study to determine whether this automated system would reduce the time until an appropriate treatment is ordered for patients with a critical condition and reduce the time until the critical condition is resolved.


Study Settings

The study was carried out at Brigham and Women's Hospital (BWH), a 720-bed tertiary-care hospital in Boston, Massachusetts. The Brigham Integrated Computing System (BICS)22 provides administrative and clinical computing services at BWH. All patient laboratory results are stored in BICS, and physicians and nurses enter inpatient orders directly into the system.23 The BWH laboratory generates 3,000,000 inpatient chemistry and hematology results per year, of which 0.96 percent meet its critical reporting criteria.8 Such critical results are telephoned by the laboratory's technologists to the patient's floor as soon as they are available and just before the results are stored in the hospital's database, when they become available for general review. Previous evaluation has shown that the BWH laboratory's compliance with this policy is outstanding.8

Patient Population

The study period for the medical service was Dec 1, 1994, to Jan 31, 1995, and the study period for the surgical service was Sep 1, 1995, to Oct 30, 1995. All medical and surgical inpatients during the respective study periods were included in the study. The patient was the unit of randomization. Assignment to the control or intervention group was performed using an internal patient-specific identifier assigned sequentially at the time a patient is registered. This identifier is not available to the clinicians. Randomizing by patient allowed us to study patient-level outcomes. Randomizing by “team” would have allowed us to study the impact of the intervention on physicians, but BWH does not have teams to which physicians belong exclusively. The BWH Human Research Committee approved the study.
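The paper does not specify how the sequential internal identifier was mapped to a study arm; a minimal sketch, assuming (purely for illustration) assignment by the parity of the identifier:

```python
def assign_group(patient_id: int) -> str:
    """Map a sequentially issued internal patient identifier to a
    study arm. Parity-based assignment is an assumption made here for
    illustration; the paper does not state the actual mapping."""
    return "intervention" if patient_id % 2 == 0 else "control"

# Sequentially registered patients alternate between arms, yielding
# roughly balanced groups over time.
arms = [assign_group(pid) for pid in range(1000, 1006)]
```

Because the identifier is assigned at registration and hidden from clinicians, assignment is effectively random with respect to clinical characteristics.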

Knowledge Base

Twelve alerting rules were used in this study (Table 1)—seven rules based on the value of a single laboratory result, four rules that detected changes in laboratory results over time, and one rule that detected a drug-laboratory interaction. The rules were based on a set of alerting conditions developed previously.7 The system was not intended to alert for all the laboratory's critical results.

Table 1
Frequency Distribution of Alerts
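The three rule types described above can be sketched as predicate functions; the thresholds and medication names below are hypothetical illustrations, not the study's actual knowledge base:

```python
def single_value_rule(potassium_mmol_l: float) -> bool:
    # Type 1: a single result crosses a fixed threshold
    # (hypothetical hyperkalemia cutoff).
    return potassium_mmol_l >= 6.0

def trend_rule(hct_prev: float, hct_now: float) -> bool:
    # Type 2: a change in results over time
    # (hypothetical falling-hematocrit criterion).
    return hct_prev - hct_now >= 6.0

def drug_lab_rule(potassium_mmol_l: float, active_meds: set) -> bool:
    # Type 3: a drug-laboratory interaction
    # (hypothetical low potassium in a patient receiving digoxin).
    return potassium_mmol_l <= 3.0 and "digoxin" in active_meds
```

The trend and drug-laboratory rules show why a comprehensive clinical database is needed: they require prior results and the active medication list, not just the new value.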

Alerting System Design

The study made use of a clinical alerting system that had been in use at BWH since June 1994.21 The alerting system (Figure 1) uses a continuously running event monitor to determine whether new patient data satisfy any rule-based alerting criteria. The manner in which alerting rules are represented in the computer is discussed elsewhere.20 If an alert is generated, a notification program automatically pages the alerting patient's covering physician. The covering physician is determined from an automated “coverage list” database,24 which identifies the physician who is primarily responsible for each patient at any given time. The callback number on the digital pager is “8888,” which indicates that an automated alert has been generated for one of the physician's patients. The physician can then log on to any computer workstation to review the alert.

Figure 1
Automated alerting system software architecture. New clinical data are sent to the event monitor. The event monitor determines whether the new data warrant an alert. If so, the notification program is called. The notification program queries the coverage ...
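The event-monitor-to-pager flow shown in Figure 1 can be sketched as follows; the coverage lookup and pager call are hypothetical stand-ins for the BICS coverage-list database and paging interface:

```python
ALERT_CALLBACK = "8888"  # pager callback code signaling an automated alert

# Stand-in for the automated "coverage list" database, which maps each
# patient to the currently responsible physician.
COVERAGE_LIST = {"patient-1": "dr-oncall"}

def page_physician(physician: str, callback: str) -> str:
    # Stand-in for the hospital paging system interface.
    return f"paged {physician} with callback {callback}"

def on_new_data(patient_id: str, alert_fired: bool):
    """Event-monitor hook: when new data satisfy an alerting rule,
    look up the covering physician and page them."""
    if not alert_fired:
        return None
    physician = COVERAGE_LIST.get(patient_id)
    if physician is None:
        return None
    return page_physician(physician, ALERT_CALLBACK)
```

The fixed callback code is what tells the physician the page came from the alerting system rather than from a person.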

The alert review screen (Figure 2) displays patient identifying information, the alert message, the alert details, any active medications that are relevant to the alert,* and alert-specific actions that the physician may invoke, such as orders for therapeutic medications (e.g., intravenous or oral potassium for a patient with a hypokalemia alert) or additional laboratory tests. Most actions can be invoked with a single keystroke; some require the order to be completed (e.g., entering the dose for a medication order).

Figure 2
Alert review and therapeutic action screen. This is the screen that the physician sees when reviewing the alert. The patient's identifying information is shown at the top, followed by the time of the alert, the alert message, and the details of the ...

After the alert has been reviewed, the physician is asked to answer a single multiple-choice question presented on the computer screen—“Which of the following apply?”—to elicit the physician's attitude. The possible answers are: 1) I will take action as a result of this alert, 2) I was already aware of this data, 3) This alert is not interesting, 4) The data in this alert are wrong (i.e., false positive), or 5) Other. The physician is required to check one answer.

If the physician has not reviewed the alert within 15 minutes, a fail-safe notification sequence is initiated (Figure 3). The border of the BICS computer screens on the alerting patient's floor turns red, indicating that an automated alert is present for one of the patients on the floor. The nurses on the floor can then review the alert on the computer. If, after 30 more minutes, the alert still has not been reviewed, a computer workstation in the telecommunication office begins to beep. A telephone operator then reviews the alert and calls the patient's floor with the details of the alert. The telephone operator records the name of the person to whom the information was given (usually a unit secretary or a nurse). In this way, all alerts communicated by the alerting system are reviewed within 45 minutes.

Figure 3
Notification failsafe sequence. If the alert has not been reviewed within 15 minutes after the physician has been paged, the borders of the computer screens on the patient's floor turn red. The computers on the inpatient floors display new information ...
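The fail-safe timeline above (page at t = 0, floor screens at 15 minutes, telecommunication workstation at 45 minutes) can be sketched as a stage function; the stage labels are illustrative:

```python
def escalation_stage(minutes_since_page: float, reviewed: bool) -> str:
    """Return the active fail-safe stage for an alert, following the
    15-minute and 45-minute (15 + 30) deadlines described above."""
    if reviewed:
        return "resolved"
    if minutes_since_page < 15:
        return "awaiting physician review"
    if minutes_since_page < 45:
        return "floor screens red; nurses may review"
    return "telecom workstation beeping; operator calls floor"
```

Each stage widens the circle of people who can see the alert, which is what guarantees review within 45 minutes even if the physician never responds to the page.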


The automated alerting system detected the presence of alerting situations for both intervention and control patients. However, the notification function was initiated (i.e., the covering house officer was paged) only for intervention patients.

During the study, we did not replace the BWH laboratory procedure for calling the floor with critical laboratory results. Thus, for patients in the intervention group who had an alert based on a simple critical result, the physician was notified by the automated alerting system as well as by the laboratory's routine notification procedure. The difference is that the laboratory's phone calls went to the unit secretary or nurse, whereas the automated pages went directly to the physician.


The primary outcome of the study was the length of the time interval from the filing of the alerting result—i.e., the time when the laboratory result was stored in the database and became available for review by providers—to the ordering of appropriate treatment. A secondary outcome was the interval between the result's filing time and the resolution of the critical condition. The time of resolution was defined as the arrival time in the laboratory of a test, or the time of a bedside test (e.g., fingerstick glucose), that demonstrated that the alerting condition was no longer present. Outcomes were assessed by trained reviewers using chart review. Reviewers did not have access to the randomization identifier and so were blinded as to whether the alert was in the intervention or the control group.

Assessment of when an appropriate treatment was ordered was performed by the reviewers using explicit criteria (Table 2). These data were found in physician's orders, daily flow sheets, the medication record, or the progress notes. The goal was to measure the time until the physician acted; therefore, we measured the time that the treatment order was placed rather than the time that the treatment was administered. If an order existed for “potassium replacement according to scale,” we defined the time of treatment for hypokalemia as the time the potassium was actually administered. We also used the time of administration when orange juice or other oral glucose replacement was given by mouth to treat hypoglycemia.

Table 2
Criteria Used by Reviewers to Determine the Time Appropriate Treatment Was Ordered

Although the study was not designed to have sufficient power to detect differences in adverse outcomes, we collected data on how often a number of adverse outcomes occurred. These outcomes were death, cardiopulmonary arrest, an unexpected transfer to the intensive care unit, delirium, stroke, new renal insufficiency, new acute renal failure, dialysis, or an unexpected return to the operating room. The adverse outcomes were included only if they occurred within 48 hours of the alerting event.

For the intervention group, we determined the fraction of the page notifications to which the physicians responded and the fraction in which physicians took action directly from the alert review screen.


We excluded repeat alerts of the same type for a patient during a single admission; alerts for patients who had “do not resuscitate” (DNR) orders at the time of the alert (DNR was used as a proxy for patients whose conditions were terminal and who may not receive aggressive therapy); alerts due to “non-representative” laboratory results (i.e., results that were deemed, by trained reviewers using explicit criteria, not to represent the patient's physiologic state, because they were preceded and followed shortly by values in the normal range, with no treatment having been given for the condition); and alerts for which treatment had been initiated before the alerting value was reached (e.g., hyperglycemia where treatment was initiated before the glucose level reached the alerting criteria). The first and last exclusion criteria were included because we wanted to measure the impact of the system only on newly recognized critical laboratory results.
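Applied in order, the four exclusion criteria above amount to a filter over detected alerts; the record fields below are hypothetical names for the chart-review determinations:

```python
def include_alert(alert: dict, seen: set) -> bool:
    """Return True if an alert survives the four exclusions described
    above. Field names are hypothetical stand-ins for chart-review
    flags; `seen` accumulates (patient, rule, admission) keys."""
    key = (alert["patient_id"], alert["rule"], alert["admission"])
    if key in seen:                 # repeat of same alert type, same admission
        return False
    seen.add(key)
    if alert["dnr_at_alert"]:       # DNR order in place at the time of the alert
        return False
    if alert["nonrepresentative"]:  # value deemed not to reflect physiology
        return False
    if alert["prior_treatment"]:    # treatment begun before the alerting value
        return False
    return True
```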


Results for the time intervals until appropriate treatment was ordered and until the alerting condition was resolved are reported as medians, with 25th and 75th percentiles, and as means, because the effect on high outliers is important.25 Univariate comparisons between the intervention and control groups for primary and secondary outcomes were made using the Wilcoxon rank sum statistic. We also report comparisons with the use of the Student t-test to determine the effect of the intervention on the tails of the distribution. On the basis of pilot data,8 we estimated that we had 80 percent power to detect a 66 percent reduction in the time until treatment was ordered and a 50 percent reduction in the time until the condition was resolved.
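As an illustration of the primary comparison, the Mann-Whitney U statistic underlying the Wilcoxon rank sum test can be computed directly; the hours below are invented example data, not the study's:

```python
from statistics import median

def rank_sum_u(x, y):
    """Mann-Whitney U for sample x against sample y: count each pair
    where x < y as 1 and each tie as 0.5. A large U means values in x
    tend to be smaller than values in y."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi < yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Invented example data (hours until treatment was ordered).
intervention = [0.5, 0.8, 1.0, 1.3, 4.0]
control = [0.9, 1.4, 1.6, 2.2, 6.1]

u = rank_sum_u(intervention, control)
```

Because the test compares ranks rather than raw values, it is robust to the high outliers that motivated reporting both medians and means.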


The study periods included 11,412 patient-days on the medical service and 11,097 patient-days on the surgical service. A total of 560 alerts were detected—292 in the control group and 268 in the intervention group (Figure 4). Among these, 131 were excluded because they were not the first occurrence of one of the alert types for a patient within an admission. Of the 429 remaining alerts, 41 were for patients whose status was DNR, and 51 were deemed to be nonrepresentative of the patients' true condition; 87 alerts fell into one or both of these categories and were excluded. Prior treatment for the condition had been started in 150 of the remaining 342 alerts, leaving 192 alerts for 178 patients (1.08 alerts per patient) where the alert represented a critical situation for which treatment had not yet been started (98 in the control group and 94 in the intervention group). The majority of the alerts were due to falling hematocrit, hyperglycemia, hyperkalemia, and falling potassium (Table 1). There were more falling hematocrit alerts on the surgical service and more low potassium alerts on the medical service.

Figure 4
Diagram showing the number of alerts detected, excluded, and eventually analyzed for the current study.

The median time until treatment was ordered (Table 3) for the 94 alerts in the intervention group was 1.0 hours; this was 38 percent less than the 1.6 hours for the 98 alerts in the control group (P = 0.003). The mean time in the intervention group was also shorter (4.1 hours vs. 4.6 hours, P = 0.003). Treatment was ordered more than 5 hours after 15 (16 percent) of the intervention alerts and 24 (24 percent) of the control alerts (P = 0.19, chi-square).

Table 3
Effect of Intervention on Time until Treatment Was Started and on Time until Condition Was Resolved, for All Alerting Situations

In eight cases, no resolution was documented prior to discharge. For the 184 alerts in which a resolution was documented, the median time until the condition was resolved was shorter in the intervention group (8.4 hours [n = 89] vs. 8.9 hours [n = 95], P = 0.11), as was the mean (14.4 hours vs. 20.2 hours, P = 0.11), but neither difference was statistically significant. When no resolution was documented, if the time of resolution was set to the time of discharge, the data did not change significantly (median, 8.6 hours [intervention] vs. 9.0 hours [control], P = 0.10; mean, 15.2 hours vs. 21.9 hours, P = 0.10).

Of the 192 alerts, 97 (50.5 percent) met the laboratory's critical reporting criteria and, in addition to being handled by the automatic alerting system, were communicated to the patient's floor by the laboratory technologist. Of these, 43 were in the intervention group and 54 were in the control group (Table 4). Even in these instances, when the laboratory's usual reporting procedure was invoked, there was a trend in the intervention group toward decreased time until the treatment was ordered (median, 0.7 hours vs. 1.1 hours, P = 0.06). The median time until the condition was resolved was not significantly different between the groups (7.0 hours [phone call plus intervention, n = 40] vs. 8.1 hours [phone call only, n = 54], P = 0.43), and the means also were similar.

Table 4
Effect of Intervention on Time until Treatment Was Started and on Time until Condition Was Resolved, When Alerting Situation Satisfied the Laboratory's Critical Reporting Criteria and a Phone Call Was Made

Of the 95 results that did not meet the laboratory's reporting criteria (Table 5), the 51 in the intervention group had a median time until treatment was ordered of 1.2 hours. This was significantly shorter than the median of 2.5 hours (P = 0.009) for the 44 control alerts (which did not meet the laboratory's calling criteria and thus were communicated neither by the alerting system nor by the laboratory). The mean time was also significantly shorter in the intervention group (4.8 hours vs. 6.1 hours, P = 0.01). The time until the critical condition resolved also was shorter in the intervention group for these alerts (median, 9.2 hours vs. 10.2 hours, P = 0.05; mean, 15.8 hours vs. 28.8 hours, P = 0.06).

Table 5
Effect of Intervention on Time until Treatment Was Started and on Time until Condition Was Resolved, When Alerting Situation Did Not Satisfy the Laboratory's Critical Reporting Criteria

Of the 94 intervention alerts, physicians reviewed 65 (69 percent); the median time until a treatment was ordered for these 65 alerts was 0.5 hours. Nurses reviewed 7 alerts (7 percent), and the remaining 22 alerts (23 percent) were reviewed by the telecommunication staff, who communicated them to the floor. Physicians said they would take an action as a result of the alert for 50 of the 65 alerts they reviewed (77 percent), although they took an action directly from the alert review screen for only 10 alerts (15 percent). For only 10 alerts (15 percent) did the physicians say they were already aware of the alerting data.

A total of 58 adverse events were identified among the 192 alerting situations (Table 6). The mortality rate was 7.4 percent in the intervention group (7 of 94 patients) and 13.3 percent in the control group (13 of 98 patients), although this difference was not statistically significant (P = 0.19, chi-square). The total adverse event rate per patient was similar in the two groups (31 events in 94 intervention patients [0.33 events per patient] vs. 27 events in 98 control patients [0.28 events per patient], P = 0.41).

Table 6
Adverse Events among Alerting Situations


Critical laboratory results are signs of major perturbations of key physiologic systems. If patients with such results are not treated promptly and appropriately, organ damage or death may ensue. In this study we found that communicating critical laboratory results directly to the responsible provider decreased the time until an appropriate treatment was ordered, and there was also a trend toward a decreased duration of the life-threatening condition. The effect of the system was present even when the result was communicated to the patient's floor via the laboratory's telephone-based critical reporting procedures and was more pronounced when the result met complex criteria not communicated by standard procedures.

The effect of the intervention was probably due to the fact that the patient's key provider received the information directly, with the pertinent data highlighted and in a context that made ordering the correct treatment easier. Although no difference in adverse event rates was seen, there was a trend toward decreased mortality in the intervention group, and we did not design the study to have sufficient power to identify differences in adverse event rates. Moreover, this is a situation in which improved process can be expected to decrease adverse event rates in the long term. Also, physicians' responses to the attitudinal question indicated that they felt the automatic alerts were important.

When used to evaluate and communicate patient data, advanced information technologies can add value in many ways. Currently, hospital laboratories routinely use only single-value thresholds as critical indicators,15 because other data are not readily available. In contrast, if a comprehensive clinical database is present, the computer can also detect such events as changes in laboratory results over time and laboratory results that are serious if patients are taking certain medications (drug-laboratory interactions) or have a specific condition such as renal failure. These complex situations are more specific and may be more meaningful than simple thresholds.

Computers also can facilitate response to alerting situations by presenting the information in a suitable context. While it is useful to know that a patient has a critically low serum potassium value, it is even more useful to know that the patient also is receiving high doses of furosemide and digoxin. Presenting information in a suitable context is especially important in cross-coverage situations where the responsible provider may be unfamiliar with the involved patient.26 A background alerting application, such as that described in this study, provides a useful counterpart to reminders embedded in order entry applications.27,28 Reminders embedded in applications can be presented in real time to clinicians using interactive applications on a computer workstation. Background alerts are triggered by data that enter the computer system from other automated systems, so special notification procedures are required to communicate the information to the clinician.

Several nontrivial prerequisites are needed to develop an alerting system such as the one described here. First, the data needed to make the decisions (inferences) must be available electronically on a common computing platform. The alerts in this study made use of laboratory results, medication orders, and patient demographics in our integrated patient database. Most institutions do not yet have physicians writing orders electronically29; however, almost all have laboratory data available and many have patient medication data available in the hospital pharmacy system.

Second, interfaces between the decision-making systems and notification systems (e.g., paging and electronic mail systems) also are necessary. Such interfaces are becoming technically easier and more widespread.

Third, it can be difficult to determine which provider to notify. Such “coverage list” applications are not yet in widespread use but are essential if information technology is to fulfill its potential to enhance communication. In this study, the computer contacted the covering house officer directly and had methods to follow up if the house officer did not respond to the notification. In contrast, Tate et al.7 displayed the alert message when the patient's data were reviewed, and Rind et al.17 sent e-mail messages to physicians who had previously reviewed the patient's data. Providing laboratory personnel with access to the coverage information would not necessarily add efficiencies, because the technologists' workflow would be interrupted if they have to page a physician, wait for a response, and remember to contact someone else if no response were received. Future applications of automated notification technology may require a system to identify and send messages to such individuals as the patient's primary care physician, referring physician, or other health professional.

Fourth, it is time consuming to decide which events are worthy of automated pages, automated e-mail messages, or other special communication methods. Evidence from the literature to inform such choices is limited, and acquiring this knowledge from experts may be arduous. Tate et al.7 spent more than a year in a formal consensus development process with intensivists to determine which events merited alerts. Although we were able to customize their knowledge base for our institution, we spent significant time specifying new rules. The Arden Syntax,30 which specifies a standard form for medical logic, may allow knowledge developed through consensus at one institution to be more easily reviewed and used elsewhere. Standardization of the representation of medical concepts31 and medical vocabularies32 will also facilitate transfer of electronic knowledge.

Further enhancements to the system are possible. For example, we have added rules that notify physicians about a low platelet count in a patient who is receiving a platelet antagonist and about administration to a patient with renal failure of a medication that should be avoided in that condition. Also, we have added the ability to “retract” an alert. This is necessary if data are entered incorrectly, an alert is generated, and the data are subsequently corrected.33 Physicians continue to be comfortable with the system; in a later analysis, physicians said they would take action in 70 percent of the alerts, and they in fact took action directly from the review screen in 39 percent of the alerts.21 The discrepancy between the numbers is probably due to situations where patients must be examined before an action can be taken.

If the number of rules in such systems expands, care will have to be taken not to overload physicians with alerts. Some systems, such as one in use at the University of Utah Medical Center, permit physicians to “subscribe” to rules in which they are interested (Pierre Pincetl, MD, oral communication, Nov 1997). Also, as data become increasingly available in electronic form, alerting rules can increase in specificity. For example, if a woman is known to be pregnant, she could have a different alerting value for glucose than other patients.

This study has several limitations. The work was carried out at one institution, so the results may not be generalizable to other settings or institutions. Also, we found no significant change in patient outcome as a result of the intervention, although the study was not designed to have sufficient power to detect such differences. Finally, we tested the intervention only with the subset of the critical laboratory results we felt were most important to treat urgently; although there may be many opportunities to use a system such as this to improve care, the effect may differ if other laboratory results are used. We also excluded from this analysis alerts that did not definitely require a response by the physician (e.g., false positives, alerts for patients whose status was DNR, and repeat alerts). This was legitimate because this study examined the impact of the alerting system on treatment delays. Future evaluations of alerting systems should take all alerts into account and determine how often they are deemed useful.

We conclude that the automated alerting system decreased the time it took for physicians to respond to critical laboratory results. Improvements in patient outcomes should follow. Physicians thought that the communicated information was important. Information technologies can help clinicians detect important abnormalities in the flood of data they continually receive.


This work was supported in part by research grant RO1-HS08927 from the Agency for Health Care Policy and Research.


*Each alerting rule has an associated list of relevant medications, and the alert review screen displays which of those medications the alerting patient is receiving.


1. Leape LL. Error in medicine. JAMA. 1994;272(23):1851-7. [PubMed]
2. Bedell SE, Deitz DC, Leeman D, Delbanco TL. Incidence and characteristics of preventable iatrogenic cardiac arrests. JAMA. 1991;265(21):2815-20. [PubMed]
3. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA. 1995;274(1):35-43. [PubMed]
4. Classen DC, Evans RS, Pestotnik SL, Horn SD, Menlove RL, Burke JP. The timing of prophylactic administration of antibiotics and the risk of surgical-wound infection. N Engl J Med. 1992;326(5):281-6. [PubMed]
5. Classen DC. Clinical decision support systems to improve clinical practice and quality of care. JAMA. 1998;280(15):1360-1. [PubMed]
6. Bates DW, O'Neil AC, Boyle D, et al. Potential identifiability and preventability of adverse events using information systems. J Am Med Inform Assoc. 1994;1:404-11. [PMC free article] [PubMed]
7. Tate KE, Gardner RM, Weaver LK. A computerized laboratory alerting system. MD Comput. 1990;7(5):296-301. [PubMed]
8. Kuperman GJ, Boyle D, Jha A. How promptly are inpatients treated for critical laboratory results? J Am Med Inform Assoc. 1998;5:112-9. [PMC free article] [PubMed]
9. McDonald CJ. Computer reminders, the quality of care, and the nonperfectability of man. N Engl J Med. 1976;295:1351-5. [PubMed]
10. Shea S, DuMouchel W, Bahamonde L, et al. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc. 1996;3:399-409. [PMC free article] [PubMed]
11. McDonald CJ, Hui SL, Smith DM, et al. Reminders to physicians from an introspective computer medical record: a two-year randomized trial. Ann Intern Med. 1984;100(1):130-8. [PubMed]
12. Accreditation Manual for Pathology and Clinical Laboratory Services. Chicago, Ill: The Joint Commission, 1994.
13. Commission on Laboratory Accreditation. 1992 Inspection Checklist. Northfield, Ill: College of American Pathologists, 1992.
14. Clinical Laboratory Improvement Amendments of 1988. Federal Register., Feb 28, 1992.
15. Kost GJ. Critical limits for urgent clinician notification at U.S. medical centers. JAMA. 1990;263(5):704-7. [PubMed]
16. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA. 1995;274(1):35-43. [PubMed]
17. Rind DM, Safran C, Phillips RS, et al. Effect of computer-based alerts on the treatment and outcomes of hospitalized patients. Arch Intern Med. 1994;154(13):1511-7. [PubMed]
18. Hripcsak G, Clayton PD, Jenders RA, Cimino JJ, Johnson SB. Design of a clinical event monitor. Comput Biomed Res. 1996;29(3):194-221. [PubMed]
19. Jain NL, Knirsch CA, Friedman C, Hripscak G. Identification of suspected tuberculosis patients based on natural language processing of chest radiograph reports. Proc AMIA Annu Fall Symp. 1996:542-6. [PMC free article] [PubMed]
20. Kuperman GJ, Teich JM, Bates DW, McLatchey J, Hoff T. Representing hospital events as complex conditionals. Proc 19th Annu Symp Comput Appl Med Care. 1995:137-41. [PMC free article] [PubMed]
21. Kuperman GJ, Teich JM, Bates DW, et al. Detecting alerts, notifying the physician, and offering action items: a comprehensive alerting system. Proc AMIA Annu Fall Symp. 1996:704-8. [PMC free article] [PubMed]
22. Teich JM, Glaser JP, Beckley RF, et al. Toward cost-effective, quality care: The Brigham Integrated Computing System. In: Steen EB (ed). Proceedings of the 2nd Nicholas E. Davies CPR Recognition Symposium. Washington DC: Computerized Patient Record Institute, 1996:3-34.
23. Bates DW, Kuperman G, Teich JM. Computerized physician order entry and quality of care. Qual Manag Health Care. 1994;2(4):18-27. [PubMed]
24. Hiltz FL, Teich J. Coverage list: a provider-patient database supporting advanced hospital information services. Proc 18th Annu Symp Comput Appl Med Care. 1994:809-13. [PMC free article] [PubMed]
25. Bates DW, Kuperman GJ, Jha A, et al. Does the computerized display of charges affect inpatient ancillary test utilization? Arch Intern Med. 1997;157:2501-8. [PubMed]
26. Petersen LA, Orav EJ, Teich JM, O'Neil AC, Brennan TA. Using a computerized signout to improve continuity of inpatient care and prevent adverse events. Jt Comm J Qual Improv. 1998;24(2):77-87. [PubMed]
27. Overhage JM, Tierney WM, Zhou XH, McDonald CJ. A randomized trial of “corollary orders” to prevent errors of omission. J Am Med Inform Assoc. 1997;4:364-75. [PMC free article] [PubMed]
28. LePage LF, Gardner RM, Golubjatnikov OK. Improving blood transfusion practice: role of a computerized expert system. Transfusion. 1992;32:253-9. [PubMed]
29. Sittig DF, Stead WW. Computer-based physician order entry: the state of the art. J Am Med Inform Assoc. 1994;1:108-23. [PMC free article] [PubMed]
30. Hripcsak G, Ludemann P, Pryor TA, Wigertz OB, Clayton PD. Rationale for the Arden Syntax. Comput Biomed Res. 1994;27(4):291-324. [PubMed]
31. Jenders RA, Sujansky W, Broverman CA, Chadwick M. Toward improved knowledge sharing: assessment of the HL7 Reference Information Model to support medical logic module queries. Proc AMIA Annu Fall Symp. 1997:308-12. [PMC free article] [PubMed]
32. Huff SM, Rocha RA, McDonald CJ, et al. Development of the Logical Observations Identifiers, Names and Codes (LOINC) vocabulary. J Am Med Inform Assoc. 1998;5:276-92. [PMC free article] [PubMed]
33. Kuperman GJ, Hiltz FL, Teich JM. Advanced alerting features: displaying new relevant data and retracting alerts. Proc AMIA Annu Fall Symp. 1997:243-7. [PMC free article] [PubMed]
