Developing and maintaining a computerized screening system generally involves several steps. The first and most challenging step is to collect patient data in electronic form. The second step is to apply queries, rules, or algorithms to the data to identify cases whose data are consistent with an adverse event. The third step is to determine the predictive value of the queries, usually by manual review.
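The three steps above can be sketched in code. This is a minimal, hypothetical illustration: the record structure, the rule set, and the example data are all assumptions for the sketch, not part of any described system.

```python
# Minimal sketch of the three screening steps, using hypothetical
# patient records (step 1: data already in electronic form).

def screen(records, rules):
    """Step 2: apply rules to the electronic data to flag candidate events."""
    return [r for r in records if any(rule(r) for rule in rules)]

def positive_predictive_value(flagged, confirmed):
    """Step 3: manual review confirms a subset; PPV = confirmed / flagged."""
    return len(confirmed) / len(flagged) if flagged else 0.0

# Example: one rule flags any record listing a rescue medication.
records = [
    {"id": 1, "meds": ["epinephrine"]},
    {"id": 2, "meds": ["lisinopril"]},
]
rules = [lambda r: "epinephrine" in r["meds"]]
flagged = screen(records, rules)
print([r["id"] for r in flagged])  # [1]
```

In practice, the flagged cases from step 2 would go to chart review, and the reviewers' confirmations would feed back into the predictive-value estimate of step 3.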
The data source most often applied to patient safety work is the administrative coding of diagnoses and procedures, usually in the form of ICD-9-CM and CPT codes. This coding represents one of the few ubiquitous sources of clinically relevant data. The usefulness of this coding—if it is accurate and timely—is clear. The codes provide direct and indirect evidence of the clinical state of the patient, comorbid conditions, and the progress of the patient during the hospitalization or visit. For example, administrative data have been used to screen for complications that occur during the course of hospitalization.12,13
However, because administrative coding is generated for reimbursement and legal documentation rather than for clinical care, its accuracy and appropriateness for clinical studies are variable at best. The coding suffers from errors, lack of temporal information, lack of clinical content,15 and “code creep”—a bias toward higher-paying diagnosis-related groups (DRGs).16
Coding is usually done after discharge or completion of the visit; thus its use in real-time intervention is limited. Adverse events are poorly represented in the ICD-9-CM coding scheme, although some events are present (for example, 39.41 “control of hemorrhage following vascular surgery”). Unfortunately, the adverse event codes are rarely used in practice.17
Despite these limitations, administrative data are useful in detecting adverse events. Such events may often be inferred from conflicts in the record. For example, a patient whose primary discharge diagnosis is myocardial infarction but whose admission diagnosis is not related to cardiac disease (e.g., urinary tract infection) may have suffered an adverse event.
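The conflict described above can be expressed as a simple rule over diagnosis codes. The sketch below is a hypothetical illustration: the prefix list for cardiac diagnoses is an assumption chosen for the example (ICD-9-CM 410.x denotes acute myocardial infarction; 599.0 is urinary tract infection), not a validated screening rule.

```python
# Hypothetical rule: flag a possible in-hospital adverse event when the
# discharge diagnosis is cardiac but the admission diagnosis was not.
# Prefix list is illustrative only (ICD-9-CM 410-414 cover ischemic heart disease).

CARDIAC_PREFIXES = ("410", "411", "412", "413", "414")

def possible_inpatient_event(admission_dx: str, discharge_dx: str) -> bool:
    """True when the discharge code is cardiac but the admission code is not."""
    return (discharge_dx.startswith(CARDIAC_PREFIXES)
            and not admission_dx.startswith(CARDIAC_PREFIXES))

print(possible_inpatient_event("599.0", "410.91"))   # True  (UTI -> acute MI)
print(possible_inpatient_event("410.91", "410.91"))  # False (cardiac throughout)
```

A real screen would need to account for secondary diagnoses and for cardiac disease present on admission but coded elsewhere.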
Pharmacy data and clinical laboratory data represent two other common sources of coded data. These sources supply direct evidence for medication and laboratory adverse events (e.g., dosing errors, clinical values out of range). For example, applications have screened for adverse drug reactions by finding all of the orders for medications that are used to rescue or treat adverse drug reactions—such as epinephrine, steroids, and antihistamines.18–20
Anticoagulation studies can use the activated partial thromboplastin time, a laboratory test that reflects the adequacy of anticoagulation. In addition, these sources supply information about the patient’s clinical state (a medication or laboratory value may imply a particular disease), corroborating or even superseding the administrative coding. Unlike administrative coding, pharmacy and laboratory data are available in real time, making it possible to intervene in the care of the patient.
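The two triggers just described can be sketched as simple filters. Both the rescue-medication list and the aPTT range below are illustrative assumptions for the example, not clinical guidance.

```python
# Sketch of a pharmacy trigger (orders for rescue medications) and a
# laboratory trigger (aPTT values outside an assumed target range).

RESCUE_MEDS = {"epinephrine", "methylprednisolone", "diphenhydramine"}
APTT_TARGET = (60.0, 100.0)  # seconds; assumed therapeutic range for the example

def pharmacy_signal(orders):
    """Return medication orders that match the rescue-drug trigger list."""
    return [o for o in orders if o.lower() in RESCUE_MEDS]

def aptt_signal(values):
    """Return aPTT results outside the assumed target range."""
    lo, hi = APTT_TARGET
    return [v for v in values if v < lo or v > hi]

print(pharmacy_signal(["Epinephrine", "metformin"]))  # ['Epinephrine']
print(aptt_signal([45.0, 75.0, 130.0]))               # [45.0, 130.0]
```

Because pharmacy and laboratory feeds arrive in real time, such triggers can fire while the patient is still in the hospital, unlike screens built on post-discharge administrative coding.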
With increasing frequency, hospitals and practices are installing workflow-based systems such as inpatient order entry systems and ambulatory care systems. These systems supply clinically rich data, often in coded form, which can support sophisticated detection of adverse events. If providers use the systems in real time, it becomes possible to intervene and prevent or ameliorate patient harm.
The detailed clinical history, the evolution of the clinical plan, and the rationale for the diagnosis are critical to identifying adverse events and to sorting out their causes. Yet this information is rarely available in coded form, even with the growing popularity of workflow-based systems. Visit notes, admission notes, progress notes, consultation notes, and nursing notes contain important information and are increasingly available in electronic form. However, they are usually available in uncontrolled, free-text narratives. Furthermore, reports from ancillary departments such as radiology and pathology are commonly available in electronic narrative form. If the clinical information contained in these narrative documents can be turned into a standardized format, then automated systems will have a much greater chance of identifying adverse events and even classifying them by cause.
A study by Kossovsky et al.22 found that distinguishing planned from unplanned readmissions required narrative data from discharge summaries and concluded that natural language processing would be necessary to separate such cases automatically. Roos et al.23 used claims data from Manitoba to identify complications leading to readmission and found reasonable predictive value, but similar attempts to identify whether a diagnosis represented an in-hospital complication of care based on claims data met with difficulties resolved only through narrative data (discharge abstracts).
A range of approaches is available to unlock coded clinical information from narrative reports. The simplest is to use lexical techniques to match queries to words or phrases in the document. A simple keyword search, similar to what is available on Web search engines and MEDLINE, can be used to find relevant documents.12,25–27
This approach works especially well when the concepts in question are rare and unlikely to be mentioned unless they are present.26
A range of improvements can be made, including stemming prefixes and suffixes to improve the lexical match, mapping to a thesaurus such as the Unified Medical Language System (UMLS) Metathesaurus to associate synonyms and concepts, and simple syntactic approaches to handle negation. A simple keyword search was fruitful in one study of adverse drug events based on text from outpatient encounters.17
The technique uncovered a large number of adverse drug events, but its positive predictive value was low (0.072). Negative and ambiguous terms had the most detrimental effect on performance, even after the authors employed simple techniques to avoid the problem (for example, discarding any sentence that mentioned negation).
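A keyword screen of this kind, with the crude sentence-level negation handling described, can be sketched in a few lines. The term list and negation-cue list below are illustrative assumptions, not those used in the cited study.

```python
# Minimal keyword screen over narrative text with naive negation handling:
# any sentence containing a negation cue is discarded outright.
import re

ADE_TERMS = {"rash", "anaphylaxis", "hypotension"}   # illustrative trigger terms
NEGATION_CUES = {"no", "not", "denies", "without"}   # illustrative cue words

def flag_sentence(sentence: str) -> bool:
    """Flag a sentence mentioning an ADE term, unless it contains a negation cue."""
    words = re.findall(r"[a-z]+", sentence.lower())
    if any(w in NEGATION_CUES for w in words):
        return False  # crude: drop the whole sentence on any negation cue
    return any(w in ADE_TERMS for w in words)

print(flag_sentence("Patient developed a diffuse rash after amoxicillin."))  # True
print(flag_sentence("Patient denies rash or hives."))                        # False
```

The second example shows the approach's bluntness: a sentence such as "no fever, but new rash" would also be discarded, which is one source of the missed cases and low predictive value noted above.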
Natural language processing28,29 promises improved performance by better characterizing the information in clinical reports. Two independent groups have demonstrated that natural language processing can be as accurate as expert human coders for coding radiographic reports, as well as more accurate than simple keyword methods.30–32
A number of natural language processing systems are based on symbolic methods such as pattern matching or rule-based techniques and have been applied to health care.30–45
These systems have varied in approach: pure pattern matching, syntactic grammar, semantic grammar, or probabilistic methods, with different tradeoffs in accuracy, robustness, scalability, and maintainability. These systems have done well in domains such as radiology, in which the narrative text is focused, and the results for more complex narratives such as discharge summaries are promising.36,41,46–50
With the availability of narrative reports in real time, automated systems can intervene in the care of the patient in complex ways. In one study, a natural language processor was used to detect patients at high risk for active tuberculosis infection based on chest radiographic reports.45
If such patients were in shared rooms, respiratory isolation was recommended. This system cut the missed respiratory isolation rate approximately in half.
Given clinical data sources, which may include medication, laboratory, and microbiology information as well as narrative data, the computer must be programmed to select cases in which an adverse event may have occurred. In most patient safety studies, someone with knowledge of patient safety and database structure writes queries or rules to address a particular clinical area. For example, a series of rules to address adverse drug events can be written.17
One can broaden the approach by searching for general terms relevant to patient safety or look for an explicit mention of an adverse drug event or reaction in the record. Automated methods to produce algorithms may also be possible. For example, one can create a training set of cases in which some proportion is known to have suffered an adverse event. A machine learning algorithm, such as a decision tree generator, a neural network, or a nearest neighbor algorithm, can be used to categorize new cases based on what is learned from the training set.
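The machine-learning approach can be sketched with a nearest-neighbor classifier over manually labeled training cases. Everything below is a hypothetical illustration: the binary features (say, a rescue-medication order, an abnormal laboratory value, a keyword hit) and the labels are invented for the example.

```python
# Nearest-neighbor classification of new cases from a labeled training set,
# using made-up binary feature vectors and Hamming distance.

def hamming(a, b):
    """Number of positions at which two equal-length feature vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbor(train, case):
    """train: list of (features, label) pairs; return the closest case's label."""
    return min(train, key=lambda t: hamming(t[0], case))[1]

# Features: (rescue-med order, abnormal lab, narrative keyword hit)
train = [
    ((1, 1, 0), "adverse event"),
    ((1, 0, 1), "adverse event"),
    ((0, 0, 0), "no event"),
    ((0, 1, 0), "no event"),
]
print(nearest_neighbor(train, (1, 1, 1)))  # adverse event
```

A decision tree or neural network would slot into the same pipeline: train on reviewed cases, then categorize incoming cases automatically.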
Finally, the computer-generated signals must be assessed for the presence of adverse events. Given the relatively low sensitivity and specificity that may occur in computer-based screening,17,51 it is critical to verify the accuracy of the system. Both internal and external validation are important. Manual review of charts can be used to estimate sensitivity, specificity, and predictive value. Comparison with previous studies at other institutions can also serve to calibrate the system.
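The validation statistics named above follow directly from the counts produced by a manual chart review. The counts in the example are invented for illustration; only the resulting PPV of 0.072 echoes the figure reported earlier.

```python
# Validation statistics from manual-review counts: true positives (tp),
# false positives (fp), false negatives (fn), true negatives (tn).

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def positive_predictive_value(tp, fp):
    return tp / (tp + fp)

# Illustrative counts: 250 flagged cases, of which 18 were confirmed on
# review; 7 events were missed; 1000 charts reviewed in total.
tp, fp, fn = 18, 232, 7
tn = 1000 - tp - fp - fn  # 743

print(round(positive_predictive_value(tp, fp), 3))  # 0.072
print(round(sensitivity(tp, fn), 2))                # 0.72
```

Such internal estimates can then be compared against published figures from other institutions as an external check.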