To be effective, clinical decision-support systems must have access to accurate diagnostic and prescribing information. We found that electronic, claims-derived diagnosis codes have good accuracy for identifying ARI and UTI visits in our practice-based research network. ARIs as a group would be expected to have better accuracy than individual ARI diagnoses: a wider diagnostic definition leads to increased sensitivity and positive predictive value and decreased specificity. These findings compare well with other studies, mostly of chronic conditions, that examined the accuracy of electronic diagnoses and found sensitivities from 40% to 100%, specificities from 91% to 100%, and positive predictive values from 85% to 100%.2,23,25,27,28,29
In contrast, we identified poor accuracy in electronic antibiotic prescribing for ARIs and UTIs. The specificity (93%) and positive predictive value (90%) of electronic antibiotic prescribing were good. However, the sensitivity was low (43% overall), reflecting a gap between note-documented and electronic antibiotic prescribing. Although there was substantial improvement over time—perhaps because of increased clinician familiarity, improvements in the prescribing module, or encouragement from leadership—at the end of the study period, the sensitivity of electronic antibiotic prescribing was still only 58%.
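For reference, the test characteristics discussed here derive from a 2×2 comparison of electronic data against the chart-review gold standard. The sketch below is illustrative only: the cell counts are hypothetical, chosen simply to reproduce the rounded percentages reported above, and are not the study's actual data.

```python
def accuracy_measures(tp, fp, fn, tn):
    """Compute test characteristics from a 2x2 table comparing an
    electronic data source against a chart-review gold standard."""
    sensitivity = tp / (tp + fn)  # completeness: chart-documented events captured electronically
    specificity = tn / (tn + fp)  # gold-standard negatives correctly absent electronically
    ppv = tp / (tp + fp)          # correctness: electronic events confirmed in the chart
    return sensitivity, specificity, ppv

# Hypothetical counts that reproduce the rounded figures in the text:
# 43 of 100 chart-documented prescriptions captured electronically.
sens, spec, ppv = accuracy_measures(tp=43, fp=5, fn=57, tn=65)
# sens = 0.43, spec ≈ 0.93, ppv ≈ 0.90
```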
Hogan and Wagner1 performed a systematic review of the accuracy of data in electronic health records and found that medications had a sensitivity (completeness) of between 93% and 100% and a positive predictive value (correctness) of 83%. Thiru et al.2 found that electronic prescribing information in primary care had a sensitivity of between 93% and 100% and a positive predictive value of 100%. A more recent study of the accuracy of medication lists for older Veterans Affairs patients found a sensitivity of 75% and a positive predictive value of 87%.30 These studies generally examined chronic medications and medication lists, which are "descriptive"; our system, in contrast, is "prescriptive." Future work should examine whether there are differences in the use of electronic prescribing between acute and chronic medications within a single network or health system.
Understanding the accuracy of electronic data is important for patient care.31,32 The low sensitivity of electronic antibiotic prescribing represents a safety problem.20 Electronic antibiotic prescribing avoids errors associated with handwritten prescriptions and provides medication interaction checking, allergy checking, medical problem interaction checking, laboratory checking, and prospective monitoring for potential adverse drug events.33,34,35 The failure of clinicians to use electronic prescribing also limits the potential benefits of clinical decision support: clinicians are bypassing a key "effector arm" of decision support.36 For example, clinicians not using electronic prescribing will not interact with clinical decision support that recommends penicillin as the antibiotic of choice for group A β-hemolytic streptococcal pharyngitis or that recommends not prescribing antibiotics for acute bronchitis.37,38
Understanding the accuracy of electronic data is also important for quality improvement and research purposes.31,39 The apparent antibiotic prescribing rate differs depending on whether one examines chart-documented or electronic antibiotic prescribing. In an analysis of the appropriateness of antibiotic prescribing for ARIs and UTIs, electronic antibiotic prescribing would presumably appear "better," with a lower antibiotic prescribing rate; however, this is simply an artifact of data quality. Similarly, for intervention studies that seek to reduce antibiotic prescribing for ARIs and UTIs, the more easily accessible electronic antibiotic prescribing rate could be misleading: the baseline rate would be too low, and the effectiveness of the intervention would be blunted by an artifactual "floor effect."
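The "floor effect" can be made concrete with a hypothetical sketch: when electronic data capture only a fraction of true prescriptions (43% sensitivity, as observed in this study), both the measured baseline rate and the measured absolute reduction shrink proportionally. The intervention effect sizes below are invented for illustration, not taken from any study.

```python
def measured_rate(true_rate, sensitivity, false_positive_rate=0.0):
    """Apparent prescribing rate when electronic capture is incomplete."""
    return true_rate * sensitivity + (1 - true_rate) * false_positive_rate

# Hypothetical intervention: true prescribing falls from 60% to 40% of visits,
# but electronic prescribing has only 43% sensitivity (the figure in this study).
baseline = measured_rate(0.60, 0.43)   # apparent baseline ≈ 25.8%, not 60%
followup = measured_rate(0.40, 0.43)   # apparent follow-up ≈ 17.2%
true_effect = 0.60 - 0.40              # 20 percentage points
measured_effect = baseline - followup  # ≈ 8.6 percentage points: the effect is blunted
```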
This study has limitations that should be considered. First, it was performed on a sample of visits with a primary billing diagnosis of an ARI or UTI, which constrains our ability to comment on the sensitivity and specificity of the diagnoses. Use of a data set with broader inclusion criteria would likely decrease specificity but presumably increase sensitivity. In addition, the claims diagnoses that we evaluated are used primarily for administrative, not clinical, purposes. However, claims diagnoses are electronically available and frequently used for quality improvement, profiling, and clinical operations. We are presently implementing an "end-of-visit" system in which the clinician enters the diagnostic code prospectively within the LMR.
Second, the visit notes are an imperfect gold standard. Even when a physician documented an antibiotic prescription in the visit note, we did not verify that the physician actually wrote or called in the prescription, that the patient filled it at a pharmacy, or that the patient took the antibiotic. Although the visit note is only a proxy for "the truth," it is an accessible, economical measure of what the clinician intended to do. Similarly, we did not assess whether the documentation supported the diagnosis. Third, we were unable to locate the visit note for some of our randomly selected visits. Most of the missing visits were for women with a diagnosis of UTI and may represent women coming to the clinic only to have a urinalysis performed. If failure to document is a marker of poor quality, exclusion of these visits would probably bias the results toward higher sensitivity. Fourth, including electronic antibiotic prescriptions within 30 days of the index visit likely inflated the sensitivity and decreased the specificity of electronic antibiotic prescribing. Despite these last two limitations, we still found a disturbingly low sensitivity for electronic antibiotic prescribing.
Finally, this analysis was performed in an urban and suburban PBRN with a single electronic health record and focused only on the prescribing of one class of medication for two acute conditions. Although these results may not generalize to other settings, electronic health records, conditions, and medications, they demonstrate that clinicians, researchers, and clinical leaders need to understand the accuracy of electronic information that they are using.