Identification of Trials
We searched MEDLINE (1966 to April 2006) and EMBASE (1976 to April 2006) for studies on electronic prescribing that evaluated its effect on medication errors and ADEs. We combined MeSH terms such as “Medical Order Entry Systems,” “prescriptions, drug,” and “drug therapy, computer-assisted” with general search terms such as “order entry,” “CPOE,” “POE,” “order communication,” “prescription system,” “drug prescription,” “prescribing,” “ordering,” and “computerized reminders.” These search terms were combined with terms for evaluation studies taken from an earlier literature survey.18
The complete query is available upon request.
We also searched the Cochrane Database of Systematic Reviews and examined the reference lists of relevant reviews to identify further studies. Finally, we hand-searched the Journal of the American Medical Informatics Association (1994–2006), the International Journal of Medical Informatics (1997–2006), and Methods of Information in Medicine (1990–2006), and complemented this by hand-searching the references of retrieved study papers. We applied no language restrictions.
Inclusion Criteria and the Selection of Studies
Intervention: We included studies in which the intervention was electronic prescribing, defined as any computer-based system for ordering drugs that is used at the point of care. We included electronic prescribing systems regardless of the level of decision support they provided (e.g., with or without alerts on drug-drug interactions) and for all types of drugs. We excluded studies that used electronic prescribing only for ordering diagnostic tests or therapeutic procedures.
Control: We included only studies in which an electronic prescribing system was compared with handwritten ordering, or in which an electronic prescribing system with more sophisticated functionality (e.g., a drug-drug interaction alert) was compared with a less sophisticated system.
Population and setting: We included studies where physicians were the primary users of the electronic prescribing system. We excluded studies where other groups (e.g., nurses, pharmacists) were the primary users. We included all clinical settings such as outpatient care, inpatient care, and intensive care. We included all patient groups.
Study design: We included randomized controlled trials, non-randomized controlled trials, before-after trials, and time-series analyses with multiple measurements. We included only field studies and excluded all laboratory and simulation studies.
Outcomes: We included studies that evaluated the effect of electronic prescribing on medication errors, potential ADEs, and ADEs. We defined a medication error as any error in the process of ordering, transcribing, dispensing, administering, or monitoring medication. This included an inappropriate drug, dose, frequency, route, or timing (when related to patient safety), as well as problems such as illegible or unsigned orders and problems related to drug-allergy, drug-drug, drug-lab, and other interactions. Potential ADEs were defined as medication errors with significant potential to harm a patient that may or may not actually reach the patient. Adverse drug events (ADEs) were defined as patient injuries resulting from drug use. We excluded studies in which medication errors or ADEs were not the primary focus, studies that were still ongoing, and papers in which the groups were definitely not comparable. When data reporting was unclear, we contacted the authors and requested further information.
Data Extraction and Study Quality Assessment
We extracted data from the text, tables, and graphs of the original publications. Two reviewers (EA and CM) examined the data and reached consensus through discussion. In addition, one reviewer (PSI) independently reviewed all extracted data. All cases of discordant data were resolved by discussion.
To detect medication error rates and ADE rates, we used the definition provided in each paper (please see Table 3, available online at www.jamia.org, for the definition of medication error and ADE for each respective paper).
When no absolute numbers were provided for medication errors or ADEs, we calculated them from the given data (e.g., if the frequency of ADEs was given only per 1,000 patient-days, the absolute number of ADEs could be calculated from the number of patient-days). To determine the study size, we used the number of orders as the unit of analysis; if the number of orders was not available, the number of patients or patient-days was used. If multiple measurements were reported in a study (e.g., in a time-series analysis), we used the data from the last reported measurement.
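The rate-to-count conversion described above can be sketched as follows (a minimal illustration; the function name and figures are hypothetical and not taken from any included study):

```python
def absolute_events(rate_per_1000_patient_days, patient_days):
    """Convert an event rate per 1,000 patient-days into an absolute count."""
    return rate_per_1000_patient_days * patient_days / 1000

# e.g., a hypothetical 6.5 ADEs per 1,000 patient-days observed over
# 4,000 patient-days corresponds to 26 ADEs in absolute terms
count = absolute_events(6.5, 4000)
```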
We classified the functionality of the CPOE system as one of the following:
• no decision support: selection of drugs from a list, information on available doses and on costs, access to drug monographs, no further decision support;
• limited decision support: evidence-based, patient-specific recommendation of a drug, dose, frequency, etc.; or
• advanced decision support: at least some drug-allergy, drug-drug interaction, drug-lab, or other patient-specific alerts.
All results were reported in systematic evidence tables. Study quality was assessed using a checklist (please see Table 1, available online at www.jamia.org), developed on the basis of a 16-item assessment tool by the German Scientific Working Group Technology Assessment for Health Care. The checklist was applied independently by two reviewers (PSI, EA); differences in judgment were resolved by discussion.
For each study, the risk ratio (RR) with its 95% confidence interval (CI) was calculated by comparing medication error rates, potential ADE rates, and ADE rates between the intervention and comparison group. If available, the number of orders was used as the denominator. Otherwise, the number of patient-days or the number of patients was used as the denominator.
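The risk ratio and its 95% CI can be computed with the standard log-scale normal approximation; the sketch below illustrates the calculation with hypothetical counts (the function name and example numbers are ours, not from any included study):

```python
import math

def risk_ratio_ci(events_int, n_int, events_ctl, n_ctl, z=1.96):
    """Risk ratio with 95% CI, via the log-scale normal approximation."""
    rr = (events_int / n_int) / (events_ctl / n_ctl)
    # Standard error of log(RR) for two independent proportions
    se_log_rr = math.sqrt(1 / events_int - 1 / n_int
                          + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# hypothetical: 30 errors in 1,000 electronic orders vs.
# 90 errors in 1,000 handwritten orders
rr, lower, upper = risk_ratio_ci(30, 1000, 90, 1000)
```

Here the number of orders serves as the denominator; with patient-days or patients as the denominator the formula is applied in the same way.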
We used a graphical approach based on forest plots to perform subgroup analyses, arranging studies by increasing risk ratio within subgroups. Potentially relevant subgroups were defined a priori: clinical setting (inpatient, outpatient, or intensive care), patient group, type of drug, type of system (home-grown or commercial), functionality (no, limited, or advanced decision support), study design, and method of error detection.
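The ordering used on the forest plots (studies grouped by subgroup, then sorted by increasing risk ratio within each subgroup) can be sketched as follows; the study names and risk ratios are purely illustrative:

```python
from itertools import groupby

# Hypothetical (study, subgroup, risk ratio) tuples
studies = [
    ("Study A", "inpatient", 0.45),
    ("Study B", "outpatient", 0.80),
    ("Study C", "inpatient", 0.30),
    ("Study D", "outpatient", 0.55),
]

# Sort by subgroup, then by increasing risk ratio within each subgroup
ordered = sorted(studies, key=lambda s: (s[1], s[2]))

for subgroup, rows in groupby(ordered, key=lambda s: s[1]):
    print(subgroup, [name for name, _, _ in rows])
```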
All the analyses were performed with the software package STATA 9.2 (StataCorp, College Station, Texas, USA).