BACKGROUND: Electronic information systems have been proposed as one means to reduce medical errors of commission (doing the wrong thing) and omission (not providing indicated care).
OBJECTIVE: To assess the effects of computer-based cardiac care suggestions.
DESIGN: A randomized, controlled trial targeting primary care physicians and pharmacists.
PATIENTS: A total of 706 outpatients with heart failure and/or ischemic heart disease.
INTERVENTION: Evidence-based cardiac care suggestions, approved by a panel of local cardiologists and general internists, were displayed to physicians and pharmacists as they cared for enrolled patients.
MEASUREMENTS: Adherence to the care suggestions, generic and condition-specific quality of life, acute exacerbations of cardiac disease, medication compliance, health care costs, satisfaction with care, and physicians' attitudes toward guidelines.
RESULTS: Subjects were followed for 1 year during which they made 3,419 primary care visits and were eligible for 2,609 separate cardiac care suggestions. The intervention had no effect on physicians' adherence to the care suggestions (23% for intervention patients vs 22% for controls). There were no intervention-control differences in quality of life, medication compliance, health care utilization, costs, or satisfaction with care. Physicians viewed guidelines as providing helpful information but constraining their practice and not helpful in making decisions for individual patients.
CONCLUSIONS: Care suggestions generated by a sophisticated electronic medical record system failed to improve adherence to accepted practice guidelines or outcomes for patients with heart disease. Future studies must weigh the benefits and costs of different (and perhaps more Draconian) methods of affecting clinician behavior.
Patient safety is an increasingly important focus of quality improvement activities.1 Most interventions to improve safety have focused on preventing errors of commission (doing the wrong thing)2 yet arguably more damage is done by errors of omission (not doing the right thing). For example, recent reports have shown that as many as half of patients suffering acute myocardial infarctions fail to receive aspirin and β-blockers,3–6 errors of omission that could cost more lives and cause more disability than errors resulting from negligent practice.2
In caring for patients with chronic conditions, errors of omission might be reduced by adhering to simple rules or algorithms based on well-documented efficacy studies.7–11 However, physicians (like most humans) struggle at times with mundane tasks that require vigilance and with managing large volumes of poorly organized information.12 Failure to adhere to accepted guidelines and practice standards can be due to the complexity of health care systems and medical decisions,13–15 difficulty in applying available clinical knowledge,16,17 physicians' idiosyncrasies,18 and human error.19
Electronic medical record systems and computer-based interventions have been touted as a potential means of facilitating the translation of research into practice by enhancing physicians' compliance with evidence-based guidelines.20 Although interventions such as computer reminders have increased adherence to preventive care guidelines,21 there is less experience with their use in managing chronic illnesses, and their effects have been inconsistent.22,23 We used an established electronic medical record system containing a sophisticated physician order entry system24 to test the effects of a guideline-based decision support system for managing patients with ischemic heart disease and chronic heart failure.
The Indiana University Institutional Review Board approved this study, which was performed in IU Medical Group-Primary Care (IUMG-PC),25 an academic primary care group practice affiliated with an inner-city public teaching hospital. At the time of this study, IUMG-PC's faculty general internists, internal medicine residents, and 1 nurse practitioner provided primary care to 13,000 adults during 50,000 visits annually in 4 identical, adjacent hospital-based practices, each with separate physician, nursing, and clerical staffs. Each practice met during 8 half-day sessions per week attended by 2 or 3 faculty and 3 to 5 residents. Faculty physicians practiced 1 to 5 half-days per week, whereas fellows practiced 1 to 2 half-days per week; residents attended the practice 1 half-day per week. Attendance by the nurse practitioner was variable. Each academic year new physicians were randomly assigned to the practice sessions and patient panels of departing physicians. All physicians had primary responsibility for their panel of patients, including residents, who briefly presented each visit to 1 of the faculty. New patients were given the next available appointment and were permanently assigned to both their primary care physician and half-day practice session. Prior studies have shown no systematic differences between the 4 practices and their half-day sessions in patients' characteristics or physicians' decisions.21,26

Patients with heart failure were eligible if they had objective evidence of left ventricular dysfunction on an echocardiogram (either the cardiologist's impression of left ventricular systolic dysfunction or a fractional shortening of less than 25%) or a cardiac scintigram report (ejection fraction less than 30%). A diagnosis of heart failure on an inpatient or outpatient problem list was neither necessary nor sufficient.
Patients with ischemic heart disease were eligible with 1 of the following: 1) inpatient, outpatient, or emergency department diagnosis of coronary artery disease, angina, or myocardial infarction; 2) definitive diagnostic test (e.g., an echocardiogram or scintigram showing segmental abnormalities in the left ventricular wall, an electrocardiogram demonstrating significant Q-waves, or results of cardiac enzyme studies indicating acute myocardial injury); or 3) more than 2 prescriptions for long-acting nitrates.
This trial used a 2 × 2 factorial design. To avoid contaminating physicians practicing in the same session, before enrolling patients we randomly assigned half of the IUMG-PC's 32 practice sessions to receive the physician intervention (described below). We used the random number generator in SAS (SAS Institute, Cary, NC) to assign faculty physicians attending more than 1 half-day session per week to intervention or control status. Then we randomized all remaining half-day practice sessions to intervention or control status. As a result, each practice had an equal number of intervention and control sessions, and all of each physician's practice sessions had the same study status.
We block randomized the 11 full-time and 9 part-time outpatient pharmacists to intervention and control status so that half of the patients in physician intervention sessions and half in the physician control sessions were randomly assigned to receive all outpatient prescriptions from intervention pharmacists (as described below). Repeated internal audits have shown that 95% of the patients receiving primary care through IUMG-PC's 4 hospital-based practices fill more than 95% of their prescriptions at Wishard's outpatient pharmacy. The study thus had 4 groups of patients: physician intervention only, pharmacist intervention only, both physician and pharmacist intervention, and no intervention (pure control patients).
The electronic medical record system generated weekly lists of scheduled appointments for patients meeting the eligibility criteria. Patients keeping their appointments were approached by a research assistant in the waiting room who attempted to enroll those who could communicate (i.e., hear and understand instructions) and had access to a working telephone. Those signing informed consent statements were enrolled. Trained interviewers subsequently administered a baseline questionnaire by telephone.
We used evidence-based guidelines published by the Agency for Health Care Policy and Research (AHCPR)27 and national professional organizations to develop our cardiac care rules. We28 and others29 have stressed that effective guidelines must be both evidence based and translated into local accepted standards of practice. We therefore presented the initial treatment rules30 to a panel of local general internists and cardiologists who read each rule and graded it as acceptable as written, acceptable after making minor modifications, or unacceptable (needing major modifications). The panel accepted all of the minor modifications suggested and adjudicated each “unacceptable” rule. The final rules were programmed into a locally developed decision support system31 and the computer workstations that all physicians in the study practices must use to write all outpatient orders.24,32,33
The care suggestions for chronic heart failure fell into 5 major categories: 1) reducing afterload (mainly using angiotensin-converting enzyme [ACE] inhibitors at maximum tolerated doses); 2) maintaining diuresis (with thiazide or loop diuretics, depending on serum creatinine) for patients with prior peripheral or pulmonary edema or with more than 5 pounds of weight gain since study enrollment; 3) adding digoxin for patients with severe heart failure (New York Heart Association [NYHA]34 class III or IV, recent hospitalization for heart failure, or a left ventricular ejection fraction, or its equivalent, of <30%); 4) treating comorbid conditions (e.g., hypertension, hypercholesterolemia); and 5) encouraging regular exercise, smoking cessation, and weight reduction. At the time that this study was being designed (1994), β-blockers were not being routinely recommended for treating chronic heart failure.
The care suggestions for ischemic heart disease included: 1) routinely prescribing β-blockers for patients without reactive airways disease, peripheral vascular disease, or left ventricular systolic dysfunction; 2) routinely prescribing low-dose aspirin (or ticlopidine for those allergic to aspirin); 3) routinely prescribing long-acting nitrates; 4) treating comorbid conditions; and 5) encouraging regular exercise, smoking cessation, and weight loss.
The data to identify subjects and for the cardiac care suggestions came from the Regenstrief Medical Record System (RMRS), a comprehensive electronic medical record that includes demographics, visit information, diagnoses, drugs, diagnostic test results (both numeric and coded text of imaging study reports), and vital signs.24 For algorithms requiring symptom information, the workstation software required physicians to enter the current blood pressure, symptoms, and NYHA functional class at the start of each workstation order-writing session for enrolled patients.
We sought to determine the direct effects of the computerized cardiac care suggestions, not their educational effects. Therefore, 2 of the authors (WMT and JMO) presented Medicine Grand Rounds on the role of evidence-based practice guidelines in primary cardiac care and the role of computer-based decision support systems in quality improvement.35 Another investigator (MDM) presented a similar program to pharmacists. We also met individually with each primary care physician and pharmacist, described the study, and provided a printed and fully referenced summary of the locally approved guidelines for managing chronic heart failure and ischemic heart disease. We described the local expert panel and the process of creating the guidelines and presented them as the accepted standards of care in IUMG-PC.
For each outpatient visit, the physician received the patient's paper chart and an encounter form printed for that visit.24 After seeing the patient, the physicians wrote all orders for drugs, tests, nursing activities, consultations, etc. on their workstations.24,32,33 During the study period, physicians received a variety of patient-specific feedback about various clinical issues. Using data from each patient's electronic medical record and data entered by the physician (vital signs, symptoms, and NYHA class), the workstation generated guideline-based cardiac care suggestions for all enrolled patients. For patients in the physician control group, these suggestions were withheld. For patients in the physician intervention group, the cardiac care suggestions were printed at the end of the medication list on the encounter form and displayed as “Suggested Orders” on physicians' workstations (Fig. 1). This screen displayed the suggested order, possible actions for each order (i.e., order or omit), and a brief explanation. Physicians could view the guidelines and references via the “help” key. They could avoid all suggestions made for that patient that day by hitting the “escape” key.
During the study, an enrolled patient could present to the IUMG-PC practice for unscheduled care when his or her physician was not available. When this occurred, the system displayed indicated cardiac care suggestions only when both the patient and the physician caring for the patient that day were in the intervention group. Also, throughout the study, longstanding computer-generated preventive care reminders24,35 were presented to all study physicians when indicated.
When any patient presented a new or refill prescription to the outpatient pharmacy, a pharmacy technician entered data into the RMRS pharmacy module. The prescription was then checked by a pharmacist, and the RMRS printed the bottle label. Counseling occurred as needed. For this study, we created the Pharmacist Intervention Recording System (PIRS), in which pharmacists recorded their counseling.36 For all enrolled patients (regardless of study group), cardiac care suggestions generated by the above rules were stored in PIRS which, for those randomized to the pharmacist intervention, printed a note (rather than bottle labels) instructing the pharmacist to view the care suggestions in PIRS. The pharmacist had 3 options: fill the prescription(s) as usual, discuss the suggestion(s) with the patient and encourage discussions with his or her primary care physician, or contact the ordering physician by telephone or e-mail. Using e-mail, the pharmacist could type a brief message to the physician which PIRS would deliver (along with the cardiac care suggestion) to the workstation network. The message would be displayed to the physician the next time he or she logged onto any outpatient workstation.
The primary outcome variables in this study were adherence to the care suggestions, health-related quality of life, and exacerbations of heart disease. Secondary patient outcomes included patient satisfaction with their physician and pharmacist, medication compliance, and direct health care costs. A telephone interviewer administered the quality of life instruments at baseline and 12 months after enrollment. To avoid interviewer bias, all interviewers and research assistants interacted with patients in all study groups and were blinded to study group throughout the study. We assessed generic quality of life using the Short Form 36 (SF-36),37 which had been locally validated.38,39 We used the McMaster Chronic Heart Failure Questionnaire (CHQ)40 modified to include patients with both heart failure and ischemic heart disease41 to assess cardiac-specific quality of life. We assessed satisfaction with physicians with the questionnaire developed by the American Board of Internal Medicine (ABIM),42 which had also been locally validated.43 Satisfaction with pharmacists was assessed using a questionnaire we created and validated.44 Medication compliance was assessed with instruments developed by Inui45 and Morisky46 and by calculating the medication possession ratio,47 also locally validated,48 which uses computer-stored refill information. From patients' RMRS records, we extracted evidence of all-cause and condition-specific emergency department visits and hospitalizations and death dates. We extracted RMRS and workstation data to assess physician and pharmacist compliance with the cardiac care suggestions.
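The medication possession ratio mentioned above is a coverage measure derived from computer-stored refill information. The following minimal Python sketch shows one common formulation; the function name, data layout, and capping at 1.0 are our illustrative assumptions, not the exact algorithm of the cited validation studies:

```python
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """Fraction of the observation period covered by dispensed supply.

    `fills` is a list of (fill_date, days_supply) tuples from refill
    records. The ratio is capped at 1.0 so early refills do not yield
    "compliance" above 100%. Illustrative sketch only.
    """
    period_days = (period_end - period_start).days
    supplied = sum(days for fill_date, days in fills
                   if period_start <= fill_date <= period_end)
    return min(supplied / period_days, 1.0)

# Hypothetical example: three 30-day fills over a 120-day window
fills = [(date(1995, 1, 1), 30), (date(1995, 2, 5), 30), (date(1995, 3, 10), 30)]
ratio = medication_possession_ratio(fills, date(1995, 1, 1), date(1995, 5, 1))
print(round(ratio, 2))  # 90 days supplied / 120 days -> 0.75
```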
We assessed physicians' attitudes toward clinical practice guidelines with a questionnaire modified from one developed by the American College of Physicians.49 We administered this questionnaire following the guideline educational sessions but before the study started.
The unit of analysis for all outcomes was the patient. Because we intervened on the physician and pharmacist, significant differences between study groups were assessed using random effects generalized linear models (for continuous variables) and generalized estimating equations (for categorical variables) to account for correlations of results among patients treated by specific physicians. We did not control for the intervention pharmacist because there was no one-to-one assignment of pharmacists to patients. Before the study began, we performed power analyses indicating that we needed 500 patients to have 80% power to detect a change of 1 standard error of measurement (SEM) in each subscale score of the CHQ.50,51 We anticipated attrition of 25% of the subjects during the study, so we sought to enroll 700 patients with heart failure and/or ischemic heart disease. Prior studies in this practice have shown a substantial overlap between these 2 conditions.30
We considered a patient to be eligible for an evidence-based suggestion if the suggestion was generated any time between enrollment and closeout. Regardless of the number of times the patient visited the IUMG-PC or the outpatient pharmacy during the study, each cardiac care suggestion was counted only once. We did not include counseling about smoking cessation, exercise, or weight loss, which we could not measure. We defined care as adhering to the guideline if the patient received the suggested treatment at any time between the day the suggestion was first generated and 30 days following the patient's study closeout date. We created a compliance score for each patient by dividing the number of care suggestions complied with by the total number of suggestions. We used analysis of variance to compare patients' compliance rates among the 4 study groups and Student's t tests to assess the separate effects of the physician and pharmacist interventions.
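The per-patient compliance score described above reduces to a simple proportion. A minimal sketch, with hypothetical function and variable names and an invented suggestion set:

```python
def adherence_score(suggestions):
    """Per-patient score: suggestions complied with / total suggestions.

    `suggestions` maps each distinct care suggestion to True if the
    suggested treatment was received between first generation and
    30 days after study closeout; each suggestion counts once, per
    the study's rule. Illustrative sketch, not the study's code.
    """
    if not suggestions:
        return None  # patient eligible for no suggestions
    return sum(suggestions.values()) / len(suggestions)

# Hypothetical patient eligible for 4 distinct suggestions, 1 followed
patient = {"ACE inhibitor": True, "loop diuretic": False,
           "digoxin": False, "aspirin": False}
print(adherence_score(patient))  # 0.25
```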
We compared the numbers of clinic visits and adverse events such as hospitalizations between study groups using Poisson regression. We created overall and subscale scores for the SF-36,37 CHQ,40,41 and ABIM instruments43 and then used analysis of covariance to compare these scores between study groups, using baseline scores and indicators for ceiling and floor effects as covariates. In addition, we classified patients as improved or worse on the quality of life measures if their scores changed by more than 1 SEM50,51 which we compared between study groups using logistic regression.
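The ±1 SEM classification of quality-of-life change can be sketched as follows; the SEM value and scores here are invented for illustration:

```python
def classify_change(baseline, closeout, sem):
    """Label a quality-of-life score change relative to 1 SEM.

    Patients are "improved" or "worse" only if the baseline-to-closeout
    change exceeds 1 standard error of measurement in magnitude;
    smaller changes are treated as measurement noise.
    """
    delta = closeout - baseline
    if delta > sem:
        return "improved"
    if delta < -sem:
        return "worse"
    return "unchanged"

sem = 5.0  # hypothetical SEM for one subscale
print(classify_change(baseline=60, closeout=68, sem=sem))  # improved
print(classify_change(baseline=60, closeout=57, sem=sem))  # unchanged
```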
We estimated true costs by multiplying charges by their cost center's cost-to-charge ratio.52,53 Costs and the medication possession ratio were analyzed using Wald-type tests for log-normal data containing zeros.54 Comparisons for the Inui medication compliance measure were made using logistic regression, while the Morisky measure was analyzed using analysis of covariance models with baseline scores and indicators for ceiling and floor effects as covariates. We calculated a score for attitudes toward guidelines by summing the 7 items and then compared this score with physicians' guideline compliance rates using Pearson's correlations.
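The cost estimate is a simple ratio adjustment. A minimal sketch, assuming the cost-to-charge ratio is expressed as estimated cost per dollar of charges (dividing by the inverse, charge-to-cost, ratio is equivalent); the numbers are hypothetical:

```python
def estimated_cost(charges, cost_to_charge):
    """Convert billed charges to estimated true cost.

    Assumes `cost_to_charge` is the cost center's ratio of cost to
    charges (e.g., 0.6 means each $1.00 billed cost about $0.60 to
    provide). Hypothetical illustration of the conversion only.
    """
    return charges * cost_to_charge

# Hypothetical visit: $250 billed in a cost center with a 0.6 ratio
print(estimated_cost(250.0, 0.6))  # 150.0
```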
Between January 1, 1994 and May 1, 1996, 1,516 potentially eligible patients kept scheduled appointments to the IUMG-PC (see Fig. 2). Of these, 8% were missed by the research assistants, 21% refused to participate, and 16% were ineligible for reasons shown in Figure 2. Thus, 870 patients (78% of those eligible) gave informed consent and had baseline telephone interviews scheduled. Of these, 10% could not be reached by telephone and 9% refused to participate when contacted by telephone. Compared with the 706 (81% of the enrolled patients) completing baseline interviews, the 325 patients who refused to participate were slightly older (63 vs 58 years, P < .001), but there were no differences in sex, race, or number of prior primary care visits.
At the time of enrollment, these 706 patients received primary care from 1 nurse practitioner and 94 physicians (a mean of 3.7 ± 2.6 patients per provider, median 3, range 1–14), including 33 faculty general internists, 10 fellows (who practice as faculty), and 51 residents. The randomization process described above resulted in 197 receiving the physician intervention, 158 receiving the pharmacist intervention, 170 receiving both interventions, and 191 with no intervention. As shown in Table 1, the mean age of enrolled patients was 61 years, two-thirds were women, and slightly more than half were African American. The mean number of primary care visits (scheduled and unscheduled) during the study was just under 5 and did not differ between study groups. There were also no intergroup differences in age, gender, race, or the number of patients completing 12-month interviews.
There were 2,609 separate guideline-based cardiac care suggestions generated for 628 patients (89% of those enrolled) keeping scheduled IUMG-PC visits, a mean of 4.1 ± 2.6 (SD) per patient (median = 4, range = 1 to 22). As shown in Table 2, there were no intergroup differences in the number of patients eligible for suggestions or the number of different suggestions per patient.
Neither the physician nor the pharmacist intervention had any significant effect on whether patients' cardiac care was compliant with the suggestions (P > .8 across the 4 intervention groups by analysis of variance, with P > .7 and P > .4 when testing the physician and pharmacist interventions separately). While we examined data on level of training, this variable did not enter into the model. Table 2 shows the most common categories of cardiac care suggestions; there were no significant differences between study groups. To assess the validity of the cardiac care suggestions, 1 investigator (JMO) reviewed the paper and electronic records of a 10% random sample (N = 72) of all enrolled patients. Four paper charts could not be located. For the remaining 68 patients, all paper and electronic notes and patient data for all IUMG-PC encounters during the study were searched for evidence that each cardiac care suggestion was invalid or the physician disagreed with it. There was no evidence in any chart that the care suggestions were inappropriate (all met the guidelines' indications), and there was no note in any chart that the physician disagreed with a suggestion.
Table 3 shows the results of the quality of life outcomes for the 480 patients (68%) with closeout interviews. Controlling for baseline scores, there were no differences between groups in any of the SF-36 subscales or the 4 subscales of the CHQ. Similar results occurred when we analyzed SF-36 and CHQ scores based on whether there was a change of ±1 SEM.
As shown in Table 4, there were no clinically or statistically significant differences between groups in the total number of cardiac-specific emergency department visits or hospitalizations. Table 4 also shows that there were no intergroup differences in outpatient, inpatient, or total health care costs. Only 2% of enrolled patients died during the study, and there were no intergroup differences (P > .9). In addition, patient medication compliance as measured by the Inui measure45 (mean compliance = 87%) and the Morisky instrument46 (mean score = 0.79) was not different between study groups (P > .69). Nor were there intergroup differences in the medication possession ratio (P > .10).44,48 Controlling for satisfaction at baseline, there were no differences between patient groups in satisfaction with the physicians (P > .5) or pharmacists (P > .4).
Ninety-four physicians cared for the 706 study patients at the time of enrollment, and an additional 107 also provided primary care during the study, which overlapped 3 academic years. Of these 201 physicians, 151 (75%) completed prestudy guideline attitude surveys (Table 5). Although they generally felt that the guidelines were a good educational tool and a convenient source of information intended to improve the quality of care, many physicians felt that guidelines were oversimplified “cookbook” medicine, too rigid to apply to individual patients, hampered physician autonomy, and were intended to decrease health care costs. However, there was no correlation between physicians' guideline attitude score and compliance with the study's cardiac care suggestions (P > .20). Eighty-nine (58%) physicians were very or somewhat familiar with the Agency for Healthcare Research and Quality's guidelines for the management of patients with left ventricular systolic dysfunction, with the remainder having never heard of them or only knowing the name. When asked about specific recommendations from the guidelines, 75% agreed or strongly agreed with treating patients with left ventricular systolic dysfunction who were asymptomatic on a diuretic with an angiotensin-converting enzyme inhibitor, and 60% agreed or strongly agreed that negative inotropic medications should be avoided in these patients.
In 1999, the Institute of Medicine published a report critical of the unacceptable rate of medical errors in American medicine.19 In that report, computer information systems were proposed as major tools for reducing errors. In 2001, the Institute of Medicine published its first follow-up report on medical errors.1 This report states, “Research on the quality of care reveals a picture of a system that frequently falls short in its ability to translate clinical knowledge and technology into practice.” We rigorously tested the effect of an intensive, computer-based, patient-specific intervention targeting physicians and pharmacists to reduce errors of omission among patients with heart disease. Although opportunities to improve care were common, the intervention had no measurable effect on either adherence to the evidence-based guidelines or any clinical or subjective patient outcome. This disappointing result occurred in an academic primary care practice where prior computer-based decision support interventions have repeatedly improved physicians' compliance with outpatient preventive care,21,55,56 inpatient preventive care,57 and inpatient drug-monitoring guidelines.58 Given the lack of physicians' enthusiasm for computer-based chronic care suggestions, we must conclude that they rebelled at the notion of the computer telling them how to manage their patients. In another study of a similarly unsuccessful computer-based guideline intervention, the median number of times physicians accessed the decision support system was zero, despite substantial work to make it user friendly.59
It is possible that physicians and pharmacists found the intervention intrusive and time consuming. Both were required to look at the intervention messages, but pressing the “escape” key erased the messages from the computer screen. A similar previous inpatient preventive care intervention also had no effects on patient care,60 but simply disabling the “escape” key made the same intervention remarkably effective.57 In both interventions, the workstations delivered similar suggestions; they could not force the physicians to read or act upon them. However, by forcing some action (either acting upon them or explaining why not), the inpatient intervention became quite effective. This is similar to prior findings in our outpatient, paper-based preventive care reminders56 and suggests that the intervention in the current study would have been more powerful if it required physicians to either comply with each suggestion or document their reasons for declining.
Physicians received feedback in the same fashion based on other guidelines during this study. We estimate that they received feedback from this study once during each half-day they practiced, so they were familiar with the way the feedback was delivered but were not “overwhelmed” by the frequency of reminders. In addition, the large number of physicians at multiple levels of training ensured that the study included a broad array of physician characteristics and practice styles.
An alternative to forcing physicians to respond is rewarding them for complying with practice guidelines. However, studies of such rewards have had mixed results. Martin et al.61 and Hillman et al.62 found no effect of financial incentives, while Kouides et al. showed that a financial incentive significantly (if modestly) increased influenza vaccination rates.63 Since the advent of managed care, in which physician performance is linked to financial rewards and/or punishments, such incentives should be reassessed.
The pharmacist intervention also had no effects on patients' care or their outcomes. However, pharmacists could not write orders for patients; they could only make suggestions to the primary care physicians either directly or through their patients. Even though we encouraged the pharmacists to make care suggestions directly to the physicians and provided a simple e-mail system for doing so, these interactions rarely took place. If the intervention increased pharmaceutical care discussions between pharmacists and patients (which we could not measure), it had no effect on patients' satisfaction with their pharmacists.
Finally, basic provider attitudes toward guidelines in general may have to change before any salutary effects on care can be realized. One untested approach would be to allow physicians to identify the guidelines with which they would like to comply and to design both the thresholds for intervening and the manner in which they receive care suggestions. In such instances, the intervention messages might be seen by physicians as self-reminders and hence augment adherence.
Despite the negative results of this intensive, interactive intervention, we do not suggest that evidence-based guidelines and computer-based decision support cannot or should not be pursued as tools for quality improvement. The recent emphasis on reducing medical errors19 mostly targets errors of commission (i.e., doing the wrong thing), whereas errors of omission (i.e., not doing the right thing) may be substantially more prevalent.2–6 Focusing interventions on errors of omission may thus provide substantially larger opportunities to improve patients' care and outcomes. However, expensive and sometimes intrusive interventions should be carefully studied before being implemented in busy practices.
The opinions expressed herein do not necessarily reflect those of the funding agency or the authors' institutions. The authors gratefully acknowledge the physicians and nurse managers of IU Medical Group-Primary Care, without whose support our studies would be impossible. We are also grateful to Marina Weisburd for programming support and Rita Willis for project management.
Grant Support: 1R01-HS07763 from the Agency for Healthcare Research and Quality.