CMAJ. 2010 March 23; 182(5): E216–E225.
PMCID: PMC2842864

Effect of point-of-care computer reminders on physician behaviour: a systematic review



Background

The opportunity to improve care using computer reminders is one of the main incentives for implementing sophisticated clinical information systems. We conducted a systematic review to quantify the expected magnitude of improvements in processes of care from computer reminders delivered to clinicians during their routine activities.


Methods

We searched the MEDLINE, Embase and CINAHL databases (to July 2008) and scanned the bibliographies of retrieved articles. We included studies in our review if they used a randomized or quasi-randomized design to evaluate improvements in processes or outcomes of care from computer reminders delivered to physicians during routine electronic ordering or charting activities.


Results

Among the 28 trials (reporting 32 comparisons) included in our study, we found that computer reminders improved adherence to processes of care by a median of 4.2% (interquartile range [IQR] 0.8%–18.8%). Using the best outcome from each study, we found that the median improvement was 5.6% (IQR 2.0%–19.2%). A minority of studies reported larger effects; however, no study characteristic or reminder feature significantly predicted the magnitude of effect except in one institution, where a well-developed, “homegrown” clinical information system achieved larger improvements than in all other studies (median 16.8% [IQR 8.7%–26.0%] v. 3.0% [IQR 0.5%–11.5%]; p = 0.04). A trend toward larger improvements was seen for reminders that required users to enter a response (median 12.9% [IQR 2.7%–22.8%] v. 2.7% [IQR 0.6%–5.6%]; p = 0.09).


Interpretation

Computer reminders produced much smaller improvements than those generally expected from the implementation of computerized order entry and electronic medical record systems. Further research is required to identify features of reminder systems consistently associated with clinically worthwhile improvements.

Computerized systems for entering orders and electronic medical records represent two of the most widely recommended improvements in health care.1 These systems offer the opportunity to improve practice by delivering reminders to clinicians at the point of care. Such reminders range from simple prescribing alerts to more sophisticated support for decision-making.

Previous reviews have classified all computer reminders together, including computer-generated paper reminders and email alerts sent to providers, along with reminders generated at the point of care.2–5 They have also typically reported the proportion of studies with results that were on balance “positive.”2–4 We conducted a systematic review to quantify the expected magnitude of improvements in processes of care from computer reminders delivered to physicians during their routine electronic ordering or charting activities.


Methods

Data sources

We searched the MEDLINE database (1950 to July 2008) using relevant Medical Subject Headings and combinations of text words such as “computer” or “electronic” with terms such as “reminder,” “prompt,” “alert” and “support.” A methodologic filter identified all potential clinical trials. We similarly searched the Embase and CINAHL databases (both to July 2008). We also retrieved all articles that mentioned computers, reminder systems or decision support from the Cochrane Effective Practice and Organisation of Care registry, which covers multiple bibliographic databases. Finally, we scanned reference lists of all included studies and review articles. For non-English-language articles, we screened English translations of titles and abstracts, pursuing a full-text translation as needed to determine inclusion or exclusion of the study.

Study selection

Eligible studies evaluated the effects of computer reminders on processes or outcomes of care using a randomized or quasi-randomized controlled design (allocation on the basis of an arbitrary but not truly random process, such as even or odd patient identification numbers). We required that clinicians encounter the reminder during routine performance of the activities of interest, such as prescribing medications or documenting clinical information. Reminders that required clinicians to deviate from their usual activities (e.g., to use a special program without any prompt from the main clinical information system) were excluded because relying on users to remember to call up such resources undermined the core notion of a reminder.


We focused primarily on improvements in processes of care rather than on clinical outcomes, because we wished to determine the degree to which computer reminders achieved their main goal, namely changing provider behaviour. The degree to which such changes ultimately improve patient outcomes will vary depending on the strength of the relation between targeted processes and clinical outcomes. Consequently, if computer reminders do not improve patient outcomes, this may reflect inadequate connections between the targeted processes and outcomes of care rather than a failure to change physician behaviour. Nonetheless, we did capture clinical outcomes, including intermediate outcomes such as control of blood pressure. We excluded outcomes primarily related to resource use, such as length of hospital stay.

We standardized all outcomes so that increases always corresponded to improvements in care. For instance, if a study reported the proportion of patients who received inappropriate medications, we would record the complementary proportion of patients who received appropriate care.
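This standardization step can be sketched as follows (the function name and example values are illustrative, not taken from the review):

```python
def orient_outcome(proportion, higher_is_better):
    """Standardize an outcome so that larger values always indicate better care.

    For an undesirable outcome (e.g., the proportion of patients receiving
    an inappropriate medication), record the complementary proportion.
    """
    return proportion if higher_is_better else 1.0 - proportion

# If 18% of patients received inappropriate medications, we record that
# 82% received appropriate care.
appropriate = orient_outcome(0.18, higher_is_better=False)
```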

Data extraction

For any given article, two of three investigators (K.S., A.J. or A.M.) independently screened the citation for inclusion. They abstracted the following data from included articles: clinical setting, number of participants, methodologic details, characteristics of the computer reminder, the presence of cointerventions, and the results for eligible outcomes. Discrepancies between the two reviewers were resolved by discussion, involving the third reviewer if necessary to achieve consensus.

Statistical analysis

We anticipated that many studies would assign intervention status at the provider level but would not account for “cluster effects” when analyzing patient-level data.6,7 Correcting for clustering effects can sometimes be achieved by estimating the intraclass correlation coefficients, especially if the primary studies all report the same outcome and at least a minority provide relevant data upon which to base imputations.8 In this case, however, few studies contained the necessary data, and studies tended to report multiple outcomes, which required an additional assumption that correlations within clusters do not vary across different outcomes.
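The clustering problem described above is conventionally quantified with the design effect, DEFF = 1 + (m − 1) × ICC, where m is the mean cluster size and ICC the intraclass correlation coefficient. The sketch below, using made-up numbers rather than data from the review, shows how ignoring clustering overstates the effective sample size:

```python
def design_effect(mean_cluster_size, icc):
    """Variance inflation factor for cluster-randomized data:
    DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (mean_cluster_size - 1) * icc

def effective_sample_size(total_n, mean_cluster_size, icc):
    """Patient-level sample size deflated by the design effect."""
    return total_n / design_effect(mean_cluster_size, icc)

# 1000 patients in clusters of 20 with a modest ICC of 0.05 behave,
# statistically, like roughly 513 independent patients.
n_eff = effective_sample_size(1000, 20, 0.05)
```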

To preserve the goal of quantifying the effects of computer reminders without resorting to numerous assumptions and conveying a misleading degree of precision, we focused on the median and interquartile range (IQR) for improvements reported by eligible studies. This method, first used in a large review of strategies for implementing guidelines,9 has since been applied in Cochrane reviews of interventions to improve practice10–14 and other systematic reviews of quality improvement interventions.15–18

Quantifying the median improvement involves two distinct uses of “median.” First, to handle multiple outcomes within individual studies, we identified the median improvement across each study’s eligible outcomes. If a study reported 10 adherence-related outcomes, we calculated the median absolute difference in adherence between the intervention and control groups. With each study represented by its median outcome, we then calculated the median effect and IQR across all included studies. For the purposes of sensitivity analyses, we repeated this calculation using the best outcome from each study.
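The two-step use of the median can be sketched as follows (the study labels and improvement values are hypothetical, chosen only to illustrate the calculation):

```python
from statistics import median

# Hypothetical absolute improvements (percentage points) for each
# eligible outcome within each study.
outcomes_by_study = {
    "study_A": [2.0, 4.0, 10.0],
    "study_B": [0.5, 1.0],
    "study_C": [18.0, 20.0, 25.0, 30.0],
}

# Step 1: represent each study by the median of its own outcomes.
per_study = [median(vals) for vals in outcomes_by_study.values()]  # [4.0, 0.75, 22.5]

# Step 2: the summary effect is the median across studies.
overall = median(per_study)  # 4.0

# Sensitivity analysis: repeat using each study's best outcome instead.
best_per_study = [max(vals) for vals in outcomes_by_study.values()]
overall_best = median(best_per_study)  # 10.0
```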

The median and IQR convey the magnitudes of improvement achieved in the majority of studies. This method avoids skewing by a few outlying studies with highly positive results and 95% confidence intervals inappropriately narrowed by ignoring important clustering effects. It also permits nonparametric analyses of potential associations between study features and effect size in order to examine subgroups of studies with larger or smaller magnitudes of effect. For instance, we looked for associations between magnitude of effect and study size, markers of methodologic quality, features of the study context (e.g., ambulatory v. inpatient) and characteristics of the reminders (e.g., requiring users to enter a response before continuing with their work). We performed all such comparisons using a nonparametric Mann–Whitney rank-sum test.
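In practice such comparisons would be run with a library routine (e.g., scipy.stats.mannwhitneyu, which also supplies the p value); the dependency-free sketch below, with invented subgroup data, just computes the underlying U statistic by counting how often an effect in one group exceeds one in the other, with ties counted as one half:

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for the Mann-Whitney rank-sum test: the number of
    (a, b) pairs with a > b, counting ties as one half."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical median improvements (%) for reminders that required a
# user response v. those that did not.
response_required = [12.9, 22.8, 2.7, 18.0]
no_response = [2.7, 0.6, 5.6, 3.0]
u = mann_whitney_u(response_required, no_response)
```

By construction the two U statistics for a pair of groups sum to the number of pairs (n1 × n2), which is a useful sanity check.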


Results

Of 2036 citations identified, we excluded 1662 at the initial stage of screening and reviewed the remaining 374 full-text articles. A total of 28 articles (reporting 32 comparisons) met all of our inclusion criteria (Figure 1).19–46 The full review has recently been published in The Cochrane Library.47

Figure 1
Results of literature search. *Excluded topics included expert systems (e.g., artificial intelligence or neural network applications) for facilitating diagnosis or for estimating prognosis; decision support not directly related to patient care (e.g., ...

Of the 32 comparisons, 19 were conducted in the United States and 8 occurred in inpatient settings (Table 1, located at the end of the article). Only six comparisons involved a quasi-randomized design, typically allocating intervention status on the basis of even or odd provider identification numbers. Twenty-six comparisons allocated intervention status to providers or provider groups (cluster trials); 12 of these comparisons accounted for clustering effects in the analysis. Seventeen trials reported a power calculation that included a target effect size. Twelve trials reported a target improvement in adherence to processes of care; 10 of these trials specified an absolute increase of at least 10% (Table 1).

Table 1
Description of 28 studies (32 comparisons) included in a systematic review of the effects of point-of-care computer reminders on physician behaviour

Figure 2 displays the median improvements in adherence to processes of care for each included study (for details about the results from each study, see Appendix 1). Pooling data across studies (Table 2), we found that the median improvement in adherence associated with computer reminders was 4.2% (IQR 0.8%–18.8%). Prescribing behaviours improved by a median of 3.3% (IQR 0.5%–10.6% [21 trials]), adherence to target vaccinations by 3.8% (IQR 0.5%–6.6% [6 trials]) and test-ordering behaviours by 3.8% (IQR 0.4%–16.3% [13 trials]). Table 2 also shows the results obtained when we used the best outcome from each study instead of the median improvement.

Figure 2
Median absolute improvements in adherence to processes of care between intervention and control groups in each study. Each study is represented by the median and interquartile range for its reported outcomes; studies with single data points reported only ...
Table 2
Improvements in adherence to processes of care across the 28 studies (32 comparisons) included in the review

Across eight comparisons that reported dichotomous clinical outcomes (e.g., achievement of target treatment goals), patients in the intervention groups experienced a median absolute improvement of 2.5% (IQR 1.3%–4.2%). For blood pressure control, the single most commonly reported outcome, patients in the intervention groups experienced a median reduction in systolic blood pressure of 1.0 mm Hg (IQR 2.3 mm Hg reduction to 2.0 mm Hg increase) and a median reduction in diastolic blood pressure of 0.2 mm Hg (IQR 0.8 mm Hg reduction to 1.0 mm Hg increase).

Study features and effect size

We found no significant correlation between effect size and the following study features: publication year, country (United States v. other), study design (randomized v. quasi-randomized) or sample size (whether calculated on the basis of patients or providers) (Figure 3). We considered that studies with high adherence rates in control groups (a marker for baseline adherence) might achieve smaller improvements in care, because they had smaller opportunities for improvement. Surprisingly, studies with control-group adherence rates that were higher than the median across all studies showed larger effect sizes (Figure 3). When we analyzed the potential impact of baseline adherence in various other ways (e.g., focusing on the highest and lowest quartiles of baseline adherence), we found no evidence that small improvements reflected high baseline quality of care.

Figure 3
Median effects for adherence to processes of care by study feature. *Kruskall–Wallis test; all other p values reflect Mann–Whitney test. †Quasi-RCT refers to randomized controlled trials in which intervention status was assigned ...

We observed a trend toward larger improvements with inpatient interventions than with outpatient interventions (median 8.7% [IQR 2.7%–22.7%] v. 3.0% [IQR 0.6%–11.5%]; p = 0.34). All inpatient interventions occurred at two institutions that had well-developed, “homegrown” computerized systems for order entry by providers. Moreover, the recipients of computer reminders from these institutions consisted primarily of physician trainees.

Our grouping of studies on the basis of track records in clinical informatics did not result in significant differences, except that the studies from Brigham and Women’s Hospital in Boston, USA, reported a median improvement of 16.8% (IQR 8.7%–26.0%),26,31,37,40,46 compared with 3.0% (IQR 0.5%–11.5%) for studies from the other institutions (p = 0.04).

Features of computer reminders and effect size

We analyzed a number of reminder characteristics to look for associations with effect size (Figure 4). Only the requirement for providers to enter a response to the reminder showed a trend toward larger improvements (median 12.9% [IQR 2.7%–22.7%] v. 2.7% [IQR 0.6%–5.6%] for no response required; p = 0.09). No trends toward larger effect sizes existed based on the type of targeted problem (underuse v. overuse of a targeted process of care), inclusion of patient-specific information, provision of an explanation for the alert, inclusion of a specific recommendation with the alert, development of the reminder by the study authors, or the type of system used to deliver the reminder (CPOE [computerized provider order entry] v. electronic medical records).

Figure 4
Median effects for adherence to processes of care by reminder feature. *Underuse = targeting improvements to increase the percentage of patients who receive targeted process of care (e.g., increasing the percentage of patients receiving the influenza ...

Reminders that were “pushed” onto users (i.e., users automatically received the reminder) did not achieve larger effects than reminders that required users to perform some action to receive them (i.e., users had to “pull” the reminders); only 4 of the 32 comparisons involved “pull” reminders. A three-armed cluster randomized controlled trial of reminders for screening and treatment of hyperlipidemia45 directly compared these two modes of delivering reminders. Patients cared for at practices randomly assigned to deliver automatic alerts were more likely to undergo testing for hyperlipidemia and receive treatment than were patients at clinics where reminders were delivered to clinicians only “on demand.”

Sensitivity analyses

We re-analyzed the potential predictors of effect size (study features and characteristics of reminders) using a variety of choices for the representative outcome from each study, including the outcome with the middle value (rather than a calculated median) and the best outcome (the outcome associated with the largest improvement in adherence to the process). None of these analyses substantially altered the main findings.


Interpretation

Across the 32 comparisons, computer reminders achieved small to modest improvements in care, with a median improvement of 4.2% (IQR 0.8%–18.8%). Even using the best outcome from each trial, the median improvement was only 5.6% (IQR 2.0%–19.2%). These changes fall below the thresholds for clinically significant improvements specified in most trials, and they are certainly smaller than the improvements generally expected from computerized order entry and electronic medical record systems. Interestingly, these improvements are also no larger than those observed for paper-based reminders.5,48

With the upper quartile of reported improvements beginning at an almost 20% increase in adherence to processes of care, some studies in our review clearly did show larger effects. However, we were unable to identify any study characteristic or reminder feature that predicted larger effect sizes, except for a statistically significant increase in magnitude of effect seen in studies involving a well-developed, homegrown computer order entry system at Brigham and Women’s Hospital.26,31,37,40,46 A trend toward larger effects was also seen for reminders that required users to enter a response in order to proceed; however, this finding may have been confounded by the uneven distribution of studies from Brigham and Women’s Hospital. Thus, we do not know if the success of computer reminders at this institution reflects the design of reminders requiring user responses, other features of the computer system or perhaps institutional culture.

Included studies often provided limited descriptions of key features of the reminders and the systems through which they were delivered. We attempted to overcome this problem by abstracting basic features, such as whether user responses were required and whether the reminder displayed a justification for its content. But heterogeneity within even these apparently straightforward categories could mask important differences in effect. Important differences in effect may also reflect characteristics that we found difficult to operationalize (e.g., the “complexity” of the reminder) or that were inadequately reported. This problem of limited descriptive detail of complex interventions and the resulting potential for heterogeneity among included interventions in systematic reviews has been consistently encountered in the quality-improvement literature.49,50

Conventional meta-analyses estimate mean effects and 95% confidence intervals by calculating weighted averages across study results. The individual weights derive from study precision such that larger studies contribute greater weight to the meta-analytic result. However, more than half of the studies included in our review reported spuriously high precision, and most of the studies did not report the data required to adjust for this problem. For example, of the 26 clustered trials, only 9 provided a single value for the intracluster correlation coefficient, and only 3 reported values for all outcomes. Because we could not accurately weight studies based on precision, we focused on the median and interquartile range for study effects, a method that has found increasing application in systematic reviews of interventions for quality improvement.9,13–15,17,18,51
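For contrast, the conventional fixed-effect pooling that the review deliberately avoided weights each study by its reported (and here possibly spurious) precision; a minimal sketch with made-up numbers:

```python
def inverse_variance_pooled(effects, standard_errors):
    """Fixed-effect meta-analytic estimate: each study is weighted by
    1 / SE^2, so studies reporting high precision dominate the average."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# A large study reporting a 2% improvement (SE 1%) outweighs a small
# study reporting 10% (SE 2%): the pooled estimate is 3.6%, well below
# the unweighted mean of 6%.
pooled = inverse_variance_pooled([2.0, 10.0], [1.0, 2.0])
```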

The main potential drawback of this method is that we assigned equal weight to all of the studies. However, for our results to have substantially misrepresented the true impacts of computer reminders, the minority of studies with large magnitudes of effect would also have to be the larger studies (and thus deserving of greater weight in a meta-analysis). Not only is this unlikely in general, but we also specifically showed that study size bore no relation to effect size, using various definitions of study size and effect size.


Conclusion

Computer reminders typically increased adherence to target processes of care by amounts below thresholds for clinically significant improvements. A minority of studies showed more substantial improvements, consistent with the expectations of those who advocate widespread adoption of computerized order entry and electronic medical record systems. However, until further research identifies study design and reminder features that reliably predict clinically worthwhile improvements in care, implementing these technologies will constitute an expensive exercise in trial and error.

Supplementary Material

[Online Appendix]


Funding: Kaveh Shojania and Jeremy Grimshaw received salary support from the Government of Canada Research Chairs Program. Craig Ramsay’s position in the Health Services Research Unit is funded in part by the Chief Scientist Office of the Scottish Government Health Department. Alain Mayhew receives salary support from the Canadian Institutes of Health Research. The views expressed are those of the authors and not the funding agencies.

Previously published at

See also research article by Villeneuve and colleagues

Competing interests: None declared.

Contributors: Kaveh Shojania and Jeremy Grimshaw conceived the study. All of the authors contributed to refinements of the study design and to the analysis and interpretation of the data. Kaveh Shojania drafted the initial manuscript, and all of the other authors provided critical revisions. All of the authors approved the final manuscript submitted for publication. Kaveh Shojania is the guarantor for this paper.

This article has been peer reviewed.


1. Aspden P, Wolcott JA, Bootman JL, et al. Committee on Identifying and Preventing Medication Errors. Preventing medication errors: quality chasm series. Washington (DC): The National Academies Press; 2006.
2. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293:1223–38. [PubMed]
3. Hunt DL, Haynes RB, Hanna SE, et al. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998;280:1339–46. [PubMed]
4. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765. [PMC free article] [PubMed]
5. Dexheimer JW, Talbot TR, Sanders DL, et al. Prompting clinicians about preventive care measures: a systematic review of randomized controlled trials. J Am Med Inform Assoc. 2008;15:311–20. [PMC free article] [PubMed]
6. Whiting-O’Keefe QE, Henke C, Simborg DW. Choosing the correct unit of analysis in medical care experiments. Med Care. 1984;22:1101–14. [PubMed]
7. Donner A, Donald A. Analysis of data arising from a stratified design with the cluster as unit of randomization. Stat Med. 1987;6:43–52. [PubMed]
8. Shojania KG, Ranji SR, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA. 2006;296:427–40. [PubMed]
9. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8:iii–iv. 1–72. [PubMed]
10. Doumit G, Gattellari M, Grimshaw J, et al. Local opinion leaders: effects on professional practice and health care outcomes [review] Cochrane Database Syst Rev. 2007;(1):CD000125. [PubMed]
11. Farmer AP, Legare F, Turcot L, et al. Printed educational materials: effects on professional practice and health care outcomes [review] Cochrane Database Syst Rev. 2008;(3):CD004398. [PubMed]
12. Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes [review] Cochrane Database Syst Rev. 2009;(2):CD003030. [PubMed]
13. Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: effects on professional practice and health care outcomes [review] Cochrane Database Syst Rev. 2006;(2):CD000259. [PubMed]
14. O’Brien MA, Rogers S, Jamtvedt G, et al. Educational outreach visits: effects on professional practice and health care outcomes [review] Cochrane Database Syst Rev. 2007;(4):CD000409. [PubMed]
15. Ranji SR, Steinman MA, Shojania KG, et al. Interventions to reduce unnecessary antibiotic prescribing: a systematic review and quantitative analysis. Med Care. 2008;46:847–62. [PubMed]
16. Shojania KG, McDonald KM, Wachter RM, et al., editors. Closing the quality gap: a critical analysis of quality improvement strategies Volume 1 — series overview and methodology. Rockville (MD): Agency for Healthcare Research and Quality; 2004. [(accessed 2009 Nov. 26)]. [technical review 9; AHRQ publication no 04-0051-1] Available:
17. Steinman MA, Ranji SR, Shojania KG, et al. Improving antibiotic selection: a systematic review and quantitative analysis of quality improvement strategies. Med Care. 2006;44:617–28. [PubMed]
18. Walsh JM, McDonald KM, Shojania KG, et al. Quality improvement strategies for hypertension management: a systematic review. Med Care. 2006;44:646–57. [PubMed]
19. Bates DW, Kuperman GJ, Rittenberg E, et al. A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med. 1999;106:144–50. [PubMed]
20. Christakis DA, Zimmerman FJ, Wright JA, et al. A randomized controlled trial of point-of-care evidence to improve the antibiotic prescribing practices for otitis media in children. Pediatrics. 2001;107:E15. [PubMed]
21. Dexter PR, Perkins S, Overhage JM, et al. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med. 2001;345:965–70. [PubMed]
22. Eccles M, McColl E, Steen N, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002;325:941. [PMC free article] [PubMed]
23. Filippi A, Sabatini A, Badioli L, et al. Effects of an automated electronic reminder in changing the antiplatelet drug-prescribing behavior among Italian general practitioners in diabetic patients: an intervention trial. Diabetes Care. 2003;26:1497–500. [PubMed]
24. Flottorp S, Oxman AD, Havelsrud K, et al. Cluster randomised controlled trial of tailored interventions to improve the management of urinary tract infections in women and sore throat. BMJ. 2002;325:367. [PMC free article] [PubMed]
25. Frank O, Litt J, Beilby J. Opportunistic electronic reminders. Improving performance of preventive care in general practice. Aust Fam Physician. 2004;33:87–90. [PubMed]
26. Hicks LS, Sequist TD, Ayanian JZ, et al. Impact of computerized decision support on blood pressure management and control: a randomized controlled trial. J Gen Intern Med. 2008;23:429–41. [PMC free article] [PubMed]
27. Judge J, Field TS, DeFlorio M, et al. Prescribers’ responses to alerts during medication ordering in the long term care setting. J Am Med Inform Assoc. 2006;13:385–90. [PMC free article] [PubMed]
28. Kenealy T, Arroll B, Petrie KJ. Patients and computers as reminders to screen for diabetes in family practice. Randomized-controlled trial. J Gen Intern Med. 2005;20:916–21. [PMC free article] [PubMed]
29. Kralj B, Iverson D, Hotz K, et al. The impact of computerized clinical reminders on physician prescribing behavior: evidence from community oncology practice. Am J Med Qual. 2003;18:197–203. [PubMed]
30. Krall MA, Traunweiser K, Towery W. Effectiveness of an electronic medical record clinical quality alert prepared by off-line data analysis. Stud Health Technol Inform. 2004;107:135–9. [PubMed]
31. Kucher N, Koo S, Quiroz R, et al. Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med. 2005;352:969–77. [PubMed]
32. McCowan C, Neville RG, Ricketts IW, et al. Lessons from a randomized controlled trial designed to evaluate computer decision support software to improve the management of asthma. Med Inform Internet Med. 2001;26:191–201. [PubMed]
33. Meigs JB, Cagliero E, Dubey A, et al. A controlled trial of web-based diabetes disease management: the MGH diabetes primary care improvement project. Diabetes Care. 2003;26:750–7. [PubMed]
34. Overhage JM, Tierney WM, McDonald CJ. Computer reminders to implement preventive care guidelines for hospitalized patients. Arch Intern Med. 1996;156:1551–6. [PubMed]
35. Overhage JM, Tierney WM, Zhou XH, et al. A randomized trial of “corollary orders” to prevent errors of omission. J Am Med Inform Assoc. 1997;4:364–75. [PMC free article] [PubMed]
36. Peterson JF, Rosenbaum BP, Waitman LR, et al. Physicians’ response to guided geriatric dosing: initial results from a randomized trial. Stud Health Technol Inform. 2007;129:1037–40. [PubMed]
37. Rothschild JM, McGurk S, Honour M, et al. Assessment of education and computerized decision support interventions for improving transfusion practice. Transfusion. 2007;47:228–39. [PubMed]
38. Roumie CL, Elasy TA, Greevy R, et al. Improving blood pressure control through provider education, provider alerts, and patient education: a cluster randomized trial. Ann Intern Med. 2006;145:165–75. [PubMed]
39. Safran C, Rind DM, Davis RB, et al. A clinical trial of a knowledge-based medical record. Medinfo. 1995;8:1076–80. [PubMed]
40. Sequist TD, Gandhi TK, Karson AS, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005;12:431–7. [PMC free article] [PubMed]
41. Tamblyn R, Huang A, Perreault R, et al. The medical office of the 21st century (MOXXI): effectiveness of computerized decision-making support in reducing inappropriate prescribing in primary care. CMAJ. 2003;169:549–56. [PMC free article] [PubMed]
42. Tape TG, Campbell JR. Computerized medical records and preventive health care: success depends on many factors. Am J Med. 1993;94:619–25. [PubMed]
43. Tierney WM, Overhage JM, Murray MD, et al. Effects of computerized guidelines for managing heart disease in primary care. J Gen Intern Med. 2003;18:967–76. [PMC free article] [PubMed]
44. Tierney WM, Overhage JM, Murray MD, et al. Can computer-generated evidence-based care suggestions enhance evidence-based management of asthma and chronic obstructive pulmonary disease? A randomized, controlled trial. Health Serv Res. 2005;40:477–97. [PMC free article] [PubMed]
45. van Wyk JT, van Wijk MA, Sturkenboom MC, et al. Electronic alerts versus on-demand decision support to improve dyslipidemia treatment: a cluster randomized controlled trial. Circulation. 2008;117:371–8. [PubMed]
46. Zanetti G, Flanagan HL, Jr, Cohn LH, et al. Improvement of intraoperative antibiotic prophylaxis in prolonged cardiac surgery by automated alerts in the operating room. Infect Control Hosp Epidemiol. 2003;24:13–6. [PubMed]
47. Shojania KG, Jennings A, Mayhew A, et al. The effects of on-screen, point of care computer reminders on processes and outcomes of care [review] Cochrane Database Syst Rev. 2009;(3):CD001096. [PubMed]
48. Balas EA, Weingarten S, Garb CT, et al. Improving preventive care by prompting physicians. Arch Intern Med. 2000;160:301–8. [PubMed]
49. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff (Millwood) 2005;24:138–50. [PubMed]
50. Grimshaw J, McAuley LM, Bero LA, et al. Systematic reviews of the effectiveness of quality improvement strategies and programmes. Qual Saf Health Care. 2003;12:298–303. [PMC free article] [PubMed]
51. Shojania KG, Ranji SR, Shaw LK, et al. Diabetes mellitus care. In: Shojania KG, McDonald KM, Wachter RM, et al., editors. Closing the quality gap: a critical analysis of quality improvement strategies. Vol. 2. Rockville (MD): Agency for Healthcare Research and Quality; 2004. [(accessed 2009 Nov. 26)]. [technical review 9; AHRQ publication no 04-0051-2] Available:

Articles from CMAJ : Canadian Medical Association Journal are provided here courtesy of Canadian Medical Association