J Gen Intern Med. Aug 2012; 27(8): 974–984.
Published online Mar 10, 2012. doi: 10.1007/s11606-012-2025-5
PMCID: PMC3403145
Changing Clinical Practice Through Patient Specific Reminders Available at the Time of the Clinical Encounter: Systematic Review and Meta-Analysis
Tim A. Holt, PhD (corresponding author),1,2 Margaret Thorogood, PhD,2 and Frances Griffiths, PhD2
1Department of Primary Care Health Sciences, University of Oxford, 2nd floor, 23-38 Hythe Bridge Street, Oxford OX1 2ET, UK
2Health Sciences Research Institute, University of Warwick, Coventry, UK
Tim A. Holt, Phone: +44-1865-289281, Fax: +44-1865-289287, tim.holt@phc.ox.ac.uk
Received July 7, 2011; Revised October 25, 2011; Accepted February 3, 2012.
OBJECTIVE
To synthesise current evidence for the influence on clinical behaviour of patient-specific electronically generated reminders available at the time of the clinical encounter.
DATA SOURCES
PubMed, Cochrane library of systematic reviews; Science Citation Index Expanded; Social Sciences Citation Index; ASSIA; EMBASE; CINAHL; DARE; HMIC were searched for relevant articles.
STUDY ELIGIBILITY CRITERIA, PARTICIPANTS AND INTERVENTIONS
We included controlled trials of reminder interventions if the intervention was: directed at clinician behaviour; available during the clinical encounter; computer generated (including computer generated paper-based reminders); and generated by patient-specific (rather than condition specific or drug specific) data.
STUDY APPRAISAL AND SYNTHESIS METHODS
Systematic review and meta-analysis of controlled trials published since 1970. A random effects model was used to derive a pooled odds ratio for adherence to recommended care or achievement of target outcome. Subgroups were examined based on area of care and study design. Odds ratios were derived for each sub-group. We examined the designs, settings and other features of reminders looking for factors associated with a consistent effect.
RESULTS
Altogether, 42 papers met the inclusion criteria. The studies were of variable quality and some were affected by unit of analysis errors due to a failure to account for clustering. An overall odds ratio of 1.79 [95% confidence interval 1.56, 2.05] in favour of reminders was derived. Heterogeneity was high and factors predicting effect size were difficult to identify.
LIMITATIONS
Methodological diversity added to statistical heterogeneity as an obstacle to meta-analysis. The quality of included studies was variable and in some reports procedural details were lacking.
CONCLUSIONS AND IMPLICATIONS OF KEY FINDINGS
The analysis suggests a moderate effect of electronically generated, individually tailored reminders on clinician behaviour during the clinical encounter. Future research should concentrate on identifying the features of reminder interventions most likely to result in the target behaviour.
KEY WORDS: reminder systems, electronic health records, computer systems, decision support systems, clinical
Computer generated reminder systems are commonly used to support routine health care. They utilise electronic data to identify clinical errors and opportunities for screening, preventive interventions, improved prescribing, and both diagnostic and monitoring tests. Previous studies have found that the response of clinicians to such reminders is variable, and a number of reviews have described existing tools, where possible measured their impact, and in some cases attempted to identify factors influencing effect size.1–11 Reminder systems are diverse in their design. Some are used to support specific clinical areas of care (e.g. diabetes), presenting current recommendations or evidence, and do not require patient specific data. Others are triggered simply by an attempt to prescribe a specific drug therapy, for instance reminding the prescriber of lithium that blood monitoring is required. Shojania et al. studied the impact of ‘on-screen’ reminders as a Cochrane systematic review,10 and excluded computer generated paper-based reminders and email alerts occurring outside clinical encounters. They hypothesised that this approach would identify a more consistent effect, in contrast to the variable results reported in previous reviews. This group derived a median absolute change in adherence of 4.2% (IQR 0.8–18.8%), suggesting significant variation in response, and factors predicting effect size were difficult to identify.
A subset of reminder systems draws on patient specific data in the electronic record and is therefore tailored to the individual. For this review we were interested specifically in individually tailored reminders and in the impact of these tools on decision making. We concurred with Shojania et al. over the importance of the ‘point of care’ setting, but hypothesised that tailored reminders might provide a more consistently positive effect. Although the reminders that we studied were exclusively clinician directed, individual tailoring might conceivably carry greater impact, as the resulting behaviour often requires patient involvement for completion (e.g. uptake of screening).
We chose to study both on-screen and paper-based reminders provided that they were generated by electronic information specific to the individual in a health record and available at the clinical encounter. In contrast to the Shojania review, we chose the odds ratio technique to estimate effect size as we were interested in the relative likelihood of achieving the desired outcome in the presence of a reminder rather than the absolute change in outcome. This approach may be more appropriate where baseline activity varies significantly between different trial settings, as relative benefit tends to be more stable across risk groups than absolute benefit.12 We were also interested in detecting any variation in response according to clinical area and in changes in responsiveness over the past 40 years, during which the use of electronic records has become widespread. A review protocol was written but not published.
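The distinction between relative and absolute benefit can be illustrated numerically. The sketch below (baseline rates are illustrative, not drawn from the included trials) shows that the same odds ratio of 2.0 implies very different absolute changes in adherence depending on the baseline rate, which is why a relative measure can be more stable across settings:

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

def rate_after_or(baseline, odds_ratio):
    """Adherence rate implied by applying a fixed odds ratio to a baseline rate."""
    o = odds_ratio * odds(baseline)
    return o / (1 + o)

# The same relative effect (OR = 2.0) at two different baseline rates:
for baseline in (0.10, 0.50):
    reminded = rate_after_or(baseline, 2.0)
    print(f"baseline {baseline:.0%} -> {reminded:.1%} "
          f"(absolute change {reminded - baseline:+.1%})")
# prints: baseline 10% -> 18.2% (absolute change +8.2%)
#         baseline 50% -> 66.7% (absolute change +16.7%)
```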
Literature Search
We systematically examined the literature from 1970 to February 2011 describing controlled trials of computer generated reminder interventions that draw on patient specific information and are available to clinicians during clinical encounters. We searched the following databases for relevant articles: PubMed, Cochrane library of systematic reviews; Science Citation Index Expanded; Social Sciences Citation Index; ASSIA; EMBASE; CINAHL; DARE; HMIC. The following search strategy (or adaptations of it) was used in each database:
  • Reminder systems [MeSH] AND (Health OR Medic* OR Clinical) AND (Computer* [text word] OR Electronic* [text word])
We looked at reference lists of retrieved articles and past systematic reviews of similar interventions. We included non-randomised controlled trials, provided data collection from both arms was contemporaneous. We excluded ‘before/after’ studies, given the potential for secular trends (including health policy changes) to confound the observed effect.
Selection of Articles
The inclusion criteria were applied to each paper by two reviewers, with disagreements resolved by the third reviewer.
Extraction of Data
For each identified paper, two reviewers assessed methodological quality and extracted the outcome data using a formatted extraction sheet. Where necessary, study authors were contacted for clarification. We assessed risk of bias according to inadequate random sequence generation (at study level); and incomplete outcome data, selective reporting, and unit of analysis error (at the outcome level). The latter was used as a basis for a correction for clustering in the meta-analysis.
Outcome Measures
Changes in process or clinical outcome included rates of screening, vaccination, diagnostic tests, blood pressure measurement, blood pressure control, rate of venous thrombo-embolism, and measures of prescribing quality.
Analysis
Odds ratios were derived for all binary outcomes where available. We used a random effects model with the Mantel–Haenszel method in RevMan version 5.2 to combine the data. Where multiple outcomes were reported, we derived a pooled outcome measure for each study. Heterogeneity was measured using the Tau2 and I2 statistics. Tau2 is a measure of between study variance appropriate for a random effects meta-analysis.12 I2 gives the proportion of the variability that is attributable to heterogeneity rather than chance.12
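As a sketch of the underlying calculation, the following implements inverse-variance DerSimonian–Laird pooling of log odds ratios with the Tau2 and I2 statistics. This is a close relative of, not identical to, the Mantel–Haenszel random-effects method RevMan applies, and the 2×2 counts passed in are purely illustrative:

```python
import math

def pooled_odds_ratio(studies):
    """DerSimonian-Laird random-effects pooling of log odds ratios.

    studies: list of 2x2 counts (a, b, c, d) = (intervention events,
    intervention non-events, control events, control non-events).
    Returns (pooled OR, 95% CI, tau2, I2).
    """
    # Log odds ratio and its variance for each study
    y = [math.log((a * d) / (b * c)) for a, b, c, d in studies]
    v = [1/a + 1/b + 1/c + 1/d for a, b, c, d in studies]
    w = [1 / vi for vi in v]  # fixed-effect (inverse-variance) weights

    # Cochran's Q around the fixed-effect pooled estimate
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
    df = len(studies) - 1

    # Between-study variance (tau2) and the proportion of variability
    # attributable to heterogeneity rather than chance (I2)
    c_ = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_)
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0

    # Random-effects weights incorporate tau2
    w_re = [1 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    ci = (math.exp(y_re - 1.96 * se), math.exp(y_re + 1.96 * se))
    return math.exp(y_re), ci, tau2, i2
```

With two identical studies the pooled OR equals each study's OR and both heterogeneity statistics are zero, as expected.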
Trials of reminder interventions may be affected by ‘unit of analysis errors’,10 through failure to correct for clustering. For instance, a trial may use as its outcome the proportion of patients achieving a clinical target at the end of the study, when it was the clinicians, clinical teams or clinics (not the patients individually) that had been randomised to use or not to use the reminders. If uncorrected, this error leads to an over-estimate of the precision of the effect size measurement.
We tested the effect of introducing a correction factor where clustering had not been accounted for, using a recommended technique.12 An assumed intra-class correlation co-efficient of 0.03 was identified as appropriate from a published source.13 This was used to derive a design effect estimate for each study based on its mean cluster size, and the numerator and denominator values for each trial arm were divided by this factor. The pooled odds ratio was then re-estimated to account for clustering. Recognising the risk of applying a single ICC to many studies, we undertook an analysis to measure the sensitivity of the pooled odds ratio and its confidence interval to a range of assumed ICC values.
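The correction described above can be sketched as follows, using the standard design-effect formula DE = 1 + (m − 1) × ICC from the Cochrane Handbook approach the review cites. The ICC default of 0.03 matches the paper's assumption; the arm counts in the usage example are illustrative:

```python
def adjust_for_clustering(events, total, mean_cluster_size, icc=0.03):
    """Deflate a trial arm's counts by the design effect
    DE = 1 + (m - 1) * ICC, where m is the mean cluster size.

    Dividing both numerator and denominator by DE leaves the observed
    proportion unchanged but reduces the effective sample size, widening
    the confidence interval of the pooled estimate.
    """
    design_effect = 1 + (mean_cluster_size - 1) * icc
    return events / design_effect, total / design_effect

# Illustrative arm: 120/400 patients achieved the target, mean cluster size 50.
# DE = 1 + 49 * 0.03 = 2.47, so the effective counts shrink accordingly.
eff_events, eff_total = adjust_for_clustering(120, 400, mean_cluster_size=50)
```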
We also examined subgroups of reminder intervention according to pre-specified clinical areas and distinguished articles according to whether the trial was ‘explanatory’ or ‘pragmatic’ in design. ‘Explanatory’ studies were those in which the denominator was the reminder opportunity, i.e. the clinical encounter in which the reminder was triggered. The outcome was the proportion of such encounters in which a clinician, presented with a reminder, responded to it. ‘Pragmatic’ studies used as their outcome the proportion of a population of patients whose clinicians were potentially exposed to a reminder intervention in whom the recommended care occurred. Some of the outcome denominator population might not have presented to the clinician during the study period, whilst others might have presented a number of times. Whilst some studies were difficult to categorise, we considered these groups to represent methodologically distinct designs worthy of separate analysis.
Finally, we sub-grouped studies according to the decade of publication, looking for a secular trend in the responsiveness of clinicians to such reminders, and assessed risk of publication bias using a funnel plot.
Selection of Articles
We initially identified 683 articles following removal of duplicates. Abstracts were examined to remove obviously irrelevant papers, leaving 234 for full text examination. Of these, 192 articles were excluded by at least two reviewers (Fig. 1). This left 42 trial reports in the final group.14–55 Forty-one of these used binary outcomes; the other46 used length of hospital stay. Two papers reported clinical outcomes (control of blood pressure24 and rate of venous thrombo-embolism29) and all the rest involved process outcomes. One paper19 reported three different intervention arms and one control arm. This study was entered as three separate comparisons and the numbers in the control arm were divided by three to avoid over-weighting. A further two papers47,53 reported two equally important forms of reminder that were both included as separate comparisons. Where possible we aggregated separately reported subgroups of outcome within the same trial to provide an estimate of overall effect. For instance, a single reminder intervention might promote screening tests, clinical measurement and immunisations, with each outcome reported separately. One paper21 reported multiple outcomes with no primary outcome and was not included in the meta-analysis, as it was not possible using this method to aggregate its outcome data reliably. There were therefore 44 comparisons using a binary outcome available for the meta-analysis. There were examples in which the desired effect of the reminder was to reduce rather than increase the outcome measure.15,29,32,43,47,48,50 In such cases we used the method described by Shojania et al.10 to impute a corrected numerator so that the effect was measured in the same direction as for the other studies. There was only one example49 of a trial that was controlled but not randomised.
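For a 2×2 table, recoding a ‘reduction’ outcome as its complement (‘undesirable event avoided’) yields the reciprocal of the raw odds ratio, aligning the direction of effect with the other comparisons. The counts below are illustrative, and this is a minimal sketch rather than a reconstruction of the exact imputation used by Shojania et al.:

```python
def odds_ratio(a, n1, c, n2):
    """Odds ratio for a/n1 events in the intervention arm vs c/n2 in control."""
    return (a * (n2 - c)) / (c * (n1 - a))

def recode_reduction_outcome(a, n1, c, n2):
    """Where the reminder aims to *reduce* the outcome (e.g. redundant tests),
    count 'event avoided' as the event instead, so that an effective reminder
    yields an odds ratio above 1, in the same direction as the other studies."""
    return n1 - a, n1, n2 - c, n2

# Illustrative counts: reminders reduce redundant testing from 20% to 10%.
raw = odds_ratio(10, 100, 20, 100)                                  # ~0.44
flipped = odds_ratio(*recode_reduction_outcome(10, 100, 20, 100))   # 2.25
```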
Figure 1.
PRISMA flow diagram for systematic review.
Meta-Analysis
For the 44 binary outcome comparisons an overall odds ratio of 1.79 [95% confidence interval 1.56, 2.05] was derived in favour of the reminders. Heterogeneity was high, presumably due to clinical and methodological diversity, with an overall Tau2 = 0.18, Chi2 = 1530.40, p < 0.00001, I2 = 97%. The one study using a continuous outcome46 reported a non-significant difference in length of hospital stay. The study that was excluded on the basis of multiple outcomes reported no effect of the reminder system on clinical care.21 For our included studies, 32 out of 44 comparisons showed a significant positive effect and 11 showed no significant effect. One study48 appeared to show a significant negative effect, but this was dependent on the definitions of intervention and control in a study comparing two different reminder systems.
To reduce clinical diversity we attempted subgroup analyses based on area of care (although there was much overlap). There was evidence (of borderline significance, Chi2 = 11.47, p = 0.04) of subgroup differences in effect size. Odds ratios ranged from 1.24 [95% CI 1.01–1.52] for condition specific but multiple reminders to 4.69 [95% CI 1.25–17.53] for vaccination reminders (Fig. 2). The condition specific but multiple reminders subgroup had a relatively low Tau2 score of 0.03 with Chi2 = 7.84, p = 0.05. The odds ratios in favour of the intervention for the explanatory and pragmatic sub-groups were 1.90 and 1.71, respectively, and there was no significant improvement in heterogeneity scores.
Figure 2.
Forest plot of all studies (44 comparisons) reporting binary outcomes, grouped by area of care. These are based on raw extracted data prior to our adjustment for clustering.
There was no evidence that odds ratios were different between the 1980s, 1990s and 2000s, and only one study from the 1970s was included.
Characteristics of the reminder interventions were examined to look for factors likely to influence the effect size, including clinical priority, remunerative factors, and ease of use. These are explored in the Discussion section below (Table 1).
Table 1
Characteristics of Included Studies
Methodological Quality
For many studies procedural details such as randomisation techniques were unreported. Trials of reminder interventions sometimes randomise at the level of the clinician or clinical team, but analyse using patient level outcome data. Some form of unit of analysis issue potentially affected 28 studies14,18–22,24–28,31,32,35,37–39,41–45,47–49,51–53 and 32 comparisons. In sixteen cases14,19–21,24,26,31,37–39,45,47,48,51–53 this was discussed and corrective action taken to adjust confidence intervals or p values appropriately. However, the raw data that we extracted had not undergone this correction and we therefore applied our own adjustment as described above. For the 32 comparisons affected, the initial odds ratio in favour of reminders was 1.87 [95% CI 1.54, 2.28]. Following our adjustments the odds ratio for these studies had changed to 1.90 and the confidence interval had widened slightly to [1.54, 2.33]. There was no change in the overall pooled odds ratio of all studies combined (1.79, [1.58, 2.02]). The results of our adjustment for clustering are given in Fig. 3. Table 2 gives the results based on a range of assumed ICC values, suggesting that the analysis was not sensitive to this assumed value over a 100-fold scale. Figure 4 shows risk of bias tables (a) for each study and (b) aggregated.
Figure 3.
Forest plot of all studies reporting binary outcomes, grouped according to presence or absence of a unit of analysis (UoA) issue, with correction to account for clustering in the first group. In the published source papers a similar correction had been …
Table 2
Pooled Odds Ratios for the Subgroup of Comparisons Requiring Correction for Clustering, Using a Range of Assumed ICC Values
Figure 4.
Risk of bias tables (a) for each study (b) aggregated.
Publication Bias
We derived a funnel plot which was broadly symmetrical with no evidence of substantial publication bias.
Summary of Findings
The majority of interventions in our review produced significant changes in measured outcomes, but there were numerous examples of no effect and it appears that reminders are often ignored. There is no evidence that such tools were more effective in the 2000s than in the 1980s or 1990s, and our effect size estimate is very similar to a previously published value from 1996,7 albeit using different inclusion criteria.
Features Influencing Effect Size
Characteristics of individual studies are given in Table 1. We examined these to see whether specific features associated with a more consistent effect could be identified. Kawamoto et al.4 have reported four features believed to be relevant in clinical decision support systems: automatic provision of decision support as part of clinical workflow; provision of recommendations rather than just assessments; provision of decision support at the time and location of decision making; and computer-based decision support. Whilst all our trials involved computer generated reminders, some of these were paper-based. We looked at whether this feature influenced success, and also considered a number of other potentially relevant issues suggested by other investigators.56–62 These included clinical priority and relevance, cost-effectiveness considerations, accessibility, intrusiveness, and the time required to respond.
Computer generated but paper-based reminders were involved in 12 of our 44 comparisons.16–19,33–36,40–42,54 The remainder were displayed either exclusively on a computer screen or in both formats. There was no significant difference in the odds ratios obtained between these subgroups.
It is difficult to judge which issues physicians are likely to consider most important clinically. Vaccination reminders might in most situations be considered less urgent than immediate prescribing safety or laboratory monitoring issues, but in fact were associated with a stronger effect, albeit based on a small number of studies. However the one trial reporting a significantly positive result for a clinical (rather than a process) outcome involved the prevention of venous thrombo-embolism in hospitalised patients identified and flagged as ‘at risk’ of this serious condition.29
None of our included trials specifically reported ‘payment by result’ as a direct consequence of responding to a reminder, but this may have been an unreported factor in settings where remuneration is partly based on quality or efficiency of care. In some cases the electronic record itself had been established at least partly for the purpose of gathering billing information. Shea et al.46 mention financial pressures relevant to their length of hospital stay outcome. Others mentioned the health economic benefits of cost-effective monitoring and prescribing, promoted by reminders, and Tierney (1987)50 included charges per visit as a secondary outcome.
It is difficult to interpret from a published study exactly how much time clinicians had available and how onerous the recommended action might have been. In a large study based in Canada, the reminder requiring activation by the clinician was in fact more effective than the one appearing spontaneously.48 Van Wyk et al. arrived at the opposite conclusion in their trial.53 They included an ‘on-demand’ arm that required the user to actively seek the recommendation by accessing an overview screen in the patient’s record. In this arm responsiveness was significantly lower than in the ‘alerting’ arm which required no positive action. Eccles et al. reported a similar finding that highlights the difficulties in successfully embedding the reminder into the workflow.21 The negative results in this study were attributed by the authors to low usage of the system, despite its integration into the clinical software.
Other interesting phenomena were reported in the studies we examined. Chambers et al.18 included an arm in which the reminders only appeared ‘sometimes’ (in addition to the ‘always reminded’ arm whose data were used in our meta-analysis). The clinicians reminded ‘sometimes’ had a lower adherence than those reminded ‘never’ (i.e. controls), suggesting that they had become dependent on the alerts to remember to arrange influenza immunisation for eligible patients.
Strengths and Limitations
Our study is limited partly by heterogeneity of effect sizes and partly by difficulties in synthesising data from diverse trial designs. The effect under investigation is likely to depend on the health care setting, the detailed design of the reminder, and the priorities of both clinician and patient. Attempts to substantially reduce heterogeneity through subgroup analyses were unsuccessful, but our measurement of effect size is nevertheless meaningful. We focussed specifically on ‘reminder’ interventions and may have missed some studies of more generalised decision support systems in which reminders were a minor element. A further limitation is the lack of detail given in some trial reports over how the system actually operated in practice and what was required of the user in practical terms.
Our review provides data specific to tailored reminders available during clinical encounters, and is the only recently published example of a meta-analysis using a relative (odds ratio) technique rather than an absolute change method in this area of care. This technique provides a more consistent measure of effect across diverse studies, but is more sensitive to outliers than the median absolute benefit technique.11 Some trial reports accounted for clustering effects; others risked unit of analysis errors. We applied our own correction for clustering in the analysis of the raw trial data to estimate the effect of clustering on our pooled odds ratio.
Future Research
Most individual reminder trials are designed to find out whether a system works rather than why it works. Mayo-Smith and Agrawal used a mixed method to investigate this area, conducting an observational study of reminder completion rates followed by a questionnaire survey of users.63 They also reviewed literature reporting this issue specifically, and included studies using qualitative methods. They reported a number of possible features of reminders, settings and users that appear to facilitate or obstruct response, and such clues might become the basis for a more extensive programme of investigation.
CONCLUSIONS
Individually tailored, computer generated reminders generally produce positive but modest effects on clinicians’ behaviour. Such interventions are inexpensive, widely available, and offer the potential both to improve clinical care and to impact health outcomes. There is now an extensive literature demonstrating these benefits. The specific features of such tools and the particular settings that determine their effect are still unclear but should become the focus of future research in this area.
Acknowledgements
We thank Dr Simon Gates for analytical advice and Samantha Johnson for assistance with database searching and retrieval of papers. We also thank the study authors who provided useful clarifications when these were requested.
Conflict of Interest
The authors declare that they do not have a conflict of interest.
Funding source
Funded through internal sources. No external funding.
1. Balas EA, Austin SM, Mitchell JA, Ewigman BG, Bopp KD, Brown GD. The clinical value of computerized information services - A review of 98 randomized clinical trials. Arch Fam Med. 1996;5(5):271–8. doi: 10.1001/archfami.5.5.271. [PubMed] [Cross Ref]
2. Bennett JW, Glasziou PP, Sim I. Review: Computerised reminders and feedback can improve provider medication management. Evid Base Med. 2003;8(6):190. doi: 10.1136/ebm.8.6.190. [Cross Ref]
3. Garg AX, Adhikari NKJ, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: A systematic review. JAMA. 2005;293(10):1223–38. doi: 10.1001/jama.293.10.1223. [PubMed] [Cross Ref]
4. Kawamoto K, Houlihan C, Balas E, Lobach D. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330(7494):765. doi: 10.1136/bmj.38398.500764.8F. [PMC free article] [PubMed] [Cross Ref]
5. Mitchell E, Sullivan F. A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980-97. BMJ. 2001;322(7281):279–82E. doi: 10.1136/bmj.322.7281.279. [PMC free article] [PubMed] [Cross Ref]
6. Montgomery AA, Fahey T. A systematic review of the use of computers in the management of hypertension. Journal of Epidemiology and Community Health. 1998;52(8):520–5. doi: 10.1136/jech.52.8.520. [PMC free article] [PubMed] [Cross Ref]
7. Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc. 1996;3(6):399–409. doi: 10.1136/jamia.1996.97084513. [PMC free article] [PubMed] [Cross Ref]
8. Shiffman RN, Liaw Y, Brandt CA, Corb GJ. Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. J Am Med Inform Assoc. 1999;6(2):104–14. doi: 10.1136/jamia.1999.0060104. [PMC free article] [PubMed] [Cross Ref]
9. Dexheimer JW, Talbot TR, Sanders DL, Rosenbloom ST, Aronsky D. Prompting clinicians about preventive care measures: a systematic review of randomized controlled trials. J Am Med Inform Assoc. 2008;15(3):311–20. doi: 10.1197/jamia.M2555. [PMC free article] [PubMed] [Cross Ref]
10. Shojania KG, Jennings A, Mayhew A, et al. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009;3:CD001096. [PubMed]
11. Shojania KG, Jennings A, Mayhew A, et al. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ. 2010;182:E216–25. doi: 10.1503/cmaj.090578. [PMC free article] [PubMed] [Cross Ref]
12. Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from www.cochrane-handbook.org (last accessed 24.1.12).
13. Adams G, Gulliford MC, Ukoumunne OC, et al. Patterns of intra-cluster correlation from primary care research to inform study design and analysis. J Clin Epidemiol. 2004;57(8):785–794. doi: 10.1016/j.jclinepi.2003.12.013. [PubMed] [Cross Ref]
14. Matheny ME, Sequist TD, Seger AC, Fiskio JM, Sperling M, Bugbee D, et al. A randomized trial of electronic clinical reminders to improve medication laboratory monitoring. J Am Med Inform Assoc. 2008;15(4):424–9. doi: 10.1197/jamia.M2602. [PMC free article] [PubMed] [Cross Ref]
15. Bates DW, Kuperman GJ, Rittenberg E, Teich JM, Fiskio J, Ma'luf N, et al. A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med. 1999;106(2):144–50. doi: 10.1016/S0002-9343(98)00410-0. [PubMed] [Cross Ref]
16. Burack RC, Gimotty PA, George J, Simon MS, Dews P, Moncrease A. The effect of patient and physician reminders on use of screening mammography in a health maintenance organization. Results of a randomized controlled trial. Cancer. 1996;78(8):1708–21. doi: 10.1002/(SICI)1097-0142(19961015)78:8<1708::AID-CNCR11>3.0.CO;2-1. [PubMed] [Cross Ref]
17. Burack RC, Gimotty PA, George J, McBride S, Moncrease A, Simon MS, et al. How reminders given to patients and physicians affected pap smear use in a health maintenance organization: results of a randomized controlled trial. Cancer. 1998;82(12):2391–400. doi: 10.1002/(SICI)1097-0142(19980615)82:12<2391::AID-CNCR13>3.0.CO;2-K. [PubMed] [Cross Ref]
18. Chambers CV, Balaban DJ, Carlson BL, Grasberger DM. The effect of microcomputer-generated reminders on influenza vaccination rates in a university-based family practice center. J Am Board Fam Pract. 1991;4(1):19–26. [PubMed]
19. Dexter PR, Wolinsky FD, Gramelspacher GP, Zhou XH, Eckert GJ, Waisburd M, et al. Effectiveness of computer-generated reminders for increasing discussions about advance directives and completion of advance directive forms. A randomized, controlled trial. Ann Intern Med. 1998;128(2):102–10. [PubMed]
20. Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med. 2001;345(13):965–70. doi: 10.1056/NEJMsa010181. [PubMed] [Cross Ref]
21. Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002;325(7370):941. doi: 10.1136/bmj.325.7370.941. [PMC free article] [PubMed] [Cross Ref]
22. Filippi A, Sabatini A, Badioli L, Samani F, Mazzaglia G, Catapano A, et al. Effects of an automated electronic reminder in changing the antiplatelet drug-prescribing behavior among Italian general practitioners in diabetic patients: an intervention trial. Diabetes Care. 2003;26(5):1497–500. doi: 10.2337/diacare.26.5.1497. [PubMed] [Cross Ref]
23. Frank O, Litt J, Beilby J. Opportunistic electronic reminders: improving performance of preventive care in general practice. Aust Fam Physician. 2004;33(1/2):87–90. [PubMed]
24. Hicks LS, Sequist TD, Ayanian JZ, Shaykevich S, Fairchild DG, Orav EJ, et al. Impact of computerized decision support on blood pressure management and control: A randomized controlled trial. J Gen Intern Med. 2007;23(4):429–41. doi: 10.1007/s11606-007-0403-1. [PMC free article] [PubMed] [Cross Ref]
25. Judge J, Field TS, DeFlorio M, Laprino J, Auger J, Rochon P, et al. Prescribers' responses to alerts during medication ordering in the long term care setting. J Am Med Inform Assoc. 2006;13(4):385–90. doi: 10.1197/jamia.M1945. [PMC free article] [PubMed] [Cross Ref]
26. Kenealy T, Arroll B, Petrie KJ. Patients and computers as reminders to screen for diabetes in family practice. Randomized-controlled trial. J Gen Intern Med. 2005;20(10):916–21. doi: 10.1111/j.1525-1497.2005.0197.x. [PMC free article] [PubMed] [Cross Ref]
27. Kralj B, Iverson D, Hotz K, Ashbury FD. The impact of computerized clinical reminders on physician prescribing behavior: Evidence from community oncology practice. Am J Med Qual. 2003;18(5):197–203. doi: 10.1177/106286060301800504. [PubMed] [Cross Ref]
28. Krall MA, Traunweiser K, Towery W. Effectiveness of an electronic medical record clinical quality alert prepared by off-line data analysis. Medinfo. 2004;11(Pt 1):135–9. [PubMed]
29. Kucher N, Koo S, Quiroz R, Cooper JM, Paterno MD, Soukonnikov B, et al. Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med. 2005;352(10):969–77. doi: 10.1056/NEJMoa041533. [PubMed] [Cross Ref]
30. Litzelman DK, Dittus RS, Miller ME, Tierney WM. Requiring physicians to respond to computerized reminders improves their compliance with preventive care protocols. J Gen Intern Med. 1993;8(6):311–7. doi: 10.1007/BF02600144. [PubMed] [Cross Ref]
31. Lo HG, Matheny ME, Seger DL, Bates DW, Gandhi TK. Impact of non-interruptive medication laboratory monitoring alerts in ambulatory care. J Am Med Inform Assoc. 2009;16(1):66–71. doi: 10.1197/jamia.M2687. [PMC free article] [PubMed] [Cross Ref]
32. McCowan C, Neville RG, Ricketts IW, Warner FC, Hoskins G, Thomas GE. Lessons from a randomized controlled trial designed to evaluate computer decision support software to improve the management of asthma. Med Inform Internet Med. 2001;26(3):191–201. doi: 10.1080/14639230110067890. [PubMed] [Cross Ref]
33. McDonald CJ. Use of a computer to detect and respond to clinical events: Its effect on clinical behavior. Ann Intern Med. 1976;84:162–7. [PubMed]
34. McDonald CJ, Wilson GA, McCabe GP., Jr Physician response to computer reminders. JAMA. 1980;244(14):1579–81. doi: 10.1001/jama.1980.03310140037026. [PubMed] [Cross Ref]
35. McDowell I, Newell C, Rosser W. A randomized trial of computerized reminders for blood pressure screening in primary care. Med Care. 1989;27(3):297–305. doi: 10.1097/00005650-198903000-00008. [PubMed] [Cross Ref]
36. McDowell I, Newell C, Rosser W. Computerized reminders to encourage cervical screening in family practice. J Fam Pract. 1989;28(4):420–4. [PubMed]
37. Murray MD, Harris LE, Overhage JM, Zhou XH, Eckert GJ, Smith FE, et al. Failure of computerized treatment suggestions to improve health outcomes of outpatients with uncomplicated hypertension: results of a randomized controlled trial. Pharmacotherapy. 2004;24(3):324–37. doi: 10.1592/phco.24.4.324.33173. [PubMed] [Cross Ref]
38. Overhage JM, Tierney WM, McDonald CJ. Computer reminders to implement preventive care guidelines for hospitalized patients. Arch Intern Med. 1996;156(14):1551–6. doi: 10.1001/archinte.1996.00440130095010. [PubMed] [Cross Ref]
39. Overhage JM, Tierney WM, Zhou XH, McDonald CJ. A randomized trial of "corollary orders" to prevent errors of omission. J Am Med Inform Assoc. 1997;4(5):364–75. doi: 10.1136/jamia.1997.0040364. [PMC free article] [PubMed] [Cross Ref]
40. Rosser WW, McDowell I, Newell C. Use of reminders for preventive procedures in family medicine. CMAJ. 1991;145(7):807. [PMC free article] [PubMed]
41. Rosser WW, Hutchison BG, McDowell I, Newell C. Use of reminders to increase compliance with tetanus booster vaccination. CMAJ. 1992;146(6):911–7. [PMC free article] [PubMed]
42. Rossi RA, Every NR. A computerized intervention to decrease the use of calcium channel blockers in hypertension. J Gen Intern Med. 1997;12(11):672–8. doi: 10.1046/j.1525-1497.1997.07140.x. [PMC free article] [PubMed] [Cross Ref]
43. Rothschild J, McGurk S, Honour M, Lu L, McClendon A, Srivastava P, et al. Assessment of education and computerized decision support interventions for improving transfusion practice. Transfusion. 2007;47(2):228–39. doi: 10.1111/j.1537-2995.2007.01093.x. [PubMed] [Cross Ref]
44. Safran C, Rind DM, Davis RB, Ives D, Sands DZ, Currier J, et al. Guidelines for management of HIV infection with computer-based patient's record. Lancet. 1995;346(8971):341–6. doi: 10.1016/S0140-6736(95)92226-1. [PubMed] [Cross Ref]
45. Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005;12(4):431–7. doi: 10.1197/jamia.M1788. [PMC free article] [PubMed] [Cross Ref]
46. Shea S, Sideli RV, DuMouchel W, Pulver G, Arons RR, Clayton PD. Computer-generated informational messages directed to physicians: effect on length of hospital stay. J Am Med Inform Assoc. 1995;2(1):58–64. doi: 10.1136/jamia.1995.95202549. [PMC free article] [PubMed] [Cross Ref]
47. Tamblyn R, Huang A, Perreault R, Jacques A, Roy D, Hanley J, et al. The medical office of the 21st century (MOXXI): effectiveness of computerized decision-making support in reducing inappropriate prescribing in primary care. CMAJ. 2003;169(6):549–56. [PMC free article] [PubMed]
48. Tamblyn R, Huang A, Taylor L, Kawasumi Y, Bartlett G, Grad R, et al. A randomized trial of the effectiveness of on-demand versus computer-triggered drug decision support in primary care. J Am Med Inform Assoc. 2008;15(4):430–8. doi: 10.1197/jamia.M2606. [PMC free article] [PubMed] [Cross Ref]
49. Tape TG, Campbell JR. Computerized medical records and preventive health care: success depends on many factors. Am J Med. 1993;94(6):619–25. doi: 10.1016/0002-9343(93)90214-A. [PubMed] [Cross Ref]
50. Tierney WM, McDonald CJ, Martin DK, Rogers MP. Computerized display of past test results. Effect on outpatient testing. Ann Intern Med. 1987;107(4):569–74. [PubMed]
51. Tierney WM, Overhage JM, Murray MD, Harris LE, Zhou XH, Eckert GJ, et al. Effects of computerized guidelines for managing heart disease in primary care. J Gen Intern Med. 2003;18(12):967–76. doi: 10.1111/j.1525-1497.2003.30635.x. [PMC free article] [PubMed] [Cross Ref]
52. Tierney W, Overhage J, Murray M, Harris L, Zhou X, Eckert G, et al. Can computer-generated evidence-based care suggestions enhance evidence-based management of asthma and chronic obstructive pulmonary disease? A randomized, controlled trial. Health Serv Res. 2005;40(2):477–97. doi: 10.1111/j.1475-6773.2005.0t369.x. [PMC free article] [PubMed] [Cross Ref]
53. van Wijk MA, van der Lei J, Mosseveld M, Bohnen AM, van Bemmel JH. Assessment of decision support for blood test ordering in primary care: a randomized trial. Ann Intern Med. 2001;134(4):274–81. [PubMed]
54. White KS, Lindsay A, Pryor TA, Brown WF, Walsh K. Application of a computerized medical decision-making process to the problem of digoxin intoxication. J Am Coll Cardiol. 1984;4(3):571–6. doi: 10.1016/S0735-1097(84)80104-7. [PubMed] [Cross Ref]
55. Holt TA, Thorogood M, Griffiths F, Munday S, Friede T, Stables D. Automated electronic reminders to facilitate cardiovascular disease prevention: randomised controlled trial. Br J Gen Pract. 2010;60(573):e137–43. doi: 10.3399/bjgp10X483904. [PMC free article] [PubMed] [Cross Ref]
56. Gandhi TK, Sequist TD, Poon EG, et al. Primary care clinicians’ attitudes towards electronic clinical reminders and clinical practice guidelines. Proceedings of the AMIA Symposium 2003:848. [PMC free article] [PubMed]
57. Krall MA, Sittig DF. Subjective assessment of usefulness and appropriate presentation mode of alerts and reminders in the outpatient setting. Proceedings of the AMIA Symposium 2001:334–338. [PMC free article] [PubMed]
58. Saleem JJ, Patterson ES, Militello L, Render ML, Orshansky G, Asch SM. Exploring barriers and facilitators to the use of computerized clinical reminders. J Am Med Inform Assoc. 2005;12:438–47. [PMC free article] [PubMed]
59. Patterson ES, Doebbeling BN, Fung CH, Militello L, Anders S, Asch SM. Identifying barriers to the effective use of clinical reminders: bootstrapping multiple methods. J Biomed Inform. 2005;38:189–199. doi: 10.1016/j.jbi.2004.11.015. [PubMed] [Cross Ref]
60. Sittig DF, Krall MA, Dykstra RH, Russell A, Chin HL. A survey of factors affecting clinician acceptance of clinical decision support. BMC Med Inform Decis Mak. 2006;6:6. doi: 10.1186/1472-6947-6-6. [PMC free article] [PubMed] [Cross Ref]
61. Krall MA, Sittig DF. Clinicians’ assessments of outpatient electronic medical record alert and reminder usability and usefulness requirements. Proceedings of the AMIA Symposium 2002:400–404. [PMC free article] [PubMed]
62. Agrawal A, Mayo-Smith MF. Adherence to computerized clinical reminders in a large healthcare delivery network. Medinfo. 2004;11(1):111–114. [PubMed]
63. Mayo-Smith MF, Agrawal A. Factors associated with improved completion of computerized clinical reminders across a large healthcare system. Int J Med Inform. 2007;76(10):710–6. doi: 10.1016/j.ijmedinf.2006.07.003. [PubMed] [Cross Ref]