South Med Rev. 2012 December; 5(2): 42–50.
Published online 2012 December 27.
PMCID: PMC3606938

Barriers and Facilitators to Adoption of a Web-based Antibiotic Decision Support System


Objective: To measure clinicians’ perceptions of barriers and facilitators to the adoption of a Computerised Decision Support System (CDSS) for antibiotic approval, and to examine the correlation between these perceptions and actual usage of the system by clinicians.

Methods: This study was conducted in a tertiary care university hospital in Melbourne, Australia. A survey instrument comprising demographic items and newly developed scales to measure clinicians’ perceptions of barriers and facilitators to use of a CDSS was developed. Cross-sectional mail surveys were sent to 250 Junior and Senior Medical Staff and Pharmacists in a tertiary care hospital. Cronbach’s alpha was used to measure the reliability of the perceptions scales. One-way ANOVA was used to assess differences between participants’ responses; Tamhane’s test was used for post-hoc analysis. Pearson correlations were used to measure the relationship between participants’ scores on the scales and their actual use of the CDSS under study.

Results: The overall survey response rate was 54%. Cronbach’s alpha values for the perceived barriers and facilitators scales were 0.80 and 0.88, respectively. Senior medical staff perceived significantly more barriers than junior medical staff and pharmacists. Statistically significant differences were observed between the scores of the participants on a number of items on the perceived barriers and facilitators scales. A negative correlation was observed between the participants’ scores on the perceived barriers scale and their use of the system (r = -0.415, p = 0.001).

Conclusions: The scales to measure perceived barriers and facilitators to adoption of an antibiotic CDSS showed acceptable reliability and validity. Important differences exist between senior and junior medical staff regarding the barriers and facilitators to adopting the CDSS, which may influence future use by clinicians.

Keywords: Computerised Decision Support Systems, Clinicians, Antimicrobials, Perceptions, Scales, Validity, Reliability, Adoption


The prevalence of multidrug-resistant bacteria is increasing at an alarming rate [1]. Inappropriate prescribing of antimicrobial agents by healthcare professionals is an important factor contributing to antimicrobial resistance [2, 3]. Computerised Decision Support Systems (CDSS) that deliver evidence-based recommendations regarding appropriate selection and use of antibiotics at the point of care have been demonstrated to improve prescribing of antibiotics [4, 5]. Nonetheless, failure of CDSS is not an uncommon occurrence; two recent reviews of evaluation studies of CDSS found that about one third failed to achieve their intended outcome [6, 7]. It is interesting to note that the percentage of CDSS that failed in improving physicians’ performance has not changed significantly over time. This is despite improvement in CDSS implementation and quality of evaluative study designs [6, 8]. Since negative perceptions by clinicians of CDSS can affect acceptance [9], studying clinicians’ perceptions may provide useful insights into the determinants of successful implementation of CDSS.

There is an increasing interest in studying clinicians’ perceptions [10, 11] and their role in adoption of CDSS [12, 13]. However, given the scarcity of reliable and valid tools to measure clinicians’ perceptions of CDSS [14], researchers often have to develop their own tools to measure the outcome of interest [13, 15]. The measurement of human perceptions and attitudes is a common practice in human psychology, and there are explicit guidelines for development and testing of such measures [16]. Unfortunately, a number of studies that have attempted to measure clinicians’ perceptions of CDSS or other Computerised Clinical Information Systems do not report reliability or validity [14]. As such, there is a need to study clinicians’ perceptions of CDSS and the relationship between such perceptions and the usage of CDSS by the clinicians using valid and reliable measurement tools.

Aims of the study

The objective of this study was to examine the reliability and validity of newly developed scales to measure clinicians’ perceptions of barriers and facilitators to adopting a web-based antibiotic approval system. A further objective was to study the relationship between clinicians’ perceptions of the system and their actual usage of it.

This study was conducted in parallel to another study that measured clinicians’ perceptions of the ease of use and usefulness of the antibiotic approval system, which is reported elsewhere [17].


Hospital case site and CDSS

The Royal Melbourne Hospital (RMH) is a tertiary referral centre and teaching hospital in Melbourne, Australia. Since 2000, the infectious diseases department has been developing and implementing computerised decision support tools. A web-based antibiotic approval system launched in March 2001 was shown to successfully reduce the prescribing of third-generation cephalosporins [18]. More recently, a web-based system (Guidance DS®) that provides electronic approval for prescribing of restricted antibiotics, computerised clinical guidelines, and antibiotic decision support was introduced at the RMH. The antibiotic approval module of Guidance DS®, iApprove®, was implemented in February 2005 to replace the previous antibiotic approval program for third-generation cephalosporins; clinicians could instead request an infectious diseases consult if they preferred not to use the system to obtain approvals. A brief description of iApprove® is provided in Appendix 1.

Survey development

The complete survey instrument has been reported previously [17]; the scales measuring perceived barriers and facilitators are shown in Appendix 2. The development and content validation of these scales are reported below.

Negative perceptions have been shown to discourage clinicians’ adoption of CDSS and ultimately to contribute to the failure of these systems [9]. In addition, CDSS often utilise clinical guidelines as their knowledge base, and clinicians’ attitudes towards a particular guideline used in a CDSS may affect their attitude towards that system [19]. A number of barriers associated with poor adoption of CDSS and related technologies relate to the implementation strategies used for the systems [20, 21]. The lack of technical infrastructure [19], training [9, 22], education about the system [9], and local champions [21] are associated with poor uptake of CDSS by clinicians. Lack of applicability to individual patients is a further reported barrier [9, 22], and while there is conflicting evidence on the impact of these systems on clinician autonomy [15, 23], at least one report suggests that a perceived loss of autonomy can negatively affect adoption [24]. A review of evaluation studies identified the design features of CDSS that improve clinician performance [7]. The reviewers found that systems that provide decision support as part of the clinical workflow, at the time and location of decision making, and that provide recommendations rather than assessments alone, were associated with higher success rates [7]. In addition, integration of other systems such as Computerised Provider Order Entry (CPOE) with the CDSS [15], seeking clinicians’ opinions about the system [10], and improving technical infrastructure [25] are all reported as facilitators of clinicians’ adoption of CDSS.

In light of the above-mentioned barriers and facilitators to clinicians’ adoption of CDSS, a pool of 14 items to measure perceived barriers and 15 items to measure perceived facilitators was initially drafted into a measurement tool. This draft was reviewed by the following people at the study hospital: two infectious diseases physicians; a pharmacist involved in drug use evaluation; six Junior Medical Staff (JMS: two interns, two residents and two registrars); one member of the Senior Medical Staff (SMS); and two ward pharmacists. The items were also reviewed by researchers familiar with survey design at two universities, as well as by clinicians involved in CDSS-related studies from outside the case site. The initial draft of 29 items was considered lengthy by most of the reviewers, and the university researchers suggested changes to item structure. As a result of this review, five items were removed from the barriers scale and one “double-barrelled” item was split into two, while five items were removed from the facilitators scale.

The participants

Three categories of clinicians were invited to participate: JMS, SMS and pharmacists. These categories represented those using iApprove® as part of their daily workflow to seek approval for prescribing antibiotics (JMS), those monitoring antibiotic approvals (pharmacists), and those who were not using the system themselves but were making clinical decisions that could affect the usage of iApprove® (SMS). Since SMS were not using the system themselves, items on both scales were modified to seek SMS’ opinions of what they perceived as likely barriers and facilitators to the adoption of the antibiotic CDSS by the JMS (available from the author on request).

Survey deployment

Ethical approvals were obtained from the Monash University Standing Committee on Ethics in Research involving Humans and the Royal Melbourne Hospital Human Ethics and Research Committee.

To protect individual identities, all surveys were allocated unique identification codes. On 1 June 2005, the coded survey, along with a participant information sheet, instructions and a postage-paid self-addressed envelope, was mailed to a total of 150 JMS, 70 SMS and 30 pharmacists working in inpatient wards. A stratified random sampling method was used to select the JMS and SMS to ensure adequate representation from all clinical wards; it was not applied to pharmacists because of their small number. A reminder email was sent to all selected clinicians two and four weeks after the initial distribution, and a further reminder with repeat surveys was sent to non-respondents eight weeks after the initial distribution. Permission to monitor participants’ usage of iApprove® was obtained as part of seeking consent. A computerised usage log of individual clinicians was automatically generated by the system from 1 February to 31 December 2005; the log showed the number of times an individual clinician accessed iApprove® to obtain approval for restricted antibiotics.

Data analysis

Reliability of both scales was measured using Cronbach’s alpha coefficient [26]. The value of Cronbach’s alpha lies between 0 and 1: values below 0.6 are considered unacceptable, values between 0.7 and 0.8 acceptable, and values of 0.8 and above indicative of good reliability [16]. Means, medians and percentages were calculated for the scales measuring barriers and facilitators to use of iApprove®. One-way ANOVA was performed to assess significant differences in the study parameters among participants, with Tamhane’s test used for post-hoc analysis. Bivariate Pearson correlation was performed to estimate correlations between variables. The aim of the correlation analysis was twofold: first, to determine any association between the perceptions and the actual and reported use of the system by the clinicians; and second, to examine the criterion- and construct-related validity of the newly developed scales. An alpha value of 0.05 was set to test the significance of differences and correlations among study parameters. SPSS version 11.5 was used.
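The analysis above was run in SPSS. As a rough sketch of the core computations using open-source tools — the data below are synthetic and all variable names are illustrative, not the study's — Cronbach's alpha and a bivariate Pearson correlation of the kind used for the criterion-validity check can be computed as follows:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of scale totals
    return (k / (k - 1)) * (1.0 - sum_item_vars / total_var)

# Hypothetical 5-point Likert responses: 100 respondents x 10 scale items,
# driven by a common latent attitude so the items hang together.
rng = np.random.default_rng(0)
latent = rng.integers(1, 6, size=(100, 1))                            # scores 1..5
scores = np.clip(latent + rng.integers(-1, 2, size=(100, 10)), 1, 5)

alpha = cronbach_alpha(scores)                                        # internal consistency

# Criterion-style check: correlate scale totals with a synthetic usage count
# constructed so that higher perceived barriers imply lower usage.
totals = scores.sum(axis=1)
usage = -0.5 * totals + rng.normal(0, 5, size=100)
r, p = stats.pearsonr(totals, usage)
```

The one-way ANOVA step maps to `scipy.stats.f_oneway`; note that Tamhane's T2 post-hoc test is not available in SciPy and would require a dedicated package.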


A total of 35 survey packs were received as unclaimed returned mail: 10 of the 35 were from SMS and 25 from JMS. Sixty-five completed and 2 partly completed surveys were received from JMS, 29 completed surveys from SMS and 19 completed surveys from pharmacists. After subtracting the unclaimed surveys, the overall response rate was 54%. A breakdown of response rates across the groups, together with respondents’ demographics, is shown in Table 1. Cronbach’s alpha values for the scales to measure perceived barriers and perceived facilitators to use of iApprove® were 0.88 (n=110) and 0.80 (n=111), respectively. The detailed reliability indices (available from the authors) did not identify any item whose removal would be expected to increase the reliability coefficient significantly.
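The reported overall response rate follows from the counts above; a quick arithmetic check, assuming unclaimed packs are excluded from the denominator and partly completed surveys count as responses:

```python
# Arithmetic check on the reported overall response rate.
mailed = 150 + 70 + 30            # JMS + SMS + pharmacists invited
unclaimed = 35                    # survey packs returned as unclaimed mail
responses = 65 + 2 + 29 + 19      # completed and partly completed surveys
rate = responses / (mailed - unclaimed)   # 115 / 215 ≈ 0.535
```

This gives roughly 53.5%, consistent with the reported 54%.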

Table 1: Demographics of the respondents to the survey.

Perceived barriers to use of iApprove®

Table 2 shows the mean and median scores for perceived barriers to use of iApprove® for all participants combined, as well as for the individual categories of participant. Overall, scores on the perceived barriers scale were relatively low, and among the categories of participants there was a strong trend for scores on individual items to be higher for SMS than for JMS and pharmacists. Statistically significant differences were observed among the categories of participants on the scores of all items tested. The participants’ scores on the item measuring the effect of iApprove® on medical autonomy were not tested because of its inapplicability to pharmacists; nevertheless, the mean scores on the autonomy item did not differ between JMS and SMS (see Table 2).

Tamhane’s test identified the inter-group differences. Significant differences were observed in the scores on the lack of awareness item among all participants, with pharmacists’ scores significantly lower than those of JMS (p=0.019) and SMS (p<0.001). The SMS scores were significantly higher than those of JMS and pharmacists on the following items: lack of familiarity [JMS and pharmacists (p<0.001)]; lack of training [JMS and pharmacists (p<0.001)]; lack of benefit in using the system [JMS and pharmacists (p<0.001)]; lack of technical support [JMS (p=0.001), pharmacists (p=0.006)]; and lack of time [JMS (p=0.03), pharmacists (p=0.01)]. The SMS also scored significantly higher than the JMS (p=0.005), but not the pharmacists (p=0.14), on the item relating to the lack of computer terminals as a barrier to use of iApprove®. The pharmacists’ scores were significantly lower than those of the SMS (p<0.001) and the JMS (p=0.001) on the item about the rigidity of the system, while pharmacists’ scores on the item about disruption of workflow by iApprove® were significantly lower than those of the SMS (p=0.03) but not the JMS (p=0.26).

Table 2: Perceived barriers to use of iApprove®

Perceived facilitators to use of iApprove®

The mean and median scores for the perceived facilitators to use of iApprove® are shown in Table 3. Overall, the participants’ scores on the perceived facilitators scale were relatively high (compared with those on the perceived barriers scale), with a mean of 3.5 or above and a median of 4 on all items. The differences among participants were also smaller than on the perceived barriers scale. Significant differences among participants existed on the following items: endorsement by the departmental heads; linking order entry with the system; organising more training; and seeking clinicians’ feedback for future modifications (Table 3). Post-hoc analysis of the ANOVA found SMS scores significantly higher than JMS scores on the following items on the perceived facilitators scale: endorsement by the departmental heads (p=0.004); linking order entry with the system (p=0.03); and providing feedback to the users (p=0.005). Pharmacists scored significantly higher than the SMS (p=0.03) on the item on making the system available in PDA format, and higher than the JMS (p=0.001) on the item on organising more training.

Table 3: Perceived facilitators to use of iApprove®

Usage of iApprove®

A total of 1419 requests for approval to prescribe restricted antibiotics were made to iApprove® by 191 users over a period of 11 months; 143 of these requests were rejected, resulting in a total of 1276 approvals. Of the 1419 approval requests, 492 were made by 52 JMS who participated in the study; 70 of these were rejected by iApprove®, resulting in 422 restricted antibiotic approvals granted to this category of study participants. Figure 1 shows the usage of iApprove® by the clinicians who participated in the study compared with those who did not. A total of 53 JMS responded to the self-reported usage item on the survey. About 6% of JMS used the system frequently (more than once to once a week); 36% used it less frequently (less than once a week to once every two weeks); 19% used it infrequently (less than once every two weeks to more than once a month); and 26% used it rarely (once a month to less than once a month). Three participants reported using the system only once, while one reported never using the system. The relationship between the reported and actual use of the system failed to reach statistical significance (r=0.26, p=0.06).

Criterion- and Construct-related validity of scales

A scale demonstrates criterion-related validity if participants’ scores on that scale correlate with a particular behaviour or related outcome of interest [27]. It would therefore be expected that participants who used the system often would score lower on the perceived barriers scale. A statistically significant negative bivariate correlation was observed between scores on the perceived barriers scale and actual use of iApprove® (r = -0.415, p = 0.001). The correlation between clinicians’ scores on the perceived facilitators scale and their use of iApprove® was not significant (r = 0.12, p = 0.33).

A scale demonstrates construct validity if scores on the measure correlate well with a similar measure [27]. As shown in Appendix 2, the following three items on the perceived barriers scale were reversed in the perceived facilitators scale: lack of computers; lack of training; and lack of technical support. The authors hypothesised that respondents who scored higher on these three items would also score higher on the corresponding three items on the perceived facilitators scale: increasing the number of computer terminals; organising more training; and increasing technical support. Scores on the three items on the perceived barriers scale were significantly correlated with scores on the corresponding three items on the perceived facilitators scale (r = 0.34, p < 0.001).


An in-depth reliability analysis did not identify any item in either the perceived barriers or the perceived facilitators scale whose removal would yield any gain in the Cronbach’s alpha coefficient, indicating that the scales demonstrate good reliability [27]. The two scales were subjected to content validation prior to their use in the study; content validation is an important initial step in the validity assessment of psychometric measures [27]. Given that use of the system was voluntary throughout the study period, the statistically significant correlation between participants’ scores on the perceived barriers scale and their actual use of the system is an indicator of the criterion-related validity of the scale [16].

Although the overall scores of all participants on the perceived barriers scale were quite low, the SMS consistently scored higher on most items. This is interesting, as the SMS were specifically asked to indicate what they thought were the barriers their juniors faced while using the system. Since SMS ultimately make most of the prescribing decisions, their negative perceptions could affect the adoption of the system by junior staff. Indeed, senior doctors and opinion leaders have been shown to affect the successful implementation of CDSS [28, 29]. Stevenson et al found that resistance from senior medical staff at four out of five rural hospitals was one of the important factors that prevented the successful implementation of an antibiotic CDSS [28].

At the hospital in the present study, SMS spent substantially less time on the wards than the JMS and pharmacists. The differences in perceived lack of awareness, familiarity, training and technical support may therefore be explained by the SMS’ general unfamiliarity with the system, its implementation and the advertising strategy of the iApprove® team. The differences in the perceived lack of time may be explained by workflow: as described above, SMS spend limited time on the wards supervising JMS during clinical rounds, and JMS may be occupied with other clinical duties such as examining patients, answering queries from their consultant and writing progress notes. The difference in the perceived lack of benefit may be due to the difference in clinical experience and knowledge between SMS and JMS/pharmacists. Since senior clinicians often work at a higher cognitive level owing to their clinical knowledge and experience [30], SMS in the present study may not have considered the system as beneficial as their junior colleagues did. Halm et al reported that senior doctors were less likely than junior doctors to find Community Acquired Pneumonia (CAP) guidelines helpful [31]. Similarly, Lomotan et al evaluated the usefulness and effectiveness of a CDSS for asthma management and found that the system was rarely used by consultants because of their high baseline knowledge of the disease process [32].

It is interesting to note that the JMS scored significantly higher than the pharmacists on the lack of awareness item; the difference may be due to variability in the working rosters and job descriptions of the two groups. Pharmacists work day shifts in the study hospital while JMS work around the clock in rotating shifts; in addition, pharmacists are responsible for monitoring approvals for the restricted antibiotics. The fact that pharmacists scored significantly lower than the other two groups on the rigidity of the system item may reflect differences in the clinical work of these groups: doctors often have more in-depth knowledge of individual patient circumstances than pharmacists, who are mainly concerned with pharmacotherapy-related issues. Lastly, it is important to note that the majority of senior and junior medical staff did not find that use of iApprove® limited their medical autonomy. Most CDSS utilise some form of practice guidelines, and one of the common barriers to clinicians’ use of practice guidelines is the perception that they limit medical or clinical autonomy [33]. The limited number of studies available on this issue has shown inconsistent results: Darr et al found that perceived limitations on clinicians’ medical autonomy were a barrier to use of an EMR-based system [34], while physicians in the study by Grundmeier et al held neutral perceptions with regard to the effects of CDSS on their decision making [15]. The knowledge base of iApprove® is founded on the Antibiotic Guidelines®, a well-known reference commonly used in Australian healthcare settings. The medical staff in the present study may therefore not have felt that use of iApprove® limited their medical autonomy or was too rigid when prescribing for their patients.

The apparent reason for the differences among participants on the endorsement by the departmental heads item on the perceived facilitators scale may lie in the nature of individual prescribing practices: clinicians make prescribing decisions on a case-by-case basis, and the JMS involved in these decisions may not see endorsement by departmental heads as having any role in facilitating their use of iApprove®. On the other hand, the differences between the SMS and the JMS on linking the system with CPOE may be due to fear of increased workload on the part of JMS. A time-and-motion study comparing the time spent by doctors writing medication orders using CPOE with hand-written orders found that doctors spent 9% of their time ordering medicines using CPOE compared with 2% using hand-written orders [35]. Lack of time has been reported as one of the major barriers to clinicians’ use of CDSS and related computer technologies [9, 36].

Limitations and Strengths

Certain limitations of the present study should be considered. The response rate was less than optimal, yet similar to the average response rate cited in the literature [37]. Participants were potentially identifiable, which may have contributed to a lower response rate. It is important to note that the study managed to attract clinicians with various degrees of system usage (see Figure 1); the sample therefore seems adequate to address the aims of the study: clinicians’ perceptions of the system and the relationship between those perceptions and usage of the system. The authors were not able to demonstrate criterion-related validity for the perceived facilitators scale, as there was no theoretical justification to expect any correlation between usage and scores on that scale; none of the facilitators were in place at the time of the study. With regard to the construct-related validity estimation of both scales, the assessment would have been more robust if a previously validated scale had been used for the purpose [27]. However, given that the average response rate of surveys of physicians is not high [37], asking clinicians to respond to an additional measure was deemed inappropriate by the authors. Nevertheless, the validation items selected for this purpose were justifiable and demonstrated significant inter-item correlation, thus providing insight into the construct validity of the two scales. The authors also believe that a more appropriate approach to address construct validity would be to conduct a factor analysis to explore the internal structure of the scales; however, such analyses often require larger samples [16] than the one achieved in this study.

A number of strengths of the present study should also be considered. The authors were independent not only of the developers and implementers of the system but also of the institution where it was implemented, which is expected to reduce the potential for investigator bias. It should also be noted that use of the system was optional throughout the study period; thus, clinicians were unlikely to be influenced by hospital policy in their usage of the system. The present study also reported correlations between clinicians’ perceptions and actual use of the system; often, intended or self-reported usage is used to study such correlations, and these estimates may or may not represent actual use. The scarcity of reliable and valid tools in the field of medical informatics has been reported as a major dilemma faced by researchers [14], and researchers often do not report the reliability and/or validity assessment of their measurement tools [10, 11, 13]. The present study reported the development and validation of two new scales to measure clinicians’ perceptions of an antibiotic CDSS, which helps to fill this void in the literature.

Practice implications

Pharmacists working in tertiary environments are often responsible for assisting prescribers to make appropriate and rational selections of antibiotics. While CDSS are increasingly gaining popularity for implementing hospital guidelines as well as unit-based protocols regarding antibiotic use [5, 39], significant barriers to their adoption exist [22, 40]. Implementers of CDSS will be assisted by an understanding of the barriers surrounding CDSS adoption, enabling more systems to be successfully implemented. The present study measured clinicians’ perceptions of an antibiotic CDSS delivered via the intranet in a tertiary care centre in Melbourne, Australia, and involved a variety of categories of clinicians. Both the study setting and the study participants are representative of typical modern metropolitan tertiary care hospitals, and our findings and tools may be useful for other hospitals interested in the implementation of a web-based antibiotic CDSS. While the investigators are independent of the developers and implementers of the CDSS at the study hospital, the findings of this study have been made available to them to allow iterative improvements to the implementation strategy.


Significant differences exist between senior and junior medical staff that may influence their overall adoption of a CDSS. The scales to measure perceived barriers and facilitators to clinicians’ use of an antibiotic CDSS appear to be valid and reliable. Future studies are needed to explore the barriers and facilitators to adoption of CDSS and related technologies by clinicians in other hospital settings; the tools developed and validated in this study would facilitate such studies.

Author Contributions

STRZ constructed the scale, administered survey, collected and analysed the data and wrote the initial draft of the manuscript. STRZ was a PhD student at the Faculty of Pharmacy and Pharmaceutical Sciences, Monash University and this work was done as a part of his PhD studies.

JLM reviewed the initial draft of the manuscript and made recommendations for the subsequent drafts.


Acknowledgements

Prof. Roger L. Nation, Faculty of Pharmacy and Pharmaceutical Sciences, for providing valuable comments on the first version of the manuscript.

Dr. Heather Smith, Manager, Medical Education Program, The Royal Melbourne Hospital, for facilitating the distribution and collection of the surveys

Marion Robertson, Drug Use Evaluation Pharmacist, Department of Clinical Pharmacology, The Royal Melbourne Hospital for facilitating the development and distribution of surveys

Prof. Graham Brown, Dr. James F Black, Dr. Karin Thursky and Dr. Kirsty Buising, Victorian Infectious Diseases Service, The Royal Melbourne Hospital for providing us with the opportunity to evaluate implementation of iApprove®

Appendix 1: A screen shot of Community Acquired Pneumonia protocol from iApprove®. 


Appendix 2: Perceived barriers and facilitators to use iApprove® at the Royal Melbourne Hospital. 


Funding Statement

No specific funding was sought for this study; the study formed part of Dr. Zaidi’s PhD and he was a full-time student working on the project. The Department of Pharmacy Practice provided stationery and other overheads necessary for the research, as for all PhD students.




1. Talbot George H, Bradley John, Edwards John E, Gilbert David, Scheld Michael, Bartlett John G, Antimicrobial Availability Task Force of the Infectious Diseases Society of America. Bad bugs need drugs: an update on the development pipeline from the Antimicrobial Availability Task Force of the Infectious Diseases Society of America. Clin Infect Dis. 2005 Mar 1;42(5):657–68. doi: 10.1086/499819. [PubMed] [Cross Ref]
2. Johnson S V, Hoey L L, Vance-Bryan K. Inappropriate vancomycin prescribing based on criteria from the Centers for Disease Control and Prevention. Pharmacotherapy. 1995 Sep;15(5):579–85. [PubMed]
3. Lawton RM, Fridkin SK, Gaynes RP, McGowan JE. Practices to improve antimicrobial use at 47 US hospitals: the status of the 1997 SHEA/IDSA position paper recommendations. Society for Healthcare Epidemiology of America/Infectious Diseases Society of America. Infect Control Hosp Epidemiol. 2000 Apr;21(4):256–9. [PubMed]
4. Sintchenko V, Iredell JR, Gilbert GL, Coiera E. Handheld computer based decision support reduces patient length of stay and antibiotic prescribing in critical care. J Am Med Inform Assoc. 2005 Mar 31;12(4):398–402. doi: 10.1197/jamia.M1798. [PMC free article] [PubMed]
5. Evans R S, Pestotnik S L, Classen D C, Clemmer T P, Weaver L K, Orme J F, Lloyd J F, Burke J P. A computer-assisted management program for antibiotics and other antiinfective agents. N Engl J Med. 1998 Jan 22;338(4):232–8. doi: 10.1056/NEJM199801223380406. [PubMed] [Cross Ref]
6. Garg Amit X, Adhikari Neill K J, McDonald Heather, Rosas-Arellano M Patricia, Devereaux P J, Beyene Joseph, Sam Justina, Haynes R Brian. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005 Mar 9;293(10):1223–38. doi: 10.1001/jama.293.10.1223. [PubMed] [Cross Ref]
7. Kawamoto Kensaku, Houlihan Caitlin A, Balas E Andrew, Lobach David F. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005 Mar 14;330(7494):765. doi: 10.1136/bmj.38398.500764.8F. [PMC free article] [PubMed] [Cross Ref]
8. Hunt D L, Haynes R B, Hanna S E, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998 Oct 21;280(15):1339–46. [PubMed]
9. Rousseau Nikki, McColl Elaine, Newton John, Grimshaw Jeremy, Eccles Martin. Practice based, longitudinal, qualitative interview study of computerised evidence based guidelines in primary care. BMJ. 2003 Feb 8;326(7384):314. [PMC free article] [PubMed]
10. Gadd CS, Baskaran P, Lobach DF. Identification of design features to enhance utilization and acceptance of systems for Internet-based decision support at the point of care. Proc AMIA Symp. 1998:91–5. [PMC free article] [PubMed]
11. Rosenbloom S Trent, Talbert Doug, Aronsky Dominik. Clinicians' perceptions of clinical decision support integrated into computerized provider order entry. Int J Med Inform. 2004 Jun 15;73(5):433–41. doi: 10.1016/j.ijmedinf.2004.04.001. [PubMed] [Cross Ref]
12. Satsangi S, Weir CR, Morris AH, Warner HR. Cognitive evaluation of the predictors of use of computerized protocols by clinicians. AMIA Annu Symp Proc. 2003:574–8. [PMC free article] [PubMed]
13. Zheng K, Padman R, Johnson MP, Diamond HS. Understanding technology adoption in clinical care: clinician adoption behavior of a point-of-care reminder system. Int J Med Inform. 2005 Aug;74(7-8):535–43. [PubMed]
14. Friedman Charles P, Abbas Ume L. Is medical informatics a mature science? A review of measurement practice in outcome studies of clinical systems. Int J Med Inform. 2003 Mar;69(2-3):261–72. [PubMed]
15. Grundmeier R, Johnson K. Housestaff attitudes toward computer-based clinical decision support. Proc AMIA Symp. 1999:266–70. [PMC free article] [PubMed]
16. Nunnally J, Bernstein IH. Psychometric Theory. New York: McGraw Hill; 1994.
17. Zaidi Syed Tabish R, Marriott Jennifer L, Nation Roger L. The role of perceptions of clinicians in their adoption of a web-based antibiotic approval system: do perceptions translate into actions. Int J Med Inform. 2007 Jan;77(1):33–40. doi: 10.1016/j.ijmedinf.2006.11.008. [PubMed] [Cross Ref]
18. Richards Michael J, Robertson Marion B, Dartnell Jonathan G A, Duarte Margarida M, Jones Nicholas R, Kerr Dale A, Lim Lyn-Li, Ritchie Peter D, Stanton Graham J, Taylor Simone E. Impact of a web-based antimicrobial approval system on broad-spectrum cephalosporin use at a teaching hospital. Med J Aust. 2003 Apr 21;178(8):386–90. [PubMed]
19. Zielstorff R D. Online practice guidelines: issues, obstacles, and future prospects. J Am Med Inform Assoc. 1998 May;5(3):227–36. [PMC free article] [PubMed]
20. Bates David W, Kuperman Gilad J, Wang Samuel, Gandhi Tejal, Kittler Anne, Volk Lynn, Spurr Cynthia, Khorasani Ramin, Tanasijevic Milenko, Middleton Blackford. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003 Aug 04;10(6):523–30. doi: 10.1197/jamia.M1370. [PMC free article] [PubMed] [Cross Ref]
21. Leigh Jenni A, Long Paul W, Barraclough Bruce H. The Clinical Support Systems Program: supporting system-wide improvement. Med J Aust. 2004 May 17;180(10 Suppl):101–3. [PubMed]
22. Patterson Emily S, Nguyen Anh D, Halloran James P, Asch Steven M. Human factors barriers to the effective use of ten HIV clinical reminders. J Am Med Inform Assoc. 2003 Oct 05;11(1):50–9. doi: 10.1197/jamia.M1364. [PMC free article] [PubMed] [Cross Ref]
23. Murray MD, Harris LE, Overhage JM, Zhou XH, Eckert GJ, Smith FE. Failure of computerized treatment suggestions to improve health outcomes of outpatients with uncomplicated hypertension: results of a randomized controlled trial. Pharmacotherapy. 2004 Mar;24(3):324–37. [PubMed]
24. Scott J Tim, Rundall Thomas G, Vogt Thomas M, Hsu John. Kaiser Permanente's experience of implementing an electronic medical record: a qualitative study. BMJ. 2005 Nov 03;331(7528):1313–6. doi: 10.1136/bmj.38638.497477.68. [PMC free article] [PubMed] [Cross Ref]
25. Goldstein Mary K, Coleman Robert W, Tu Samson W, Shankar Ravi D, O'Connor Martin J, Musen Mark A, Martins Susana B, Lavori Philip W, Shlipak Michael G, Oddone Eugene, Advani Aneel A, Gholami Parisa, Hoffman Brian B. Translating research into practice: organizational issues in implementing automated decision support for hypertension in three medical centers. J Am Med Inform Assoc. 2004 Sep;11(5):368–76. doi: 10.1197/jamia.M1534. [PMC free article] [PubMed] [Cross Ref]
26. Cronbach LJ. Designing evaluations of educational and social programs. San Francisco: Jossey-Bass; 1982.
27. DeVellis R. Scale Development: Theory and Applications. Sage Publications; 2003.
28. Stevenson KB, Barbera J, Moore JW, Samore MH, Houck P. Understanding keys to successful implementation of electronic decision support in rural hospitals: analysis of a pilot study for antimicrobial prescribing. Am J Med Qual. 2005 Nov;20(6):313–8. [PubMed]
29. Ash Joan S, Stavri P Zoë, Dykstra Richard, Fournier Lara. Implementing computerized physician order entry: the importance of special people. Int J Med Inform. 2003 Mar;69(2-3):235–50. [PubMed]
30. Dawson N V. Physician judgment in clinical settings: methodological influences and cognitive performance. Clin Chem. 1993 Jul;39(7):1468–78. [PubMed]
31. Halm EA, Atlas SJ, Borowsky LH, Benzer TI, Singer DE. Change in physician knowledge and attitudes after implementation of a pneumonia practice guideline. J Gen Intern Med. 1999 Nov;14(11):688–94. [PMC free article] [PubMed]
32. Lomotan EA, Hoeksema LJ, Edmonds DE, Ramirez-Garnica G, Shiffman RN, Horwitz LI. Evaluating the use of a computerized clinical decision support system for asthma by pediatric pulmonologists. Int J Med Inform. 2012 Mar;81(3):157–65. [PMC free article] [PubMed]
33. Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PA. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999 Oct 20;282(15):1458–65. [PubMed]
34. Darr Asaf, Harrison Michael I, Shakked Leora, Shalom Nira. Physicians' and nurses' reactions to electronic medical records. Managerial and occupational implications. J Health Organ Manag. 2003;17(5):349–59. [PubMed]
35. Shu K, Boyle D, Spurr C, Horsky J, Heiman H, O'Connor P, Lepore J, Bates D W. Comparison of time spent writing orders on paper with computerized physician order entry. Medinfo. 2001;10(Pt 2):1207–11. [PubMed]
36. Patterson Emily S, Doebbeling Bradley N, Fung Constance H, Militello Laura, Anders Shilo, Asch Steven M. Identifying barriers to the effective use of clinical reminders: bootstrapping multiple methods. J Biomed Inform. 2004 Dec 15;38(3):189–99. doi: 10.1016/j.jbi.2004.11.015. [PubMed] [Cross Ref]
37. Asch D A, Jedrziewski M K, Christakis N A. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997 Oct;50(10):1129–36. [PubMed]
38. O'Sullivan Ian, Orbell Sheina, Rakow Tim, Parker Ron. Prospective research in health service settings: health psychology, science and the ‘Hawthorne’ effect. J Health Psychol. 2004 May;9(3):355–9. doi: 10.1177/1359105304042345. [PubMed] [Cross Ref]
39. Sintchenko Vitali, Iredell Jonathan R, Gilbert Gwendolyn L, Coiera Enrico. Handheld computer-based decision support reduces patient length of stay and antibiotic prescribing in critical care. J Am Med Inform Assoc. 2005 Mar 31;12(4):398–402. doi: 10.1197/jamia.M1798. [PMC free article] [PubMed] [Cross Ref]
40. Rocha B H, Christenson J C, Evans R S, Gardner R M. Clinicians' response to computerized detection of infections. J Am Med Inform Assoc. 2001 Mar;8(2):117–25. [PMC free article] [PubMed]

Articles from Southern Med Review are provided here courtesy of BioMed Central