An in-depth reliability analysis did not identify any item in either the perceived barriers or the perceived facilitators scale whose removal would improve the Cronbach's alpha coefficient; this indicates that the scales demonstrate good reliability [27]. The two scales were subjected to content validation prior to their use in the study. Content validation is an important initial step in the validity assessment of psychometric measures [27]. Given that use of the system was voluntary throughout the study period, the statistically significant correlation between the reported and actual use of the system and participants' scores on the perceived barriers scale is an indicator of the criterion-related validity of the scale [16].
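The item-deletion reliability check described above can be sketched as follows. This is a minimal illustration with hypothetical Likert-type data, not the study's actual analysis: Cronbach's alpha is computed for the full scale and then recomputed with each item dropped in turn; an item whose removal raises alpha would be a candidate for deletion.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: np.ndarray) -> list:
    """Alpha recomputed with each item removed in turn; a value above the
    full-scale alpha flags an item whose removal would improve reliability."""
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]

# Hypothetical 5-point Likert responses (rows = respondents, columns = items):
# a shared signal plus per-item noise, clipped to the 1-5 range.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(50, 6)), 1, 5).astype(float)

full = cronbach_alpha(scores)
print(f"alpha = {full:.2f}")
flags = ["raise" if a > full else "keep" for a in alpha_if_deleted(scores)]
print(flags)
```

Under this construction no item should markedly raise alpha when deleted, mirroring the result reported above for the two study scales.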
Although the overall scores of all participants on the perceived barriers scale were quite low, the SMS consistently scored higher on most of the items. This is interesting, as the SMS were specifically asked to indicate what they thought were the barriers their juniors faced while using the system. Since the SMS were ultimately making most of the prescribing decisions, their negative perceptions could affect the adoption of the system by the junior staff. In fact, senior doctors or opinion leaders have been shown to affect the successful implementation of CDSS [28]. Stevenson et al found that resistance from the senior medical staff at four out of five rural hospitals was one of the important factors that prevented the successful implementation of an antibiotic CDSS.
At the hospital in the present study, SMS spent substantially less time on the wards compared with the JMS and pharmacists. Therefore, the differences in the perceived lack of awareness, familiarity, training and technical support may be explained by the SMS's general unfamiliarity with the system, its implementation and the advertising strategy of the iApprove® team. The differences in the perceived lack of time may be explained in the light of the workflow of the SMS; as described above, SMS spend limited time on the wards supervising JMS during clinical rounds, and JMS may be occupied with other clinical duties such as examining patients, answering queries from their consultant and writing progress notes. The difference in the perceived lack of benefit may be due to the difference in clinical experience and knowledge between the SMS and the JMS/pharmacists. Since senior clinicians often work at a higher cognitive level owing to their clinical knowledge and experience [30], the SMS in the present study may not have considered the system as beneficial as their junior colleagues did. Halm et al reported that senior doctors were less likely than junior doctors to find community-acquired pneumonia (CAP) guidelines helpful. Similarly, Lomotan et al evaluated the usefulness and effectiveness of a CDSS for asthma management and found that the system was rarely used by consultants owing to their high baseline knowledge of the disease process.
It is interesting to note that the JMS scored significantly higher than the pharmacists on the lack of awareness item; the difference may be due to variability in the working rosters and job descriptions of the two groups. Pharmacists work day shifts in the study hospital while JMS work around the clock in rotating shifts; in addition, pharmacists are also responsible for monitoring approvals for the restricted antibiotics. The fact that pharmacists scored significantly lower on the rigidity of the system item, compared with the other two groups, may be due to differences in the clinical work of these groups. Doctors often have more in-depth knowledge of individual patient circumstances than pharmacists, who are mainly concerned with pharmacotherapy-related issues. Lastly, it is important to note that the majority of the senior and junior medical staff did not find that the use of iApprove® limited their medical autonomy. Most CDSS utilise some form of practice guidelines, and one of the common barriers to clinicians' use of practice guidelines is the perception that they limit medical or clinical autonomy [33]. The few studies available on this issue have shown inconsistent results. Darr et al found clinicians' perceived limitations on their medical autonomy to be a barrier to the use of an EMR-based system, while physicians in the study by Grundmeier et al held neutral perceptions with regard to the effects of CDSS on their decision making. The knowledge base of iApprove® is founded on the Antibiotic Guidelines®, a well-known reference that is commonly used in Australian healthcare settings. Therefore, the medical staff in the present study may not have felt that the use of iApprove® limited their medical autonomy or was too rigid when prescribing for their patients.
The apparent reason behind the differences among the participants on the endorsement by the departmental heads item on the perceived facilitators scale may lie in the nature of individual prescribing practices. Clinicians make prescribing decisions on a case-by-case basis, and the JMS who are involved in these decisions may not see endorsement by departmental heads as having any role in facilitating their use of iApprove®. On the other hand, the differences between the SMS and the JMS on linking the system with CPOE may be due to a fear of increased workload on the part of the JMS. A time-motion study that compared the time doctors spent writing medication orders using CPOE with that spent on hand-written orders found that doctors spent 9% of their time ordering medicines using CPOE compared with 2% for hand-written orders [35]. Lack of time has been reported as one of the major barriers to clinicians' use of CDSS and related computer technologies [9].
Limitations and Strengths
Certain limitations of the present study should be considered. The response rate was less than optimal, yet similar to the average response rate cited in the literature [37]. Participants were potentially identifiable, which may have contributed to a lower response rate. It is important to note that the study managed to attract clinicians with various degrees of system usage (see Appendix 1); therefore, the sample seems adequate to address the aims of the study: clinicians' perceptions of the system and the relationship between their perceptions and their usage of the system. The authors were not able to demonstrate criterion-related validity for the perceived facilitators scale, as there was no theoretical justification to expect any correlation between usage and scores on that scale; none of the facilitators were in place at the time of the study. With regard to the construct-related validity estimation of both scales, the assessment would have been more robust if a previously validated scale had been used for the purpose [27]. However, given that the average response rate of surveys of physicians is not adequate [37], asking clinicians to respond to an additional measure was deemed inappropriate by the authors. Nevertheless, the validation items selected for the above purpose were justifiable and demonstrated significant inter-item correlations, thus providing insight into the construct validity of the two scales. The authors also believe that a more appropriate approach to assessing construct validity would be to conduct a factor analysis to explore the internal structure of the scales; however, such analyses often require larger samples [16] than the one achieved in this study.
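The correlation-based checks discussed in this section (criterion-related validity against logged system use, and the inter-item correlations used for construct validity) rest on rank correlation, which suits ordinal Likert-type scores. The following is a minimal sketch with entirely hypothetical data; it omits tie handling, which would require average ranks for tied scores.

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Assumes no tied values (ties would need average ranks)."""
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx = rank(np.asarray(x, dtype=float))
    ry = rank(np.asarray(y, dtype=float))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical respondents: higher perceived-barrier scores pair with
# lower logged system use, the pattern criterion validity would predict.
barrier_scores = [12, 18, 25, 9, 30, 14, 22, 11]
logged_uses    = [40, 22, 10, 55,  4, 35, 15, 48]
rho = spearman_rho(barrier_scores, logged_uses)
print(f"rho = {rho:.2f}")  # -> rho = -1.00 (perfectly monotone example data)
```

A strong negative rho of this kind is what links barrier-scale scores to actual usage; real survey data would of course yield a weaker, noisier correlation.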
A number of strengths of the present study should also be considered. The authors were independent not only of the developers and implementers of the system but also of the institution where the system was implemented; this is expected to reduce the potential for investigator bias. It should also be noted that the use of the system was optional throughout the study period; thus, clinicians were less likely to be influenced by hospital policy in their usage of the system. The present study also reported the correlations between clinicians' perceptions and actual use of the system; often the intended or self-reported usage of a system is used to study such correlations, and these estimates may or may not represent actual use. The scarcity of reliable and valid tools in the field of medical informatics has been reported as a major dilemma faced by researchers [14]. In addition, researchers often do not report the reliability and/or validity assessment of their measurement tools [10]. The present study reported the development and validation of two new scales to measure clinicians' perceptions of an antibiotic CDSS, which helps to address this gap in the literature.
Pharmacists working in tertiary environments are often responsible for assisting prescribers to make appropriate and rational selections of antibiotics. While CDSS are increasingly gaining popularity for implementing hospital guidelines and unit-based protocols regarding antibiotic use [5], significant barriers to their adoption exist [22]. Understanding the barriers surrounding CDSS adoption will assist implementers in deploying more systems successfully. The present study measured clinicians' perceptions of an antibiotic CDSS delivered via the intranet in a tertiary care centre in Melbourne, Australia, and involved a variety of categories of clinicians. Both the study setting and the study participants are representative of typical modern metropolitan tertiary care hospitals, and our findings and tools may be useful for other hospitals interested in implementing a web-based antibiotic CDSS. While the investigators are independent of the developers and implementers of the CDSS at the study hospital, the findings of this study have been made available to them to allow iterative improvements to the implementation strategy.