To influence physician practice behavior after implementation of a computerized clinical decision support system (CDSS) based upon the recommendations from the 2007 ACEP Clinical Policy on Syncope.
This was a pre-post intervention study with a prospective cohort and retrospective controls. We conducted a medical chart review of consecutive adult patients with syncope. A computerized CDSS prompting physicians to explain their decision-making regarding imaging and admission in syncope patients based upon ACEP Clinical Policy recommendations was embedded into the emergency department information system (EDIS). The medical records of 410 consecutive adult patients presenting with syncope were reviewed prior to implementation, and 301 records were reviewed after implementation. Primary outcomes were physician practice behavior demonstrated by admission rate and rate of head computed tomography (CT) imaging before and after implementation.
There was a significant difference in admission rate pre- and post-intervention (68.1% vs. 60.5%, respectively, p=0.036). There was no significant difference in the head CT imaging rate pre- and post-intervention (39.8% vs. 43.2%, p=0.358). Seven physicians saw ten or more patients during both the pre- and post-intervention periods. Subset analysis of these seven physicians’ practice behavior revealed a marginally significant difference in the admission rate pre- and post-intervention (74.3% vs. 63.9%, p=0.0495) and no significant difference in the head CT scan rate pre- and post-intervention (42.9% vs. 45.4%, p=0.660).
The introduction of an evidence-based CDSS based upon ACEP Clinical Policy recommendations on syncope correlated with a change in physician practice behavior in an urban academic emergency department. This change suggests emergency medicine clinical practice guideline recommendations can be incorporated into the physician workflow of an EDIS to enhance the quality of practice.
A gap exists between evidence-based knowledge and the care that is actually delivered to our patients [1, 2]. Knowledge translation is the process of bringing evidence from research to clinical practice. Practice guideline development, a pivotal step in this process, has limited effect on changing physician practice behavior [3–6]. This issue holds true in emergency medicine as well [7–9]. The American College of Emergency Physicians (ACEP) Clinical Policies have been shown to be safe and effective, and are even cited by other specialties [10, 11]. In spite of the benefits of the ACEP Clinical Policies, implementation of these clinical practice guidelines into physician practice continues to be a challenge. Even when physicians are aware of the evidence, they may not adhere to it [3, 12]. Lehrmann et al.  found that knowledge of the ACEP Clinical Policy on Hypertension did not translate into changes in physician practice.
Clinical Decision Support Systems (CDSSs) are systems “designed to aid directly in clinical decision-making, in which characteristics of individual patients are used to generate patient-specific assessments or recommendations that are then presented to clinicians for consideration.” CDSSs can significantly improve clinical practice [14–16]. Kawamoto et al. found that clinical practice improved when decision support was provided: (1) as part of clinician workflow, (2) with recommendations rather than assessments, (3) at the time and location of decision-making, and (4) in computer-based form. Recognizing the potential efficacy of CDSSs as emergency department documentation increasingly becomes computerized, Napoli and Jagoda and Gallagher concluded that future practice guideline implementation research should focus on using CDSSs.
This study aimed to improve knowledge translation from evidence-based emergency medicine practice guidelines by creating a CDSS for implementation of the recommendations from an ACEP Clinical Policy in an urban academic emergency department. We specifically chose the 2007 ACEP Clinical Policy on Syncope because it included newly published recommendations on a frequently encountered diagnosis. We also hypothesized that there would be room for change in previously accepted physician practice behavior for the reasons outlined below. We sought to identify the presence of a change in physician ordering of cranial imaging and admission practices due to implementation of an HPI-based CDSS in patients with a final diagnosis of syncope. In our retrospective control population, the baseline admission rate for syncope patients was 68%. Due to the high costs of admission for syncope (estimated at $2 billion annually), there may be potential for cost savings if syncope admission guidelines are more closely followed. The 2007 ACEP Clinical Policy on Syncope provides new recommendations on decision-making regarding the need for head computed tomography (CT) imaging and hospital admission in adult patients presenting to the emergency department with syncope.
We hypothesized that incorporating these recommendations into a computerized CDSS embedded in the physician workflow of an emergency department information system (EDIS) would influence physician behavior. A change in behavior would suggest that this point-of-care decision support tool helps improve practitioner awareness, adoption, and adherence to ACEP practice guidelines and bridge the gap from evidence to practice.
This study was designed as a pre-post intervention design with a prospective cohort and retrospective controls. We conducted a medical chart review with 6-month retrospective baseline analysis and prospective data collection following implementation of a CDSS based on the 2007 ACEP Clinical Policy on Syncope into the EDIS.
The Mount Sinai School of Medicine Institutional Review Board reviewed and approved this study.
The Mount Sinai Medical Center (New York, NY) is an 1,171-bed tertiary care academic medical center located in Manhattan, bordering the Upper East Side and East Harlem. Mount Sinai has a 61-bed ED with a volume of 88,140 visits in 2007. The ED adopted a comprehensive EDIS in 2004 (Picis, ED Pulsecheck, Wakefield, MA, formerly IBEX) that provides triage, patient tracking, physician and nurse documentation, retrieval of charts from prior ED encounters and inpatient data, computerized provider order entry, results review, discharge instructions, and prescription writing. Attendings and/or residents completed charts for every ED patient, including a history of present illness (HPI) and a final emergency department diagnosis (chosen from a drop-down list of International Classification of Diseases, 9th Revision approved diagnoses or entered as free text).
The intervention assessed behavior of physicians caring for consecutive syncope patients over 18 years of age in the emergency department. The medical records of 410 patients were reviewed prior to implementation, and 301 records were reviewed after implementation. Physicians included 34 attending physicians board-certified or board-eligible in emergency medicine, with additional information-gathering and decision-making provided by residents primarily in emergency medicine (EM), as well as occasional rotators from the departments of internal medicine, psychiatry, and obstetrics and gynecology.
A three-item module based on the 2007 ACEP Clinical Policy on Syncope  was added to the Syncope HPI template, for completion by attending or mid-level providers (see Fig. 1, Table 1). Each item served a dual role in: (1) reminding physicians of the strength of the policy’s recommendations and (2) prompting physicians to document their clinical decision-making.
The items included in the module prompted physicians to risk-stratify adult syncope patients based on EKG findings, and to explain the rationale for head CT and admission. Each item included a drop-down list based on recommendations and phrasings from the ACEP Clinical Policy, whose guidelines state that head CTs should not be ordered in patients presenting with syncope unless suggested by specific findings in the patient’s history or physical, and that hospital admission be reserved for specific high-risk patients.
Coinciding with its appearance in the electronic chart, the module’s presence as a decision support tool was announced via e-mail by a non-investigator to the EM residents and faculty. The announcement only brought to the physicians’ attention that the tool had become available and did not include details of this study. There was no additional marketing strategy, as the objective of the study was to assess the impact of the CDSS alone.
A retrospective chart review was employed to quantify baseline rates for ordering head CT and admission of adult patients with syncope presenting to the ED. Patients aged 18 years or older were identified by searching the electronic archive of all ED visits by patients in the periods 6 months pre-intervention and 20 weeks post-intervention, using a search of final diagnoses that included the word “syncope” or “syncopal.” Our EDIS stores final diagnoses in text format. Two abstractors (NC and NG, who were not blinded to the study purpose) then de-identified these records and exported them to Excel (Microsoft, Redmond, WA) for analysis.
Patient disposition (discharge from ED or admission to any inpatient service) was tabulated. Additionally, patient records were cross-matched against a radiology requisition record to determine which syncope patients had a head CT ordered and performed during their ED course.
Using SAS (version 9.2: SAS Institute Inc., Cary, NC), descriptive statistics were calculated (mean and standard deviation for continuous variables and proportions with the corresponding two-sided 95% confidence interval for categorical variables). Comparability of the pre-intervention group and the post-intervention group was analyzed using the two-sample t-test for age and the chi-square test for categorical variables, such as gender, admission rate, and head CT scan rate. Following the initial analysis, subset analyses for admission rate and head CT imaging rate were performed: (1) comparing the behavior of seven physicians who cared for ten or more patients in both the pre- and post-intervention groups and (2) comparing groups completing and bypassing the CDSS in the post-intervention group using the two-tailed Z-test.
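The chi-square comparison of the pre- and post-intervention admission rates can be sketched in a few lines of Python. This is an illustrative re-derivation, not the authors' SAS code; the admitted/not-admitted counts below (279/131 pre, 182/119 post) are implied by the reported rates of 68.1% of 410 and 60.5% of 301.

```python
from statistics import NormalDist

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    and two-sided p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 df, the chi-square p-value equals the two-sided normal
    # p-value for z = sqrt(stat).
    p = 2 * (1 - NormalDist().cdf(stat ** 0.5))
    return stat, p

# Admission counts implied by the reported rates:
# pre-intervention 279/410 admitted (68.1%); post 182/301 (60.5%)
stat, p = chi2_2x2(279, 131, 182, 119)
print(round(p, 3))  # 0.036, consistent with the reported p=0.036
```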
In the patient population studied in the San Francisco Syncope Rule derivation, application of the rule might have reduced admissions by 10%. Due to this finding, a 10% absolute reduction in admissions and head CT scans was used for the sample size justification. A one-group χ2 test with a 0.05 two-sided significance level would have 80% power to detect a difference between a pre-intervention admission rate of 0.68 and a post-intervention rate of 0.58 with a sample size of 177. A one-group χ2 test with a 0.05 two-sided significance level would have 80% power to detect a difference between a pre-intervention head CT scan rate of 0.40 and a post-intervention rate of 0.30 with a sample size of 182. As age was found to have a possible confounding effect on the outcome of ordering a head CT and admission in the univariate screen, it was included in a multivariable analysis.
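The stated sample sizes of 177 and 182 can be reproduced with the standard normal-approximation formula for a one-group test of a proportion against a fixed baseline rate. The sketch below assumes that is the calculation the authors used; it is not taken from their methods.

```python
from math import ceil
from statistics import NormalDist

def one_group_prop_n(p0, p1, alpha=0.05, power=0.80):
    """Sample size for a one-group test of a proportion against a
    baseline p0, two-sided alpha, normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha=0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    num = z_a * (p0 * (1 - p0)) ** 0.5 + z_b * (p1 * (1 - p1)) ** 0.5
    return ceil((num / (p1 - p0)) ** 2)

print(one_group_prop_n(0.68, 0.58))  # 177 (admission rate)
print(one_group_prop_n(0.40, 0.30))  # 182 (head CT scan rate)
```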
In the 6-month pre-implementation period between 1 June and 30 November 2007, a total of 410 patients aged 18 years or older presented to the Mount Sinai ED with a final diagnosis of syncope (see Table 2). Two hundred forty-four of these 410 patients were female (59.5%), and the average age was 64.9 ± 21.3 years.
The decision support module was added to the Mount Sinai Emergency Department Information System (EDIS) on 4 February 2008, and between that date and 22 June 2008, a total of 301 patients aged 18 years or older presented to the Mount Sinai ED with a final diagnosis of syncope (see Table 2). One hundred eighty-five of these 301 patients were female (61%), and the average age was 61.9 ± 22.5 years. There was no significant difference in gender or age between the pre- and post-intervention groups. There was no significant interaction between intervention and age for admission rate (p=0.258) or for head CT rate (p=0.420).
In the pre-intervention cohort, 68.1% of patients were admitted [95% CI: (63.3 to 72.5)] (see Table 3 and Fig. 2). The post-intervention admission rate was 60.5% [95% CI: (54.7 to 66.0)]. There was a significant difference in admission rate pre- and post-intervention (p=0.036). The pre-intervention rate of obtaining a head CT was 39.8% [95% CI: (35.0 to 44.7)] compared to 43.2% [95% CI: (37.5 to 49.0)] post-intervention (see Table 3 and Fig. 2). There was no significant difference in the head CT scan rate pre- and post-intervention (p=0.358).
Seven physicians saw ten or more patients during both the pre- and post-intervention periods, and a subset analysis of admission rate and head CT scan rate was performed on this group. For these seven physicians, there was a marginally significant difference in the admission rate pre- and post-intervention (74.3% vs. 63.9%, respectively, p=0.0495; see Table 4 and Fig. 3) and no significant difference in the head CT scan rate pre- and post-intervention (42.9% vs. 45.4%, respectively, p=0.660).
In the post-intervention group, subset analysis compared admission and head CT rates when the CDSS was completed versus bypassed. The admission rate was 51.7% when the CDSS was completed compared to 64.0% when it was not (Z-value 1.96, statistically significant at the 95.0% confidence level), and the head CT rate was 43.7% when the CDSS was completed compared to 43.0% when it was not (Z-value 0.03, not statistically significant; see Table 5). A similar analysis compared rates when the CDSS was visible versus not visible. The admission rate was 54.0% when the CDSS was visible compared to 73.3% when it was not (Z-value 3.06, statistically significant at the 99.8% confidence level), and the head CT rate was 38.5% when the CDSS was visible compared to 52.5% when it was not (Z-value 2.19, statistically significant at the 97.2% confidence level; see Table 6).
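The confidence levels quoted above follow directly from the Z-values: the two-sided significance level for a Z statistic is 1 minus its two-sided normal p-value. A minimal sketch of that conversion (illustrative only, not the authors' analysis code):

```python
from statistics import NormalDist

def z_to_confidence(z):
    """Two-sided confidence level (%) at which a Z statistic is
    statistically significant."""
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return round((1 - p) * 100, 1)

print(z_to_confidence(1.96))  # 95.0 (CDSS completed, admission rate)
print(z_to_confidence(3.06))  # 99.8 (CDSS visible, admission rate)
```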
This study assessed the impact on physician management of syncope patients after implementation of a CDSS based upon the 2007 ACEP Clinical Policy on Syncope into an EDIS.
The admission rate for syncope patients was significantly lower in the post-intervention period compared to the pre-intervention period. The head CT imaging rate for syncope patients was not significantly different between the pre- and post-intervention periods. Since there was no significant interaction between intervention and age for admission rate (p=0.258) or for head CT rate (p=0.420), we conclude that age in conjunction with the intervention did not affect the admission rate or the head CT ordering rate in our cohort.
Subset analysis of physicians seeing ten or more patients in both the pre- and post-intervention periods showed similar changes and did not suggest a cluster-associated phenomenon. Observed changes in admission rates for adult syncope patients may be indicative of improved awareness, adoption, and adherence to ACEP practice guidelines.
Subset analysis of physician behavior when the CDSS was completed versus not completed and visible versus not visible revealed significant differences in admission and head CT rates when the CDSS was visible, and significant differences in admission rate when the CDSS was completed. These findings suggest that although this intervention was a passive one, the CDSS’s presence may have had a significant effect on physician practice behavior. An alternative interpretation is that physicians who are likely to ignore a CDSS may also be less likely to adhere to evidence-based clinical practice guidelines.
It is well-established in the literature that even when physicians are aware of evidence, they may not adhere to it [3, 12]. Lehrmann et al. found that increased knowledge of the ACEP Clinical Policy on Hypertension following distribution of the guidelines did not translate into changes in physician practice. Cabana et al. identified knowledge, attitudes, and behavior as barriers to physician adherence to clinical practice guidelines. Kirkpatrick’s hierarchy of levels of evaluation proposes that “complexity of behavioral change increases as evaluation of intervention ascends the hierarchy.” As evaluation ascends from (1) reactions to (2) learning to (3) behavior to (4) results, the impact of the intervention strengthens from (1) learner satisfaction to (2) knowledge to (3) transfer of learning to the workplace to (4) impact on society, respectively. Since acquired knowledge of the ACEP Clinical Policy on Hypertension did not translate into changes in physician practice in Lehrmann’s trial, we approached the problem of adopting evidence-based guidelines in clinical practice at the next level in the hierarchy of interventions, namely, transfer of learning to the workplace via evaluation of behavior.
While developing the CDSS, we focused on following the provisions for improved clinical practice outlined by Kawamoto et al. Namely, the CDSS was included in the clinician workflow in our computer-based EDIS. We also developed a CDSS that used recommendations rather than assessments. Instead of explicitly assessing compliance with ACEP Clinical Policy recommendations, we sought a measurable change in physician behavior. Although our population demographics were similar to those of previously studied populations, our baseline admission rate was considerably higher than rates reported in those studies [10, 20, 22]. We chose head CT imaging as our other outcome because in the absence of focal neurologic findings, head CT imaging is of low yield in determining the etiology of syncope [23–25]. We suspected that clinicians were ordering more cranial imaging in syncope patients than necessary.
To our knowledge, this is the first study to demonstrate a significant change in physician behavior after implementation of a CDSS based upon an ACEP Clinical Policy. While the body of medical research and literature grows rapidly, practice guidelines provide a means to educate, summarize, and distill evidence-based medicine for the practicing physician. However, implementation and utilization of the ACEP Clinical Policies has historically been a challenge. Given the increased adoption of robust EDISs, these results may encourage further experimentation with and implementation of CDSSs based upon evidence-based clinical practice guidelines in EDISs.
The next step for research and development of such CDSSs could include: (1) assessment of our CDSS closer to the point of decision-making or in a different practice environment, (2) integration of other ACEP Clinical Policies into similar CDSSs in an effort to create an EDIS with comprehensive decision support, or (3) focus on more complex outcome measures such as compliance with guidelines or patient outcome.
We have identified several limitations to our study. First, instead of explicitly assessing for compliance to ACEP Clinical Policy recommendations, we assessed for a measurable change in physician behavior. Our EDIS could not track specific use of the CDSS. Consequently, admissions or head CTs could have increased in one type of patient and decreased in another, leaving the global rate unchanged.
Second, the EDIS in our institution did not have the capability to incorporate decision support at the precise time and location of decision-making in physician workflow. Specifically, our EDIS provides decision support within the documentation template for the history of present illness (HPI). Therefore, the CDSS was passive rather than active: completing it did not itself generate an order for a head CT or an admission.
Although our chart extraction selected patients with a diagnosis of or including the term “syncope,” the physician documenting on such a patient had the option to choose the patient’s HPI template independently of the patient’s diagnosis. Therefore, a patient diagnosed with syncope might not have the Syncope HPI template. In such a scenario, the physician would not encounter the decision support tool. Furthermore, as is common in unpredictable environments like the ED, HPI documentation may occur before or after the decision to obtain imaging or admit the patient. Since this study was completed at a teaching hospital, the physician documenting the HPI was not always the physician deciding the patient’s diagnosis, need for imaging, or admission.
Next, to prevent delaying or disrupting physician workflow, we decided that HPI documentation could not be made to require use of the CDSS, so the CDSS could be bypassed without being read or completed. To minimize interruption of workflow, we abbreviated the language of the recommendations from the ACEP Clinical Policy on Syncope to fit within the format of the drop-down menus seen in Fig. 1. We assume that the practice guideline was not originally authored with the intention of implementation in an abbreviated form. Thus, it is possible that rewording the recommendations creates a simplified or modified recommendation for clinical decision-making. In a separate performance improvement project, however, we had markedly improved documentation of aspirin and beta-blocker use in chest pain patients using a CDSS with a single reminder phrase just above the “ENTER” button of a given HPI template. We found the structure and placement of the chest pain CDSS provided a quick and accessible, yet not prohibitive, reminder.
It is important to note that the 2007 version of the ACEP Clinical Policy on Syncope was revised to include new recommendations using the findings from the derivation and validation of the San Francisco Syncope Rule. Two studies have had problems validating the San Francisco Syncope Rule [26, 27].
A final limitation of this study was that the CDSS was trialed at the practice site of the CDSS’s creators. Garg et al.  found that “studies in which authors also created the CDSS reported better performance compared with those in which the trialists were independent of the CDSS development process.” We attempted to minimize this “Hawthorne effect” bias by having a third party announce the implementation of the CDSS without mention of the fact that we would be gathering data on physician behavior associated with the CDSS. Additionally, within our department there was no specific notification or implementation of new ACEP Clinical Policies.
In conclusion, in our urban academic emergency department the introduction of an evidence-based CDSS based upon ACEP Clinical Policy recommendations on syncope correlated with a change in physician practice behavior for admission but not for head CT imaging. When the CDSS was visible, even if not completed, it had a significant effect on both admission and head CT imaging rates. This change suggests emergency medicine clinical practice guideline recommendations can be incorporated into the physician workflow of an EDIS to enhance the quality of practice. A more active CDSS implemented at the point of medical decision-making, whose use resulted directly in physician order entry, might have a greater impact on behavior.
Edward R. Melnick, MD is an attending physician in the Department of Emergency Medicine at North Shore University Hospital in Manhasset, New York. This study was completed during his chief resident year in the Department of Emergency Medicine at Mount Sinai School of Medicine in New York, New York. Dr. Melnick’s research interest is in knowledge translation, specifically in the areas of clinical practice guideline development and guideline implementation. In this capacity, he serves on the ACEP Clinical Policies Committee and is the principal investigator for an ACEP Informatics Section grant exploring the feasibility of translating the ACEP Clinical Policies into computerized clinical decision support.
Harvard Macy Institute for Educators in the Health Professions, May 2008; SAEM Annual Meeting, May 2009.
The views expressed in this paper are those of the author(s) and not those of the editors, editorial board or publisher.