Our results suggest that the HIT alert significantly impacted provider behavior, leading to increased and more rapid confirmatory HIT testing, treatment with alternative anticoagulation, and cessation of heparin. However, the alert did not increase the detection rate of HIT, nor did it reduce 90-day mortality or length of stay postindex date. The increases in HIT antibody testing, alternative anticoagulation use, and heparin discontinuation orders in the intervention group all further support the general consensus that CDS interventions can be powerful tools for directing provider behavior. There was a non-significant trend for providers in the intervention group to order alternative anticoagulation before the results of any HIT antibody testing more frequently than those in the control cohort. If this does reflect a real change in practice behavior, one possible explanation is that the alert by itself created a sense of urgency among providers to respond more aggressively to the clinical scenario.
A greater proportion of total HIT orders occurred within the first 24 h after the triggering platelet count in the alerted cohort, suggesting that the alert also potentially decreased delays in ordering HIT antibody tests. That these behavioral changes were not even more robust could be partially explained by alert fatigue, a commonly described phenomenon in which providers are less likely to respond to similar alerts over time.23
Despite a significant increase in HIT testing, however, there was no subsequent meaningful improvement in the detection rate of HIT. The additional testing in the intervention group only resulted in more HIT antibody-negative patients. This important result ran contrary to our hypothesis that increasing the number of HIT antibody tests in patients with features consistent with HIT would increase the detection rate of patients with HIT. The most likely explanation for this finding is that changes in platelet count and timing of heparin exposure by themselves do not reliably predict which patients have HIT. This is reflected in algorithms such as the 4T score, a tool created to help physicians diagnose HIT that incorporates not only platelet count changes and heparin exposure but also factors such as evidence of thrombosis and whether clinicians believe there are other, more likely explanations for the patient's findings.24
The 4T score requires clinical judgment as one of its 'T's and is therefore not amenable to an automated alert algorithm. Evaluation of the tool26 has demonstrated that the positive predictive value in patients referred for HIT testing with an intermediate- or high-risk 4T score ranges widely, from a dismal 11% to a still-unsatisfactory 56%, demonstrating the difficulty of making this diagnosis.
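The arithmetic of the 4T score is trivial to automate; what resists automation is assigning the sub-scores, since two of them (thrombosis and "other causes of thrombocytopenia") depend on bedside judgment. A minimal sketch of the published scoring bands (each sub-score 0–2; totals of 0–3 low, 4–5 intermediate, and 6–8 high pretest probability), with the judgment-dependent inputs deliberately left as parameters, illustrates the point:

```python
def four_t_score(thrombocytopenia: int, timing: int,
                 thrombosis: int, other_causes: int) -> tuple[int, str]:
    """Sum the four 4T sub-scores and map the total to a pretest-
    probability category (0-3 low, 4-5 intermediate, 6-8 high).

    Each sub-score must be 0, 1, or 2; the thrombosis and other_causes
    inputs require clinical judgment and cannot be derived from
    laboratory data alone.
    """
    for s in (thrombocytopenia, timing, thrombosis, other_causes):
        if s not in (0, 1, 2):
            raise ValueError("each 4T sub-score must be 0, 1, or 2")
    total = thrombocytopenia + timing + thrombosis + other_causes
    if total <= 3:
        category = "low"
    elif total <= 5:
        category = "intermediate"
    else:
        category = "high"
    return total, category
```

The sketch makes the limitation concrete: an automated alert can supply the first two sub-scores from the laboratory system, but the remaining two inputs have no machine-readable source.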
The substudy comparing patients who actually triggered the HIT alert with those who met the original alert specifications demonstrated that the database query did an excellent job of identifying all patients who triggered the alert. However, 15% of patients included in the intervention group because they met the alert specifications never triggered an alert. As noted in the Results section, these patients failed to trigger alerts because of technical issues in implementing the algorithm rather than failure to meet the clinical criteria. Since the same method was applied to both the intervention and control groups, it is unlikely that this small difference between the study specifications and the actual algorithm had a differential impact on the results.
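The substudy's comparison reduces to set operations between the retrospectively queried cohort and the alert firing log. A minimal sketch, using hypothetical patient-ID sets (the identifiers and counts below are illustrative, not the study's data):

```python
def compare_cohorts(query_ids: set, fired_ids: set) -> dict:
    """Compare patients identified by a retrospective database query
    (the alert specifications) against patients whose alert actually fired."""
    missed_by_query = fired_ids - query_ids  # fired, but query did not capture
    spec_only = query_ids - fired_ids        # met specifications, never alerted
    return {
        # Did the query capture every patient whose alert fired?
        "query_captured_all_fired": not missed_by_query,
        # Fraction of the queried cohort that never triggered an alert.
        "fraction_spec_only": len(spec_only) / len(query_ids) if query_ids else 0.0,
    }

# Illustrative: the query finds 20 patients, 17 of whom triggered the alert.
result = compare_cohorts(query_ids=set(range(1, 21)),
                         fired_ids=set(range(1, 18)))
```

Under these hypothetical numbers the query captures all fired alerts while 15% of the queried cohort never alerted, mirroring the structure (though not the data) of the substudy.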
This paper builds on research performed by Riggio et al27 on a prior HIT decision-support intervention.
These researchers found that, paradoxically, their HIT alert delayed initiation of diagnostic and treatment modalities and had no impact on clinical outcomes. Our study addressed a specific limitation described by these investigators: in their control group, they could identify only patients who had HIT rather than all patients who met the HIT alert criteria. Consequently, their primary outcome related to timeliness of diagnosis and treatment for patients later found to have HIT, while our outcomes centered on improving provider response to patients with platelet count decreases typical of HIT. Our ability to closely approximate this control group allowed more direct measurement of the alert's impact on HIT antibody orders placed, detection rates, and mortality.
Our research agreed with some of their findings about HIT CDS alerts. The alert triggered further HIT testing in only 19% of patients in their population and 22% in ours. In both studies, despite increases in testing for HIT, there was no improvement in the detection of HIT. On the other hand, the HIT alert in the Riggio study did not improve the time to therapeutic intervention in patients who had HIT. A possible explanation is that Jefferson had deployed multiple other alerts for other clinical conditions, resulting in alert fatigue. While our study was not powered to compare timeliness of interventions in patients who ultimately had HIT, we did demonstrate that the HIT alert reduced delays in treatment for patients meeting criteria consistent with HIT.
Our study is inherently limited by its retrospective cohort design. Furthermore, our control and intervention groups were determined by a retrospective database query rather than a run-in period in which patients who met the threshold were identified but their providers were not alerted. However, based on our validation study, we are confident that almost all of the patients who were supposed to receive the alert did in fact trigger it.
While both groups included patients who likely did not trigger the alert, this situation resembles an intention-to-treat analysis and should not differentially bias the results, for two reasons: first, the same query was used to identify the control and intervention groups; second, the additional patients in the intervention group who did not trigger an alert would, if anything, have diluted the effect of the intervention on provider behavior. That we still observed important, statistically significant differences between the two groups makes those results all the more compelling.
Another limitation was that, at baseline, the two groups differed in their mean Charlson comorbidity score. Although this difference was statistically significant, its magnitude was small enough that it was unlikely to be clinically meaningful. Furthermore, when we adjusted for this variable in the mortality and length-of-stay analyses, the results were consistent.
Unfortunately, resources were not available to manually review all patient records in the control and intervention groups to determine (1) whether the alert impacted important clinical outcomes such as thrombosis and hemorrhage or (2) whether there were any patterns among those patients who did not receive further testing. The length of inpatient hospital stay post-HIT alert was incorporated into our study to reflect such morbidity, and it did not demonstrate a significant difference. Given that the alert did not require providers to record an override reason, and we did not conduct a qualitative exploration of providers' perceptions, we can only speculate about why providers did not act upon the alert.
We could not measure whether the alert increased the utilization of resources, for example, through more hematology consultations, higher drug costs, or other additions to resource allocation and financial cost. We also recognize that HIT antibody positivity is not the gold standard for the diagnosis of HIT. Nonetheless, because this parameter was applied to both cohorts, it functioned as a sensitive surrogate marker for measuring physician behavior.
Our study also has several strengths, including a large sample and the ability to assess mortality and length-of-stay outcomes as well as changes in clinician behavior. Also, the overall incidence of HIT at MMC was consistent with that reported in other studies,4–6 suggesting that our population is not unique. Further, the consistency of our results with the findings of others regarding HIT alerts adds confidence that our results were not a statistical or methodological aberration.27
Private business consortiums such as the Leapfrog Group28 as well as the Federal government, through the HITECH legislation, are advocating for the implementation of CDS interventions in an effort to improve the quality of patient care.29
While CDS interventions have the potential to lead to improved patient care, they can also lead to adverse consequences including overtreatment and increased, unnecessary costs.30
Stopping heparin and starting alternative anticoagulation are not inconsequential decisions. Alternative anticoagulation is more costly and typically carries an increased hemorrhagic risk.31
As CDS intervention proposals become more sophisticated in attempting to model clinical reasoning, their implementation will inevitably place higher demands on computer information systems. To manage those demands, programmers must make modifications when translating the specifications into computer code. Consequently, it is important, as this research highlights, to validate a CDS intervention's specifications against its actual performance before widespread adoption.
Our research highlights the complexity of translating paper-based decision support into computerized alerts. Any algorithm, no matter how well intentioned, how meaningfully developed, or how well designed to minimize harm to patients and increase awareness, must be evaluated before widespread implementation. Such evaluation is not required at most institutions today.
Determining whether to implement a HIT CDS intervention depends on the objectives of the institution. If an organization's goal is to reduce delays in diagnosis and treatment in patients with features concerning for HIT, then this alert has the potential to be a useful tool. However, if the desire is to improve detection of HIT, our study offers no evidence that the intervention in its current form would be effective. Whether a more sophisticated algorithm could reduce the number of false-positive alerts and improve clinical outcomes warrants further study. Until then, these data do not support implementation of a computerized HIT alert.