The Clinical Information Technology Assessment Tool for the ICU (CITAT-ICU) retains features of an earlier instrument for the general inpatient ward that was previously shown to be valid and reliable.5
In this site-specific study, we modified the instrument to reflect how routine hospital information processes differ in the ICU setting. To examine the instrument’s potential application among a cohort of ICUs, we assessed how scores on the CITAT-ICU correlated with the ability to reduce catheter-related bloodstream infections (CRBSI) in 19 ICUs participating in a state-wide quality improvement program. Our results suggest that higher scores on several information technology domains (CIT, automation, usability, test results, and notes & records) are associated with lower post-intervention rates of CRBSI. These associations remained significant after accounting for the baseline (pre-intervention) rate. In addition, scores on the processes and effectiveness sub-domains approached significance. In contrast, bed size, rural status, and teaching status did not appear to be associated with the post-intervention rate of CRBSI, even after adjusting for the pre-intervention rate.
These results are meaningful. Catheter-related bloodstream infections are costly and can be fatal.8,9
Each year in the United States, central venous catheters cause an estimated 80,000 CRBSIs and up to 28,000 deaths in intensive care units (ICUs). The average potential cost per CRBSI is $45,000.10
According to our findings, an ICU with a 10-point higher CIT score is associated with 4.6 fewer central line infections per 1,000 central line days than an ICU with the lower score that implements the same evidence-based intervention. Several potential explanations exist for these findings. Catheter-related bloodstream infections result from multiple factors, including a system’s organization and structural environment. Providers equipped with systems that can easily retrieve test results, provide ubiquitous access to clinical information, and employ order sets that reduce variation in clinical care may be more likely to deliver higher-quality care. Highly automated, carefully designed information systems may allow ICU teams to focus on clinical tasks by reducing paperwork, enhancing patient monitoring, and simplifying data extraction. In the case of central line placement, efficiencies created by a powerful information system may allow physicians and nurses to better comply with effective, but potentially time-consuming, interventions such as those introduced in this study. Such steps include performing the central line insertion using checklists and enabling more team members, such as nurses, to participate. An electronic medical record or decision support system may prompt daily consideration of central line removal. In the future, clinical information systems might incorporate other key data elements about central lines and provide automatic tracking, with warnings if certain signals appear (e.g., fever and tachycardia in the presence of a catheter that has been in place for an extended period).
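The coefficient interpretation above can be illustrated with a minimal numerical sketch. Only the effect size (4.6 fewer infections per 1,000 central line days per 10 CIT points, i.e., −0.46 per point) comes from our findings; the intercept, baseline-rate coefficient, and example ICU values are invented placeholders, not study estimates.

```python
# Hypothetical sketch of the adjusted linear model's interpretation.
# Only the CIT coefficient (-0.46 per point) reflects the reported finding;
# all other numbers are illustrative assumptions.

def predicted_rate(baseline_rate, cit_score, intercept=30.0,
                   beta_baseline=0.5, beta_cit=-0.46):
    """Post-intervention CRBSI rate per 1,000 central line days under an
    assumed linear model adjusting for the pre-intervention rate."""
    return intercept + beta_baseline * baseline_rate + beta_cit * cit_score

# Two hypothetical ICUs with the same baseline rate, CIT scores 10 points apart:
low_it = predicted_rate(baseline_rate=6.0, cit_score=50)
high_it = predicted_rate(baseline_rate=6.0, cit_score=60)
print(round(low_it - high_it, 1))  # prints 4.6
```

Because the model is linear, the 4.6-infection difference holds for any pair of ICUs 10 CIT points apart with equal baseline rates, regardless of the placeholder intercept.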
Most quality improvement efforts are data intensive. Interventions need to be accompanied by tenaciously collected baseline and follow-up data. Powerful information systems may reduce the burden of data collection, freeing quality improvement teams to focus on efforts to change provider behavior, re-engineer processes, champion interventions, and sustain gains. Allowing staff to concentrate on the “human” aspects of quality improvement may be a significant benefit of well-designed clinical information systems and may explain some of our findings.
The CITAT was designed so that automation and usability would represent, as much as possible, two separate constructs. This distinction was confirmed through previous factor analyses.5
This separation allows us to evaluate whether increasing automation is associated with greater usability. In this study, we found that hospitals with higher automation scores also tended to have higher usability scores, suggesting that converting paper-based systems to well-designed electronic systems does improve physician directors’ perceptions of their system’s effectiveness and ease of use.
Survey-based assessment tools often suffer from low response rates, particularly among physicians. We therefore reduced the instrument to its smallest possible size by prioritizing items according to their overall relevance to the ICU and their potential impact on clinical outcomes and efficiency. We also sought to create an instrument that could be administered to the medical director alone. To test the reliability of this approach, we administered the assessment tool to a varying number of physician staff at all 19 participating ICUs. Thirteen hospitals (68% of the sample) had at least one additional staff physician complete the survey. We found that when the sample was restricted to ICUs with at least six staff physicians completing the survey, the correlation coefficient between the mean physician scores and the director score was at least 0.65. This suggests that the instrument reaches a relatively high degree of reliability with a relatively low number of respondents and that a single medical director’s score may be an efficient and valid way to score an ICU’s information systems. This finding will need to be replicated in future studies.
This study has important limitations. First, the 19 ICUs in this study represent a convenience sample of the ICUs that participated in the larger quality improvement study in Michigan. It is possible that the ICUs willing to participate might respond to the CITAT-ICU in a systematically different way than those that did not, producing a selection bias. In addition, because no other instruments exist with which to measure an ICU information system’s performance, we did not have an independent method to corroborate the validity of using responses from a single medical director. However, in a sub-sample of hospitals, independent staff physician scores correlated with the director scores, lending evidence of inter-rater reliability.
We examined whether bed size, teaching status, and rural location were associated with both the post-intervention rates and the IT score. These hospital characteristics did not appear to be confounders. However, it is possible that other unmeasured organizational factors (e.g., the financial strength of the organization) could play a role in the relationship we observed.