Gregory K. Robbins, M.D., M.P.H., Massachusetts General Hospital, Division of Infectious Diseases, 55 Fruit Street, Cox 5, Boston, MA 02114, grobbins/at/partners.org.
William Lester, M.D., Massachusetts General Hospital, Lab of Computer Science, 50 Staniford Street, 7th Floor, Boston, MA 02114, wlester/at/partners.org
Kristin Johnson, M.P.H., Massachusetts General Hospital, 55 Fruit Street, Cox 5, Boston, MA 02114, kjohnson22/at/partners.org
Yuchiao Chang, Ph.D., Massachusetts General Hospital, Div. of General Internal Medicine, 50 Staniford Street, 9th Floor, Boston, MA 02114, ychang/at/partners.org
Greg Estey, Ed.M., Massachusetts General Hospital, Lab of Computer Science, 50 Staniford Street, 7th Floor, Boston, MA 02114, gestey/at/partners.org
Dominic Surrao, M.S., Massachusetts General Hospital, Lab of Computer Science, 50 Staniford Street, 7th Floor, Boston, MA 02114, dsurrao/at/partners.org
Kimon Zachary, M.D., Infectious Disease Associates, Massachusetts General Hospital, 55 Fruit Street, Cox 5, Boston, MA 02114, kzachary/at/partners.org
Sara M. Lammert, B.A., Infectious Disease Associates, Massachusetts General Hospital, 55 Fruit Street, Cox 5, Boston, MA 02114, slammert/at/partners.org
Henry Chueh, M.D., Massachusetts General Hospital, Lab of Computer Science, 50 Staniford Street, 7th Floor, Boston, MA 02114, hccheuh/at/partners.org
James B. Meigs, M.D., M.P.H., Massachusetts General Hospital, Div. of General Internal Medicine, 50 Staniford Street, 9th Floor, Boston, MA 02114, jmeigs/at/partners.org
Kenneth A. Freedberg, M.D., M.Sc., Massachusetts General Hospital, Div. of General Internal Medicine, 50 Staniford Street, 9th Floor, Boston, MA 02114, kfreedberg/at/partners.org
Data to support improved patient outcomes from clinical decision support systems (CDSS) are lacking in HIV care.
To conduct a randomized controlled trial testing the efficacy of a CDSS to improve HIV outcomes in an outpatient clinic.
We conducted a randomized controlled trial in which half of each provider’s patients were randomized to interactive computer alerts and half to static computer alerts (ClinicalTrials.gov #NCT00678600).
The study was conducted at the Massachusetts General Hospital HIV Clinic.
Participants were HIV providers and their HIV-infected patients.
Computer alerts were generated for virologic failure (HIV RNA >400 c/mL after HIV RNA ≤400 c/mL), evidence of suboptimal follow-up, and 11 abnormal laboratory tests. Providers received interactive computer alerts, facilitating appointment rescheduling and repeat laboratory testing, for half of their patients and static alerts for the other half.
The primary endpoint was change in CD4 count. Other endpoints included time-to-clinical event, 6-month suboptimal follow-up, and severe laboratory toxicity.
Thirty-three HIV providers followed 1,011 HIV-infected patients. For the intervention arm, the mean CD4 count increase was greater (5.3 versus 3.2 cells/mm3/month; difference = 2.1 cells/mm3/month, 95% CI [0.1, 4.0], p=0.040) and the rate of 6-month suboptimal follow-up was lower (20.6 versus 30.1 events per 100 patient-years, p=0.022). Median time-to-next scheduled appointment was shorter in the intervention arm after a suboptimal follow-up alert (1.71 versus 3.48 months; p<0.001) and after a toxicity alert (2.79 versus >6 months; p=0.072). Ninety-six percent of providers supported adopting the CDSS as part of standard care.
This was a one-year informatics study conducted at a single hospital sub-specialty clinic.
A CDSS using interactive provider alerts improved CD4 counts and clinic follow-up for HIV-infected patients. Wider implementation of such systems can provide important clinical benefits.
Development and utilization of health information technology has been identified as an important strategy to improve the quality and safety of healthcare in the United States (1–6). In 2000 and 2001, the Institute of Medicine issued two landmark reports, ‘To Err Is Human: Building a Safer Health System’ and ‘Crossing the Quality Chasm,’ which brought national attention to the discrepancy between the care patients should receive and the care they were receiving (7, 8). These reports suggested evidence-based clinical decision support systems (CDSS) as a key strategy to improve medical safety and quality of care and to lower healthcare costs (9). The American Recovery and Reinvestment Act of 2009 made the development and adoption of nationwide health information systems and meaningful use of electronic medical records (EMR) a national priority (10).
Despite the potential to improve healthcare outcomes, examples of effective CDSS in routine clinical care are lacking (11). A review of 100 controlled trials found that CDSS improved provider performance in 64% of studies; however, the effect on patient outcomes was inconsistent or not reported (12). Some CDSS studies have shown promising results, though most have focused on routine health maintenance for primary care (13–21). CDSS have the potential to improve specialty medical care through monitoring of laboratory results and patient follow-up (22–27).
We designed a randomized controlled trial to test the efficacy of Virology FastTrack, a CDSS that generates alerts to notify HIV outpatient providers of adverse events. Prior analyses of the Massachusetts General Hospital HIV Clinic indicated that time between laboratory toxicities or virologic failure and follow-up appointments was longer than in clinical trials, and low-grade toxicities often preceded more serious ones. To address this, “interactive alerts” were designed to notify providers of new adverse events or missed appointments via each provider’s EMR “home page,” patient-specific EMR page, and biweekly emails. Interactive alerts provided key clinical information and a mechanism to request follow-up appointments and/or laboratory tests. In contrast, control alerts were “static alerts” that were only visible on patient-specific EMR pages and did not provide additional information or the semi-automated scheduling mechanism.
All Massachusetts General Hospital HIV Clinic providers were eligible for study participation. “Primary” HIV provider was defined as the primary physician or nurse practitioner following a patient, as recorded in Virology OnCall, an HIV-specific EMR developed by the Massachusetts General Hospital Laboratory of Computer Science (28–30). The EMR tracks patient case status as active or inactive (loss to follow-up, transferred medical care, or deceased). Providers were informed of the study and consent was implied. A waiver of consent was obtained from patients. The study was approved by the Partners Institutional Review Board.
HIV-infected patients of participating providers, with an active case status and an arrived appointment within 6 months of the study start (9/18/2007) or an arrived appointment during the following year, were randomized by an automated program with a one-to-one ratio, using a block size of four, stratified by provider. Providers were not notified of randomization assignments.
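The randomization scheme (1:1 ratio, permuted blocks of four, stratified by provider) can be sketched as follows. This is an illustrative Python sketch, not the study’s actual program; all function and variable names are ours:

```python
import random

def randomize_stratified(patients_by_provider, block_size=4, seed=2007):
    """Assign patients 1:1 to intervention/control using permuted blocks
    within each provider stratum (illustrative sketch only)."""
    rng = random.Random(seed)
    assignment = {}
    for provider, patients in patients_by_provider.items():
        arms = []
        while len(arms) < len(patients):
            # Each block holds an equal number of each arm, shuffled.
            block = (["intervention"] * (block_size // 2)
                     + ["control"] * (block_size // 2))
            rng.shuffle(block)
            arms.extend(block)
        for patient, arm in zip(patients, arms):
            assignment[patient] = arm
    return assignment
```

Because every complete block is balanced, strata whose size is a multiple of four split exactly evenly, which is consistent with the near-even overall split (506 versus 505) reported below.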
The alerting algorithm was designed to be dynamic and responsive, generating alerts less frequently for mild events and more frequently for critical events. Three types of alerts were generated using an automated nightly query of hospital databases: 1) suboptimal follow-up; 2) virologic failure; and 3) laboratory toxicity. Suboptimal follow-up alerts were generated using three rules: a missed appointment with no subsequent arrived appointment within 7 days; no arrived appointment in the previous 4 months and no scheduled appointment in the next 2 months; and, for “high-risk” patients (those with a missed appointment in the previous year), no arrived appointment in the previous month and no scheduled appointment in the next 2 months. Virologic failure alerts were generated for HIV RNA >400 copies/mL where the previous measurement was ≤400 copies/mL. Toxicity alerts were generated for 11 laboratory tests using a modified version of the National Institutes of Health Division of AIDS Grading for Severity of Adult and Pediatric Adverse Events (31) (Appendix), according to the following scheme: the first Grade 2 or 3 toxicity; a subsequent Grade 2 or 3 toxicity if the previous toxicity of the same grade had not been acknowledged by the provider or had occurred more than 6 months earlier; and all Grade 4 toxicities. Laboratory values prior to patient randomization were used to “prime” the alerting algorithm, so pre-existing laboratory toxicities did not generate alerts. Toxicities that occurred during hospitalizations did not trigger alerts.
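For concreteness, the three suboptimal-follow-up rules can be expressed as a predicate over a patient’s appointment history. This is a hypothetical Python sketch of the rules as described, not the FastTrack implementation; the 30-day month approximation and the argument names are our assumptions:

```python
from datetime import date

MONTH = 30  # approximate month length in days; an assumption of this sketch

def suboptimal_followup_alert(today, last_missed, last_arrived, next_scheduled):
    """Return True if any of the three suboptimal-follow-up rules fires.
    Arguments are datetime.date values, or None if no such appointment exists."""
    def days_since(d):
        return (today - d).days if d else float("inf")
    def days_until(d):
        return (d - today).days if d else float("inf")

    # Rule 1: a missed appointment with no arrived appointment in the 7 days after it.
    rule1 = (last_missed is not None
             and days_since(last_missed) > 7
             and not (last_arrived and 0 <= (last_arrived - last_missed).days <= 7))

    # Rule 2: no arrived appointment in the previous 4 months and
    # no scheduled appointment in the next 2 months.
    rule2 = (days_since(last_arrived) > 4 * MONTH
             and days_until(next_scheduled) > 2 * MONTH)

    # Rule 3 ("high-risk"): a missed appointment in the previous year, no arrived
    # appointment in the previous month, and nothing scheduled in the next 2 months.
    rule3 = (days_since(last_missed) <= 365
             and days_since(last_arrived) > MONTH
             and days_until(next_scheduled) > 2 * MONTH)

    return rule1 or rule2 or rule3
```

A nightly job would evaluate such a predicate for every active patient and post an alert whenever it returns True.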
For both arms, alerts were posted as “Virologic Failure,” “Suboptimal Follow-up,” or “Toxicity” with a red hyperlink on patient-specific EMR pages. For the intervention arm, interactive alerts could be viewed on providers’ EMR “home pages” as well as biweekly emails. The interactive alerts contained hyperlinks that allowed providers to access prior laboratory results, appointment history and prior alerts. Control alerts were “static” and provided no additional information.
After reviewing data provided by an interactive alert, providers could: 1) act; 2) dismiss; or 3) redirect the alert to a different provider. Provider action (ordering a physician, nurse practitioner, or nurse appointment and/or repeat laboratory tests within a specified timeframe) could be completed with three mouse clicks. Requests were posted on administrative assistants’ FastTrack pages. A one-time repeat interactive alert was generated if the requested action did not occur within two weeks of the specified timeframe.
Alerts were automatically resolved based on repeat laboratory tests or an arrived appointment with a clinic provider for suboptimal follow-up. Intervention alerts were removed from the EMR after the provider responded to the alert or after 8 weeks (“timed-out”). Control alerts were only removed from the EMR after resolution.
The primary outcome was CD4 positive lymphocyte (CD4) count increase. Secondary endpoints included clinical event rate for each alert type. Clinical events were: 1) confirmed virologic failure, defined as either two consecutive HIV RNA measurements >400 c/mL within 3 months or a single measurement >400 c/mL with no repeat value in the following 3 months for patients with previous RNA ≤400 c/mL; 2) new grade 3 or 4 laboratory toxicity; and 3) 6-month suboptimal follow-up (no arrived appointment for >6 months). An additional analysis, of time-to-next scheduled appointment following an alert, was performed. We also assessed acceptability of FastTrack among providers and administrative assistants.
Power calculation was based on annual CD4 count increases for clinic patients with and without virologic failure (63 and 35 cells/mm3, SD 180) and a two-sided significance level of 0.05, giving 80% power to detect a difference of 28 cells/mm3 with 649 patients per arm.
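The reported figures can be reproduced with the standard per-arm sample size formula for a two-sided two-sample comparison of means, n = 2((z_(1-alpha/2) + z_power) * sigma / delta)^2. The sketch below is ours, not the authors’ calculation code, and uses only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-sample comparison of means
    under a normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)
```

With delta = 28 cells/mm3 and SD = 180, this gives 649 patients per arm, matching the text.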
Alerts triggered during the first 12 months of the study were included in the analysis, and an additional 3 months of follow-up was included to evaluate provider response to alerts. Data were characterized using descriptive statistics, including percentages for categorical variables and means or medians with interquartile ranges (IQR) for continuous variables. Demographics, appointment history, and laboratory values were obtained electronically from the Research Patient Data Repository (32) and Virology OnCall.
Change in CD4 count was assessed using a mixed-effects model including all patients with at least one CD4 measurement, from 4 months prior to study start date to 3 months after study end date. The mixed model included a random intercept and slope for each patient, with unstructured covariance matrix, and fixed effects for month, intervention, month and intervention interaction, and provider.
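In symbols, that specification corresponds to something like the following (our notation, a hedged reconstruction rather than the authors’ exact parameterization):

```latex
\mathrm{CD4}_{ij} = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t_{ij}
  + \beta_2\, \mathrm{Tx}_i + \beta_3\, (\mathrm{Tx}_i \times t_{ij})
  + \gamma_{p(i)} + \varepsilon_{ij},
\qquad (b_{0i}, b_{1i}) \sim \mathcal{N}(0, \Sigma)
```

where t_ij is the month of patient i’s j-th measurement, Tx_i indicates intervention assignment, (b_0i, b_1i) are the patient-level random intercept and slope with unstructured covariance Sigma, gamma_p(i) is a fixed effect for the patient’s provider, and beta_3 is the between-arm difference in monthly CD4 slope.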
Clinical event rates, including first confirmed virologic failure, 6-month suboptimal follow-up, and Grade 3 or 4 toxicity, were compared using Poisson regression models with a Generalized Estimating Equations (GEE) approach and an unstructured correlation matrix to account for clustering within providers. All patients contributed to these analyses. Only the first virologic failure, suboptimal follow-up, and laboratory toxicity alerts were considered because of differences in subsequent alerting imposed by the trial design: for interactive alerts only, the CDSS suppressed subsequent alerts if previously acknowledged. The “time-to-repeat lab or clinic visit” and “time-to-next scheduled appointment” endpoints were measured from the alert date and depicted using Kaplan-Meier curves, with patients censored at the end of follow-up (end of study or “inactive” case status). A sensitivity analysis that ignored case status was done to determine whether differences in updating case status could have biased clinical event rates. For the “time-to-next scheduled appointment” analysis, patients were also censored at the date of a future appointment if it was scheduled prior to the alert. Comparisons of time to events were conducted using Cox proportional hazards models with robust sandwich estimates to account for provider clustering (33). The proportional hazards assumption was assessed by examining the log-negative-log of Kaplan-Meier estimates of the survival function versus the log of time. Analyses were performed with SAS 9.3 (SAS Institute, Cary, NC).
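As a minimal illustration of the time-to-event machinery (the actual analyses were done in SAS), a product-limit (Kaplan-Meier) estimator can be written in a few lines; names and conventions here are ours:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. times: follow-up durations;
    events: 1 if the event was observed at that time, 0 if censored.
    Returns (time, survival probability) pairs at each event time."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)  # still at risk just before t
        if d > 0:
            surv *= 1.0 - d / n
            curve.append((t, surv))
    return curve
```

Censoring at the end of study or at an “inactive” case status corresponds to an entry with events = 0 at the censoring time.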
Thirty-four HIV providers were informed of the study; one provider who used a different EMR opted out. Of the 33 participating HIV Clinic providers (11 attending physicians, 20 infectious disease fellows, and 2 nurse practitioners), 24 participated for the full study period and 9 participated for part of the study period (4 fellows left the study after completing their clinical fellowship year and 5 started their clinical fellowship year during the study). Twenty-nine providers had patients randomized. Two providers had insufficient patient data to derive estimates in both arms. Four fellows who joined the clinic in July of 2008 assumed the care of previously randomized patients but had no new patients during the study. Training was conducted by a clinical assistant and took approximately 5 minutes. Training was repeated for providers with outstanding alerts.
The 33 providers followed 1,011 HIV-infected patients during the 9/18/2007 to 9/17/2008 alert period; 506 were randomized to the intervention arm and 505 to the control arm. Patient-contributed follow-up time was similar between the two arms; 79.2% and 82.8% of patients were followed for the entire study period (p=0.153). Among those followed for less than one year, median follow-up was 214 and 225 days in the intervention and control groups (p=0.23; Figure 1). Baseline characteristics of patients randomized to the two arms were similar (Figure 1). There was no statistical difference in mortality or lost-to-follow-up case status, but the proportion of patients who transferred medical care (“left practice”) during the study was greater in the intervention than the control arm (5.1% versus 1.2% of patients, p<0.001).
Over the first 12 months of the study, 2,368 alerts were generated: 111 for virologic failure, 619 for toxicity, and 1,638 for suboptimal follow-up, representing 4.7%, 26.1%, and 69.2% of alerts, respectively (Table 2). For the intervention and control arms, 76.7% and 71.0% of patients received one or more alerts.
Of all alerts, 46.7% were “first alerts” for virologic failure, suboptimal follow-up, or one of the 11 laboratory toxicities. The mean numbers of total and first alerts per patient were 3.0 and 1.6 in the intervention arm and 3.4 and 1.5 in the control arm. Because atazanavir, a commonly prescribed antiretroviral, frequently causes benign hyperbilirubinemia (34), bilirubin alerts were not included in the analysis.
Of the interactive alerts, 90% were acknowledged by providers. Of the unacknowledged alerts, 93% resolved without provider response, 5.2% were overridden by a new alert, and 1.8% “timed out” after 8 weeks. Of provider-acknowledged alerts, 44.4% requested follow-up visits or laboratory testing and 56.8% were dismissed.
The mean number of available CD4 measurements was similar between the two arms (4.0 and 3.9; p=0.42), as were time to first CD4 measurement (2.2 and 2.0 months, p=0.13) and time to last CD4 measurement (11.5 and 11.1 months, p=0.17), suggesting CD4 monitoring was similar in both arms. The number of CD4 measurements ranged from 0 to 12 in the intervention arm and 0 to 11 in the control arm. Twenty-nine patients (14 intervention and 15 control) did not have any CD4 counts during the study period. The 982 patients with at least one measurement were included in the primary analysis. Using a mixed-effects model, patients in the intervention arm had a mean CD4 count increase of 5.3 cells/mm3/month versus 3.2 cells/mm3/month (difference = 2.1 cells/mm3/month, 95% CI [0.1, 4.0], p=0.040). Mean CD4 cell count changes per month for the patients of the twenty-seven providers with evaluable patients in both arms are shown in Figure 3.
The rate of 6-month suboptimal follow-up was lower in the intervention arm, 20.6 versus 30.1 events per 100 patient-years (p=0.022, Table 3). The median time-to-next scheduled appointment after a suboptimal follow-up alert was also shorter in the intervention arm (1.71 versus 3.48 months; p<0.001, Figure 4). No differences were observed in the rate of Grade 3 or 4 toxicities (p=0.41, Table 3). The median time-to-next scheduled appointment after a first toxicity alert was shorter in the intervention arm (2.79 versus >6 months; p=0.072, Figure 4). There were no differences in the rate of virologic failure (p=0.154) or in time-to-next scheduled appointment after a virologic failure alert (p=0.728).
Design and implementation took approximately 1 year of informatics expertise. Calls to the hospital “Help-Desk” were minimal. Following study completion, providers and administrative assistants were asked to complete a web-based questionnaire. Among the 79% of providers who completed the survey, 81% reported FastTrack saved time (≥3 on a scale of 1 to 5, where 5 indicates strong agreement), and over 90% believed FastTrack improved clinical care and should be adopted as part of standard clinical care. All administrative assistants reported FastTrack was easy to use and should be adopted as part of standard clinical care.
We conducted a randomized controlled trial of FastTrack, a novel CDSS with an integrated clinical messaging system for HIV care. We found that FastTrack resulted in greater increases in CD4 count compared to usual (non-CDSS) care. FastTrack’s interactive provider alerts decreased sub-optimal follow-up, an important clinical endpoint that has been associated with low CD4 count, virologic failure, and mortality in HIV disease (35–37). This study adds to the small but growing number of CDSS randomized controlled trials that have assessed both patient outcomes and provider behavior. In accordance with the Institute of Medicine’s report (8), the CDSS was designed to make it easier for providers to “do the right thing,” specifically, timely and efficient review of important new laboratory values and scheduling of patient follow-up. To this end, FastTrack utilized several key features: an alerting algorithm designed to minimize redundancy and alert fatigue, a time-efficient integrated workflow, individual- and population-level alerting, and EMR and email alerts.
Provider alerts have been studied previously, but with more proximal (process) outcomes (26, 38). Several previous studies indicate that static alerts are often ignored by providers (9, 39–44), and over-alerting leads to physician indifference (45–49). To avoid alert fatigue we employed a simple algorithm to notify providers of new virologic failure, extended time without a provider appointment, and serious and/or progressively worsening laboratory toxicities (29, 30). Over-alerting was avoided by censoring alerts of the same grade as previously acknowledged alerts and those generated during inpatient admissions. Several other aspects of the CDSS were also developed to improve physician workflow. First, interactive alerts were disseminated via email and the EMR, allowing providers to choose when they wanted to review alerts, including times when they were not using the EMR. This allowed providers to respond in a more timely fashion, including during non-clinic times when providers may have been less hurried (19, 20). Interactive alerts were also self-contained, so providers did not need to open the EMR to find prior laboratory values or appointment history. Finally, the scheduling mechanism streamlined the process, reducing it to seconds and as few as three mouse clicks. The relative contributions of the alerting and scheduling components should be explored in future studies.
A critical requirement for any CDSS is provider acceptance. FastTrack was quickly adopted by a diverse group of providers, including full-time and part-time outpatient attending physicians, nurse practitioners, and infectious disease fellows. User feedback suggested that rapid adoption was due to ease of use and time-efficiency. Attesting to provider acceptance, 90% of interactive alerts were acknowledged, demonstrating that the FastTrack alerting mechanism was effective at capturing the attention of providers in a timely manner. Moreover, almost half of the alerts resulted in new requests for appointments or laboratory tests, suggesting providers believed FastTrack alerts were clinically important. Biweekly emails proved to be a highly effective method of communicating with providers and supplied a mechanism by which providers could respond to new clinical events outside of clinic sessions. Finally, a majority of providers and all of the administrative assistants reported FastTrack improved patient care, was time-efficient, and should be incorporated into routine clinical care.
FastTrack improved clinical outcomes, both an increase in CD4 count, 63.6 versus 38.4 cells/mm3/year, and a decrease in sub-optimal follow-up. CD4 count changes on antiretroviral therapy vary, but our findings were similar to those from long-term follow-up of a large ACTG treatment-naïve study, which found an average increase of 100 cells/mm3/year over the first three years and 52.9 cells/mm3/year over the first seven years (50). Increase in CD4 count reflects immune reconstitution and is correlated with a lower risk of AIDS-defining illness and death (51–53). Providers were not informed of the primary endpoint, nor were alerts generated in response to CD4 cell measurements. FastTrack’s effect varied by provider, likely due to small numbers and differences in both providers and patient panels. Two co-authors (GKR, KAF) are providers in the clinic and were aware of the primary analysis. After running the mixed-effects model excluding the 113 patients of the two co-authors, patients in the intervention arm had a mean CD4 increase of 5.1 cells/mm3/month versus 3.1 cells/mm3/month (difference = 2.0, 95% CI [−0.1, 4.1]; p=0.063). FastTrack was designed to provide an efficient way to reduce sub-optimal follow-up; one potential mechanism for the differences in CD4 counts is that regular clinical follow-up promotes better clinical outcomes as measured by immune response. This hypothesis is corroborated by several studies. Ndiaye et al. found that patients who were lost to follow-up were more likely to have a CD4 count <200/mm3 (p<0.0001) and were 5 times more likely to die than patients in active care (36). Preventing loss to follow-up may lower the risk of HIV treatment interruption and treatment failure, which on a population level could theoretically lower the prevalence of HIV drug resistance, transmission, and mortality (54, 55). Similarly, others have shown a connection between loss to follow-up and adverse HIV outcomes (35, 56).
Improved monitoring of initial virologic failure and laboratory toxicity may also have contributed to differences in CD4 counts.
Although developed for HIV care, the features of the CDSS can be applied to other chronic diseases. The alerting algorithms may be easily modified to monitor suboptimal follow-up, laboratory toxicities and disease markers specific to other illnesses, including diabetes mellitus, congestive heart failure, and chronic viral hepatitis. The interactive alerts, scheduling application and clinical messaging system have the potential to facilitate workflow in many settings.
This study has several limitations. FastTrack was tested at a single hospital with a strong informatics infrastructure; however, providers who rated themselves as slow to adopt technology viewed the system favorably. Provider-based randomization may have introduced crossover bias, as providers may have closely monitored all patient results. This, however, would have biased towards no difference between the two arms. Only first clinical events were analyzed to minimize bias; a second virologic failure was uncommon (4 patients). There may have been insufficient virologic failure and toxicity events to fully assess the efficacy of the CDSS for these endpoints. This was due in part to the success of the study and the desire of clinic providers to implement FastTrack as standard clinical care. The difference in the number of patients transferring medical care may have arisen because sub-optimal follow-up alerts prompted providers to update case status for patients who had become inactive but whose case status had not been previously changed. Providers may have been motivated in part to do this because it would prevent them from receiving further alerts on inactive patients. Changing a patient’s case status to inactive censored the patient from further consideration in the sub-optimal follow-up analysis. To remove this potential bias, we conducted a sensitivity analysis that ignored case status, which also found shorter times to next appointment after suboptimal follow-up in the intervention arm (p=0.012). Case status was not considered in the primary study analysis, so it had no impact on the primary endpoint. Adherence data and reasons for alert dismissal were not captured. The study was not designed to distinguish the relative contributions of the CDSS alerting and scheduling components. Finally, to facilitate the study analysis, only a single provider was alerted for each event; therefore, the study did not take advantage of clinical teams, which typically include nurses, social workers, pharmacists, and case managers.
With informatics expected to play a crucial role in improving efficiency and lowering the cost of healthcare in the United States, it is critical to demonstrate meaningful use of the EMR. While FastTrack improved important HIV outcomes, it also incorporated several innovative features designed with general healthcare providers in mind. First, it offered a simple and time-efficient mechanism to intervene for significant events (57). Second, it minimized alerting fatigue by limiting alerts to new or increasingly important events. Finally, it used both EMR and biweekly Health Insurance Portability and Accountability Act (HIPAA)-compliant email alerts to accommodate busy clinician schedules, which was particularly important for offsite providers and those with fewer patients. The principles of FastTrack may readily transfer to other HIV clinical settings and inform the design of systems to support disease management and improve outcomes in HIV as well as other chronic diseases.
Financial support: Grants K01AI062435 (G.K.R.), K24AI062476 (K.A.F.), P30AI42851 (G.K.R. and K.A.F.), K24DK080140 (J.B.M.), and R37AI42006 (K.A.F.); Massachusetts General Hospital Clinical Research Program.
Primary Funding Source: National Institute of Allergy and Infectious Diseases: “Bridging the Gap HIV Trials and Clinical Practice” (K01AI062435)
We thank the patients and staff of the Massachusetts General Hospital HIV clinic.
Potential conflicts of interest: no conflicts.
Reproducible research statement:
Study protocol and Statistical code available from: Dr. Robbins (grobbins/at/partners.org)