Ann Intern Med. Author manuscript; available in PMC 2013 November 15.
Published in final edited form as:
PMCID: PMC3829692

Successful Outcomes of a Clinical Decision Support System in an HIV Practice: A Randomized Controlled Trial

Gregory K. Robbins, M.D., M.P.H,1,5,6 William Lester, M.D.,2,3,6 Kristin L. Johnson, M.P.H,1,5 Yuchiao Chang, Ph.D.,2,5,6 Gregory Estey, Ed.M.,3 Dominic Surrao, M.S.,3 Kimon Zachary, M.D.,1,5,6 Sara M. Lammert, B.A.,1 Henry Chueh, M.D.,2,3,6 James B. Meigs, M.D., M.P.H.,2,6 and Kenneth A. Freedberg, M.D., M. Sc.1,2,4,5,6



Background: Data to support improved patient outcomes from clinical decision support systems (CDSS) are lacking in HIV care.


Objective: To conduct a randomized controlled trial testing the efficacy of a CDSS to improve HIV outcomes in an outpatient clinic.


Design: We conducted a randomized controlled trial in which half of each provider’s patients were randomized to interactive or static computer alerts (ClinicalTrials.gov NCT00678600).


Setting: The study was conducted at the Massachusetts General Hospital HIV Clinic.


Participants: HIV providers and their HIV-infected patients.


Intervention: Computer alerts were generated for virologic failure (HIV RNA >400 c/mL after HIV RNA ≤400 c/mL), evidence of suboptimal follow-up, and 11 abnormal laboratory tests. Providers received interactive computer alerts, facilitating appointment rescheduling and repeat laboratory testing, for half of their patients and static alerts for the other half.


Measurements: The primary endpoint was change in CD4 count. Other endpoints included time-to-clinical event, 6-month suboptimal follow-up, and severe laboratory toxicity.


Results: Thirty-three HIV providers followed 1,011 HIV-infected patients. For the intervention arm, the mean CD4 count increase was greater (5.3 versus 3.2 cells/mm3/month; difference = 2.0 cells/mm3/month, 95% CI [0.1, 4.0], p=0.040) and the rate of 6-month suboptimal follow-up was lower (20.6 versus 30.1 events per 100 patient-years, p=0.022). Median time-to-next scheduled appointment was shorter in the intervention arm after a suboptimal follow-up alert (1.71 versus 3.48 months; p<0.001) and after a toxicity alert (2.79 versus >6 months; p=0.072). Ninety-six percent of providers supported adopting the CDSS as part of standard care.


Limitations: This was a one-year informatics study conducted at a single hospital sub-specialty clinic.


Conclusions: A CDSS using interactive provider alerts improved CD4 counts and clinic follow-up for HIV-infected patients. Wider implementation of such systems can provide important clinical benefits.

Keywords: Clinical Outcomes, Disease Management, Clinical Decision Support Systems, Electronic Medical Record, Alerts, HIV


Development and utilization of health information technology has been identified as an important strategy to improve the quality and safety of healthcare in the United States (1–6). In 2000 and 2001, the Institute of Medicine issued two landmark reports, ‘To Err Is Human: Building a Safer Health System’ and ‘Crossing the Quality Chasm,’ which brought national attention to the discrepancy between the care patients should receive and the care they were receiving (7, 8). These reports identified evidence-based clinical decision support systems (CDSS) as a key strategy to improve medical safety and quality of care and to lower healthcare costs (9). The American Recovery and Reinvestment Act of 2009 made the development and adoption of nationwide health information systems and meaningful use of electronic medical records (EMR) a national priority (10).

Despite the potential to improve healthcare outcomes, examples of effective CDSS in routine clinical care are lacking (11). A review of 100 controlled trials found that CDSS improved provider performance in 64% of studies; however, the effect on patient outcomes was inconsistent or not reported (12). Some CDSS studies have shown promising results, though most have focused on routine health maintenance for primary care (13–21). CDSS have the potential to improve specialty medical care through monitoring of laboratory results and patient follow-up (22–27).



We designed a randomized controlled trial to test the efficacy of Virology FastTrack, a CDSS that generates alerts to notify HIV outpatient providers of adverse events. Prior analyses of the Massachusetts General Hospital HIV Clinic indicated that time between laboratory toxicities or virologic failure and follow-up appointments was longer than in clinical trials, and low-grade toxicities often preceded more serious ones. To address this, “interactive alerts” were designed to notify providers of new adverse events or missed appointments via each provider’s EMR “home page,” patient-specific EMR page, and biweekly emails. Interactive alerts provided key clinical information and a mechanism to request follow-up appointments and/or laboratory tests. In contrast, control alerts were “static alerts” that were only visible on patient-specific EMR pages and did not provide additional information or the semi-automated scheduling mechanism.


All Massachusetts General Hospital HIV Clinic providers were eligible for study participation. “Primary” HIV provider was defined as the primary physician or nurse practitioner following a patient, as recorded in Virology OnCall, an HIV-specific EMR developed by the Massachusetts General Hospital Laboratory of Computer Science (28–30). The EMR tracks patient case status as active or inactive (loss to follow-up, transferred medical care, or deceased). Providers were informed of the study and consent was implied. A waiver of consent was obtained from patients. The study was approved by the Partners Institutional Review Board.


HIV-infected patients of participating providers, with an active case status and an arrived appointment within 6 months of the study start (9/18/2007) or an arrived appointment during the following year, were randomized by an automated program with a one-to-one ratio, using a block size of four, stratified by provider. Providers were not notified of randomization assignments.
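The randomization scheme above (one-to-one ratio, permuted blocks of four, stratified by provider) can be sketched in a few lines. This is an illustrative reimplementation, not the study's automated program; all names are ours.

```python
import random

def block_randomize(patients_by_provider, block_size=4, seed=0):
    """Stratified permuted-block randomization (illustrative sketch).
    Within each provider stratum, every block of 4 contains exactly
    2 'interactive' and 2 'static' assignments, giving a 1:1 ratio."""
    rng = random.Random(seed)
    assignments = {}
    for provider, patients in patients_by_provider.items():
        arms = []
        while len(arms) < len(patients):
            # Build one balanced block, then shuffle its order.
            block = ["interactive"] * (block_size // 2) + ["static"] * (block_size // 2)
            rng.shuffle(block)
            arms.extend(block)
        for patient, arm in zip(patients, arms):
            assignments[patient] = arm
    return assignments
```

Blocking within provider guarantees that each provider's panel is split nearly evenly between arms, which matters here because the analysis clusters on provider.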


The alerting algorithm was designed to be dynamic and responsive, generating alerts less frequently for mild events and more frequently for critical events. Three types of alerts were generated by an automated nightly query of hospital databases: 1) suboptimal follow-up; 2) virologic failure; and 3) laboratory toxicity. Suboptimal follow-up alerts were generated using three rules: a missed appointment with no subsequent arrived appointment within 7 days; no arrived appointment in the previous 4 months and no scheduled appointment in the next 2 months; and, for “high-risk” patients (defined as those with a missed appointment in the previous year), no arrived appointment in the previous month and no scheduled appointment in the next 2 months. Virologic failure alerts were generated for HIV RNA >400 copies/mL where the previous measurement was ≤400 copies/mL. Toxicity alerts were generated for 11 laboratory tests using a modified version of the National Institutes of Health Division of AIDS Grading for Severity of Adult and Pediatric Adverse Events (31) (Appendix), according to the following scheme: the first Grade 2 or 3 toxicity; a subsequent Grade 2 or 3 toxicity if the previous toxicity of the same grade had not been acknowledged by the provider or had occurred more than 6 months earlier; and all Grade 4 toxicities. Laboratory values prior to patient randomization were used to “prime” the alerting algorithm, so pre-existing laboratory toxicities did not generate alerts. Toxicities that occurred during hospitalizations did not trigger alerts.
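As a rough sketch, the three suboptimal follow-up rules might be encoded as follows. This is an illustrative reimplementation (the study's actual nightly query is not published here), and it assumes 30-day months for the 4-month, 2-month, and 1-month windows.

```python
from datetime import timedelta

def suboptimal_followup_alert(today, missed_appts, arrived_appts, scheduled_appts):
    """Return True if any of the three suboptimal follow-up rules fires.
    Each appointment argument is a list of datetime.date values.
    Illustrative sketch; window lengths assume 30-day months."""
    def within(dates, start, end):
        return any(start <= d <= end for d in dates)

    # Rule 1: a missed appointment with no arrived appointment in the next 7 days.
    for m in missed_appts:
        if m <= today and today > m + timedelta(days=7) \
                and not within(arrived_appts, m, m + timedelta(days=7)):
            return True

    none_scheduled = not within(scheduled_appts, today, today + timedelta(days=60))

    # Rule 2: no arrived appointment in the previous 4 months and
    # no scheduled appointment in the next 2 months.
    if not within(arrived_appts, today - timedelta(days=120), today) and none_scheduled:
        return True

    # Rule 3: "high-risk" patient (missed appointment in the previous year),
    # no arrived appointment in the previous month, none scheduled in 2 months.
    high_risk = within(missed_appts, today - timedelta(days=365), today)
    if high_risk and not within(arrived_appts, today - timedelta(days=30), today) \
            and none_scheduled:
        return True
    return False
```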

Alert Dissemination

For both arms, alerts were posted as “Virologic Failure,” “Suboptimal Follow-up,” or “Toxicity” with a red hyperlink on patient-specific EMR pages. For the intervention arm, interactive alerts could be viewed on providers’ EMR “home pages” as well as biweekly emails. The interactive alerts contained hyperlinks that allowed providers to access prior laboratory results, appointment history and prior alerts. Control alerts were “static” and provided no additional information.

Facilitated Provider Intervention

After reviewing data provided by an interactive alert, providers could: 1) act; 2) dismiss; or 3) redirect the alert to a different provider. Provider action (ordering a physician, nurse practitioner, or nurse appointment and/or repeat laboratory tests within a specified timeframe) could be completed with as few as three mouse clicks. Requests were posted on administrative assistants’ FastTrack pages. A one-time repeat interactive alert was generated if the requested action did not occur within two weeks of the specified timeframe.


Alerts were automatically resolved based on repeat laboratory tests or an arrived appointment with a clinic provider for suboptimal follow-up. Intervention alerts were removed from the EMR after the provider responded to the alert or after 8 weeks (“timed-out”). Control alerts were only removed from the EMR after resolution.


The primary outcome was increase in CD4-positive lymphocyte (CD4) count. Secondary endpoints included the clinical event rate for each alert type. Clinical events were: 1) confirmed virologic failure, defined for patients with previous HIV RNA ≤400 c/mL as either two consecutive HIV RNA measurements >400 c/mL within 3 months or a single measurement >400 c/mL with no repeat value in the following 3 months; 2) new Grade 3 or 4 laboratory toxicity; and 3) 6-month suboptimal follow-up (no arrived appointment for >6 months). An additional analysis of time-to-next scheduled appointment following an alert was performed. We also assessed acceptability of FastTrack among providers and administrative assistants.
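The confirmed-virologic-failure definition can be expressed as a small classifier. This is a sketch under the assumption that measurement times are given in months from study start; the function name and data layout are illustrative, not the study's code.

```python
def confirmed_virologic_failure(rna_series):
    """rna_series: chronological list of (month, copies_per_mL) tuples.
    Failure requires a value >400 in a previously suppressed (≤400) patient,
    confirmed either by a repeat value >400 within 3 months or by the
    absence of any repeat measurement in the following 3 months."""
    for i, (t, v) in enumerate(rna_series):
        prior_suppressed = i > 0 and rna_series[i - 1][1] <= 400
        if v > 400 and prior_suppressed:
            later = [(t2, v2) for t2, v2 in rna_series[i + 1:] if t2 - t <= 3]
            if any(v2 > 400 for _, v2 in later):  # confirmed by repeat >400
                return True
            if not later:                          # no repeat within 3 months
                return True
    return False
```

Note how the definition treats a missing confirmatory test as failure: a single unconfirmed high value followed by silence counts, which avoids rewarding incomplete follow-up.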

Statistical Analysis

The power calculation was based on annual CD4 count increases for clinic patients with and without virologic failure (63 and 35 cells/mm3; SD 180) and a two-sided significance level of 0.05, giving 80% power to detect a difference of 28 cells/mm3 with 649 patients per arm.
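The stated sample size can be reproduced with the standard normal-approximation formula for comparing two means; this is a sketch of the arithmetic, not the study's original calculation.

```python
# Two-sample sample-size formula: n = 2 * (z_alpha + z_beta)^2 * (sd / diff)^2
z_alpha = 1.959964   # z for two-sided alpha = 0.05
z_beta = 0.841621    # z for 80% power
sd, diff = 180.0, 28.0  # SD of annual CD4 change; detectable difference (cells/mm3)

n_per_arm = 2 * (z_alpha + z_beta) ** 2 * (sd / diff) ** 2
print(round(n_per_arm))  # 649 patients per arm, matching the text
```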

Alerts triggered during the first 12 months of the study were included in the analysis, and an additional 3 months of follow-up was included to evaluate provider response to alerts. Data were characterized using descriptive statistics, including percentages for categorical variables and means or medians with interquartile ranges (IQR) for continuous variables. Demographics, appointment history, and laboratory values were obtained electronically from the Research Patient Data Repository (32) and Virology OnCall.

Change in CD4 count was assessed using a mixed-effects model including all patients with at least one CD4 measurement, from 4 months prior to study start date to 3 months after study end date. The mixed model included a random intercept and slope for each patient, with unstructured covariance matrix, and fixed effects for month, intervention, month and intervention interaction, and provider.

Clinical event rates, including first confirmed virologic failure, 6-month suboptimal follow-up, and Grade 3 or 4 toxicity, were compared using Poisson regression models with a generalized estimating equations (GEE) approach and an unstructured correlation matrix to account for clustering within providers. All patients contributed to these analyses. Only the first virologic failure, suboptimal follow-up, and laboratory toxicity alerts were considered because of differences in subsequent alerting imposed by the trial design: for interactive alerts only, the CDSS suppressed subsequent alerts if a previous alert had been acknowledged. The “time-to-repeat lab or clinic visit” and “time-to-next scheduled appointment” endpoints were measured from the alert date and depicted using Kaplan-Meier curves, with patients censored at the end of follow-up (end of study or “inactive” case status). A sensitivity analysis that ignored case status was done to determine whether differences in updating case status could have biased clinical event rates. For the “time-to-next scheduled appointment” analysis, patients were also censored at the date of a future appointment if it was scheduled prior to the alert. Comparisons of time to events were conducted using Cox proportional hazards models with robust sandwich estimates to account for provider clustering (33). The proportional hazards assumption was assessed by examining the log-negative-log of the Kaplan-Meier estimates of the survival function versus the log of time. Analyses were performed with SAS 9.3 (SAS Institute, Cary, NC).



Thirty-four HIV providers were informed of the study; one provider who used a different EMR opted out. Of the 33 participating HIV Clinic providers (11 attending physicians, 20 infectious disease fellows, and 2 nurse practitioners), 24 participated for the full study period and 9 participated for part of the study period (4 fellows left the study after completing their clinical fellowship year and 5 started their clinical fellowship year during the study). Twenty-nine providers had patients randomized. Two providers had insufficient patient data to derive estimates in both arms. Four fellows who joined the clinic in July of 2008 assumed the care of previously randomized patients but had no new patients during the study. Training was conducted by a clinical assistant and took approximately 5 minutes. Training was repeated for providers with outstanding alerts.


The 33 providers followed 1,011 HIV-infected patients during the 9/18/2007 to 9/17/2008 alert period; 506 patients were randomized to the intervention arm and 505 to the control arm. Patient-contributed follow-up time was similar between the two arms; 79.2% and 82.8% of patients were followed for the entire study period (p=0.153). Among those followed for less than one year, median follow-up was 214 and 225 days in the intervention and control groups (p=0.23; Figure 1). Baseline characteristics of patients randomized to the two arms were similar (Table 1). There was no statistical difference in mortality or loss-to-follow-up case status, but the number of patients who transferred medical care (“left practice”) during the study was greater in the intervention than the control arm (5.1% versus 1.2% of patients, p<0.001).

Figure 1
Flow diagram of providers and patients in the study


Over the first 12 months of the study, 2,368 alerts were generated: 111 for virologic failure, 619 for toxicity, and 1,638 for suboptimal follow-up, representing 4.7%, 26.1% and 69.2% of alerts (Table 2). For the intervention and control arms, 76.7% and 71.0% of patients received one or more alerts.

Table 2
Alert frequencies in a study of a Clinical Decision Support System for HIV outpatient care.

Of all alerts, 46.7% were “first alerts” for virologic failure, suboptimal follow-up, or one of the 11 laboratory toxicities. The mean number of total and first alerts per patient was 3.0 and 1.6 in the intervention and 3.4 and 1.5 in the control. Because atazanavir, a commonly prescribed antiretroviral, frequently causes benign hyperbilirubinemia (34), bilirubin alerts were not included in the analysis.

Of the interactive alerts, 90% were acknowledged by providers. Of the unacknowledged alerts, 93% resolved without provider response, 5.2% were overridden by a new alert, and 1.8% “timed out” after 8 weeks. Of provider-acknowledged alerts, 44.4% requested follow-up visits or laboratory testing and 56.8% were dismissed.

CD4 count

The mean number of available CD4 measurements was similar between the two arms (4.0 and 3.9; p=0.42), as were the times to first CD4 measurement (2.2 and 2.0 months, p=0.13) and last CD4 measurement (11.5 and 11.1 months, p=0.17), suggesting CD4 monitoring was similar in both arms. The number of CD4 measurements ranged from 0 to 12 in the intervention arm and 0 to 11 in the control arm. Twenty-nine patients (14 intervention and 15 control) did not have any CD4 counts during the study period. The 982 patients with at least one measurement were included in the primary analysis. Using a mixed-effects model, patients in the intervention arm had a mean CD4 count increase of 5.3 cells/mm3/month versus 3.2 cells/mm3/month (difference = 2.1 cells/mm3/month, 95% CI [0.1, 4.0], p=0.040). Mean CD4 cell count change per month for the patients of the twenty-seven providers with evaluable patients in both arms is shown in Figure 3.

Figure 3
Mean (95% CI) change in CD4 count per month by providers with evaluable patients in both arms.

Event Rates

The rate of 6-month suboptimal follow-up was lower in the intervention arm, 20.6 versus 30.1 events per 100 patient-years (p=0.022, Table 3). The median time-to-next scheduled appointment after a suboptimal follow-up alert was also shorter in the intervention arm (1.71 versus 3.48 months; p<0.001, Figure 4). No differences were observed in the rate of Grade 3 or 4 toxicities (p=0.41, Table 3). The median time-to-next scheduled appointment after a first toxicity alert was shorter in the intervention arm (2.79 versus >6 months; p=0.072, Figure 4). There were no differences in the rate of virologic failure (p=0.154) or time-to-next scheduled appointment after a virologic failure alert (p=0.728).

Figure 4
Kaplan-Meier analysis of time-to-next scheduled appointment following the first suboptimal follow-up (SOF) and first toxicity (TOX) alerts.
Table 3
Clinical event rates in a study of a Clinical Decision Support System for HIV outpatient care.


Design and implementation required approximately 1 year of informatics effort. Calls to the hospital “Help-Desk” were minimal. Following study completion, providers and administrative assistants were asked to complete a web-based questionnaire. Of the 79% of providers who completed the survey, 81% reported FastTrack saved time (≥3 on a scale of 1 to 5, where 5 indicates strong agreement), and over 90% believed FastTrack improved clinical care and should be adopted as part of standard clinical care. All administrative assistants reported FastTrack was easy to use and should be adopted as part of standard clinical care.


We conducted a randomized controlled trial of FastTrack, a novel CDSS with an integrated clinical messaging system for HIV care. We found that FastTrack resulted in greater increases in CD4 count compared to usual (non-CDSS) care. FastTrack’s interactive provider alerts decreased sub-optimal follow-up, an important clinical endpoint that has been associated with low CD4 count, virologic failure, and mortality in HIV disease (35–37). This study adds to the small but growing number of CDSS randomized controlled trials that have assessed both patient outcomes and provider behavior. In accordance with the Institute of Medicine’s report (8), the CDSS was designed to make it easier for providers to “do the right thing”: specifically, timely and efficient review of important new laboratory values and scheduling of patient follow-up. To this end, FastTrack utilized several key features: an alerting algorithm designed to minimize redundancy and alert fatigue, a time-efficient integrated workflow, individual- and population-level alerting, and EMR and email alerts.

Provider alerts have been studied previously, but largely with process measures rather than patient outcomes (26, 38). Several previous studies indicate that static alerts are often ignored by providers (9, 39–44), and over-alerting leads to physician indifference (45–49). To avoid alert fatigue, we employed a simple algorithm to notify providers of new virologic failure, extended time without a provider appointment, and serious and/or progressively worsening laboratory toxicities (29, 30). Over-alerting was avoided by censoring alerts of the same grade as previously acknowledged alerts and those generated during inpatient admissions. Several other aspects of the CDSS were also developed to improve physician workflow. First, interactive alerts were disseminated via email as well as the EMR, allowing providers to choose when they wanted to review alerts, including times when they were not using the EMR. This allowed providers to respond in a more timely fashion, including during non-clinic times when providers may have been less hurried (19, 20). Interactive alerts were also self-contained, so that providers did not need to open the EMR to find prior laboratory values or appointment history. Finally, the scheduling mechanism streamlined the scheduling process, reducing it to seconds and as few as three mouse clicks. The relative contributions of the alerting and scheduling components should be explored in future studies.

A critical requirement for any CDSS is provider acceptance. FastTrack was quickly adopted by a diverse group of providers, including full-time and part-time outpatient attending physicians, nurse practitioners, and infectious disease fellows. User feedback suggested that rapid adoption was due to ease of use and time-efficiency. Attesting to provider acceptance, 90% of interactive alerts were acknowledged, demonstrating that the FastTrack alerting mechanism was effective at capturing the attention of providers in a timely manner. Moreover, almost half of the alerts resulted in new requests for appointments or laboratory tests, suggesting providers believed FastTrack alerts were clinically important. Biweekly emails proved to be a highly effective method to communicate with providers and supplied a mechanism by which providers were able to respond to new clinical events outside of clinic sessions. Finally, a majority of providers and all of the administrative assistants reported FastTrack improved patient care, was time-efficient, and should be incorporated into routine clinical care.

FastTrack improved clinical outcomes, both an increase in CD4 count (63.6 versus 38.4 cells/mm3/year) and a decrease in sub-optimal follow-up. CD4 count changes on antiretroviral therapy vary, but our findings were similar to those from long-term follow-up of a large ACTG treatment-naïve study, which found an average increase of 100 cells/mm3/year over the first three years and 52.9 cells/mm3/year over the first seven years (50). Increase in CD4 count reflects immune reconstitution and is correlated with a lower risk of AIDS-defining illness and death (51–53). Providers were not informed of the primary endpoint, nor were alerts generated in response to CD4 cell measurements. FastTrack’s effect varied by provider, likely due to small numbers and differences in both providers and patient panels. Two co-authors (GKR, KAF) are providers in the clinic and were aware of the primary analysis. After running the mixed-effects model excluding the 113 patients of the two co-authors, patients in the intervention arm had a mean CD4 increase of 5.1 cells/mm3/month versus 3.1 cells/mm3/month (difference = 2.0, 95% CI [−0.1, 4.1]; p=0.063). FastTrack was designed to provide an efficient way to reduce sub-optimal follow-up; one potential mechanism for the differences in CD4 counts is that regular clinical follow-up promotes better clinical outcomes as measured by immune response. This hypothesis is corroborated by several studies. Ndiaye et al. found that patients who were lost to follow-up were more likely to have a CD4 count <200/mm3 (p<0.0001) and were 5 times more likely to die than patients in active care (36). Preventing loss to follow-up may lower the risk of HIV treatment interruption and treatment failure, which on a population level could theoretically lower the prevalence of HIV drug resistance, transmission, and mortality (54, 55). Similarly, others have shown a connection between loss to follow-up and adverse HIV outcomes (35, 56). Improved monitoring of initial virologic failure and laboratory toxicity may also have contributed to differences in CD4 counts.

Although developed for HIV care, the features of the CDSS can be applied to other chronic diseases. The alerting algorithms may be easily modified to monitor suboptimal follow-up, laboratory toxicities and disease markers specific to other illnesses, including diabetes mellitus, congestive heart failure, and chronic viral hepatitis. The interactive alerts, scheduling application and clinical messaging system have the potential to facilitate workflow in many settings.

This study has several limitations. FastTrack was tested at a single hospital with a strong informatics infrastructure; however, even providers who rated themselves as slow to adopt technology viewed the system favorably. Provider-based randomization may have introduced crossover bias, as providers may have closely monitored results for all of their patients; this, however, would have biased toward no difference between the two arms. Only first clinical events were analyzed to minimize bias, but a second virologic failure was uncommon (4 patients). There may have been insufficient virologic failure and toxicity events to fully assess the efficacy of the CDSS for these endpoints, due in part to the success of the study and the desire of clinic providers to implement FastTrack as standard clinical care. Differences in the number of patients transferring medical care may have arisen because sub-optimal follow-up alerts prompted providers to update case status for patients who had become inactive but whose case status had not previously been changed. Providers may have been motivated in part to do this because it prevented them from receiving further alerts on inactive patients. Changing a patient’s case status to inactive censored the patient from further consideration in the sub-optimal follow-up analysis. To address this potential bias, we conducted a sensitivity analysis that ignored case status, which also found shorter times to next appointment after suboptimal follow-up in the intervention arm (p=0.012). Case status was not considered in the primary study analysis, so it had no impact on the primary endpoint. Adherence data and reasons for alert dismissal were not captured. The study was not designed to distinguish the relative contributions of the CDSS alerting and scheduling components. Finally, to facilitate the study analysis, only a single provider was alerted for each event; therefore, the study did not take advantage of clinical teams, which typically include nurses, social workers, pharmacists, and case managers.

With informatics expected to play a crucial role in improving efficiency and lowering the cost of healthcare in the United States, it is critical to demonstrate meaningful use of the EMR. While FastTrack improved important HIV outcomes, it also incorporated several innovative features designed with general healthcare providers in mind. First, it offered a simple and time-efficient mechanism to intervene for significant events (57). Second, it minimized alerting fatigue by limiting alerts to new or increasingly important events. Finally, it used both EMR and biweekly Health Insurance Portability and Accountability Act (HIPAA)-compliant email alerts to accommodate busy clinician schedules, which was particularly important for offsite providers and those with fewer patients. The principles of FastTrack may readily transfer to other HIV clinical settings and inform the design of systems to support disease management and improve outcomes in HIV as well as other chronic diseases.

Figure 2
Flow diagram of interactive and static computer alerts
Table 1
Patient demographics in a study of a Clinical Decision Support System for HIV outpatient care.


Financial support: Grants K01AI062435 (G.K.R.), K24AI062476 (K.A.F.), P30AI42851 (G.K.R. and K.A.F.), K24DK080140 (J.B.M.), and R37AI42006 (K.A.F.); Massachusetts General Hospital Clinical Research Program.

Primary Funding Source: National Institute of Allergy and Infectious Diseases: “Bridging the Gap HIV Trials and Clinical Practice” (K01AI062435)

We thank the patients and staff of the Massachusetts General Hospital HIV clinic.


Potential conflicts of interest: no conflicts.

Reproducible research statement:

Study protocol and statistical code available from: Dr. Robbins (grobbins@partners.org)


1. Bakken S. An informatics infrastructure is essential for evidence-based practice. J Am Med Inform Assoc. 2001;8:199–201. [PMC free article] [PubMed]
2. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med. 2003;348:2526–34. [PubMed]
3. Bakken S, Cimino JJ, Hripcsak G. Promoting patient safety and enabling evidence-based practice through informatics. Med Care. 2004;42:II49–56. [PubMed]
4. Jenders RA, Osheroff JA, Sittig DF, Pifer EA, Teich JM. Recommendations for clinical decision support deployment: synthesis of a roundtable of medical directors of information systems. AMIA Annu Symp Proc. 2007:359–63. [PMC free article] [PubMed]
5. Schulman J, Kuperman GJ, Kharbanda A, Kaushal R. Discovering how to think about a hospital patient information system by struggling to evaluate it: a committee’s journal. J Am Med Inform Assoc. 2007;14:537–41. [PMC free article] [PubMed]
6. Patel V, Abramson EL, Edwards A, Malhotra S, Kaushal R. Physicians’ potential use and preferences related to health information exchange. Int J Med Inform. 2010;80:171–80. [PubMed]
7. Institute of Medicine. To Err is Human: Building A Safer Health System. 1999.
8. Crossing the Quality Chasm: A New Health System for the 21st Century. Institute of Medicine; 2001. [PubMed]
9. Organization, Functions, and Delegations of Authority. Office of the National Coordinator for Health Information Technology; 2009.
10. The American Recovery and Reinvestment Act of 2009. United States Government Printing Offices; 2009.
11. Middleton B, Hammond WE, Brennan PF, Cooper GF. Accelerating U.S. EHR adoption: how to get there from here. recommendations based on the 2004 ACMI retreat. J Am Med Inform Assoc. 2005;12:13–9. [PMC free article] [PubMed]
12. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano PR, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293:1223–38. [PubMed]
13. Poon EG, Wald J, Bates DW, Middleton B, Kuperman GJ, Gandhi TK. Supporting patient care beyond the clinical encounter: three informatics innovations from Partners Health Care. AMIA Annu Symp Proc. 2003:1072.
14. Bakken S, Roberts WD, Chen E, Dilone J, Lee NJ, Mendonca E, et al. PDA-based informatics strategies for tobacco use screening and smoking cessation management: a case study. Stud Health Technol Inform. 2007;129:1447–51.
15. Bindoff IK, Tenni PC, Peterson GM, Kang BH, Jackson SL. Development of an intelligent decision support system for medication review. J Clin Pharm Ther. 2007;32:81–8.
16. Dickey J, Girard DE, Geheb MA, Cassel CK. Using systems-based practice to integrate education and clinical services. Med Teach. 2004;26:428–34.
17. Grant RW, Wald JS, Poon EG, Schnipper JL, Gandhi TK, Volk LA, et al. Design and implementation of a web-based patient portal linked to an ambulatory care electronic health record: patient gateway for diabetes collaborative care. Diabetes Technol Ther. 2006;8:576–86.
18. Hsieh TC, Kuperman GJ, Jaggi T, Hojnowski-Diaz P, Fiskio J, Williams DH, et al. Characteristics and consequences of drug allergy alert overrides in a computerized physician order entry system. J Am Med Inform Assoc. 2004;11:482–91.
19. Lester WT, Ashburner JM, Grant RW, Chueh HC, Barry MJ, Atlas SJ. Mammography FastTrack: an intervention to facilitate reminders for breast cancer screening across a heterogeneous multi-clinic primary care network. J Am Med Inform Assoc. 2009;16:187–95.
20. Lester WT, Grant R, Barnett GO, Chueh H. Facilitated lipid management using interactive e-mail: preliminary results of a randomized controlled trial. Stud Health Technol Inform. 2004;107:232–6.
21. Lester WT, Grant RW, Barnett GO, Chueh HC. Randomized controlled trial of an informatics-based intervention to increase statin prescription for secondary prevention of coronary disease. J Gen Intern Med. 2006;21:22–9.
22. Cowen M, Halasyamani LK, McMurtrie D, Hoffman D, Polley T, Alexander JA. Organizational structure for addressing the attributes of the ideal healthcare delivery system. J Healthc Manag. 2008;53:407–19.
23. Crosson FJ. The delivery system matters. Health Aff (Millwood). 2005;24:1543–8.
24. Doebbeling BN, Chou AF, Tierney WM. Priorities and strategies for the implementation of integrated informatics and communications technology to improve evidence-based practice. J Gen Intern Med. 2006;21 (Suppl 2):S50–7.
25. McGowan JJ, Cusack CM, Poon EG. Formative evaluation: a critical component in EHR implementation. J Am Med Inform Assoc. 2008;15:297–301.
26. Rind DM, Safran C, Phillips RS, Wang Q, Calkins DR, Delbanco TL, et al. Effect of computer-based alerts on the treatment and outcomes of hospitalized patients. Arch Intern Med. 1994;154:1511–7.
27. Ruland CM, Bakken S. Developing, implementing, and evaluating decision support systems for shared decision making in patient care: a conceptual model and case illustration. J Biomed Inform. 2002;35:313–21.
28. Chueh H. OnCall: Clinical Web Portals. 2012.
29. Robbins GK, Daniels B, Zheng H, Chueh H, Meigs JB, Freedberg KA. Predictors of antiretroviral treatment failure in an urban HIV clinic. J Acquir Immune Defic Syndr. 2007;44:30–7.
30. Robbins GK, Johnson KL, Chang Y, Jackson KE, Sax PE, Meigs JB, et al. Predicting virologic failure in an HIV clinic. Clin Infect Dis. 2010;50:779–86.
31. Division of AIDS table for grading the severity of adult and pediatric adverse events (clarification). 2009 Aug:16–20.
32. Guide to RPDR Data. Partners Healthcare; 2012.
33. Lin DY, Wei LJ. The robust inference for the proportional hazards model. J Am Stat Assoc. 1989;84:1074–1078.
34. Le Tiec C, Barrail A, Goujard C, Taburet AM. Clinical pharmacokinetics and summary of efficacy and tolerability of atazanavir. Clin Pharmacokinet. 2005;44:1035–50.
35. Mugavero MJ, Lin HY, Willig JH, Westfall AO, Ulett KB, Routman JS, et al. Missed visits and mortality among patients establishing initial outpatient HIV treatment. Clin Infect Dis. 2009;48:248–56.
36. Ndiaye B, Ould-Kaci K, Salleron J, Bataille P, Bonnevie F, Cochonat K, et al. Characteristics of and outcomes in HIV-infected patients who return to care after loss to follow-up. AIDS. 2009;23:1786–9.
37. Dalal RP, Macphail C, Mqhayi M, Wing J, Feldman C, Chersich MF, et al. Characteristics and outcomes of adult patients lost to follow-up at an antiretroviral treatment clinic in Johannesburg, South Africa. J Acquir Immune Defic Syndr. 2008;47:101–7.
38. Safran C, Rind DM, Davis RB, Ives D, Sands DZ, Currier J, et al. Guidelines for management of HIV infection with computer-based patient’s record. Lancet. 1995;346:341–6.
39. Isaac T, Weissman JS, Davis RB, Massagli M, Cyrulik A, Sands DZ, et al. Overrides of medication alerts in ambulatory care. Arch Intern Med. 2009;169:305–11.
40. Judge J, Field TS, DeFlorio M, Laprino J, Auger J, Rochon P, et al. Prescribers’ responses to alerts during medication ordering in the long term care setting. J Am Med Inform Assoc. 2006;13:385–90.
41. Lo HG, Matheny ME, Seger DL, Bates DW, Gandhi TK. Impact of non-interruptive medication laboratory monitoring alerts in ambulatory care. J Am Med Inform Assoc. 2009;16:66–71.
42. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13:138–47.
43. Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians’ decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003;163:2625–31.
44. Shah NR, Seger AC, Seger DL, Fiskio JM, Kuperman GJ, Blumenfeld B, et al. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006;13:5–11.
45. Gurwitz JH, Field TS, Rochon P, Judge J, Harrold LR, Bell CM, et al. Effect of computerized provider order entry with clinical decision support on adverse drug events in the long-term care setting. J Am Geriatr Soc. 2008;56:2225–33.
46. Payne TH. Computer decision support systems. Chest. 2000;118:47S–52S.
47. de Lusignan S, Teasdale S. Achieving benefit for patients in primary care informatics: the report of an international consensus workshop at Medinfo 2007. Inform Prim Care. 2007;15:255–61.
48. Feldbaum J. Doing alerts right. Implemented correctly, alerts can greatly enhance a CPOE system, but done wrong, they can send physicians back to pens and paper. Healthc Inform. 2008;25:95–6.
49. Kucher N, Puck M, Blaser J, Bucklar G, Eschmann E, Luscher TF. Physician compliance with advanced electronic alerts for preventing venous thromboembolism among hospitalized medical patients. J Thromb Haemost. 2009;7:1291–6.
50. Lok JJ, Bosch RJ, Benson CA, Collier AC, Robbins GK, Shafer RW, et al. Long-term increase in CD4+ T-cell counts during combination antiretroviral therapy for HIV-1 infection. AIDS. 2010;24:1867–76.
51. Baker JV, Peng G, Rapkin J, Krason D, Reilly C, Cavert WP, et al. Poor initial CD4+ recovery with antiretroviral therapy prolongs immune depletion and increases risk for AIDS and non-AIDS diseases. J Acquir Immune Defic Syndr. 2008;48:541–6.
52. Moore RD, Keruly JC. CD4+ cell count 6 years after commencement of highly active antiretroviral therapy in persons with sustained virologic suppression. Clin Infect Dis. 2007;44:441–6.
53. Piketty C, Weiss L, Thomas F, Mohamed AS, Belec L, Kazatchkine MD. Long-term clinical outcome of human immunodeficiency virus-infected patients with discordant immunologic and virologic responses to a protease inhibitor-containing regimen. J Infect Dis. 2001;183:1328–35.
54. Arnedo-Valero M, Garcia F, Gil C, Guila T, Fumero E, Castro P, et al. Risk of selecting de novo drug-resistance mutations during structured treatment interruptions in patients with chronic HIV infection. Clin Infect Dis. 2005;41:883–90.
55. Deeks SG, Gange SJ, Kitahata MM, Saag MS, Justice AC, Hogg RS, et al. Trends in multidrug treatment failure and subsequent mortality among antiretroviral therapy-experienced patients with HIV infection in North America. Clin Infect Dis. 2009;49:1582–90.
56. Mocroft A, Kirk O, Aldins P, Chies A, Blaxhult A, Chentsova N, et al. Loss to follow-up in an international, multicentre observational study. HIV Med. 2008;9:261–9.
57. Poon EG, Wright A, Simon SR, Jenter CA, Kaushal R, Volk LA, et al. Relationship between use of electronic health record features and health care quality: results of a statewide survey. Med Care. 2010;48:203–9.