Little evidence exists on effective interventions to integrate HIV-care guidelines into practices within developing countries. This study tested the hypothesis that clinical summaries with computer-generated reminders could improve clinicians' compliance with CD4 testing guidelines in the resource-limited setting of sub-Saharan Africa.
A prospective comparative study of two randomly selected outpatient adult HIV clinics in western Kenya. Printed summaries with reminders for overdue CD4 tests were made available to clinicians in the intervention clinic but not in the control clinic.
Changes in order rates for overdue CD4 tests were compared between and within the two clinics.
The computerized reminder system identified 717 encounters (21%) with overdue CD4 tests. Analysis by study assignment (regardless of whether summaries were printed) revealed that with computer-generated reminders, CD4 order rates were significantly higher in the intervention clinic than in the control clinic (53% vs 38%, OR=1.80, 95% CI 1.34 to 2.42, p<0.0001). When the comparison was restricted to encounters where summaries with reminders were printed, order rates in the intervention clinic were even higher (63%). The intervention clinic increased CD4 ordering from 42% before reminders to 63% with reminders (a 50% relative increase, OR=2.32, 95% CI 1.67 to 3.22, p<0.0001), compared to only an 8% increase from the prestudy baseline in the control clinic (95% CI 0.83 to 1.46, p=0.51).
Evaluation was conducted at two clinics in a single institution.
Clinical summaries with computer-generated reminders significantly improved clinician compliance with CD4 testing guidelines in the resource-limited setting of sub-Saharan Africa. This technology can have broad applicability to improve quality of HIV care in these settings.
Healthcare systems in the world's poorest places must care for large numbers of patients with a heavy disease burden. Paradoxically, these same healthcare systems have too few resources and too few skilled personnel.1 In sub-Saharan Africa, the challenge of providing adequate care is increasingly magnified as large numbers of HIV-positive patients seek treatment.2 3 While HIV care in developed countries would typically be managed by specialist physicians, resource-limited settings often utilize less-trained healthcare workers out of necessity.4–7 The combination of overworked staff with limited training, increasingly busy clinics, the challenges of providing chronic disease management, and the ‘non-perfectibility of man’8 often results in suboptimal patient care.9
Approaches are urgently needed to improve the quality and outcomes of care offered to patients in these resource-limited settings. Unfortunately, no amount of training will overcome the limitations of human providers in high-volume clinics with complex care protocols. However, basic information management tools may help healthcare workers do the right things. One such tool is the Clinical Decision Support System (CDSS).10 CDSS use data stored in electronic health records (EHRs) to provide clinicians with care suggestions and reminders whenever there is a deviation from the accepted standard of care, and are thus among the most powerful tools an EHR can offer. In the developed world, CDSS have been shown to improve clinician behaviors and the quality of healthcare.11 12 In fact, a study conducted in the USA by Safran et al demonstrated that computer-based alerts and reminders were effective in helping clinicians adhere to HIV care guidelines.13
Although more and more resource-limited settings are implementing EHRs,14 little research documents whether these systems affect clinician behavior or improve the quality of care. Extrapolating from CDSS' success in the developed world, we hypothesized that CDSS (implemented within EHRs) would change clinician behavior and improve the quality of care offered to HIV-positive patients in the resource-limited setting of sub-Saharan Africa. In this study, we assessed whether a CDSS that provides clinicians in an HIV care system in western Kenya with ‘just-in-time’ clinical summary reports and patient-specific care suggestions could improve adherence to accepted CD4 testing guidelines.
This study was conducted at two randomly selected adult HIV clinics affiliated with the partnership between United States Agency for International Development and the Academic Model Providing Access to Healthcare (USAID–AMPATH) in Western Kenya.15 This program provides comprehensive care to over 100 000 active HIV-positive patients through 23 parent and 23 satellite clinics (figure 1). The urban USAID–AMPATH facility is located in Eldoret, Kenya and is home to four HIV clinics—one pediatric and three adult clinics. All three adult clinics offer the same services in the same building, with 90% of the patient visits handled by nurses and clinical officers (equivalent to physician assistants) without the presence of a supervising physician. Enrollment to the clinics is based largely on the order of presentation, and there is very little patient crossover between clinics.
Since 2004, USAID–AMPATH clinics have used the AMPATH Medical Record System (AMRS) to store comprehensive, longitudinal, electronic patient records for all enrolled patients.16 AMRS is the original implementation of OpenMRS, an open-source EHR platform deployed widely in the developing world.17 18 Patient records in the system contain demographic information, historical and physical examination data, problem lists, medications, diagnostic test results, and visit data. Clinical information is stored largely as coded concepts (as opposed to free text) for easy retrieval and analysis.19
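To illustrate what coded (rather than free-text) storage looks like, a single observation in an OpenMRS-style record can be sketched roughly as below. The field names, concept ID, and values are hypothetical placeholders for illustration, not actual AMRS concept-dictionary entries.

```python
# Illustrative sketch of a coded observation, loosely modeled on a
# concept-based data model such as OpenMRS's. The concept ID, names,
# and values below are hypothetical, not real AMRS dictionary entries.
observation = {
    "patient_id": 12345,            # internal patient identifier (hypothetical)
    "encounter_date": "2009-02-10",
    "concept_id": 5497,             # reference to a dictionary concept, e.g. "CD4 COUNT"
    "concept_name": "CD4 COUNT",
    "value_numeric": 287.0,         # structured value rather than narrative text
}

def is_coded(obs):
    """A coded observation carries a concept reference and a structured
    value, which is what makes retrieval and analysis straightforward."""
    return "concept_id" in obs and "value_text" not in obs
```

Because every observation points at a shared concept dictionary, queries such as "all patients whose last CD4 count is older than six months" become simple lookups instead of free-text parsing.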
Clinicians caring for AMPATH patients do not enter data directly into AMRS but rather complete paper encounter forms that contain clinical parameters and categorical observations previously defined and encoded into the AMRS concept dictionary (see appendix, available as an online data supplement at http://www.jamia.org). Where necessary, clinicians can write down diagnoses, test results, and other observations as free-text if these are not included in checklists on the encounter form. Clerks with basic computer skills and minimal medical knowledge enter data from the encounter forms into the AMRS. The encounter forms are then placed in the patient's paper clinic chart, which is available to the clinician during patient care.
With input from AMPATH clinicians, we developed a module within OpenMRS which generated a patient-specific clinical summary that displayed selected information from the patient's record to provide a quick reference to the most relevant data needed by the clinicians. The module also contained CDSS functionality which appended patient-specific care suggestions and reminders to the bottom of the clinical summary (figure 2).20 Generated clinical summaries with reminders could be printed and were typically attached to patients' paper charts at the time of a patient's visit.
For the current study, we implemented five care suggestions in the CDSS which recommended that an overdue CD4 test be ordered if particular criteria were met (table 1). Overdue CD4 studies were determined based on testing algorithms used for clinical care at USAID–AMPATH. These algorithms were based on recommendations by the WHO21 and Kenyan Ministry of Health,22 and had been adopted through consensus with specific attention to financial constraints within the USAID–AMPATH setting.
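A reminder rule of this kind can be sketched as follows. This is an illustration only, not the study's actual algorithms: the five criteria actually used are listed in table 1, and the 180-day retesting interval and function names below are assumptions.

```python
from datetime import date, timedelta

# Hypothetical sketch of one CD4-reminder rule. The study implemented five
# care suggestions derived from WHO / Kenyan Ministry of Health algorithms
# (table 1); the 180-day interval here is an illustrative assumption only.
CD4_RETEST_INTERVAL = timedelta(days=180)

def cd4_overdue(last_cd4_date, visit_date):
    """Return True if a CD4 test should be suggested at this visit."""
    if last_cd4_date is None:               # never tested: always overdue
        return True
    return visit_date - last_cd4_date > CD4_RETEST_INTERVAL

def generate_reminders(patient, visit_date):
    """Append patient-specific care suggestions to the clinical summary,
    in the spirit of the summary module described above."""
    reminders = []
    if cd4_overdue(patient.get("last_cd4_date"), visit_date):
        reminders.append("Consider ordering a CD4 panel: last result is overdue.")
    return reminders
```

For example, a patient last tested in June 2008 and seen in February 2009 would trigger the suggestion, while one tested five weeks earlier would not.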
We assessed the effect of clinical summary reports with CD4 care suggestions on adherence to testing protocols in a controlled trial at two randomly selected adult clinics affiliated with USAID–AMPATH. One clinic was the intervention clinic for our study, whereas the other was the control clinic. For both study clinics, we determined baseline order rates for overdue CD4 tests 2 months prior to the intervention using the same algorithms for generating study CD4 care suggestions.
Our study took place in February 2009. When an adult HIV-positive patient presented for a return visit at the intervention clinic during the study period, a patient summary report was generated at registration. A printout of the summary (with the reminder section containing suggestions for CD4 testing, if indicated) was placed at the front of the patient's paper chart, along with a blank encounter form. These were made available to all intervention clinic providers who interacted with the patient during that visit. The summaries with reminders could also be viewed on a computer placed in the intervention clinic. Clinicians recorded data on the encounter form where appropriate. If they wanted to order a CD4 Panel, they would check the option for ‘CD4 Panel’ in the ‘Tests Ordered’ section of the encounter form (see appendix: item 18, available as an online data supplement at http://www.jamia.org). Data from completed encounter forms were entered into AMRS, and the forms placed in the patient's paper chart. For the control clinic, the computer also generated summaries with reminders, but no printouts were made available to clinicians, and there was no computer through which the summaries could be viewed. Patients had to be eligible for CD4 testing (ie, have a CD4 care suggestion generated) to be included in the study. The study was approved by the Institutional Review Boards at Indiana University School of Medicine in Indianapolis, Indiana and the Institutional Review and Ethics Committee at Moi University School of Medicine in Eldoret, Kenya.
The unit of analysis for all analyses was the individual clinic visit—that is, each individual clinician–patient encounter. This unit was chosen because patients typically saw whichever clinician was available at the time of their visit, instead of seeing the same clinician every time. The primary outcome of the study was clinicians' compliance rates with ordering CD4 laboratory studies. We primarily compared compliance rates between intervention and control clinics during the study period, controlling for confounding of measured characteristics and clustering effects in our model. To eliminate potential confounding due to unmeasured characteristics that differed between the two clinics, we also conducted a pre–post comparison within each clinic.
Continuous variables were summarized by mean (SD) or median (IQR), and categorical variables summarized by frequency and percentage. Comparisons of continuous and categorical variables between different groups were performed with Wilcoxon rank-sum and Fisher exact tests, respectively. Demographic data used for these comparisons were obtained as part of the routine procedures for clinic registration and patient care.
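For a 2×2 table of the kind compared here, the Fisher exact test follows directly from the hypergeometric distribution. The sketch below is a minimal standard-library implementation of the common two-sided convention (summing the probabilities of all tables at most as probable as the observed one); it is not the study's SAS code.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].
    Margins are treated as fixed under the null hypothesis."""
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)

    def prob(x):
        # Hypergeometric probability of x counts in cell (1,1) given the margins.
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo_x, hi_x = max(0, col1 - row2), min(col1, row1)
    # Sum over all feasible tables whose probability does not exceed the
    # observed table's (with a small tolerance for floating-point ties).
    return sum(prob(x) for x in range(lo_x, hi_x + 1)
               if prob(x) <= p_obs * (1 + 1e-12))
```

For the balanced table [[3, 1], [1, 3]], for instance, this returns 34/70 ≈ 0.486, matching the textbook hypergeometric calculation.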
We used a generalized linear mixed-effects model to account for (1) confounding effects due to potentially non-randomized assignment to intervention and control groups and (2) clustering effects due to the fact that some clinicians treated more than one patient. To identify covariates for purpose (1), we first performed univariate analyses to identify factors associated with intervention assignment, and all significant factors were included in the model in addition to the intervention variable. Some patients appeared more than once in these data, indicating a potential clustering effect by patient; however, such cases were relatively few (less than 4%), and including the additional patient clustering effect did not change the results substantially. We therefore report results from models with clustering by clinician only. We also used a generalized linear mixed-effects model to assess changes in compliance with reminders over time. For all analyses, a two-sided p value of <0.05 was considered statistically significant. All analyses were performed in SAS 9.1.
Baseline evaluation conducted 2 months prior to our study revealed that 436 (29%) of 1482 patient visits in the intervention clinic had an overdue CD4 test compared to 489 (31%) of 1581 visits in the control clinic. During this time, baseline CD4 order rates for the overdue tests were different between the intervention clinic (42%) and control clinic (36%) (p=0.04, OR=1.32, 95% CI 1.01 to 1.72).
A total of 3108 patients accounted for 3405 clinic visits during the study period in February 2009; there were 1929 visits to the intervention clinic and 1476 visits to the control clinic. For 361 (19%) of the visits (349 unique patients) to the intervention clinic, a CD4 test was overdue compared to 355 (24%) of visits (341 unique patients) to the control clinic. All patients with overdue CD4 tests were included in the study. Study patients had a mean age of 38 years, 65% were women, and 64% were on antiretroviral medication for HIV (table 2). Twenty-five (3.6%) of these patients had more than one clinical encounter during the study period, with no patients crossing over between the two study clinics. Significant differences between intervention and control patients included WHO stage, most recent prior CD4 count, and years since clinical enrollment. All subsequent analyses adjusted for these characteristics.
A total of 36 clinicians (29 clinical officers, five nurses, and two medical doctors) cared for the study patients, with four clinicians crossing over between the two clinics. At the beginning of the study, clinicians at the two clinics had worked at AMPATH for an average of 3.01 years (SD=1.95 years), with experience levels and gender distribution not varying significantly between the two clinics for those who did not cross over (p=0.38 and p=0.09 respectively).
During the study period in February 2009, clinical summaries with CD4 care suggestions were printed and made available to clinicians only in the intervention clinic. In 140 (39%) of the 361 patient visits, the summaries with reminders were inadvertently not printed and thus were not available to intervention clinicians, mostly because the computer or printer in the clinic was not working. As such, only 221 (61%) of the clinical summaries with reminders in the intervention clinic were printed and made available to clinicians. In the control clinic, none of the summaries with reminders were printed.
Analysis by study assignment (regardless of whether summaries were printed, similar to intention-to-treat analysis in randomized trials) revealed that clinicians ordered CD4 tests during 53% of all visits to the intervention clinic when CD4 tests were indicated, versus 38% in the control clinic (p<0.0001, OR=1.80, 95% CI 1.34 to 2.42). Adjusting for the number of years a clinician had worked in AMPATH, the number of years a patient had been in the AMPATH system, time since last CD4 test, last CD4 count, WHO stage, and clinician gender (variables correlated with intervention assignment for each clinician–patient encounter) yielded an OR of 2.07 (p=0.04, 95% CI 1.05 to 4.07). Considering only encounters for which reminders were printed, order rates between the intervention and control groups were 63% versus 38% (adjusted OR 2.90, 95% CI 1.66 to 5.05, p=0.0002). Conversely, for the 140 intervention-clinic visits where the summaries with reminders were not printed, CD4 order rates were no different from those in the control clinic (36% vs 38%, OR=0.91, 95% CI 0.60 to 1.35, p=0.68).
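The unadjusted odds ratio and its Wald 95% CI can be reproduced approximately from the published figures. The counts below are back-calculated from the rounded percentages (53% of 361 intervention visits ≈ 191 ordered; 38% of 355 control visits ≈ 135 ordered), so the result differs slightly from the reported OR=1.80 (95% CI 1.34 to 2.42).

```python
from math import log, sqrt, exp

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for the 2x2 table [[a, b], [c, d]] with a
    Wald confidence interval computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Counts back-calculated from the reported rounded percentages:
# intervention: 191 of 361 visits with an order; control: 135 of 355.
or_, lo, hi = odds_ratio_ci(191, 361 - 191, 135, 355 - 135)
```

With these approximate counts the calculation gives OR ≈ 1.83 (95% CI ≈ 1.36 to 2.47), close to the published unadjusted estimate; the gap is attributable to rounding in the reported percentages.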
When clinical summaries with reminders were available to clinicians, compliance rates varied by reminder type (table 3). Availability of three of the reminders led to statistically significant increases in CD4 order rates; the other two, though not achieving statistical significance, showed a trend toward increased ordering. Clinicians who received the reminders varied in how often they accepted them. Among the seven clinicians who saw the reminders for at least 10 patients (range 12 to 45), ordering rates ranged from 52% to 86%. A strong negative correlation (<−0.9) between the random intercept and slope was observed among these seven clinicians, suggesting that compliance rates converged over the course of the intervention, with steeper increases in acceptance among clinicians whose initial compliance was lower.
To eliminate potential confounding due to unmeasured characteristics that differ between the two clinics, we also conducted a pre–post comparison within each clinic. In a pre–post analysis within the intervention clinic, comparing baseline data (November 2008) with intervention data (February 2009), we see a statistically significant increase in the order rates for overdue CD4 studies whether we use all study cases (42% vs 53%, p=0.005, OR=1.50, 95% CI 1.13 to 1.98) or exclude cases where the clinical summaries were not printed (42% vs 63%, p<0.0001, OR=2.32, 95% CI 1.67 to 3.22).
Order rates for overdue CD4 counts in the control clinic between baseline data (November 2008) and our study period (February 2009) were not different (36% vs 38%, p=0.51, OR=1.10, 95% CI 0.83 to 1.46).
The results of this study provide compelling evidence that clinical summaries containing computer-generated care suggestions can improve clinician adherence to HIV care guidelines in the resource-limited setting of sub-Saharan Africa. The effect of the reminders was even greater for clinicians whose compliance rates had been lower before the intervention, and compliance improved as the study progressed. Given that scant data exist on effective interventions to translate evidence-based medicine into practice in developing countries,23 these findings offer a powerful new tool for improving HIV care in these resource-limited settings. The study also highlights the importance of EHRs in these settings.
As stated by Dexter et al,24 the easy sustainability of computer-based reminder systems contrasts with the weaknesses of such approaches as manual reviewing of charts25 and physician-directed continuing medical education.26 Messages in computer-generated reminders can be tailored for all levels of providers—an approach that is particularly relevant in settings where less-trained personnel provide a large amount of care. In addition, CDSS allows evolving care protocols to be seamlessly, efficiently, and broadly introduced into clinical practice.
As in prior reminder studies from the developed world,12 24 27 28 we observed a variation in adherence to computer reminders among clinicians. Even though we did not formally evaluate reasons for non-adherence, informal questioning of reminder recipients indicated that several factors were at play. Some clinicians had rote practice patterns and simply disagreed with the algorithms used in the reminder: with education about the reminders, acceptance rates improved. Occasionally, clinicians were right to ignore the reminders because the recommended action was inappropriate for the particular patient during that visit, often because the clinician had other information not available in the computer. In fact, because computers are limited by the data they contain, computer reminders should be considered as care suggestions to the clinicians. The final decision must still rest with the clinician, as clinical judgment should always take precedence over the computer's judgment.
Several limitations of our study deserve mention. The generalizability of our findings is limited by the fact that only a few reminders were implemented, and at a single clinical site. We observed variation in compliance by type of CD4 reminder, which demonstrates that not all care suggestions will be treated equally by clinicians. Our evaluation lasted only a short period of time, and we cannot rule out a decline in the reminders' effectiveness with longer use. Another limitation is that the intervention would have an uncertain role in settings with no EHR. However, many care rules are based on limited data (eg, gender, age, duration of care) and do not require fully implemented EHRs; for example, reminders about childhood immunizations are based solely on the child's age and history of prior immunizations, and the latter can easily be maintained with simple flowsheets in patients' charts. Even where EHRs have been implemented in developing countries, the generalizability of our intervention may be limited by the additional level of technology required to implement CDSS, although our intervention used only a single computer at the intervention clinic. Lastly, the comparative (rather than randomized) design may have introduced some bias, although we controlled for significant covariates in the analyses.
This study provides a model through which HIV care guidelines can be broadly implemented in resource-limited settings. The approach can be used to provide reminders about drug–drug interactions and known allergies,29 and reminders on overuse or underuse of diagnostic tests or medications. The guidelines can also extend beyond HIV to encompass a broad set of diseases, especially for conditions such as diabetes and hypertension where particular care protocols are well accepted as best practice. To better delineate the specific effects of reminders, we are conducting evaluations in which patient summaries are being presented to both intervention and control groups, but clinical reminders presented only to the intervention group. In the future, we hope to demonstrate the impact of CDSS on patient outcomes and quality of care in these resource-limited settings, and determine sustainability of the observed impact of reminders over time.
Clinical summaries with computer-generated reminders significantly improved clinician adherence to CD4 testing guidelines in this first study of its kind in sub-Saharan Africa. This technology can have broad applicability to improve quality of HIV care in these settings.
We would like to thank our patients and providers at the study clinics. Special thanks to S Masit, P Tanui, J Lelei, B McKown, B Wolfe, J Kariuki, J Lagat, R Vreeman, and A Yeung.
Funding: This work was supported by a grant from the Abbott Fund, and in part by a grant to the USAID–AMPATH Partnership from the United States Agency for International Development as part of the President's Emergency Plan for AIDS Relief (PEPFAR).
Competing interests: None.
Ethics approval: Ethics approval was provided by the Institutional Review Boards at Indiana University School of Medicine in Indianapolis, Indiana and the Institutional Review and Ethics Committee at Moi University School of Medicine in Eldoret, Kenya.
Provenance and peer review: Not commissioned; externally peer reviewed.