We aimed to improve pediatric inpatient surveillance at a busy referral hospital in Malawi with 2 new programs: (1) the provision of vital sign equipment and implementation of an inpatient triage program (ITAT) that includes a simplified pediatric severity-of-illness score; (2) task-shifting ITAT to a new cadre of health care workers called “Vital Sign Assistants” (VSAs).
This study, conducted on the pediatric inpatient ward of a large referral hospital in Malawi, was divided into 3 phases, each lasting 4 weeks. In Phase A, we collected baseline data. In Phase B, we provided 3 new automated vital sign poles and implemented ITAT with current hospital staff. In Phase C, VSAs were introduced and performed ITAT. Our primary outcome measures were the number of vital sign assessments performed and clinician notifications to reassess patients with high ITAT scores.
We enrolled 3,994 patients who received 5,155 vital sign assessments. Assessment frequency was similar in Phases A (0.67 assessments/patient) and B (0.61 assessments/patient), but increased 3.6-fold in Phase C (2.44 assessments/patient, p<0.001). Clinician notifications increased from Phases A (84) and B (113) to Phase C (161, p=0.002). Inpatient mortality fell from Phase A (9.3%) to Phases B (5.7%) and C (6.9%).
ITAT with VSAs improved vital sign assessments and nearly doubled clinician notifications of patients needing further assessment due to high ITAT scores, while equipment alone made no difference. Task-shifting ITAT to VSAs may improve outcomes in pediatric hospitals in the developing world.
pulse oximetry; pediatric early warning score; task-shift; Malawi; ITAT; vital sign
To develop a new pediatric illness severity score, called Inpatient Triage, Assessment, and Treatment (ITAT), for resource-limited settings to identify hospitalized patients at highest risk of death and facilitate urgent clinical re-evaluation.
We performed a nested case-control study at a Malawian referral hospital. The ITAT score was derived from 4 equally-weighted variables, yielding a cumulative score between 0 and 8. Variables included oxygen saturation, temperature, and age-adjusted heart and respiratory rates. We compared the ITAT score between cases (deaths) and controls (discharges) in predicting death within 2 days. Our analysis includes predictive statistics, bivariable and multivariable logistic regression, and calculation of data-driven scores.
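The score construction described above (four equally weighted components, each contributing 0–2 points, summing to 0–8) can be sketched as follows. The SpO2 and temperature cut-points below are invented placeholders for illustration, not the published ITAT thresholds, and the age-adjusted heart- and respiratory-rate points are assumed to come from lookup tables not reproduced here.

```python
def itat_score(spo2_pct, temp_c, heart_rate_pts, resp_rate_pts):
    """Cumulative ITAT-style score (0-8): four components, each scored 0-2.

    The SpO2 and temperature cut-points are hypothetical placeholders;
    heart_rate_pts and resp_rate_pts (each 0-2) are assumed to be looked
    up from age-adjusted vital sign tables, which are not reproduced here.
    """
    def spo2_pts(s):
        return 2 if s < 90 else (1 if s < 94 else 0)

    def temp_pts(t):
        # 2 points for marked hypo-/hyperthermia, 1 for a mild abnormality
        return 2 if (t < 35.0 or t >= 39.5) else (1 if (t < 36.0 or t >= 38.5) else 0)

    return spo2_pts(spo2_pct) + temp_pts(temp_c) + heart_rate_pts + resp_rate_pts

print(itat_score(88, 39.6, 2, 1))  # severely abnormal vitals -> high score
```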
A total of 54 cases and 161 controls were included in the analysis. The area under the receiver operating characteristic curve was 0.76. At an ITAT cutoff of 4, the sensitivity, specificity, and likelihood ratio were 0.44, 0.86, and 1.70, respectively. A cumulative ITAT score of 4 or higher was associated with increased odds of death (OR: 4.80; 95% CI: 2.39–9.64). A score of 2 for all individual vital signs was a statistically significant independent predictor of death.
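The cutoff statistics follow from a standard 2×2 table. The counts below are illustrative reconstructions chosen to be consistent with the reported sensitivity (0.44), specificity (0.86), and odds ratio (4.80) given 54 cases and 161 controls; they are not the study's raw data.

```python
# Hypothetical 2x2 table at the ITAT >= 4 cutoff (counts reconstructed
# for illustration, not taken from the paper).
tp, fn = 24, 30    # cases (deaths): score >= 4 / score < 4
fp, tn = 23, 138   # controls (discharges): score >= 4 / score < 4

sensitivity = tp / (tp + fn)         # P(score >= 4 | death)
specificity = tn / (tn + fp)         # P(score < 4 | discharge)
odds_ratio = (tp * tn) / (fp * fn)   # cross-product odds ratio

print(round(sensitivity, 2), round(specificity, 2), round(odds_ratio, 2))
```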
We developed an inpatient triage tool (ITAT) appropriate for resource-constrained hospitals that identifies high-risk children after hospital admission. Further research is needed to study how best to operationalize ITAT in developing countries.
vital sign; early warning score; PEWS; pediatric; Malawi; ITAT
Culturing is generally considered to be the gold standard for detecting Vibrio cholerae in stool, though it is not always feasible in resource-limited settings. The Crystal VC dipstick test allows for rapid stool testing for the diagnosis of cholera in the field. However, previous studies have found low specificities (49%–79%) associated with direct testing of stool for cholera using this kit when compared to culturing.
In the present study, conducted in Dhaka, Bangladesh in 2013, we compared direct testing using the Crystal VC dipstick test and testing after 6 hours of enrichment in alkaline peptone water (APW) against bacterial culture as the gold standard. Samples positive by dipstick but negative by culture were also tested using PCR.
Stool was collected from 125 patients. Compared to bacterial culture, the overall specificities of direct testing and testing after 6-hour enrichment in APW were 91.8% and 98.4%, respectively (p=0.125), and the sensitivities were 65.6% and 75.0%, respectively (p=0.07).
The increase in the sensitivity of the Crystal VC kit with the 6-hour enrichment step in APW, compared to direct testing, was marginally significant. The Crystal VC dipstick was found to have a much higher specificity than previously reported (91–98%). Therefore, this method provides a promising screening tool for cholera outbreak surveillance in resource-limited settings, where elimination of false-positive results is critical.
To document the frequency of hygiene practices of mothers and children in a shantytown in Lima, Peru.
Continuous monitoring over three 12-h sessions in households without in-house water connections to measure: (i) water and soap use of 32 mothers; (ii) frequency of interrupting faecal-hand contamination by washing; and (iii) the time until faecal-hand contamination became a possible transmission event.
During 1008 h of observation, 55% (65/119) of mothers’ and 69% (37/54) of children's faecal-hand contamination events were not followed within 15 min by handwashing or bathing. Nearly 40% (67/173) of faecal-hand contamination events became possible faecal-oral transmission events. There was no difference in the time-until-transmission between mothers and children (P = 0.43). Potential transmission of faecal material to food or mouth occurred in 64% of cases within 1 h of hand contamination. Mean water usage (6.5 l) was low compared to international disaster relief standards.
We observed low volumes of water usage, inadequate handwashing, and frequent opportunities for faecal contamination and possible transmission in this water-scarce community.
handwashing; water supply; hygiene; Peru
To examine the interaction between CD4 cell count, viral load suppression and duration of ART on mortality.
Cohort analysis of HIV-infected patients initiating ART between April 2004 and June 2011 at a large public-sector clinic in Johannesburg, South Africa. A log-linear model with Poisson distribution was used to estimate risk of death as a function of the interaction between current CD4 count, current viral load suppression and duration on ART in 12-month intervals. We calculated predicted mortality using estimated coefficients within combinations of predictors.
Among 14,932 ART patients, 1,985 (13.3%) died. Current CD4 count was the strongest predictor of death (<50 vs. ≥550 cells/mm3, RR: 46.3; 95%CI: 26.8–80), while unsuppressed current viral load vs. suppressed (RR: 1.8; 95%CI: 1.5–2.1) and short duration of ART (0–11.9 vs. 66–71.9 months, RR: 1.7; 95%CI: 1.2–2.3) also predicted death. Our interaction model showed that mortality was highest in the first 12 months on treatment across all CD4 and viral load strata. As current CD4 count and duration on ART increased and viral load suppression occurred, mortality dropped. The relative effect of current CD4 count varied strongly by viral load and duration of ART (RR from 1.3 to 55). Lack of suppression increased the risk of mortality up to 6-fold, depending on time on ART and current CD4 count.
Our findings show that while CD4 count is the strongest predictor of death, the effect is modified by viral load and the duration of ART. Assessment of risk should take into account all three factors.
current CD4 count; current viral load; antiretroviral therapy; mortality; resource limited setting
Recent randomised controlled trials have examined the issue of when to start antiretroviral therapy in HIV-infected patients with tuberculosis (TB). There is, however, little information on the effect of timing of antiretroviral therapy (ART) initiation on outcomes in real-life, non-clinical trial, rural settings in sub-Saharan Africa.
We conducted an observational cohort study of all HIV-infected TB patients presenting to a rural hospital in Kenya between 2005 and 2009. We analysed the association between timing of initiation of ART and mortality, using a Cox regression survival analysis, adjusted for measured confounders.
404 antiretroviral-naïve HIV/TB co-infected patients were included in the study. Initiation of ART during the first 8 weeks of TB treatment (early group) was not associated with a change in mortality at one year compared to initiation after 8 weeks (late group) [hazard ratio (HR) 0.74, 95% confidence interval (CI) 0.33–1.64, p=0.46]. In those with baseline CD4 counts ≤50 cells/μl, there was a significant reduction in mortality in the early group compared to the late group (HR 0.20, 95% CI 0.042–0.99, p=0.049). In patients with a CD4 count >50 cells/μl, there was no significant difference between the early and late groups (HR 1.79, 95% CI 0.64–5.03, p=0.27).
We found that in HIV/TB co-infected patients in rural Kenya, early ART initiation (within 8 weeks) was associated with reduced mortality in those with CD4 counts ≤50 cells/μl. In patients with CD4 counts >50 cells/μl there was no association seen between timing of ART and mortality.
To evaluate the diagnostic accuracy of a peptide corresponding to the variant surface glycoprotein (VSG) LiTat 1.5 amino acid sequence 268–281, and identified through alignment of monoclonal antibody selected mimotopes, for diagnosis of Trypanosoma brucei gambiense sleeping sickness.
A synthetic biotinylated peptide (peptide 1.5/268–281), native VSG LiTat 1.3 and VSG LiTat 1.5 were tested in an indirect ELISA with 102 sera from human African trypanosomiasis (HAT) patients and 102 endemic HAT-negative controls.
The area under the curve (AUC) of peptide 1.5/268–281 was 0.954 (95% confidence interval 0.918–0.980), indicating diagnostic potential. The areas under the curve of VSG LiTat 1.3 and LiTat 1.5 were 1.000 (0.982–1.000) and 0.997 (0.973–1.000), respectively, and significantly higher than the AUC of peptide 1.5/268–281. On a model of VSG LiTat 1.5, peptide 1.5/268–281 was mapped near the top of the VSG.
A biotinylated peptide corresponding to AA 268–281 of VSG LiTat 1.5 may replace the native VSG in serodiagnostic tests, but the diagnostic accuracy is lower than for the full length native VSG LiTat 1.3 and VSG LiTat 1.5.
Diagnosis; sensitivity; specificity; sleeping sickness; Trypanosoma brucei gambiense; variant surface glycoprotein
HIV-positive pregnant women are at heightened risk of becoming lost to follow-up (LTFU) from HIV care. We examined LTFU before and after delivery among pregnant women newly-diagnosed with HIV.
Observational cohort study of all pregnant women aged ≥18 years (N=300) testing HIV-positive for the first time at their first ANC visit, between January and June 2010, at a primary healthcare clinic in Johannesburg, South Africa. Women (n=27) whose delivery date could not be determined were excluded.
Median (IQR) gestation at HIV testing was 26 weeks (21–30). 98.0% received AZT prophylaxis, usually started at the first ANC visit. Of 139 (51.3%) patients who were ART-eligible, 66.9% (95%CI 58.8–74.3%) initiated ART prior to delivery; median (IQR) ART duration pre-delivery was 9.5 weeks (5.1–14.2). Among ART-eligible patients, 40.5% (32.3–49.0%) were cumulatively retained through six months on ART. Of those ART-ineligible at HIV testing, only 22.6% (95%CI 15.9–30.6%) completed CD4 staging and returned for a repeat CD4 test after delivery. LTFU (≥1 month late for last scheduled visit) before delivery was 20.5% (95%CI 16.0–25.6%) and, among those still in care, 47.9% (95%CI 41.2–54.6%) within six months after delivery. Overall, 57.5% (95%CI 51.6–63.3%) were lost between HIV testing and six months post-delivery.
Our findings highlight the challenge of continuity of care among HIV-positive pregnant women attending antenatal services, particularly those ineligible for ART.
HIV/AIDS; pregnant; antenatal; loss to follow-up; retention; South Africa
To collect normative MRI data for effective clinical and research applications. Such data may also offer insights into common neurologic insults.
We identified a representative, community-based sample of children aged 9–14 years. Children were screened for neurodevelopmental problems. Demographic data, medical history and environmental exposures were ascertained. Eligible children underwent the Neurologic Examination for Subtle Signs (NESS) and a brain MRI. Descriptive findings and analyses to identify risk factors for MRI abnormalities are detailed.
102/170 households screened had age-appropriate children. 2/102 children had neurologic problems, one each with cerebral palsy and epilepsy. 96/100 eligible children were enrolled. Mean age was 11.9 years (SD 1.5); 43 (45%) were male. No acute MRI abnormalities were seen. NESS abnormalities were identified in 6/96 (6%). Radiographic evidence of sinusitis, in 29 (30%), was the most common MRI finding. Brain abnormalities were found in 16 (23%): mild diffuse atrophy in 4 (4%), periventricular white matter changes/gliosis in 6 (6%), multifocal punctate subcortical white matter changes in 2 (2%), vermian atrophy in 1 (1%), empty sella in 3 (3%), and multifocal granulomas with surrounding gliosis in 1 (1%). Having an abnormal MRI was not associated with age, sex, antenatal problems, early malnutrition, febrile seizures, an abnormal neurologic examination or housing quality (all p > 0.05). No predictors of radiographic sinusitis were identified.
Incidental brain MRI abnormalities are common in normal Malawian children. The incidental atrophy and white matter abnormalities seen in this African population have not been reported among incidental findings from US populations, suggesting Malawi-specific exposures may be the cause.
Sinusitis; abnormal brain MRI; developmental delay
We hypothesized that a screening and treatment intervention for early cryptococcal infection would improve survival among HIV-infected individuals with low CD4 cell counts.
Newly enrolled patients at Family AIDS Care and Education Services (FACES) in Kenya with CD4 ≤100 cells/μl were tested for serum cryptococcal antigen (sCrAg). Individuals with an sCrAg titer ≥1:2 were treated with high-dose fluconazole. Kaplan–Meier curves and Cox proportional hazards models were used to compare survival among individuals with CD4 ≤100 cells/μl in the intervention and historical control groups.
The median age was 34 years [IQR: 29–41], 54% were female, and the median CD4 count was 43 cells/μl [IQR: 18–71]. Follow-up time was 1,224 person-years. In the intervention group, 66% (514/782) were tested for sCrAg, of whom 11% (59/514) were sCrAg-positive. Mortality was 25% (196/782) in the intervention group and 25% (191/771) in the control group. There was no significant difference between the intervention and control groups in overall survival [hazard ratio (HR): 1.1 (95% CI: 0.9–1.3)] or three-month survival [HR: 1.0 (95% CI: 0.8–1.3)]. Within the intervention group, sCrAg-positive individuals had borderline lower survival than sCrAg-negative individuals [HR: 1.8 (95% CI: 1.0–3.0)].
A screening and treatment intervention to identify sCrAg-positive individuals and treat them with high-dose fluconazole did not significantly improve overall survival among HIV-infected individuals with CD4 counts ≤100 cells/μl compared to a historical control. Potential explanations include incomplete intervention uptake or poor efficacy of high-dose oral fluconazole. Future studies to identify the best treatments for early cryptococcal infection and to improve uptake of the intervention are critical.
Cryptococcus; screening; prevention; Africa; outcomes; cryptococcal meningitis
To develop a standardized verbal autopsy (VA) training program and evaluate whether its implementation resulted in comparable knowledge required to classify perinatal cause of death (COD) by physicians and non-physicians.
Training materials, case studies, and written and mock scenarios for this VA program were developed using conventional VA and ICD-10 guidelines. This program was used to instruct physicians and non-physicians in VA methodology using a train-the-trainer model. Written tests of cognitive and applied knowledge required to classify perinatal COD were administered before and after training to evaluate the effect of the VA training program.
53 physicians and non-physicians (nurse-midwives/nurses and community health workers [CHWs]) from Pakistan, Zambia, the Democratic Republic of Congo, and Guatemala were trained. Mean cognitive and applied knowledge scores among all trainees improved significantly (by 12.8% and 28.8%, respectively; p<0.001). Post-training cognitive and applied knowledge test scores of nurse-midwives/nurses were comparable to those of physicians. CHWs (high-school graduates with 15 months or less of formal health/nursing training) had the largest improvements in post-training applied knowledge, with scores comparable to those of physicians and nurse-midwives/nurses. However, CHW post-training cognitive knowledge scores were significantly lower than those of physicians and nurses.
With appropriate training in VA, the cognitive and applied knowledge required to determine perinatal COD is similar for physicians and nurse-midwives/nurses. This suggests that midwives and nurses may play a useful role in determining COD at the community level, which may be a practical way to improve the accuracy of COD data in rural, remote geographic areas.
perinatal mortality; education; non-physicians; verbal autopsy
To determine the comparability between cause of death by a single physician coder and a two-physician panel, using verbal autopsy.
The study was conducted between May 2007 and June 2008. Within a week of a perinatal death in 38 rural, remote communities in Guatemala, the Democratic Republic of Congo, Zambia and Pakistan, VA questionnaires were completed. Two independent physicians, each unaware of the other's decisions, assigned an underlying cause of death in accordance with the causes listed in the chapter headings of the International Classification of Diseases and Related Health Problems, 10th Revision (ICD-10). Cohen's kappa statistic was used to assess the level of agreement between physician coders.
There were 9,461 births during the study period; 252 deaths met study enrollment criteria and underwent verbal autopsy. Physicians assigned the same COD for 75% of stillbirths (K=0.69; 95% confidence interval: 0.61–0.78) and 82% of early neonatal deaths (K=0.75; 95% confidence interval: 0.65–0.84). The patterns and proportions of causes of stillbirth and early neonatal death determined jointly by the physician coders were very similar to those assigned individually by each physician. Similarly, the rank order of the top 5 causes of stillbirth and early neonatal death was identical for each physician.
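Cohen's kappa corrects observed agreement for the agreement expected by chance. A minimal sketch with an invented 3-cause confusion matrix (counts chosen so agreement falls in the range reported; they are not the study's data):

```python
import numpy as np

# Invented confusion matrix: rows = coder 1's assigned COD, cols = coder 2's.
conf = np.array([[40,  5,  3],
                 [ 4, 30,  2],
                 [ 2,  3, 11]])

n = conf.sum()
p_observed = np.trace(conf) / n                               # raw agreement
p_expected = (conf.sum(axis=1) / n) @ (conf.sum(axis=0) / n)  # chance agreement
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 2))
```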
This study raises important questions about the utility of a system of multiple coders that is currently widely accepted, and speculates that a single physician coder may be an effective and economical alternative to VA programs that use traditional two-physician panels to assign COD.
verbal autopsy; perinatal death; comparing coders
To estimate the prevalence, spatial patterns and clustering in the distribution of soil-transmitted helminth (STH) infections, and factors associated with hookworm infections in a tribal population in Tamil Nadu, India.
Cross-sectional study with one-stage cluster sampling of 22 clusters. Demographic and risk factor data and stool samples for microscopic ova/cysts examination were collected from 1237 participants. Geographical information systems mapping assessed spatial patterns of infection.
The overall prevalence of STH was 39% (95% CI 36–42%), with hookworm at 38% (95% CI 35–41%) and Ascaris lumbricoides at 1.5% (95% CI 0.8–2.2%). No Trichuris trichiura infection was detected. People involved in farming had higher odds of hookworm infection (OR 1.68, 95% CI 1.31–2.17, P < 0.001). In the multiple logistic regression, adults (OR 2.31, 95% CI 1.80–2.96, P < 0.001), people with pet cats (OR 1.55, 95% CI 1.10–2.18, P = 0.011) and people who did not wash their hands with soap after defecation (OR 1.84, 95% CI 1.27–2.67, P = 0.001) had higher odds of hookworm infection, but gender and poor use of footwear did not significantly increase risk. Cluster analysis, based on design effect calculation, did not show any clustering of cases among the study population; however, the spatial scan statistic detected a significant cluster of hookworm infections in one village.
Multiple approaches including health education, improving the existing sanitary practices and regular preventive chemotherapy are needed to control the burden of STH in similar endemic areas.
soil-transmitted helminths; hookworm; Ascaris; Trichuris; intestinal parasites; tribal
To investigate the effects of nutritional supplementation on the outcome and nutritional status of south Indian patients with tuberculosis (TB) with and without human immunodeficiency virus (HIV) coinfection on anti-tuberculous therapy.
Randomized controlled trial on the effect of a locally prepared cereal–lentil mixture providing 930 kcal and a multivitamin micronutrient supplement during anti-tuberculous therapy in 81 newly diagnosed TB alone and 22 TB–HIV-coinfected patients, among whom 51 received and 52 did not receive the supplement. The primary outcome evaluated at completion of TB therapy was outcome of TB treatment, as classified by the national programme. Secondary outcomes were body composition, compliance and condition on follow-up 1 year after cessation of TB therapy and supplementation.
There was no significant difference in TB outcomes at the end of treatment, but TB–HIV-coinfected individuals had four times greater odds of a poor outcome than those with TB alone. Among patients with TB alone, 1/35 (2.9%) of those supplemented and 5/42 (12%) of those not supplemented had poor outcomes, while among TB–HIV-coinfected individuals, 4/13 (31%) of supplemented and 3/7 (42.8%) of non-supplemented patients had poor outcomes at the end of treatment; the differences were more marked after 1 year of follow-up. Although there was some trend towards benefit for both TB alone and TB–HIV coinfection, the results were not statistically significant at the end of TB treatment, possibly because of the limited sample size.
Nutritional supplementation is a potentially feasible, low-cost intervention that could benefit patients with TB and TB–HIV coinfection. The public health importance of these diseases in resource-limited settings suggests the need for large, multi-centre randomized controlled trials of nutritional supplementation.
nutritional supplementation; tuberculosis; HIV/AIDS; nutrition; randomized controlled trial
To investigate the incidence of selected opportunistic infections (OIs) and cancers and the role of a history of tuberculosis (TB) as a risk factor for developing these conditions in HIV-infected patients starting antiretroviral treatment (ART) in Southern Africa.
Five ART programs from Zimbabwe, Zambia and South Africa participated. Outcomes were extrapulmonary cryptococcal disease (CM), pneumonia due to Pneumocystis jirovecii (PCP), Kaposi’s sarcoma and Non-Hodgkin lymphoma. A history of TB was defined as a TB diagnosis before or at the start of ART. We used Cox models adjusted for age, sex, CD4 cell count at ART start and treatment site, presenting results as adjusted hazard ratios (aHR) with 95% confidence intervals (CI).
We analyzed data from 175,212 patients enrolled between 2000 and 2010 and identified 702 patients with incident CM (including 205 with a TB history) and 487 with incident PCP (including 179 with a TB history). The incidence per 100 person-years over the first year of ART was 0.48 (95% CI 0.44–0.52) for CM, 0.35 (95% CI 0.32–0.38) for PCP, 0.31 (95% CI 0.29–0.35) for Kaposi’s sarcoma and 0.02 (95% CI 0.01–0.03) for Non-Hodgkin lymphoma. A history of TB was associated with cryptococcal disease (aHR 1.28, 95% CI 1.05–1.55) and Pneumocystis jirovecii pneumonia (aHR 1.61, 95% CI 1.27–2.04), but not with Non-Hodgkin lymphoma (aHR 1.09, 95% CI 0.45–2.65) or Kaposi’s sarcoma (aHR 1.02, 95% CI 0.81–1.27).
Our study suggests that there may be interactions between different OIs in HIV-infected patients.
tuberculosis; opportunistic infections; cancer; HIV; risk factors; antiretroviral treatment programs; history of tuberculosis
In the context of an intermittent preventive treatment in pregnancy (IPTp) trial in Malawi, P. falciparum samples from 85 women at enrollment and 35 women at delivery were genotyped for mutations associated with sulfadoxine-pyrimethamine (SP) resistance. The prevalence of the highly resistant haplotype, with mutations at codons 51 and 108 of dihydrofolate reductase (dhfr) and codons 437 and 540 of dihydropteroate synthase (dhps), increased from 81% at enrollment to 100% at delivery (p=0.01). Pregnant women who were smear-positive at enrollment were more likely to have P. falciparum parasitemia at delivery. These results lend support to concerns that IPTp use may increase drug resistance during pregnancy, and they emphasize the importance of screening pregnant women for malaria parasites in areas with prevalent SP resistance even when the women are already on IPTp.
malaria; Intermittent preventive treatment; pregnancy; dhfr; dhps; Malawi
To assess the feasibility of involving village health workers of southern Bihar in visceral leishmaniasis control, by investigating their knowledge, attitude and practices.
We obtained a list of auxiliary nurses/midwives and accredited social health activists for the highly endemic district of Muzaffarpur. We randomly sampled 100 auxiliary nurses and 100 activists, who were visited in their homes for an interview. Questions were asked on knowledge, attitude and practice related to visceral leishmaniasis and to tuberculosis.
Auxiliary nurses and activists know the presenting symptoms of visceral leishmaniasis and how it is diagnosed, but they are not aware of the recommended first-line treatment. Many are already involved in tuberculosis control and are very well aware of the treatment modalities for tuberculosis, but few are involved in visceral leishmaniasis control. They are well organized, have strong links to the primary health care system and are ready to become more involved in visceral leishmaniasis control.
To ensure adequate monitoring of visceral leishmaniasis treatment and treatment outcomes, the control program urgently needs to consider involving auxiliary nurses and activists.
KAP survey; Visceral Leishmaniasis; Tuberculosis; Drug monitoring; Drug effectiveness; Public Health System; Supervised treatment; patient follow-up; ANM; ASHA Network
To evaluate the performance of a Verbal Autopsy (VA) expert algorithm (the InterVA model) for diagnosing AIDS mortality against a reference standard from hospital records that include HIV serostatus information in Addis Ababa, Ethiopia.
Verbal autopsies were conducted for 193 individuals who visited a hospital under surveillance during their terminal illness. Decedent admission diagnoses and HIV serostatus information are used to construct two reference standards (AIDS versus other causes of death, and TB/AIDS versus other causes). The InterVA model is used to interpret the VA interviews, and the sensitivity, specificity, and cause-specific mortality fractions are calculated as indicators of the diagnostic accuracy of the InterVA model.
The sensitivity and specificity of the InterVA model for diagnosing AIDS are 0.82 (95%-CI: 0.74-0.89) and 0.76 (95%-CI: 0.64-0.86), respectively. The sensitivity and specificity for TB/AIDS are 0.91 (95%-CI: 0.85-0.96) and 0.78 (95%-CI: 0.63-0.89), respectively. The AIDS specific mortality fraction estimated by the model is 61.7% (95%-CI: 54%-69%), which is close to 64.7% (95%-CI: 57%-72%) in the reference standard. The TB/AIDS mortality fraction estimated by the model is 73.6% (95%-CI: 67%-80%), compared to 74.1% (95%-CI: 68%-81%) in the reference standard.
The InterVA model is an easy-to-use and inexpensive alternative to physician review for assessing AIDS mortality in countries without vital registration and medical certification of causes of death. The model seems to perform better when TB and AIDS are combined, but the sample is too small to confirm this statistically.
mortality; surveillance; verbal autopsy; InterVA; cause of death; HIV/AIDS; Ethiopia
To describe the pattern of incident illness in children after initiation of antiretroviral therapy (ART) in a large public health sector in Lusaka, Zambia.
A systematic chart review was performed to extract data from the medical records of children (i.e., aged under 16 years) initiating ART in the Lusaka, Zambia HIV care and treatment program. Incident conditions were listed separately and then grouped into broad categories. Predictors of incident diagnoses were determined using univariable and multivariable analysis.
Between May 2004 and July 2006, 1,940 HIV-infected children initiated ART. Of these, 1,391 (71.1%) had their medical records reviewed. Median age at ART initiation was 77 months, and 631 (45.4%) were female. 859 (62%) children had an incident condition during this period, with a median time to occurrence of 63 days from ART initiation. 28 different incident conditions were documented. When categorized, the most common were mucocutaneous conditions (incidence rate [IR]: 101.1 per 100 child-years; 95% CI: 92.3–110.5) and upper respiratory tract infections (IR: 100.6 per 100 child-years; 95% CI: 91.9–110.0). Children with severe immunosuppression (i.e., CD4 < 10%) were more likely to develop lower respiratory tract infections (15.4% vs. 8.4%; p = 0.0002), mucocutaneous conditions (41.3% vs. 29.5%; p < 0.0001) and gastrointestinal conditions (19.8% vs. 14.5%; p = 0.02) than those with CD4 ≥ 10%.
There is a high incidence of new illness following ART initiation, emphasizing the importance of close monitoring during this period.
To assess the proportion of patients lost to programme (died, lost to follow-up, transferred-out) between HIV diagnosis and start of antiretroviral therapy (ART) in sub-Saharan Africa, and determine factors associated with loss to programme.
Systematic review and meta-analysis. We searched PubMed and EMBASE databases for studies in adults. Outcomes were the percentage of patients dying before starting ART, the percentage lost to follow-up, the percentage with a CD4 cell count, the distribution of first CD4 counts and the percentage of eligible patients starting ART. Data were combined using random-effects meta-analysis.
29 studies from sub-Saharan Africa including 148,912 patients were analysed. 6 studies covered the whole period from HIV diagnosis to ART start. Meta-analysis of these studies showed that of 100 patients with a positive HIV test, 72 (95% CI 60–84) had a CD4 cell count measured, 40 (95% CI 26–55) were eligible for ART and 25 (95% CI 13–37) started ART. There was substantial heterogeneity between studies (p<0.0001). Median CD4 cell count at presentation ranged from 154 cells/μl to 274 cells/μl. Patients eligible for ART were less likely to become lost to programme (25% versus 54%, p<0.0001) but eligible patients were more likely to die (11% versus 5%, p<0.0001) than ineligible patients. Loss to programme was higher in men, in patients with low CD4 cell counts and low socio-economic status, and in recent time periods.
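The random-effects pooling step described above can be sketched with a DerSimonian–Laird estimator; the six study proportions and sample sizes below are invented placeholders, not the review's data.

```python
import numpy as np

# Invented per-study proportions (e.g., fraction of diagnosed patients starting ART).
p = np.array([0.15, 0.25, 0.30, 0.20, 0.35, 0.22])
n = np.array([500, 800, 300, 1200, 400, 600])

v = p * (1 - p) / n                  # within-study variance of each proportion
w = 1.0 / v                          # fixed-effect (inverse-variance) weights
theta_fe = (w * p).sum() / w.sum()   # fixed-effect pooled estimate

# DerSimonian-Laird estimate of between-study variance (tau^2)
Q = (w * (p - theta_fe) ** 2).sum()
tau2 = max(0.0, (Q - (len(p) - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1.0 / (v + tau2)                   # random-effects weights
theta_re = (w_re * p).sum() / w_re.sum()  # random-effects pooled estimate
```

When heterogeneity is substantial (as the review reports, p<0.0001), tau2 grows and the random-effects weights flatten, so small studies contribute relatively more to the pooled estimate.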
Monitoring and care in the pre-ART time period needs improvement, with greater emphasis on patients not yet eligible for ART.
pre-ART; linkage to care; sub-Saharan Africa; mortality; loss to follow-up
Malnutrition is common in HIV-infected children in Africa and an indication for antiretroviral treatment (ART). We examined anthropometric status and response to ART in children treated at a large public-sector clinic in Malawi.
All children aged <15 years who started ART between January 2001 and December 2006 were included and followed until March 2008. Weight and height were measured at regular intervals from 1 year before to 2 years after the start of ART. Sex- and age-standardized z-scores were calculated for weight-for-age (WAZ) and height-for-age (HAZ). Predictors of growth were identified in multivariable mixed-effect models.
A total of 497 children started ART and were followed for 972 person-years. Median age (interquartile range, IQR) was 8 years (4 to 11 years). Most children were underweight (52%), stunted (69%), in advanced clinical stages (94% in WHO stage 3 or 4), and severely immunodeficient (77%). After starting ART, median (IQR) WAZ and HAZ increased from −2.1 (−2.7 to −1.3) and −2.6 (−3.6 to −1.8) to −1.4 (−2.1 to −0.8) and −1.8 (−2.4 to −1.1) at 24 months, respectively (p<0.001). In multivariable models, baseline WAZ and HAZ scores were the most important determinants of growth trajectories on ART.
Despite a sustained growth response to ART among children remaining on therapy, normal values were not reached. Interventions leading to earlier HIV diagnosis and initiation of treatment could improve growth response.
children; HIV infection; growth; antiretroviral therapy; Malawi
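The sex- and age-standardized z-scores above express a child's measurement in reference standard deviations from the reference median for the same sex and age. The sketch below uses a simple normal approximation with hypothetical reference values; the WHO growth standards actually use the LMS method, which also adjusts for skewness.

```python
def weight_for_age_z(weight_kg, ref_median, ref_sd):
    """Simplified weight-for-age z-score (WAZ): distance of the child's
    weight from the sex/age-specific reference median, in reference
    standard deviations. Illustrative only; WHO standards use LMS."""
    return (weight_kg - ref_median) / ref_sd

# Hypothetical reference median and SD for illustration only
waz = weight_for_age_z(18.0, ref_median=25.0, ref_sd=3.3)
```

A WAZ below −2 (as here) corresponds to the "underweight" threshold used in the abstract's 52% figure.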
In countries with limited vital registration data, maternal mortality levels are often estimated using siblings’ survival histories (SSH) collected during retrospective adult mortality surveys. We explored how accurately adult deaths can be classified as pregnancy-related (PR) using such data.
The study was conducted in a rural area of southeastern Senegal with high maternal mortality, Bandafassi. We used data from a Demographic Surveillance System (DSS) implemented in this area to 1) identify deaths of women at reproductive ages between 2003 and 2009, and 2) locate the surviving adult sisters of the deceased. We interviewed the sisters we could locate using the standard SSH questionnaire. Retrospective SSH reports of siblings’ deaths were linked at the individual level to death records and verbal autopsy data obtained by DSS. We compared the classification of adult female deaths as PR or non-PR deaths in SSH surveys and DSS data.
There were 91 deaths at reproductive ages in the Bandafassi DSS between 2003 and 2009, but only 59 had known surviving sisters. Eight of these deaths were omitted by SSH respondents or misreported, either as still alive or as having occurred during childhood. Among deaths reported both in the SSH and DSS data, 94% of deaths classified as PR in the DSS were also classified as PR by SSH data. Only 70% of deaths classified as non-PR in the DSS were also classified as non-PR by SSH data.
Misclassifications of PR deaths in retrospective adult mortality surveys may affect estimates of pregnancy-related mortality rates.
Maternal mortality; surveys; adult mortality; sibling method; demographic surveillance; Senegal
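Treating the DSS classification as the reference, the 94% and 70% concordance figures above can be read as the sensitivity and specificity of SSH classification of PR deaths. A one-line calculation then shows how imperfect specificity inflates the observed PR fraction; the true PR fraction used below is hypothetical, chosen only for illustration.

```python
def observed_pr_fraction(true_pr_fraction, sensitivity, specificity):
    """Fraction of adult female deaths that an SSH survey would classify
    as pregnancy-related (PR), given the true PR fraction and the
    survey's sensitivity/specificity relative to the DSS reference."""
    p = true_pr_fraction
    # True positives plus false positives from misclassified non-PR deaths
    return sensitivity * p + (1 - specificity) * (1 - p)

# Using the concordance reported above (94% of PR and 70% of non-PR
# deaths classified the same way) and a hypothetical true PR fraction
obs = observed_pr_fraction(0.30, sensitivity=0.94, specificity=0.70)
```

With these inputs a true PR fraction of 30% would be observed as roughly 49%, illustrating the abstract's conclusion that misclassification can bias pregnancy-related mortality estimates upward.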
The WHO defines community treatment for Schistosoma mansoni on the basis of the prevalence of infection, assigning each community to one of three treatment classes. An effective classification methodology based on the prevalence of S. mansoni in the community is therefore needed. Parasitological screening using the Kato-Katz method remains the preferred option. Current analytical work incorrectly assumes a perfect test, overlooking its imperfect sensitivity: research shows the test can have low sensitivity, especially in regions of lower prevalence and intensity of infection.
We create decision rules (defined by cutoffs for the number of positive slides) that account for imperfect sensitivity, both with a simple adjustment assuming fixed sensitivity and with a more complex adjustment in which sensitivity changes with prevalence. To reduce screening costs whilst maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the de Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification.
Modelling varying sensitivity lowers the lower cutoff more than the upper cutoff, so that communities are correctly classified as moderate rather than low prevalence and thus receive life-saving treatment. The pooled method classifies communities directly from the number of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%.
Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method.
Pooled testing; classification; S. mansoni; prevalence estimation; Kato-Katz; disease control
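The pooled decision rule above can be sketched as counting positive pools and comparing the count against two cutoffs. The cutoffs and pool-positivity probability below are hypothetical placeholders, and the binomial model assumes independent pools, an idealization of the paper's worm/egg-distribution model.

```python
from math import comb

def classify_by_positive_pools(n_positive, low_cutoff, high_cutoff):
    """Assign a WHO-style treatment class directly from the number of
    positive pools. Cutoffs here are hypothetical; the paper derives
    them to account for imperfect Kato-Katz sensitivity."""
    if n_positive < low_cutoff:
        return "low"
    if n_positive < high_cutoff:
        return "moderate"
    return "high"

def p_correct_class(n_pools, p_pool_pos, low_cutoff, high_cutoff, true_class):
    """Probability of correct classification when the positive-pool
    count is Binomial(n_pools, p_pool_pos) -- an illustrative
    independence assumption, not the paper's full model."""
    total = 0.0
    for k in range(n_pools + 1):
        prob = comb(n_pools, k) * p_pool_pos**k * (1 - p_pool_pos)**(n_pools - k)
        if classify_by_positive_pools(k, low_cutoff, high_cutoff) == true_class:
            total += prob
    return total
```

Because classification works directly from positive pools, no intermediate prevalence estimate (and hence no sensitivity estimate) is needed, which is the efficiency argument made in the results.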
To propose practical, standardized definitions for reporting retention in pre-ART care.
Definitions are based on three stages: Stage 1, from testing HIV-positive to initial ART eligibility assessment; Stage 2, from initial assessment to ART eligibility; and Stage 3, from ART eligibility to ART initiation. For each stage, negative outcomes are death, loss to follow-up, or not being retained.
Stage 1 retention: the proportion of patients who complete an initial ART eligibility assessment within 3 months of HIV testing, with cohort outcomes reported at 3 and 12 months after HIV testing. Patients who end Stage 1 eligible for ART move directly to Stage 3. Stage 2 retention: the proportion of patients who either complete all possible ART eligibility re-assessments within 6 months of the site's standard visit schedule, or had an assessment within 1 year of the reporting date and were not ART eligible at the last assessment. Retention should be reported at 12-month intervals. Stage 3 retention: initiating ART (i.e. ARVs dispensed) within 3 months of determination of ART eligibility, with reporting at 3 months after eligibility and at 3-month intervals thereafter.
If pre-ART retention is to improve, consistent terminology is needed for collecting data, measuring and reporting outcomes, and comparing results across programs and countries. The definitions we propose offer a strategy for improving the consistency and comparability of future reports.
HIV; retention; attrition; antiretroviral therapy; pre-antiretroviral therapy care; resource limited settings
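The Stage 1 definition above reduces to a simple cohort calculation: the share of patients whose initial eligibility assessment falls within roughly 3 months of their HIV test. The record layout and dates below are hypothetical, for illustration only.

```python
from datetime import date

def stage1_retention(cohort, window_days=90):
    """Stage 1 retention per the definition above: proportion of
    patients completing an initial ART eligibility assessment within
    ~3 months (here 90 days) of HIV testing. `cohort` is a list of
    dicts with 'test_date' and 'assessment_date' (None if never
    assessed) -- a hypothetical record layout for illustration."""
    retained = sum(
        1 for p in cohort
        if p["assessment_date"] is not None
        and (p["assessment_date"] - p["test_date"]).days <= window_days
    )
    return retained / len(cohort)

cohort = [
    {"test_date": date(2012, 1, 1), "assessment_date": date(2012, 2, 15)},
    {"test_date": date(2012, 1, 1), "assessment_date": date(2012, 6, 1)},
    {"test_date": date(2012, 1, 1), "assessment_date": None},
]
```

Stage 2 and Stage 3 retention follow the same pattern with their own windows and denominators, which is what makes the proposed definitions comparable across programs.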