South Africa has one national and nine provincial (sub-national) EPI managers. In September 2009, we sent e-mails to the national EPI manager and the nine provincial EPI managers, asking them to identify key challenges facing EPI. The two questions asked were:
1. What, in your opinion, are the five key challenges to childhood immunisation in South Africa? (List the challenges from the most important to the least important.)
2. What would be the solutions, in your opinion, to these challenges? (Only one solution for each challenge.)
Eight days after the initial email request, we sent a follow-up reminder to non-responders. Responses were collated two weeks later. The findings of this first audit were presented at a national immunisation conference in October 2009, which was attended by all key immunisation stakeholders in South Africa [12]. The conference agenda included a critical appraisal of vaccines and vaccine administration, globally and with specific reference to South Africa. In June 2010, the two questions were again sent to all 10 EPI managers in the country.
The exercise was initially undertaken as a programme evaluation for management purposes, and therefore no ethics review board approval was sought. Only later did we realise that the results could be of interest to a wider audience. All EPI managers who responded to the questionnaire consented to the publication of their responses. In addition, we obtained permission from the South African National Department of Health to publish the findings. In this paper we present the key challenges identified by the managers, the key solutions they proposed, and the findings of systematic reviews on the effects of the proposed interventions. Each respondent provided five barriers and five corresponding corrective interventions. In summarising the responses, we treated all identified barriers (five per manager) equally, without applying any weighting, irrespective of whether they were identified by the national or a provincial manager, or whether a barrier was classified by the manager as most or least important. We did the same for the proposed interventions (one per identified barrier).
On 30 November 2010 we searched the Health Systems Evidence database, the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effectiveness, and PubMed. One author (CSW) conducted the search using the search strategy shown in Table . We screened the search outputs in the order in which the databases are listed here, starting with the Health Systems Evidence database and ending with PubMed. Figure shows a summary of the search and selection process. When we found more than one systematic review assessing a particular intervention, we chose the more comprehensive and/or more recent one. Two authors (CSW and MSS) independently screened the search results, selected relevant systematic reviews, and assessed the quality of the selected reviews using the AMSTAR tool [13]. In particular, we assessed whether the review authors reported their study selection criteria; conducted a literature search comprehensive enough to avoid publication, language, and indexing biases; undertook duplicate study selection and data extraction; used reliable criteria to assess the risk of bias in included studies; reported the characteristics of included studies appropriately; and combined data from included studies using reliable methods. Based on these criteria, we judged whether each review was well conducted (i.e. reliable). We report data only from reviews that we considered reliable. At each stage, the two authors compared their results and resolved any disagreements by discussion and consensus.
Search strategy for identification of eligible reviews
Flow diagram showing the search and selection of reviews.
Finally, we used the GRADE approach [14] to assess the quality of the evidence for the effectiveness of the proposed strategies. This method rates the quality of a body of evidence as high, moderate, low, or very low. High-quality evidence implies that “further research is very unlikely to change our confidence in the estimate of effect”. Moderate-quality evidence means that “further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate”. Evidence is considered of low quality if “further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate”, and of very low quality if “we have very little confidence in the effect estimate”. We began by rating the quality of evidence according to study design, classifying evidence from systematic reviews of randomised controlled trials as high quality and evidence from systematic reviews of observational studies as low quality. Five factors could then lead us to downgrade the quality of evidence from systematic reviews of randomised controlled trials, and three to upgrade the quality of evidence from systematic reviews of observational studies. For pooled data from randomised controlled trials, the factors that led us to rate down the quality of evidence were risk of bias, heterogeneity, indirectness, imprecision, and publication bias. Regarding risk of bias, concerns that limited our confidence in the evidence included lack of allocation concealment, lack of blinding of outcome assessment, and large losses to follow-up. Heterogeneity of effects across studies for which there was no compelling explanation also reduced our confidence in the evidence. Indirectness refers to differences between the population, intervention, comparison group, and outcome of interest to us and those included in the relevant reviews; for example,
we used the evidence on strategies for improving patients’ understanding of health information as a proxy for evidence on parents’ understanding of the importance of childhood immunisation [16]. For imprecision, we rated down the quality of the evidence if the included studies had relatively few participants and few events, and thus yielded effect estimates with wide confidence intervals. Finally, we downgraded the quality of evidence if there was a high likelihood of publication bias. Conversely, we upgraded the quality of the evidence if the pooled estimates revealed a large magnitude of effect, if we had negligible concerns about confounding, or if there was a strong dose-response gradient [14].