Mike English conceived of the idea for the study, submitted the application for funding and wrote the first and final drafts of the manuscript.
Grace Irimu, Annah Wamae, Fred Were, Aggrey Wasunna, Greg Fegan & Norbert Peshu have participated in discussions over the design of the study, are co-investigators on the proposed research and have reviewed and approved the manuscript.
Small hospitals sit at the apex of the pyramid of primary care in many low-income country health systems. If the Millennium Development Goal for child survival is to be achieved, hospital care for severely ill, referred children will need to be improved considerably, in parallel with primary care, in many countries. Yet we know little about how to achieve this. We describe the evolution and final design of an intervention study attempting to improve hospital care for children in Kenyan district hospitals. We believe our experience illustrates many of the difficulties involved in reconciling epidemiological rigour and feasibility in studies at a health-system rather than an individual level, and the importance of depth and breadth of analysis when trying to provide a plausible answer to the question: does it work? While there are increasing calls for more health systems research in low-income countries, the importance of strong, broadly based local partnerships and of long-term commitment even to initiate projects is not always appreciated.
Under-5 mortality in most of sub-Saharan Africa remains above 100/1000 live births, having been unchanged for a decade, and has risen in some countries, including Kenya. Improving child survival will require better delivery of health services, and in some cases curative care may be at least as cost-effective as preventive interventions. Appropriately, therefore, the delivery of health services at the community level and through primary care units has been the subject of considerable global research and calls to action [3,4]. However, district hospitals that provide referral care, and the complex environments in which they operate, have been largely ignored [2,5,6]. We believe that understanding how to improve the performance of district hospitals in settings such as Kenya is also important for the following reasons.
Firstly, referral care for children is commonly required. In sub-Saharan African countries between 6% and 20% of children assessed at primary care units may require referral, although often they do not receive it [8,9]. Secondly, it has been estimated that effective hospital care can confer a considerable child survival advantage if access is good. Thirdly, district hospital care can be highly cost-effective. In Bangladesh the cost per disability-adjusted life year (DALY) averted attributable to a small district hospital was estimated to be $11, while Kenyan data suggest that the cost per child life saved by hospital care may be as low as $105. Fourthly, in many countries the district hospital has a supervisory and peer-leadership role within the formal primary care network; if hospitals fail to provide appropriate leadership, the whole primary care network is threatened. Finally, hospitals are an established part of many health systems. Even in African countries the hospital sector consumes a major proportion of health care budgets, although the relatively poor quality of the services provided may limit their effectiveness and produce a poor return on this investment.
How hospitals provide services and maximise health benefits are subjects that are therefore highly relevant to improving health systems in low income settings. Yet the question of how to deliver essential services effectively in small hospitals has scarcely been addressed.
In Kenya there are just over 100 government hospitals providing basic referral care, 70 of which are district hospitals that serve and supervise primary care networks. In common with many countries in sub-Saharan Africa, these hospitals face problems with infrastructure, equipment, personnel and supplies [2,12,13] and, sometimes, poor management. However, first-line therapeutics for the most common diseases are widely available, and clinical staff (predominantly clinical officers with a 3-year medical training) and nursing staff are present.
The functioning of district hospitals, as part of complex health systems, is affected by a wide variety of factors (see figure 1). These include effective health policy and regulation and the provision of adequate human, capital and consumable resources. At a local level, resource allocation, individual health worker motivation, organisational structures, and institutional and personal values and trust will affect hospital performance [16,17,18]. The demand for services, reflected in the effectiveness of referral, is also likely to be a key determinant of efficiency, equitable distribution of resources and population health benefits. Given this complexity, multiple interventions targeting health-system constraints above, within and below the district hospital level are likely to be necessary to optimise performance. Health workers, however, remain central to a health system’s functioning.
The desire to ensure that patients are correctly assessed, and receive prompt, safe and effective therapy in an appropriate environment, is hardly new. In the most developed countries considerable resources have been invested in evidence-based medicine and quality-improvement approaches that address these issues. However, evidence, even from low-income settings, suggests that changing health worker behaviour is likely to require multiple approaches: written expert guidelines and training combined with job aids, feedback and supervision, or more general quality-improvement initiatives [20,21,22,23].
Given the need to improve hospital care for children, and uncertainty about the best means to achieve this, how can health systems research help? Our aim was to examine how paediatric practices could be improved in the setting of typical government hospitals in Kenya without resorting to major resource inputs. We now describe the evolution of a study design, and some of the practical and scientific tensions that helped to shape it, as an illustration of the complexities encountered in undertaking health systems research.
The rationale for an intervention to promote evidence-based patient care is that such care is on the causal pathway to better outcomes. It would therefore be logical to consider reduction in inpatient paediatric mortality as the primary endpoint of any study. However, it soon became apparent that basing an assessment on a reduction in paediatric mortality in hospitals would be problematic, particularly in a setting such as Kenya, for the reasons listed in Box 1. With these in mind, it is clear that the resources required to mount a study demonstrating ‘statistically significant’ reductions in mortality, in a way that would not be undermined by worries over bias or residual or unrecognised confounding, will be beyond most research teams. If mortality is problematic as the major outcome measure of success, then are there appropriate alternatives?
The rationale for better case management is that it improves outcomes. If this is true then process indicators that reflect the degree to which best-practice care is provided are valid and appropriate endpoints. For example, the proportion of severely dehydrated children receiving fluids of the correct type, volume and rate may be a useful measure. Observations of this type can be made frequently, may be easier to identify and may be less subject to confounding. Critically, substantial changes in measured indicators may occur, making impact potentially easier to observe. Thus, process measures have many desirable properties: they may permit between-health-worker and between-hospital comparisons; they can target the most desired attributes of service delivery; they are relatively cheap to measure; they can rapidly incorporate new elements; and they provide results that should be intrinsically meaningful to service providers [24,25]. Process measures may, however, be affected by the degree to which inputs (resources) are available, a consideration rarely of concern in developed countries. In addition, there are no defined, accepted process performance measures. To develop these we required a very good understanding of the setting within which care is being delivered. Furthermore, how does one select the important measures from all of the possible measures? In our case key measures were selected on the basis of feasibility and one or more of: a clear, logical link to patient outcomes; a clear and proximate link to the intervention; favourable cost-effectiveness; requirement for minimal resource inputs; or objectivity of the assessment(s) (additional information is available from the authors).
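To illustrate, a process indicator of this kind can be computed per hospital from audited case records. The following is a minimal sketch only; the field names (hospital, fluid_type_ok, fluid_volume_ok, fluid_rate_ok) are hypothetical stand-ins, not the study's actual data dictionary.

```python
# Sketch: a per-hospital process indicator for fluid management of severe
# dehydration. A child counts as correctly managed only if fluid type,
# volume and rate were all correct. Field names are hypothetical.
from collections import defaultdict

records = [
    {"hospital": "A", "fluid_type_ok": True,  "fluid_volume_ok": True,  "fluid_rate_ok": True},
    {"hospital": "A", "fluid_type_ok": True,  "fluid_volume_ok": False, "fluid_rate_ok": True},
    {"hospital": "B", "fluid_type_ok": True,  "fluid_volume_ok": True,  "fluid_rate_ok": True},
]

def correct_management(rec):
    """All three criteria must be met for the case to count as correct."""
    return rec["fluid_type_ok"] and rec["fluid_volume_ok"] and rec["fluid_rate_ok"]

def indicator_by_hospital(records):
    """Proportion of audited cases correctly managed, per hospital."""
    totals, correct = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["hospital"]] += 1
        correct[rec["hospital"]] += correct_management(rec)
    return {h: correct[h] / totals[h] for h in totals}

print(indicator_by_hospital(records))  # {'A': 0.5, 'B': 1.0}
```

The same structure extends naturally to a panel of indicators, each an all-or-nothing composite of the criteria defining best-practice care for one condition.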
To change health worker practices and hospital care a suitable intervention is required. In keeping with emerging evidence that multifaceted interventions, including training, job aids, feedback, quality improvement and supervision, are more likely to be successful, we wished to incorporate most of these elements (described briefly in Box 2). However, the lack of available tools or structures in Kenya meant that all of these had to be developed de novo, in collaboration with the Ministry of Health and other stakeholders, keeping in mind what might be sustainable. This process engaged us for three years, with significant effort to ensure that what was developed became nationally owned.
If our overall aim is to assess the ability of an intervention to achieve practice improvement, assessed with a panel of process measures, the gold-standard design remains the randomised controlled trial. When working with hospitals, however, appropriate sample-size considerations can pose considerable practical and financial challenges. An additional factor to consider is the possible pattern of response. Suppose that huge effort and expense result in a trial with strikingly different effect sizes in individual hospitals. Irrespective of whether the overall result is statistically significant, would we understand why the intervention worked (or did not work), and would we know how to modify the intervention or the health system to produce more consistent results?
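One way to see the sample-size challenge is the standard design-effect calculation for cluster-randomised designs, sketched below with illustrative, assumed values for the intra-cluster correlation (ICC) and cluster size; these are not estimates from the study.

```python
# Sketch: why randomising hospitals rather than patients is expensive.
# DEFF = 1 + (m - 1) * ICC inflates the required sample size relative to
# individual randomisation. ICC and cluster-size values are assumptions.
def design_effect(mean_cluster_size, icc):
    """Design effect for a cluster-randomised design."""
    return 1 + (mean_cluster_size - 1) * icc

def effective_sample_size(total_n, mean_cluster_size, icc):
    """Number of effectively independent observations after clustering."""
    return total_n / design_effect(mean_cluster_size, icc)

# Eight hospitals contributing ~200 observed admissions each, with a modest
# within-hospital correlation, yield far less information than 1600
# independent observations would suggest:
print(design_effect(200, 0.05))                # ≈ 10.95
print(effective_sample_size(1600, 200, 0.05))  # ≈ 146
```

Even a small ICC, multiplied by large cluster sizes, sharply reduces the effective sample size, which is one reason trials with only a handful of hospitals per arm struggle to detect modest effects.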
Consider also the nature of the intervention. The size of any effect is likely to be related to the duration and success of support supervision and the capacity to solve problems with implementation. Alternatively, time itself, through a changing sociopolitical or general health-systems context, might change performance. These issues alone suggest, as has recently been recognised, that we need answers to questions that include: What is the pattern of performance improvement over time, and how well is the intervention delivered? Is improvement related to the duration and content of supervision? Is performance dependent on sustained supervision? To what degree are hospitals able to solve problems locally? What factors, at national and local levels, seem to be important in determining the degree to which hospital performance can be improved? Data collection and analyses also need to acknowledge that, while the interventions are delivered at the hospital or health-worker level, process-indicator observations are largely made at the patient level. Thus, data should ideally be of sufficient quality and quantity to allow health-worker and perhaps hospital attributes to be accounted for using statistical models that account for clustering. The demand for large, controlled studies with extremely detailed data collection that all this implies is not easily reconciled with cost containment: the perennial research dilemma.
Our attempt to maximise the information generated by a Kenyan study, while working within a defined resource envelope, comprises a randomised, parallel-group, controlled intervention project with 4 hospitals per group. Also incorporated are: pre-intervention baseline measures in both groups, to provide for comparison of the groups at baseline and for within-hospital before-and-after comparisons; and multiple measures before and after the pre-defined major endpoint in the intervention group, to explore the relationship between performance and the delivery, and later withdrawal, of the intervention. Further details of the Kenyan study design are included in Box 3.
The design is thus primarily a detailed case study with a controlled, before-and-after component. Intervention hospitals (n = 4) will receive all of the interventions listed in Box 2, beginning with a single hospital training course, over a period of 1.5 years, and will be evaluated six-monthly for 2.5 years. Control hospitals, which for practical and ethical reasons cannot truly receive nothing, will receive job aids identical to those of the intervention group, a 1.5-day lecture-based introductory seminar explaining the guidelines, and copies of written survey feedback reports. Control hospital performance will be evaluated 6 months and 18 months after the start of the study. All hospitals will be examined at baseline. Research evaluation will include:
These data will describe any temporal association of effect with the duration and nature of the intervention. If changes in the same direction and of the same magnitude are consistently observed across the intervention sites but not observed in control hospitals this will increase the plausibility that the intervention is causing the effect and indicate that the effect is not site specific. Failure to demonstrate improvements or inconsistent patterns of improvement between intervention and control hospitals will weaken the argument that the intervention had a specific effect. Such rich data will be invaluable for informing the design of future attempts to improve hospital care in Kenya and countries with similar health systems.
Kenya has eight provinces and had, at the time of study design, 70 districts. How does one try to ensure representation of diversity and attempt to limit selection bias with a relatively small sample size? To what degree will demands for research efficiency compromise the value of the results? For example, insisting on a minimum hospital workload to permit time-limited performance assessment, and restricting geographic sampling to limit the number of stakeholders who must be consulted and kept informed, are important practical considerations. But how will they influence the findings or their generalisability? In practice selection biases seem almost impossible to overcome, making it imperative that at least the selection process is well documented to aid a study’s interpretation (see Box 4 for our compromise).
In collaboration with the Kenyan Ministry of Health, hospitals in 4 of Kenya’s 8 provinces were initially considered, avoiding areas with existing major hospital-management intervention projects. Those with a minimum of 1,000 paediatric admissions and 1,200 deliveries per year were then listed together with important district-specific data (see table below).
Present in this basic sampling frame were 8 hospitals included in previous evaluations of hospital care for children undertaken in 2002. Other than a brief feedback visit in early 2003, the investigators had had no subsequent contact with them. As it was felt that even old data on hospital performance would be a clear advantage in understanding how hospitals change, these 8 were initially considered for inclusion. However, to ensure balance when the hospitals were divided into two groups, one of the eight was replaced with an alternative facility, permitting seven combinations of two relatively balanced groups to be defined, with two hospitals in each of 4 provinces. After obtaining permission from the hospitals, one of the seven balanced combinations was selected randomly to define the intervention and control groups, described in the table below.
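The final allocation step, choosing at random among province-balanced two-group splits, can be sketched as follows. This is a simplified illustration with hypothetical hospital and province labels: it enumerates all splits that place one hospital per province in each arm, whereas the study additionally applied balance criteria to its district-specific data, leaving only seven eligible combinations.

```python
# Sketch: enumerate candidate two-group splits (one hospital per province in
# each arm) and select one at random. Labels are hypothetical; the real study
# screened these candidates further, keeping seven balanced combinations.
import itertools
import random

# hospital -> province (two hospitals per province across 4 provinces)
hospitals = {
    "H1": "P1", "H2": "P1",
    "H3": "P2", "H4": "P2",
    "H5": "P3", "H6": "P3",
    "H7": "P4", "H8": "P4",
}

def balanced_splits(hospitals):
    """All splits assigning exactly one hospital per province to the
    intervention arm; the remaining four hospitals form the control arm."""
    provinces = sorted(set(hospitals.values()))
    by_prov = {p: sorted(h for h, prov in hospitals.items() if prov == p)
               for p in provinces}
    splits = []
    for choice in itertools.product(*(by_prov[p] for p in provinces)):
        intervention = set(choice)
        control = set(hospitals) - intervention
        splits.append((intervention, control))
    return splits

splits = balanced_splits(hospitals)
print(len(splits))  # 16 province-balanced splits before any further criteria

rng = random.Random(2004)  # seeded here only to make the sketch reproducible
intervention, control = rng.choice(splits)
```

Making the eligible set explicit before randomising, as here, is what allows the allocation to be documented and audited, a point the surrounding text stresses for the interpretation of health-system studies.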
Results from this study will be reported in due course, together, we are sure, with further lessons learned during its conduct. What we hope to have conveyed here are insights into some of the challenges involved in preparing for and initiating such studies, a process that took almost three years in our case. It should also be clear that in this field each experiment is unique: the conditions under which the study takes place will never be the same again. Interpretation of the results is only possible if we can define as carefully as possible what has been done and in what context. Finally, it should be obvious that strong partnerships with multiple stakeholders are required to undertake health-system studies of significant size. Such partnerships require a considerable investment of time by all parties, with no guarantee of success. While there are increasing calls for health systems research in low-income settings, little of substance may be delivered unless funding agencies are prepared to invest for the long term, and at substantial scale, in partnerships and essential preparatory work if epidemiologically rigorous designs are to be employed.
This manuscript is published with the permission of the Director of KEMRI. The authors would like to thank Prof. Lucy Gilson, Dr. Kara Hanson and Dr. Alex Rowe for specific contributions to the debate on study design, and Prof. Robert Snow, Prof. Kevin Marsh, Dr. Bernhards Ogutu and Prof. Fabian Esamai for contributions to the general study design. The authors would also like to acknowledge the considerable support received in planning this study from the Division of Child Health, the Department of Preventive and Promotive Health, the Department of Standards and Regulatory Services and the Department of Curative and Rehabilitative Services, all in the Ministry of Health in Kenya. Finally, we would like to acknowledge the support of the Wellcome Trust in choosing to fund this work.
This work is funded through a Wellcome Trust Senior Research Fellowship awarded to Dr. Mike English (#076827). The funders have played no role in the design of this study.
Conflict of Interest.
The authors have no conflict of interest.