Avahan, the India AIDS Initiative, has supported the HIV-intervention programme in six high-HIV-prevalence states in India through various grants awarded between December 2003 and August 2005. As of December 2007, the programme provided HIV-prevention services to about 280 000 SWs and MSM through a pyramidal virtual organisational structure consisting of a central STI team and seven lead implementing partners, who subgranted to over 140 local non-governmental organisations (NGOs).18
The services provided through project-supported clinics, peer educators and outreach workers included STI and primary HIV care, condom provision, behaviour change communication and community mobilisation to build capacity for community ownership. The STI clinic services were scaled up starting from December 2004; details of the early scale-up are available in a previous publication.19
The delivery of STI services was organised at three levels.19
At the grassroots level, NGOs organised services for SWs, MSM/TGs and IDUs through designated clinic settings such as static clinics, mobile clinics in a vehicle and health camps. Some static clinics were located at the practices of private general practitioners who were identified by the community as preferred service providers. At the middle level, the seven state lead implementing partners provided technical and management support, established a supervisory system and gave logistical assistance to facilitate overall STI service delivery. At the central level, an STI capacity-building team was responsible for ensuring high-quality, standardised STI services provided uniformly across the seven lead partners and their subgrantees. Clinic operational guidelines, standard operating procedures and corresponding supervisory handbooks were developed to support standardisation and supervision.20 21
The central team conducted training, provided support and mentoring to mid-level supervisors, monitored overall activities and adjusted guidelines based on field experience, monitoring data and operations research. Overall, the capacity-building system was responsible for facilitating the delivery of uniform, high-quality STI management services. Counselling and basic HIV management services were added later.
The supervisory system was two-tiered, with state and central levels. The state-level supervisory team consisted of staff from the lead implementing partner, who conducted systematic, routine periodic visits to the designated clinics they supported and managed. Every clinic was covered by these visits. Visit schedules could be adjusted by the state supervisors to the requirements of individual clinics: more frequent visits were made to clinics that needed greater support for quality improvement, such as newly started clinics and those with new staff. The central STI capacity-building team conducted ‘dipstick’ supervisory visits to different clinics supported by each lead implementing partner once every 3 months. Dipstick supervision consisted of a visit by central supervisors, accompanied by state supervisors, to preselected clinics. Around 10% of the existing clinics under each lead implementing partner were selected each quarter in consultation with the state clinic supervisors, and different clinic clusters were visited by the central team in each quarter. Clinic staff were informed of supervisory visit plans at least a week in advance.
The central STI capacity-building team developed a clinic quality monitoring tool in a participatory manner as a component of regular supervision, for which guidelines were outlined in the supervisory handbook.21
This tool was used as part of regular supervision and assisted in monitoring the quality of clinical services against the prescribed standards presented in Clinic Operational Guidelines and Standards.19 20
The purpose of the tool was twofold: (1) to serve as a checklist to help clinic supervisors from the state lead implementing partners to support, monitor and improve the overall quality of service delivery during their periodic visits; and (2) to track quality of STI services over time.
This tool assessed five performance areas of STI clinical services, as shown in table 1. The coverage performance area measured accessibility, acceptability and contact coverage, the three important domains related to the provision of STI services in programme settings where exclusive services are made available to SWs, MSM/TGs and IDUs.22 23
The coverage component of the tool was devised to explore various structural and environmental barriers for service uptake such as geographical distance, mobility, work limitations and social stigma, so that actions could be taken to enhance the coverage.24
The quality of clinic and services performance area assessed 10 clinic and service components (see table 1), such as correct treatment, counselling, infection control, confidentiality, drug stock and record keeping, against the defined standards.20
The referral network performance area measured the availability of an adequate referral network and the use of STI clinics as an entry point for HIV testing and treatment services. The community involvement performance indicator measured the involvement of the community in clinic service delivery, an explicit component of the programme to facilitate client-orientated services, community ownership and sustainability.25
The technical support performance indicator measured the adequacy of supportive supervision provided to the clinic staff by the lead implementing partner's technical and management team. The tool documents observations in yes/no or numerical form (percentages and numbers) for around 80 observations, including interviews, clinical observations, record reviews and data analysis. The yes/no questions were scored 0 or 1; numerical answers were converted to an ordinal score between 0 and 5. Based on the number and type of questions under each enquiry area, a mean score between 0 and 5 was calculated for each of the five enquiry areas. For example, the coverage component had five numerical questions (table 1). Based on the percentage of coverage, each question was given a score between 0 and 5, and the mean of these five scores was taken as the score for coverage. In a similar manner, the mean score between 0 and 5 for each of the other enquiry areas was calculated using the supervisory tool. Some of the subcomponents of the five main performance indicators underwent refinement over time based on field experience; such adjustments were mostly improvements in the definition of subcomponents and their measurement methodologies so that observations could be documented quantitatively. The clinic quality-monitoring tool is a participatory tool that guides supervisors through a systematic supportive supervision session requiring about 2–3 h per clinic assessment, with the active involvement of the service providers.
Table 1 Summary of the clinic quality-monitoring tool used to measure the quality against defined standards21
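The scoring logic described above can be sketched as follows. This is a minimal illustration only: the percentage-to-ordinal conversion bands and the example values are hypothetical, not the programme's actual instrument.

```python
def score_yes_no(answers):
    # yes/no observations are scored 0 or 1
    return [1 if a else 0 for a in answers]

def enquiry_area_score(percentages):
    # each numerical (percentage) question is converted to an ordinal
    # score between 0 and 5; 20-point bands are assumed here purely
    # for illustration (0-19% -> 0, ..., 100% -> 5)
    def to_ordinal(pct):
        return min(int(pct // 20), 5)
    scores = [to_ordinal(p) for p in percentages]
    # the enquiry-area score is the mean of its question scores
    return sum(scores) / len(scores)

# five hypothetical coverage percentages for one clinic
print(enquiry_area_score([85, 60, 95, 40, 70]))  # mean of [4, 3, 4, 2, 3] = 3.2
```

Each of the five enquiry areas would be reduced to a single 0–5 mean in this way, which is what makes the scores comparable across clinics and quarters.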
During the supervisory visits, the central team documented observations using the clinic quality-monitoring tool with the active participation of the clinic staff. Two outcomes were generated from the tool. First, the central STI capacity-building team produced a technical report and recommendations based on the observations, leading to immediate follow-up actions by the clinic to improve service quality; the state supervisory team followed up the corrective actions during subsequent visits. Second, the clinic quality-monitoring tool and technical reports were periodically reviewed by the central STI capacity-building team to generate an objective report using an ordinal score of 0–5 for each of the five enquiry areas at the clinic level. For each of the seven state lead implementing partners, quarterly clinic quality scores were generated under the five performance indicators, calculated by the central STI capacity-building team by averaging the scores for all clinics visited under the respective partner in that quarter. Final quarterly quality scores for the five performance indicators were obtained by averaging the scores across all seven state lead partners. Whenever a visit to a state lead implementing partner was not conducted in a particular quarter, the clinic quality score from the previous quarter was carried forward, assuming no change in quality in that quarter. The statistical significance of the quality-score trend over time was assessed based on the correlation coefficient using SPSS (SPSS, Chicago).
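The quarterly aggregation, the carry-forward rule for quarters without a visit, and the correlation-based trend assessment can be sketched as below. The visit data, quarter labels and partner are hypothetical; the original analysis was done in SPSS, so this is only an illustrative re-expression of the arithmetic.

```python
import math

def quarterly_scores(visits, quarters):
    """Average clinic scores per quarter for one partner, carrying the
    previous quarter's score forward when no visit took place.
    `visits` maps quarter -> list of clinic scores observed that quarter."""
    out, prev = [], None
    for q in quarters:
        scores = visits.get(q)
        if scores:                      # visits conducted this quarter
            prev = sum(scores) / len(scores)
        # else: no visit, so carry the previous quarter's score forward
        out.append(prev)
    return out

def pearson_r(xs, ys):
    # correlation coefficient of score against time, used to assess
    # whether the quality trend over the quarters is significant
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical partner with no supervisory visit in quarter 3
visits = {1: [2.0, 3.0], 2: [3.0, 3.5], 4: [4.0, 4.5]}
series = quarterly_scores(visits, quarters=[1, 2, 3, 4])
print(series)                           # [2.5, 3.25, 3.25, 4.25]
print(pearson_r([1, 2, 3, 4], series))  # positive r -> improving trend
```

In the programme, the per-partner quarterly series would additionally be averaged across all seven lead partners before the trend test.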