The usefulness of surveillance data depends directly on the quality of the data; every system should have a quality assurance program. Quality indicators reflect such attributes as system acceptability, timeliness, completeness, and representativeness of the collected data. These attributes should be assessed routinely. In addition, the system should undergo regular data audits and systematic field evaluation. In 2001, the Centers for Disease Control and Prevention published comprehensive guidelines for the evaluation of public health surveillance systems (30). These guidelines serve as a template for sentinel surveillance evaluation and quality recommendations. Several key quality indicators are recommended in the following section and in the Table.
Table. Influenza surveillance evaluation and recommended quality indicators
Regular field evaluations and audits at the facility level must be a standard component of the system. This process can verify that cases are being counted appropriately, that reported cases meet the case definition, and that sampling procedures are being applied uniformly and without evidence of bias. Data values recorded in the surveillance system can be compared with standard chart-review values through a retrospective review of a sample of medical records. If a sampling procedure is used for specimen collection, audits can ensure that procedures are uniform and unbiased. Audits can also determine whether clinical specimens are being collected, stored, processed, tested (if appropriate), and shipped properly and in a timely manner for all patients who meet the sampling criteria.
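As a concrete illustration, agreement between surveillance-system values and chart-review values can be summarized field by field. The Python sketch below is a minimal, hypothetical example; the case IDs and field names (age, onset_date, meets_case_definition) are illustrative, not part of any recommended schema.

```python
# Minimal sketch: field-by-field concordance between surveillance-system
# records and gold-standard chart-review values for an audit sample.
# Record structures and field names are hypothetical.

audit_fields = ["age", "onset_date", "meets_case_definition"]

def concordance(surveillance_records, chart_review_records):
    """Return % agreement per audited field, matching records by case ID."""
    agree = {f: 0 for f in audit_fields}
    compared = 0
    for case_id, chart in chart_review_records.items():
        system = surveillance_records.get(case_id)
        if system is None:
            continue  # record missing from the system: a completeness finding
        compared += 1
        for f in audit_fields:
            if system.get(f) == chart.get(f):
                agree[f] += 1
    return {f: 100.0 * n / compared for f, n in agree.items()} if compared else {}

# Example: two audited cases, one with a discrepant onset date.
system = {"c1": {"age": 34, "onset_date": "2009-06-01", "meets_case_definition": True},
          "c2": {"age": 7,  "onset_date": "2009-06-03", "meets_case_definition": True}}
chart  = {"c1": {"age": 34, "onset_date": "2009-06-01", "meets_case_definition": True},
          "c2": {"age": 7,  "onset_date": "2009-06-02", "meets_case_definition": True}}
print(concordance(system, chart))
# {'age': 100.0, 'onset_date': 50.0, 'meets_case_definition': 100.0}
```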
Observation of expected trends in reporting and disease activity provides an additional means of assessing data quality. Although expected values cannot be defined for some parameters, such as the percentage of specimens testing positive for influenza virus or the number of SARI cases occurring in a given facility, aberrations in the data over time or substantial differences between facilities can signal problems at a given site. Trends assessed may include the number of cases reported by month, the number of specimens submitted by month, the percentage of influenza-positive specimens, and the number and percentage of SARI and ILI cases tested.
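For instance, a simple screening rule might compare a new month's percentage of influenza-positive specimens against a site's own baseline months. The sketch below assumes a hypothetical 2-standard-deviation threshold and illustrative data; it is not a recommended aberration-detection method.

```python
# Minimal sketch: flag a month whose percentage of influenza-positive
# specimens deviates sharply from the site's baseline months. The data
# and the 2-SD threshold are hypothetical, not recommended values.
from statistics import mean, stdev

def is_aberrant(baseline, new_value, threshold_sd=2.0):
    """True if new_value lies more than threshold_sd standard deviations
    from the mean of the baseline observations."""
    mu, sd = mean(baseline), stdev(baseline)
    return sd > 0 and abs(new_value - mu) > threshold_sd * sd

baseline_pct_positive = [12.0, 14.5, 13.0, 11.5]   # % positive, prior months
print(is_aberrant(baseline_pct_positive, 41.0))    # True -- investigate this site
print(is_aberrant(baseline_pct_positive, 13.5))    # False -- within expectation
```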
To be useful, collection and reporting of surveillance data must be timely. Timeliness of the following activities is appropriate for routine measurement as quality indicators for surveillance sites: data reporting, specimen shipment to the laboratory for testing, receipt of specimens by the laboratory, laboratory processing and testing of specimens, and reporting of laboratory results.
One way to quantify timeliness is to calculate the percentage of times that a site achieves targets for specific intervals, for example, the percentage of times that a site sends reports or specimens to the appropriate place within a specified time frame. A hypothetical system may choose as a goal that 80% of data reports be sent within 48 hours of the reporting deadline or that 80% of specimens be shipped within 48 hours of specimen collection. Likewise, for the laboratory, the percentage of samples that are tested and have final results within a target time frame can be calculated. Targets will depend on site-specific circumstances and public health priorities.
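A minimal sketch of this calculation, assuming hypothetical report deadlines and receipt dates and using the 48-hour target mentioned above:

```python
# Minimal sketch: percentage of reports a site sends within a 48-hour
# target window. Dates and the 80% goal are hypothetical.
from datetime import datetime

TARGET_HOURS = 48

def pct_within_target(deadlines, received):
    """deadlines, received: parallel lists of datetimes for each report."""
    on_time = sum(1 for due, got in zip(deadlines, received)
                  if (got - due).total_seconds() / 3600 <= TARGET_HOURS)
    return 100.0 * on_time / len(deadlines)

due = [datetime(2009, m, 1) for m in range(1, 6)]      # monthly deadlines
got = [datetime(2009, 1, 2), datetime(2009, 2, 4), datetime(2009, 3, 1),
       datetime(2009, 4, 4), datetime(2009, 5, 2)]     # actual receipt dates
pct = pct_within_target(due, got)
print(f"{pct:.0f}% of reports within {TARGET_HOURS} h")  # 60% -- below an 80% goal
```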
A similar quality metric is the average time required to accomplish surveillance activities. For example, a hypothetical site that is chronically late in sending data each month might average several days between the deadline for receipt (the day of the week or month on which reports are due) and the actual receipt of data. For laboratory specimen processing, the average number of days between receipt of specimens and reporting of results can be measured and followed similarly. Average times can be compared across sites to identify those that are underperforming and to target improvements. Either the percentage of sites achieving timeliness targets or average time lags can be followed over time as a quality metric.
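The sketch below illustrates comparing mean reporting lags across sites; site names and lag values are hypothetical.

```python
# Minimal sketch: mean reporting lag (days) per site, to identify
# chronically late sites. Site names and lags are hypothetical.
from statistics import mean

lags_by_site = {         # days between reporting deadline and actual receipt
    "Site A": [0, 1, 0, 2, 1],
    "Site B": [5, 7, 6, 8, 6],
}
for site, lags in sorted(lags_by_site.items(),
                         key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{site}: mean lag {mean(lags):.1f} days")
# Site B: mean lag 6.4 days   <- candidate for targeted follow-up
# Site A: mean lag 0.8 days
```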
Indicators of completeness can be determined by analyzing reported data. They may include the percentage of reports received from each site with complete data, the percentage of total expected data reports received, and the percentage of total expected cases with specimens submitted to the laboratory (depending on the sampling scheme devised for the sites).
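The first two of these indicators reduce to simple proportions; the counts in the sketch below are hypothetical.

```python
# Minimal sketch: two completeness indicators computed from reported data.
# All counts are hypothetical.
expected_reports = 52           # e.g., one report expected per week
received_reports = 48           # reports actually received
complete_reports = 45           # received reports with no missing fields

pct_received = 100.0 * received_reports / expected_reports
pct_complete = 100.0 * complete_reports / received_reports

print(f"{pct_received:.0f}% of expected reports received")   # 92%
print(f"{pct_complete:.0f}% of received reports complete")   # 94%
```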