Rapid human immunodeficiency virus testing is often conducted in nonclinical settings by staff with limited training, so quality assurance (QA) monitoring is critical to ensure accuracy of test results. Rapid tests (n = 86,749) were generally conducted according to manufacturers' instructions, but ongoing testing competency assessments and on-site QA monitoring were not uniformly conducted.
In 2003, rapid human immunodeficiency virus (HIV) tests became available that were waived under the Clinical Laboratory Improvement Amendments of 1988 (CLIA), and HIV testing began to be conducted in nonclinical settings by nonlaboratory staff (3). Because testing outside of clinical settings may affect result accuracy, monitoring to ensure that rapid testing is conducted in a manner that is consistent with manufacturers' instructions is essential. The Food and Drug Administration requires testing facilities that plan to perform waived rapid HIV testing to develop quality assurance (QA) plans which include site-specific testing procedures, plans for training testing personnel, and development of systems to identify and correct mistakes (3, 4). Mistakes that can occur during the testing process include performing testing outside of recommended temperatures, collecting specimens incorrectly, documentation errors, improper performance of testing, and incorrect interpretation of results (7). Adherence to QA procedures and ongoing test performance monitoring should reduce the number of testing mistakes that occur and increase the accuracy of results (5, 6, 8, 13).
The Centers for Disease Control and Prevention (CDC) developed guidelines for the implementation and operation of rapid test QA programs (4). The CDC recommends that testing sites participate in external quality assessments to evaluate how well testing is being performed. Examples include (i) proficiency testing (PT) programs which provide panels of samples with known results; (ii) competency assessments which evaluate the ability of individuals to read test devices, such as interpreting pictures of rapid test devices; and (iii) QA monitoring in which someone outside the organization observes testing (4).
To characterize QA practices and outcomes, the CDC conducted postmarketing surveillance in conjunction with 17 state and city health departments (Arizona; Delaware; Florida; Indiana; Louisiana; Massachusetts; Michigan; Montana; Nebraska; New Jersey; New York; North Carolina; Utah; Wisconsin; Chicago, Illinois; New York City, New York; and San Francisco, California) from August 2004 through June 2005. Testing was performed at 368 rapid HIV testing sites, including sexually transmitted disease clinics, counseling and testing sites, outreach sites, and correctional facilities. Testing was conducted using the OraQuick rapid HIV-1 antibody test device on fingerstick specimens or the OraQuick Advance rapid HIV-1/2 antibody test device on fingerstick or oral fluid specimens (OraSure, Inc., Bethlehem, PA). This project was considered routine public health surveillance and did not require CDC Institutional Review Board approval.
A telephone survey was administered to rapid testing program managers at each health department to characterize the training provided to staff, the use of locally organized competency assessments, the use of external PT programs (e.g., CDC's Model Performance Evaluation Program), and QA monitoring practices.
During January through June 2005 (55,748 site-days), sites were provided forms to record the number of invalid tests (11), external control runs, and temperature deviations. External controls are run to verify that test devices provide accurate results (4). Reports of incorrect results from external control runs were assessed. Temperature deviations were defined as the number of site-days during which at least one temperature reading was outside allowable limits (i) where tests were performed, (ii) where test kits were stored, and (iii) where control materials were stored (10-12). The occurrence of invalid results following temperature deviations was evaluated.
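The site-day counting rule above (a deviation is counted once per site-day per storage location, no matter how many readings were out of range that day) can be sketched as follows. This is a minimal illustration, not the surveillance system's actual software; the site names, dates, and allowable temperature limits are hypothetical placeholders.

```python
from collections import defaultdict

# Hypothetical temperature log entries: (site, day, location, temp in °F).
readings = [
    ("site-A", "2005-01-03", "testing_area", 68),
    ("site-A", "2005-01-03", "kit_storage", 91),      # out of range
    ("site-A", "2005-01-03", "kit_storage", 95),      # same site-day: counted once
    ("site-A", "2005-01-04", "kit_storage", 74),
    ("site-B", "2005-01-03", "control_storage", 30),  # out of range
]

# Illustrative (low, high) limits in °F for each location type;
# actual limits come from the manufacturer's package insert.
LIMITS = {
    "testing_area": (59, 99),
    "kit_storage": (36, 80),
    "control_storage": (36, 46),
}

# Collect the distinct site-days with at least one out-of-range reading,
# tallied separately for each location type.
deviant_site_days = defaultdict(set)
for site, day, location, temp in readings:
    low, high = LIMITS[location]
    if not (low <= temp <= high):
        deviant_site_days[location].add((site, day))

counts = {loc: len(days) for loc, days in deviant_site_days.items()}
print(counts)  # one kit-storage site-day, one control-storage site-day
```

Because deduplication happens on the (site, day) pair, repeated out-of-range readings within a single site-day inflate neither the kit-storage nor the control-storage tally.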
Results of the telephone survey showed that all health departments required staff to be trained to perform rapid tests: 16 (94%) used a training course they had developed, and 11 (65%) required staff to read the package insert (Table 1). All health departments assessed the competency of staff to perform testing before testing clients, most commonly using pictures of test results (82%) (Table 2). Twelve (71%) health departments assessed the competency of staff to conduct tests after they began testing clients: 9 (53%) used an internal competency testing program, and 7 (41%) used an external PT program (Table 2).
Staff from 15 (88%) health departments visited all test sites during the surveillance period to monitor QA procedures and ensure that rapid testing materials were in place, and staff from 10 (59%) departments conducted these visits at least every 6 months. Only 8 (47%) health departments directly observed staff at all sites collecting specimens and interpreting test results. The following external QA assessments were performed by health department staff on-site or off-site for all sites: 16 (94%) reviewed external control test procedures, 16 (94%) examined test logs, 16 (94%) reviewed procedures for clients with preliminary positive test results, 15 (88%) reviewed procedures to address invalid and discordant test results, 12 (71%) examined temperature logs, and 10 (59%) observed how test results were explained to clients.
During January through June, 86,749 rapid tests were conducted: 61,193 (71%) using whole blood, 25,263 (29%) using oral fluid, and 293 (0.3%) using an unknown specimen type. There were 19 (0.02%) invalid test results, 13 of which were from oral fluid specimens. Repeat testing following all invalid tests produced nonreactive results. QA reports were received from 308 (84%) sites. There were 9,217 external control runs conducted (mean of 9.4 client tests per external control run). Four (0.04%) external control runs were reported to be invalid or incorrect. Temperatures were outside limits during 13 (0.02%) site-days where clients were tested, 161 (0.3%) site-days where test kits were stored, and 768 (1.3%) site-days where control materials were stored. External controls run after temperature deviations produced normal results. Almost 84% of out-of-range storage temperatures occurred at sites within one health department.
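The summary rates reported above follow directly from the raw counts. As a quick arithmetic check (using only the counts stated in this report):

```python
# Raw counts from the January-June surveillance period.
tests = 86_749             # rapid tests conducted
invalid = 19               # invalid test results
control_runs = 9_217       # external control runs
bad_control_runs = 4       # invalid or incorrect control runs

# Derived rates, matching the figures reported in the text.
print(f"invalid result rate: {invalid / tests:.2%}")                      # 0.02%
print(f"client tests per control run: {tests / control_runs:.1f}")        # 9.4
print(f"incorrect control run rate: {bad_control_runs / control_runs:.2%}")  # 0.04%
```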
During our assessment of rapid HIV test QA outcomes, over 86,000 tests were conducted in a variety of settings with few instances of temperature deviations, invalid test results, or incorrect external control test results. Health departments used comprehensive rapid test training programs which were not uniform across health departments, in part because state regulations for HIV testing vary. Although our survey and previous reports (5, 8) suggest that CLIA-waived rapid testing is usually conducted according to manufacturers' instructions, we found that some QA practices could be improved. Despite CLIA requirements to perform testing according to manufacturers' instructions, only 65% of health departments required staff members to read the package insert (3). Additionally, although all health departments assessed competency before staff began to test clients, some did not conduct competency assessments or on-site QA monitoring visits after testing began.
In ~30% of participating health departments, testing staff participated in neither locally organized competency assessments nor external PT programs after they began testing clients. These methods of quality assessment provide systematic evaluations of how well staff members perform testing and give staff experience testing reactive samples. Seroconversion panels with a variety of test line intensities can also be used to identify staff who have difficulty interpreting test results and who could benefit from additional training, but seroconversion panels and some PT panels can be expensive. Interpreting test results from photographed rapid HIV tests is a less-expensive method to evaluate the ability of staff to interpret test results (9).
Good testing practice requires periodic review of QA procedures. However, less than half of the health departments observed staff collecting specimens and interpreting results at all of their rapid test sites. On-site visits allow QA monitors to observe errors in testing procedures, such as incorrect specimen collection (13). When the same error is made by more than one staff member, it can indicate that program training curricula need to be modified. QA monitors should ensure that testing mistakes are followed by appropriate corrective actions, such as running external controls after temperature deviations in testing areas. They should also examine testing, temperature, and control logs; assess the use of external controls; evaluate whether confirmatory test results are positive for persons with reactive rapid tests; and assess the occurrence of invalid test results.
During surveillance, reports of temperatures outside allowable limits were unusual. At least two reasons may explain why one health department reported the greatest number of temperature deviations. First, its sites used minimum-maximum thermometers, which continuously record the lowest and highest temperatures since the thermometer was reset, whereas some health departments used standard thermometers. Second, its sites frequently used compact refrigerators, which may not maintain appropriate temperatures as effectively as full-sized refrigerators. When one health department contacted the manufacturer to report that control materials had been exposed to elevated temperatures, the company authorized their continued use pending examination for invalid results. Although the controls performed appropriately despite the temperature deviations, control materials that do not require refrigeration would be optimal, given the equipment and maintenance costs of refrigerated storage.
The findings in this report are subject to limitations. Validation studies were not uniformly conducted to assess whether QA outcomes were accurately counted (e.g., by examining corresponding testing logs). The number of incorrect external control results may be underestimated because some sites only reported invalid external control incidents. Additionally, on-site QA monitoring by parties other than health departments was not assessed.
Health departments and other programs that monitor rapid test sites should work to provide ongoing external quality assessments, such as competency testing and on-site monitoring. This will ensure that testing is conducted accurately and will allow for the prompt detection and correction of testing mistakes.
The members of the Post-Marketing Surveillance Team are as follows: Ann D. Gardner, Arizona Department of Health Services; Chicago Department of Public Health; Delaware Division of Public Health, HIV Prevention Program Team; Marlene Lalota, Florida Department of Health; Indiana State Department of Health; Louisiana Office of Public Health, HIV/AIDS Program; Massachusetts Department of Public Health; Michigan Department of Community Health; Montana Department of Public Health and Human Services; Stephen Jackson, Nebraska Health and Human Services System; New Jersey Department of Health and Senior Services; City of New York Department of Health and Mental Hygiene; New York State Department of Health, AIDS Institute; North Carolina Department of Health and Human Services; San Francisco Department of Public Health; Utah Department of Health; Wisconsin Department of Health and Family Services; Dollene Hemmerlein and the CDC Epidemic Response Laboratory; CDC HIV Diagnostic Laboratory; CDC HIV Virology Laboratory; and CDC Hepatitis Reference Laboratory.
The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention or the U.S. Department of Health and Human Services.
Published ahead of print on 19 August 2009.