Influenza Other Respir Viruses. 2017 March; 11(2): 138–147.
Published online 2016 November 14. doi: 10.1111/irv.12433
PMCID: PMC5304574

The assessment of data sources for influenza virologic surveillance in New York State

Abstract

Background

Following the 2013 USA release of the Influenza Virologic Surveillance Right Size Roadmap, the New York State Department of Health (NYSDOH) embarked on an evaluation of data sources for influenza virologic surveillance.

Objective

To assess NYS data sources, in addition to data generated by the state public health laboratory (PHL), that could enhance influenza surveillance at the state and national levels.

Methods

Potential sources of laboratory test data for influenza were analyzed for quantity and quality. Computer models, designed to assess sample sizes and the confidence of data for statistical representation of influenza activity, were used to compare PHL test data to results from clinical and commercial laboratories, reported between June 8, 2013 and May 31, 2014.

Results

Sample sizes tested for influenza at the state PHL were sufficient for situational awareness surveillance with optimal confidence levels only during peak weeks of the influenza season. Influenza data pooled from NYS PHLs and clinical laboratories generated optimal confidence levels for situational awareness throughout the influenza season. For novel influenza virus detection in NYS, combined real‐time RT‐PCR (rtRT‐PCR) data from state and regional PHLs achieved ≥85% confidence during peak influenza activity, and ≥95% confidence for most of the low season and all of the off‐season.

Conclusions

In NYS, combined data from clinical, commercial, and public health laboratories generated optimal influenza surveillance for situational awareness throughout the season. Statistical confidence for novel virus detection, which is reliant on only PHL data, was achieved for most of the year.

Keywords: computer models, data sources, influenza, surveillance

1. Introduction

Influenza surveillance is essential to monitor the spread and severity of the disease, identify populations at risk, detect the emergence of new subtypes and variant strains with pandemic potential, monitor the prevalence of drug resistance, characterize circulating virus types for the selection of strains for vaccine production, and provide information and guidance for clinicians and public health officials. The World Health Organization (WHO) established a global surveillance network for disease caused by influenza viruses in 1952.1 Since then, the Centers for Disease Control and Prevention (CDC) in collaboration with a network of PHLs has formed a US national influenza surveillance system comprising 85 PHLs performing CDC molecular assays to type and subtype influenza viruses, and 60 hospital laboratories reporting influenza test data.2 The resulting network and data monitoring systems have facilitated the detection of numerous important events and viral changes, including the rapid identification in 2009 of the pandemic influenza strain (A/H1pdm09). In 2010, to address concurrent fiscal constraints and emerging diseases, the CDC and the Association of Public Health Laboratories (APHL) initiated the Influenza Virologic Surveillance Right Size Project to assess the vast and complex national surveillance system, determine the most efficient means to monitor influenza activity, and establish a standard reference for the CDC and state PHLs.2 The Influenza Virologic Surveillance Right Size Roadmap (1st Edition released in 2013) attempted to guide surveillance toward a more systematic and statistically relevant process. The roadmap includes sample size calculators, developed to estimate the appropriate numbers of samples needed to achieve influenza surveillance with statistical confidence for situational awareness and rare/novel influenza event detection. The roadmap proposed the identification of alternate, non‐PHL data sources as a means to augment state PHL data and, in turn, enhance national surveillance.

Data generated by the NYS PHL, the Wadsworth Center, were measured against sample numbers calculated with the computer models for influenza situational awareness and rare/novel event detection. New York State alternate data from clinical and commercial laboratories were analyzed for integrity and impact on influenza situational awareness. Regional NYS PHL data were assessed for their impact on rare/novel event detection. New York State Department of Health scientific staff in the Virology Laboratory at the Wadsworth Center, in partnership with epidemiologists from the Bureau of Communicable Disease Control (BCDC), evaluated influenza testing practices, regulations, infrastructure, data collection, and reporting. Additionally, surveillance policy, potential future ideal practices and systems, and likely hurdles that may impede implementation were discussed.

2. Methods

2.1. Situational awareness of influenza viral disease in NYS

2.1.1. Laboratory networks of influenza data sources

The NYS Wadsworth Center PHL performs influenza testing on specimens received through the Influenza‐like Illness Network (ILINet) and Emerging Infections Program (EIP), in addition to samples received from non‐EIP hospitals, student health clinics, veteran administration (VA) centers, long‐term care facilities, correctional facilities, and occasionally commercial laboratories.

The ILINet is an outpatient influenza surveillance program supported by CDC in all states.3 For the 2013‐2014 season, the NYSDOH ILINet Program had 173 participating primary care physicians (ILINet providers) in 39 of the 57 NYS counties outside of New York City, from a variety of medical practice specialties 4 (Figure 1). ILINet providers report data and submit specimens from patients with medically attended influenza‐like illness (MA‐ILI). The New York City Department of Health and Mental Hygiene (NYCDOHMH) coordinates a separate ILINet Program in the five counties of NYC.

Figure 1

NYS map showing the 39 counties of 57 total outside of NYC that contribute to the ILINet and EIP influenza virologic surveillance networks. Counties participating in the EIP program are clustered around the cities of Albany and Rochester. The distribution ...

The CDC supports the EIP, a program within FluServ‐NET,3 for surveillance of patients hospitalized with influenza. The influenza activities component of EIP in NYS comprises 21 hospitals in 15 counties around Albany in the Capital District region and Rochester in western NYS. Participating hospitals send a subset of data and influenza‐positive samples to the Wadsworth Center for confirmation and virus characterization.

The Wadsworth Center is both a WHO Collaborating Laboratory and a National Respiratory and Enteric Virus Surveillance System (NREVSS) laboratory. The 11 WHO and 11 NREVSS clinical laboratories in NYS (including four which are both) voluntarily participate in these networks and transmit influenza surveillance data to the CDC.3

2.1.2. Sample size calculators

The Influenza Virologic Surveillance Right Size Project developed online Right Size Sample Size Calculators to determine the optimal sample sizes that should be analyzed to generate statistically meaningful, sufficient, and relevant data for influenza surveillance.2, 5 The user chooses the state or population, and based on the prevalence of influenza, the calculators generate the sample sizes with specified confidence levels and margins of error (MOEs), along with a reminder that sampling biases could affect the results. The CDC recognizes the start of influenza season at a threshold of ≥10% influenza positivity among all specimens tested from patients with MA‐ILI for two consecutive weeks. Calculator A utilizes this threshold to estimate the recommended sample sizes needed to ascertain influenza activity, with resulting information referred to as “situational awareness.” Calculator B determines statistically appropriate sample sizes for the detection of a rare/novel influenza virus.2
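The Roadmap calculators are distributed as online tools; the minimal Python sketch below illustrates the standard proportion sample‐size calculation with a finite population correction on which such a calculator can be based. The function name and defaults are illustrative, and the published Roadmap tools may apply additional adjustments, so the output (139) approximates but does not exactly reproduce the 137‐specimen weekly threshold cited in Section 3.2.

from math import ceil

def situational_awareness_sample_size(population, prevalence=0.10,
                                      z=1.96, moe=0.05):
    """Approximate weekly specimen count needed to estimate influenza
    positivity among MA-ILI patients within `moe` at ~95% confidence
    (z = 1.96), with a finite population correction."""
    n0 = (z ** 2) * prevalence * (1 - prevalence) / (moe ** 2)
    return ceil(n0 / (1 + (n0 - 1) / population))

# NYS population of ~20 million, 10% expected positivity at season start
print(situational_awareness_sample_size(20_000_000))  # -> 139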

2.1.3. Quality of influenza data

Considerable alternate data have been readily available since NYSDOH added laboratory‐confirmed influenza to its list of reportable communicable diseases in December 2004. Positive influenza laboratory results, regardless of test method, are required to be reported to the NYSDOH 6 by clinical laboratories with permits. A NYSDOH clinical laboratory permit must be obtained through the Clinical Laboratory Evaluation Program (CLEP) by laboratories performing diagnostic testing on NYS patients,7 the requirements for which include a quality management system (QMS). NYS laboratories are eligible for (i) a permit for high or moderate complexity testing or (ii) registration as a Limited Service Laboratory (LSL) to perform waived tests that are simple and considered to have minimal risk of incorrect results or of causing harm. Limited Service Laboratories include nursing homes, school/student health services, dialysis facilities, ambulatory surgery centers, county health departments, correctional facilities, ambulance/rescue squads, and other direct patient care facilities. Physician Office Laboratories (POLs), operated by healthcare practitioners, only perform tests on specimens from their own patients and are exempt from the requirement to hold a CLEP permit. Limited Service Laboratories and POLs, with minimal QMS and regulatory oversight, are not required to report positive influenza results to the state health department.

2.1.4. Reporting of influenza data in NYS

The Electronic Clinical Laboratory Reporting System (ECLRS) provides an electronic system for prompt and protected transmission of reportable disease information to the NYSDOH, local health departments, and the NYCDOHMH.8

Clinical and commercial laboratories submit influenza‐positive test results electronically to the NYSDOH via ECLRS. Each ECLRS report contains specimen‐level data, including name, DOB, sex, address, home phone, county of residence, reporting laboratory, ordering physician, specimen source, testing method, results, specimen collection date, and report date. New York State Department of Health staff review the submitted data to determine whether it meets the case definition for laboratory‐confirmed influenza, defined as a positive influenza laboratory test result with at least one of the following methods: culture, enzyme immunoassay (EIA), direct immunofluorescence assay (DFA), immunofluorescence assay (IFA), RT‐PCR, immunohistochemistry (IHC), or influenza virus antigen detection systems (IVADs, also known as rapid influenza diagnostic tests, RIDTs). If the case definition is met, an influenza case report is created in the Communicable Disease Electronic Surveillance System (CDESS). The system automatically deletes duplicate CDESS case reports on the same patient. If the Wadsworth Center tests and does not confirm an initial positive influenza result from another laboratory, the initial test is considered a false‐positive result and the original influenza case report is revoked. Communicable Disease Electronic Surveillance System allocates ECLRS positive influenza laboratory results to disease classification codes for influenza type A or B, influenza type not specified, A/H1pdm09 subtype (since 2009), and H7N9 (since 2013). A disease classification code for influenza A/H3 subtype was added in October 2014 pursuant to discussions from this project.
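The paragraph above describes a reporting pipeline: specimen‐level ECLRS reports are reviewed, mapped to disease classification codes, and automatically de‐duplicated into CDESS case reports. The sketch below is a hypothetical illustration of the de‐duplication step only; the field names, match criteria, and classification logic are assumptions for illustration and are not the actual CDESS implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class EclrsReport:
    # A few of the specimen-level fields listed above; names are illustrative.
    patient_name: str
    dob: str
    county: str
    result: str            # e.g., "Influenza A", "Influenza B", "A/H1pdm09"
    collection_date: str

def to_case_reports(reports):
    """Collapse ECLRS reports into one case per patient and disease
    classification, mimicking the automatic de-duplication described above.
    The match key is a guess; real matching would be more robust."""
    cases = {}
    for report in reports:
        key = (report.patient_name, report.dob, report.result)
        cases.setdefault(key, report)  # keep only the first report per key
    return list(cases.values())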

While PHLs including Wadsworth use the CDC influenza rtRT‐PCR panel to detect and subtype influenza viruses, clinical laboratories use a variety of testing methods, which may or may not include subtyping. The majority of influenza molecular assays generate results in approximately 1‐8 hours and are capable of detecting influenza viruses with very high sensitivity and specificity; some tests also identify subtypes.9 Growth and isolation of influenza viruses in culture may take 7‐14 days or longer, while IVAD kits provide results in 15‐30 minutes.10 Table 1 summarizes 2014 information on testing platforms and assays, the number of licensed clinical laboratories using them in NYS, test complexity, and the influenza types/subtypes and other respiratory pathogens detected.

Table 1

Influenza testing platforms used by NYS CLEP licensed clinical laboratories testing NYS specimens

2.2. Rare/novel influenza virus detection with NYS PHL data sources

Revised recommendations released in 2014 11 advised using only PHL data generated with molecular methods for the assessment of sample sizes for detection of a rare/novel influenza event. Three regional PHLs exist in NYS: the Erie County PHL in western NY, the NYC PHL, and Westchester PHL downstate. Sample sizes and confidence levels were evaluated with the calculators for all available data generated with the CDC influenza rtRT‐PCR assay from Wadsworth and the regional PHLs.

Pooling state surveillance data into national aggregates produces sample sizes large enough to meet the recommended confidence levels for the detection of a rare/novel influenza event. During peak season, when influenza positivity is 20% or greater, the Right Size Roadmap recommends that states calculate sample sizes sufficient to have 95% confidence in detecting one novel virus among 700 influenza‐positive specimens. Before and after peak influenza activity, when positivity is less than 20%, the Roadmap recommends a detection threshold of one novel virus among 200 influenza‐positive specimens, while for off‐season and summer periods, a threshold of one novel virus among four influenza‐positive samples is recommended.
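These thresholds translate into Flu+ sample sizes through a simple probability argument: if a novel virus makes up a fraction f of influenza‐positive specimens, the number n of Flu+ specimens needed nationally follows from requiring 1 − (1 − f)^n ≥ 0.95. The sketch below assumes a plain population‐proportional allocation of the national total to states (an assumption on our part; the Roadmap calculators may allocate differently) and approximately reproduces the 37‐specimen low‐season figure and the 11‐per‐week off‐season figure quoted in Section 3.3.

from math import ceil, log

def national_flu_positive_samples(detection_fraction, confidence=0.95):
    """Flu+ specimens needed nationally so that at least one novel virus,
    present at `detection_fraction` of Flu+ specimens, is detected with
    the requested confidence: solve 1 - (1 - f)**n >= confidence for n."""
    return ceil(log(1 - confidence) / log(1 - detection_fraction))

peak = national_flu_positive_samples(1 / 700)   # ~2096 Flu+ specimens per week
low = national_flu_positive_samples(1 / 200)    # ~598
off = national_flu_positive_samples(1 / 4)      # ~11 (roughly 1 per state every 5 weeks)

# Assumed population-proportional state allocation (NYS ~20M of ~320M).
nys_share = 20_000_000 / 320_000_000
print(round(low * nys_share))  # -> 37 Flu+ specimens per week in low season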

3. Results

3.1. Recommended sample sizes for NYS influenza surveillance

The goal for NYS is to obtain the recommended sample sizes for the state population of approximately 20 million, which would ensure optimal detection thresholds with ≥95% confidence and ≤5% margin of error (MOE) for both situational awareness and rare/novel event detection (Table 2). For situational awareness, Calculator A establishes ideal sample sizes using unscreened MA‐ILI specimens. For detection of a rare/novel event, the current Calculator B, revised in late 2015, uses only Flu+ specimens tested at state PHLs.

Table 2

Sample sizes calculated for NYS using the influenza Virologic Surveillance Right Size roadmap calculators A and B for 2013‐14

3.2. Sample size calculations for situational awareness in NYS

New York State influenza test data were compared with the recommended sample sizes for situational awareness as determined with Calculator A (Figure 2). To avoid bias, specimens should preferably be unscreened or randomly sampled. The Wadsworth Center Virology Laboratory receives specimens for influenza testing from many sources, including some that are prescreened by IVADs or other methods. During most weeks of peak influenza activity, the sample sizes needed to achieve ≥95% confidence levels for situational awareness were obtained only with a combination of randomly submitted Flu+ and MA‐ILI specimens. The recommended sample sizes were not achieved with MA‐ILI specimens alone during peak season, nor with Wadsworth test data alone outside of peak season.

Figure 2

Influenza testing performed during 2013‐2014 by the Wadsworth Center on respiratory samples, relative to the recommended sample size determined from the Right Size Roadmap Calculator A for situational awareness, with 95% confidence and 10% expected ...

The WHO/NREVSS laboratories in NYS provide additional data with sufficient power to meet the optimal confidence levels and MOE determined with Calculator A (Figure 3) during high and low influenza activity. While the WHO/NREVSS laboratories already transmit data to the CDC, their data can also be used for state surveillance. When the WHO/NREVSS data were used to estimate the true prevalence of Flu+/MA‐ILI, confidence levels were 99% with a MOE of ≤5% throughout the year, including off‐season when adjusted for estimated prevalence, indicating that the WHO/NREVSS data are likely to provide an accurate representation of the true influenza prevalence.
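As a rough illustration of how a margin of error around weekly positivity can be computed from laboratory test volumes, the sketch below uses the normal approximation for a proportion. The weekly counts in the example are invented for illustration only and are not the actual WHO/NREVSS figures.

from math import sqrt

def positivity_margin_of_error(positives, tested, z=2.576):
    """Margin of error around the observed weekly Flu+/MA-ILI positivity,
    using the normal approximation; z = 2.576 gives ~99% confidence."""
    p = positives / tested
    return z * sqrt(p * (1 - p) / tested)

# Invented example: 300 Flu+ results out of 1500 specimens tested in a week.
moe = positivity_margin_of_error(300, 1500)
print(f"positivity 20.0% +/- {moe:.1%}")  # roughly +/- 2.7%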

Figure 3

Influenza testing performed during 2013‐2014 by the Wadsworth Center, NYS regional PHLs, WHO/NREVSS‐collaborating laboratories in NYS, and clinical and commercial laboratories with NYS permits (CDESS data), relative to the recommended ...

Alternate data for situational awareness during the 2013‐2014 influenza season in NYS were obtained from 193 clinical laboratories, which reported 43 281 positive influenza laboratory results to ECLRS, including results of high‐ and moderate‐complexity testing as well as waived testing. These results generated 37 180 positive influenza CDESS cases (Figure 3); the CDESS cases do not include negative influenza test results but comprise results from IVAD, culture, and molecular influenza testing methods. In NYS, a biphasic influenza picture occurred during the 2013‐2014 season, with the highest levels of influenza type A circulating in January and high levels of influenza type B circulating in April.

Test methods for the 2013‐2014 season included 17 426 influenza test results positive by IVAD methods (40%) compared to 20 170 positive by PCR tests (47%) (Figure 4). The majority of clinical laboratories use IVAD tests for influenza. The impact of alternate data on situational awareness was analyzed for both rapid influenza tests and molecular test methods. Surpassing the situational awareness threshold of 137 samples to indicate the start of the influenza season, IVAD testing yielded 155 positive samples during the first week of December [Morbidity and Mortality Weekly Report (MMWR) week 1349], and 219 PCR‐positive samples 2 weeks later (MMWR week 1351). However, false‐positive IVAD results are of particular concern outside of peak influenza season when the positive predictive value of these tests is low.

Figure 4

For the 2013‐2014 influenza season 43 281 total influenza positive tests were reported to ECLRS, of which 40% were positive by IVAD and EIA, 47% positive by PCR, 11% positive by culture, IFA, and DFA, and 2% by antibody, IHC, and other ...

3.3. Sample size calculations for rare/novel virus detection with NYS PHL data

For the detection of a rare/novel influenza virus, Wadsworth test data were compared to the recommended sample sizes for NYS, aggregated on a national scale, as determined from Calculator B (Figure 5). The total number of Flu+ specimens tested at the Wadsworth Center was insufficient to detect a rare/novel event at the recommended confidence levels. To augment detection of a rare/novel influenza virus, NYS regional PHL influenza rtRT‐PCR data supplemented the Wadsworth Center rtRT‐PCR data (Figure 5). From the last week of December 2013 through January 2014, during peak influenza activity, the Flu+ data provided 86% to 94% confidence in the likelihood of detecting a novel virus present in 1 of 700 cases. During low season, the recommended threshold for detection of a novel virus is 1/200 with a minimum Flu+ sample size of 37; sample sizes with 95% confidence were obtained for 4 of those 6 weeks (Table 2). During off‐season, the recommended threshold drops to 1/4 with a minimum Flu+ sample size of 1. In fact, just 1 Flu+ sample is needed every 5 weeks per state, as 11 Flu+ samples are needed per week for 52 states nationally (personal communication, Lynette Brammer, CDC). Sample sizes with 95% confidence were obtained for 40 of 40 off‐season weeks when counting 1 Flu+ every 5 weeks. Thus, to contribute to national surveillance in detecting a rare/novel influenza virus, combined sample sizes from Wadsworth and the regional PHLs were sufficient to reach minimum confidence levels (≥85%) for the recommended detection thresholds during peak weeks of influenza activity, and optimal (≥95%) confidence for the majority of low season and all of off‐season, yet not throughout the year.
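The confidence figures above come from the inverse of the calculation underlying Calculator B: given the number of Flu+ specimens tested by rtRT‐PCR, the probability of seeing at least one novel virus present at a given fraction is 1 − (1 − f)^n. The sketch below states only the national‐level relationship; the weekly Flu+ counts are invented for illustration, and the Roadmap calculators additionally scale a state's contribution against the national requirement, which is not reproduced here.

def detection_confidence(flu_positive_count, detection_fraction):
    """Probability that at least one novel virus appears among
    `flu_positive_count` Flu+ specimens when novel viruses make up
    `detection_fraction` of circulating influenza-positive cases."""
    return 1 - (1 - detection_fraction) ** flu_positive_count

# Invented national weekly Flu+ totals at the peak-season 1/700 threshold.
for n in (1000, 1500, 2100):
    print(f"{n} Flu+ specimens -> {detection_confidence(n, 1/700):.0%} confidence")
# 1000 -> ~76%, 1500 -> ~88%, 2100 -> ~95%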

Figure 5

Influenza testing performed during 2013‐2014 by the Wadsworth Center, and NYS regional PHLs. Total number of positives are shown relative to the recommended Flu+ sample sizes determined from the Right Size Roadmap Calculator B for detection of ...

4. Discussion

For patient samples tested at Wadsworth, the recommended sample size for influenza surveillance for situational awareness was achieved for half of the peak weeks of influenza activity. These data included test results from the random submission of specimens from hospitalized cases as well as those from primary care patients. To augment the total surveillance information, other data sources from clinical and commercial laboratories were investigated. The WHO/NREVSS‐collaborating laboratories, geographically spread throughout NYS, generate considerable data for situational awareness and directly transmit these data to the CDC as well as to NYS. All WHO/NREVSS laboratories post negative as well as positive results, providing sufficiently robust data for estimates of influenza prevalence. Beyond the WHO/NREVSS‐collaborating laboratories, the remainder of the clinical laboratories that report to ECLRS do not report to CDC, nor are their data shared with CDC by the NYSDOH. These ECLRS/CDESS data consist of only influenza‐positive cases, and sufficient numbers are not attained for situational awareness in the off‐season months. The CDESS Flu+ cases for NYS surveillance comprise a large alternate data source that is not currently transmitted to the CDC and provides an indicator of prevailing influenza strains.

In the months preceding peak influenza activity, more samples were tested by IVAD than PCR, yet the reliability of the IVAD data is questionable. Some providers submit IVAD‐positive specimens to the Wadsworth Center for PCR confirmation and identification of the subtype, particularly at the beginning of the season. A significant number of IVAD‐positive tests are not submitted for confirmation by PCR testing but are still reported as CDESS‐positive cases. The Right Size Roadmap companion document released in October 2014 “Using Alternative Data for Influenza Virologic Surveillance” states “Alternative sources should ONLY be used for determining situational awareness. Only PHL rRT‐PCR test data should be used to meet national novel influenza event detection thresholds”.11

Wadsworth molecular data were insufficient to achieve the recommended confidence levels and thresholds for detection of a rare/novel event throughout 2013‐2014. Only the CDC rtRT‐PCR panel used at the state PHLs detects all influenza subtypes commonly circulating in humans, as well as A/H5 and A/H7 strains with kits provided for reflex testing. Therefore, only the CDC assays are likely to reveal an emerging novel virus. Wadsworth and the regional NYS PHL data combined met the Flu+ sample sizes needed to detect a novel virus at a 1/700 threshold with minimal (≥85%) confidence for the five peak weeks of influenza activity. Desired sample sizes, thresholds, and optimal (≥95%) confidence levels were obtained for the majority of low season (1/200 threshold) and all of off‐season (1/4 threshold). To increase off‐season influenza testing, NYSDOH issues an annual notice to NYS clinical laboratories, requesting that all clinical samples positive for influenza by any detection method during the summer be submitted to the Wadsworth Virology Laboratory for testing with the CDC influenza rtRT‐PCR assay.

Extensive validation studies have shown that molecular detection by rtRT‐PCR is highly sensitive and specific for the detection and subtyping of influenza viruses.12 Large respiratory viral panel (RVP) assays have become increasingly popular for the testing of respiratory samples, despite a potential decrease in sensitivity compared to single‐target PCR assays. In a comparison of 11 898 respiratory samples tested by the CDC influenza rtRT‐PCR assay and the Luminex xTAG RVP, influenza A positive samples with low viral load were often not detected by the RVP and were mostly detected only by the CDC rtRT‐PCR.13 Relying on RVP data alone could therefore introduce inaccurate positivity rates. Although sensitivity and specificities vary between different RVPs such as the BioFire Film Array, GenMark eSensor, Luminex xTAG, and the new NxTAG, a significant advantage is the detection of multiple respiratory pathogens (Table 1).14

Compared to molecular and culture assays, the ease and rapidity of point‐of‐care antigen screening tests are perceived as beneficial for clinical management, particularly during periods of high prevalence.10 IVAD tests detect only the influenza virus type and do not distinguish the subtypes of influenza, nor would they identify a rare/novel emerging virus. In a meta‐analysis of 17 studies comparing A/H1pdm09 detection by IVAD to that by rtRT‐PCR, the estimated overall sensitivity of IVADs was 51% (ranging from 11% to 88%) and the specificity was 98%.15 Another meta‐analysis of 159 similar studies assessing IVAD performance determined the average sensitivity to be 62.3% and the specificity to be 98.2% (95% confidence interval, 97.5%‐98.7%). Result accuracy was variable, depending on whether the specimen was collected from a child or an adult, as well as on virus type and subtype.16 Thus, IVADs cannot reliably detect emerging influenza subtypes, and their widespread use presents a risk of missing the spread of a potentially pandemic strain. For influenza surveillance, recommended practices include the confirmation of IVAD results using molecular assays or cell culture at the beginning and end of the influenza season.
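The concern about off‐season IVAD false positives noted in Section 3.2 follows directly from these sensitivity and specificity estimates: the positive predictive value of a test collapses as prevalence falls. A short worked example using the meta‐analysis averages cited above is given below; the prevalence values are illustrative, not measured NYS figures.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(influenza | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Meta-analysis averages cited above; prevalence values are illustrative.
for prevalence in (0.20, 0.02):
    ppv = positive_predictive_value(0.623, 0.982, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}")
# ~90% of positives are true influenza at 20% prevalence, but only ~41%
# at 2%, which is why off-season IVAD positives warrant molecular confirmation.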

The NYSDOH has enhanced its statewide influenza surveillance by making laboratory‐confirmed influenza reportable and developing an electronic reporting system. The Electronic Clinical Laboratory Reporting System provides a mechanism for timely reporting, improves completeness and accuracy of reports, and facilitates the identification of emergent public health problems. Limiting the reportability of laboratory‐confirmed influenza to clinical laboratories has ensured that the reported testing was performed under extensive QMSs, and it provides tens of thousands of reports per season with extensive temporal and geographic coverage. During influenza season, the NYSDOH BCDC Influenza Surveillance Coordinator compiles a detailed weekly influenza report, which is posted online. This report includes geographic and demographic distribution of influenza cases; weekly and seasonal comparison of case numbers; testing and subtyping data from the WHO/NREVSS‐collaborating laboratories in NYS; Wadsworth Center antiviral resistance data; numbers of healthcare facility‐associated outbreaks; severity of disease; and any pediatric influenza‐associated fatalities.17

Challenges exist in communicating the reporting requirements to clinical laboratories testing NYS patients. The multiple laboratory classifications (permitted, LSL, POL) and different reporting requirements across diseases can create confusion. During peak influenza season, NYSDOH Statistical Unit staff must prioritize influenza data processing over other reportable diseases. Means of reporting of demographic data, LOINC® and SNOMED® coding, and test descriptions need to be standardized. The addition of codes for new strains, as they arise, improves the granularity and accuracy of reporting. Further, ECLRS reporting does not include the total number of specimens tested for denominator data. No other reportable communicable disease in NYS requires laboratories to report denominator data, which would be burdensome and might require a legislated regulatory change.

Multiple networks in NYS contribute to influenza surveillance on the state level, but only some of those networks transmit data to the CDC for national surveillance. Wadsworth Center data together with NYS alternate data sources portray a clear picture of influenza in the community and give confidence to the surveillance of influenza viruses circulating at any time in each county or region. The NYS alternate data could enhance national influenza surveillance for situational awareness due to its volume, geographic representation within NYS, and reliability. During 2013‐2014, NYS experienced a second wave of influenza activity, mostly type B, as intense and long lasting as the first wave of influenza A/H1pdm09 activity. Several other northeast states also experienced this second wave of activity, but most of the United States did not. NYS was the only state in the United States to report widespread activity to CDC for 24 consecutive weeks. Thus, the impact of the NYS alternate data is widespread, contributing to national and subsequently international influenza surveillance.

Acknowledgements

This work was supported by the APHL and by Cooperative Agreement Number U60HM000803 from the CDC and the Office of the Assistant Secretary for Preparedness and Response. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the APHL or the CDC. The authors thank Lynette Brammer, CDC, for guidance throughout the project. They also thank Stephanie Chester, Sarah Muir‐Paulik, and Kelly Wroblewski at the APHL for facilitating the influenza surveillance alternative data project, and Lynette Brammer, Stephanie Chester, and Stephanie Shulman (NYS CLEP Director) for review of the final manuscript. We sincerely appreciate the NYS Statistical Unit and Epidemiology staff for analyzing the NYS influenza surveillance data, and the participating clinical facilities whose data submissions contribute to the robust NYS influenza surveillance system.

Notes

Escuyer K. L., Waters C. L., Gowie D. L., Maxted A. M., Farrell G. M., Fuschino M. E. and St. George K. (2017), The assessment of data sources for influenza virologic surveillance in New York State. Influenza and Other Respiratory Viruses 11, 138–147. doi: 10.1111/irv.12433

References

1. World Health Organization . Global Epidemiological Surveillance Standards for Influenza. World Health Organization. http://www.who.int/influenza/resources/documents/WHO_Epidemiological_Influenza_Surveillance_Standards_2014.pdf. Accessed October 3, 2016.
2. Association of Public Health Laboratories . Influenza Virologic Surveillance Right Size Roadmap. http://www.aphl.org/AboutAPHL/publications/Documents/ID_July2013_Influenza-Virologic-Surveillance-Right-Size-Roadmap.pdf. Published July 2013. Accessed October 3, 2016.
3. Centers for Disease Control and Prevention . Overview of Influenza Surveillance in the United States. http://www.cdc.gov/flu/weekly/overview.htm. Updated October 13, 2016. Accessed November 1, 2016.
4. New York State Department of Health . Influenza‐like Illness Surveillance Program (ILINet). http://www.health.ny.gov/diseases/communicable/influenza/surveillance/ilinet_program/. Updated November 2015. Accessed October 3, 2016.
5. Association of Public Health Laboratories . Influenza Virologic Surveillance Right Size Sample Size Calculators: User Guide. http://www.aphl.org/aphlprograms/infectious/influenza/Documents/ID_2013July_User-Guide-Sample-Size-Calculators.pdf. Published July 2013. Accessed October 3, 2016.
6. New York State Department of Health . Communicable Disease Reporting. http://www.health.ny.gov/professionals/diseases/reporting/communicable/. Updated February 2015. Accessed October 3, 2016.
7. New York State Department of Health Wadsworth Center . Clinical Laboratory Evaluation Program. http://www.wadsworth.org/labcert/clep/clep.html. Accessed October 3, 2016.
8. New York State Department of Health . Electronic Clinical Laboratory Reporting System. http://www.health.ny.gov/professionals/reportable_diseases/eclrs/index.htm. Updated March 2011. Accessed October 3, 2016.
9. Centers for Disease Control and Prevention . Guidance for Clinicians on the Use of RT‐PCR and Other Molecular Assays for Diagnosis of Influenza Virus Infection. http://www.cdc.gov/flu/professionals/diagnosis/molecular-assays.htm. Updated August 23, 2016. Accessed October 3, 2016.
10. Centers for Disease Control and Prevention . Rapid Diagnostic Testing for Influenza: Information for Clinical Laboratory Directors. http://www.cdc.gov/flu/professionals/diagnosis/rapidlab.htm. Updated August 4, 2016. Accessed October 3, 2016.
11. Association of Public Health Laboratories . Influenza Virologic Surveillance Right Size Roadmap: Using Alternative Data for Influenza Virologic Surveillance. http://www.aphl.org/AboutAPHL/publications/Documents/Right-Size-Roadmap-Alternative-Data_October2014.pdf. Published October 2014. Accessed October 3, 2016.
12. Shu B, Wu KH, Emery S, Villanueva J, Johnson R, Guthrie E, et al. Design and performance of the CDC real‐time reverse transcriptase PCR swine flu panel for detection of 2009 A (H1N1) pandemic influenza virus. J Clin Microbiol. 2011;49:2614–2619. [PubMed]
13. Pabbaraju K, Wong S, Lee B, Tellier R, Fonseca K, Louie M, et al. Comparison of a singleplex real‐time RT‐PCR assay and multiplex respiratory viral panel assay for detection of influenza “A” in respiratory specimens. Influenza Other Respir Viruses. 2011. Mar;5:99–103. [PubMed]
14. Popowitch EB, O'Neill SS, Miller MB. Comparison of the Biofire FilmArray RP, Genmark eSensor RVP, Luminex xTAG RVPv1, and Luminex xTAG RVP fast multiplex assays for detection of respiratory viruses. J Clin Microbiol. 2013;51:1528–1533. [PubMed]
15. Chu H, Lofgren ET, Halloran ME, Kuan PF, Hudgens M, Cole SR. Performance of rapid influenza H1N1 diagnostic tests: a meta‐analysis. Influenza Other Respir Viruses. 2012;6:80–86. [PubMed]
16. Chartrand C, Leeflang MMG, Minion J, Brewer T, Pai M. Accuracy of rapid influenza diagnostic tests: a meta‐analysis. Ann Intern Med. 2012;156:500–511. [PubMed]
17. New York State Department of Health . Bureau of Communicable Disease Control Statewide Influenza Surveillance Report for Week Ending June 7, 2014. http://www.health.ny.gov/diseases/communicable/influenza/surveillance/2013-2014/archive/2014-06-07_flu_report.pdf. Published June 14, 2014. Accessed October 3, 2016.
