Public Health Rep. 2010; 125(Suppl 5): 107–116.
PMCID: PMC2966652

A Public Health Academic-Practice Partnership to Develop Capacity for Exercise Evaluation and Improvement Planning

SYNOPSIS

In December 2006, Congress passed the Pandemic and All-Hazards Preparedness Act to improve the nation's public health preparedness and response capabilities. It includes the role of Centers for Public Health Preparedness (CPHPs) to establish a competency-based core curriculum and evaluate the impact of newly developed materials. The Heartland Center for Public Health Preparedness (HCPHP) at the Saint Louis University School of Public Health is part of the Centers for Disease Control and Prevention's national CPHP network and is engaged with state and regional partners in workforce development, preparedness planning, evaluation, and multi-year exercise and training cycles. This includes development, implementation, and evaluation of the HCPHP Exercise Evaluation Training Program to improve the competence and capacity for exercise evaluation and improvement planning. The program is designed to enhance quality improvement and performance measurement capabilities and to identify increases in workforce competence over time (maturity).

Beginning in 2000, the Heartland Center for Public Health Preparedness (HCPHP)1 at the Saint Louis University School of Public Health was funded by the Centers for Disease Control and Prevention (CDC) as a member of its national network of Centers for Public Health Preparedness (CPHPs).2 The network's goal was to strengthen the competence of public health and other first responders by linking academic and practice expertise. In collaboration with academic and practice partners in Missouri, Kansas, and Kentucky, HCPHP activities were designed to support national guidance through development of an integrated, sustained, and competency-based preparedness education and capacity improvement process.3

Since September 11, 2001, considerable federal guidance has identified frameworks for emergency response, exercise development, and evaluation to improve and expand preparedness capacity. Critical guidance for hospitals and public health agencies was provided by the U.S. Department of Homeland Security (DHS), the Federal Emergency Management Agency (FEMA), CDC, and the Office of the Assistant Secretary for Preparedness and Response.4

In December 2006, Congress passed the Pandemic and All-Hazards Preparedness Act5 to improve the nation's public health and medical emergency preparedness and response capabilities. It included the pivotal role of CPHPs to establish an academic and practitioner competency-based core curriculum and evaluate the impact of newly developed materials.6 This should also include competency-based academic programs to prepare future professionals and professional-development programs to enhance the skills of the practicing workforce in performing exercise evaluations. Both programs should be based on a common competency set and a common curricular format to ensure consistent knowledge and skill development.

HCPHP state and regional partners in Missouri, Kansas, and Kentucky are engaged in preparedness planning, evaluation, and multi-year exercise and training cycles. The Missouri Department of Health and Senior Services (MDHSS), Missouri Hospital Association (MHA), and HCPHP developed a competency-based Exercise Evaluation Training Program (EETP) to expand capacity for exercise evaluation and improvement planning (quality improvement), which contributed to a national preparedness core curriculum.7,8

ACADEMIC-PRACTICE PARTNERSHIP

Homeland Security Exercise and Evaluation Program (HSEEP) guidance emphasized the need to adequately evaluate integrated multi-year exercises9 to measure performance and enhance maturity levels for a unified response capability among health-care, public health, and other emergency response agencies.10 The HSEEP mobile training course12 and the online FEMA Independent Study (IS)-130 exercise evaluation course11 were created to provide training in exercise design, implementation, and evaluation. However, rapid development, improvement, and deployment of competent exercise evaluation teams also required support through access to onsite development programs.

In 2007, MHA, MDHSS, and HCPHP expanded collaboration to rapidly increase performance measurement and improvement planning (quality improvement) capacity.13 This need was echoed among other HCPHP state and local partners at the 2007 Heartland Region Partners' Retreat. MDHSS, MHA, and HCPHP proceeded to create the onsite EETP model to meet needs in Missouri and disseminate program use among Heartland Region partners and the CPHP network.11 The EETP logic model (Figure 1) was designed to depict the process and interventions through which the project contributes to achievement of short- and long-term objectives and, ultimately, to the support of a national performance-improvement process.

Figure 1.
Logic model for the Exercise Evaluation Training Program, designed by the Heartland Center for Public Health Preparedness to expand capacity for exercise evaluation and improvement planning

THE EETP MODEL

Instructional design

HCPHP consulted with FEMA's Emergency Management Institute (EMI) staff and received approval to modify the FEMA IS-130 online independent study course for field delivery. Therefore, EETP supports, but does not replace, FEMA and DHS courses. Knowledge gained by EETP participants is assessed through completion of the IS-130 test and receipt of a FEMA certificate. The EETP model, which includes an eight-hour core module (CM), a four-hour train-the-trainer module (TTM), and a tool kit, is designed to meet specific needs of MDHSS and MHA partners. It includes content and materials used in the FEMA and HSEEP mobile training courses and related guidance.12

As depicted in the EETP logic model (Figure 1), the program's short-term objectives were to develop an EETP that would improve participants' competence and ability to pass the FEMA IS-130 online test, as well as enhance trainers' competence to disseminate the CM within their jurisdictions. The long-term goal was to train a cadre of evaluators who could be called upon to assist in exercise evaluation within their jurisdiction, state, or region to support a multi-year improvement process. The EETP logic model (Figure 1) was used as the framework for development of the program's instructional objectives (Figure 2), content outline (Figure 3), implementation stages (Figure 4), and curriculum competencies (Figure 5). Practice partners emphasized a need for the CM to include a thorough explanation of HSEEP Exercise Evaluation Guides (EEGs) and their use, including interactive group exercises to reinforce HSEEP frameworks and methods. A simulated exercise was added, including a video used in the HSEEP mobile training course, to provide participants an opportunity to evaluate an exercise using EEGs. The TTM was developed to emphasize adult learning frameworks, principles, and delivery techniques used for dissemination of the CM.

Figure 2.
Instructional objectives of the Heartland Center for Public Health Preparedness' EETP core and train-the-trainer modules
Figure 3.
Program content of the Heartland Center for Public Health Preparedness' EETP core and train-the-trainer modules
Figure 4.
Implementation stages of the Heartland Center for Public Health Preparedness' EETP model
Figure 5.
Competencies addressed by the Heartland Center for Public Health Preparedness' EETP core and train-the-trainer modules

Program faculty included an adjunct EMI instructor, as well as emergency management and public health professionals and educators. Trainer and participant manuals and a resource CD-ROM were produced to adequately implement and evaluate program modules. Development of an EETP tool kit was a priority to enable program dissemination.

Program implementation

As shown in Figure 4, the EETP model was implemented in several stages to assure program pilot testing, improvement planning, and dissemination: (1) a CM and TTM beta test, (2) CM presentations in seven locations in Missouri, (3) a multistate CM and TTM demonstration for Heartland Region partners, and (4) an EETP tool kit for CPHP network dissemination.

The evaluation model

The scope of the EETP project evaluation model included use of program evaluation standards developed by the Joint Committee on Standards for Educational Evaluation, including utility, feasibility, propriety, and accuracy.14 The standards served as guidance to clarify the project purpose and design, as well as the application and analysis of evaluation data to determine the worth or merit of the project. Not all of the standards were applicable due to limitations of resources and time required for a more comprehensive and long-term evaluation of the program.

Purpose of the evaluation.

The purpose of the evaluation was to determine the relative short-term merits of the CM to improve participants' competence in performing as exercise evaluators. The following assessment questions were developed to design the evaluation model and data-collection methods:14–17

  • Q1: What were participants' demographic characteristics?
  • Q2: What were participants' perceived competences before and after program completion?
  • Q3: Were participants satisfied with faculty, methods, materials, and facilities used?
  • Q4: What were participants' perceptions of the application of improved competence in exercise evaluation?
  • Q5: After program completion, how were participants assigned or involved in exercise evaluation?

Identical pre- and post-program data-collection methods were used to evaluate the CM and TTM. Participant numbers and data collected were insufficient to perform adequate analysis of the TTM.

Data-collection methods.

A rolling, open-ended, continuous-improvement approach was used to evaluate findings at each stage or phase of the project. Qualitative and quantitative data-collection methods and procedures remained unchanged throughout CM implementation to enhance ability to produce inferences that adequately answered evaluation questions. The following five assessment procedures (with question code labels) were used:

  • A participant demographic assessment instrument (Q1);
  • Pre- and post-program participant competence assessment instruments (Q2);
  • A post-program process evaluation instrument (Q3);
  • A post-program participant retrospective assessment instrument (Q4); and
  • A post-program interview and collection of related documents from practice partners who coordinate exercise evaluation (Q5).

In addition, HCPHP produced post-program summary evaluation reports, as well as feedback and discussion regarding participants' performance, program quality, and project management, which were submitted for ongoing faculty and stakeholder review.

To reduce responder bias, the demographic, pre- and post-program competence assessment, and process evaluation instruments were completed anonymously, coded, and paired for analysis using each participant's entry of a four-digit code. The demographic assessments collected information regarding participants' characteristics, including their professional role and type of organization. The pre- and post-program assessments were designed to measure participants' perceived ability to perform program competencies. Each instrument contained the nine program competencies (Figure 5), measured using a four-point Likert scale for personal ability (1 = not capable, 2 = slightly capable, 3 = capable, and 4 = very capable). The post-program process instrument for assessing participants' satisfaction with program elements or features contained questions regarding faculty performance. This was measured using a four-point Likert scale (1 = poor, 2 = adequate, 3 = good, and 4 = excellent). Additional questions focused on participants' satisfaction level with program materials, content, format, and overall program satisfaction, which was measured using another four-point Likert scale (1 = very dissatisfied, 2 = dissatisfied, 3 = satisfied, and 4 = very satisfied).
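
To make the pairing step concrete, the following minimal sketch shows one way the anonymous pre- and post-program Likert responses could be matched on each participant's four-digit code and summarized. It is written in Python with pandas under assumed file and column names (pre_program.csv, post_program.csv, code, and comp1 through comp9); none of these names come from the article.

```python
import pandas as pd

# Assumed layout (not specified in the article): one row per participant,
# a "code" column holding the self-entered four-digit code, and columns
# comp1..comp9 holding 1-4 Likert ratings of perceived ability.
pre = pd.read_csv("pre_program.csv")
post = pd.read_csv("post_program.csv")

# Pair the anonymous records on the four-digit code for analysis.
paired = pre.merge(post, on="code", suffixes=("_pre", "_post"))

# Combined mean perceived-ability score for each competency, before and after.
for i in range(1, 10):
    pre_mean = paired[f"comp{i}_pre"].mean()
    post_mean = paired[f"comp{i}_post"].mean()
    print(f"Competency {i}: pre mean = {pre_mean:.1f}, post mean = {post_mean:.1f}")
```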

Survey Monkey (www.surveymonkey.com) online data-collection and analysis functions were also used. In October 2009, one year after program completion, a retrospective assessment instrument was sent to CM participants. Assessment questions with yes/no or number response choices asked if participants: (1) were involved in exercise evaluation and improvement planning, (2) applied EETP knowledge and skills gained in exercise evaluation, (3) needed further professional development, and (4) passed the FEMA IS-130 examination.

MDHSS and MHA partner interviews were conducted in October 2009 to determine the number of EETP graduates assigned as exercise evaluators. Health-care and public health partners responsible for exercise implementation and evaluation were asked for a list of EETP-assigned evaluators as an impact measure for mobilization of EETP graduates as qualified exercise evaluators.

In summary, multiple procedures and information sources were used to assess important project variables, improve consistency of findings (reliability), and improve inferences drawn from the combination of data sources (validity) to answer evaluation questions. In general, consistent stakeholder involvement was critical to address needs, assure credibility, and gain acceptance of evaluation methods and findings. Resource limitations prevented a more robust evaluation that included a range of methods to infer causation. The addition of project phases that documented participants' pre- and post-program EETP CM knowledge and performance improvement as exercise evaluators could assure a higher level of certainty regarding outcomes associated with completion of the EETP modules.

ASSESSMENT AND EVALUATION

HCPHP staff analyzed data collected to answer project evaluation questions. The pre- and post-program competence assessment data were analyzed by conducting the paired samples t-test using SPSS® 14.0 for Windows.18 Using the same software package, the demographic and program process assessment data were analyzed by running descriptive statistics (frequencies). The participant retrospective assessments were analyzed using the online Survey Monkey descriptive statistics and analysis package, and the retrospective partner interviews and e-mailed records were summarized for interpretation.
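
The paired-samples comparison was run in SPSS 14.0; as an illustration only, an equivalent analysis could be run with open-source tools. The sketch below uses SciPy's ttest_rel with the same assumed file and column names as the earlier pairing example, flagging differences at the p<0.05 level reported in the article; it is not the authors' actual code.

```python
import pandas as pd
from scipy import stats

# Rebuild the paired data set (same assumed layout as in the earlier sketch).
pre = pd.read_csv("pre_program.csv")
post = pd.read_csv("post_program.csv")
paired = pre.merge(post, on="code", suffixes=("_pre", "_post"))

# Paired-samples t-test for each of the nine competencies at the 0.05 level,
# mirroring the SPSS analysis described in the text.
for i in range(1, 10):
    t_stat, p_value = stats.ttest_rel(paired[f"comp{i}_pre"],
                                      paired[f"comp{i}_post"])
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"Competency {i}: t = {t_stat:.2f}, p = {p_value:.4f} ({verdict})")
```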

A summary of measures and findings from data analysis is provided in the next section for the CM beta test, the statewide implementation of the CM, and the retrospective assessment.

Evaluation phase 1: the CM beta test

Demographic assessment.

Demographic data collected from 48 beta-test participants indicated that 88% (n=42) were employed at a hospital or community health clinic and 13% were employed by a local (n=3) or state (n=3) health department.

Pre- and post-program competence assessment.

Figure 6 provides combined mean scores for pre- and post-program competence assessments of the 48 beta-test participants' perception of ability to perform the nine program competencies. The paired t-test was applied, and the differences in perceived ability mean scores before (range: 2.2–3.1) and after (range: 3.2–3.6) the program were statistically significant at the p<0.05 level. Competence 1 had the smallest post-program percent change (16%) in participants' perceived ability mean scores. This could be explained by the participants' high level of perceived ability entering the program. In comparison, competencies 2–9 had a higher range of post-program percent change (42%–81%) in participants' perceived ability mean scores. The inference was that CM beta-test participants reported a significant increase in perceived ability to perform all program competencies. In particular, participants perceived a higher percentage of change in ability to perform competencies 2 through 9, with an average change of 54%.
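
The article does not state how percent change was computed, but the reported values are consistent with the standard relative change in mean scores: percent change = 100 × (post-program mean − pre-program mean) / pre-program mean. For example, a competence 1 pre-program mean near 3.1 and a post-program mean near 3.6 yield roughly the 16% change reported above; this formula is an assumption offered only to aid interpretation of Figures 6 and 7.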

Figure 6.
Beta-test participants' perceived ability to perform competencies before and after completing the Heartland Center for Public Health Preparedness' EETP core module, and percent change in perceived ability

Program process evaluation.

Forty-eight participants completed a post-program process survey. A combined mean score for faculty performance was 3.5. Most participants (91%) rated the faculty as good (41%) or excellent (50%). Combined participants' mean scores for satisfaction with educational content (3.3), materials (3.3), and format (3.3) were also high. The majority of participants (86%–88%) were satisfied or very satisfied with program content, materials, and format. The overall program satisfaction mean score was 3.3; a majority of participants (86%) were satisfied (38%) or very satisfied (48%).

Conclusion.

Evidence collected from the demographic, pre- and post-program competence, and process assessment surveys supported the interpretation that implementation of the EETP CM beta test resulted in a significant increase in participants' (health-care and public health practitioners) perceived ability to perform program competencies and a high overall satisfaction rating of all program elements. After review of the beta-test evaluation report, faculty and academic-practice partners adapted the CM for statewide implementation in Missouri. Adaptations included expansion of EEG examples in the participant manual; revision of EEG use in group exercises to better reflect health-care and public health response experience; revision of group exercises to improve application of data-collection, observation, and recording techniques and to use the HSEEP EEG builder; and enhanced use of HSEEP-compliant after-action report/improvement plan formats and templates for improvement planning.19

Evaluation phase 2: the CM intrastate program

Demographic assessment.

Demographic data collected from 141 CM participants indicated that approximately 68% (n=96) of respondents were employed by a hospital or community health clinic. The remaining participants were employed by a local health department (21%; n=29) or state health department (11%; n=16).

Pre- and post-program competence assessment.

A total of 141 practitioners completed the seven CM presentations, with a range of nine to 28 attendees for each. For each of the seven presentations, the differences between pre- and post-program participant competence mean scores were statistically significant at the p<0.05 level. Figure 7 provides combined mean scores for participants' pre- and post-program competence assessments of perceived ability to perform the nine program competencies. The paired t-test was applied, and the differences in combined participants' perceived ability mean scores before (range: 1.8–2.9) and after (range: 3.1–3.5) the programs were all statistically significant at the p<0.05 level.

Figure 7.
Intrastate participants' perceived ability to perform competencies before and after completing the Heartland Center for Public Health Preparedness' EETP core module, and percent change in perceived ability

Competence 1 had the smallest post-program percent change (16%) of participants' combined perceived ability mean scores. This percent change was the same as that for competence 1 among the beta-test participants. This could be explained by the participants' high level of perceived ability entering the program. In comparison, competencies 2 through 9 had a higher range (32%–52%) of post-program percent change in participants' combined perceived ability mean scores. The inference was that participants in the intrastate CM presentations reported a significant increase in perceived ability to perform all program competencies. However, intrastate CM participants perceived a lower percentage of change in ability to perform competencies 2–9 (average of 40%) than beta-test participants (average of 54%). This difference in percent change may be explained by CM program improvements made after the beta-test version was evaluated. For example, changes were made to allow more time for participants to work with application of EEGs and use of the EEG builder, which was not in the beta test. In addition, more time was dedicated to after-action report and improvement plan construction, including model examples.

Program process evaluation.

A total of 141 participants from the seven CM programs completed the post-program process survey. A combined mean score of 3.7 for faculty performance was higher than the beta-test faculty mean score. Most participants (89%) rated the faculty as good (30%) or excellent (59%). Combined participants' mean scores for satisfaction with educational content (3.6), materials (3.6), and format (3.6) were higher than beta-test process mean scores. The majority of participants (92%–94%; higher than beta test) were satisfied or very satisfied with the program content, materials, and format. Participants' overall program satisfaction mean score was 3.6; a majority of participants (93%, higher than beta test) were satisfied (18%) or very satisfied (75%) with the overall program.

Conclusion.

Information collected from the demographic, pre- and post-program competence, and process assessment surveys supported the interpretation that the seven intrastate CM programs also produced a statistically significant increase in participants' perceived ability to perform program competencies, although the percentage change was smaller than in the beta test. The intrastate programs also received very high ratings of faculty, program elements, and overall program satisfaction.

Evaluation phase 3: retrospective assessment

Participant retrospective assessment.

In October 2009, the retrospective assessment survey was sent to 172 CM participants (beta test or intrastate) to assess if they passed the IS-130 test, applied perceived competence in exercise evaluation, and had needs for professional development. At the time of publication, 45 (26% response rate) had returned completed surveys.

Conclusion.

Although the retrospective assessment is not complete, a majority of initial responders indicated that they completed and passed the IS-130 course test (n=26, 58%); applied perceived improved competence in exercise evaluation (n=27, 60%); were involved in local, regional, and state exercise evaluation (n=29, 64%); and were interested in additional professional development in exercise evaluation and improvement planning (n=29, 64%).

Partner retrospective assessment.

MHA and MDHSS partner interviews, which were conducted one year after the CM presentations, collected evidence of EETP short-term impact. The list of EETP CM participants assigned to evaluate exercises at the local, regional, and state levels suggests a rapid increase in, and utilization of, exercise evaluation capacity in Missouri.

Conclusion.

The EETP CM participants' list was used to recruit and assign individuals and teams as evaluators, controllers, or design staff to local, regional, and statewide exercises. Examples from 2008 to 2009, which involved assignment of 45 evaluators, include the Kansas City Strategic National Stockpile/Cities Readiness Initiative Vaccination Exercise, the Missouri Communications Exercise, the St. Louis Full-Scale Bioterrorism Strategic National Stockpile Exercise, the MDHSS H1N1 Response, the Kansas City Regional Hospital Exercise, the 2008 St. Louis Hospital Exercise, and the 2009 St. Louis Area Hospital Exercise.

Analysis summary

Because TTM participant numbers and data collected were insufficient to perform adequate analysis, we focused on evaluating the CM. Multiple procedures and information sources provided evidence of the relative short-term merits of CM implementation for public health and health-care participants. These included a statistically significant increase in CM participants' perceived competence to perform as exercise evaluators and very high satisfaction ratings of CM faculty, all program elements, and the overall program. Post-program interviews provided data confirming that CM participants were being utilized as exercise evaluators throughout Missouri. In summary, data supported inferences that answered the project evaluation questions, including participants' demographic characteristics (Q1), post-program perceived competence improvement (Q2), satisfaction with program elements and methods (Q3), perception of post-program application of improved competence (Q4), and assignment as exercise evaluators in Missouri (Q5).

Resource limitations prevented implementation of a more robust evaluation model that would address more elements in the logic model (Figure 1). EETP worth and merit could be further assessed through additional participant pre- and post-program cognitive (in addition to the IS-130 test) and performance evaluation. Over time, trainers' observations of exercise evaluators' performances would measure performance maturity levels as part of multi-year education, exercise, and quality improvement planning cycles.

DISCUSSION

In 2007, MDHSS, MHA, and HCPHP expanded collaboration to support development of the EETP model, a workforce improvement program to rapidly increase competence and capacity of exercise evaluators in Missouri and the Heartland Region. The project is being implemented in several stages: a beta test of the CM and TTM, implementation of the CM across the state of Missouri, and a presentation of the CM and TTM for Kansas and Kentucky partners. Evaluation of the EETP CM provided evidence of participant perceived improved competence (increased capacity) to perform as exercise evaluators throughout Missouri. Overall project activities supported existing FEMA and HSEEP guidance in exercise evaluation and capability improvement, as well as development of a competency-based preparedness core curriculum.

A review of challenges and lessons learned during implementation of this project may be of use to others involved in developing curricula for preparedness and response capacity improvement.

Challenges

Common challenges arise when implementing preparedness and response workforce development programs. The demand for competent exercise evaluators required partners to accelerate all implementation stages. High interest in registration for onsite programs offered at convenient locations can be dramatically curtailed by recurrent and high-consequence emergencies, such as the 2009 H1N1 pandemic.

Availability of program faculty who possessed in-depth understanding of federal guidance regarding exercise design and evaluation was crucial. Faculty were able to quickly revise and adapt curriculum based on participants' needs and competence levels, group dynamics, interpretation of new guidance, participant feedback, and evaluation reports.

Retrospective assessments did provide evidence that EETP CM participants were assigned by MDHSS and MHA as evaluators in many regional and state exercises, although resource limitations prevented measurement of evaluators' performance levels.

Lessons learned

Because group discussion, interactive exercises, and hands-on practice are important to improve performance of exercise evaluators, offering onsite programs, such as the EETP CM, is essential to support online courses such as IS-130. Additional support for post-program evaluation of participants' performance as exercise evaluators would provide a higher level of certainty about program impact and outcomes and would strengthen inferences of causation.

Dissemination of this program is critical if momentum gained in Missouri and partner states is to be replicated. Continued support for implementation and evaluation of the TTM and EETP tool kit is essential to disseminate the EETP model through the CPHP network.

CONCLUSION

CPHP academic and practice partners have the expertise and ability to sustain collaboration to implement and evaluate projects such as the EETP. State and local practice partners are eager to expand collaboration to continue to assure a systemic and integrated approach to preparedness and response workforce development.10 With sustained resources, the CPHP network and practice partnerships will be able to assure a national, competency-based workforce preparedness core curriculum to support federal guidance, such as the Pandemic and All-Hazards Preparedness Act,6 the National Public Health Strategy for Terrorism Preparedness and Response,3 and the National Health Security Strategy.20

REFERENCES

1. The Heartland Center for Public Health Preparedness. About HCPHP. [cited 2009 Oct 30]. Available from: URL: http://www.heartlandcenters.slu.edu/hcphp/hcphpabout.htm.
2. Office of Public Health Preparedness and Response, Centers for Disease Control and Prevention (US) Centers for Public Health Preparedness. [cited 2009 Oct 20]. Available from: URL: http://www.bt.cdc.gov/cdcpreparedness/cphp/index.asp.
3. Agency for Toxic Substances and Disease Registry Centers for Disease Control and Prevention (US) A national public health strategy for terrorism preparedness and response 2003–2008. Atlanta: CDC; 2004.
4. Office of the Secretary, Department of Health and Human Services (US) National Bioterrorism Hospital Preparedness Program. Catalog of Federal Domestic Assistance No. 93.889. [cited 2009 Oct 20]. Also available from: URL: https://www.cfda.gov/?s=program&mode=form&tab=step1&id=7650bc386408f85ba480e28470093053.
5. Pandemic and All-Hazards Preparedness Act. Pub. L. No. 109-417, 120 Stat. 2831 (Dec. 19, 2006).
6. Office of the Assistant Secretary for Preparedness and Response Department of Health and Human Services (US) Pandemic and All-Hazards Preparedness Act (Public Law 109-417) Progress Report. Washington: HHS; 2007.
7. Miner KR, Childers WK, Alperin M, Cioffi J, Hunt N. The MACH Model: from competencies to instruction and performance of the public health workforce. Public Health Rep. 2005;120(Suppl 1):9–15.
8. Klein KR, Brandenburg DC, Atas JG, Maher A. The use of trained observers as an evaluation tool for a multi-hospital bioterrorism exercise. Prehosp Disaster Med. 2005;20:159–63.
9. Gebbie KM, Valas J, Merrill J, Morse S. Role of exercises and drills in the evaluation of public health in emergency response. Prehosp Disaster Med. 2006;21:173–82.
10. Lurie N, Wasserman J, Nelson CD. Public health preparedness: evolution or revolution? Health Aff (Millwood) 2006;25:935–45.
11. Emergency Management Institute, Federal Emergency Management Agency, Department of Homeland Security (US) IS-130 exercise evaluation and improvement planning. [cited 2009 Oct 30]. Available from: URL: http://training.fema.gov/EMIWeb/IS/IS130.asp.
12. Department of Homeland Security (US) Homeland Security Exercise and Evaluation Program Training Course. [cited 2009 Oct 20]. Available from: URL: http://hseeptraining.com.
13. Estrada LC, Fraser MR, Cioffi JP, Sesker D, Walkner L, Brand MW, et al. Partnering for preparedness: the project public health ready experience. Public Health Rep. 2005;120(Suppl 1):69–75.
14. Joint Committee on Standards for Educational Evaluation. The program evaluation standards: how to assess evaluations of educational programs. 2nd ed. Thousand Oaks (CA): Sage Publications; 1999.
15. Nelson C, Lurie N, Wasserman J. Assessing public health emergency preparedness: concepts, tools, and challenges. Annu Rev Public Health. 2007;28:1–18.
16. Davidson EJ. Evaluation methodology basics: the nuts and bolts of sound evaluation. Thousand Oaks (CA): Sage Publications; 2005.
17. Martineau J, Hannum K. Evaluating the impact of leadership development: a professional guide. Greensboro (NC): Center for Creative Leadership; 2004.
18. SPSS Inc. SPSS®: Version 14.0 for Windows. Chicago: SPSS, Inc.; 2005.
19. Department of Homeland Security (US) Target Capabilities List: a companion to the National Preparedness Guidelines. Washington: DHS; 2007.
20. Department of Health and Human Services (US) National health security strategy of the United States of America. Washington: HHS; 2009.
