Public Health Rep. 2010; 125(Suppl 5): 33–42.
PMCID: PMC2966643

Understanding Quality: A Guide for Developers and Consumers of Public Health Emergency Preparedness Trainings

Lisle Hites, MS, MEd, PhDa,b and James Altschuld, PhDc

SYNOPSIS

The work described in this article represents two years of collaboration among 32 evaluators from 23 schools of public health involved in the Centers for Disease Control and Prevention's Centers for Public Health Preparedness program. Evaluators in public health emergency preparedness (PHEP) training were tasked with identifying what constitutes quality in PHEP training and providing guidance to practitioners in selecting training packages. The results of their deliberations included development and selection of guidelines for a high-quality course, a justification of the guidelines, and a Training Selection System (TSS) to assist in analyzing extant trainings. In this article, we present the TSS (along with explanatory notes for each of its sections), preliminary feedback from practitioners, and a discussion of next steps.

State and local public health departments are continually searching for training that builds public health emergency preparedness (PHEP) response capability. Given the scarcity of departmental resources, this training is conducted within the health department when possible. Internal development and delivery of training, however, are constrained by the resources and skills at hand, resulting in courses and educational materials of varying quality. As a result, training development and delivery are often outsourced, again with widely varying quality among the trainings received, making it difficult to select training with any assurance that it will meet local needs. This observation was made by a nationwide group of evaluators charged with assessing state public health preparedness centers funded by the Centers for Disease Control and Prevention (CDC), and it was a major concern in the group's discussions from 2005 to 2007.

Adding to the complexity of course selection for health departments is the vast number of organizations offering training materials, each difficult to differentiate from the others. For example, incident command training is offered by a variety of federal agencies (e.g., Federal Emergency Management Agency and CDC), private industry, and academic institutions, such as those within CDC's Centers for Public Health Preparedness (CPHP) program.1 PHEP courses vary widely in modality of delivery (e.g., online, live satellite broadcast, or face-to-face), level of knowledge required, length of time required, and structure. The plethora of options can be both a help and a hindrance. While it is useful for public health agencies to have a wide array of options, that array poses a challenge for training directors and others who need to quickly ascertain which courses will best help their staff members.

Thus, CDC requested the assistance of the CPHP National Collaborative Network of Public Health Evaluators (hereafter, Collaborative Group) to address the issue. Individuals in the Collaborative Group came from 23 schools of public health across the United States and included 32 evaluators with backgrounds in psychology, sociology, anthropology, education, and public health. After deliberation, the Collaborative Group decided they could best assist public health agencies by identifying the characteristics that determine quality in PHEP training courses. Understanding these characteristics would better position agencies to develop or sort through available training options. Once the Collaborative Group identified the key aspects of quality, they converted them into an instrument, or checklist, for practical use. The quality criteria, the Training Selection System (TSS), and preliminary feedback from PHEP practice partners about its perceived utility are described in this article. The ideas and concepts presented represent a consensus of the Collaborative Group and draw on its members' hundreds of years of combined experience and evaluation wisdom.

FOUNDATION FOR THE TSS

The Collaborative Group (with input from the Association of Schools of Public Health [ASPH] and CDC) began its examination of measures of quality through a process of facilitated discussion. First, they formulated and sorted many different criteria into three domains: (1) course design and structure, (2) training content, and (3) evaluation of learning. More detailed specification was then undertaken for each criterion within a domain. To keep the system of practical value, the Collaborative Group limited the number of criteria in each domain to five.

SELECTION CRITERIA: FIT VS. QUALITY

There are aspects of training that are necessary for effectiveness but do not connote quality. The depth or level of difficulty of a training is not necessarily a measure of quality. For example, all things being equal, a very high-quality training for those receiving patients with explosion and blast injuries may not be equally appropriate for emergency department nurses, trauma surgeons, and paramedics. The training is of no less quality when applied to one group vs. another; however, its appropriateness, or fit, can vary considerably. For the purposes of this guide, variables that do not directly concern quality but are essential to the effectiveness of training are referred to as “criteria of fit.” Fit is concerned with modality of delivery (e.g., online or face-to-face) and duration of training (e.g., two hours or two weeks). Such questions allow those seeking training to quickly match a course to the demographic characteristics of the target population and the resources available. In contrast, “quality criteria” include instructor traits (e.g., knowledge level or teaching/communication skills), content accuracy, clarity, and internal monitoring of implementation.

Ensuring an appropriate fit is important for training selection, as it matches a course to the specific needs of the group being trained. Quality-oriented criteria focus on intrinsic aspects of training: structure and design, content, and evaluation of learning. In this article, we discuss the fit and quality dimensions within these three domains.

Domain I: Assessing structure and design

Structure and design refer to the characteristics that make a training opportunity appropriate: the intended audience, the presence of measurable learning objectives, and competency-based development. The criteria in this domain are basically descriptive in nature.

Criterion 1: Course content is appropriate for the target audience.

Assessment questions.

  • Is a target audience identified in terms of experience and education? (Fit)
  • Does the audience match the audience for whom the training is intended, in terms of experience and education? (Fit)
  • Based on your review of the materials, does the curriculum content match the needs of the target audience? (Fit)
  • Does the curriculum include well-defined learning objectives? (Quality)
  • Do the training goals and objectives address the learners' identified training needs? (Fit)
  • Is the curriculum content directly related to the stated objectives? (Quality)
  • Is it reasonable to believe that learners will be able to perform their jobs better after the training? (Quality and Fit)

Justification/rationale.

Training is developed for a specific audience, so the group actually receiving the training should be as similar as possible to the audience for which it was designed. If the two do not match, a training program is unlikely to meet its objectives. In this regard, it is essential to consider the learners' education and experience. It is also important to look closely at learning objectives, which we define as statements of the measurable achievements that result from the learning activity. Learning objectives frame course content and communicate what skills, attitudes, and/or knowledge one should gain. Objectives facilitate a common understanding of a course.

Criterion 2: Course level is appropriate for participants.

Assessment questions.

  • What course level is appropriate for the target audience? (Fit)
  • Does the course match the desired skill level of the target audience? (Fit)

Justification/rationale.

In 2001, the Council on Linkages Between Academia and Public Health Practice divided skill development into awareness, knowledge, and advanced levels of competency.2 These development levels are important for ensuring the appropriate selection of training. Learners taught at the right level should be more engaged, better reach their potential, and be more likely to participate in future training.

Criterion 3: Course format is appropriate for the participants.

Assessment questions.

  • Which type of modality provides the most effective learning for the target audience? (Fit)
  • Does the course have supportive materials, manuals, handouts, and quizzes? (Quality)
  • What level of interactivity does your audience require for effective learning? (Fit)
  • Does this course meet the target audience's required level of interactivity? (Fit)

Justification/rationale.

Course format is primarily a criterion of fit. Every individual has a preferred learning style that aligns more closely with one delivery format than another. Relatedly, interactivity must be considered; the appropriate level depends on the type of material presented and the degree of learner involvement best suited to the course3 (Figure 1). Whenever possible, training developers/selectors should identify what is most beneficial in these two regards for the audience with which they are working. The format should be appropriate, accessible, and understandable to learners.

Figure 1.
Interactivity strategy levelsa recommended by the CPHP National Collaborative Network of Public Health Evaluators to help determine if a course format is appropriate for PHEP training participants

Another relevant issue involves supportive materials. The Collaborative Group felt that the availability of supportive materials is an indicator of course quality—the more course materials available, the more thoroughly the course has been developed.

Criterion 4: Continuing education credit is provided to meet the needs of certain public health professionals.

Assessment question.

  • Does the course offer continuing education units (CEUs)? (Fit)

Justification/rationale.

The assignment of CEUs for course completion does not necessarily ensure high course quality. However, in practice, certain sub-disciplines in public health require CEUs for continued licensure and will favor courses that do offer them. Further, many participants ascribe more legitimacy to a course with CEUs sanctioned by their profession.

Domain II: Content of the training

Assessment of training content involves answering two questions: First, are the training topic, course level, and teaching modality (e.g., compact disc, in-person, or webcast) appropriate for those to be trained? And second, are these elements being brought together in a way that facilitates meeting student learning needs?

Criterion 5: The training was developed and will be delivered by qualified content experts and is based on current evidence and good science.

Assessment question.

  • Based on the background materials provided, are the course developers and trainers competent to design and implement the training? (Quality)

Justification/rationale.

It is important to consider the organizational, agency, and/or academic settings and the credentials of the planners and instructors when evaluating this criterion.

Criterion 6: The design and delivery of the course will accomplish the training goals and objectives.

Assessment questions.

  • Is a schedule of training and educational activities included in the curriculum? (Quality)
  • Based on the curriculum and the knowledge and skill level of the learners, is the time allotted for content areas reasonable for the learning objectives to be met? (Quality)
  • Is the curriculum organized in a logical manner? (Quality)
  • Are the teaching methods appropriate for meeting the learning objectives? (Quality)
  • Will participants have access to curriculum materials or support after training is completed? (Quality)

Justification/rationale.

Criterion 6 contains the quintessential elements of quality. For a training to be effective, the time allotted to each content area or activity is critical; if it is insufficient, the learning objectives cannot reasonably be mastered during the course. Analogously, matching teaching methods to the learning objectives and ensuring the course progresses from concrete to abstract, general to specific, and simple to complex can help learners make necessary cognitive connections. Matching design and delivery to objectives is essential for a quality course.

Domain III: Evaluation of learning

Learning is paramount to the success of any training. In practice, evaluation of learning can be approached from two different perspectives: program evaluation and assessment of impact on the learner.

Criterion 7: The course evaluation includes a data-collection tool to gather information about the characteristics of the participants and the course.

Assessment question.

  • Does the course include methods to collect background information about participants, such as characteristics or previous experiences, prior to its start? (Quality)

Justification/rationale.

To be accountable to funding agencies, training packages must have provisions for obtaining demographic data describing participants and documenting what services were provided. Obtaining demographic data (such as participant gender, ethnicity, and geographic location) helps verify that the appropriate people were reached. Higher-quality course evaluations build the collection of participant and course characteristics into the delivery structure of the program, for example via Web-based forms that automatically compile the information into a report available to the funding agency.
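To make the idea of automatic compilation concrete, the following minimal Python sketch aggregates participant records (as might be captured by a Web-based registration form) into a simple demographic summary for a funder. The field names, values, and report format are hypothetical illustrations, not part of the TSS or any specific agency's reporting requirements.

    # Illustrative sketch only: field names and reporting format are hypothetical.
    from collections import Counter

    def summarize_participants(records):
        """Aggregate participant records into counts by demographic field."""
        summary = {"total_participants": len(records)}
        for field in ("gender", "ethnicity", "state"):
            summary[field] = dict(Counter(r.get(field, "unreported") for r in records))
        return summary

    registrations = [
        {"gender": "female", "ethnicity": "Hispanic", "state": "AZ"},
        {"gender": "male", "ethnicity": "non-Hispanic white", "state": "OH"},
        {"gender": "female", "ethnicity": "non-Hispanic Black", "state": "OH"},
    ]
    print(summarize_participants(registrations))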

Criterion 8: The course evaluation includes instruments that gather feedback from multiple sources to demonstrate that the course was delivered as planned and to inform decisions about course improvement.

Assessment questions.

  • Does this course contain methods (forms, tests, or observations) to collect evaluation data/information? (Quality)
  • Does the course monitor its implementation by tracking the degree to which the planned course content is actually covered? (Quality)
  • Are measures of participant satisfaction collected during the course? (Quality)
  • Does the course monitor implementation by collecting measures of participant perceptions of course quality? (Quality)
  • Does the course monitor implementation by collecting facilitator/instructor perceptions of how well the course is progressing? (Quality)
  • Are external observer perceptions of course quality collected? (Quality)

Justification/rationale.

Process data are valuable to help identify problems that might be preventing a course from being delivered as designed. These data are utilized to modify and improve the course for future presentations, facilitating ongoing refinements in course delivery and examining the extent to which learning objectives are met. Course evaluations should gather data from multiple sources (participants, facilitators, and external observers where applicable) to enhance in-depth perspectives and increase the likelihood of better solutions for course-related problems. Ideally, data should be collected in real time, during and after face-to-face and online courses, keeping in mind that the longer the break between course completion and assessment, the less accurate the assessment will be. The exception is follow-up to determine if learning is sustained over time after the course. The key is to assess process data quickly so timely feedback can be provided to improve the course the next time it is offered.

Criterion 9: The course includes assessment tools to evaluate whether the program is having its desired effect—improving participant knowledge, skills, and competencies.

Assessment questions.

  • Are measures of participant learning collected during the course? (Quality)
  • Does the course assess changes in learners' knowledge of course content with pretests and posttests? (Quality)
  • Does the course assess learners' attainment of competencies with posttests, demonstration-of-skill checklists, exercises, or other means? (Quality)
  • Are learners' attitudes about course content determined? (Quality)

Justification/rationale.

Impact evaluation examines whether the program is creating its intended outcome. Information is gathered by testing participants before and after training; comparing pre- and post-training performance indicates the improvement attributable to the training. Such measures might also come from on-the-job observations and/or simulations. Going further, the most rigorous evaluations often look at the degree to which learning is retained or, in the best-case scenario, transferred to the job six months to a year later, but this is resource intensive. To reduce costs, participants are frequently asked to report their own perceptions of increased competency. While this request is reasonable, self-report is not nearly as reliable or accurate as actual measures of knowledge or skill.
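As a minimal sketch of the pre/post comparison described here, the following Python snippet computes the average paired gain from hypothetical percent-correct scores; a real impact evaluation would of course use validated instruments and appropriate statistical tests.

    # Illustrative sketch only: scores are hypothetical percent-correct values.
    def mean_gain(pre_scores, post_scores):
        """Average paired improvement (posttest minus pretest)."""
        if len(pre_scores) != len(post_scores):
            raise ValueError("pretest and posttest scores must be paired")
        gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
        return sum(gains) / len(gains)

    pre = [55, 60, 48, 72]   # percent correct before training
    post = [78, 74, 70, 85]  # percent correct after training
    print(f"Average gain: {mean_gain(pre, post):.1f} percentage points")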

Criterion 10: The course represents a best practice.

Assessment questions.

  • Has the course been taught in other settings? (Quality)
  • If so, have the measures and results been consistent with the original offering? (Quality)
  • Has the course been taught to other audiences? (Quality)
  • If so, have measures and results been consistent with the original offering? (Quality)

Justification/rationale.

Although many courses claim to represent best practices, they often do not provide any data to substantiate that assertion. Indicators of best practice can come from feedback from previous participants, information regarding successful implementation/application of the course content to the workplace, and evidence that the course has been successfully utilized elsewhere by a different participant demographic (i.e., generalization).

Criterion 11: The course addresses external guidelines.

Assessment questions.

  • Have the competencies in the course been identified and tied to course objectives? (Fit and Quality)
  • Have Target Capabilities4 and/or Universal Tasks5 been tied to course objectives? (Fit and Quality)

Justification/rationale.

CDC views competency as a foundation for PHEP training. The goal is to tie competencies to identified jobs or roles.6 This process is facilitated by utilizing competency-based needs assessments to identify relevant training. For example, CDC requires that CPHP map bioterrorism core competencies7 to each course developed for state and local public health partners. If this is done, the end result should be a more competent public health workforce.

A second approach to connecting training to external guidelines involves the Department of Homeland Security (DHS) Target Capabilities List4 and Universal Task List.5 The Target Capabilities List consists of an evolving set of emergency response capabilities, each composed of subsets of specific tasks taken from the Universal Task List. These tasks are specific enough to be readily observed or measured. State training directors are under pressure to ensure their exercises comply with this guideline. DHS mandates that states evaluate their drills and exercises via a standardized system called the Homeland Security Exercise and Evaluation Program (HSEEP).8 It is designed to integrate observational measures that identify strengths and weaknesses specific to Target Capabilities. Given this requirement, it is important that training directors seek out courses that can fulfill the needs identified by HSEEP. As with the competency linkages, the Target Capabilities addressed in a course must be assessed. As with competency-based training development, if training is linked to desired Target Capabilities, it should build appropriate PHEP capabilities.

USING RESPONSES TO SELECT OR DESIGN A TRAINING

The TSS may be used in its entirety or broken into subsections according to the needs of the training designer/selector. However, the individual items of the TSS are not designed to be stand-alone indicators or predictors of quality. Applying the full instrument to design or selection involves working through each of the domains, criteria, and questions. Domain I is a broad determination of fit between a specific training activity and learners. “Yes” answers in Domain I are necessary before selecting a training activity. A course that does not develop learners' skills to the needed level will not advance the capability of an organization. To assess Domain II (Criteria 5 and 6), one would need “yes” answers to questions 16–19. Question 20 reflects the availability of curriculum and support to the extent needed. Domain III (Criteria 7–11) concerns participant demographics, the comprehensiveness of the evaluation, and changes in knowledge that have occurred; more “yes” responses are desirable. Replication in multiple settings (Criterion 10) demonstrates reliability and generalizability. Lastly, competency-based needs assessments and other external guidelines need to be addressed as appropriate.
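The screening logic just described can be pictured with a small Python sketch. The item wording, domain groupings, and scoring rule below are simplified, hypothetical stand-ins; the actual instrument is the checklist shown in Figure 2.

    # Illustrative sketch only: items and scoring are simplified stand-ins for the TSS.
    COURSE_RATINGS = {
        # question: (domain, "fit" or "quality", answered_yes)
        "Target audience identified by experience and education?": (1, "fit", True),
        "Curriculum includes well-defined learning objectives?": (1, "quality", True),
        "Developers/trainers are qualified content experts?": (2, "quality", True),
        "Teaching methods match the learning objectives?": (2, "quality", True),
        "Pretests and posttests assess changes in knowledge?": (3, "quality", False),
        "Replicated elsewhere with consistent results?": (3, "quality", False),
    }

    def screen_course(ratings):
        """Require 'yes' on all Domain I items; tally 'yes' quality responses overall."""
        domain1_ok = all(yes for (dom, _, yes) in ratings.values() if dom == 1)
        quality_yes = sum(yes for (_, kind, yes) in ratings.values() if kind == "quality")
        quality_total = sum(1 for (_, kind, _) in ratings.values() if kind == "quality")
        return {"meets_domain_I": domain1_ok,
                "quality_score": f"{quality_yes}/{quality_total}"}

    print(screen_course(COURSE_RATINGS))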

Training directors are encouraged to rate courses with the TSS (Figure 2). This rating will help ensure the best possible course is selected for any given need. And if an agency decides to develop its own training, the TSS provides excellent guidance for doing so.

Figure 2.
Training Selection System, designed by the CPHP National Collaborative Network of Public Health Evaluators to help state and local public health agencies make decisions about which PHEP training best meets the needs of their staffsa

FEEDBACK FROM THE FIELD: VETTING THE TSS

The Collaborative Group decided to seek feedback on the TSS's utility from practitioners at the 2006 Public Health Preparedness Summit. To this end, the instrument was introduced to practitioners along with a sample of materials from actual trainings that are available on the ASPH Collaborative Group website. Participants—representatives from local and state health departments—were asked to use the TSS to examine the training examples. Following the exercise, the participant group discussed the utility of the TSS.

The TSS's comprehensiveness was perceived as both its strength and its weakness. While there was agreement that the domains, criteria, and questions covered a vast range of fit and quality variables, this range was far more than the practitioners needed or could apply. They suggested reducing the number of items on the TSS. In response, the authors are currently examining this possibility via a weighted rating of the utility of each item.

The participant group also suggested the TSS be divided according to fit and quality questions. In this way, all courses could be scored on quality, and these ratings could be made available to those searching for courses. The fit questions would be used to filter through courses to find the best training match. Accordingly, the questions are now labeled as fit or quality indicators.

NEXT STEPS

There is additional debate as to who should rate courses and whether ratings can be carried out reliably. Training developers could rate their own courses; self-rating would be the least resource intensive, but the Collaborative Group felt such a methodology would lack validity and be biased toward the developers. Alternately, a neutral party could assess the quality of extant courses, but such a task would be costly and could create discomfort among training developers, for whom submitting to external quality assessment raises tensions over who conducts the assessment and how it should be done. Until neutral external evaluation results become available, public health agencies are encouraged to take on a scaled-down version of this task themselves by applying the TSS. The Collaborative Group hopes the TSS will help agencies select or develop training products that best meet local needs.

Acknowledgments

The authors thank their colleagues in the Centers for Public Health Preparedness (CPHP) National Collaborative Network of Public Health Evaluators (Collaborative Group), especially those who contributed to the three specific subcommittees from which much of the information in this article is drawn. Curriculum Content Subcommittee: Michael Brand—chair, Sandra Senter, and Lynn Paleo; Evaluation Subcommittee: Ralph Renger—chair, James T. Austin, Mary Davis, Alina Dorian, and Marcia Sass; Structure/Design Subcommittee: Martha Wingate—chair, Courtney Andrews, Luann D'Ambrosio, Karen Pendleton, Melanie Livet, and Ed Waltz; non-subcommittee members: Stephen Morse, Iris Smith, Silvia Rabionet, Marcia Testa, Dan Barnett, Diane Zerbe, Mark Edgar, Sheila Chauvin, Christine Siador, Colleen Monahan, Guddi Kapadia, Barry Greene, Thomas Reischl, Mary Hoeppner, Sarah Felknor, Eileen Blake, and Laura Biesiadecki (Association of Schools of Public Health [ASPH] coordinator). Author Lisle Hites served as co-chair of the Collaborative Group for the duration of the project, and author James Altschuld served on the Collaborative Group's Evaluation Subcommittee.

The authors thank Mary Hoeppner and Ralph Renger for their participation as past co-chairs of the Collaborative Group and for review of early drafts of the article.

The authors also acknowledge the 23 schools of public health represented in the Collaborative Group: Columbia University Mailman School of Public Health; Emory University Rollins School of Public Health; Harvard School of Public Health; Saint Louis University School of Public Health; State University of New York at Albany School of Public Health; Johns Hopkins Bloomberg School of Public Health; Ohio State University College of Public Health; University of Alabama at Birmingham School of Public Health; University of Arizona Mel and Enid Zuckerman College of Public Health; University of Iowa College of Public Health; University of North Carolina at Chapel Hill Gillings School of Global Public Health; University of Oklahoma Health Sciences Center College of Public Health; University of Texas Health Science Center at Houston School of Public Health; Tulane University School of Public Health and Tropical Medicine; University of California, Berkeley—School of Public Health; University of California, Los Angeles—School of Public Health; University of Illinois at Chicago School of Public Health; University of Medicine and Dentistry of New Jersey—School of Public Health; University of Michigan School of Public Health; University of Minnesota School of Public Health; University of South Carolina Arnold School of Public Health; University of Washington School of Public Health; and Yale University School of Public Health.

The authors also thank the Centers for Disease Control and Prevention for funding this work through the CPHP, and the members of the National Association of County and City Health Officials and ASPH for assisting with content development, reviewing this article, and providing general guidance and invaluable suggestions throughout its development.

REFERENCES

1. Association of Schools of Public Health, Centers for Public Health Preparedness. Resource center. [cited 2010 Jun 24]. Available from: URL: http://preparedness.asph.org/cphp/resourceCenter.cfm.
2. Council on Linkages Between Academia and Public Health Practice. Core competencies for public health professionals—current challenges and future directions. Washington: Public Health Foundation; 2005. [cited 2010 Jun 24]. Also available from: URL: http://www.phf.org/link/phsr/competency-directions.pdf.
3. Department of Justice (US), Office for Domestic Preparedness. ODP approach for blended learning. Washington: DOJ; 2003. [cited 2010 Jun 24]. Also available from: URL: http://www.homeland.ca.gov/pdf/BlendedLearning.pdf.
4. Department of Homeland Security (US). Target capabilities list: a companion to the National Preparedness Guidelines. Washington: DHS; 2007. [cited 2010 Jun 24]. Also available from: URL: http://www.fema.gov/pdf/government/training/tcl.pdf.
5. Department of Homeland Security (US), Office for Domestic Preparedness. Universal task list 2.0. Washington: DHS; 2004. [cited 2010 Jun 28]. Also available from: URL: http://www.wcdps.org/publicsafety/lib/publicsafety/documents/urbanthunder/universal_task_list_2_0.pdf.
6. Hites LS, Lafreniere AV, Wingate MS, Anderson AC, Ginter PM, Santacaterina L, et al. Expanding the public health emergency preparedness competency set to meet specialized local and evolving national needs: a needs assessment and training approach. J Public Health Manag Pract. 2007;13:497–505. [PubMed]
7. Columbia University School of Nursing, Center for Health Policy. Bioterrorism and emergency readiness: competencies for all public health workers. [cited 2010 Jun 24]. Available from: URL: http://www.nursing.columbia.edu/chp/pdfArchive/btcomps.pdf.
8. Federal Emergency Management Agency (US). Homeland Security Exercise and Evaluation Program: HSEEP mission. [cited 2010 Jun 28]. Available from: URL: https://hseep.dhs.gov/pages/1001_HSEEP7.aspx.
