ACTION Live: Using Process Evaluation to Describe Implementation of a Worksite Wellness Program

Carolyn C. Johnson, Ph.D.,1 Yen-Ling Lai, M.S.P.H.,2 Janet Rice, Ph.D.,2 Diego Rose, Ph.D.,1 and Larry S. Webber, Ph.D.2

Abstract

Objective

Process evaluation is a necessary component of randomized controlled field trials. This is a descriptive paper that reviews process evaluation for the ACTION Wellness Program for Elementary School Personnel.

Methods

Methods included self-report by participants, documentation by program staff, and school administrator report. Variables evaluated were program dose, fidelity and reach, exposure to materials and activities, and school factors that could influence program implementation and/or outcomes.

Results

Dose and exposure were high across intervention schools and intervention years. Reach was variable across schools and activities. Schools on the East Bank of the Mississippi River generally had slightly better reach than schools on the West Bank. Some nutrition activities had higher levels of participation than physical activities.

Conclusions

High program dose reflected good effort and cooperation by program staff and schools. A disconnect between exposure and reach showed that high exposure did not always translate to high participation.

The randomized controlled field trial has been accepted as the gold standard in public health research for establishing effectiveness and/or efficacy of a specified intervention upon previously specified outcomes (1). These trials have the advantages of yielding valid data and detecting small but important effects, thereby adding a high level of confidence in the efficacy or effectiveness of the intervention. Such a study allows for external validity and, if appropriately designed, controls factors that could threaten various facets of internal validity (2). Rigorous adherence to the study protocols is central to the implementation of a field trial (3).

Unexpected challenges, however, are inherent in field-based research. The potential for problems in implementing an intervention requires constant vigilance, timely identification of issues detrimental to implementation, and development of resolutions that are efficient and effective. Process evaluation, well planned a priori, can provide information that not only helps identify and resolve problems while the program is ongoing, but also can offer better understanding about how the program was implemented (4-7), help explain program outcomes (8), help engender alternatives to program design (9), and provide a foundation for program maintenance, dissemination, and generalization (10).

ACTION, a wellness program for elementary school personnel, was designed as a randomized controlled field trial for the purpose of reducing and/or preventing overweight and obesity among adult elementary school personnel (11). Process evaluation was planned prior to the development and implementation of intervention protocols and remained an integral component of the ongoing process. The purpose of this study, therefore, is to describe the process evaluation strategies for the ACTION program and to report the data defining the levels of program implementation. Inasmuch as the process data were derived only from the 10 intervention schools, the documentation will be treated as a descriptive case study using descriptive and non-inferential data.

METHODS

Study Design

A suburban school district within the Greater New Orleans area was targeted for ACTION. Twenty-two schools volunteered to participate in the 2.5-year intervention program; 10 were randomly assigned to the intervention condition (two years) and 11 to the control condition. The control schools later received an abbreviated version of the intervention after follow-up measurement for the main trial was completed. An additional school was not randomized because of low participation by school staff in the baseline measurement. Outcome measurement at baseline and follow-up was conducted in all 21 schools.

The Mississippi River bisects this school district. Schools were randomized such that half of the intervention and control schools were on the East Bank of the river and half on the West Bank of the river.

Description of the Intervention

The ACTION Wellness Program targeted eating and physical activity behaviors. Some of the interventions were aimed at altering the school environment so that the environment would support healthy eating and increased physical activity. These environmental interventions were ongoing across semesters and included: 1) an identified and marked walking trail on the school premises; 2) the inclusion of ACTION-approved healthy snacks in the on-site vending machines; 3) an innovative school policy encouraging healthy food and snacks served at teacher meetings and other school events; and 4) an identified “wellness area” that contained health-promoting printed materials, exercise videos, and small exercise equipment, such as balls, Frisbees, etc. The wellness area was self-maintained by school personnel with materials and equipment provided by ACTION staff.

Other intervention activities, specific to a particular semester, were offered at the rate of three to four per semester, with some being repeated in an additional semester. Examples of semester-specific activities were:

  1. Bingo Blastoff - the program kickoff and beginning of a 4-week challenge to complete specific nutrition and physical activity goals;
  2. Pedometer Challenge - a 4-week challenge to promote physical activity with the use of pedometers by increasing daily steps by at least 2,500 over baseline;
  3. Move More in March – an activity to encourage participants to increase the number of days they were physically active for 30 or more minutes; and
  4. Mind Your P’s and Cues – a presentation referred to as an ACTION Chat that targeted planning, portion control, pace of eating and food cues.

Activities that were specific to a particular semester, but that were repeated each intervention year, were observances of nationally-designated months, such as National American Heart Month, National Diabetes Month, and National Nutrition Month. ACTION Chats were implemented every semester, but the topics varied by semester.

Process Evaluation

A number of strategies for documenting the implementation of the intervention were developed prior to the beginning of the program. Decisions were made about which process variables would be used, the methods by which they would be documented, who would be responsible for completing the documentation, and procedures for data entry and evaluation. It was recognized that the process evaluation, of necessity occurring within only the 10 intervention schools, would provide a small sample size; therefore, statistical evaluation would be limited to univariate descriptive and non-inferential methods.

The process variables of interest were: 1) dose and fidelity, 2) exposure, 3) reach, 4) contextual factors, and 5) school factors that could potentially influence program implementation and outcomes. Dose was the amount of the activities actually implemented compared to the activities that were planned, while fidelity indicated the extent to which activities were consistent with previously developed protocols. Reach documented participation by activity and event. Exposure indicated whether participants heard about, saw, and/or read ACTION materials, activities, events, and health messages. Contextual factors were any events beyond the control of program staff that could facilitate or hinder the implementation of the program. Contextual factors could also influence secular trends, and thus the interpretation of results. Activities or events occurring in the schools that were unrelated to the ACTION intervention could influence outcomes of the trial. It was important, therefore, to document these in order to interpret results appropriately (12,13).

Measurement Instruments

End-of-Semester Survey (EOS)

The EOS was developed for the purpose of obtaining information about the reach of activities that could not be obtained via other measurements. The EOS was also the only method by which we could document any level of exposure to ACTION materials and activities. The EOS was a brief one-page self-report survey that was administered to participants in the intervention schools near the end of each of the four intervention semesters. Employees were asked whether they had heard of various ACTION activities during the past semester, and, as a check, the survey included mention of some activities that were not conducted. An example of an exposure question was, “Are you aware that ACTION Fitness Videos are available at your school for your use?” Examples of reach questions were, “How often have you used the ACTION school walking trails this semester?” and “This semester have you used any of the ACTION Fitness Videos available at your school?” Administration of the EOS usually took place during regularly scheduled within-school personnel meetings. An alternate method was distribution of surveys in personnel mailboxes and collection of completed surveys by ACTION staff at regularly scheduled pickup times and places.

Performance Documentation Form (PDF)

The PDF was a simple documentation form completed by ACTION intervention staff. While in the possession of staff, the PDFs were maintained in a log book by school and turned in weekly for data entry. The PDF recorded dose and fidelity, reach, and contextual factors. Dose was recorded as a yes or no response to whether an event was implemented, while fidelity indicated whether all of the major concepts of the event were implemented as planned. Compliance with protocol (fidelity) was assessed by entries stating “Implemented as Planned” and “Materials distributed.” If the activity could not be implemented, the reason was recorded. Staff recorded the number of participants in the activity (reach). If any event took place at the school, in the district, in the city or elsewhere that prevented implementation, or that could have a detrimental or facilitative effect on implementation, this information was also recorded on the PDF.

Annual School Health Survey

At the close of each intervention school year, the school principal (or the principal’s designee) was asked to respond to four questions concerning healthy eating and physical activity events and policies, other than the ACTION program, that took place during the past school year. The items addressed were school factors that could potentially influence ACTION program implementation and outcomes: 1) involvement of faculty/staff in any activities other than ACTION that promoted healthy eating or physical activity; 2) policy changes at the school, district, or state level that encouraged faculty/staff to eat healthy and be more physically active; 3) policy changes that may have created obstacles to engagement in healthy eating or physical activity; and 4) whether or not the school had an established wellness policy.

Results

Response rates for the EOS were calculated with personnel rosters obtained at baseline and were averaged for each school across the four intervention semesters. Response rates ranged from a low of 40.1% to a high of 63.8%. The mean school response rate over all schools and semesters was 50.8%. Half of the 10 schools had an average response rate across semesters of 50% or more, and four of those were on the East Bank. Most of the West Bank schools had lower participation in the EOS.
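To illustrate this calculation, the following is a minimal sketch, using made-up numbers rather than the study data, of averaging per-semester EOS response rates for each school; the record layout and names are assumptions for the example only.

```python
# Hypothetical sketch: mean EOS response rate per school across semesters.
# Each record holds surveys returned and the baseline roster size for one
# school in one semester; the values below are illustrative only.
from collections import defaultdict

eos_records = [
    # (school, semester, surveys_returned, roster_size_at_baseline)
    ("School A", "Spring 2007", 28, 55),
    ("School A", "Fall 2007",   24, 55),
    ("School B", "Spring 2007", 31, 60),
    ("School B", "Fall 2007",   25, 60),
]

rates_by_school = defaultdict(list)
for school, _semester, returned, roster in eos_records:
    rates_by_school[school].append(returned / roster)

for school, rates in sorted(rates_by_school.items()):
    mean_rate = sum(rates) / len(rates)
    print(f"{school}: mean response rate {mean_rate:.1%} over {len(rates)} semesters")
```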

Dose and Fidelity

Dose and fidelity were recorded by program staff using the PDF. Reports were categorized into four broad groups as shown in Table 1. These data were averaged across activities within a semester and then averaged across the two semesters (spring and fall) in each calendar year. Generally, dose was very high across schools and semesters, ranging from 79.5% to 100%. There were no clinically significant differences in dose between East Bank schools and West Bank schools or between intervention years. Dose was highest for observation of promotional materials in the schools, where, of the 20 data points, 19 were 100%.
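As a concrete illustration of this two-stage averaging, here is a minimal sketch assuming dose is recorded on the PDF as a yes/no flag for each planned activity; the data layout and numbers are illustrative, not the study’s actual records.

```python
# Hypothetical sketch: dose averaged across activities within a semester,
# then across the spring and fall semesters of a calendar year.
pdf_dose = {
    # (school, year, semester): one 1/0 flag per planned activity (implemented or not)
    ("School A", 2007, "Spring"): [1, 1, 1, 0],
    ("School A", 2007, "Fall"):   [1, 1, 1, 1],
}

def semester_dose(flags):
    """Proportion of planned activities that were actually implemented."""
    return sum(flags) / len(flags)

def yearly_dose(school, year):
    """Mean of the spring and fall semester dose values for one school."""
    values = [semester_dose(pdf_dose[(school, year, s)]) for s in ("Spring", "Fall")]
    return sum(values) / len(values)

print(f"School A, 2007 dose: {yearly_dose('School A', 2007):.1%}")  # 87.5%
```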

Table 1
Program Dose for ACTION Recorded on the Performance Documentation Form (PDF) by Year and by Intervention School

Contextual Factors

Contextual factors documented by program staff on the PDF indicated that deficits in implementation most commonly occurred through events that were beyond the control of the program staff. Factors contributing to implementation deficits in 2007 included bad weather (tornadoes on the West Bank), cancellation of group fitness classes because of instructor problems (illness, car trouble), activity space initially provided by the school that proved inappropriate (too hot, not air conditioned, too small), and standardized student testing (teachers too busy).

In 2008, the contextual factors were similar, including bad weather, group fitness instructor problems, student testing, and physical repairs to schools that interfered with assigned activity space.

Exposure

Respondents to the EOS in all schools reported a high level of awareness of the ACTION semester-specific activities (Tables 2a and 2b). Rarely did the percentage reporting exposure to the activities drop below 80%; lower exposure rates were noted for both of the ACTION Chats, the Fiber Chat and Apple Ceremony in Spring 2008 and the P’s & Cues Chat in Fall 2008. This was observed for both the East Bank and West Bank schools.

Table 2a
Exposure Self-Reported on the End-of-Semester Survey (EOS) for East Bank Schools (EB) by Semester
Table 2b
Exposure Self-Reported on the End-of-Semester Survey (EOS) for West Bank Schools (WB) By Semester

Reach

Information on program reach (participation) came from two sources, the EOS and the PDF. The EOS captured information for activities that spanned the four semesters of the intervention and that could not, by the nature of the activities, be observed by program staff. There were four such activities: the school-based walking trail, the ACTION-approved snacks in vending machines, the healthy snack bowl maintained in the personnel lounge, and fitness videos available for checkout and use from either the library or the personnel lounge. It can be seen from Table 3 that the snack bowl was the most popular of these ongoing activities as reported by EOS respondents. Participation rates were the highest for the snack bowl, ranging from a low of 61.9% to a high of 100% across semesters and across schools. On the other hand, the Fitness Videos were reported by EOS respondents as having the lowest reach among these activities, ranging from a low of 4% to a high of 43.3% across semesters and across schools.

Table 3
Self-Reported Reach (Participation) Data from the ACTION End-of-Semester (EOS) Survey by Semester and by Intervention School

Reach for the walking trail ranged from almost 26% to about 84% across semesters and across schools. The reach for the ACTION-approved snacks from vending machines was much more variable, ranging from 0% to 64.5% among EOS respondents.

Averaged across activities and semesters, the East Bank schools (51.8%) had slightly better reach than the West Bank schools (47.6%), although this difference would not be considered clinically significant. Five schools, four of which were East Bank schools, had a reach of close to 50% or more when averaged across activities and across semesters. The reach of the remaining five schools ranged from 49.2% to 39.8%.

The program staff used the PDF to document reach for the semester-specific activities (Tables 4a and 4b). In Spring 2007, Bingo Blastoff, the introductory activity for the ACTION program, was the best attended in all schools. The Pedometer Challenge led the activities in Fall 2007. In Spring 2008, two activities were well attended for the kickoff phase but participation dropped off for the sustained phase of the activities. Move More in March dropped from about 35% to almost 24% and the Fruit/Vegetable Challenge dropped from about 62% for the kickoff to about 25% for wrap-up in the East Bank schools. The same pattern was observed in the West Bank schools for these activities. Two other activities that were apparently popular during Spring 2008 in all schools were the Extreme Recipe Makeover and Spring into Action. In the Fall 2008 semester, the Pedometer Challenge, which involved competition between schools, garnered the highest participation.

Table 4a
Reach for Semester-Specific Activities for East Bank Schools (EB) by Semester and Documented by the Performance Documentation Form (PDF)
Table 4b
Reach for Semester-Specific Activities for West Bank Schools (WB) by Semester and Documented with the Performance Documentation Form (PDF)

Again, higher reach was documented for the East Bank schools than the West Bank schools in all semesters: Spring 2007 (31.1% vs 26.4%), Fall 2007 (37.2% vs 27.9%), Spring 2008 (38.1% vs 33.9%), and Fall 2008 (40.1% vs 30.0%). Participation averaged across activities did not reach 50% for any of the intervention schools. Averaged across activities and semesters, the five schools with the highest participation rates (35.0% to 39.8%) were four East Bank schools and one West Bank school. Lower reach (20.0% to 32.9%) was observed in the remaining five schools, four of which were West Bank schools.
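To make this averaging concrete, the following is a minimal sketch, again with made-up numbers rather than the study data, of averaging participation across activities and semesters for each school and then comparing the East Bank and West Bank groups.

```python
# Hypothetical sketch: reach averaged across activities and semesters,
# then compared between East Bank and West Bank schools. Values are illustrative.
reach = {
    # school: bank label plus per-activity participation proportions pooled
    # over all semesters
    "School A": {"bank": "East", "rates": [0.45, 0.38, 0.31]},
    "School B": {"bank": "East", "rates": [0.40, 0.35, 0.42]},
    "School C": {"bank": "West", "rates": [0.28, 0.33, 0.25]},
    "School D": {"bank": "West", "rates": [0.30, 0.36, 0.27]},
}

def school_mean(info):
    """Mean participation for one school across its activities and semesters."""
    return sum(info["rates"]) / len(info["rates"])

for bank in ("East", "West"):
    means = [school_mean(v) for v in reach.values() if v["bank"] == bank]
    print(f"{bank} Bank mean reach: {sum(means) / len(means):.1%}")
```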

School Factors

The Annual School Survey was administered once during each intervention year to the principal or his/her designee at each intervention school to determine whether the school had experienced any activities or events not sponsored by ACTION but similar to activities conducted within the intervention. These could include any activities related to increasing physical activity, improving nutrition, or otherwise affecting weight status. The survey also asked about policy changes at the school or district level that had the potential for a positive or negative effect on the implementation of ACTION, as well as whether the school had an established wellness policy other than ACTION.

None of the schools indicated that they had an established wellness policy in year 1, but by year 2, two East Bank schools reported putting a written wellness policy in place. In year 1, only two East Bank schools reported having related activities or events; however, by year 2 three additional schools, two East Bank and one West Bank, reported these kinds of activities (data not shown).

Policy changes with potential positive impact were reported by one East Bank school in year 1 and by two additional East Bank schools in year 2. In year 1, one East Bank school reported offering healthier snacks in the snack shop because of the state’s emphasis on healthier vending choices. In year 2, an additional two East Bank schools were offering healthier snacks.

Discussion

The ACTION Wellness Program for Elementary School Personnel was conducted in 10 intervention schools while 11 delayed-intervention schools served as comparison schools. In this case study we focused on a process evaluation plan developed prior to the development and implementation of the intervention and comprising an integral component of the overall program. This a priori plan included evaluations of: 1) program dose and fidelity; 2) exposure to program materials; 3) reach or program participation; 4) relevant contextual factors; and 5) school factors, specifically with regard to other similar school-based programs. All of these process variables had been previously recommended by Steckler and colleagues (5,12,13).

Considerations for the selected process evaluation methods were mainly low respondent burden and ease of documentation by program staff. These criteria were met with three instruments: the EOS, a one-page self-report survey administered to school personnel toward the end of each of the four intervention semesters to assess reach and exposure; the PDF, a log book format for intervention staff to document dose, reach, and contextual factors; and the annual school survey, administered through the principal’s office twice during the intervention to assess secular trends.

The dose and fidelity reported by the intervention staff were high; in other words, all intervention schools had consistently high levels of implementation when averaged across activities and semesters. Interestingly, contextual factors documented by intervention staff indicated that most implementation deficits occurred because of extraneous factors and were not due to barriers created by schools or participants. This important piece of information is indicative of the cooperation and receptivity experienced by the ACTION program staff at the school level. We would expect that extreme deficits in program implementation would potentially result in low program effects (3); on the other hand, high levels of implementation (dose and fidelity) do not necessarily translate to program effectiveness (12). Still, documenting program dose is an important first step to conducting a viable field trial, and the data documenting implementation can be obtained only by a planned process evaluation strategy.

Program effectiveness and behavior change have been related to participation (14), but, in order to participate, individuals first need to be aware of the program activities. In the EOS, the target group indicated their awareness of the activities that were conducted during each semester. The activities contained in the checklist of the EOS were conducted only in that particular semester. A large percentage of respondents indicated that they were aware of the activities, and this was consistent across schools, across semesters, and generally across activities. It was noted, however, that awareness was lower for the various ACTION Chats, which were planned presentations by program staff on a particular topic. Many of these also had low reach documented by program staff in the PDF, and, using these process data, program staff were able to make adjustments to the program presentations. It should be noted that various methods were used for assessing reach. In the PDF logs, program staff documented counts from attendance sheets (when time permitted) and head counts.

Some reach data, such as information about ongoing school-based activities, were also assessed with the EOS. The highest level of participation was observed for the snack bowl and the lowest for the fitness videos. The healthy snack bowl was filled by school personnel and made available in the school lounge on a regular basis. It is understandable that the snack bowl would be used; it was available, it was convenient, and the snacks were appealing. The fitness videos, on the other hand, were available in the wellness center, but had to be checked out and used at home. The process of using the snack bowl was more convenient and easier compared to the process for using a fitness video.

What we do not know, however, is whether healthy snacks from the snack bowl were consumed in place of less healthy snacks, or if they were being eaten in addition to less healthy snacks. Future process evaluation for nutrition intervention programs would do well to evaluate the actual use of available healthy snacks.

The data obtained via the EOS were compromised in some instances by low response rates by participants. The awareness/exposure data, as well as the reach data for ongoing activities, were based on only those participants who responded to the survey. Given that overall response rates were only about 51%, these results could be biased. It has been noted that low participation rates (reach) could result in a threat to external validity because the resulting bias could compromise the representativeness of the sample, especially without assessment of characteristics of non-respondents (15). In this case, however, it is not known if school personnel who did not complete the EOS actually did participate in activities that were self-reported on the EOS.

Reach documented by program staff in the PDF did not correspond well with the exposure data, showing that exposure does not necessarily translate to participation; in other words, participants may be aware of program activities but selectively choose whether or not to participate. As Baranowski & Stables (4) pointed out in their process evaluation of the 5-a-day projects, process indicators occasionally declined over time. We observed this for those activities that included a kickoff and a follow-up period; kickoff participation was always higher than follow-up participation.

It is difficult to explain why the schools located on the East Bank of the Mississippi River seemed to do a little better than those on the West Bank. The study field office is located on the East Bank, and it may have been easier for program staff to be available to those schools and to generate interest in the activities. Another possibility is that there seems to be more movement among school personnel on the West Bank; a more transient workforce could mean a less cohesive social structure and, consequently, less commitment to worksite programs.

The annual school survey did not indicate that there were enough other similar activities available to the target group to warrant concern about sources other than the ACTION program influencing program outcomes. On a more positive note, additional similar activities related to physical activity and dietary behavior that were not ACTION-related could be a signal for overall increased interest and generalization of the ACTION program. Two schools instituted wellness policies while ACTION was ongoing, and some of the schools effected positive policy changes during that time as well, all of which were considered positive indicators for the ACTION program.

Limitations

No process evaluation data were collected from the control schools. Although Crump et al (16) recommended using process data to address issues of recruitment and maintenance, which would have required process data collection at control schools, we chose to focus our process evaluation plan on actual program implementation. Our past experience in recruiting schools for other programs has been positive, and this was confirmed by the fact that we had no difficulty obtaining permission from 22 sites to conduct the program. Also, through past experience we were confident that we would be able to recruit the expected numbers of individuals for baseline measurement; in this study, only one school had insufficient participation at baseline.

Quite often we observed low response rates to the EOS. This highlights one of the inherent problems in process data collection. Process data are sometimes difficult to obtain with an acceptable level of reliability and validity. Program participants are not always interested in responding to multiple surveys, and recall across an entire semester may be faulty; yet, for many process variables self-administered surveys are the only methods by which process data can be obtained. Program staff documentations are not always complete, and, even when complete, could be biased toward a favorable position for those schools for which they are responsible. These limitations for process evaluation have been generally acknowledged (5,16,17).

It should also be noted that this is a descriptive study reporting the results of the ACTION process evaluation without attempting to correlate process with outcome results. The primary reason for this is that we wanted to focus on the process measurement, its results, and some of the problems encountered in measurement, rather than on whether process data helped to explain outcome data. We note that this may be a limitation; however, we felt the contribution of the study would be the recognition by other researchers of the array of process measures, as well as their benefits and problems.

Conclusions and Implications

Even given the limitations relative to process data collection, the process data collection methods and the information obtained from them have been useful in the ACTION program for multiple reasons. First, the process data informed program staff about which activities were popular and which were not well attended, allowing some activities to be adjusted and redesigned while the intervention was in progress. A second advantage, one that is not usually recognized, is that documentation of implementation maintains a high level of awareness and purpose among intervention staff for implementing activities in all locations and maintaining a high degree of fidelity to the protocols. Even allowing for some of the problems present in process data collection, and the additional time and expense involved, researchers should recognize its usefulness and apply process evaluation measures more consistently in public health research.

Acknowledgments

This report is based on a study funded by the National Heart, Lung, and Blood Institute, National Institutes of Health, Grant No. HL079509.

References

1. Green SB. The advantages of community-randomized trials for evaluating lifestyle modification. Controlled Clinical Trials. 1997;18:506–513. [PubMed]
2. Campbell DT, Stanley JC. Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin Company; 1963.
3. Dumas JE, Lynch AM, Laughlin JE, et al. Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE Prevention Trial. Am J Prev Med. 2001;20(1S):38–47. [PubMed]
4. Baranowski T, Stables G. Process evaluations of the 5-a-day projects. Health Education & Behavior. 2000;27(2):157–166. [PubMed]
5. Steckler A, Ethelbah B, Martin CJ, et al. Pathways process evaluation results: a school-based prevention trial to promote healthful diet and physical activity in American Indian third, fourth, and fifth grade students. Prev Med. 2003;37:S80–S90. [PubMed]
6. Perry CL, Sellers DE, Johnson C, et al. The Child and Adolescent Trial for Cardiovascular Health (CATCH): Intervention, implementation, and feasibility for elementary schools in the United States. Health Education & Behavior. 1997;24(6):716–735. [PubMed]
7. Helitzer DL, Davis SM, Gittelsohn J, et al. Process evaluation in a multisite, primary obesity-prevention trial in American Indian school children. Amer J Clin Nutr. 1999;69(4):816S–824S. [PubMed]
8. Pratt CC, McGuidan WM, Katzev AR. Measuring program outcomes: Using retrospective pretest methodology. Amer J Evaluation. 2000;21(3):341–349.
9. Schulz AJ, Israel BA, Lantz P. Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Evaluation and Program Planning. 2003;26:249–262.
10. Lantz PM, Viruell-Fuentes E, Israel BA, et al. Can communities and academia work together on public health research? Evaluation results from a community-based participatory research partnership in Detroit. J Urban Health: Bulletin of the New York Academy of Medicine. 2001;78(3):495–507. [PMC free article] [PubMed]
11. Webber LS, Johnson CC, Rose D, et al. Development of ACTION! Wellness program for elementary school personnel. Obesity. 2007;15(Supplement):48S–56S. [PubMed]
12. Steckler A, Linnan L, Israel BA. Process evaluation for public health interventions and research. San Francisco, CA: Jossey-Bass; 2002.
13. Young DR, Steckler A, Cohen S, et al. Process evaluation results from a school- and community-linked intervention: the Trial of Activity for Adolescent Girls (TAAG). Health Education Research. 2008. doi: 10.1093/her/cyn029. [PMC free article] [PubMed] [Cross Ref]
14. Beresford SAA, Shannon J, McLerran D, et al. Seattle 5-a-day work-site project: Process evaluation. Health Education & Behavior. 2000;27(2):213–222. [PubMed]
15. Dzewaltowski DA, Estabrooks PA, Klesges LM, Bull S, Glasgow RE. Behavior change intervention research in community settings: how generalizable are the results? Health Promotion Int. 2004;19(2):235–245. [PubMed]
16. Crump CE, Earp JL, Kozma CM, et al. Effect of organization-level variables on differential employee participation in 10 federal worksite health promotion programs. Health Education Quarterly. 1996;23(2):204–223. [PubMed]
17. Midanik LT, Polen MR, Hunkeler, et al. Methodologic issues in evaluating stop smoking programs. Amer J Public Health. 1985;75:634–638. [PubMed]