Health Promot Pract. Author manuscript; available in PMC 2013 May 6.
PMCID: PMC3645265; NIHMSID: NIHMS456620

Community-Based Participatory Evaluation: The Healthy Start Approach

Abstract

The use of community-based participatory research has gained momentum as a viable approach to academic and community engagement for research over the past 20 years. This article discusses an approach for extending the process with an emphasis on evaluation of a community partnership–driven initiative and thus advances the concept of conducting community-based participatory evaluation (CBPE) through a model used by the Healthy Start project of the Augusta Partnership for Children, Inc., in Augusta, Georgia. Application of the CBPE approach advances the importance of bilateral engagements with consumers and academic evaluators. The CBPE model shows promise as a reliable and credible evaluation approach for community-level assessment of health promotion programs.

Keywords: community-based, community engagement, partnership development, bilateral, program evaluation

INTRODUCTION AND BACKGROUND

In the past four decades, health practitioners have turned increasingly to data-driven decision-making processes (Judd, Frankish, & Moulton, 2001) to more effectively manage health and human service programs in both nonprofit and for-profit sectors. Evaluation scientists, in particular, have led the way, with an emphasis on program accountability, service assessment, outcome documentation, and behavioral measurement (Windsor, Clark, Boyd, & Goodman, 2004). Consequently, as early as the 1960s, many foundations, government officials, and groups with a special interest in the alleviation of compelling social problems began to look at their return on investment in social programs. Hence, program organizers, practitioners, and researchers alike continue to invest considerable energy in identifying methodologies designed to answer evaluation questions, for example: What programs are yielding their intended outcomes? To what degree are they reaching their intended audience? How much does the program cost compared with intervention alternatives? What level of service (dosage) is necessary to attain desired behavioral effects? Are the benefits of the program worthy of the investment? These are not new questions, but they continue to push the health education field to validate effective programming prior to widespread adoption of interventions (Rossi & Freeman, 1993).

Behavioral scientists and public health practitioners continue to differentiate research from evaluation, and efficacy from effectiveness. On one hand, research is typically defined as an activity designed to answer specific scientific questions or test hypotheses, with the purpose of generating new knowledge that contributes to the scientific literature (Boulmetis & Dutwin, 2005; Windsor et al., 2004); on the other, evaluation serves the purpose of assessing program functionality with regard to the nature, scope, goal attainment, and level of participation in an intervention (Centers for Disease Control and Prevention [CDC], 1999). Community-level evaluations typically require no institutional review board (IRB) approval, whereas under a research paradigm IRB approval is a must. Thus, methodologies for conducting program evaluations evolved, and evaluators increasingly implemented more sophisticated evaluation designs for answering pertinent questions about the effectiveness of health and human services programs. Innovation in program evaluation conducted by university faculty with community-based organizations (CBOs) requires debunking the “Helicopter Evaluator.”

The “Helicopter Evaluator” is typically a university faculty researcher who lands in a community to conceptualize, design, and collect data, then takes off to publish papers for his or her own professional advancement. This exploitative model can be debunked through genuine partnership grounded in equality between the university faculty and the community constituents. Equality in this context translates into shared resources and shared budgetary control between the entities (university and CBOs) when programs are externally funded.

During the 1970s, the evaluation industry expanded rapidly, producing a plethora of evaluation models, including transactional, discrepancy, goal-based or goal-attainment, and goal-free models of evaluation (Boulmetis & Dutwin, 2005). Concomitantly, another transition emerged, with greater emphasis on community involvement and the institutionalization of community-based programs for health education and health promotion (Minkler, Blackwell, Thompson, & Tamir, 2003). Community stakeholders began to participate in evaluation practice by helping shape the planning and decision-making processes while interventions were being planned on the front end. Community stakeholders were also involved in performing evaluations that covered the full course of services throughout a program’s life, often aggregating data over a number of years, to determine the program’s effectiveness.

Integration of Community With Evaluation Processes

Several approaches to integrating local communities into the evaluation process have been employed by avant-garde evaluators. These include the use of community health workers (promotoras), who add to their community outreach role the task of collecting health data with preprogrammed handheld personal digital assistants (PDAs). These indigenous residents are able to leverage their trusted relationships with fellow consumers to gain informed consent and to collect sensitive information from other local residents, thus making the reliability and validity of collected data more robust. The use of mobile communications, such as PDAs and mobile phones, for health services and information can serve as an integrative method: short message service (SMS) campaigns can be established as either one-way alerts or interactive tools for health-related education, information, and communication. For example, survey data can be collected; individuals can be quizzed on their knowledge of sexually transmitted diseases and receive information on the nearest health clinic for free testing, information, and services; or health education alerts can be texted and programmed to be received by a population group enrolled in a telephone protocol. These methods work especially well for teens at the forefront of the texting revolution.
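
To make the interactive SMS protocol concrete, the following is a minimal sketch in Python of a two-way quiz flow of the kind described above. It is illustrative only: send_sms is a hypothetical stand-in for whatever SMS gateway a program actually uses, and the questions, phone number, and clinic text are invented.

    # Minimal sketch of a two-way SMS quiz protocol (illustrative).
    # `send_sms` is a hypothetical placeholder for a real SMS gateway call.

    QUIZ = [
        ("Q1: Can chlamydia be cured with antibiotics? Reply YES or NO.", "YES"),
        ("Q2: Can you have an STD with no symptoms? Reply YES or NO.", "YES"),
    ]
    CLINIC_INFO = "Free, confidential testing: Neighborhood Health Clinic, 123 Main St."

    def send_sms(phone: str, text: str) -> None:
        """Placeholder for the program's SMS gateway (an assumption, not a real API)."""
        print(f"-> {phone}: {text}")

    def handle_reply(phone: str, state: dict, reply: str) -> None:
        """Advance one enrolled participant through the quiz protocol."""
        _, answer = QUIZ[state["step"]]
        if reply.strip().upper() == answer:
            state["correct"] += 1
        state["step"] += 1
        if state["step"] < len(QUIZ):
            send_sms(phone, QUIZ[state["step"]][0])
        else:
            send_sms(phone, f"You got {state['correct']}/{len(QUIZ)} right. {CLINIC_INFO}")

    # Enrollment: each opted-in participant receives the first question.
    participants = {"555-0100": {"step": 0, "correct": 0}}
    for phone in participants:
        send_sms(phone, QUIZ[0][0])

    # In a live deployment, replies would arrive via the gateway; one is simulated here.
    handle_reply("555-0100", participants["555-0100"], "yes")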

Second, the use of community coalitions to serve as the community “watchdog” for those wanting to conduct evaluation at the community level has given rise to a new movement of community-level IRBs. The CDC-funded Prevention Research Center at Morehouse School of Medicine functions in this manner: the community coalition reviews all evaluation projects proposed by academicians for ethical considerations before implementation can begin.

Third, the most significant method for integration of community members into the evaluation process is through job creation. Training local residents as data collectors, interviewers, and so on can be a powerful force in demonstrating goodwill and community engagement with evaluation processes.

Innovative Planning Models

An innovative planning model used by community engagement strategists is the “common market” concept. As early as the 1960s, common market approaches were used by local community members to develop food cooperatives (co-ops). Food co-ops are committed to consumer education, product quality, and member control. The concept has also been explored by faith community ministries, where skilled and unskilled talents are traded among church members. Trading expertise within a community is a novel method of community engagement; for example, one could trade carpentry services for accounting services. The constellation of traded services can be a dynamic and meaningful strategy for community consumers to begin sharing resources at a reduced or nominal cost. These common market agreements and interactions can readily be evaluated for consumer satisfaction.

Another innovative planning tool used by health planners as a strategy for engaging community constituents is the geographic information system (GIS). A GIS is any information system that integrates, stores, edits, analyzes, shares, and displays geographic map information to inform decision making. GIS applications are tools that allow users to create interactive queries (user-created searches), analyze spatial information, edit data, create maps, and present the results of all these operations. We have found that community residents like this mapping approach as a way of depicting their neighborhood and showing areas that they are familiar with from a geographical perspective. GIS applications at the community level can be evaluated as part of a community engagement measure of citizen participation in the mapping process.
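
As an illustration of the kind of community mapping query a GIS supports, here is a minimal sketch assuming a Python environment with the geopandas library (and matplotlib for plotting) installed; the file name, column name, and threshold are hypothetical.

    # Sketch of a community mapping query with geopandas (assumed installed).
    # "neighborhoods.geojson" and the clinics_per_10k column are hypothetical.
    import geopandas as gpd

    # Load neighborhood boundaries with attached indicator data.
    neighborhoods = gpd.read_file("neighborhoods.geojson")

    # User-created query: which neighborhoods fall below a service threshold?
    underserved = neighborhoods[neighborhoods["clinics_per_10k"] < 1.0]

    # Draw a map residents can recognize: all neighborhoods in gray,
    # underserved ones highlighted.
    ax = neighborhoods.plot(color="lightgray", edgecolor="white")
    underserved.plot(ax=ax, color="crimson")
    ax.set_title("Neighborhoods with fewer than 1 clinic per 10,000 residents")
    ax.figure.savefig("underserved_map.png")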

Documenting and Evaluating

Aspects of community partnership and community engagement have been approached by other community-oriented evaluators. The methods typically involve attendance records and quantitative reviews of meeting minutes to assess the level of citizen engagement. We contend that a more robust assessment of community partnership and community engagement can be obtained from outcomes such as partnership sustainability after funding ceases, the extent of consumer participation in authorship of manuscripts and related products coming from the project, and the extent of volunteer or staff spin-offs to other community programs. These outcomes can be measured as proxy variables for citizen participation and community engagement.
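
As a brief sketch of how the three proxy outcomes named above might be tallied, the following Python fragment uses a record structure and field names that are hypothetical, not an instrument from the APC project.

    # Sketch of tallying proxy outcomes for community engagement.
    # The record structure and field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ProjectRecord:
        months_sustained_after_funding: int  # partnership sustainability
        total_authors: int
        community_authors: int               # consumer participation in authorship
        spin_off_programs: int               # volunteer/staff spin-offs

    def engagement_proxies(rec: ProjectRecord) -> dict:
        """Summarize the three proxy outcomes for citizen participation."""
        return {
            "sustained_after_funding": rec.months_sustained_after_funding > 0,
            "community_author_share": rec.community_authors / max(rec.total_authors, 1),
            "spin_offs": rec.spin_off_programs,
        }

    print(engagement_proxies(ProjectRecord(18, 6, 2, 3)))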

Grounded Theory and the Emergence of Qualitative Methods

Many behavioral scientists have argued that too much experimental research has been done without the benefit of accurate description of study variables. The tension between qualitatively and quantitatively oriented investigators has been tempered by the increasing acceptance of mixed models. Grounded theory approaches have gained acceptance even among National Institutes of Health grant review committees. The use of ethnographic methods, with their anthropologic roots, has demonstrated the importance of context before wholesale experimentation. Moreover, grounded theory has fortified the importance of triangulation when assessing the credibility and confirmability of a study design.

Four types of triangulation have been discussed in the methodological literature (Denzin, 1978, pp. 291–307; Patton, 1990, pp. 186–189; Yin, 2003, pp. 97–99): (a) data triangulation, (b) investigator triangulation, (c) theory triangulation, and (d) methodological triangulation. Data triangulation refers to using a variety of data sources instead of relying on a single source. Investigator triangulation means employing more than one researcher, constituting a research team to balance predispositions. Theory triangulation aims at bringing multiple perspectives to bear on the data set to yield different explanations that can be pursued and tested. Methodological triangulation combines different methods to study a problem, a case, or a program. Studies that use only one method are subject to biases linked with that particular method. For example, a combination of interview, observation, and archival research can reduce possible distortions or misrepresentations.

METHOD

Participatory evaluation at the community level is a relatively new concept that refers to a process in which individuals in communities, program managers, program personnel, and other decision makers perform an integral function in the design, coordination, and execution of evaluation activities. Working side by side with community stakeholders in the evaluation, the professional evaluators recognized that the community residents’ knowledge of the community improved the relevance of the evaluation process. This knowledge, through lay-citizen involvement, framed the decision-making questions that the evaluation plan sought to answer. Involving community stakeholders helps identify appropriate measurement strategies and execute evaluation procedures at the community level in a way that only endemic individuals can fully appreciate.

Description of the Augusta Partnership for Children

The Augusta Partnership for Children, Inc. (APC) is a 501(c)(3) nonprofit collaborative that partners with agencies, organizations, and individuals to improve the lives of children and families in Augusta, Georgia. The collaborative represents businesses; government; education; health care, faith-based, social service, and youth organizations; and private citizens dedicated to promoting the overall health and well-being of its citizens. The mission of the APC is to develop and sustain partnerships that provide services to improve the lives of children and families; and the vision is for the children in the local community to have the tools and support to become healthy, educated, and responsible adults.

Through the Healthy Start collaborative, facilitated by the APC, the aim is to match social services with those who need them and to provide client services where none exist by addressing specific issues identified by clients and case managers. A multitude of data about educational, sociocultural, health, and psychosocial issues are collected and analyzed for many of the APC clients to develop culturally centered and tailored interventions. Moreover, services for clients are strategically coordinated through a referral system that reinforces and supports the programs of myriad partners and collaborators. In partnership with collaborating agencies, the APC has emerged as a lead organization in the local community by addressing issues designed to improve the lives of children and families and by strengthening the community through collaboration and the sustainability of health promotion programs. Some of the primary areas in which the APC matches individuals and families to available social services are aimed at improving child health, child development, school success, family functioning, and family economic capacity.

Community-Based Participatory Evaluation Employed by the APC

Community-based participatory evaluation, as employed by the APC, is based on the assumption that community members know better than anyone else the operational boundaries within which they can function given the overall limitations of the evaluation process. The flow of participatory evaluation begins and ends with the incorporation of the community into the design, implementation, and review of assessment procedures. Paramount to the community’s involvement is the recognition of a community intelligence and cultural context that permeates the practice of this participatory approach. The APC participatory evaluation approach moves through nine developmental stages, depicted in Figure 1:

1. Recruit committee members (consumers and evaluation specialists) early in the evaluation process.
2. Facilitate consumer involvement through an initial orientation to the evaluation process.
3. Encourage win-win dynamics among stakeholders.
4. Work with the community to articulate program aims, goals, and objectives.
5. Use bilateral design, selection, or modification of assessment instruments.
6. Pilot test assessment instruments.
7. Finalize instruments and collect data.
8. Convene the evaluation committee on an ongoing basis and review evaluation results.
9. Use evaluation feedback to make decisions about program improvement.

FIGURE 1
Flow of the Community-Based Participatory Evaluation Process
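
For partnerships that want to track this flow programmatically, the nine stages can be modeled as an ordered checklist. The Python sketch below follows the stage names from Figure 1; the tracking structure itself is an illustrative assumption, not part of the APC model.

    # Sketch of the nine-stage CBPE flow as a sequential checklist.
    CBPE_STAGES = [
        "Recruit committee members (consumers and evaluation specialists)",
        "Orient consumers to the evaluation process",
        "Encourage win-win dynamics among stakeholders",
        "Articulate program aims, goals, and objectives with the community",
        "Bilaterally design, select, or modify assessment instruments",
        "Pilot test assessment instruments",
        "Finalize instruments and collect data",
        "Convene the evaluation committee and review results (ongoing)",
        "Use evaluation feedback for program improvement decisions",
    ]

    def next_stage(completed: set) -> str:
        """Return the first stage not yet completed ("" when all are done).

        The flow is sequential, so earlier stages gate later ones."""
        for i, stage in enumerate(CBPE_STAGES):
            if i not in completed:
                return stage
        return ""

    done = {0, 1, 2}  # stages 1-3 complete
    print("Next stage:", next_stage(done))  # -> articulate aims, goals, objectives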

Role of Academic Faculty as External Evaluators

University faculty have historically occupied a unique position to take on responsibility for conducting evaluation, given their training in scientific methods for research studies. They are able to transfer this knowledge to the evaluation of community programs, and faculty who can foster a partnership with community members are well positioned to implement a participatory genre of evaluation. This new approach to evaluation meant that a new breed of academic professionals had to learn ways to develop relationships with community members and to share equally in the evaluation process. This was no easy task. The role of the external evaluator is to facilitate and encourage participation from everyone, clarify doubts, ensure understanding, share information and skills, and sometimes manage disputes (Crishna, 2006). The process of faculty forming a healthy and functional relationship with community members is both an art and a science.

Role of Community-Based Organizations

CBOs occupy a unique position with regard to their access to underserved populations who have inimitable health needs that could benefit from prevention services. Very often the individuals who are intimately involved in the operation of a CBO have close ties with community members, including community leaders. Most important, they have the advantage of recognition as a credible entity with the interest of the community at heart. This was the case for the APC. Consequently, the APC’s interface with the community enabled access to community residents in a way that academicians, government officials, and other professionals often find daunting. Typically, CBO representatives are multitalented. They are “code switchers” who can communicate well in the “boardroom” or at the community stakeholder level. At the same time, they understand the day-to-day struggles and challenges that many community residents encounter as they attempt to overcome the difficulties of urban life: limited job opportunities, substandard housing, diminished access to health services, out-of-control teens, inadequate educational resources, and so on.

Frequently, the community stakeholder’s role in the process is to assist and fine-tune the evaluation process, beginning with the identification of the direction for the evaluation. This role is most effective when the dialogue between evaluators and community residents is based on mutual respect and a mutual regard for the knowledge, skills, and talents of each party.

Use of Evaluation Committee

A participatory evaluation committee requires a clear structure and operational process to function most effectively. This can begin with thorough training and orientation in the participatory evaluation process. Without this foundation, members of the evaluation committee may waver from the group’s agreed purpose before embracing its collective aims. A sense of purposelessness may detract from members’ participation and their motivation to continue their involvement.

Committee members accept their role as guiding the direction and nature of the evaluation. Academic scientists have reported that power differentials among committee members can be a difficult problem to overcome. Power differences are often a barrier to participation in the evaluation: some committee members may speak freely, whereas others are reticent (Holte-McKenzie, Forde, & Theobald, 2006). A professional evaluator who seeks to optimize the group members’ value in a participatory evaluation will solicit their involvement in a nonpaternalistic manner. Where scientific considerations such as randomization are paramount, the evaluator is obligated to explain in lay vocabulary what these elements mean, how they operate, and why they are important.

Bilateral Discussion of Program Goals/Anticipated Outcomes

The most meaningful community programs tap into residents’ interests, aspirations, and priority concerns. The first step in establishing an evaluation plan based on community priorities is to gather appropriate data from various sources that can be used to cross-verify salient health and social concerns. A community needs assessment helps objectify the goal-setting and evaluation-planning process by providing a foundation for documenting what priority issues exist in the community, how severe they are, and who is most affected in the identified population. Community residents can use data sources such as archival information, community surveys, observational data, windshield surveys, key informant interviews, and focus groups to identify community issues (Wallerstein, 2000).

The discussion about the direction of the program and the anticipated outcomes is a give-and-take process. In a participatory evaluation process, community residents voice their opinions about what changes they would like to see. Program evaluators recommend specific intervention strategies that are likely to achieve these changes. Preferably, these intervention methods would be evidence or science based.

Bilateral Development of Logic Models

Similarly, the various stakeholders engage in a two-way discussion about how the various available resources fit together to support the operation of a program. They work together to depict, in graphic form, a logic model showing the relationships among the variable domains and decision-making bodies (Keller, Schaffer, Lia-Hoagberg, & Strohschein, 2002). Community stakeholders work with the evaluator and program staff to allocate the activities that program personnel and other partners will perform. They further work together to identify the intermediate and long-term outcomes of the program (Fielden et al., 2007). Generating the salient evaluation objectives and questions is likewise a bilateral undertaking.
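
One way to make the shared logic model concrete is to record it as a simple data structure that all stakeholders can review and amend. The Python sketch below uses the standard inputs-activities-outputs-outcomes template; the example entries are hypothetical, not the APC’s actual model.

    # Sketch of a logic model as a reviewable data structure.
    # Domain names follow the common template; entries are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        inputs: list = field(default_factory=list)      # resources committed
        activities: list = field(default_factory=list)  # who will do what
        outputs: list = field(default_factory=list)     # direct products
        intermediate_outcomes: list = field(default_factory=list)
        long_term_outcomes: list = field(default_factory=list)

    model = LogicModel(
        inputs=["Healthy Start staff", "partner agencies", "grant funding"],
        activities=["case management", "referral coordination", "health education"],
        outputs=["families enrolled", "referrals completed"],
        intermediate_outcomes=["improved service uptake"],
        long_term_outcomes=["improved child health and family functioning"],
    )
    print(model.activities)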

Bilateral Development of Instruments

Another important role that the stakeholders can play in the participatory evaluation process is to help identify relevant questions, instruments, and assessment techniques. Community residents and other stakeholders can assist in creating questions and question domains that can reveal information about the program that would not be readily apparent without community resident input. Residents assist in generating critical questions that help shape the discussion and influence the decision-making process for the program and the ultimate program evaluation design.

Depending on the nature of the evaluation, community stakeholders and residents often suggest specific questions that they would like to pursue. This is an opportunity to establish key questions that the community stakeholders would like to address as priority health issues in the community. Sometimes it is necessary for community residents to test out the language and the method of administration of a verbal assessment strategy through a small pilot, so that it fits within the culture of the community from which the respondent will come. Community stakeholders may have suggestions for alternative wording or they may point out unanticipated interpretations of various phraseology.

Bilateral Participation in Dissemination Activities

By the time the evaluation committee members are ready to share evaluation findings with stakeholders, all members should be so familiar with the findings that they can present them in their own words. Sharing in the presentation of results can be a worthwhile way of empowering committee members. First, it communicates that the program organizers and professional evaluators are serious about engaging in mutual sharing of formal power roles. Having community residents or other endemic CBO stakeholders participate in the development of PowerPoint presentations of evaluation findings sends a message that this is truly a team effort. It also obligates committee members to be knowledgeable and prepared to answer questions about the evaluation process and findings.

Another important issue is that of authorship. Ideally, committee participants should determine the order of authors on published documents. Community members should coauthor all manuscripts for publication and serve as first authors a proportionate share of the time.

DISCUSSION

It may take a few more years before evaluators, CBO personnel, and community residents learn to take full advantage of a participatory evaluation model. Each of these parties will benefit from training in the use of participatory evaluation procedures. Deployment of this model of data collection, interpretation, and dissemination must become a capacity-building priority for participatory evaluation to reach its full potential. Nevertheless, moving the program evaluation field toward greater and more meaningful community engagement will advance the health promotion field and understanding among evaluators and practitioners while building community-based evaluation capacity. Participatory evaluation will upgrade the rigor and quality of evaluation beyond the visiting-expert (helicopter evaluator) and “black box” ways of thinking about evaluation.

The contributions of participatory evaluation committee members make clear that their input can help objectify a logic model, formulate a relevant evaluation design, or clarify hidden factors that may have contributed to the success or limitations of an otherwise science-based intervention. Committee members who are most in touch with the community challenges that dominate health concerns in underserved communities can be a valuable resource for contextual data interpretation and application. Another key role they can play is documenting the level of service delivery, its appropriateness, its correspondence with a science-based protocol, and, where necessary, its fidelity.

CONCLUSION

It would be a travesty to conduct a thorough, credible program evaluation only to find that it is useless to decision makers. The ultimate purpose in conducting an evaluation is to incorporate the findings into an iterative decision-making loop (CDC, 1999). With each completion of a feedback and program correction cycle, programs should become more responsive, more effective, and more culturally appropriate. Involvement from CBOs and other community stakeholders can enrich this process given their knowledge of the community and the cultural context within which it operates (Marcus et al., 2004). Yet the incorporation of community stakeholders into a participatory evaluation process is still a relatively new approach to program evaluation. Its application within the APC has proved to be a step in the right direction and capitalizes on the “community intelligence.”

References

  • Boulmetis J, Dutwin P. The ABCs of evaluation: Timeless techniques for program and project managers. 2nd ed. San Francisco, CA: Jossey-Bass; 2005.
  • Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR Recommendations and Reports. 1999;48:1–40.
  • Crishna B. Participatory evaluation (II)—Translating concepts of reliability and validity in fieldwork. Child: Care, Health and Development. 2006;33:224–229.
  • Denzin NK. The research act: A theoretical introduction to sociological methods. New York, NY: McGraw-Hill; 1978.
  • Fielden SJ, Rusch ML, Masinda MT, Sands J, Frankish J, Evoy B. Key considerations for logic model development in research partnerships: A Canadian case study. Evaluation and Program Planning. 2007;30:115–124.
  • Holte-McKenzie M, Forde S, Theobald S. Development of a participatory monitoring and evaluation strategy. Evaluation and Program Planning. 2006;29:365–376.
  • Judd J, Frankish CJ, Moulton G. Setting standards in the evaluation of community-based health promotion programmes—A unifying approach. Health Promotion International. 2001;16:367–380.
  • Keller LO, Schaffer MA, Lia-Hoagberg B, Strohschein S. Assessment, program planning, and evaluation in population-based public health practice. Journal of Public Health Management & Practice. 2002;8(5):30–43.
  • Marcus MT, Walker T, Swint M, Smith BP, Brown C, Busen N, von Sternberg K. Community-based participatory research to prevent substance abuse and HIV/AIDS in African-American adolescents. Journal of Interprofessional Care. 2004;18:347–359.
  • Minkler M, Blackwell AG, Thompson M, Tamir H. Community-based participatory research: Implications for public health funding. American Journal of Public Health. 2003;93:1210–1213.
  • Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage; 1990.
  • Rossi PH, Freeman HE. Evaluation: A systematic approach. Newbury Park, CA: Sage; 1993.
  • Wallerstein N. A participatory evaluation model for healthier communities: Developing indicators for New Mexico. Public Health Reports. 2000;115:199–204.
  • Windsor R, Clark N, Boyd NR, Goodman RM. Evaluation of health promotion, health education, and disease prevention programs. 3rd ed. New York, NY: McGraw-Hill; 2004.
  • Yin RK. Case study research: Design and methods. 3rd ed. Thousand Oaks, CA: Sage; 2003.