Although the importance of community engagement in research has been previously established, there are few evidence-based approaches for measuring the level of community engagement in research projects. A quantitative community engagement measure was developed, aligned with 11 engagement principles (EPs) previously established in the literature. The measure has 96 Likert response items; 3–5 quality items and 3–5 quantity items measure each EP. Cronbach’s alpha was used to examine the internal consistency of the items that measure each EP. Every EP item group had a Cronbach’s alpha > .85, indicating strong internal consistency for all item groups on both scales (quality and quantity). This information determines the level of community engagement, which can be correlated with other research outcomes.
It is important to translate research programs and findings into practice, which requires interventions that are relevant to the lives of the target community or population. Community-engaged research (CER) has emerged as an evidence-based approach to conducting research that uses community–academic partnerships to better address the complex issues that affect the health of marginalized populations (Davis et al., 2011; Israel et al., 2010; Jagosh et al., 2012; Salimi et al., 2012). CER is an umbrella term for the many forms of research (e.g., community-based participatory research, participatory action research, patient-centered research, community–academic partnerships) that have community engagement as a core principle. Although sharing this core principle makes the multiple forms of CER similar in spirit, they are not identical in implementation because they span a spectrum of community engagement from no or minimal engagement (e.g., outreach, community advisory boards) to fully collaborative partnerships (e.g., participatory action research, community based participatory research).
CER requires community involvement and control and integrates research and practice, prompting researchers to partner with the communities and populations experiencing a disproportionate burden of poor health or education or other social outcomes (Campbell & Jovchelovitch, 2007; Chesler, 1991; Israel, Schulz, Parker, & Becker, 1998; Israel et al., 2008; Minkler & Wallerstein, 2010; Mohatt et al., 2004; Nelson, Ochocka, Griffin, & Lord, 1998; Rappaport, 1987; Watts & Flanagan, 2007; Zeldin, 2004). CER has become valued as an effective research strategy for improving community conditions and reducing disparities, particularly those in health (Balazs & Morello-Frosch, 2013; Bordeaux et al., 2007; Eder, Tobin, Proser, & Shin, 2012; Nguyen, Hsu, Kue, Nguyen, & Yuen, 2010; Salimi et al., 2012; Shalowitz et al., 2009; Trinh-Shevrin et al., 2007; Wallerstein & Duran, 2006; Wallerstein & Duran, 2010).
CER is defined by working collaboratively with and through groups of people affiliated by geographic proximity, special interest, or similar situations to address issues affecting the well-being of those people; it is a powerful vehicle for developing trust in community–academic partnerships to bring about changes to improve community health (Fawcett et al., 1995). Engaging community members in the research process is often the missing link to improving the quality and outcomes of health promotion activities, disease prevention initiatives, and research studies (Balazs & Morello-Frosch, 2013; Minkler & Wallerstein, 2003; Minkler, 2004). This work involves a long-term process that includes building trust between researchers and the community through a collaborative framework (Brandon, Isaac, & LaVeist, 2005; Butterfoss & Francisco, 2004; Corbie-Smith, Thomas, & St. George, 2002; Quinn, Kass, & Thomas, 2013).
Community engagement and participation in research can contribute to a more nuanced understanding of health problems, increasing the relevance of problems examined for the affected communities and improving the fit of research activities in community-based settings (Cargo & Mercer, 2008; Jagosh et al., 2011; Minkler, 2005; Wallerstein & Duran, 2010). These changes in process can increase the quality of research, leading to higher participation rates, insightful interpretation of findings, and greater reliability and validity of measures in diverse populations (Cargo & Mercer, 2008; Jagosh et al., 2011, 2012; Nueces et al., 2012).
One research question that has received little attention is the extent to which community members in these community–academic partnerships feel engaged in the research process. Measuring the extent of partner engagement is of critical importance both as partnerships are developing and as a predictor of outcomes in the larger study.
Although the utility of CER is perceived as well established in the literature (Campbell & Jovchelovitch, 2007; Israel et al., 1998; Israel, 2005; Minkler & Wallerstein, 2010; Nelson et al., 1998; Wallerstein & Duran, 2006; Wallerstein & Duran, 2010; Zeldin, 2004), measuring and evaluating community engagement in research activities (the extent to which community members are involved with the decisions and activities of the research project) have been limited and have primarily focused on qualitative approaches (Francisco, Paine, & Fawcett, 1993; Goodman et al., 1998; Khodyakov et al., 2013; Lantz, Viruell-Fuentes, Israel, Softley, & Guzman, 2001; McCloskey et al., 2012; Sanchez, Carrillo, & Wallerstein, 2011; Schulz, Israel, & Lantz, 2003). Qualitative methods are effective at assessing community engagement at a project or program level; however, they are time consuming and do not easily scale up for the evaluation of large-scale or multicommunity projects. In addition, the results cannot be easily compared across programs or institutions for the development of evidence-based best practices.
The use of CER to address health disparities has increased markedly and the need to engage community members in the research enterprise has increased dramatically (Ross et al., 2010a,b). The 68 National Cancer Institute Comprehensive Cancer Centers and 62 medical research institutions that are members of the Clinical and Translational Science Award consortium have been mandated to engage communities in their work and disseminate evidence-based strategies (Clinical and Translational Science Awards Consortium Community Engagement Key Function Committee Task Force on the Principles of Community Engagement, 2011; Eder et al., 2013; Eder et al., 2012; Institute of Medicine, 2013; Wilkins et al., 2013). Thus, the measurement of community engagement in large-scale research programs and an examination of how this approach affects research, discovery, and translation of findings are integral to the evaluation and progress of CER.
Community–academic partners could use a quantitative measure from the beginning of a partnership to examine the quality and quantity of adherence to engagement principles (EPs) and to track community engagement longitudinally throughout the partnership. This information might determine the level of engagement across a continuum (no engagement, outreach, mobilization, organization, cooperation, collaboration, full partnership) that could be correlated with other research outcomes (e.g., recruitment rates, diversity of participants, retention rates, and other study-specific outcomes). In addition, these data might be useful in strategies for monitoring and improving these partnerships.
This article describes the development process of a quantitative measure that assesses the level of community engagement among community members, building on limited existing quantitative measures of community engagement in public health research (Israel, 2005; Khodyakov et al., 2013; Schulz et al., 2003; Weir, D’Entremont, Stalker, Kurji, & Robinson, 2009). When developed, this measure should provide scores on the overall engagement of people, as well as differentiating the level of engagement among groups in the project. The components of the Program for the Elimination of Cancer Disparities (PECaD) at the Siteman Cancer Center (a National Cancer Institute designated Comprehensive Cancer Center), which works with communities to reduce cancer disparities through outreach, education, and training, served as the partnership site evaluated.
The PECaD was established in 2003 in response to known cancer health disparities; its goals are to create a national model for eliminating disparities in cancer through community-based partnerships, be a catalyst for change in the region by fostering healthy communities, and break down barriers to quality cancer care. The Disparities Elimination Advisory Committee (DEAC) comprises community leaders representing Federally Qualified Health Centers; private physicians; health, social service, and religious organizations; survivors; survivors’ family members; and other interested community groups. The DEAC has worked to identify and develop strategies to address barriers to cancer screening, treatment, and research participation in the region. The DEAC has guided PECaD’s engagement in health promotion and education efforts to address these barriers (Arroyo-Johnson et al., 2015; Thompson et al., 2014).
PECaD began administering a biannual evaluation survey in 2011 (April–May) to evaluate PECaD’s implementation of community EPs. The web-based survey was sent to all individuals affiliated with PECaD, including members of the PECaD disease-specific partnerships, DEAC, partners in PECaD programs, and PECaD-affiliated academics. Although this initial survey was informative in assessing PECaD’s adherence to the community EPs, it lacked specificity about how adherence to these principles was being achieved (Arroyo-Johnson et al., 2015). To address this issue, the DEAC and PECaD researchers formally convened a team to create a measure.
The community engagement measure was examined among participants in the Community Research Fellows Training (CRFT) program, a PECaD pilot project (Coats et al., 2015; D’Agostino McGowan, Stafford, Thompson, Johnson-Javois, & Goodman, 2015). CRFT aims to enhance the infrastructure for CER and promote the role of underserved populations in the research enterprise.
A total of 50 community members were selected to participate in the first cohort of this 15-week-long research training program that is based on the Community Alliance for Research Empowering Social Change training (Coats et al., 2015; Goodman, Dias, & Stafford, 2010; Goodman et al., 2014; Goodman, Si, Stafford, Obasohan, & Mchunguzi, 2012). The Institutional Review Board at Washington University School of Medicine designated CRFT research as nonhuman subjects research. We reasoned that members of this group were all community activists, engaged in multiple kinds of community projects, and attending this training program that was also a research project. Their extensive experience would give them a good understanding of working with academic researchers.
The PECaD survey development team comprised a mixture of research and community members: three PECaD investigators, the PECaD data manager, PECaD program coordinator, and the DEAC community co-chair. The development team met monthly to develop the evaluation framework and second biannual community engagement survey. Several DEAC meetings were dedicated to discussing the evaluation framework, developing consensus on the principles that should guide community participation in research, and examining how to classify projects based on the level of participation by members of the specified community or population, as well as the activities used to encourage or sustain this participation. This bidirectional communication led PECaD to adopt a community-engaged partnership framework and seek to align projects with 11 EPs that have been previously developed in the literature (Israel et al., 1998; Israel, 2005; Israel et al., 2008; Khodyakov et al., 2011, 2013; McCloskey et al., 2012; Minkler & Wallerstein, 2010).
These EPs are based on the 11 principles of CER (Ahmed & Palermo, 2010; Burke et al., 2013; Butterfoss, Goodman, & Wandersman, 1996; Butterfoss & Francisco, 2004; Clinical and Translational Science Awards Consortium Community Engagement Key Function Committee Task Force on the Principles of Community Engagement, 2011; Israel et al., 1998; Israel et al., 2008; Khodyakov et al., 2013; Nueces et al., 2012; Report & Assessment, 2003; Wallerstein & Duran, 2006) and are listed in the Appendix.
The PECaD survey development team developed items aligned with the 11 EPs to assess the level of community engagement in PECaD projects and worked with DEAC in a cyclical and iterative community-engaged process.
To assess reliability, we calculated Cronbach’s alpha to measure the degree to which the EP-specific items correlated within a single EP (internal consistency). It is widely accepted that alpha should exceed 0.70 to demonstrate internal consistency in the early stages of measure development (Nunnally, 1994). Previous work determined that items pertaining to six EPs lacked sufficient internal consistency (Cronbach’s alpha < 0.70) in the initial measure (Gennarelli & Goodman, 2013). We redesigned the EP-specific question groups with insufficient internal consistency and, to reexamine internal consistency, included the revised items in a second dissemination of the community engagement measure (Gennarelli & Goodman, 2013, 2014). Participants were administered the community engagement measure (initial and revised) on two different program evaluation surveys approximately 6 weeks apart.
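To make the reliability check concrete, the alpha for one EP’s item group can be computed directly from a respondent-by-item matrix of Likert responses using the standard formula α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜₒₜₐₗ). The sketch below is in Python rather than the SAS used by the study team, and the responses shown are made up for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_respondents x k_items matrix of responses.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 5 participants to a 4-item EP group (1-5 Likert)
responses = [
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 5, 5, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))
```

Because respondents who rate one item in a group highly tend to rate the others highly as well, the summed-scale variance dominates the item variances and alpha approaches 1, which is the pattern the article reports for every EP group.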
A total of 47 fellows were administered the revised community engagement measure, which comprised 96 items (questions), each designed to pertain to a specific EP. Half of the items measured quality (how well) and the other half measured quantity (how often) of community engagement; all of the items have Likert scale response options (see the Appendix for community engagement measure). Three to five quality items and corresponding three to five quantity items measure each EP. All 48 quality items had the following response options: 1 = Poor, 2 = Fair, 3 = Good, 4 = Very Good, 5 = Excellent. All 48 quantity questions had the following response options: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Most of the time, 5 = Always.
Cronbach’s alpha was calculated to examine internal consistency for each set of community engagement items that are meant to measure the same EP and are on the same scale (quantity or quality). After internal consistency of the items for each EP was established, we summarized responses into (a) EP-specific mean scores on each scale (quantity, quality) and (b) community engagement quantity and quality scores, the mean of the EP-specific scores on each scale.
Because of the small sample size, we conducted a sensitivity analysis comparing results based on a subsample containing only complete cases to results based on data that included observations with missing items. Only observations with complete responses to the items measuring each EP were included in the complete case analysis, and observations that satisfied the inclusion criteria were included in the full sample analysis. To be included in the analyses, an observation had to have at least three answered items for EPs that contained four to five items and at least two answered items for EPs that contained three items. Additional detail on the calculation of scores, including SAS code, is provided elsewhere (Gennarelli & Goodman, 2013, 2014). Analyses were conducted in 2014; results for the revised community engagement measure are presented here.
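The inclusion criterion and the EP-specific mean score described above can be sketched as follows; this is an illustrative Python reimplementation (the study itself used SAS), with `None` standing in for a missing item:

```python
def ep_mean_score(responses):
    """EP-specific mean score for one respondent's item group.

    Inclusion rule from the article: an EP with 4-5 items needs at least
    3 answered items; an EP with 3 items needs at least 2. Returns None
    when the observation is excluded for this EP.
    """
    k = len(responses)                            # items in this EP group
    answered = [r for r in responses if r is not None]
    required = 3 if k >= 4 else 2                 # minimum answered items
    if len(answered) < required:
        return None                               # excluded from analysis
    return sum(answered) / len(answered)          # mean of answered items

print(ep_mean_score([4, 5, None, 4]))   # 3 of 4 answered: included
print(ep_mean_score([None, None, 3]))   # 1 of 3 answered: excluded
```

The community engagement quantity and quality scores are then simply the means of the eleven EP-specific scores on each scale.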
Of the 47 fellows, 46 (98%) completed the revised measure (see Appendix). The majority of respondents were female (85%), African American/Black (87%), had obtained a graduate degree (52%), and identified themselves as a community member or as being affiliated with a community-based organization (54%). Table 1 displays the demographic characteristics of the sample. Every EP question group had a Cronbach’s alpha > .85, indicating very strong internal consistency for all question groups on both quality and quantity scales. Cronbach’s alpha for each EP can be seen in Table 2. Internal consistency was strong for measures (across all 11 EPs) on each scale (quality α = 0.99 and quantity α = 0.98; Table 2).
For the full sample, quantity scores across EPs ranged from 3.4 to 4.0 (between sometimes and most of the time) on a 5-point scale, with an average of 3.8 (community engagement quantity score) and a standard deviation = 0.7; quality scores ranged from 3.1 to 3.9 (between good and very good), with a mean = 3.6 (community engagement quality score) and a standard deviation = 0.9. These results indicate that participants felt the academic partners adhered to the 11 EPs between sometimes and most of the time on the quantity scale and that the quality of the engagement was good to very good. EP 5 (involve a cyclical and iterative process in the pursuit of objectives) was rated lowest on the quantity scale. EP 11 (plan for a long-term process and commitment) was rated lowest on the quality scale. EP 1 (focus on local relevance and social determinants of health) was rated high on both the quantity and quality scales. These results are especially useful in community-engaged program evaluation to pinpoint areas that need improvement. Sensitivity analysis on complete case data shows similar results to the sample that includes missing cases (Table 3).
These results are encouraging for early stage measurement development. The strong internal consistency for all question groups indicates that the survey is well designed to measure adherence to the 11 EPs. The scores assess the level of community engagement, with higher scores corresponding to higher quality or frequency of engagement. For example, a project that completely engages community partners in all aspects of the research would have higher scores when compared to a project with community members serving on a traditional advisory board. Depending on the information of interest, different scores can be calculated.
Although community engagement quantity and quality scores allow for scale-specific, overall measurement of community engagement, EP-specific average scores measure how well the project adhered to a specific principle on each scale (quality and quantity). The best way to examine adherence to each of the 11 EPs is with EP-specific scores because they allow for a comprehensive picture of community engagement.
Average EP scores for the CRFT program ranged from 3.1 to 4.0; these results suggest that CRFT researchers adhered to the 11 PECaD EPs between sometimes and most of the time on the quantity scale and between good and very good on the quality scale. The development of EP-specific mean scores on each scale was necessary to fully understand gaps in the implementation of community EPs in research projects and improve upon the successes of community engagement in future projects. Although CER evaluation tends to be largely qualitative, EP-specific mean scores allow for simple, evidence-based, quantitative measurement of community engagement. A quantitative measure that can be implemented using web-based surveys is a major strength, especially in large-scale projects in which qualitative approaches can be cumbersome. As an increasing number of researchers begin to engage communities in their work, these measurements will help to evaluate and improve the quality of CER.
Because of missing values, most alpha calculations are based on n < 50. The smallest sample size for an alpha calculation was n = 34 for the quality-based items of EP 5. However, this sample size is sufficient as Cronbach’s alpha is precise when n ≥ 30, the number of items analyzed is at least 5, and the mean intercorrelation is at least 0.50, which is the case for EP 5 (Iacobucci & Duhachek, 2003). Additionally, Cronbach’s alpha calculations are also sufficiently precise for n ≥ 30 when at least 2 items are analyzed and have mean intercorrelation of at least 0.70 (Iacobucci & Duhachek, 2003), which is true for all calculations in Table 2, making the results of this pilot study quite valuable despite the small sample size.
The small sample size of this initial effort led us to include observations with missing items in score calculations. Sensitivity analyses were conducted to compare the results of complete case analyses to that of the full sample including observations with missing values. These analyses showed that the EP-specific scores are not sensitive to missing items because there were no statistically significant differences between scores calculated with complete case data and those calculated with data including missing values. Our measure was tested on a sample that was primarily African American females with high levels of education and may not be generalizable to other populations.
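The article does not specify which significance test was used for this sensitivity comparison; one reasonable sketch is a two-sample Welch’s t statistic (robust to unequal variances and sample sizes) comparing complete-case EP scores against full-sample EP scores. The scores below are invented for illustration:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical EP-specific scores: complete-case subsample vs. full sample
complete_case = [3.8, 3.6, 4.0, 3.5, 3.9]
full_sample = [3.7, 3.6, 3.9, 3.4, 3.8, 3.6]
print(abs(welch_t(complete_case, full_sample)))  # small |t|: scores similar
```

A |t| well below conventional critical values, as in this toy comparison, corresponds to the article’s finding of no statistically significant differences between the two analytic samples.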
Future work is necessary to examine the tool in other populations, including cognitive response testing of items, to explore participants’ reactions and thought processes when exposed to items measuring the quality and quantity of community engagement in research (e.g., comprehension or interpretation of the questions, retrieval of relevant information from memory, the formation of judgments about how to respond, and the process of deciding how much information to reveal) (Lutz & Swasy, 1977; Schwarz, 2007; Willis, Royston, & Bercini, 1991).
Additional studies should also seek to extend data on the validity of the measure, identify latent constructs, and use item response theory for the creation of a revised, shorter version of the measure. The current 96-item measure is comprehensive; a shorter version would reduce participant burden. However, the sample size (N = 46) is not adequate for additional analyses (e.g., item response theory) that would allow for a reduction in the number of items. Given the time it takes for partnerships to develop and change, when administering this comprehensive tool longitudinally to assess change in the level of community engagement over the course of a project, we suggest biannual intervals as the minimum time between assessments.
The field of CER has matured to the point where principles have been established and are often implemented in community-based studies; a major gap in this field is the ability to rigorously evaluate the level of community engagement and its impact on research processes and outcomes. The development of a quantitative measure to assess community engagement in research makes a major contribution to community-engaged science. These measures are necessary to assess associations between community engagement and research outcomes and understand the mechanisms through which community engagement affects the development and quality of scientific discovery. Although this measure was developed and initially tested in projects addressing cancer disparities in African Americans, the engagement principles measured are generalizable to other diseases and populations. The next steps in measure development include cognitive testing, development of instructions, examination of the impact of administration strategy, psychometric properties in other populations, and examination of measure validity. This measure can be used for monitoring and improving partnerships and exploring how partnerships facilitate outcomes.
This work, the work of the Program for the Elimination of Cancer Disparities evaluation team, and the Community Research Fellows Training pilot project are funded by National Institutes of Health, National Cancer Institute (grant U54CA153460).
The work of Dr. Goodman is supported by the Barnes-Jewish Hospital Foundation, Siteman Cancer Center, National Institutes of Health, National Cancer Institute (grant U54CA153460), and Washington University School of Medicine.
|Engagement Principle 1: Focus on local relevance and social determinants of health|
|Focus on issues important to my community.|
|Focus on health problems that the community thinks are important.|
|Focus on the combined interaction of factors (e.g., personal, social, economic) that influence health status.|
|Focus on cultural factors that influence health behaviors.|
|Engagement Principle 2: Acknowledge the community|
|Show appreciation for community time and effort|
|Highlight the community’s involvement.|
|Give credit to community members and others for work.|
|Value community perspectives.|
|Engagement Principle 3: Disseminate findings and knowledge gained to all partners|
|Let community members know what is going on with the project|
|Help community members with problems of their own|
|Empower community members with knowledge gained from a joint activity|
|Get findings and information to community members|
|Help community members disseminate information using community publications|
|Engagement Principle 4: Seek and use the input of community partners|
|Ask community members for input|
|Use the ideas and input of community members|
|Change plans as a result of community input|
|Involve community members in making key decisions|
|Ask community members for help with specific tasks|
|Engagement Principle 5: Involve a cyclical and iterative process in pursuit of objectives|
|Share the results of how things turned out with the community|
|Seek community input and help at multiple stages of the process|
|Inform the community of what happened when their ideas were tried|
|Plan for ongoing problem solving|
|Involve the community in determining next steps|
|Engagement Principle 6: Foster co-learning, capacity building, and co-benefit for all partners|
|Learn from community members|
|Help community members gain important skills from involvement|
|Encourage academic partners and community members to learn from each other|
|Help community partners get what they need from academic partners|
|Help community members achieve social, educational, or economic goals|
|Engagement Principle 7: Build on strengths and resources within the community|
|Build on strengths within the community|
|Build on resources within the community|
|Help to fill gaps in community strengths and resources|
|Work with existing community networks|
|Engagement Principle 8: Facilitate collaborative, equitable partnerships|
|Foster collaborations in which community members are real partners|
|Handle disagreements fairly|
|Demonstrate that community members are really needed to do a good job|
|Demonstrate that community members’ ideas make things better|
|Enable community members to voice disagreements|
|Engagement Principle 9: Integrate and achieve a balance of all partners|
|Enable all people involved to voice their views|
|Make final decisions that reflect the ideas of everyone involved|
|Demonstrate that community members’ ideas are just as important as academics’ ideas|
|Treat community members’ ideas with openness and respect|
|Engagement Principle 10: Involve all partners in the dissemination process|
|Make sure that all partners are involved with sharing findings|
|Include community members in plans for sharing findings.|
|Involve community members in sharing health messages in community settings.|
|Listen to community members when planning dissemination activities.|
|Engagement Principle 11: Plan for a long-term process and commitment|
|Make plans for community-engaged activities to continue for many years.|
|Make commitments in communities that are long-term.|
|Want to work with community members for many years.|
Melody S. Goodman, Washington University School of Medicine.
Vetta L. Sanders Thompson, Brown School of Social Work, Washington University in St. Louis.
Cassandra Arroyo Johnson, Washington University School of Medicine.
Renee Gennarelli, Washington University School of Medicine.
Bettina F. Drake, Washington University School of Medicine.
Pravleen Bajwa, Washington University School of Medicine.
Maranda Witherspoon, Missouri Foundation for Health.
Deborah Bowen, University of Washington School of Medicine.