J Comp Eff Res. Author manuscript; available in PMC May 1, 2013.
PMCID: PMC3583228
NIHMSID: NIHMS422478
Facilitating comparative effectiveness research in cancer genomics: evaluating stakeholder perceptions of the engagement process
Patricia A Deverka,1* Danielle C Lavallee,1 Priyanka J Desai,1 Joanne Armstrong,2 Mark Gorman,3 Leah Hole-Curry,4 James O’Leary,5 BW Ruffner,6 John Watkins,7 David L Veenstra,8 Laurence H Baker,9 Joseph M Unger,10 and Scott D Ramsey10
1Center for Medical Technology Policy, 401 East Pratt Street, Suite 631, Baltimore, MD 21202-3117, USA
2Aetna, Three Sugar Creek Boulevard, Sugar Land, TX 77478, USA
3National Coalition for Cancer Survivorship, 1010 Wayne Avenue Ste 770, Silver Spring, MD 20910, USA
4Washington State Health Care Authority, 626 8th Avenue SE, Olympia, WA 98501, USA
5Genetic Alliance, 4301 Connecticut Avenue NW, Suite 104, Washington, DC 20008, USA
63021 East Brow Road, Signal Mountain, TN 37377, USA
7Premera Blue Cross MS 432, 7001 220th Street SW, Mountlake Terrace, WA 98043-2124, USA
8University of Washington, School of Pharmacy, 1959 NE Pacific Street H362 Health Sciences Building, Seattle, WA 98195-7631, USA
9University of Michigan, 24 Frank Lloyd Wright Drive, Ste. A3400, Ann Arbor, MI 48106, USA
10Fred Hutchinson Cancer Research Center, University of Washington, 1100 Fairview Avenue North, Seattle, WA 98109-4433, USA
*Author for correspondence: Tel.: +1 410 547 2687, Fax: +1 410 547 5088, pat.deverka@cmtpnet.org
Aims
The Center for Comparative Effectiveness Research in Cancer Genomics completed a 2-year stakeholder-guided process for the prioritization of genomic tests for comparative effectiveness research studies. We sought to evaluate the effectiveness of engagement procedures in achieving project goals and to identify opportunities for future improvements.
Materials & methods
The evaluation included an online questionnaire, one-on-one telephone interviews and facilitated discussion. Responses to the online questionnaire were tabulated for descriptive purposes, while transcripts from key informant interviews were analyzed using a directed content analysis approach.
Results
A total of 11 out of 13 stakeholders completed both the online questionnaire and the interview process, while nine participated in the facilitated discussion. Eighty-nine percent of questionnaire items received overall ratings of agree or strongly agree; the remaining 11% of responses were neutral, with the exception of a single rating of disagreement with an item regarding the clarity of how stakeholder input was incorporated into project decisions. Recommendations for future improvement included developing standard recruitment practices, role descriptions and processes for improved communication with clinical and comparative effectiveness research investigators.
Conclusions
Evaluation of the stakeholder engagement process provided constructive feedback for future improvements and should be routinely conducted to ensure maximal effectiveness of stakeholder involvement.
Keywords: cancer genomics, comparative effectiveness research, evaluation, qualitative research, research prioritization, stakeholder engagement, stakeholders
Comparative effectiveness research (CER) evaluates different approaches to the diagnosis, prevention and treatment of medical conditions through either the direct generation of evidence or the synthesis of existing evidence [1]. A defining characteristic of CER is the expectation that stakeholders such as patients, clinicians and policymakers are actively consulted and involved throughout the research process [1,2]. This includes stakeholder engagement at milestones such as identifying and prioritizing research topics, developing research protocols, and interpreting and disseminating research findings. Through active dialogue and collaboration with a diverse group of stakeholders, it is anticipated that the quality of research will be enhanced, research will produce evidence better aligned with healthcare decision-making needs, and study results will be more likely to be translated into clinical practice [2,3,101]. These goals are similar to those of public involvement initiatives in National Health Service-sponsored research in the UK and community-based participatory research efforts in the USA, where there is also an emphasis on recognizing and fostering the role and unique contributions of patients and communities in the research process [4–7].
While the involvement of stakeholders has gained recognition as a critical feature of CER, there have been few evaluations of the impact of the engagement process itself. To date, CER-related insights have come primarily from the Agency for Healthcare Research and Quality’s Effective Health Care Program in the area of priority-setting [8,101]. Limited information exists characterizing how stakeholders have been selected, the effectiveness of different methods for involving them in the research process and their perceptions about the relative success of the overall process [8,101]. Thus, what constitutes best practice for involving stakeholders, and whether their involvement leads to different decisions and more informative study results, is not yet known.
Frameworks for evaluating the effectiveness of stakeholder engagement practices can be found in the extensive literature on engaging the public and other stakeholders in questions involving environmental risk, science, technology and healthcare, specifically focusing on models of participation that involve interactive consultation between stakeholders and project leaders (deliberative methods) [9–12]. Findings from this literature have correlated the quality of deliberative methods with measures of successful project outcomes, offering some evidence that an evaluation of the stakeholder engagement process in CER may be informative as an initial study end point because process measures are likely to be related to at least near-term project outcomes [9,12,13]. Furthermore, evaluating stakeholder engagement activities provides investigators with a mechanism to ensure that study goals and objectives are met and helps refine methods for future practice [3].
The objective of this study was to assess a stakeholder engagement process within a multi-organization CER initiative in oncology. The Center for Comparative Effectiveness Research in Cancer Genomics (CANCERGEN) is a multidisciplinary collaborative consortium that includes the Fred Hutchinson Cancer Research Center, University of Washington, the Center for Medical Technology Policy (CMTP), and SWOG (a clinical trials Cooperative Group funded by the National Cancer Institute). The overall objective of CANCERGEN is to generate high-quality evidence regarding the clinical utility and economic impact of promising genomic technology applications versus standard care in oncology. An essential feature of this project is the role of an External Stakeholder Advisory Group (ESAG), created to inform the prioritization and design of comparative effectiveness studies for future conduct within SWOG (Box 1) [14,15]. The results of this study may be helpful for establishing and refining stakeholder engagement practices for future CER projects in complex organization settings.
Box 1. Center for Comparative Effectiveness Research in Cancer Genomics project goals
  • Establish CANCERGEN as a sustainable, multidisciplinary collaborative consortium that facilitates the rapid design and implementation of prospective CER studies on genomics and personalized medicine technologies.
  • Using a consensus process with multiple stakeholders, develop a comprehensive evaluation, assessment and design process to prioritize emerging cancer genomics technologies that can be evaluated through the SWOG clinical trials network.
  • Implement a ‘proof-of-principle’ comparative effectiveness evaluation in conjunction with a currently planned SWOG clinical trial.
CANCERGEN: Center for Comparative Effectiveness Research in Cancer Genomics; CER: Comparative effectiveness research.
Stakeholder selection
Starting in October 2009, CANCERGEN investigators identified the relevant stakeholder categories for the project’s ESAG as patient/consumer, payer, clinician, policymaker/regulator and life sciences industry. Nomination criteria required that potential members be senior representatives of their respective organizations, have a working knowledge of genetics, personalized medicine or oncology, possess excellent communication skills and be able to commit the requisite time for a 2-year study. A total of 31 potential members were identified, of whom 13 were recruited: two (of four) patient/consumer representatives, three (of six) healthcare providers, two (of six) industry representatives, three (of six) purchasers and payers, and three (of nine) policymakers and regulators. The most common reason for nonparticipation was the time commitment.
Stakeholder engagement activities
Over a 2-year period starting in January 2010, the ESAG participated in a series of activities designed to solicit their perspectives on the prioritization of genomic tests as well as several CER study designs. The impact of the ESAG’s involvement on project outcomes is depicted in Figure 1. Initially, the ESAG provided input on a Phase III protocol that had been previously designed by SWOG investigators (also referred to in this study as ‘clinical investigators’ to distinguish them from the CANCERGEN or ‘CER’ investigators) evaluating the use of OncoType Dx® in the management of women with node-positive breast cancer. Consultation with the ESAG members occurred during one teleconference-based meeting where various aspects of the protocol, such as the inclusion/exclusion criteria and outcome measures, were presented by clinical investigators and the merits of various options from a comparative effectiveness standpoint were discussed. Subsequently, stakeholders participated in several teleconference-based meetings to establish priority-setting criteria for genomic tests that had been selected through a landscape analysis [15].
Figure 1. Stakeholder engagement process in the Center for Comparative Effectiveness Research in Cancer Genomics.
Next, ESAG members used these criteria to provisionally rank six genomic tests prior to a 1-day face-to-face meeting in June 2010, where CANCERGEN investigators and stakeholders discussed the current evidence base for each test, followed by a final priority-setting exercise. The in-person meeting was followed up 5 months later with a facilitated teleconference-based discussion focused on the review of two value-of-information (VOI) models generated to further inform the priority-setting process. VOI methods can estimate the benefit of investing in additional studies to determine whether a particular test or treatment should be brought to clinical practice. In this way, VOI is a tool that can help decision-makers maximize the impact of research portfolios on medical care and human health.
Stakeholders used the VOI results to re-rank the final selection of the top three genomic tests. In March 2011, the ESAG again met face-to-face to provide input on draft study designs for two of the top three tests.
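To make the VOI concept concrete, the following is a minimal, illustrative sketch in Python of how an expected value of perfect information (EVPI) calculation is typically carried out by Monte Carlo simulation. All distributions, the willingness-to-pay threshold and the patient numbers below are assumptions chosen for illustration only; they are not the inputs or outputs of the CANCERGEN VOI models.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical net monetary benefit (NMB) of two strategies under parameter
# uncertainty: strategy 0 = standard care, strategy 1 = test-guided therapy.
# All numbers below are illustrative assumptions.
n_sims = 10_000
wtp = 100_000  # assumed willingness-to-pay per QALY

qalys = np.column_stack([
    rng.normal(1.70, 0.15, n_sims),   # standard care
    rng.normal(1.78, 0.20, n_sims),   # test-guided therapy
])
costs = np.column_stack([
    rng.normal(40_000, 4_000, n_sims),
    rng.normal(47_000, 6_000, n_sims),
])
nmb = wtp * qalys - costs             # NMB per simulation and strategy

# EVPI = E[max over strategies of NMB] - max over strategies of E[NMB]
evpi_per_patient = nmb.max(axis=1).mean() - nmb.mean(axis=0).max()

# Population EVPI: scale by the (assumed) number of patients facing the
# decision over the time horizon of interest.
eligible_patients = 50_000
population_evpi = evpi_per_patient * eligible_patients

print(f"Per-patient EVPI: ${evpi_per_patient:,.0f}")
print(f"Population EVPI:  ${population_evpi:,.0f}")

In practice, a population EVPI of this kind is compared against the expected cost of the proposed study to judge whether further research is worthwhile.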
Evaluation process
In March 2011, an evaluation of the 2-year stakeholder engagement process (Figure 1) was conducted using a multimodal approach in order to gather a range of feedback while allowing an opportunity for focused discussion [16]. This included an online questionnaire administered via SurveyMonkey, followed by one-on-one, semistructured interviews and, finally, a facilitated discussion with the CANCERGEN investigators [102]. An online Likert-scale questionnaire was developed based on a previously published evaluation framework involving four components for evaluating deliberative processes: representation, the structure of the process or procedures, the information used in the process, and the outcomes and decisions arising from the process [9]. These four domains were customized for specific application to CANCERGEN and are defined in Table 1. Respondents completed the questionnaire by indicating their level of agreement with 29 statements (Table 2) relating to their experience in the CANCERGEN project, developed to correspond to the four domains. Respondents were also asked to provide their name and to self-identify according to their stakeholder category – patient, clinician, payer, healthcare provider or policymaker. Each question also included a comment box to allow respondents the opportunity to expand on their rating.
Table 1. Definitions of evaluation domains.
Table 2. Recommendations for future stakeholder engagement activities in cancer-related comparative effectiveness research.
Stakeholders were given 2 weeks to respond to the questionnaire. During this response period, stakeholders were contacted to schedule a one-on-one, telephone-based interview; because the interview required prior completion of the questionnaire, scheduling also served as a reminder to complete it. A semistructured interview guide was developed during this time to obtain in-depth information on the ESAG members’ experience. All interviews were conducted by the CMTP staff who had led all stakeholder engagement activities throughout CANCERGEN. Prior to each interview, CMTP staff reviewed the stakeholder’s questionnaire results and noted areas of neutrality or potential disagreement. Interviewers selected probes from the interview guide in order to further investigate potential areas of disagreement and to seek constructive criticism about the engagement process. Each interview was recorded with the stakeholder’s consent and lasted 30–45 min. The results of both the online questionnaire and the interviews were summarized and circulated to the ESAG. ESAG members had the opportunity to provide comments in writing or through a facilitated discussion held by teleconference with the CANCERGEN investigators, allowing an opportunity to validate the findings as well as to identify the results stakeholders felt were most important to highlight.
Data analysis
Owing to the small sample size, responses to the online questionnaire were tabulated for descriptive purposes only. Interviews were analyzed using a directed content analysis approach [17]. To ensure analytical/investigator triangulation, two investigators independently coded the data and compared their results at the beginning and end of the coding process [18,19]. Discrepancies in coding were resolved through discussion. NVivo® software was used to manage the qualitative data analysis [103].
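As an illustration of the descriptive tabulation described above, the following is a minimal sketch in Python (using pandas) that cross-tabulates Likert ratings by evaluation domain. The records, respondent identifiers and rating labels are fabricated placeholders, not the study’s data or analysis code.

import pandas as pd

# Fabricated example: one row per respondent-item, with the item's
# evaluation domain and its Likert rating.
ratings = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

responses = pd.DataFrame(
    [
        ("R01", "Representation", "Agree"),
        ("R01", "Process", "Strongly agree"),
        ("R01", "Information", "Neutral"),
        ("R01", "Outcomes", "Agree"),
        ("R02", "Representation", "Strongly agree"),
        ("R02", "Process", "Disagree"),
        ("R02", "Information", "Agree"),
        ("R02", "Outcomes", "Neutral"),
    ],
    columns=["respondent", "domain", "rating"],
)
responses["rating"] = pd.Categorical(responses["rating"], categories=ratings, ordered=True)

# Counts and within-domain percentages of each rating category
counts = pd.crosstab(responses["domain"], responses["rating"], dropna=False)
percent = (pd.crosstab(responses["domain"], responses["rating"], normalize="index") * 100).round(1)

print(counts)
print(percent)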
The evaluation had a response rate of 85%, with 11 out of 13 stakeholders completing both the questionnaire and the interview. Two stakeholders did not complete the evaluation process owing to competing priorities within the response time frame. The questionnaire results reflected stakeholders’ generally positive assessment of the 2-year project experience, with 89% of questionnaire items receiving a rating of agree or strongly agree. The remaining 11% of responses were neutral (neither agree nor disagree), with the exception of one question that received a response of disagree. None of the respondents strongly disagreed with any of the questionnaire items. The one-on-one interviews provided an opportunity for a richer dialogue regarding each stakeholder’s engagement experience. The principal findings of the interviews and subsequent stakeholder review are categorized by domain below, with stakeholder recommendations for improvements to future stakeholder engagement initiatives outlined in Table 2.
Representation
Questionnaire findings related to this category were generally positive, with 92% of responses to questions concerning ‘representation’ rated as strongly agree or agree and the remaining 8% neutral. These findings were explored in more detail during the key informant interviews, where stakeholders reiterated that they felt the ESAG was balanced in perspective with the appropriate level of representation within the various stakeholder categories. In addition, respondents agreed that the size of the ESAG was fitting for the level of engagement and discussion required for the various tasks over the 2-year project period. During the stakeholder interviews a medical oncologist stated “I thought the size of the group was good. Too many people would have made the free-flow discussion difficult.”
Several stakeholders noted that expanding the size of the ESAG would have unfavorably altered the group dynamic because open dialogue would have become more complicated. While the stakeholders preferred a smaller group size, they had several suggestions for gaining additional expertise on an ‘as-needed’ basis. For example, some suggested that topic-specific experts could be brought into the group for individual meetings or tasks, or that the broader stakeholder group that they represented be given the opportunity to comment on CANCERGEN work independent of ESAG meetings and teleconferences. Stakeholders also suggested oversampling groups or requesting alternate representatives so that there would be greater assurance that all perspectives were present when an individual ESAG member could not participate in a meeting.
Some stakeholders expressed that their project roles were not explicitly defined at the outset and suggested that facilitators should provide each stakeholder with a short description of his or her role during the recruitment phase to clarify expectations. A few stakeholders gave examples of times when they struggled with the ambiguous nature of their role on the ESAG and were not clear if they were being asked to speak for themselves, their organization or on behalf of a larger stakeholder community. If the latter, several stakeholders cited the difficulty of representing their broader stakeholder community owing to the heterogeneity of perspectives within the community. As one consumer representative stated during the interviews, “I tried to represent the consumer perspective but there is no one single consumer perspective. Having an individual speak for a larger group – it’s not just speaking for patients and consumers, but also different types of communities, this becomes a very big challenge.”
During the teleconference discussion of the interview findings between the investigators and the ESAG, stakeholders suggested providing a narrative that describes the background and experience of all the stakeholders involved at the project outset, so that each stakeholder better understood the others’ perspectives.
Process
On the basis of the questionnaire results, stakeholders generally felt that the overall engagement process was conducted in a manner that enabled respectful deliberation (strongly agree: 37%; agree: 52%). Eleven percent of responses to process-related questionnaire items were neutral, with the exception of a single response of disagreement to the item regarding whether there was a clear process for communicating back to the ESAG whether their input had been incorporated into SWOG decisions.
Again, the qualitative interviews provided an opportunity to probe for additional insights into the questionnaire results and also revealed recommendations for improvement. Stakeholders found the level of communication with CANCERGEN investigators to be adequate but recommended more opportunities for interaction, recognizing the unique opportunity for stakeholders to engage clinical investigators in dialogue on research development. In particular, stakeholders were interested in understanding how their input was used to inform clinical investigator decision-making. A patient representative spoke of this need during the interviews by stating, “Need more feedback from the (clinical) investigators. Would have been interesting to hear from the (clinical) investigators what they felt the value of the ESAG was.”
In addition, stakeholders suggested that more frequent communication from process facilitators was needed in between in-person meetings and teleconferences to maintain stakeholder involvement over time. Several individuals also recommended that conflicts of interest among stakeholders be specifically addressed and disclosed at the beginning of the process.
Stakeholders found the recruitment process to be unclear and when asked, most either could not recall how they were recruited or described informal procedures such as being contacted because they had previous connections with one or more CANCERGEN investigators or had served on similar groups in the past. During the key informant interviews, stakeholders unanimously preferred in-person meetings for deliberations as this forum best enabled discussion and decision-making as compared with other modalities for engagement activities such as teleconferences and online questionnaires. However, this preference was noted with the simultaneous concern that a significant subset of the ESAG was unable to attend some or all of the in-person meetings over the 2 years. Therefore, stakeholders recommended that future engagement projects further embrace the use of technology to increase longitudinal participation, with one stakeholder representing the payer perspective stating during the interviews: “You will never get all stakeholders to attend the meeting. Remote attendance will make it easier for people to participate and also reduce the overall costs of gathering stakeholder input.”
A vocal minority of stakeholders were particularly adamant that the engagement process should occur more quickly, with more efficient communication feedback loops between the ESAG and the clinical investigators. This was particularly important to ensure that the ESAG’s input would be timely, relevant and have an impact on the study decision-making process at SWOG. A medical oncologist alluded to this during the interviews, stating, “The work of CANCERGEN was very important, but you have to figure out some way to make the process move faster. While you are debating one genetic test, three others are emerging.”
Information
The questionnaire results reflected positive assessments (strongly agree: 36%; agree: 55%) of the accessibility, comprehensibility, balance and objectivity of the information provided to stakeholders throughout the project. The remaining 9% of responses indicated a neutral rating. When explored during the interviews, stakeholders generally gave high marks to the meeting summaries prepared following specific engagement activities. These documents were seen as essential for documenting major points of contention or agreement following stakeholder meetings as well as key decisions, but stakeholders also suggested that, in the future, greater use should be made of executive summaries with hyperlinks to full details to reduce information burden on busy stakeholders while maintaining timely communication about the study (Table 2).
However, interview feedback regarding the quantity and quality of background materials received prior to engagement activities (e.g., information on genetic tests, value-of-information modeling and specific CER study designs) was more mixed. For example, patient and consumer representatives preferred less technical materials, with one stating in an interview, “This stuff is kind of technical. In some cases you were [lucky or careful] that the patient representatives had experience with genetics, testing, [and] test results … patient advocates can handle technical stuff, but not all of them.”
By contrast, other stakeholders such as researchers and clinicians preferred a more in-depth technical analysis. During the interviews a clinician stated, “The summaries could be superficial and the people who were writing them did not have a concept of what would be important to a medical oncologist.”
Discussion with the ESAG during a follow-up teleconference underscored the recommendation to tailor technical briefs and other forms of scientific and clinical information when possible for stakeholder groups based on their comfort level with scientific content.
Outcomes
Overall, responses to the questionnaire (strongly agree: 30%; agree: 56%) reflected the stakeholders’ generally positive perceptions that CANCERGEN objectives were achieved, that ESAG input was meaningfully incorporated into the decisions made by the clinical investigators, and that they were satisfied with the process. However, this section of the online questionnaire also received the largest percentage of neutral responses (14%).
Interview results mirrored the questionnaire data: stakeholders felt that the priority-setting exercise in CANCERGEN was executed successfully and agreed that the project had achieved the overarching goal of creating a stakeholder-guided process for conducting CER within a cancer clinical trials cooperative group. Stakeholders completed the process with a better understanding of CER, stakeholder engagement and VOI modeling as a tool for priority-setting, and with an increased capacity to participate in future CER projects. Stakeholders also stated that the objectives of CANCERGEN were met, as measured by the ability of the group to prioritize genetic tests for subsequent CER studies using a deliberative process that they rated as satisfactory and that allowed them to have an impact on final study decisions.
Stakeholder opinions towards the final study designs, however, were varied. Some individuals expressed frustration that only one of the selected tests could be pursued for a CER study for reasons that were not clearly communicated to the ESAG at the project outset. They questioned whether the ESAG was able to change decisions that had already been made by the clinical investigators. The reality that the ESAG and CANCERGEN were ultimately somewhat outside of the study decision-making process of SWOG was expressed in a comment by one medical oncologist during an interview: “Spent a lot of time debating and evaluating ERCC1 (the top-ranked test) and then SWOG (clinical investigators) decided not to do it – this was a wasted effort.”
Several stakeholders discussed the implications for CER in the broader context of oncology research. A few participants commented that research funders should move beyond the traditional clinical research paradigm to support timely and relevant research that focuses on the needs of patients and has a broader public health impact. During an interview, a patient representative commented, “Research should provide answers that the users are actually interested in. Everyone needs information that is accurate. We need to answer questions that have a larger public health impact.”
We undertook a systematic evaluation of the stakeholder engagement process supporting CANCERGEN, using a published evaluation framework that specifies four key components of any evaluation of a deliberative process (representation, process, information and outcomes) [9]. Our principal findings were that stakeholders generally felt that the 2-year process was fair and competent and that the goal of establishing a stakeholder-driven priority-setting process for cancer genomics CER in a clinical trials cooperative group was achieved. Nevertheless, based primarily on information obtained through semistructured interviews, stakeholders identified the following challenges:
  • Lack of transparency in recruitment processes
  • Unclear role expectations for stakeholders
  • Limited opportunities to communicate with clinical investigators
  • Lack of clarity on how stakeholder input was integrated into study design decisions
  • The challenge of balancing timely input from stakeholders with the pace of research needs
They also provided specific recommendations for improvements in each of the four domains for future stakeholder engagement projects in cancer (Table 2); however, major lessons learned from the evaluation include the need to develop standard stakeholder recruitment practices, role descriptions and procedures for improved communication with clinical and CER investigators. While we intend to apply these insights in future projects, given that stakeholder engagement is relatively new in its application to CER, we conclude that evaluation should be routinely conducted, particularly for multiyear projects that are likely to require significant resources.
This study used both questionnaire and interview methods to explore known areas of concern in engaging stakeholders in CER, while also encouraging frank discussion with investigators to obtain valuable feedback in order to improve on future stakeholder engagement practices in oncology. For example, stakeholders overwhelmingly supported the small group size chosen for the CANCERGEN ESAG, yet small group size has been a concern expressed by CER researchers worried that this constraint could limit the perspectives represented, thus influencing the decisions or recommendations made by the group [8,20,101]. While preferring the perceived advantages of a small group size, the CANCERGEN ESAG noted this trade-off. For any future projects, investigators will need to weigh the pros and cons of altering the size of their stakeholder group. Most importantly, all parties need to recognize that stakeholder groups will never be ‘representative’ in the statistical sense. They are chosen to give a rich range of perspectives and there are qualitative methods involving purposive sampling for ensuring this goal is achieved [19]. Another insight gained during the interviews was the importance of formalizing the recruitment process, including developing a project role description and ensuring that conflicts of interest are explicitly disclosed. Stakeholders were selected based on their knowledge, experience and direct interest in a given topic, increasing the likelihood for conflicts of interest. Managing these potential conflicts, while ensuring a fair and transparent process, is an area of intense interest in CER [21]. At present, while obtaining disclosure of conflicts of interest from all stakeholders is recommended, more guidance is warranted on this topic and should be a continued focus for future research.
Results from the questionnaire and interviews also revealed complexities in the collaboration between CER and clinical investigators that directly affected stakeholder perceptions of engagement processes. The single rating of disagreement was given in response to the online questionnaire item concerning the clarity of how ESAG input was incorporated into decision-making processes. While this represents a very small percentage of the questionnaire responses, it warrants examination, as the timeliness and completeness of communications were extensively discussed during the follow-up interviews. The practical realities of coordinating CANCERGEN objectives within the existing SWOG decision-making infrastructure sometimes resulted in communication gaps between the ESAG and clinical investigators. This was most notable when the ERCC1 expression test (to predict response to platinum-based adjuvant therapy in patients with non-small-cell lung cancer) was chosen as the top-priority test by the ESAG and then was not selected by clinical investigators for further advancement within SWOG. The reason this study could not be initiated in the short term was strong competition for non-small-cell lung cancer patients from industry-sponsored trials, although it is likely to be pursued at a later time. In addition, resource constraints adversely affected the short-term likelihood of pursuing a prospective evaluation of the second-ranked test, breast cancer tumor markers, given the significant level of investment required to support the proposed definitive clinical trial. A claims-based analysis of current practice patterns and cost implications of monitoring breast cancer patients with these tumor markers is currently underway as an intermediate step to inform future studies.
These factors were outside the control of CANCERGEN but also were not always clearly communicated to the ESAG. This experience directly affected some ESAG members’ perceptions of their ability to meaningfully prioritize research topics. In the future, greater interaction and communication with clinical investigators would help the ESAG understand how their input was being received within the SWOG disease committees and the competing issues influencing research decisions. Stakeholders openly expressed during their individual interviews that they were interested in knowing that their contributions had an impact on project objectives and that different (and hopefully better) decisions resulted from their participation in the project. This view is entirely consistent with the conclusions of UK experts on public participation who hoped to avoid feelings of cynicism and tokenism when they wrote, “It is a waste of everyone’s time unless the decision-maker is willing to listen to others’ views and then do something which it would not have done otherwise” [10].
Ensuring that all stakeholders felt they had sufficient information to meaningfully participate in project-related discussions, while avoiding ‘stakeholder fatigue’ over the 2 years of the project, was another difficult line to navigate to everyone’s satisfaction. Feedback was variable regarding the amount and content of technical information provided to the ESAG prior to each engagement activity. A few stakeholders felt more information would be better, while others felt that more information would be overwhelming and difficult to process. Similarly, there were mixed views with respect to how technically sophisticated the background information should be to support informed decision-making. This is an important consideration for stakeholder engagement in CER, as the benefit of bringing diverse stakeholders to the table lies in being able to view research topics through the lens of decision-makers such as patients, clinicians and policymakers. Stakeholders will have different information needs and, while tailoring information to the respective stakeholders is ideally recommended, this may not always be feasible owing to time and resource constraints. The goal is always to make information as accessible as possible and to minimize the unavoidable power imbalances that occur when there are ‘experts’ and lay persons participating as members of a stakeholder group. The issue of managing information across a diverse group of stakeholders is not a challenge unique to CER. For example, one stakeholder recommended that for future projects researchers look to other healthcare organizations that routinely communicate information to diverse stakeholders with varied information needs, such as the American Heart Association and the National Quality Forum.
The time required to actively involve stakeholders throughout research activities has been a noted concern of researchers [8,101]. In particular, the additional activities supporting stakeholder involvement can add to timelines for completing research projects, especially those occurring as part of in-person meetings [8]. Interestingly, in the CANCERGEN project, stakeholders themselves highlighted this issue as a concern and noted the importance of establishing processes to obtain stakeholder input more efficiently and to ensure that communication feedback loops with investigators and SWOG were closed in a more timely manner, so that stakeholder input could match the pace of scientific advancements in genomics.
This evaluation of stakeholder engagement in a multiyear CER initiative demonstrates an important, but often missed, aspect of CER activities. Evaluation of stakeholder engagement is essential for establishing legitimacy, measuring impact and informing future improvements [22–25]. However, there are few formal evaluations of the effectiveness of stakeholder engagement in CER and no established best practices [101]. We based our evaluation on four domains for evaluating deliberative processes: representation, the structure of the process or procedures, the information used in the process and the outcomes and decisions arising from the process [9]. The roots of this approach rely on the meta-principles of fairness and competence for judging the effectiveness of deliberative processes, but operationalize these normative criteria into process and outcome measures [22,26]. Defining stakeholder engagement in CER as an iterative process of actively soliciting stakeholder perspectives for the dual purposes of decision-making and achieving a shared understanding with investigators allows comparisons to the evaluation literature on deliberative processes [9,11,27]. In particular, a central component of deliberative processes involves informed participants engaging in discussion to understand others’ views and arrive at a final decision or recommendation on the topic [9]. Given that stakeholder engagement in CER has exactly these goals, we adapted the evaluation framework from the public engagement literature, recognizing that target stakeholder groups for CER projects routinely go beyond the public (patients) to include clinicians, payers, policymakers and industry.
Limitations
While the four-component evaluation framework was developed to guide evaluation processes of public involvement in the healthcare sector, it has not been specifically used in the CER setting [9]. Moreover, the specific adaptation of each domain in the form of questionnaire and interview questions was not pilot-tested or validated prior to administration. We chose the multimodal approach of sequentially obtaining online ratings followed by telephone-based interviews in order to encourage candid feedback from each stakeholder. However, we found that the self-administered questionnaire was not a particularly sensitive measure of stakeholder perceptions of the effectiveness of the engagement process and relied extensively on the semistructured interview responses for insights about lessons learned and recommendations for improvements. In future evaluations, we would expand our questionnaire to also include questions based on the conceptual framework of stakeholder engagement in CER and validate them prior to administration [27]. The goal would be to have a more comprehensive questionnaire that could be used to evaluate future CER projects [27].
In addition, the evaluation only occurred at the completion of a 2-year process. Thus, the responses reflect the stakeholders’ evaluation of the process as a whole and at a snapshot in time, and do not provide specific detail about different engagement activities over the project lifespan. Further, responses at the end of the project were subject to recall bias, potentially limiting the level of feedback stakeholders could provide on early experiences in the project. A particular concern would be that the passage of time obscured notable frustrations or enthusiasms regarding certain aspects of the engagement. Evaluations conducted at regular intervals throughout the engagement process would provide an opportunity to obtain timely feedback specific to different types of engagement and would address any issues or concerns in a timely manner. Finally, we would also include CER and clinical investigators as participants in the evaluation, as they are also stakeholders in the process and their involvement in the development and implementation of best practices is critical.
Although there is a clear rationale for involving stakeholders in CER, much remains to be learned regarding how best to structure the process given particular project objectives and what factors are associated with ‘successful’ involvement from the perspectives of both investigators and stakeholders. Routinely conducting evaluations of the stakeholder engagement component of projects is critically important for understanding the impact of stakeholder involvement and making evidence-based changes to current practices. The insights gained from the CANCERGEN evaluation should prove useful to others designing stakeholder-guided projects in CER, as this represents one of the few systematic evaluations of a multiyear project involving both priority-setting and study design. While ultimately rated as successful in meeting project goals by stakeholders, there are nevertheless important lessons learned from this evaluation that will help ensure the relevancy, quality and efficiency of CER results for future CER studies in oncology.
Executive summary
  • Evaluations of comparative effectiveness research activities involving stakeholders have not been routinely conducted or reported in the literature.
  • We conducted this evaluation of stakeholder engagement procedures to ensure that study goals and objectives were met and to obtain insight for refining processes for future initiatives.
  • Investigators should clearly communicate recruitment procedures, role descriptions and expectations for involvement to facilitate meaningful engagement.
  • While in-person meetings provide an ideal opportunity for discussion and dialogue, scheduling conflicts may prohibit full participation. The use of technology should be embraced for research activities that involve longitudinal participation.
  • Regular and frequent communication between researchers and stakeholders regarding project updates, and in particular how stakeholder input is being used to inform clinical investigator decision-making, is an important element for effective practice.
  • Routine evaluation should be incorporated into stakeholder engagement processes and disseminated to further the goals of stakeholder-informed research.
  • The results of this study may be helpful for establishing and refining stakeholder engagement practices for effectiveness research projects in complex organization settings.
Acknowledgments
The authors gratefully acknowledge the contributions of the Center for Comparative Effectiveness Research in Cancer Genomics External Stakeholder Advisory Group who participated in the evaluation and helped inform significant portions of the manuscript.
Footnotes
Ethical conduct of research
The authors state that they have obtained appropriate institutional review board approval or have followed the principles outlined in the Declaration of Helsinki for all human or animal experimental investigations. In addition, for investigations involving human subjects, informed consent has been obtained from the participants involved. Stakeholder involvement in Center for Comparative Effectiveness Research in Cancer Genomics was previously approved by the Fred Hutchinson Cancer Research Center Institutional Review Board.
Financial & competing interests disclosure
This work was partially funded by the Center for Comparative Effectiveness Research in Cancer Genomics through the American Recovery and Reinvestment Act of 2009 by the National Cancer Institute, NIH under Agency Award # 5UC2CA148570-02, and by National Cancer Institute, Division of Cancer Prevention Award #CA37429. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
No writing assistance was utilized in the production of this manuscript.
Papers of special note have been highlighted as:
▪ of interest
▪▪ of considerable interest
1. Sox HC, Greenfield S. Comparative effectiveness research: a report from the Institute of Medicine. Ann Intern Med. 2009;151(3):203–205. [PubMed]
2. Hoffman A, Montgomery R, Aubry W, Tunis SR. How best to engage patients, doctors, and other stakeholders in designing comparative effectiveness studies. Health Aff. 2010;29(10):1834–1841. [PubMed]
3. Walls J, Rowe G, Frewer L. Stakeholder engagement in food risk management. Pub Understand Sci. 2011;20(2):241–260.
4. Bogart LM, Uyeda K. Community-based participatory research: partnering with communities for effective and sustainable behavioral health interventions. Health Psychol. 2009;28(4):391–393. [PMC free article] [PubMed]
5. Boote J, Telford R, Cooper C. Consumer involvement in health research: a review and research agenda. Health Policy. 2002;61(2):213–236. [PubMed]
6. Shalowitz MU, Isacco A, Barquin N, et al. Community-based participatory research: a review of the literature with strategies for community engagement. J Dev Behav Pediatr. 2009;30(4):350–361. [PubMed]
7▪. Telford R, Boote JD, Cooper CL. What does it mean to involve consumers successfully in NHS research? A consensus study. Health Expect. 2004;7(3):209–220. Presents principles of successful consumer involvement in research for the purpose of guiding good practice. [PubMed]
8. Pickard AS, Lee TA, Solem CT, Joo MJ, Schumock GT, Krishnan JA. Prioritizing comparative effectiveness research topics via stakeholder involvement: an application in COPD. Clin Pharmacol Ther. 2011;90(6):888–892. [PubMed]
9▪▪. Abelson J, Forest PG, Eyles J, Smith P, Martin E, Gauvin F-P. Deliberations about deliberative methods: issues in the design and evaluation of public participation processes. Soc Sci Med. 2003;57(2):239–251. Presents general principles to guide the design and evaluation of public involvement in healthcare based on a review of the literature. [PubMed]
10. Burgess J, Chilvers J. Upping the ante: a conceptual framework for designing and evaluating participatory technology assessments. Sci Public Policy. 2006;33(10):713–728.
11. Rowe G, Marsh R, Frewer LJ. Evaluation of a deliberative conference. Sci Technol Human Values. 2004;29(1):88–121.
12. Sibbald S, Gibson J, Singer P, Upshur R, Martin D. Evaluating priority setting success in healthcare: a pilot study. BMC Health Serv Res. 2010;10(1):131. [PMC free article] [PubMed]
13. Beierle TC, Konisky DM. Values, conflict, and trust in participatory environmental planning. J Policy Anal Manage. 2000;19(4):587–602.
14. Ramsey SD, Veenstra D, Tunis SR, Garrison L, Crowley JJ, Baker LH. How comparative effectiveness research can help advance ‘personalized medicine’ in cancer treatment. Health Aff. 2011;30(12):2259–2268. [PMC free article] [PubMed]
15▪▪. Tharani R, Wong W, Carlson J, et al. Prioritization in comparative effectiveness research: the CANCERGEN experience in cancer genomics. Med Care. 2012 (In Press). Describes the stakeholder-informed research prioritization process employed in Center for Comparative Effectiveness Research in Cancer Genomics. [PMC free article] [PubMed]
16. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 3. SAGE Publications, Inc; CA, USA: 2009.
17. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–1288. [PubMed]
18. Giacomini MK, Cook DJ. for the Evidence-Based Medicine Working Group. Users’ guides to the medical literature. JAMA. 2000;284(3):357–362. [PubMed]
19. Patton MQ. Qualitative Research and Evaluation Methods. 3. Sage Publications, Inc; CA, USA: 2002.
20. Gauvin FP, Abelson J, Giacomini M, Eyles J, Lavis JN. ‘It all depends’: conceptualizing public involvement in the context of health technology assessment agencies. Soc Sci Med. 2010;70(10):1518–1526. [PubMed]
21. Chalkidou K, Tunis S, Lopert R, et al. Comparative effectiveness research and evidence-based health policy: experience from four countries. Milbank Q. 2009;87(2):339–367. [PubMed]
22. Renn O. A Model for an analytic-deliberative process in risk management. Environ Sci Technol. 1999;33(18):3049–3055.
23. Barber R, Boote JD, Parry GD, Cooper CL, Yeeles P, Cook S. Can the impact of public involvement on research be evaluated? A mixed methods study. Health Expect. 2011 doi: 10.1111/j.1369-7625.00660.x. (Epub ahead of print) [PubMed] [Cross Ref]
24. Brett J, Staniszewska S, Mockford C, Seers K, Herron-Marx S, Bayliss H. The PIRICOM Study: a Systematic Review of the Conceptualisation, Measurement, Impact and Outcomes of Patients and Public Involvement in Health and Social Care Research. Clinical Research Collaboration; London, UK: 2010.
25. Rowe G, Frewer LJ. Public participation methods: a framework for evaluation. Sci Technol Human Values. 2000;25(1):3–29.
26. Webler T. ‘Right’ discourse in citizen participation: an evaluative yardstick. In: Renn O, Webler T, Wiedemann PM, editors. Fairness and Competence in Citizen Participation: Evaluating Models for Environmental Discourse. Kluwer Academic Publishers; MA, USA: 1995.
27▪▪. Deverka PA, Lavallee DM, Desai PJ, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Compar Effect Res. 2012;1(2):181–194. Presents discussion of terminology and rationale relating to ‘stakeholder’ and ‘stakeholder engagement’ within the context of comparative effectiveness research. [PMC free article] [PubMed]
Websites
101. O’Haire C, McPheeters M, Nakamoto E, et al. Engaging stakeholders to identify and prioritize future research needs. Methods Future Research Needs Report No 4. 2011 www.effectivehealthcare.ahrq.gov/ehc/products/200/698/MFRNGuide04--Engaging_Stakeholders--6-10-2011.pdf. [PubMed]
102. SurveyMonkey. SurveyMonkey: Free online survey software and survey tool. 2011 www.surveymonkey.com/
103. QSR International. NVivo 9 research software and analysis insight. 2011 www.qsrinternational.com/products_nvivo.aspx.