Health Expect. 2017 August; 20(4): 626–637.
Published online 2016 October 5. doi: 10.1111/hex.12493
PMCID: PMC5513001

CJCheck Stage 1: development and testing of a checklist for reporting community juries – Delphi process and analysis of studies published in 1996–2015

Rae Thomas, BEd, GradDip.CounsPsych, PhD (corresponding author),1 Rebecca Sims, BSc(Psych), BPsyScHons,1 Chris Degeling, BVSc, PhD,2 Jackie M. Street, BSc, GradDip.PrimaryHealthCare, PhD,3 Stacy M. Carter, BAppSci, MPH(Hons), PhD,2 Lucie Rychetnik, BSc(Hons), PGDip(Dietetics), MPH, PhD,4, 5 Jennifer A. Whitty, BPharm(Hons), GradDip.ClinPharm, PhD,6 Andrew Wilson, BMedSci, MBBS, PhD,7 Paul Ward, PhD,8 and Paul Glasziou, FRACGP, PhD1

Abstract

Background

Opportunities for community members to actively participate in policy development are increasing. Community/citizens’ juries (CJs) are deliberative democratic processes designed to elicit informed community perspectives on difficult topics, but how comprehensively these processes are reported in the peer‐reviewed literature is unknown. Adequate reporting of methodology enables others to judge process quality, compare outcomes, facilitate critical reflection and potentially repeat a process. We aimed to identify important elements for reporting CJs, to develop an initial checklist and to review published health and health policy CJs to examine reporting standards.

Design

Using the literature and expertise from CJ researchers and policy advisors, a list of important CJ reporting items was suggested and further refined. We then reviewed published CJs within the health literature and used the checklist to assess the comprehensiveness of reporting.

Results

CJCheck was developed to assess the reporting of CJ planning, juror information, procedures and scheduling. We screened 1711 studies and extracted data from 38. No study fully reported all checklist items. The item most consistently reported was the number of jurors (92%, 35/38), while the least reported was the availability of expert presentations (5%, 2/38). Recruitment strategies were described in 66% of studies (25/38); however, the frequency and timing of deliberations were inadequately described (29%, 11/38).

Conclusions

Currently CJ publications in health and health policy literature are inadequately reported, hampering their use in policy making. We propose broadening the CJCheck by creating a reporting standards template in collaboration with international CJ researchers, policy advisors and consumer representatives to ensure standardized, systematic and transparent reporting.

Keywords: checklist, citizen jury, CJCheck, community jury, reporting standards

1. Background

It is incumbent on researchers to provide adequate descriptions of their research methodology and methods when reporting their findings in the peer‐reviewed literature. Where research methods are well described, we better understand the process by which the findings have been generated and have more confidence in appraising the credibility of these outcomes. Reporting standards also create an incentive for researchers to be meticulous in designing studies. Reporting guidelines have been developed for a range of methods and disciplines for these reasons (e.g. CONSORT, PRISMA, STARD,1 CHEERS2) and some journals now require these checklists to be completed prior to publication. More recently, templates have been developed to provide guidance for describing the interventions tested in research (TIDieR3). When used, these checklists allow comparisons between studies, enhance transparency, support trust in the process and provide robust foundations for future research.

In Western liberal democracies, there has been an increased openness in government public policy processes to include public voices in policy debate and formulation.4, 5, 6 Public engagement processes are gaining popularity, but their methods vary.4, 7 Three broad methods of public engagement include communicative (e.g. public meetings, website information), consultative (e.g. opinion polls, focus groups) and participatory (e.g. deliberative polls, citizens’ juries).8 This article is focused on a group of participatory, deliberative democratic processes with methodologies similar to those described by the Jefferson Center.9 Some have been called citizens’ panels, citizens’ fora and community/citizens’ juries. We refer to these, collectively, as community juries (CJs).

As a participatory form of public engagement, CJs aim to elicit an informed community perspective on controversial or difficult topics where the values and preferences of community members are sought. For this reason, CJ participants are recruited from the general population and deliberate on questions that require an ethically sensitive or value‐based decision. Common CJ elements include extensive provision of information via expert presentations, opportunities for questioning, substantial time for deliberation and formulation of a consensus or majority “verdict” by CJ participants.9

CJs have been used to explore community perspectives on several health areas including screening,10, 11, 12 resource priority setting,13, 14 informed consent processes for screening decisions15 and pandemic planning.16 These particular examples were conducted for research purposes and none formally linked CJ outcomes to policy decisions. But in Australia, the extent to which CJs are embedded in policy processes is changing. Recently, CJs have been used by local and provincial governments to garner public input to policy making.17, 18, 19 However, despite the increased use in policy decision making, some hesitancy exists around the trustworthiness of CJ processes.20 If the process methods of CJs were reported comprehensively, consistently and transparently, their use in shaping health policy may increase.

Over a decade has passed since Abelson et al.4 suggested design and evaluation principles for public deliberations. Four key components were identified: representation of citizenry, documentation of deliberative procedures, descriptions of information provided to community members and reporting of outcomes. However, inconsistencies in reporting of these components still occur. A recent systematic review explored the adaptations in CJ methodologies and reported wide variations in how CJs were conducted and reported.21 Also, in a review of public deliberation in health policy, Abelson et al.5 continued to find inconsistencies and ambiguity in reporting method descriptions. Different CJs could legitimately have different outcomes even if they followed the same methodology (random recruitment, same expert information, etc.), but poor and/or inconsistent reporting means it is difficult to compare CJ findings or consider potential reasons for differing results, and it is not yet empirically clear how methodological variation may alter outcomes.

We cannot assess quality until we are clear about methodology. Implementers of CJs know little about what might enhance or detract from the legitimacy of claims about the representation of different types of public; quality of the processes of information provision and public deliberation; or the authenticity of the outcomes. We sought to determine the adequacy of process reporting in CJs published in the peer‐reviewed health and health policy literature. We used two processes. First, we conducted a Delphi survey with a group of published CJ researchers to generate an agreed checklist of CJ characteristics and research methods that are important to report in publications. Second, using this checklist, we then reviewed published CJs to establish the utility of the checklist and determine the most frequently reported and missing descriptions of the CJ methods.

2. Methodology

2.1. Stage 1: initial CJCheck development

Fourteen published Australian CJ researchers and selected government policy advisors attended a CJ Research Symposium at Bond University, Australia, in July 2015. Participants were invited if they were active Australian CJ researchers (13 invited, 10 attended); active policy advisors/government workers (four invited, all attended or were represented); and/or consumer representatives (two invited, neither able to attend). Symposium goals were broad, with an aim to discuss several key research and policy questions. Outcomes included the identification of several research agendas and collaborative links. One research question arising from the symposium was to identify potentially important CJ characteristics and processes and determine whether these were reported in published CJ studies. Methodological elements of CJs were brainstormed and iteratively developed by all symposium participants in small‐group, round‐robin exercises. A smaller self‐selected working party (from the larger symposium group) participated in a two‐stage Delphi method22 to refine these CJ elements.

The first Delphi round was conducted in September 2015. The survey consisted of the 17 reportable methods suggested by the larger CJ symposium group. These criteria were divided into four groups (planning the CJ, characteristics of the jurors, CJ procedures and CJ schedule). Respondents were asked to rate each criterion on a five‐point Likert scale (1=“important” to 5=“unimportant”). Changes to item wording and new items could be suggested for ranking in the second round. Items that were scored by six or more respondents as “important” or “somewhat important” were retained. Items that had fewer than five respondents scoring in these top two categories were re‐ranked in the second round.
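To make the retention rule concrete, the short Python sketch below applies it to hypothetical ratings (the function name, example items and scores are illustrative assumptions, not the actual survey data). Note that the rule as stated leaves an item with exactly five top‐two ratings unaddressed; the sketch sends such items to the second round.

    # Minimal sketch of the round-one Delphi retention rule (illustrative only).
    # Scale: 1 = "important" ... 5 = "unimportant".
    from typing import Dict, List

    def classify_items(ratings: Dict[str, List[int]]):
        """Return (retained, re_rank) item lists under the round-one rule."""
        retained, re_rank = [], []
        for item, scores in ratings.items():
            # Respondents rating the item "important" or "somewhat important"
            top_two = sum(1 for s in scores if s <= 2)
            if top_two >= 6:
                retained.append(item)
            else:
                # The text specifies re-ranking for fewer than five top-two
                # ratings; exactly five is treated the same way here.
                re_rank.append(item)
        return retained, re_rank

    # Hypothetical example: eight respondents rating two of the 17 criteria
    retained, re_rank = classify_items({
        "number of jurors": [1, 1, 2, 1, 2, 1, 1, 2],   # 8 top-two ratings
        "venue description": [3, 4, 2, 5, 3, 2, 4, 3],  # 2 top-two ratings
    })
    print(retained)  # ['number of jurors']
    print(re_rank)   # ['venue description']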

The second round was conducted in October 2015. Items were again ranked on a five‐point Likert scale. As in the first round, an optional free‐response question invited further suggestions. On this occasion, respondents were also asked whether each item was important but not necessary to report.

The final checklist (CJCheck) was used in data extraction. Seven items were rated as either “yes” or “no,” while the remainder were rated as “yes,” “no” or “unclear.” Specific checklist criteria are reported in Table S1.

2.2. Stage 2: selection of CJ studies and data extraction

We used two strategies to search for included studies. First, included articles were drawn from two recent systematic reviews pertaining to deliberative democracy.21, 23 These systematic reviews included a broad range of deliberative democratic techniques; therefore, we only included articles from these reviews that nominated the deliberative democratic technique most resembling CJs (e.g. random or representative selection of jurors, provision of balanced information to jurors, sufficient time allocated to deliberation).8 Included studies from these two reviews described the deliberative process as “citizens’ panels,” “community or citizens’ juries,” “citizens’ council” or “citizens’ fora.”

In addition to including relevant studies from the two previous reviews, we replicated the search strategy of Degeling et al.23 and modified the search dates to encompass 2013 to 2015, capturing studies published since the initial search. We conducted the new search in Medline, Web of Science, Current Contents Connect and Scopus in September 2015. By combining the two search strategies, we therefore covered published CJ studies in the health and health policy literature from 1996 to September 2015.

Remaining consistent with others,9, 21, 23 we included studies that described the deliberative process as having characteristics similar to CJs. Our inclusion criteria required studies to describe any or some characteristics resembling those outlined by the Jefferson Center,9 regardless of whether they were reported in detail. For example, studies were required to define themselves as conducting a citizens’ jury, community jury, citizens’ panel or forum; to provide information to participants from different “experts”; and to have a deliberative component.9 Protocols were excluded. All studies were screened independently against eligibility criteria by two authors (R.T. and R.S.). Conflicts were checked periodically and resolved through discussion. Criteria definitions were clarified where necessary.

2.3. Data analysis

Data extraction was conducted by the same two authors using the checklist developed in the Delphi rounds. Data were analysed descriptively using Excel.
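As an illustration of this descriptive analysis, the sketch below reproduces the two summaries reported in Section 3 in Python rather than Excel (a minimal sketch; the extraction grid shown is hypothetical, not our data): the per‐item reporting frequency (cf. Figure 2) and each study’s proportion of checklist items reported.

    # Python equivalent of the descriptive analysis (the authors used Excel;
    # the extraction grid below is hypothetical). One row per study, one
    # "yes"/"no"/"unclear" rating per checklist item.
    import statistics

    extraction = {
        "Study A": {"juror numbers": "yes", "expert slides available": "no"},
        "Study B": {"juror numbers": "yes", "expert slides available": "no"},
        "Study C": {"juror numbers": "no",  "expert slides available": "yes"},
    }

    items = sorted({item for ratings in extraction.values() for item in ratings})
    n_studies = len(extraction)

    # Per-item reporting frequency (cf. Figure 2)
    for item in items:
        n = sum(1 for ratings in extraction.values() if ratings.get(item) == "yes")
        print(f"{item}: {n}/{n_studies} studies ({n / n_studies:.0%})")

    # Per-study completeness: proportion of checklist items reported
    completeness = [
        sum(1 for v in ratings.values() if v == "yes") / len(items)
        for ratings in extraction.values()
    ]
    print(f"mean = {statistics.mean(completeness):.0%}, "
          f"SD = {statistics.stdev(completeness):.1%}")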

3. Results

3.1. Stage 1: initial CJCheck development

In the first Delphi round, 13 CJ methodology reporting criteria were identified as either important or somewhat important and retained. In the second Delphi round, nine items were presented: four from round one that required further ranking and five new items. Following the second round, a further three items were rated as important for the reporting of CJs and were retained; two were combined with other retained items, one was reworded and retained and three were excluded. The Delphi round outcomes are reported in Table 1. No items were consistently ranked as “somewhat unimportant” or “unimportant.”

Table 1

Ranking outcomes of Delphi rounds 1 and 2

Six checklist items were rated as more important than others. To be designated as important, an item had to be rated either important or somewhat important by a clear majority of respondents (8/8 respondents in the first round and 8/9 respondents in the second round).

The final checklist contained 17 items grouped into four areas: planning of the CJ; juror information and characteristics; procedural information; and the scheduling of CJs.

3.2. Stage 2: adequacy of CJ method reporting

From the two systematic reviews,21, 23 45 studies were excluded because they were not described as a CJ and 22 studies were reported in both reviews, leaving a total of 46 studies for full‐text review. In addition to these, our focused literature search modified from Degeling et al.23 found 1598 de‐duplicated articles for further analysis, of which only 18 met the inclusion criteria and were included in full‐text review and data extraction. Of the resulting 64 full‐text articles, we excluded 26 from data extraction, leaving 38 studies in our final analyses (see Fig. 1). Included studies and their characteristics are listed in Table 2, with data extraction details of included studies available in Table S2 and reasons for study exclusion from full‐text review provided in Table S3.
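The study flow arithmetic can be verified directly (counts as reported above and in Fig. 1):

    # Study flow counts, as reported above and in Fig. 1
    from_reviews = 46        # full-text articles drawn from the two reviews
    from_new_search = 18     # articles from the updated search meeting criteria
    excluded_full_text = 26  # full-text articles excluded from data extraction
    assert from_reviews + from_new_search - excluded_full_text == 38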

Figure 1

PRISMA flow chart

Table 2

Studies included in data extraction

Studies varied widely in their reporting of the methodological processes necessary for reliable CJ reproduction. No study reported all 17 checklist items. On average, only 53% of checklist items (nine items from a possible 17) were reported in any given article (SD=19.5%) and this proportion ranged from 12% (2/17 items13) to 88% (15/17 items24). Overall, the checklist item reported most consistently was “Was the number of jurors reported” (92%, 35/38 studies). In contrast, the least reported item was “Are the expert presentations available,” which was only reported in 5% (2/38 studies) of articles. Figure 2 shows the proportion of articles reporting checklist items.

Figure 2

The number of included studies reporting each checklist item. *Indicates items identified by Delphi respondents as important

3.3. Planning the CJ

When reporting about planning a CJ, 21% (8/38) of studies described whether they had a stakeholder group or committee and if so, the role of this group in CJ planning. Less than half (45%, 17/38) stated how and why the experts were chosen; however, more (58%, 22/38) defined the roles of the experts and most (89%, 34/38) gave a clear description of the jury “charge.”

3.4. Information about the jurors

The majority of studies reported some type of juror information. The recruitment strategy was described in 66% (25/38) of studies, and inclusion and exclusion criteria were described in 58% (22/38). Whether jurors were affected members of the public, unaffected members of the public, advocates or invited via random sampling was reported in 79% (30/38) of studies. However, just over half (58%, 22/38) reported the demographic characteristics of the jurors. The most consistently reported juror detail was the number of jurors (92%, 35/38).

3.5. Procedural information

There was wide variation in what procedural information was provided in the published studies. The role and experience of the facilitator was described in less than half (45%, 17/38) of the examined studies. Descriptions of, and access to, the materials provided to the jurors were largely absent, reported in only 26% (10/38) of studies. The expert cross‐examination opportunity provided to the jurors was described in 68% (26/38). The outcome of the juries was the most consistently reported criterion in this section, with 84% (32/38) reporting this outcome. However, only 47% (18/38) described how the jurors were instructed to deliberate.

3.6. CJ scheduling information

Overall, CJ scheduling information was poorly reported. The frequency and timing of deliberations was reported in only 29% (11/38) of examined studies. The number of presenters and their topics was reported less than half the time (45%, 17/38) and the availability of expert presentations was reported in only 5% of studies (2/38).

4. Discussion

Overall, our findings indicate considerable opportunity for improvement in the reporting of CJs. No published CJs we assessed fully described CJCheck items identified by CJ researchers as important to report. Less than half of the studies in our analyses reported the role of the stakeholder committee; how and why experts were selected; the role of the facilitator; a description of the materials provided to the jurors; how the deliberations were framed; the scheduling of events; the number of presenters and their topics; and whether the expert presentations were available. The most reported checklist items were the number of jurors, a description of the jury charge and the jury outcome. Although these are essential reporting items, they are not sufficient.

Although in this review we have focused on peer‐reviewed publications, we believe our suggested checklist items apply to all reported CJs. CJ organizers need to be sufficiently transparent and rigorous in reporting their methods for several reasons: leaving open the possibility of attempting to approximate and repeat a CJ elsewhere; enabling judgements about the quality of the CJ process; allowing comparison of juries; and facilitating critical reflection on CJ design and its potential effect on CJ outcomes. We cannot do this if authors inconsistently report methodology and methods.

It is not our contention that, if a particular CJ were repeated, the “verdict” and reasoning would necessarily be the same. It is feasible that they might not be, given different contexts and participants and therefore different values and preferences. But it is also feasible that repeating a CJ, with different participants but the same content and processes, may produce similar outcomes. Because CJ methods are not currently consistently reported, we cannot assess these questions. Perhaps more importantly, comprehensive reporting of CJs supports transparency and would permit improved evaluation of CJ methodologies. How representative, procedurally fair and accountable CJs are in the health sector has not been fully evaluated.4, 21 This hampers the utility of CJs to contribute to important health policy debates.

Our study has numerous strengths and some limitations. We conducted a focused review of published CJ studies within the health sector and mapped descriptions of their methods to criteria developed by CJ researchers. The criteria checklist had two rounds of iterations by published CJ authors to ensure that important methodology and processes were captured. Studies were then independently rated using these criteria for completeness of reporting. However, the criteria may be somewhat limited as they were developed by a group of Australian CJ researchers and policy advisors and may not reflect the wider view of other CJ organizers. We consider this an important future development. Additionally, we rated the adequacy of reporting on the basis of the written publication and did not contact the authors for further information. This was a deliberate choice, as our purpose was to assess the adequacy of reporting of published CJs.

Other CJ research has reported the variations in methodology.5, 21, 23 This was the first study to quantify those variations by mapping researcher‐identified methodological reporting criteria. Our process was similar to the initial development of the TIDieR checklist,25 the uptake of which seems likely to strengthen the utility of intervention research. We intend to expand our consortium of CJ researchers, policy advisors and consumer representatives to develop an internationally agreed CJ reporting template. We expect that the legitimization of CJs’ input into public health policies will be assisted by the use of standards for reporting of methodology and methods.

5. Conclusion

To fully report processes within a CJ, it is important that the planning of the event, information about the jurors and their recruitment, and procedural and technical information be available and clearly documented. Our findings suggest that many current CJ publications in health and health policy literature are inadequately reported. Therefore, methods are not transparent and readers cannot compare CJ processes and outcomes. This may not be solely a result of inadequate reporting, but also due to the absence of specific reporting standards for CJs.

Street et al.21 suggested that strict adherence to CJ methodology as described by the originators might limit the generation of new knowledge. We agree. We advocate that CJ organizers and authors be committed to rigour and openness, but also innovation. It is through testing and adapting methodologies that new ideas are developed and understanding expands. As researchers who utilize CJ methodology, it is not our intention to restrict methodological development, but to enhance understanding. In this study, we present an empirically developed and trialled, proposed checklist for reporting CJ methodology and methods. We advocate broadening CJCheck by creating a reporting standards template developed by a consortium of international CJ researchers, health policy advisors and consumer representatives to use when designing and reporting CJs.

Conflict of Interest

The authors report no conflicts of interest.

Funding Support

RT was supported by a NHMRC Screening and Test Evaluation Program (STEP) Grant (#633033). RS was supported by a Bond University Vice Chancellor's Research Grant Scheme. CD, SMC and LR received funding support from NHMRC Project Grant (#1023197). CD received funding support from a NHMRC Project Grant (#1083079). SMC is funded through NHMRC Career Development Fellowship (#1032963). JMS was funded by an Australian National Preventive Health Agency Fellowship (20STR2013F) and an NHMRC Capacity Building Grant (565501).

References

1. Simera I, Altman DG. Writing a research article that is “fit for purpose”: EQUATOR Network and reporting guidelines. Evid Based Med. 2009;14:132–134. [PubMed]
2. Husereau D, Drummond M, Carswell C, et al. Consolidated health economic evaluation reporting standards (CHEERS) statement. BMJ. 2013;346:f1049. [PubMed]
3. Hoffmann TC, Glasziou P, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. [PubMed]
4. Abelson J, Forest PG, Eyles J, Smith P, Martin E, Gauvin FP. Deliberations about deliberative methods: issues in the design and evaluation of public participation processes. Soc Sci Med. 2003;57:239–251. [PubMed]
5. Abelson J, Blacksher EA, Li KK, Boesveld SE, Goold SD. Public deliberation in health policy and bioethics: mapping an emerging, interdisciplinary field. J Public Deliberation. 2013;9: Article 5. Available at: http://www.publicdeliberation.net/cgi/viewcontent.cgi?article=1256&context=jpd.
6. Barnes M, Newman J, Sullivan HC. Power, Participation and Political Renewal: Case Studies in Public Participation. Bristol: The Policy Press; 2007.
7. Whitty JA. An international survey of the public engagement practices of health technology assessment organisations. Value Health. 2013;16:155–163. [PubMed]
8. Mitton C, Smith N, Peacock S, Evoy B, Abelson J. Public participation in health care priority setting: a scoping review. Health Policy. 2009;91:219–228. [PubMed]
9. The Jefferson Center. Citizens Jury Handbook. Washington, DC: 2004. Available at: http://www.epfound.ge/files/citizens_jury_handbook.pdf. Accessed August 22, 2016.
10. Thomas R, Glasziou P, Rychetnik L, Mackenzie G, Gardiner R, Doust J. Deliberative democracy and cancer screening consent: a randomized control trial of the effect of a community jury on men's knowledge about and intentions to participate in PSA screening. BMJ Open. 2014;4:e005691. [PMC free article] [PubMed]
11. Paul C, Nicholls R, Priest P, McGee R. Making policy decisions about population screening for breast cancer: the role of citizens’ deliberation. Health Policy. 2008;85:314–320. [PubMed]
12. Mosconi P, Castellani C, Villani W, Satolli R. Cystic fibrosis: to screen or not to screen? Involving a citizens’ jury in decisions on screening carrier. Health Expect. 2014;18:1956–1967. [PubMed]
13. Gooberman‐Hill R, Horwood J, Calnan M. Citizens’ juries in planning research priorities: process, engagement and outcome. Health Expect. 2008;11:272–281. [PubMed]
14. Street J, Callaghan P, Braunack‐Mayer AJ, Hiller JE. Citizens’ perspectives on disinvestment from publicly funded pathology tests: a deliberative forum. Value Health. 2015;18:1050–1056. [PubMed]
15. Degeling C, Rychetnik L, Pickles K, et al. “What should happen before asymptomatic men decide whether or not to have a PSA test?” A report on three community juries. Med J Aust. 2015;8:335.e1–335.e6. [PubMed]
16. Braunack‐Mayer AJ, Street JM, Rogers WA, Givney R, Moss JR, Hiller JE, Flu Views team. Including the public in pandemic planning: a deliberative approach. BMC Public Health. 2010;10:501–510. [PubMed]
17. Clear Horizon Consulting. Evaluation – community engagement process for the 10 year financial plan. Clear Horizon ref no CH14_154. 2015. http://participate.melbourne.vic.gov.au/application/files/3514/4477/8217/Evaluation_of_community_engagement_for_the_10_Year_Financial_Plan.pdf. Accessed March 4, 2016.
18. Department of Premier and Cabinet, South Australian Government. Your Say website. http://yoursay.sa.gov.au/initiatives/citizens-jury. Accessed March 4, 2016.
19. Hawkes N. Women, “jurors” are asked how to present risk‐benefit ratio of breast cancer screening. BMJ. 2012;345:e7886. [PubMed]
20. Boswell J, Settle C, Dugdale A. Who speaks, and in what voice? The challenge of engaging ‘the public’ in health policy decision‐making. Public Manage Rev. 2015;17:1358–1374.
21. Street J, Duszynski K, Krawczyk S, Braunack‐Mayer A. The use of citizens’ juries in health policy decision‐making: a systematic review. Soc Sci Med. 2014;109:1–9. [PubMed]
22. Okoli C, Pawlowski SD. The Delphi method as a research tool: an example, design considerations and applications. Inf Manag. 2004;42:15–29.
23. Degeling C, Carter SM, Rychetnik L. Which public and why deliberate? A scoping review of public deliberation in public health and health policy research. Soc Sci Med. 2015;131:114–121. [PubMed]
24. Burgess M, O'Doherty K, Secko D. Biobanking in British Columbia: discussions of the future of personalized medicine through deliberative public engagement. Pers Med. 2008;5:285–296.
25. Hoffmann TC, Erueti C, Glasziou P. Poor description of non‐pharmacological interventions: analysis of consecutive sample of randomised trials. BMJ. 2013;347:f3755. [PubMed]
26. Bennett P, Smith SJ. Genetics, insurance and participation: how a citizens’ jury reached its verdict. Soc Sci Med. 2007;64:2487–2498. [PubMed]
27. Bombard Y, Abelson J, Simeonov D, Gauvin FP. Eliciting ethical and social values in health technology assessment: a participatory approach. Soc Sci Med. 2011;73:135–144. [PubMed]
28. Bombard Y, Abelson J, Simeonov D, Gauvin FP. Citizens’ perspectives on personalised medicine: a qualitative public deliberation study. Eur J Hum Genet. 2013;21:1197–1201. [PubMed]
29. Carman KL, Mallery C, Maurer M, et al. Effectiveness of public deliberation methods for gathering input on issues in healthcare: results from a randomised trial. Soc Sci Med. 2015;133:11–20. [PubMed]
30. Chafe R, Merali F, Laupacis A, Levinson W, Martin D. Does the public think it is reasonable to wait for more evidence before funding innovative health technologies? The case of PET scanning in Ontario. Int J Technol Assess Health Care. 2010;26:192–197. [PubMed]
31. Chafe R, Laupacis A, Levinson W. Accepting new patients: What does the public think about Ontario's policy? Can Fam Physician. 2011;57:e68–e73. [PubMed]
32. Dunkerley D, Glasner P. Empowering the public? Citizens’ juries and the new genetic technologies. Crit Public Health. 1998;8:181–192.
33. Einsiedel EF, Ross H. Animal spare parts? A Canadian public consultation on xenotransplantation. Sci Eng Ethics. 2002;8:579–591. [PubMed]
34. Einsiedel EF. Assessing a controversial medical technology: Canadian public consultations of xenotransplantation. Public Underst Sci. 2002;11:315–331.
35. Elwood P, Longley M. My health: whose responsibility? A jury decides. J Epidemiol Community Health. 2010;64:761–764. [PubMed]
36. Finney C. Implementing a citizen‐based deliberative process on the internet: the Buckinghamshire health authority electronic citizens’ jury in the UK. Sci Pub Policy. 2000;27:45–64.
37. Fish RD, Winter M, Oliver DM, Chadwick DR, Hodgson CJ, Heathwaite L. Employing the citizens’ jury technique to elicit reasoned public judgements about environmental risk: insights from an inquiry into the governance of microbial water pollution. J Environ Plan Manag. 2012;57:233–253.
38. Herbison P, Hay‐Smith J, Paterson H, Ellis G, Wilson D. Research priorities in urinary incontinence: results from citizens’ juries. BJOG. 2009;116:713–718. [PubMed]
39. Hodgetts K, Hiller JE, Street JM, et al. Disinvestment policy and the public funding of assisted reproductive technologies: outcomes of deliberative engagements with three key stakeholder groups. BMC Health Serv Res. 2014;14:204–217. [PubMed]
40. Iredale R, Longley M. Public involvement in policy‐making: the case of a citizens’ jury on genetic testing for common disorders. J Consum Stud Home Econ. 1999;23:3–10.
41. Iredale R, Longley M, Thomas C, Shaw A. What choice should we be able to make about designer babies? A citizens’ jury of young people in South Wales. Health Exp. 2006;9:207–217. [PMC free article] [PubMed]
42. Kashefi E, Mort M. Grounded citizens’ juries: a tool for health activism. Health Exp. 2004;7:290–302. [PMC free article] [PubMed]
43. Lee YH, Jin DY. An analysis of citizens’ jury on the Korean national pandemic response system. Javnost Pub. 2014;21:23–38.
44. Lenaghan J, New B, Mitchell E. Setting priorities: is there a role for citizens’ juries? BMJ. 1996;312:1591–1593. [PubMed]
45. Longstaff H, Burgess MM. Recruiting for representation in public deliberation on the ethics of biobanks. Public Underst Sci. 2010;19:212–224. [PubMed]
46. McWhirter RE, Critchley CR, Nicol D, et al. Community engagement for big epidemiology: deliberative democracy as a tool. J Pers Med. 2014;4:459–474. [PubMed]
47. Menon D, Stafinski T. Engaging the public in priority‐setting for health technology assessment: findings from a citizens’ jury. Health Exp. 2008;11:282–293. [PMC free article] [PubMed]
48. Molster C, Maxwell S, Youngs L, et al. Blueprint for a deliberative public forum on biobanking policy: were theoretical principles achievable in practice? Health Exp. 2011;16:211–224. [PMC free article] [PubMed]
49. Moretto N, Kendall E, Whitty J, et al. Yes, the government should tax soft drinks: findings from a citizens’ jury in Australia. Int J Environ Res Public Health. 2014;11:2456–2471. [PubMed]
50. Nep S, O'Doherty K. Understanding public calls for labeling of genetically modified foods: analysis of a public deliberation on genetically modified salmon. Soc Nat Resour. 2013;26:506–521.
51. O'Doherty KC, Burgess MM. Engaging the public on biobanks: outcomes of the BC biobank deliberation. Public Health Genom. 2009;12:203–215. [PubMed]
52. Parkin L, Paul C. Public good, personal privacy: a citizens’ deliberation about using medical information for pharmacoepidemiological research. J Epidemiol Community Health. 2011;65:150–156. [PubMed]
53. Rogers WA, Street JM, Braunack‐Mayer AJ, Hiller JE. Pandemic influenza communication: views from a deliberative forum. Health Exp. 2009;12:331–342. [PMC free article] [PubMed]
54. Rychetnik L, Doust J, Thomas R, Gardiner R, MacKenzie G, Glasziou P. A community jury on PSA screening: what do well‐informed men want the government to do about prostate cancer screening – a qualitative analysis. BMJ Open. 2014;4:e004682. [PMC free article] [PubMed]
55. Secko DM, Preto N, Niemeyer S, Burgess MM. Informed consent in biobank research: a deliberative approach to the debate. Soc Sci Med. 2009;68:781–789. [PubMed]
56. Stafinski T, Devidas M, Yasui Y. Assessing the impact of deliberative processes on the views of participants: is it ‘in one ear and out the other’? Health Expect. 2012;17:278–290. [PubMed]
57. Timotijevic L, Raats MM. Evaluation of two methods of deliberative participation of older people in food‐policy development. Health Policy. 2007;82:302–319. [PubMed]
58. Toni A, von Braun J. Poor citizens decide on the introduction of GMOs in Brazil. Biotechnol Dev Monit. 2001;47:7–9.
