1. What are the key steps in producing a valid and reliable systematic review using a realist or meta-narrative approach?
2. How might 'high' and 'low' quality in such reviews be defined and assessed [a] at the grant application stage; [b] during the review; [c] at publication stage and [d] by end-users of such reviews?
3. What are the key learning outcomes for a student of realist or meta-narrative review, and how might performance against these outcomes be assessed?
The study combines a literature review, an iterative online Delphi panel and real-time engagement with new, ongoing reviews (Figure 1).
1. To collate and summarise the literature on the principles of good practice in realist and meta-narrative reviews, highlighting in particular how and why these differ from conventional forms of systematic review and from each other.
2. To consider the extent to which these principles have been followed by published and in-progress reviews, thereby identifying how rigour may be lost and how existing principles could be improved.
3. Using an online Delphi method with an interdisciplinary panel of experts from academia and policy, to produce, in draft form, an explicit and accessible set of methodological guidance and publication standards.
4. To produce training materials with learning outcomes linked to these steps and standards.
5. To pilot these standards and training materials prospectively on real reviews-in-progress, capturing methodological and other challenges as they arise.
6. To synthesise expert input, evidence review and real-time problem analysis into more definitive guidance and standards.
7. To disseminate the guidance and standards to audiences in academia and policy.
Objectives (1) and (2) will be achieved via a narrative review of the literature, supplemented by feedback collated from presentations and workshops. These will feed into (3), which will be achieved via an online Delphi panel. The panel will include wide representation from researchers, students, policymakers, theorists and research sponsors. For (4), we will draw on our experience in developing and delivering relevant education modules. For (5), we will capture new realist reviews in progress as people approach us for help and guidance, and seek their informed participation in piloting the new materials. (6) and (7) will be addressed by preparing academic publications and online resources and by delivering presentations and workshops.
We aim to generate three main outputs:
1. Quality standards and methodological guidance for realist and meta-narrative reviews, for use by researchers, research sponsors, students and supervisors.
2. A 'RAMESES' statement (comparable to CONSORT or PRISMA) of publication standards for such reviews, published in an open-access academic journal.
3. A training module for researchers, including learning outcomes, outline course materials and assessment criteria.
Management and governance
The development of guidelines and guidance is a complex and contested process [41]. It is crucial to avoid the 'GOBSAT' (good old boys sat around a table) approach and to ensure [a] that those who contribute to the process form a diverse, informed and representative sample from both academia and policymaking and [b] that the process itself is systematic, auditable and justifiable. To that end, we will have a small core research team which will meet regularly to review progress, set the next phase of work and produce minutes. We will report six-monthly to an advisory steering group, to whom we will present a project update and financial report.
In addition, approximately halfway through the study period, we will present our emerging findings formally to a panel of external researchers in order to collate additional feedback, using a technique known as the 'fishbowl'. We will recruit a maximum variety sample of approximately 10 experts in systematic review. The main criterion for inclusion will be academic standing in the critical appraisal and evaluation of qualitative research studies and/or in evidence synthesis, including but not limited to those already familiar with realist or meta-narrative review. We will circulate materials in advance of the fishbowl workshop, including the goals of the project, the methodology, and the provisional standards and guidance. The fishbowl session will comprise a presentation from the research team followed by discussion, facilitated by someone outside the core research team. The session will be recorded and minuted, and its recommendations used to inform revision of the protocol as needed.
The study was deemed exempt from NHS research ethics approval (personal communication S Burke 14.2.11, East London and City Research Ethics Committee).
Details of literature search methods
Our initial exploratory searches have found that the literature in this field is currently small but expanding rapidly, and that it is of broad scope, variable quality and inconsistently indexed. The purpose of identifying published reviews is not to complete a census of realist and meta-narrative studies; rather, our comprehensive search will allow us to pinpoint real examples (or publications claiming to be examples) which provide rich detail on their use of the review activities we wish to scrutinise and formalise. To that end, drawing on a previous study which demonstrated the effectiveness and efficiency of the methods proposed [42] and employing the skills of a specialist librarian, we will use three approaches:
1. Identifying seminal sources known to the research team and other experts in the field (e.g. via relevant networks and email lists).
2. Snowballing both backwards (pursuing the references of references) and forwards (using citation-tracking software to identify subsequent publications citing the index paper) from seminal theoretical/methodological publications and from empirical examples of realist and meta-narrative reviews. For reviews of heterogeneous bodies of evidence, snowball techniques are more effective and efficient than hand searching or using predefined search strings on electronic databases [42] (a schematic sketch of this snowballing logic follows this list).
3. Database searching, especially with a view to identifying grey literature such as PhD theses and unpublished reports (some will represent robust and critical applications of the methods, while others will highlight 'commonly occurring mistakes and misconceptions').
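To make the snowballing logic in point 2 concrete, the sketch below shows one way such a search might be automated. It is a minimal illustration only: `fetch_references`, `fetch_citing_papers` and the depth limit are hypothetical stand-ins for whatever citation-tracking software and stopping rule a review team actually uses, and are not specified by this protocol.

```python
from collections import deque

def snowball(seed_ids, fetch_references, fetch_citing_papers, max_depth=2):
    """Breadth-first snowball search outward from a set of seed publications.

    fetch_references(pid) returns the papers a publication cites (backward
    snowballing); fetch_citing_papers(pid) returns later papers that cite it
    (forward snowballing). Both are hypothetical stand-ins for a real
    citation-tracking service, and max_depth is an illustrative stopping rule.
    """
    found = set(seed_ids)
    queue = deque((pid, 0) for pid in seed_ids)
    while queue:
        pid, depth = queue.popleft()
        if depth >= max_depth:
            continue  # stop expanding beyond the chosen depth
        for neighbour in fetch_references(pid) + fetch_citing_papers(pid):
            if neighbour not in found:
                found.add(neighbour)
                queue.append((neighbour, depth + 1))
    return found
```

Candidate records surfaced by such a pass would, of course, still be screened by hand for relevance and rigour.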
In addition to identifying a broad range of examples of actual reviews, we will capture papers describing methodological and theoretical critiques of the approaches being studied.
We will conduct a thematic analysis of this literature, initially oriented to addressing the seven questions below; we will add further questions and topic areas (to better capture our analysis and understanding of the literature) as these emerge from our reading of the papers:
1. What are the strengths and weaknesses of realist and meta-narrative review from both a theoretical and a practical perspective?
2. How have these approaches actually been used? Are there areas where they appear to be particularly fit (or unfit) for purpose?
3. What, broadly, are the characteristics of high-quality (and low-quality) reviews undertaken by realist or meta-narrative methods? What can we learn from the best (and worst) examples so far?
4. What challenges have reviewers themselves identified (e.g. in the introduction or discussion sections of their papers) in applying these approaches? Are there systematic gaps between the 'theory' and the steps actually taken?
5. What is the link between realist and meta-narrative review and the policymaking process? How have published reviews been commissioned or sponsored? How have policymakers been involved in shaping the review? How have they been involved in disseminating and applying its findings? Are there models of good practice (and of approaches to avoid) for academic-policy linkage in this area?
6. How have front-line staff and service users been involved in realist and meta-narrative reviews? If the answer to this is 'usually, not much', how might they have been involved and are there examples of potentially better practice which might be taken forward?
7. How should one choose between realist, meta-narrative and other theory-driven approaches when selecting a review methodology? How might (for example) the review question, purpose and intended audience(s) influence the choice of review method?
The output of this phase will be a provisional summary organised under the above headings and highlighting for each question the key areas of knowledge, ignorance, ambiguity and uncertainty. This will be distributed to the Delphi panel as the starting-point for their guidance development work.
Details of online Delphi process
We will follow an online adaptation of the Delphi method (see above) which we developed and used in a previous study to produce guidance on how to critically appraise research on illness narratives [38]. In that study, a key component of a successful Delphi process was recruiting a wide range of experts, policymakers, practitioners and potential users of the guidance who could approach the problem from different angles, especially people who would respond to academic suggestions by asking "so what?" questions.
Keeping the academic-policy/practice tension central to this phase of the research, we hope to construct our Delphi panel to include a majority of experienced academics (e.g. those who have published on theory and method in realist and/or meta-narrative review). We also hope to recruit policymakers, research sponsors and representatives of third-sector organisations. These individuals will be recruited by approaching relevant organisations and email lists (e.g. professional networks of systematic reviewers, C.H.A.I.N., INVOLVE), providing an outline of the study and selecting those with the greatest commitment and potential to balance the sample.
We will draw on our own experience of developing standards and guidance, as well as on published papers by CONSORT, PRISMA, AGREE, SQUIRE and other teams working on comparable projects [15].
The Delphi panel will be conducted entirely via the Internet, using a combination of email and online survey tools. It will begin with a 'brainstorm' round ('round 1') in which participants will be invited to submit personal views, exchange theoretical and empirical papers on the topic and suggest items that might be included in the publication standards. This will serve as a warm-up exercise, and panel members will be sent our own preliminary summary (see above). These early contributions, along with our summary, will be collated and summarised into a set of provisional statements, which will be listed in a table and sent to participants for ranking ('round 2'). Participants will be asked to rank each item twice on a 9-point Likert scale (1 = strongly against to 9 = strongly in favour): once for relevance (i.e. should a statement on this theme/topic be included at all in the guidance?) and once for validity (i.e. to what extent do you agree with this statement as currently worded?). Those who agree that a statement is relevant but disagree with its wording will be invited to suggest changes. In this second round, participants will again be invited to suggest additional topic areas and items.
Each participant's responses will be collated and the numerical rankings entered onto an Excel spreadsheet. The median, inter-quartile range and maximum-minimum range for each response will be calculated. Statements that score low on relevance will be omitted from subsequent rounds. Further online discussion will be invited on statements that score high on relevance but low on validity (indicating that a rephrased version of the statement is needed) and on those where there is wide disagreement about relevance or validity. Following discussion, a second list of statements will be drawn up and circulated for ranking ('round 3'). The process of collating responses, further email discussion and re-ranking will be repeated until maximum consensus is reached ('round 4' et seq.). In practice, very few Delphi panels, online or face to face, go beyond three rounds, since participants tend to 'agree to differ' rather than move towards further consensus [38].
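As an illustration of this collation step, the sketch below (in Python with pandas rather than Excel) computes the planned summary statistics for each statement. The column names, example ratings and the relevance cut-off of 4 are our own illustrative assumptions; the protocol itself does not fix a numerical threshold.

```python
import pandas as pd

# One row per (participant, statement): each statement is rated twice on a
# 9-point scale (1 = strongly against ... 9 = strongly in favour).
ratings = pd.DataFrame({
    "statement": ["S1", "S1", "S1", "S2", "S2", "S2"],
    "relevance": [8, 9, 7, 2, 3, 1],
    "validity":  [4, 5, 3, 6, 7, 5],
})

def summarise(col):
    # Median, inter-quartile range and maximum-minimum range per statement.
    g = ratings.groupby("statement")[col]
    return pd.DataFrame({
        f"{col}_median": g.median(),
        f"{col}_iqr": g.quantile(0.75) - g.quantile(0.25),
        f"{col}_range": g.max() - g.min(),
    })

summary = summarise("relevance").join(summarise("validity"))

# Illustrative cut-off (our assumption, not the protocol's): statements whose
# median relevance falls below 4 would be omitted from subsequent rounds.
summary["omit"] = summary["relevance_median"] < 4
print(summary)
```

Statements rated highly relevant but with a low validity median, or with a wide spread on either dimension, would then be routed to further discussion and rewording, as described above.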
Residual non-consensus will be reported as such and the nature of the dissent described. Making such dissent explicit tends to expose inherent ambiguities (which may be philosophical or practical) and acknowledges that not everything can be resolved; such findings may be of more use to reviewers than a firm statement which implies that all tensions have been "fixed".
Preparing teaching and learning resources
A key objective of this study is to produce publicly accessible resources to support training in realist and meta-narrative review. We anticipate that these resources will need to be adapted, and perhaps supplemented, for different groups of learners, with interactive learning activities added [44]. Taking account of the format and orientation of other comparable materials (e.g. courses produced by the International Cochrane and Campbell Collaborations), though not necessarily aligning with these, we will develop and pilot draft learning objectives, example course materials, and teaching and learning support methods. We will draw on our previous work on course development, quality assurance and support for interactive and peer-supported learning in healthcare professionals [35].
The sponsor of this study, the National Institute for Health Research Service Delivery and Organisation (NIHR SDO) Programme, supports secondary research calls for rapid, policy-relevant reviews, some though not all of which seek to use realist or meta-narrative methods. We will work with a select sample of teams funded under such calls, as well as other teams engaged in relevant ongoing reviews (selected to balance our sample), to share emerging recommendations and gather real-time data on how feasible and appropriate these recommendations are across a range of different reviews. Over the 27-month duration of the study, we anticipate recruiting two cohorts of review teams: with the first cohort, we will use provisional standards, guidance and training materials based on our initial review of the literature; with the second, we will pilot the standards, guidance and training materials produced and refined via the Delphi process. After following both cohorts through their reviews, we will further revise the outputs into a master document before considering how to modify these for different audiences.
Training and support offered to these review teams will consist of three overlapping and complementary packages:
1. An 'all-comers' online discussion forum via Jiscm@il (http://www.jiscmail.ac.uk/RAMESES) for interested reviewers who are currently doing, or have previously attempted, a realist or meta-narrative review. This will be run with 'light-touch' facilitation in which we invite discussion on particular topics and periodically summarise themes and conclusions (a technique known in online teaching as 'weaving'). Such a format typically accommodates large numbers of participants, since most people tend to 'lurk' most of the time. Such discussion groups tend to generate peer support through their informal, non-compulsory ethos and a strong sense of reciprocity (i.e. people helping one another out because they share an identity and commitment) [47], and they are often rich sources of qualitative data. We anticipate that this forum will contribute key themes to the quality and reporting standards and to the learning materials throughout the study.
2. Responsive support for our designated review teams. Our input to these teams will depend on their needs, interests and previous experience, and hence is impossible to stipulate in detail in advance. In our previous dealings with review teams we have been called upon (for example) to help teams distinguish 'context' from 'mechanism' in a particular paper, extract and formalise programme theories, distinguish middle-range theories from macro or micro theories, and develop or adapt data extraction tools; to advise on data extraction techniques; and to train researchers in the use of qualitative software for systematic review.
3. A 'learning set' series of workshops for designated review teams. Much of the learning in such workshops is likely to come from the review teams themselves, and participants who are experienced and wish to offer teaching to others on relevant topics will be encouraged to do so. For the first workshop we will prepare a core syllabus of basic training oriented to explicit learning outcomes, delivered as a combination of prior self-study materials and short taught sessions on the day. Even at this first workshop, however, most of the time will be spent applying the basic principles to real worked examples of reviews in progress.
As explained above, the first cohort of review teams will be run as a pilot and we will explain this to the participants, thereby gaining their active engagement in improving the programme for subsequent learners.