Demands from health policy-makers and managers for syntheses of evidence that are useful, rigorous and relevant are fuelling interest in the development of methods that can allow the integration of diverse types of evidence [66]. With the diversity of techniques for evidence synthesis now beginning to appear, those using existing, 'new' or evolving techniques need to produce critical, reflexive accounts of their experiences of using the methods [3]. Our experience of conducting a review of access to healthcare, where there is a large, amorphous and complex body of literature, and a need to assemble the findings into a form that is useful in informing policy and that is empirically and theoretically grounded [67], has led us to propose a new method – Critical Interpretive Synthesis – which is sensitised to the kinds of processes involved in conventional systematic review while drawing on a distinctively qualitative tradition of inquiry.
Conventional systematic review methodology is well suited to aggregative syntheses, where what is required is a summary of the findings of the literature under a set of categories that are largely pre-specified, secure and well defined. It has been important in drawing attention to the pitfalls of informal reviews, including perceived failures in their procedural specification and the possibility that an undisciplined reviewer might be chaotic or negligent in identifying the relevant evidence, or might construct idiosyncratic theories and marshal the evidence in support of these. Conventional systematic review methodology has demonstrated considerable benefits in synthesising certain forms of evidence where the aim is to test theories (in the form of hypotheses), perhaps especially about "what works". However, this approach is limited when the aim, confronted with a complex body of evidence, is to generate theory.
Current methods for conducting an interpretive synthesis of the literature, such as meta-ethnography, are also limited, in part because the application of many interpretive methods of synthesis has remained confined to studies reporting qualitative research. Realist synthesis [68], which does include diverse forms of evidence, is oriented towards theory evaluation, in particular by focusing on theories of change. Methods for including qualitative and quantitative evidence in systematic reviews developed by the EPPI-Centre at the Institute of Education, London, have involved refinements and extensions of conventional systematic review methodology [6], and have limited their application of interpretive techniques to the synthesis of qualitative evidence.
More generally, many current approaches fail to be sufficiently critical, in the sense of offering a critique. There is rarely an attempt to reconceptualise the phenomenon of interest, to provide a more sweeping critique of the ways in which the literature in the area has chosen to represent it, or to question the epistemological and normative assumptions of the literature. With notable exceptions, such as the recent approach of meta-narrative analysis [15], critique of papers in current approaches to review tends to be limited to appraisal of the methodology of the individual papers.
Conducting an interpretive review of the literature on access to healthcare by vulnerable groups in the UK therefore required methodological innovation that would be alert to the issues raised by systematic review methodology but would also move beyond both its limitations and those of other current interpretive methods. The methods for review that we developed in this project (Table ) built on conventional systematic review methodology in their attentiveness to a range of methodological processes. Crucially, in doing so, we drew explicitly on traditions of qualitative research inquiry, and in particular on the principles of grounded theory [5].
Key processes in critical interpretive synthesis
In addition to its explicit orientation towards theory generation, perhaps what most distinguishes CIS from conventional systematic review methods is its rejection of a "stage" approach to review. Processes of question formulation, searching, selection, data extraction, critique and synthesis are characterised as iterative, interactive, dynamic and recursive rather than as fixed procedures to be accomplished in a pre-defined sequence. CIS recognises the need for flexibility in the conduct of review, and future work will need to assess how far formal methods of critical appraisal and data extraction are essential elements of the method. Our experience suggests that while attention to scientific quality is required, the emphasis should more generally be on critique rather than critical appraisal, and on an ongoing critical orientation to the material examined and to emerging theoretical ideas. Formal data extraction may also be an unnecessarily constraining and burdensome process.
CIS emphasises the need for theoretical categories to be generated from the available evidence and for those categories to be submitted to rigorous scrutiny as the review progresses. Further, it emphasises the need for constant reflexivity to inform the emerging theoretical notions and to guide the sampling of articles. Although CIS demands attention to flaws in study design, execution and reporting in judgements of the quality of individual papers, its critical approach goes beyond these standard appraisal criteria. Thus, in our review, some methodologically weak papers were important in terms of their theoretical contribution, in demonstrating the breadth of evidence considered in the construction of particular categories, or in providing a more comprehensive summary of the evidence, while a single methodologically strong paper might be pivotal in the development of the synthesis. Hughes and Griffiths' paper on micro-rationing of healthcare [61], for example, was a key paper in helping to generate the construct of candidacy that later came to unify the themes of our analysis. The critical interpretation in our analysis focused on how a synthesising argument could be fashioned from the available evidence, given the quality of that evidence and the kinds of critiques that could be offered of the theory and assumptions behind particular approaches. In treating the literature as an object of scrutiny in its own right, CIS problematises the literature in ways that are quite distinctive from most current approaches to literature reviewing.
Access to healthcare
The CIS approach we adopted deferred final definition of the phenomenon of access, and of the appropriate ways of conceptualising it, until our analysis was complete. Our critique of the current literature focused on the inadequacies of studies of utilisation as a guide to explaining inequities in healthcare. The conceptual model of access that we developed emphasises candidacy as the core organising construct, and recasts access as highly dynamic and contingent, and subject to constant negotiation.
In this conceptual model of access to healthcare, health services are continually constituting and seeking to define the appropriate objects of medical attention and intervention, while at the same time people are engaged in constituting and defining what they understand to be the appropriate objects of medical attention and intervention. Candidacy describes how people's eligibility for healthcare is determined between themselves and health services. Candidacy is a continually negotiated property of individuals, subject to multiple influences arising both from people and their social contexts and from macro-level influences on allocation of resources and configuration of services. "Access" represents a dynamic interplay between these simultaneous, iterative and mutually reinforcing processes. By attending to how vulnerabilities arise in relation to candidacy, the phenomenon of access can be much better understood, and more appropriate recommendations made for policy, practice and future research. Although our review focused on the UK, we suggest that the construct of candidacy is transferable, and has useful explanatory value in other contexts.
In addition to the core construct of candidacy, our analysis required the production of a number of other linked synthetic constructs – constructs generated through an attempt to summarise and integrate diverse concepts and data – including "adjudications" and "offers". It was also possible to link existing "second order" constructs, for example those relating to help-seeking as the identification of candidacy by patients, into the synthesising argument, making these work as synthetic constructs. We feel that this approach allows maximum benefit to be gained from previous analyses as well as from the new synthesis.
Reflections on the method
Clearly, questions can be raised about the validity and credibility of the CIS analysis we have presented here. Conventional systematic review methodology sets great store by the reproducibility of its protocols and findings. It would certainly have been possible to produce an account of the evidence that was more reproducible. For example, we could have used the evidence to produce a thematic summary that stuck largely to the terms and concepts used in the evidence itself. However, we felt it important to produce an interpretation of the evidence that could yield new insights and fresh ways of understanding the phenomenon of access, and to maintain the "critical voice" of our interpretation throughout the analysis. Simply to have produced a thematic summary of what the literature was saying would have run the risk of accepting that the accounts offered in the evidence base were the only valid way of understanding the phenomenon of access to healthcare by vulnerable groups. We therefore make no claim to reproducibility, but wish to address some possible concerns. First, it could be argued that a different team using the same set of papers would have produced a different theoretical model. However, the same would be true for qualitative researchers working with primary qualitative data, who accept that other interpretations might be given to, say, the same set of transcripts. Clearly, the production of a synthesising argument, as an interpretive process, produces one privileged reading of the evidence, and, as the product of an authorial voice, it cannot be defended as an inherently reproducible process or product. We would suggest, however, that our analysis can be defended on the grounds that it is demonstrably grounded in the evidence; that it is plausible; that it offers insights that are consistent with the available evidence; and that it can generate testable hypotheses and empirically valuable questions for future research.
Second, subjecting a question to continual review and refinement, as we did, may make it more difficult for those conducting critical interpretive reviews to demonstrate, as required by conventional systematic review methodology, the "transparency", comprehensiveness and reproducibility of their search strategies. This dilemma between the "answerable" question and the "meaningful" question has received little attention, but it underpins key tensions between the two ends of the academic/pragmatic systematic review spectrum. On balance, faced with a large and amorphous body of evidence in an area such as access to healthcare, and given the aims of an interpretive synthesis, we feel that our decision not to limit the focus of the review at the outset, and our subsequent sampling strategies, were well justified. Our decision not to commit at the outset of the project to a particular view of what access might be and how it should be assessed was critical to our subsequent development of a more satisfactory understanding of access.
Third, it could be argued that we have synthesised too small a sample of the available papers, or that the processes used to select the papers are not transparent. We recognise that we have analysed and synthesised only a fraction of all relevant papers in the area of access to healthcare by vulnerable groups. However, a common strategy in conventional systematic review is to limit the study types to be included; this strategy might also result in only a proportion of the potentially relevant literature being synthesised. While we have described our methods for sampling as purposive, it is possible that another team using the same approach would have come up with a different sample, because, particularly in the later stages of our review, our sampling was highly intuitive and guided by the emerging theory.
The final version of the conceptual model of access to healthcare that we eventually developed did not emerge until quite late in the review process, and much of the later sampling was directed at testing and purposively challenging the theory as we began to develop it. Again, such forms of searching and sampling do not lend themselves easily to reproducibility or indeed auditability. Testing whether the interpretations change in response to different findings will be an important focus for future research, which will also need to evaluate whether apparently disconfirming evidence is the result of methodological flaws or poses a genuine challenge to theory.