Objective To examine the use of qualitative approaches alongside randomised trials of complex healthcare interventions.
Design Review of randomised controlled trials of interventions to change professional practice or the organisation of care.
Data sources Systematic sample of 100 trials published in English from the register of the Cochrane Effective Practice and Organisation of Care Review Group.
Methods Published and unpublished qualitative studies linked to the randomised controlled trials were identified through database searches and contact with authors. Data were extracted from each study by two reviewers using a standard form. We extracted data describing the randomised controlled trials and qualitative studies, the quality of these studies, and how, if at all, the qualitative and quantitative findings were combined. A narrative synthesis of the findings was done.
Results Thirty of the 100 trials had associated qualitative work and 19 of these were published studies. Fourteen qualitative studies were done before the trial, nine during the trial, and four after the trial. Thirteen studies reported an explicit theoretical basis and 11 specified their methodological approach. Approaches to sampling and data analysis were poorly described. For most cases (n=20) we found no indication of integration of qualitative and quantitative findings at the level of either analysis or interpretation. The quality of the qualitative studies was highly variable.
Conclusions Qualitative studies alongside randomised controlled trials remain uncommon, even where relatively complex interventions are being evaluated. Most of the qualitative studies were carried out before or during the trials with few studies used to explain trial results. The findings of the qualitative studies seemed to be poorly integrated with those of the trials and often had major methodological shortcomings.
Randomised controlled trials are used widely for showing causal relations in health and social care because their study design is the only one that is able to control for unknown or unmeasured confounders. Randomised controlled trials are sometimes used to evaluate “complex” interventions—that is, those “made up of various interconnecting parts”1 that act both “independently and inter-dependently.”2 3 Qualitative approaches can contribute in several ways to the development and evaluation of both complex and other health interventions (box 1). Consequently, increasing numbers of randomised controlled trials of such interventions include qualitative components4 5 and interest in this approach is growing. The use of multiple, integrated approaches may be particularly useful in the evaluation of the effects of complex health and social care interventions as these involve social or behavioural processes that are difficult to explore or capture using quantitative methods alone.1
The need for methodological research on the ways in which qualitative approaches should be used in randomised controlled trials has been discussed widely.6 7 8 9 10 11 However, to our knowledge no studies have attempted to examine systematically current practice on the use of qualitative approaches in randomised controlled trials of complex healthcare interventions and how they could be used to improve the usefulness and policy relevance of the findings of a trial. By complex interventions we mean those including at least some of the following characteristics: several elements that “may act both independently and inter-dependently”2; complex systems or mechanisms for delivery of the intervention; an intervention that is difficult to describe and replicate; complex explanatory pathways, either physiological or psychosocial; and a degree of uncertainty about the mechanism of action of the intervention or its “active ingredient.”2
We systematically examined the use of qualitative approaches alongside randomised controlled trials of complex healthcare interventions to provide an overview of current practice in this area and ways of identifying qualitative studies undertaken alongside randomised controlled trials. We also explored how trial teams could improve the quality of qualitative studies linked to randomised controlled trials and how synergies between qualitative approaches and randomised controlled trials can be maximised.
We obtained a list of all randomised controlled trials published in English during 2001-3 and included in the register of the Cochrane Effective Practice and Organisation of Care Review Group.12 From the list for each year (492 randomised controlled trials in total) we sampled every fifth study to obtain a sample of 100 trials (33 or 34 trials from each year). We chose this approach for several reasons. Firstly, we wanted a sample of recently published trials of more complex interventions as we assumed that the use of qualitative methods alongside such trials has increased in recent years. Secondly, the randomised controlled trials needed to have been published sufficiently long ago to allow associated qualitative studies also to have been published. Thirdly, we thought the Cochrane Effective Practice and Organisation of Care Review Group register, with its specific focus on interventions to change professional practice and the organisation of care, more likely to include a higher proportion of relatively complex randomised controlled trials than databases such as Medline or Embase. Details on studies included in this register and the search strategies used to locate them are available elsewhere.12
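The every-fifth selection described above is a standard systematic sampling procedure. As a minimal sketch only (the register structure and trial identifiers below are hypothetical, not the authors' actual data, and the exact per-year counts depend on each year's list length and the starting offset):

```python
import random

def systematic_sample(records, step=5, start=None):
    """Return every `step`-th record from an ordered list,
    beginning at `start` (a random offset within the first
    interval if not given)."""
    if start is None:
        start = random.randrange(step)
    return records[start::step]

# Hypothetical register: 164 trial IDs per year for 2001-3 (492 in total).
register = {
    year: [f"{year}-trial-{i:03d}" for i in range(164)]
    for year in (2001, 2002, 2003)
}

# Sample every fifth trial from each year's list.
sample = [
    trial
    for year in sorted(register)
    for trial in systematic_sample(register[year], step=5, start=0)
]
# With start=0 this yields 33 trials per year; the review reports
# obtaining 33 or 34 per year, which varies with offsets and list lengths.
```

A random starting offset, as in the default branch above, is generally preferred over always starting at the first record, since it removes any selection effect tied to how the register happens to be ordered.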
Where the sampled report was not the primary paper for the randomised controlled trial, the primary report was located. We attempted to identify all published and unpublished qualitative studies linked to these randomised controlled trials. We defined a qualitative study as any study that used qualitative methods for data collection or analysis. We initially checked the primary randomised controlled trial for citations of qualitative studies. We then located the primary randomised controlled trial in PubMed and searched for related studies and other studies published by the authors of the randomised controlled trial. We also located the randomised controlled trial in the Science and Social Science Citation Index and checked the list of studies citing the paper. Any potentially relevant titles and abstracts were examined and full papers obtained where necessary. Finally, we contacted the authors of the randomised controlled trials for information on any published or unpublished qualitative studies linked to their trials. We received responses for 76 of the 100 papers.
Two reviewers used a standard form to extract data from each “case”—that is, the randomised controlled trial and any qualitative studies. This included descriptions of the randomised controlled trials and qualitative studies; the quality of the randomised controlled trials and the qualitative studies; and information on the approaches (if any) used by the authors to combine the findings of the randomised controlled trials and qualitative studies. The quality of the trials was assessed using the quality checklist of the Cochrane Effective Practice and Organisation of Care Review Group.13 The quality of the qualitative studies was assessed using a modified checklist from the critical appraisal skills programme.14 These modifications included further details on whether the qualitative approach was justified and appropriate to the research question, whether the research context was described adequately, and items to differentiate adequate reporting of methods from the appropriateness of those methods in relation to the research question. We summarised the findings of the study narratively.
Thirty of the 100 included randomised controlled trials had qualitative work associated with them. Nineteen of these qualitative studies were published, either as stand-alone papers or within another paper. In 23 (77%) of the 30 cases in which qualitative approaches were used, the researchers employed qualitative methods for both data collection and analysis. In the remaining seven cases some form of qualitative data collection (for example, group discussions or individual interviews) was used, but no formal analysis of these data using qualitative approaches was reported. Most of the qualitative studies (n=25) were carried out before or during the randomised controlled trial (figure). An explicit theoretical basis for the intervention was reported in 12 of the 30 cases.
The 30 trials that included qualitative research were carried out in a variety of settings, from general practices to communities and consumers’ homes. Twenty-four of the trials were carried out in primary care and the remaining six trials evaluated interventions in secondary care or across a mix of levels.
The trials dealt with a wide range of healthcare issues, the most common being mental health, the appropriate use of medicines, and sexual health. All the trials were carried out in high income countries. The methodological quality of the trials that included qualitative research was similar to those without such studies.
The objectives of the qualitative studies varied widely (table 1). The 16 studies done either before the trial, or before and during the trial, had one or more of the following objectives: to explore the knowledge, attitudes, or practices of the target groups about the topic in question; to explore the illness experience of consumers; to develop the intervention; and to develop the instrument used to measure the effects of the intervention in the randomised controlled trial.
The nine qualitative studies done during the trials had a wide range of objectives. These included describing the intervention as delivered and exploring issues influencing the effects of the intervention, the illness experience of consumers, participants’ experiences of the intervention (box 2), and reasons for refusal to participate in the trial (box 3).
Researchers carried out a randomised controlled trial in the United Kingdom to evaluate the effects of joint teleconsultations on hospital follow-up appointments. They concluded that patients’ overall satisfaction was higher for teleconsultations than for conventional outpatient appointments. A proportion of patients were, however, dissatisfied with their teleconsultation. It was not possible to determine from the trial findings the reasons for satisfaction or dissatisfaction or how satisfaction could be increased in other telemedicine programmes. The qualitative study aimed to answer these questions.
Semistructured individual interviews were carried out with 24 patients within one month of their teleconsultations. The researchers used the framework approach to carry out a thematic analysis of these data.
Joint teleconsultations were, overall, highly acceptable to patients for several reasons. These included improved customer care, such as greater convenience, reduced cost, and increased punctuality. Most patients also appreciated the presence of the general practitioner in the consultation, feeling that this improved communication between the specialist and generalist and allowed the general practitioner to summarise and interpret the consultation for the patient. However, one patient stated that they had been excluded from the consultation. Other patients were dissatisfied with parts of the consultation because they would have preferred to be examined directly by the specialist. Some patients found that the technology interfered with their communication with the doctor—for instance, because of lack of synchronisation between sound and vision. The authors of the study discussed how some of these factors could be tackled in joint teleconsultations in the future.
Researchers carried out a randomised controlled trial in the UK to evaluate the effectiveness of pharmacist-run clinical drug reviews in patients aged 65 years or more from general practices. Eligible participants were contacted by post, sent one reminder, and contacted by telephone if they had not responded. Twenty-six per cent (n=68) of those contacted by telephone did not wish to participate in the study. The aim of the qualitative study was to identify reasons for non-participation.
Researchers used unstructured questions to ask the 68 patients their reasons for not wishing to participate. The responses were recorded in writing. The researchers independently carried out a thematic analysis of these data, then agreed on a set of categories.
Ten broad categories of reasons for non-participation were identified. These included administrative categories such as difficulties in reading the invitation letter and not being available at the time suggested. Other categories were tied to behavioural factors such as confusion or lack of understanding of the trial, negative attitudes to health care, and mistrust of the objectives of the trial. The authors suggested that these factors needed to be addressed to increase the number of patients consenting to studies on drug review.
Of the four qualitative studies carried out after the trial, two explored participants’ experiences of the intervention, one explored factors influencing the effects of the intervention (box 4), and one analysed the process for development of the intervention.
Researchers evaluated a strategy for implementing guidelines for nursing care during labour in hospital in Canada. The results of the trial were mixed. The researchers then carried out a qualitative study to explore why this guideline was introduced successfully in some settings but not in others.
A case study approach was used, with individual interviews and group discussions done with nurses, nurse administrators, and nurse educators at the study sites. Interviews were audiorecorded and transcribed and a form of thematic analysis was applied.
A wide range of factors related to the study settings, the recipients of the guideline, and the characteristics of the guideline interacted to affect implementation of the guideline. Important factors included changes to the external environment of clinical practice; leadership and the availability of equipment in the study settings; concerns among the health professionals targeted; and strategies used to promote uptake of the guideline. The authors concluded that more attention was needed to identify organisational barriers to change and to address these using tailored implementation strategies.
The methodological approaches of the included studies were heterogeneous. Whereas 19 of the qualitative studies did not refer to any specific methodological approach, 11 mentioned approaches such as grounded theory, ethnography, action research, and narrative approaches. Ten studies used several methods for data collection (most often combinations of individual interviews and group discussions), 10 utilised individual interviews only, five used focus groups only, and two used different forms of observation. The remaining three studies were unpublished and we were not able to obtain further information on data collection from the authors.
Reporting of methods was often inadequate: 13 studies did not describe their sampling approach and the remainder used a mix of purposive, convenience, and random sampling. In 14 studies we could find no information on the approach used to analyse data. Where such information was reported, thematic or content analysis or framework analysis was used (n=10) and, more rarely, a grounded theory approach (n=2). Four studies used other approaches.
Where the findings of the trial and qualitative studies were reported in separate papers, the link between the two was not always clear from the papers themselves. Sixteen of the qualitative studies shared authors with the report of the randomised controlled trial. Only nine papers explicitly described some level of linkage between the study teams.
In two of the studies the researchers stated that they had used a “mixed method” approach. Our review of the studies indicated some integration in the interpretation of the results from the trials and the qualitative research in eight cases, although the analysis of the data was carried out separately. In most cases (n=13), however, we found no evidence of integration at the level of interpretation.
Ten qualitative studies (including the seven with no formal analysis of the qualitative data) did not provide sufficient data to allow assessment of methodological quality. Quality assessment was therefore carried out on 20 studies (table 2). This showed high variability, with the most common weaknesses including lack of a clear justification for the qualitative approach used (no information in 16 studies); inadequate descriptions of context, sampling, data collection, and analysis methods; little reflection on the researcher’s role in the research process (no information in 17 studies); lack of clarity on how ethical issues had been taken into consideration (no information in 15 studies); and insufficient evidence to support the claims made in the paper.
We did not identify any relation between interventions that reported using an explicit theoretical framework and the quality of qualitative studies carried out alongside trials of those interventions.
Qualitative studies undertaken alongside randomised controlled trials of interventions to change organisation and practice remain uncommon. Less than one third of recently completed trials of relatively complex interventions in the Cochrane Effective Practice and Organisation of Care register included some form of qualitative research. Of these, only about two thirds were published studies. This is surprising given the nature of these interventions and the growing awareness of the role that qualitative research can play in the design and evaluation of interventions.5 8 19 20 Furthermore, contacts with authors suggested that many valued the findings of qualitative studies. Why then are qualitative approaches not used more extensively alongside trials? Constraints on resources and poor access to relevant expertise were mentioned by study authors in response to our requests for information on qualitative studies. It has also been suggested that linear models for evaluating interventions may impede the use of qualitative approaches. These models, it is suggested, view such evaluation as passing through a series of phases from the development of hypotheses to efficacy trials and then effectiveness trials. This may contribute to the view that earlier phases of research, such as efficacy trials, do not need to incorporate qualitative studies to explore the effects of contextual and other moderating factors. Such methods are seen as important only in the later phases of evaluation.21
Although much has been written on qualitative process evaluation alongside trials of complex interventions, the largest group of qualitative studies identified were those carried out before trials. Firstly, this suggests that reviewers who aim to understand better the effects of interventions through examining qualitative process evaluations may find little relevant data. Secondly, it indicates the need for more attention to this aspect of trial design.
The rigour of qualitative studies undertaken alongside randomised controlled trials, or at least the reporting of the methods used, is an important concern. We identified major shortcomings in many of these studies, particularly in sampling, analysis, and critical reflection on the researchers’ roles. Interestingly, an explicit theoretical basis for the intervention was reported in over a third of cases—a higher proportion than reported in recent reviews on the use of theory in implementation research.22 23 Twice as many of the randomised controlled trials that included qualitative work had a clearly specified theoretical basis (40%) compared with randomised controlled trials without any qualitative work (20%). However, the use of theory is by no means the norm in studies in this specialty (only 27% of randomised controlled trials did so explicitly) and it remains unclear whether interventions based explicitly on a particular theoretical approach are more likely to be effective than those designed using pragmatic processes.24 25 26
In our sample we found little evidence of explicit integration of data from qualitative studies and randomised controlled trials and few cases discussed mixed methods approaches. Such data could be integrated in several ways. Discussion of the trial findings could draw on both the qualitative and quantitative data in, for example, exploring reasons for success or failure of the intervention or for variation in effects across sites or individuals. Description of the intervention could also make explicit how qualitative approaches contributed, for example, to identifying barriers to change and developing the intervention. The extent of collaboration within trial teams between researchers from different disciplines, such as social scientists and epidemiologists, is another important aspect. The reported data did not, however, allow us to explore this adequately, and further work on the basis of case studies of trial teams is needed.
This study has several possible limitations. Firstly, we may not have identified all qualitative studies linked to the index randomised controlled trials. However, we did receive a high response rate from the authors of randomised controlled trials, and other reviews have indicated that this approach identifies the largest number of additional studies.23 All methods of identifying studies were resource intensive—a potential barrier to examining qualitative work done alongside trials. Secondly, trials sampled from the Cochrane Effective Practice and Organisation of Care database may not be representative of all randomised controlled trials evaluating interventions to change professional practice and the organisation of care. The sampled trials are unlikely to be representative of randomised controlled trials more widely but are likely to be similar, in terms of their use of qualitative methods, to other randomised controlled trials of complex interventions. Finally, our analysis is based largely on study reports. These may not reflect the extent of integration of qualitative and quantitative findings.
Although well conducted qualitative studies can support trial design and improve our understanding of the effects of complex interventions and the mechanisms through which changes occur, such studies remain relatively uncommon alongside trials of complex interventions. Most of the qualitative studies were carried out before the trial, had important methodological shortcomings, and were poorly integrated with the findings of the trials. This study highlights ways in which the quality and usefulness of qualitative studies carried out alongside randomised controlled trials can be improved (box 5). Further work is needed to develop methodological and practical guidance for trial teams that plan to use qualitative approaches.8 27
We thank Xavier Bosch-Capblanch, Anna Gaze, Marit Johansen, Matthew Oxman, and Marcus Prescott for their assistance with various aspects of the study.
Contributors: SL and ADO conceived the idea for the paper. SL and CG retrieved the articles, extracted the data, and did the analysis. SL, CG, and ADO drafted the paper. SL is the guarantor.
Funding: This work was supported by a UK Medical Research Council health services research training fellowship to SL. The Norwegian Knowledge Centre for the Health Services provided support to CG and ADO. The funding sources played no part in the conception of the study, data collection, analysis, or writing the article.
Competing interests: ADO was an investigator for two of the included trials and CG worked in the unit in which these trials were coordinated.
Ethical approval: Not required.
Cite this as: BMJ 2009;339:b3496