For an intervention to have a credible chance of improving health or health care, there must be a clear description of the problem and a clear understanding of how the intervention is likely to work. The original MRC framework identified designing, describing, and implementing a well defined intervention as: “the most challenging part of evaluating a complex intervention—and the most frequent weakness in such trials.”2
Table 2 summarises the key tasks for achieving this understanding and gives an example.
Key tasks for optimising an intervention: example of computer support for assessment of familial risk of cancer in primary care
Conceptual modelling or mapping can clarify the mechanisms by which an intervention might achieve its aims. The essential process involves mapping out the mechanisms and pathways proposed to lead from the intervention to the desired outcomes, then adding evidence and data to this map. Modelling of the intervention both depends on, and informs, understanding of the underlying problem. The intervention must engage the target group and affect pathways amenable to change that are identified as important to the problem. In the example in table 2, the intervention engages the general practitioner (providing tailored advice and training), the primary care team (organising referral around a single trained general practitioner), and the patient (facilitating their provision of information).
We found evidence useful in optimising four aspects of the intervention:
Refining the conceptual models by identifying important influences, relations between components, and consequences not previously considered. For example, in table 2, literature reviews of related interventions provided evidence on how computer decision support was received by practitioners, affected consultations with patients, and could improve implementation of guidelines. These reviews also provided evidence on different ways of expressing risk to patients. Qualitative research helped to place the intervention in the context of primary care and consultations with patients.
Generating (tentative) estimates of effect size by populating conceptual models with data from observational studies or systematic reviews. In table 2, the initial data were numbers of appropriate referrals at baseline and findings from related interventions. Further data were provided by carefully controlled intervention studies.
Identifying barriers or rate limiting steps in intervention pathways—Complex interventions can fail because of unforeseen barriers.21
Barriers can be cognitive, behavioural, organisational, sociocultural, or financial. They may occur early in the intervention process or during steps not previously considered or thought important.22
In the computer support example (table 2) some rate limiting steps were identified early when populating the intervention model with data on uptake of computer support in general practice, but others emerged during subsequent qualitative research. Early identification provides opportunities for resolution (which in this case included redesigning the software and training general practitioners on how to consult while using the software).
Optimising combinations of components in the intervention—There is no consensus on how to achieve this. Once a conceptual model has been formed, some complex interventions may be amenable to simulations23 or carefully controlled experimental studies outside the normal clinical setting. In our example, simulated patients were used to test the intervention with general practitioners. This identified the likely outcomes for a range of patients and allowed general practitioners to comment on how the intervention could be improved. Simulation can also be used to explore the effect on response of changes in dose, and of changes in contextual influences. Early randomised studies also have a place. In the example, a randomised study was used to quantify the potential for benefit by using an intermediate outcome (decisions to refer) known to be tightly linked to final outcomes (referrals). Later, in another randomised trial, the researchers attempted to optimise the intervention by including an adaptive arm, in which the intervention could be modified according to practitioner feedback when use of the software during consultations fell below predetermined criteria.