Clinical and health services research is continually producing new findings that can contribute to effective and efficient health care. However, despite the considerable resources devoted to this area, a consistent finding is that the transfer of research findings into health care practice is unpredictable and can be a slow and haphazard process [1].
Ideally, the choice of implementation strategies would be based upon evidence from randomised trials [2
]. Healthcare practitioners and managers should be able to read a systematic review of an implementation intervention, reliably replicate some – or all – of the components of successful interventions in their own settings, and be confident of what will happen as a consequence. However, this is not currently the case. This is largely due to a combination of the manner in which trials are reported and, at least partially as a consequence, the lack of detail included in reports of systematic reviews.
Systematic reviews of implementation trials conducted to date have categorised interventions on an empirical basis, with reviews of interventions such as audit and feedback, reminders, and outreach visiting [3
]. Such classification systems appear to be largely based on intuitive principles, somewhat akin to classifying drug interventions on the basis of whether the drugs are taken orally or intravenously. It is therefore not surprising that systematic reviews based on these categories raise more questions than they answer. Indeed, reviews of implementation interventions produce a consistent message – all interventions, both within and across categories, are effective some but not all of the time, producing a range of effect sizes from no effect through to a large effect. The substantial heterogeneity of intervention components, targeted behaviours, and study settings makes generalising findings from these studies to routine healthcare settings problematic. There is no underlying generalisable taxonomy by which to characterise these interventions, targeted behaviours, and settings. There is only a limited and sometimes hopeful understanding of the 'active ingredients' required to develop a successful implementation strategy [4].
One way of addressing a situation such as this is to tackle the issue empirically by examining all relevant combinations of the perceived important and modifiable elements of interventions to determine which contribute to a successful intervention. For example, audit and feedback has a range of modifiable elements which could be systematically varied in evaluations. However, varying only five of these elements (content of feedback, intensity of feedback, method of delivery, duration and context) produces 288 combinations [1
]. This is before any replication of studies or the addition of other potential elements of an intervention or different modes of delivery of interventions, such as educational meetings or outreach visits. Given the multiplicity of factors that would need to be addressed, such a 'hit and miss' approach is highly inefficient.
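The combinatorial explosion described above can be illustrated with a short sketch. The five element names come from the text; the number of options per element is an illustrative assumption (the original figure of 288 corresponds to option counts whose product is 288, e.g. 2 × 3 × 4 × 3 × 4), so the specific option labels below are hypothetical.

```python
from itertools import product

# Hypothetical option counts for the five modifiable elements of audit
# and feedback named in the text. The per-element options are assumed
# for illustration, chosen so their product matches the cited 288.
elements = {
    "content of feedback": ["individual data", "peer-comparison data"],     # 2
    "intensity of feedback": ["low", "medium", "high"],                     # 3
    "method of delivery": ["paper", "email", "verbal", "dashboard"],        # 4
    "duration": ["one-off", "6 months", "12 months"],                       # 3
    "context": ["primary care", "hospital", "outpatient", "care home"],     # 4
}

# Every distinct intervention is one choice per element.
combinations = list(product(*elements.values()))
print(len(combinations))  # 2 * 3 * 4 * 3 * 4 = 288
```

Evaluating each combination in even a modest factorial trial, before any replication, makes clear why an exhaustive empirical search is impractical.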
The assumption that clinical practice is a form of human behaviour and can be described in terms of general theories relating to human behaviour offers a basis for systematically developing implementation interventions. For example, if there is empirical evidence that a clinical behaviour is influenced by factors such as health professionals' beliefs or perceived control over their practice, then interventions to change their behaviour could include components that target those factors. The explicit use of theory may offer a number of advantages, such as providing a generalisable framework for predicting and interpreting behaviour, designing interventions and evaluating potential causal mechanisms.
However, theory has not commonly been used in the field of implementation research. In a review of 235 implementation studies, only 53 used theory in any way – to inform study design, develop or design the implementation intervention, and/or describe or measure elements of process for post hoc interpretation – and only 14 were explicitly theory-based [6].
This paper describes the development of a theory-based intervention to increase the frequency of a clinical behaviour among mental health professionals: appropriate disclosure of the diagnosis to people with dementia. The early care of people with dementia should ideally involve a sensitive and accurate explanation of the diagnosis, the likely prognosis and possible packages of care [7
]. Timely disclosure can facilitate decisions about treatment and allow the opportunity to plan family, fiscal, and long-term care arrangements. In the UK, multidisciplinary mental health teams (MHTs) for older people are often responsible for these tasks. Yet disclosure practice by healthcare professionals varies widely [8
]. Most carers are told the diagnosis but people with dementia themselves are often not told [9
]. Indeed, disclosure is less likely in dementia than in other terminal conditions, such as cancer. There is therefore substantial scope for improving professional practice.