Nurses constitute the largest group of health care providers, and their care influences patient outcomes [1]. However, nurses, like other professionals, often fail to incorporate current research findings into their practice [4]. A lack of research use contributes to as many as 30%–40% of patients not receiving care consistent with current scientific evidence, and some 20%–25% of patients receiving potentially harmful care [5]. In response, much attention has been directed to developing interventions aimed at changing provider behavior to reflect current research, and several systematic reviews have been published in this area [6]. The authors of such reviews, however, have primarily included physicians and outcomes relevant to physicians. For example, Grimshaw and colleagues included only medical providers in a systematic review of guideline dissemination strategies [8]. Similarly, in a review of continuing education meetings and workshops, only four of the thirty-two included studies involved nurses [9]. The poor representation of nursing studies in existing reviews is partially a result of a lack of rigorous nursing research on research utilization. For example, in a review of organizational infrastructures aimed at increasing evidence-based nursing practice, Foxcroft and Cole could locate no studies rigorous enough to be included [11].
Generalizing findings from existing reviews to nursing is problematic. While physicians and nurses face similar challenges in incorporating evidence, there are differences that influence how each group uses research in practice. One key issue is the social structure of the two professions. Nurses typically work in hierarchical social structures as salaried employees. Conversely, in many countries physicians typically work in more autonomous group practices, or in hospitals not as salaried employees but as attending physicians with privileges [12]. Given these configurations, and the different relationships with the organization that result, organizational context is likely to exert different influences on the two groups. A second key difference, related to inpatient care, is the nature and structure of the two professions' work. Nursing is typically responsible for continuous care over a short period of time, whereas medical practice more often involves episodic contact, frequently of longer duration. Moreover, nursing practice does not typically include medical diagnosis or the prescribing of diagnostic or therapeutic interventions (although this is changing with the movement to nurse practitioners and other extended practice nursing roles). While these differences are less pronounced beyond inpatient settings (e.g., community care), the majority of nursing care continues to be provided in hospital settings. Therefore, results from existing reviews cannot be assumed to transfer readily or well to nursing practice in general.
Another weakness of the existing literature, we argue, is investigators' reliance upon provider behavior change as a proxy for research use. For example, 88.8% of the studies included in a widely cited and influential systematic review of studies aimed at increasing evidence-based practice used changes in practice behavior as outcome measures [13]. Using provider behavior as a proxy for research use has several limitations.
First, there are different meanings of research use. Scholars generally accept three forms of research utilization: instrumental, conceptual, and symbolic [14]. Instrumental research utilization is the concrete application of research in practice [15]; most often, this involves using research to carry out an actionable behavior. Conceptual research utilization is the use of research to change one's thinking but not necessarily one's actions [15]. Symbolic research utilization refers to the use of research to influence policies or decisions [15]. Investigators have shown that all three forms of research utilization can be measured with self-report questionnaires [14]. However, the authors of existing studies (and reviews) have relied primarily upon behavior change outcomes [13]. Because instrumental research use results in actionable behavior while conceptual and symbolic use may not, measuring behavior change may capture only instrumental research use – a portion of the larger research utilization construct.
Second, research in our group has focused on more general measures of research utilization, as opposed to guideline- or innovation-specific measures. Guideline-specific measures have an important role in understanding the influences on research uptake, and they permit identification of guideline characteristics that may differentially influence reports of research use. However, we lack direction when attempting to ascertain a level of uptake that can be considered representative of a patient care unit or organization, or when seeking a formula with which to derive a unit's or organization's level of research uptake. Thus, researchers working at organizational levels must rely on the more general measures described above. Our experience with these general measures has been reasonably promising: we are able to capture variance in responses, the responses are reasonably normally distributed, and factors one would expect to predict research utilization have generally done so.
Third, while research utilization is assumed to have a positive impact on patient outcomes through provider behavior, this relationship is poorly understood, and the means by which it occurs is believed to be inconsistent and complex [21]. The process by which research comes to be used in practice has, in fact, been treated as something of a 'black box' phenomenon [22]. We know that providers base their behavior on many mediating factors, only one of which may be research findings [21]; factors such as professional training, clinician experience, organizational context, and administrative support are also influential. Drawing conclusions about the effectiveness of research utilization interventions from changes in provider behavior alone is therefore probably unreliable, because it is not clear how much of a behavior change can be ascribed to research use and how much to other factors. If a provider behavior change results in a change in patient or other outcomes, investigators are unable to determine whether this is a direct effect (of provider behavior on patient outcome) or an indirect effect, that is, an effect mediated by research utilization. If it is the latter, then understanding which factors are mediated via a research utilization variable is important: the causal forces exerted on that variable may themselves be modifiable, but would remain undetected if only behavior change were measured.
The aim of this systematic review was to assess the evidence on interventions aimed explicitly at increasing research use in nursing practice. We were therefore interested in reports in which the investigators had explicitly measured research use, that is, studies employing some general measure of research use.