Randomized clinical trials support the efficacy of a wide range of psycho-educational interventions. However, the mechanisms through which these interventions improve outcomes are not always clear. At times, the theoretically specified factors within interventions have been shown to have specific effects on patient outcomes. But it has also been argued that other factors not identified in the intervention theory (e.g. “non-specific” factors such as patient expectations and therapeutic patient-clinician alliances) have powerful “non-specific” effects that account for most, if not all, of the observed efficacy of psycho-educational interventions. This paper describes important concepts in this debate and discusses key issues in distinguishing between specific and non-specific effects of psycho-educational nursing interventions. Four examples are used to illustrate potential methods of identifying and controlling for non-specific effects in clinical intervention trials.
In a recent editorial, Conn (2008) called for increased efforts to publish reports of studies that quantitatively assess effects of nursing interventions. She emphasized the persistent gap between “real world nursing practice and available empirical evidence about interventions” (p. 913). One critical aspect of such research is identifying an intervention's active components so they can be adequately communicated, transferred, and implemented across populations and settings. In the “real world”, nurses do not have time to deliver intervention components that fail to contribute to beneficial outcomes. Moreover, they need to know the specific factors that must be included to reproduce improvements in outcome. It is the role of nurse investigators to determine whether the theoretically active components of an intervention produce specific effects on patient outcomes or whether other factors influence outcomes (resulting in non-specific effects). A long history of opinion exists around specific and non-specific treatment effects in the medical and psychology literatures, but this important topic has received far less attention in nursing. The purpose of this paper is to review key aspects of the historical debate, to discuss the implications of these arguments for tests of psycho-educational interventions in nursing, and to provide examples of approaches that can be used by nurse investigators to account for specific and non-specific treatment effects.
In testing health-related interventions (psycho-educational or otherwise), researchers develop their treatments to include theoretically-derived “specific” factors. That is, they include key elements with theoretically derived actions that are expected to produce the desired change in health outcomes. For example, Johnson's self-regulation theory suggests that providing patients with education including sensory preparation prior to a noxious medical procedure will help them make sense of their experiences and enhance coping (Johnson, Fieler, Jones, Wlasowicz, & Mitchell, 1997). In this example, a careful, detailed description of what patients can expect to hear, see, feel, smell, or taste during the procedure is a theoretically-derived specific factor of the intervention. It provides the means through which coping is facilitated. However, effects may also occur due to other “non-specific” factors in the intervention.
Non-specific factors are elements of the intervention not specified or directed by the theory. For example, factors such as patient expectations for improvement (outcome expectations), the credibility of the person providing the treatment (interventionist), and credibility of the treatment being delivered often influence study outcomes, even though they are not usually specified as active ingredients in the intervention theory (Greenberg, Constantino & Bruce, 2006; Weinberger & Eig, 1999). Some non-specific factors are so integral to psychological and nursing research that they have been termed “common factors”, meaning they are shared, or commonly occur, across most intervention studies. The therapeutic relationship between interventionist and subject is an example of a common factor. When the interventionist acts in a congenial, caring manner toward the subject, the interventionist's warmth, positive regard, collaboration toward a mutual goal (alliance), and engagement may have an impact on the subject's response. In nursing practice, harnessing the potential of non-specific factors can improve patient outcomes; however, in nursing research, when the goal is to understand the critical components of our interventions, it is imperative to evaluate the relative contributions of specific and non-specific factors to treatment outcomes. (See Table 1 for descriptions and examples of relevant terminology.)
In Lewis Carroll's (1865) Alice's Adventures in Wonderland, the characters emerge, soaking wet, from a pool of Alice's tears. The dodo bird proposes a race as a way to dry out. No one cares to measure how far each character ran, or how fast, or for how long. After a time, the dodo bird notices that everyone is dry and declares the race over. When he is asked to name the winner, the dodo bird announces, “everybody has won, and all must have prizes.” In 1936, Rosenzweig first introduced the notion of a “dodo bird verdict” in the psychology literature. He argued that factors other than hypothesized specific factors may be creating beneficial effects among persons receiving psychological treatments. Despite different schools of thought and varying theorized mechanisms of action, Rosenzweig suggested that a large proportion of the beneficial impact of all psychotherapy techniques was based on their shared common factors. He asserted that the therapeutic qualities of the psychologist, his or her warmth and compassion toward the patient, and the psychologist's good intentions to “help” accounted for improvements in outcome. Without careful controls and measurement of different factors and their effects, psychology was no better able to determine the winner among psychological treatments than the dodo bird was in identifying the race winner: all treatments have won, and all must have prizes.
During the same time period, medical researchers were widely adopting the randomized double-blind placebo-controlled study design as an approach to control for non-specific treatment effects. In this type of randomized trial, investigators (1) use a structurally equivalent control group, (2) blind study participants, administrators, and examiners to treatment condition, and (3) attempt to control psychological factors. The control group receives a structurally equivalent placebo treatment designed to match the active intervention in almost every respect except the active ingredient. A structurally equivalent placebo provides the same treatment modality (e.g., pill or injection) in a form that is indistinguishable from the active treatment (e.g., same color, size, shape, etc.) and is administered in the same number of sessions or treatments (e.g., taken on the same schedule). Double-blinding masks both the research team and the study participant from knowing which treatment is being delivered (active treatment or placebo). Combined, blinding and the use of a convincing placebo control for the influence of psychological factors on participant outcomes. With blinding and placebo control in place, psychological influences on outcomes should be similar in the treatment and control groups. If the treatment group outcomes are superior to those in the control group, then the treatment is having an effect above and beyond the placebo response. Blinding the research team prevents subtle messages or cues from a knowing researcher that could influence patients' perceptions of group assignment and expectations for response.
In the mid-1950s, psychologists began to integrate the double-blind placebo-controlled trial into their design of studies testing psychological treatments (Rosenthal & Frank, 1956). Control conditions were designed that matched the psychological treatment in most regards, but did not include any of the theoretically-designated active components. These structurally equivalent control conditions required use of an equal modality (group or individual meeting), delivered in an equal number of sessions and for an equal duration of time, administered by a therapist with equal skill and training, provided with an equal level of individualization to the client, and with equal opportunities offered for clients to discuss their concerns.
Scholars have questioned the validity of the double-blind placebo-controlled approach in psychological (including psycho-educational) research. Psychological treatments are very different from most treatments (medications) used in medical research. In psychological interventions, therapist blinding is exceedingly difficult, if not impossible. The psychologist delivering the intervention must know what treatment he or she is providing. Given this knowledge, there is concern that a less faithful and less passionate approach may be used in delivering the placebo treatment. With psychological treatments, the active and placebo treatments are likely to be distinguishable from each other, even by an unknowing or naïve research subject. This results in the placebo treatment being less credible than the active treatment, and participants therefore having lower expectations for the desired outcome. An additional concern is that deception is required in knowingly delivering a placebo treatment; many psychologists are uncomfortable using this practice with vulnerable patient populations.
Drawing on this criticism, Wampold and colleagues (1997) published seminal work in the area of non-specific effects of psychotherapy. Wampold's team conducted a meta-analysis of studies in which the efficacy of two or more “bona fide” psychological treatments had been directly compared with each other. Bona fide treatments were defined as those that had been found to be efficacious when compared to a waitlist or placebo control, were delivered by a master's or doctorally prepared therapist, involved development of a therapeutic relationship, and were tailored to the patient's individual needs. Analyses of these head-to-head comparisons revealed a difference in effect sizes near zero. Findings indicated that regardless of the differences in theoretically-derived specific factors, effects of the various psychological treatments were essentially equal. This lack of treatment differences led Wampold and others to suggest that “common factors”, rather than the specific theoretically-derived components, accounted for the benefits resulting from these treatments.
Crits-Christoph (1997) criticized Wampold's methods, including the study selection criteria and the averaging of effects across all dependent variables. But perhaps more significantly, Crits-Christoph criticized Wampold's conclusions. In each of the head-to-head studies, two active but theoretically distinct treatments were compared. In such comparisons, Crits-Christoph argued, equal effects do not necessarily imply a lack of specific effects. It is possible that both treatments acted as theoretically specified and that both produced similar beneficial outcomes through different causal mechanisms.
In 2003, Wampold and his team (Baskin, Tierney, Minami, & Wampold, 2003) conducted a second meta-analysis, this time evaluating the effect sizes obtained when psychotherapy interventions were compared to structurally equivalent control conditions versus when they were compared to non-equivalent control conditions. Studies were included if they used a randomized controlled design with an adult sample and provided in-person treatment from a trained therapist in more than one session. Control groups were rated for structural equivalence based on six criteria (number of sessions, length of sessions, group or individual sessions, skill and training of the therapist, standardized or individualized therapy, and ability of participants to discuss their issues and receive the same information). If there was a difference between the control group and the active intervention on any of these six criteria, the control group was classified as structurally non-equivalent. Effect sizes were aggregated across all dependent variables, resulting in one estimate per study. A significant difference was identified, with a larger effect size in the non-equivalent control (d = .465) than in the equivalent control (d = .149) studies (Q = 8.57, p = .003). The investigators interpreted this finding to suggest that active treatments were little or no more effective than non-specific factors alone. In fact, Wampold and his colleagues went a step further and concluded that “… active treatments were not demonstrably superior to well-designed placebos” (Baskin et al., p. 973) and, therefore, that current research does not support the efficacy of specific factors in psychotherapy interventions. However, one could argue that the above study did indeed reveal active treatments to be a little more effective than the controls. To claim that they were no more effective is an exaggeration, given that the data revealed statistically significant differences between treatments and even the structurally equivalent control groups.
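The subgroup comparison underlying this kind of moderator analysis can be illustrated with a short sketch. Under a fixed-effect model, each subgroup's studies are pooled by inverse-variance weighting, and a between-groups Q statistic (with degrees of freedom equal to the number of subgroups minus one) tests whether the pooled effects differ. The per-study effect sizes and variances below are hypothetical values chosen for illustration only, not data from Baskin et al.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect pooled estimate: inverse-variance weighted mean.

    Returns the pooled effect size and the total weight.
    """
    weights = [1.0 / v for v in variances]
    d = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return d, sum(weights)

def q_between(groups):
    """Between-groups heterogeneity statistic.

    groups: list of (effects, variances) tuples, one per subgroup.
    Returns (Q, p); p is computed only for df = 1.
    """
    pooled = [pooled_effect(e, v) for e, v in groups]
    grand_w = sum(w for _, w in pooled)
    grand_d = sum(d * w for d, w in pooled) / grand_w
    q = sum(w * (d - grand_d) ** 2 for d, w in pooled)
    df = len(groups) - 1
    # For df = 1, the chi-square survival function reduces to erfc(sqrt(q/2))
    p = math.erfc(math.sqrt(q / 2)) if df == 1 else None
    return q, p

# Hypothetical per-study standardized mean differences (d) and variances
nonequiv = ([0.55, 0.40, 0.48], [0.04, 0.05, 0.06])  # non-equivalent controls
equiv = ([0.12, 0.18, 0.15], [0.04, 0.05, 0.06])     # structurally equivalent controls
q, p = q_between([nonequiv, equiv])
```

A significant Q here would indicate that the type of control condition moderates the apparent treatment effect, which is the logic behind comparing the d = .465 and d = .149 subgroups.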
Craighead, Sheets and Bjornsson (2005) offered another perspective on Wampold's work and the suggestion that non-specific factors account for the impact of all psychological treatments. They suggested that with adequate research, non-specific effects will become “previously unspecified effects”. That is, all interventions may have some non-specific components, but with careful work to identify what those components are and testing to demonstrate their impact, they become specific factors known to be active components of the intervention. In taking this stance, they acknowledge that outcomes of psychological treatments are the result of both specific and non-specific factors as well as their interactions. Thus, a goal of efficacy studies should be to identify and understand the relative contribution of both specific and non-specific factors.
Much of the debate that has occurred in psychology is applicable to nursing intervention research, particularly with regard to psycho-educational interventions. Psycho-educational and cognitive-behavioral interventions may contain non-specific factors that, similar to psychotherapy interventions, are likely to influence patients' treatment responses. The therapeutic alliance, personal relationship, kindness, compassion, and intention to be helpful are similar in both nursing and psychology. Patient expectations, beliefs in the credibility of treatment, and motivation to improve are also similar across interventions in both disciplines. However, far less attention has been paid to non-specific factors in nursing research as compared to psychology research. In nursing, many investigators are at the stage of testing whether their interventions are superior to control. But as we move to head-to-head comparisons of the relative efficacy of different approaches, we will run into the very issue that our colleagues in psychology have discussed for several decades. Can we show that the theoretically specified factors in various nursing interventions result in significant, specific effects or, rather, will we find that non-specific factors account for similar effects across different interventions? Several techniques can help distinguish between specific and non-specific effects. The use of well-designed (structurally equivalent) control groups is the best-known way to distinguish specific from non-specific effects, likely due to the familiarity most researchers have with double-blind placebo-controlled trials. Additionally, investigators can measure and analyze process variables (e.g., mediator variables, differences in intervention uptake and use by patients, or strength of therapeutic alliance) and use outcome measures designed to capture both specific and non-specific effects.
In the following section we briefly present four cases of clinical trials that attempted to identify and/or control for non-specific factors and their effects on the outcomes achieved. The cases illustrate (1) identification of subject-researcher interactions outside the context of the intervention that may generate non-specific effects, (2) use of a manipulation check analysis to identify both the specific and non-specific factors in an intervention, (3) use of different types of outcome measures to discriminate between specific and non-specific effects, and (4) use of a carefully designed structurally equivalent control group to separate the effects of specific versus non-specific factors.
Subject-researcher interactions that may produce non-specific effects are typically thought of as occurring during intervention sessions, but such interactions begin during recruitment and may have critical influences on subjects in both intervention and control conditions. The following example illustrates how important subject-researcher interactions can be and how racial concordance of researchers and subjects may increase the influence of these interactions. The study described here was a pilot study of a psycho-educational intervention termed ACTS (Attitudes, Communication, Treatment, Support) designed to improve adherence to breast cancer therapy recommendations among African American women (Rosenzweig, Serieki, Brufsky, Waihagen & Arnold, 2007). The hypothesis was that the peer-derived and peer-delivered ACTS intervention, which incorporates a discussion of attitudes about chemotherapy, communication strategies, treatment explanation, and overall support, would increase adherence to recommended chemotherapy among black women with breast cancer.
Subjects randomized to the ACTS intervention met with a race-matched peer interventionist who recruited patients, delivered the intervention, and obtained follow up data. The ACTS intervention began at the participant's first visit to her medical oncologist, immediately after a chemotherapy recommendation was made. Usual care consisted of care routinely provided in the clinic.
All eligible subjects consented to and enrolled in the study, and all subjects, both those assigned to the ACTS intervention and those assigned to usual care, demonstrated 100% adherence to their chemotherapy regimens. Such success in recruitment and such adherence to treatment in the usual care group were unexpected findings that are completely contrary to findings in the published literature (Griggs et al., 2003; Haggstrom, Quate & Smith-Bondman, 2005; Polite & Olopade, 2005; Slater, Brufsky, Conroy, Frisina & Rosenzweig, 2008; Tammemagi, Nerenz, Neslund, Feldkamp, & Nathanson, 2005). Comments from subjects in both groups (usual care and intervention) strongly suggested that the race of the recruiter / interventionist was critical in explaining these unexpected findings. Note that this racially-concordant strategy was purposefully designed to strengthen the likelihood of success in African American recruitment, data collection and retention. Clearly the strategy was successful for these purposes, but the presence of an African American woman in a professional health care role and attention to issues faced by black women at a vulnerable period (chemotherapy initiation) also appears to have had the unexpected effect of making both groups of subjects feel more committed to their treatment regimens.
In this study, usual care was not necessarily “usual” due to subject-researcher interactions during recruitment. The selection of a race-matched woman as a recruiter, while race-sensitive, likely represented more supportive “presence” than African American women may experience in a typical racially discordant care environment. Such attention may influence adherence outcomes. Thus, this is an example of non-specific effects that may have been created by recruitment strategies.
The Written Representational Intervention To Ease Symptoms (WRITE Symptoms) study provides an example of how qualitative analyses of intervention delivery can help to identify specific and non-specific factors in an intervention. WRITE Symptoms is a web-based symptom management intervention based on the Representational Approach to patient education (Donovan & Ward, 2001; Donovan et al., 2007). Improved symptom management is proposed to occur through conceptual change fostered by five specific factors within the intervention: 1) assessing symptom representations; 2) addressing gaps in knowledge and misconceptions; 3) linking current beliefs and strategies with symptom management problems; 4) providing new strategies that could improve management; and 5) goal setting and planning. The intervention is conducted via private asynchronous message board postings between participants and nurse interventionists.
In all experimental designs, investigators control the independent variable (IV) through an experimental manipulation. A manipulation check allows the investigator to see whether the participants received the experimental intervention (the manipulation) as planned. We analyzed the content of patients' postings as a manipulation check to determine whether patients “received” the theoretically specific factors of the WRITE Symptoms intervention as intended. All participant postings (n=881) containing reflections on study participation were extracted. Through an iterative process, four themes were identified reflecting both specific and non-specific factors within the intervention (Donovan et al., 2008). The first theme was “changes in how participants viewed their symptoms”. Women reported that writing about their symptoms in response to research nurses' detailed questions stimulated them to think about their symptoms differently and gave them a greater sense of control. The second theme was “expert symptom management advice”. Women described the nurses as “experts who cared” and found the symptom management advice to be very helpful. A third theme was the benefits of “emotional support”. Women expressed strong emotional connections with their nurse, felt “supported in their journey”, and felt less lonely and isolated. Finally, participants frequently noted the “convenience of the message board system”. They appreciated the ability to participate on their own schedules from their own homes. They also felt that asynchronous communication gave them the opportunity to think carefully about their responses, without the time pressure of a face-to-face or telephone interaction.
Through a qualitative manipulation check, both specific and non-specific factors were identified within the WRITE Symptoms intervention. As expected, the representational assessment and goal setting/planning approaches seemed to generate conceptual change for participants, helping them to feel a greater sense of control over their symptoms. The symptom management advice was another specific factor identified by participants. Interestingly, two specific factors (discussing misconceptions and linking current concerns with poor symptom management) were not reflected in participants' postings. Future research should evaluate whether: a) these are not critical components of the intervention, b) they were not adequately implemented in this study, or c) they are critical, but not explicitly recognized by participants.
Three non-specific factors were identified: emotional support, benefits of writing, and convenience of asynchronous interactions. The extent to which writing and asynchronous communication are critical for promoting conceptual change should be evaluated in future research. These may be important “previously unspecified” factors.
Two lines of work suggest that specific versus non-specific effects of interventions can be distinguished by the type of outcome measures that one examines. First, quality improvement studies have shown that patient satisfaction with health care is generally very high in spite of the presence of unmanaged symptoms such as pain (Gordon et al., 2002). Second, in tests of interventions it is common to see data indicating that patients like an intervention even though the effects of that intervention on the intended outcomes are negligible (e.g., see Murphy, Price, Stevens, Lynn & Kathryn, 2001). These two lines of work suggest that satisfaction measures tap patients' affective/evaluative reactions to the interpersonal elements of a nursing intervention; that is, measures of satisfaction tap non-specific effects.
The use of a satisfaction measure to assess non-specific effects is illustrated in a study of a psycho-educational intervention designed to improve pain management in persons with cancer (Ward, in press). In a randomized trial of 161 adults with cancer pain, the efficacy of a psychoeducational intervention delivered to patient-significant other (SO) dyads was compared with delivery to patients alone (Solo) and with care as usual (control). A mediational hypothesis was also tested. The hypothesis posited that the intervention would decrease attitudinal barriers to cancer pain management, which would in turn improve outcomes such as pain severity and quality of life. Subjects in the Dyad and Solo conditions received intervention information at baseline and two and four weeks later. All participants completed standardized measures at baseline. Nine weeks after baseline they completed the standardized measures as well as an 8-item evaluation of their satisfaction with the intervention and the study.
Between baseline and 9 weeks, patients in both the Dyad and the Solo groups showed greater decreases in attitudinal barriers compared to those in the control condition, and changes in patients' attitudinal barriers mediated the relationship between the intervention and all of the standardized outcomes (e.g., pain severity and quality of life). On the evaluation/satisfaction items, patients in the Dyad and Solo conditions reported significantly higher scores for every item compared to those in the control group. Furthermore, the effect sizes for these findings were larger than the effect sizes for the mediation analyses.
The effects of the intervention appear somewhat modest when one examines standardized measures but robust when one examines the evaluation/satisfaction items. This pattern is congruent with non-specific effects of an intervention wherein one sees larger effects on satisfaction than on specific standardized measures. On the other hand, the study provides evidence of the specific effects of the intervention in that the mediation analyses support the specific hypothesized effects of the intervention. Thus, use of different kinds of outcome measures provided evidence for both the specific and the non-specific effects of the intervention.
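The mediation logic described above (intervention reduces attitudinal barriers, which in turn improves outcomes) can be sketched with a simple product-of-coefficients analysis. The simulated data and coefficient values below are hypothetical and for illustration only; the sketch estimates the mediator's coefficient while controlling for treatment by residualizing both mediator and outcome on treatment (the Frisch-Waugh approach), which for ordinary least squares gives the same coefficient as the full two-predictor regression.

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y regressed on x (simple regression with intercept)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after regressing y on x (with intercept)."""
    b = slope(x, y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

# Simulated (hypothetical) data: X = intervention (0/1), M = attitudinal
# barriers, Y = pain severity. True effects chosen so the intervention
# lowers barriers (a = -1.0) and barriers raise pain (b = 0.8), giving an
# indirect (mediated) effect a*b of about -0.8.
random.seed(42)
n = 200
X = [i % 2 for i in range(n)]
M = [2.0 - 1.0 * x + random.gauss(0, 0.5) for x in X]
Y = [1.0 + 0.8 * m + random.gauss(0, 0.5) for m in M]

a = slope(X, M)  # path a: intervention -> barriers
# Path b: coefficient of M in Y ~ X + M, via residualization on X
b = slope(residuals(X, M), residuals(X, Y))
indirect = a * b  # product-of-coefficients estimate of the mediated effect
```

In practice the indirect effect would be tested with a bootstrap or Sobel-type standard error, but the decomposition itself is what distinguishes a theoretically specified mechanism (barriers) from a direct or non-specific pathway.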
Kwekkeboom, Wanta, and Bumpus (2008) conducted a study testing the effects of two cognitive-behavioral strategies (guided imagery and progressive muscle relaxation) on pain intensity and pain-related distress in hospitalized patients with acute cancer pain. The study used a structurally equivalent control condition in a crossover trial. Participants were randomly assigned to one of two treatment orders: (1) Control - Guided Imagery - Control - Relaxation, or (2) Control - Relaxation - Control - Guided Imagery. The guided imagery intervention was a 15-minute exercise using mental images to replace pain with more comfortable or numbing sensations. The progressive muscle relaxation intervention was a 15-minute exercise in which patients were instructed to tense and relax 12 muscle groups in sequence, focusing on the pleasant sensations of relaxation. The control condition provided 15 minutes of information relevant to hospitalization (e.g., orientation to hospital services and the health care team, exercises to stay active while hospitalized, and planning for discharge). All intervention and control conditions were provided as audio recordings, using the same voice, delivered on CD with headphones, and were similar in length.
The specific factors comprising the active interventions were techniques intended to help the patient gain a sense of control over pain sensations and pain-related distress. For imagery, the specific factor was mental reinterpretation of pain sensations. For relaxation, the specific factor was release of muscle tension and focus on the pleasant sensations of relaxation. The non-specific factors in both active and control interventions included attention from a research nurse, equipment (CD player, headphones), completing pain measures, distraction through an audio-recorded message, and undisturbed time to sit or lie quietly. Thus, treatment and control conditions were largely equivalent except for the content of the recorded message.
Pain intensity and pain-related distress were measured before and after each recording. Data analysis revealed that all three conditions produced a statistically significant reduction in pain intensity and pain-related distress. But when active treatments were compared to the control condition, both guided imagery and progressive muscle relaxation resulted in greater improvement in pain than the control condition. These findings suggest that the specific effects of guided imagery and progressive muscle relaxation include, but are greater than, the non-specific effects alone. Using a structurally equivalent control condition in this study allowed more accurate assessment of the relative contribution of specific and non-specific factors to overall treatment outcome.
Theories help researchers and practitioners to think about their work in systematic ways and to identify specific factors that are likely to produce specific effects under certain conditions (Hochbaum, Sorenson, & Lorig, 1992). Theory-guided intervention research is increasingly valued in nursing, but there has been inadequate attention to the issue of distinguishing between specific and non-specific effects of interventions. The psychology literature provides a rich history of debate on whether non-specific factors are more responsible for beneficial effects of psychological interventions than specific factors. While the controversy continues, there are many implications for nursing research that can be drawn from the discussions to date.
In most psycho-educational interventions, both specific and non-specific factors influence outcomes. Arguably, some non-specific factors such as therapeutic relationships and communication may be necessary to the interventions. To attempt to eliminate them from the theoretically specified factors would likely damage the intervention as a whole. Instead, these non-specific factors need to be acknowledged and accounted for when trying to understand the mechanisms and specific effects of interventions.
Several approaches for distinguishing specific and non-specific effects of interventions have been discussed here, including the use of structurally equivalent control groups and the use of mediating variables and outcome measures targeted to capture specific and non-specific effects. In addition, approaches for identifying unexpected but important non-specific factors were discussed. An alternative approach may be to begin with complex, multi-component interventions in which the specific factors are not clearly identified. In subsequent dismantling trials, individual components are removed and the effects of removal are evaluated in order to determine the critical specific factors in the intervention.
Different investigators will no doubt come to different conclusions as to where to place priorities in advancing the science of psycho-educational research. Regardless of approach, the ability to distinguish between the specific, theoretically proposed effects and non-specific effects of interventions is essential for advancing the science in nursing intervention research.
This paper was presented, in part, at the 2008 National State of the Science Congress on Nursing Research, October 2-4, Washington D.C. This work was supported by NINR R01 NR03126 (S. Ward, PI), NINR P20 NR008987 (S. Ward, PI), NINR R21 NR009275 (H. Donovan, PI), the University of Wisconsin-Madison Graduate School (K. Kwekkeboom, PI), NCI K07 CA100588 (M. Rosenzweig, PI) and the Susan G. Komen Foundation (POP33006; M. Rosenzweig, PI).