The assessment of the placebo effect faces a basic conundrum. Patients may desire to please the researcher, or simply give a “correct” or expected answer that fits the experimental situation (19). When patients report that they feel better after receiving a placebo intervention, how do we know to what degree this reflects genuine symptomatic improvement, such as pain relief, attributable to the placebo effect rather than to response bias? Patient-subjects who receive placebo interventions in clinical trials or laboratory experiments, believing that they were or may have been real treatments, might be disposed to report positive outcomes to please the investigators with whom they have a clinical relationship. Conversely, those who did not receive any study intervention might be disappointed and disposed to report negative or “correct” outcomes.
This conundrum arises from two considerations. First, there is no blinded control for the placebo effect. Second, the placebo effect is most likely to play a role in the treatment of conditions in which the outcome targets are subjective (15) and necessarily based on introspective self-reports, for example, pain. In controlled trials to assess the placebo effect (whether in clinical or laboratory settings), the placebo intervention is usually compared with a no-treatment control. Research subjects in the no-treatment arm necessarily know that they are not receiving treatment.
In assessing the efficacy of a pharmacological intervention with respect to subjective outcomes, such as relief of pain, blinded placebo-controlled trials are able to discriminate real effects from response bias. Patient-subjects may be disposed to report favorable outcomes by virtue of trial participation. But because they are randomized to masked drug or placebo, significant improvement in the treatment group as compared with the placebo group can be attributed to the efficacy of the treatment, provided that masking was successful. Response bias may inflate the apparent drug effect (the change from pre-trial baseline to the time of outcome measurement); likewise, it may account for all or part of the response in the placebo arm. However, given randomization and blinding, there is no reason to infer that response bias is greater in one arm than in the other. In contrast, controlled trials to assess the placebo effect cannot factor out response bias in this way, because they cannot be blinded.
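The asymmetry described above can be sketched in a toy simulation (the effect sizes and bias magnitudes below are illustrative assumptions, not estimates from any trial): when both arms are blinded, the same response bias enters each arm and cancels out of the between-arm difference; when an arm cannot be blinded, the bias differs by arm and becomes inseparable from the effect being estimated.

```python
import random

random.seed(0)
N = 10_000  # subjects per arm (large, to make the means stable)

def mean(xs):
    return sum(xs) / len(xs)

def simulate_blinded(drug_effect=2.0, placebo_effect=1.0, bias=0.5):
    """Blinded RCT: the same response bias enters both arms, so it
    cancels out of the between-arm difference, leaving the drug effect."""
    drug_arm = [drug_effect + placebo_effect + bias + random.gauss(0, 1)
                for _ in range(N)]
    placebo_arm = [placebo_effect + bias + random.gauss(0, 1)
                   for _ in range(N)]
    return mean(drug_arm) - mean(placebo_arm)  # approximately drug_effect

def simulate_placebo_vs_no_treatment(placebo_effect=1.0,
                                     bias_treated=0.5,
                                     bias_untreated=-0.3):
    """Placebo vs. no-treatment cannot be blinded: subjects know their
    arm, so bias differs by arm and is confounded with the placebo
    effect in the between-arm difference."""
    placebo_arm = [placebo_effect + bias_treated + random.gauss(0, 1)
                   for _ in range(N)]
    no_treatment_arm = [bias_untreated + random.gauss(0, 1)
                        for _ in range(N)]
    # approximately placebo_effect + (bias_treated - bias_untreated):
    # the two components cannot be separated by this design
    return mean(placebo_arm) - mean(no_treatment_arm)
```

Under these assumed numbers, the blinded comparison recovers the true drug effect (about 2.0), whereas the unblinded comparison yields about 1.8, an inseparable mixture of the assumed placebo effect (1.0) and the differential response bias (0.8).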
Another important aspect of response bias is that it is likely to be closely associated with the same causal factors hypothesized to produce placebo effects: a warm patient-provider interaction and the doctor’s verbal and non-verbal suggestion of an important beneficial treatment effect. Thus, the more a physician signals friendliness and confident expectation of improvement, the less likely the patient is to disappoint the doctor who is making such an effort. Recent qualitative studies of patients in randomized clinical trials have demonstrated that patients can become dramatically attached to the research team and deeply committed to the ‘success’ of a trial (23).
The conundrum of response bias is not limited to the typical clinical trial design. To elucidate the placebo effect, Benedetti and colleagues have deployed an experimental paradigm that compares patients’ responses to analgesic drugs under conditions of open and hidden administration (24). For post-surgical patients receiving open injections of drugs in the manner of a typical clinical encounter, a given dose of an opioid appears to produce a substantially greater reduction in pain than in patients who receive the same dose via a computerized infusion pump but are not informed about when the drug will be administered (25).
This paradigm has been interpreted as demonstrating a clinically meaningful placebo effect, or the placebo component of active treatment, without the use of a placebo control. The results are impressive, but can we reliably distinguish a real, greater reduction in pain in the open treatment group from response bias, given that the patients knew that they were being given an analgesic drug and that they were participating in an experiment to assess analgesia? Likewise, those receiving the hidden infusion may have been negatively biased in their assessment of pain relief, knowing that they were suffering from pain but not knowing when pain medication would be administered. The open/hidden design is not itself able to rule out the alternative possibility of response bias.
A possible solution to the conundrum of response bias is to design trials assessing placebo effects with objective outcomes, not susceptible to patient reporting behavior, and to blind the outcome assessor. This may be possible in some situations, such as studies of wound healing. However, even wound healing may be susceptible to variations in patient behavior; there is scant reliable evidence that placebo interventions modify objective outcomes in clinical trials (15); and what matters to patients is usually reduction in symptoms. Thus, trials with (only) objective outcomes would address a fairly limited number of clinically relevant problems.
Some have argued that neuroimaging technologies such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) can help determine whether placebo effects are independent of response bias (27). For example, one team of researchers has reported that placebo responses occur in pain-related areas of the brain during the time of stimulation and not only during assessment (28), while other researchers have shown that spinal cord mechanisms are involved in placebo analgesia (29). These experiments seem to indicate that at least some of the observed effect of placebo in an experimental setting is independent of response bias; however, they cannot rule out the hypothesis that some of the observed clinical effect is due to response bias.
In fact, other neuroimaging experiments point to potential involvement of response bias. For example, one study compared no treatment with placebo treatment and with placebo treatment plus naloxone. The placebo group reported significant pain reduction, and naloxone partially blocked this behavioral placebo response, as reflected in pain ratings (30). But when one examines the simultaneous brain activation patterns of placebo with and without naloxone, there are inconsistencies. In the placebo treatment group, the average blood-oxygen-level-dependent (BOLD) response across all pain-responsive brain regions decreased compared with controls. In the placebo treatment plus naloxone group, however, BOLD activation actually increased compared with controls. For this group, instead of the partial blocking of pain sensitivity found in the behavioral data, there was brain activation of the kind that usually represents a worsening of pain (not a partial blocking of pain reduction). This finding suggests that what is reported is not necessarily congruent with what is felt and that, at least some of the time, pain self-reports following placebo treatment are unrelated to the organic process of nociception.
While fMRI can measure the hemodynamic blood-oxygen-level-dependent (BOLD) effect, and PET can monitor regional cerebral blood flow and volume and map specific neuroreceptors using radiopharmaceuticals, neither method has advanced far enough to distinguish clearly and unequivocally to what extent the observed activations are “really” being felt by patients. In sum, just as there is no way to construct a blinded controlled trial to assess the placebo effect, so there is no way to eliminate subjectivity from patient-reported outcomes. This simply reflects the familiar but philosophically deep fact that there is no objective access to subjective experience.