Postgrad Med J. 2007 April; 83(978): 287–288.

The “dirty tricks” experience can play on us

Short abstract

The therapeutic effect of a medical intervention can be due to the specific effects of the therapy, but there is also a multitude of other determinants. The totality of their impact can be such that even a treatment causing no or negative specific effects can be followed by a positive perceived therapeutic response.

“At least treatment x does not harm my patient.” How often do clinicians think along these lines? In my field of complementary/alternative medicine, it is arguably the most common reason for using this or that therapy: there is usually little “hard” evidence to suggest harm (by “harm” I mean a negative effect on the disease, not a simple adverse effect). So, if treatment x does not make the condition worse and the patient is keen to try it, we may well decide to condone its use. There is nothing wrong with such a decision—or is there?

If reliable data are missing, how do we know treatment x does not worsen the condition? For one, we have our experience. Then there is the fact that this treatment may have been around for decades or even centuries. And perhaps a few observational studies are also available—of course, this type of evidence is not all that reliable but, in total, it must amount to something. Perhaps we cannot conclude that treatment x is effective, but surely we can be quite certain that it does not make matters worse? I beg to differ.

Experience, the “test of time”, and observational studies all have one thing in common: the lack of a control. If we want to draw conclusions about cause and effect (and “this treatment does not harm my patient” is such a conclusion) we need a positive or negative control. Case reports and observational data are, by definition, open to confounding and bias and therefore unreliable. As a consequence, causal inferences are problematic.
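
To make this concrete, consider a minimal numerical sketch (the figures below are invented for illustration, not taken from any study) of how an uncontrolled before-and-after comparison can hide a harmful specific effect that a control group would expose:

# A minimal Python sketch, with invented numbers, of why a control matters.
# Symptom scores (higher = worse) measured before and after a treatment period.
before_treated, after_treated = 60.0, 48.0    # patients given treatment x
before_control, after_control = 60.0, 46.0    # comparable untreated patients

change_treated = before_treated - after_treated    # 12 points of improvement
change_control = before_control - after_control    # 14 points of improvement

# Without a control we see only the first number and conclude "x works".
print(f"Uncontrolled impression: {change_treated:+.0f} points")

# With a control, the specific effect is the difference of the two changes.
specific_effect = change_treated - change_control
print(f"Estimated specific effect: {specific_effect:+.0f} points")  # -2: x slowed recovery

In this hypothetical case every treated patient improves, yet the treatment itself has hindered recovery; only the comparison with controls reveals it.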

Unfortunately, medicine has a long tradition of disregarding this rather obvious fact. Whenever doctors administer a treatment in clinical practice, they are likely to attribute any ensuing clinical improvement to the specific effects of their intervention. In other words, we regularly draw definite conclusions about cause and effect on less than solid grounds. We observe some improvement and believe it was caused by our therapy; or we observe no improvement, but no harm either, and conclude “at least it does no harm”.

Let's try to get some conceptual clarity about what is really going on in such a situation. Figure 1 schematically depicts the case of a patient (or a group of patients) receiving treatment x. Over time, the symptoms improve and we therefore perceive a therapeutic effect. The assumption is that this “perceived therapeutic effect” is due to the specific effect of the intervention.

Figure 1 Schematic analysis of a typical treatment situation.

In reality, the “perceived therapeutic effect” can be caused by a multitude of effects. Figure 2 shows schematically the range of factors which can be involved. It is easy to see that, even if the specific therapeutic effects were negative (ie, the treatment is harmful), the total perceived therapeutic effect could still be positive. It follows that ineffective and even harmful interventions can be falsely associated with overall improvement. In other words, the fact that our patient gets better, or at least no worse, does not mean that the treatment was effective or harmless.

Figure 2 Schematic differentiation of factors contributing to the perceived therapeutic effect.
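
As a purely illustrative sketch of this decomposition (the component names and magnitudes below are assumptions chosen for the example, not values taken from figure 2), a short simulation shows how a mildly harmful treatment can still be followed by a clearly positive perceived response:

import random

random.seed(1)

def perceived_improvement():
    # Hypothetical non-specific contributions (assumed values):
    natural_course = random.gauss(3.0, 1.0)      # a self-limiting condition improves anyway
    regression_to_mean = random.gauss(2.0, 1.0)  # patients seek help when symptoms peak
    placebo_response = random.gauss(1.0, 0.5)    # expectation, attention, therapeutic ritual
    specific_effect = -1.0                       # treatment x itself is mildly harmful
    return natural_course + regression_to_mean + placebo_response + specific_effect

improvements = [perceived_improvement() for _ in range(1000)]
print(f"Mean perceived improvement: {sum(improvements) / len(improvements):+.1f} points")
# Prints roughly +5 points: a harmful treatment that nonetheless appears to help.

Under these assumed numbers, the non-specific components together outweigh the negative specific effect, so the perceived therapeutic effect remains firmly positive.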

This analysis is, of course, only schematic and therefore has its limitations. But it outlines how complex cause–effect relationships can be in clinical medicine. I believe that conceptual clarity is essential for recognising what “dirty tricks” experience can play on us. If nothing else, it teaches us to be (self) critical and to insist on reliable evidence—that is, on results which rigorously control for the multitude of confounding factors and biases.

Footnotes

Competing interests: None
