In this didactic paper, I offer some personal reflections on perhaps the most common mistake made by beginning researchers: the belief that clinical research is not particularly difficult and can be done by pretty much anyone, regardless of training and experience. I will explain my views by examining several “myths” about research that I believe to be particularly common in the complementary and alternative medicine community. I will end by making practical suggestions to counter each of these myths.
Myth 1: Anyone can do clinical research
A friend of mine who is a professional mountaineer recently received the following email from a local hiking group: “Dear Sir, we would like to climb a mountain in the Himalayas, perhaps something 22,000–25,000 feet high. We understand from the literature that it is important to take bottled oxygen and were wondering what brand you would recommend.” I am joking, of course: I don’t know any mountaineers and I doubt anyone has ever sent such an email. That is because it is obvious to everyone: try to climb a Himalayan mountain without an experienced leader and you are going to get yourself killed.
Clinical research appears to be a different matter, however. There is a widespread impression that clinical research can be done by almost anyone, regardless of prior skills or experience. I regularly get emails similar in form to the one above; a recent example: “I want to do some research on massage and need an outcome measure. What would you suggest?” What I suggested was that the enquirer find an experienced researcher with whom to work. Similarly, a statistician friend received a call from a doctor: “My statistical software has given me an error message: data failed to converge. What does this mean?” My friend gave the only possible answer: “It means you need to see a statistician.”
Many clinicians I have met have a double standard: on the one hand, those engaged in clinical activities must have the proper training and experience; on the other hand, anyone can do research. Most clinicians express shock and horror at the very thought that someone without appropriate clinical training and qualifications might treat a patient; indeed, there is plenty of finger-pointing even at those who do have qualifications (e.g. “doctor acupuncturists don’t do proper acupuncture”). Meanwhile, many clinicians do research with no research qualifications whatsoever.
This is perhaps most clearly brought home at ‘research days’, where complementary practitioners, say acupuncturists, attend a few seminars hoping to learn how to do clinical research. Now compare this to an ‘acupuncture day’ at which statisticians without prior knowledge are taught a few techniques so that they can practice acupuncture. Yet whilst ‘research days’ continue to proliferate and ‘acupuncture days’ are unheard of, it is arguable that it is medical research that requires the greater training (see the table below).
[Table: Training and qualifications for acupuncture compared to research]
Myth 2: You can learn how to do research from a book or journal articles
I was recently asked to review a paper that described an ‘n-of-1’ trial of a complementary therapy. The paper contained numerous important flaws and required major revision. It was not hard to see why the authors had gone so badly wrong: they had no formal training in research methods, they had never previously conducted an n-of-1 trial and they were not working at an institution where such trials (or anything remotely similar) had been conducted. The authors had based their methods on a chapter in a complementary medicine research textbook. The first problem, fairly typical of writing about complementary medicine research, is that the author of this chapter had no experience whatsoever of n-of-1 methodology (similarly, the journal that asked me to review this paper published a paper entitled ‘how to conduct a survey’, written by an author with no significant survey publications). The second problem is that science is not cookery and scientific texts are not cookbooks. The reason most of us are able to make ratatouille from a recipe is that we all have a stove, have previously chopped an onion and know what a stew is meant to look like. The same is not generally true of research. You cannot throw a cookbook at someone who has never seen a kitchen and expect to get a Spanish omelet. Similarly, give an inexperienced researcher a methodology textbook and all you’ll end up with is broken eggs.
Myth 3: All you need to do statistics is the right software (although Excel will also do)
The other day I sat down in front of Microsoft Word and typed ‘Now is the winter of our discontent.’ When the rest of Richard III did not flash up on screen I rang Microsoft technical support. They weren’t that helpful so I cut and pasted a few things and sent the results to a literary magazine for publication.
The statistical equivalent is so commonplace as to be cliché. As a recent example, I peer-reviewed a paper in which many of the p values were given as ‘p = 0.000’. This is obviously absurd: any conceivable clinical trial result has a non-zero probability. When I pointed this out to the authors, their defense was that they had cut and pasted from the statistical software, so their result must be true. Again we see the double standard: to be a clinician takes years of training; to be a statistician, all you need is some software and familiarity with the ‘paste’ key.
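The ‘p = 0.000’ blunder is worth unpacking, because it illustrates how uncritically copying software output goes wrong. Many statistical packages simply round p values to three decimal places for display, so any p value below 0.0005 prints as 0.000. A minimal sketch of this rounding artifact, and of the conventional fix of reporting an inequality instead (the function names and cut-offs here are my own illustration, not from any particular package):

```python
# Illustration: why statistical software prints "p = 0.000".
# Displays are typically rounded to three decimal places, so a tiny
# but non-zero p value is shown as 0.000 -- a formatting artifact,
# not a probability of zero.

def display_p(p: float, decimals: int = 3) -> str:
    """Mimic the rounded display of typical statistical software."""
    return f"p = {p:.{decimals}f}"

def report_p(p: float, floor: float = 0.001) -> str:
    """Report very small p values as an inequality, per journal convention."""
    return f"p < {floor}" if p < floor else f"p = {p:.3f}"

p = 0.0000314  # small, but emphatically non-zero
print(display_p(p))  # what the software shows: p = 0.000
print(report_p(p))   # what the paper should say: p < 0.001
```

The authors’ error was not in the software but in treating its rounded display as the result itself; the correct report is an inequality such as ‘p < 0.001’.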
When I read any medical paper, one of the first things I do is glance over the list of authors. I want to see whether at least one author is affiliated with a statistics department or has an appropriate qualification (PhD, MPH, MSc). Having a statistician as an author does not necessarily mean that the statistics are correct, just as doctors can give bad clinical advice. Similarly, the absence of a statistician does not mean that the statistics will be incorrect: my neighbor isn’t a doctor but he does sometimes say sensible things about health. On balance, though, if I’m sick, I want to see someone with a plaque on the door.
Myth 4: You can do good quality research at your kitchen sink
It is almost impossible to enumerate in full the physical and intellectual resources that are taken for granted by those working in large research institutions. But to take just a couple of examples, if you work at a hospital with over 400 active clinical trials, the complex computer programing required for data entry and randomization databases has already been completed by a specialist team. Working at such a hospital also means that research protocols are evaluated by expert committees of researchers who can offer guidance and advice.
Is it really possible that an isolated practitioner, working alone without expert help or any significant research facilities, can produce good clinical science? I cannot say it is impossible, but it is difficult to think of many examples.
Myth 5: What is important is that you did your best
I was once asked to read a report of a clinical trial conducted by a medical student. When I remarked that the trial was badly flawed, I was told not to be so critical: it was only a student project, she had done pretty well, considering, and the paper deserved to be published on that basis. Similarly, when I criticized a published paper in a book, I received a nasty letter from the author of the paper saying, in short, “how could you be so mean, it was my first try!”
Now singing a few flat notes on karaoke night at the local bar does not spoil the fun, as long as everyone tries their best and has a good time. The problem with the odd flat note in medicine is that it can ruin everything. Every clinician recognizes this: put a catheter in the wrong place and, unlike singing in the wrong key, someone could die as a result. A medical researcher who tried to treat a sick patient and messed up through lack of skills, knowledge and training would rightly be excoriated; “it was my first patient” or “I did my best” would be no defense, and no comfort, to the injured party. So why is this not recognized for research too? Why the double standard such that it is somehow okay to mess up research, but not medicine, through inexperience, ignorance and lack of resources?