J Soc Integr Oncol. Author manuscript; available in PMC 2008 December 1.
Published in final edited form as:
J Soc Integr Oncol. 2008; 6(2): 82–85.
PMCID: PMC2590769

A basic introduction to research: how not to do research


In this didactic paper, I review some prevalent “myths” about clinical research: anyone can do research; you can learn how to do research from a book or journal articles; all you need to do statistics is the right software (although Excel will also do); you can do good quality research at your kitchen sink; what is important is that you did your best. These myths appear particularly prevalent in the complementary and alternative medicine communities. They are based on a clear double standard: most clinicians would express shock and horror at the very thought that someone without appropriate clinical training and qualifications might treat a patient; meanwhile many clinicians do research with no research qualifications whatsoever. But clinical research can guide clinical decisions that affect the health and well-being of millions of people: it is therefore arguable that poorly conducted research is potentially far more harmful than poor medical practice. As such, it is doubly important that clinical research is conducted by those with appropriate training, statistical help and institutional support.

Keywords: research design, complementary medicine


In this didactic paper, I offer some personal reflections on perhaps the most common mistake made by beginning researchers: that clinical research is not particularly difficult and can be done by pretty much anyone, regardless of training and experience. I will explain my views by examining several “myths” about research that I believe to be particularly common in the complementary and alternative medicine community. I will end by making some practical suggestions to counter each of these myths.

Myth 1: anyone can do clinical research

A friend of mine who is a professional mountaineer recently received the following Email from a local hiking group: “Dear Sir, we would like to climb a mountain in the Himalayas, perhaps something 22,000 to 25,000 feet high. We understand from the literature that it is important to take bottled oxygen and were wondering what brand you would recommend.” I am joking of course: I don’t know any mountaineers and I doubt anyone has ever sent such an Email. That is because it is obvious to anyone: try to climb a Himalayan mountain without an experienced leader and you are going to get yourself killed.

Clinical research appears to be a different matter, however. There is a widespread impression that clinical research can be done by almost anyone, regardless of prior skills or experience. I get Emails similar in form to the one above regularly; for a recent example, “I want to do some research on massage and need an outcome measure. What would you suggest?” What I suggested was that the enquirer find an experienced researcher with whom to work. Similarly, a statistician friend received a call from a doctor: “my statistical software has given me an error message: data failed to converge. What does this mean?” My friend gave the only possible answer: “it means you need to see a statistician.”

Many clinicians I have met have a double standard: on the one hand, those engaged in clinical activities must have the proper training and experience; on the other hand, anyone can do research. Most clinicians express shock and horror at the very thought that someone without appropriate clinical training and qualifications might treat a patient; indeed, there is plenty of finger-pointing even at those who do have qualifications (e.g. “doctor acupuncturists don’t do proper acupuncture”). Meanwhile many clinicians do research with no research qualifications whatsoever.

This is perhaps most clearly brought home at ‘research days’ where complementary practitioners (acupuncturists, say) attend a few seminars hoping to learn how to do clinical research. Now compare this to an ‘acupuncture day’ at which statisticians without prior knowledge are taught a few techniques so that they can practice acupuncture. Yet whilst ‘research days’ continue to proliferate and ‘acupuncture days’ are unheard of, it is arguable that it is medical research that requires more training (see table).

Table: Training and qualifications for acupuncture compared to research

Myth 2: you can learn how to do research from a book or journal articles

I was recently asked to review a paper that described an ‘n-of-1’ trial of a complementary therapy. The paper contained numerous important flaws and required major revision. It was not hard to see why the authors had gone so badly wrong: they had no formal training in research methods, they had never previously conducted an n-of-1 trial and they were not working at an institution where such trials (or anything remotely similar) had been conducted. The authors had based their methods on a chapter in a complementary medicine research textbook. The first problem, which is fairly typical of those writing about complementary medicine research, is that the author of this chapter had no experience whatsoever of n-of-1 methodology (similarly, the journal that asked me to review this paper published one entitled ‘how to conduct a survey’, written by an author with no significant survey publications). The second problem is that science is not cookery and scientific texts are not cookbooks. The reason most of us are able to make ratatouille from a recipe is that we all have a stove, have previously chopped an onion and know what a stew is meant to look like. The same is not generally true of research. You cannot throw a cookbook at someone who has never seen a kitchen before and expect to get a Spanish omelet. Similarly, give an inexperienced researcher a methodology textbook and all you’ll end up with is broken eggs.

Myth 3: All you need to do statistics is the right software (although Excel will also do)

The other day I sat down in front of Microsoft Word and typed ‘Now is the winter of our discontent.’ When the rest of Richard III did not flash up on screen I rang Microsoft technical support. They weren’t that helpful so I cut and pasted a few things and sent the results to a literary magazine for publication.

The statistical equivalent is so commonplace as to be cliché. As just one recent example, I peer reviewed a paper in which many of the p values were given as ‘p=0.000’. This is obviously absurd, on the grounds that any conceivable clinical trial result has a non-zero probability. When I pointed this out to the authors, their defense was that they had cut and pasted from the statistical software, so the result must be correct. Again we see the double standard: to be a clinician takes years of training; to be a statistician, all you need is some software and familiarity with the ‘paste’ key.
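To make the rounding point concrete, here is a minimal sketch (in Python, with a made-up p value; no particular statistics package is implied) of how ‘p=0.000’ arises and how such a result is conventionally reported:

    # A vanishingly small, but non-zero, p value (hypothetical number)
    p = 2.3e-7

    # Rounding the display to three decimal places is what produces "p = 0.000"
    print(f"p = {p:.3f}")                                  # prints: p = 0.000

    # The conventional report acknowledges that the probability is not zero
    print("p < 0.001" if p < 0.001 else f"p = {p:.3f}")    # prints: p < 0.001

The software has not claimed that the probability is zero; it has simply truncated the display, which is why ‘p<0.001’ is the appropriate way to report such a value.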

When I read any medical paper, one of the first things I do is to glance over the list of authors. I always want to see whether at least one author is affiliated with a statistics department or has an appropriate qualification (PhD, MPH, MSc). Having a statistician as an author does not necessarily mean that the statistics are correct, just as having a medical degree does not stop a doctor from giving bad clinical advice. Similarly, the absence of a statistician does not mean that the statistics will be incorrect: my neighbor isn’t a doctor but he does sometimes say sensible things about health. On balance though, if I’m sick, I want to see someone with a plaque on the door.

Myth 4: You can do good quality research at your kitchen sink

It is almost impossible to enumerate in full the physical and intellectual resources that are taken for granted by those working in large research institutions. But to take just a couple of examples, if you work at a hospital with over 400 active clinical trials, the complex computer programming required for data entry and randomization databases has already been completed by a specialist team. Working at such a hospital also means that research protocols are evaluated by expert committees of researchers who can offer guidance and advice.
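To give a sense of what even the simplest piece of that infrastructure involves, here is a minimal sketch (in Python, purely illustrative; not any hospital’s actual system) of a permuted-block allocation list for a two-arm trial. A production system would add allocation concealment, stratification, audit trails and database integration, which is precisely why the work is left to a specialist team.

    import random

    def permuted_block_allocation(n, block_size=4, arms=("A", "B"), seed=2008):
        """Return a treatment allocation list built from balanced, shuffled blocks."""
        rng = random.Random(seed)
        allocations = []
        while len(allocations) < n:
            block = list(arms) * (block_size // len(arms))  # balanced block, e.g. A A B B
            rng.shuffle(block)                              # random order within the block
            allocations.extend(block)
        return allocations[:n]

    print(permuted_block_allocation(10))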

Is it possible that the isolated practitioner, working alone without expert help or any significant research facilities, can really produce good clinical science? I can’t say I’m 100% sure, but it is difficult to think of many examples.

Myth 5: What is important is that you did your best

I was once asked to read a report of a clinical trial conducted by a medical student. When I remarked that the trial was badly flawed, I was told not to be so critical: it was only a student project, she had done pretty well, considering, and the paper deserved to get published on that basis. Similarly, when I criticized a published paper in a book, I received a nasty letter from the author of the paper saying, in short, “how could you be so mean, it was my first try!”

Now singing a few flat notes on karaoke night at the local bar does not spoil the fun, as long as everyone tries their best and has a good time. The problem with the odd flat note in medicine is that it can ruin everything. Every clinician recognizes this: put a catheter in the wrong place and, unlike singing in the wrong key, someone could die as a result. A medical researcher who tried to treat a sick patient and messed up through lack of skills, knowledge and training would rightly be excoriated; “it was my first patient” or “I did my best” would be no defense, and no comfort, to the injured party. So why is this not recognized for research too? Why the double standard such that it is somehow okay to mess up research, but not medicine, through inexperience, ignorance and lack of resources?


Good clinical research can guide clinical decisions that affect the health and well-being of millions of people. Bad research can therefore be just as harmful as bad medicine, perhaps even more so. Until this is more widely realized, and clinicians accordingly pay much more attention to the skills, training and resources needed for high quality science, it is likely that much of the research we see in journals will continue to be little more than the intellectual equivalent of karaoke night.

“How not to ‘how not to do research’” isn’t very catchy, but when you think about it, that pretty much describes the scientific process: we find out what leads us astray (contaminated test tubes, uncontrolled studies) and try to avoid it (wash glassware, randomize). Here are some practical suggestions to counter each of the myths I discuss in this paper.

Myth 1: Anyone can do research irrespective of skills and experience

We can take the mountain climbing analogy one step further. You wouldn’t try to climb Annapurna without going along with someone who already had some experience of Himalayan mountaineering; accordingly, if you want to do a clinical trial, have a researcher on your team who has done a trial previously; if you want to do a molecular marker study, include someone with a track record of publications in that area.

Myth 2: You can learn how to do research from a book or journal articles

Most researchers will tell you that some kind of formal training in research methods is an essential basis for a research career, but also that hands-on experience is critical. If you really want an active research career, you will need to consider a higher degree. Someone who wants to specialize in research should think about a doctorate; otherwise, a Master’s degree in clinical epidemiology, public health or biostatistics would provide an excellent foundation for subsequent clinical research activities. In either case, you will need to get experience on as many different research studies as possible during and after your studies.

Myth 3: All you need to do statistics is the right software

Incorporation of biostatistical help is cited by experienced investigators as one of the key determinants of the success or failure of a research program. The ideal is to develop a long-term collaboration with a biostatistician who understands your work, and why it is important. This can be difficult unless the biostatistician and you are members of the same institution (see myth 4).

Myth 4: You can do good quality research at your kitchen sink

Obtain a position at an institution that has a good research infrastructure and high-quality scientists.

Myth 5: What is important is that you did your best

Constantly ask whether you are doing research for the right reasons: why is it that you want that grant? Or that paper published? If your answers deviate much from a passionate commitment to improve human health, then stop and do something else. Medical research shouldn’t be about the researcher; it should be about the people who might be helped by the researcher’s science.


Dr Vickers’ work on this research was funded by a SPORE grant (P50-CA92629) from the National Cancer Institute.