Over the past half century there has been a vast proliferation first of randomised trials and now of meta-analyses, both of which (if appropriately analysed) can avoid bias. But to get medically reliable answers to previously unanswered questions about life or death treatment decisions it isn’t enough just to avoid bias. We must also ensure that we are not seriously misled by the play of chance, and often the only way to do this reliably is to get appropriate analyses of really large scale randomised evidence.1
At present, many wrong, or at least unreliable, therapeutic answers are being generated by non-randomised “outcomes research,” by small randomised studies, by small meta-analyses, and by statistically inappropriate analyses. Moreover, even when large scale randomised evidence is available, wrong conclusions can be drawn from unduly selective emphasis on particular trials or subgroups—and such “selection biases” can cause even greater errors when there is only a limited amount of evidence to review.
Over the past 50 years randomisation has already delivered reliable answers to some important questions and it offers the promise of reliable answers to many more. For that promise to be properly realised over the next 50 years, however, medical research needs to find practicable ways of greatly increasing the size of randomised studies; otherwise moderate but worthwhile benefits will continue to be missed. One important step towards larger size is the recent emphasis on meta-analyses:2,3 when many different trials have all addressed similar therapeutic questions a synthesis of all of their results not only avoids selective biases but also helps avoid random error.
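To illustrate why synthesising many trials also reduces random error, here is a minimal sketch (not from the article) of fixed-effect inverse-variance pooling: each trial's log odds ratio is weighted by the inverse of its variance, so the pooled estimate's standard error is smaller than that of any single trial. The trial figures below are hypothetical.

```python
import math

def pool_fixed_effect(log_ors, variances):
    """Inverse-variance fixed-effect pooling of per-trial log odds ratios.

    Each trial is weighted by 1/variance, so the pooled variance
    (1 / sum of weights) is smaller than any single trial's variance.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Three hypothetical small trials, none individually conclusive
# (log odds ratio, variance of the log odds ratio):
pooled, se = pool_fixed_effect([-0.20, -0.10, -0.30], [0.04, 0.05, 0.02])
```

Here the pooled standard error (about 0.10) is smaller than that of even the largest single trial (about 0.14), which is how several inconclusive studies can together give a statistically reliable answer.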
But it often happens that there are no really large trials and that even a meta-analysis of all the trials in the world isn’t big enough to give statistically reliable answers about major outcomes. The key question then is how, in practice, is it possible to randomise a really large number of patients? For if one is trying to decide how millions of future patients should be treated it may often be appropriate to randomise at least many thousands—as is now becoming possible in breast and intestinal cancer—or even tens of thousands, as has occasionally been possible in stroke and heart disease.
Generally the only practicable way to achieve this is to design trials that are extremely simple and flexible: simplify the entry criteria by use of the “uncertainty principle” (see box), simplify the treatments, and simplify enormously the data requirements. Using the uncertainty principle should allow the process of providing information and gaining consent to become much closer to what is appropriate in normal medical practice. Collecting less information may mean bigger numbers and hence better science: many trials still collect ten or a hundred times too much information per patient, often at the behest of study sponsors or their committees. Requirements for large amounts of defensive documentation imposed on trials by well intentioned guidelines on good clinical practice (or good research practice) or excessive audits may, paradoxically, substantially reduce the reliability with which therapeutic questions are answered, if their indirect effect is to make randomised trials smaller or even to prevent them starting.
A patient can be entered if, and only if, the responsible clinician is substantially uncertain which of the trial treatments would be most appropriate for that particular patient. A patient should not be entered if the responsible clinician or the patient is, for any medical or non-medical reason, reasonably certain that one of the treatments that might be allocated would be inappropriate for that particular individual (in comparison with either no treatment or some other treatment that could be offered to the patient in or outside the trial).
To argue the need for some large, simple randomised trials is not, of course, to argue that all other trials are useless: indeed, many small (or complex) trials will continue to be needed for certain purposes, as will many other types of clinical research. But for many important questions about practicable therapeutic improvements in controlling the common causes of death or serious disability there is no reliable alternative to large scale randomised evidence.
The reason for this is simple: when it comes to major outcomes it is generally unrealistic to hope for large therapeutic effects. Moreover, if a particular treatment did produce a really large effect on survival then we might well be able to recognise this reliably without any randomised trials. The efficacy of penicillin, for example, was so great that it was recognised before the introduction of randomisation. Likewise, the main hazards of tobacco are so great that they were recognised without randomisation. Hence, if substantial uncertainty remains about the effects of some particular treatment on survival then these effects are likely to be small or only moderate. For example, it might be reasonable to hope that a new treatment for acute stroke or acute myocardial infarction could reduce recurrent stroke or death in hospital from 10% to 9% or 8% (as aspirin does,4,5 preventing 10 000 or 20 000 deaths per million treated), but not to hope that it could halve in-hospital mortality. Many lives could, however, be saved by moderate reductions in the common causes of death—and if, eventually, several moderate benefits are reliably demonstrated their combined effects may be substantial.5
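The order of magnitude involved can be checked with the standard normal-approximation sample-size formula (a rough sketch, not part of the original argument; the event rates are the ones quoted above):

```python
from statistics import NormalDist

def patients_per_arm(p1, p2, alpha=0.05, power=0.9):
    """Approximate patients needed per treatment arm to detect event
    rates p1 vs p2 with a two-sided test at level alpha and the given
    power (normal approximation to the binomial)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Detecting a fall in mortality from 10% to 9% needs roughly
# 18 000 patients per arm, i.e. well over 30 000 patients in all.
n = patients_per_arm(0.10, 0.09)
```

By contrast, halving mortality (10% v 5%) would need only a few hundred patients per arm — which is precisely why it is moderate effects, not large ones, that force trials to be large.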
Thus, those who sponsor, perform, and regulate therapeutic research need to find ways of making trials much simpler and much larger. Otherwise the next 50 years of randomised evidence will not fulfil the promise of 50 years ago, when a properly randomised clinical trial was first published,6,7 transforming medical research by its method of generating unbiased answers to many therapeutic questions.