Clearly, the results of these trials could have been accurately predicted based on knowledge of sponsorship alone. The studies appeared to be generally well executed, most had academic authorship, and they were intended to survive stringent US Food and Drug Administration (FDA) review. Hence, we considered that poor scientific quality was unlikely to be a major contributor to this result. Indeed, it has been suggested that 'quality scores' of industry-sponsored RCTs may be better than those of publicly funded RCTs [6], strengthening our belief that these studies were generally carefully and accurately performed.
How, then, could these results have been so predictable? There are at least four nonexclusive potential explanations. First, a placebo control was employed as an arm in 32 of the 45 studies, and prestudy evidence (e.g. phase I, II, and prior phase III studies) is likely to have suggested that the new drug would be found to be superior to placebo. Second, of the 22 studies in which an active comparator arm was used, three were derived from previously reported RCTs, essentially reporting the same trial using different end-points such as cost or quality of life. Third, 'publication bias', discussed below, might have played a role. Finally, we suggest that a relatively undiscussed bias, 'design bias', is likely to have contributed substantially to the predictability of these results.
In the present study we examined abstracts from only one meeting in one medical specialty, and we classified studies using only the information available in the abstract. This might limit the generalizability of our findings, and certainly if one looked at enough trials some exceptions would be found. However, our results are confirmed by many other studies. In a study of 61 industry-sponsored trials of nonsteroidal anti-inflammatory drugs [17], 100% of trials found the sponsor's drug comparable or superior to the comparator; no study favored the comparison treatment. In multiple myeloma trials, equipoise was generally met in publicly funded RCTs, but 74% of commercial RCTs favored the new product, suggesting differences related to funding source [6]. Bekelman and coworkers [18] found financial conflicts of interest to be widespread. Als-Nielsen and colleagues [19] found that funding source was related to the reported conclusions in drug trials. On the other hand, many publicly sponsored RCTs address issues with uncertain outcomes (see O'Dell and coworkers [20]). Nevertheless, a large majority of industry-sponsored trials must violate the principle of equipoise.
Publication bias, whereby positive studies are more likely to be published than negative ones, is an obvious potential explanation for these findings. However, the magnitude of publication bias remains controversial, and a number of studies suggest that this type of bias has only small effects [21], whereas the effects we observed are huge. Recently, publication bias in oncology trials was noted to include more specific causes, such as 'lack of time or resources' and 'incomplete study', that are not necessarily related to study outcome. Only 10% of unpublished papers were categorized as unpublished because of insufficient priority to warrant publication; 81% of positive trials and 70% of negative ones were published, suggesting a publication bias of perhaps 13% [25]. Publication bias is thus unlikely to account for the unanimity of the results reported here, although it is likely to have played some role. Notably, publication bias occurs after an RCT has been completed.
Design bias, on the other hand, occurs before the RCT is begun, when the trial parameters are determined and before the final decision has been made to initiate the study [21]. Major deviations from equipoise appear to arise at the design stage, particularly during the drug development process.
Consider the drug development sequence from the perspective of the sponsor. A promising agent is found. Then, there are pharmacological studies, pharmacokinetic studies, animal studies, initial studies in humans, dose ranging studies, toxicity studies, and others (Fig. 1). Most potential medications fail to pass these hurdles. Only drugs that continue to show promise are considered for expensive RCTs, and a great deal is already known about these drugs.
Figure 1. The drug development sequence. Note that the drug sponsor accumulates large amounts of data about the drug, and has eliminated the least promising agents before the design process for randomized controlled trials occurs.
From an industry perspective the drug development process must involve 'designing for success' (Fig. 1). In a well established set of procedures, company consultants and staff debate what is known about the drug, its competitors, its potential advantages in terms of toxicity or efficacy, and the potential disease indications. One of us (JFF) has frequently been involved in this process. Trials are then designed that include the patients, dosages, study duration, end-points, and comparators that are likely to provide a positive result for the sponsor and one that is acceptable to the US FDA. These design decisions are intended in part to identify the most appropriate clinical niche for the product, using all prior information. A funding commitment by a for-profit entity to an RCT that may cost hundreds of millions of dollars simply will not be made unless a positive outcome can be predicted with considerable certainty.
Is 'design bias' a bad thing? At first it appears so. After reflection, however, we would suggest 'not necessarily'. How else should studies be designed? Should we study drugs without promise – study drugs that are not thought to be superior to placebo or drugs with no known potential advantages over existing drugs? Should we conduct studies that fail to identify an appropriate, perhaps narrow, therapeutic niche for the drug? From a trial participant's perspective, the current design process limits the chance of exposure to an ineffective or unduly toxic drug. From a social perspective, violation of equipoise is essential to efficient medical progress. To enroll humans in large RCTs without preliminary studies might pose truly major risks to participants, but after preliminary studies have been conducted true uncertainty no longer exists. The principle of equipoise becomes the paradox of equipoise.
Ethical issues with equipoise
The equipoise principle, upon examination, actually contributes to ethical problems, in part because it embodies an unreasonably paternalistic attitude. When we, as clinicians, ask a patient to consider participation in a trial, the typical responses are 'Might this study help others?' and 'Are the risks reasonable?' In stark contrast, the equipoise principle does not allow consideration of potential social benefits or consideration of the magnitude of the (often very small) risk to the patient. Contrary to the altruism expressed by many patients, equipoise gives weight neither to personal autonomy nor to personal satisfaction.
Of substantial importance, the authoritative Belmont Report [26], the US National Institutes of Health, the US FDA, and the Nuremberg Code [27] do not require equipoise. The Belmont Report enunciates three principles: autonomy (respect for persons), beneficence, and justice. Autonomy refers to the patient who, unless of diminished mental capacity or under coercion, is capable of self-determination and must be granted its exercise. Beneficence refers to study designs that, to the greatest degree possible, maximize benefits and minimize risks to the patient. Justice refers to fairness in distributing the benefits and burdens of research. The concluding section of the Belmont Report notes a standard of 'the reasonable volunteer' and that 'no undisclosed risks may be more than minimal'. Even more importantly, 'risks to subjects must be outweighed by the sum of the anticipated benefit to the subject, if any, and the anticipated benefit to society' and 'beneficence requires that we protect against risk of harm to subjects and also that we be concerned about the loss of substantial benefits that might be gained from research' [26].
Equipoise fails as an ethical principle because it directly contradicts the principle of personal autonomy and fails to consider the balance between the individual and the societal good. It implicitly encourages the performance of trials that will prove uninformative by limiting the expected differences between the arms of the trial. It has been suggested, with similar reasoning, that underpowered RCTs are unethical because the resulting negative results may not benefit society [28].
We propose two principles that more appropriately underlie the ethics of enrollment of experimental subjects into an RCT: positive expected value and exercise of personal autonomy.
First principle: positive expected value of participation
The equipoise principle is applied at an inappropriate point in time. The subject's decision is whether to accept or decline a trial, not which arm to enroll in (Fig. 2). The decision to accept the trial necessarily comes before the randomization procedure, which assigns the patient to a particular study arm. The patient, without foreknowledge of the arm to be assigned, must base the decision to accept the trial upon the pooled expectation for the RCT arms and not upon the value of any single arm. The principle of 'equal uncertainty between the arms of the RCT' must be replaced with the principle of a reasonable 'expected value' for the participant after pooling the RCT arms. The standard becomes the expected value of outcomes after declining the RCT (usual care) as compared with the average expected value of outcomes after accepting the trial. This comparison does not depend on the expected values of the individual arms of the RCT but on their pooled average.
Figure 2. Presentation of a randomized trial protocol for consideration by a patient. This presents an idealized sequence of invitation, factual evaluation, ethical valuation, decision, and randomization. Note that factual evaluation contrasts benefits and risks.
For example, consider an RCT of a new drug that is believed likely to reduce osteoarthritis pain by 40% versus a clinical standard known to reduce pain by 20%. Before randomization, each participant has a pooled expected value of a 30% pain reduction, which is 10 percentage points more pain relief than under usual care. The expected value of participation is positive – it is of greater value to the participant than declining the RCT and accepting usual care, and the study is ethically sound – but it does not meet the weaker principle of 'equipoise' or 'uncertainty'.
Placebo-controlled RCTs will usually have positive pooled expected values when new drug and placebo alike are added to usual care. However, if the placebo and the new drug replace usual care, a study may not have a positive pooled expected value. If usual care is expected to yield 30% pain reduction, placebo 10%, and new drug 40%, then the pooled average of the arms, 25%, is less beneficial than usual care. For the 'positive expected value' principle to be met in this instance, the expected pooled positive effects from the placebo and new drug must exceed the expected negative effect from the loss of usual care.
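The pooled expected-value comparison described above can be sketched as a short calculation. This is an illustrative sketch only; the function names and the equal-allocation assumption are ours, and the effect sizes are the hypothetical percentages from the examples in the text.

```python
def pooled_expected_value(arm_effects, arm_probs=None):
    """Average expected benefit (e.g. % pain reduction) over all RCT arms.

    arm_probs gives the randomization probabilities; if omitted,
    equal allocation across arms is assumed.
    """
    if arm_probs is None:
        arm_probs = [1 / len(arm_effects)] * len(arm_effects)
    return sum(p * e for p, e in zip(arm_probs, arm_effects))


def participation_has_positive_value(arm_effects, usual_care, arm_probs=None):
    """First principle: participation is ethically sound when the pooled
    expectation over the arms is at least that of declining (usual care)."""
    return pooled_expected_value(arm_effects, arm_probs) >= usual_care


# Example 1: new drug (40%) vs active comparator (20%); usual care yields 20%.
print(pooled_expected_value([40, 20]))                    # 30.0
print(participation_has_positive_value([40, 20], 20))     # True

# Example 2: placebo (10%) and new drug (40%) replace usual care (30%).
print(pooled_expected_value([10, 40]))                    # 25.0
print(participation_has_positive_value([10, 40], 30))     # False
```

Note that the comparison uses only the pooled average across arms, never the expected value of any single arm, mirroring the fact that the patient decides before randomization.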
The equipoise principle is replaced here by a new ethical standard of reasonable 'positive expected value' with a higher standard in protection of the participant. Unlike equipoise, this standard allows placebo-controlled RCTs with predictable results, where the net effect over all participants is expected to be a benefit. It does away with the charade of pretending not to know the likely outcome in advance.
Second principle: autonomy for patients to make decisions despite negative average expected value
The reasonable and autonomous prospective participant might wish to accept an RCT even though the pooled expected value is negative. Therefore, following full factual review of expected outcomes with the potential participant, the personal values of that patient need to be applied. A common circumstance is altruism, with the patient valuing social progress higher than personal risks. In other instances, minimal increases in risks, although real, may be of little importance to the patient.
Another circumstance is the choice of a decision strategy other than 'expected value' by the patient. 'Expected value' is not the only defensible decision strategy for an individual; decisions may be driven by a desire to have a chance at the best possible outcome or to avoid the possibility of the worst possible outcome. A reasonable person could choose to participate in an RCT because the best possible result could only come from acceptance, even if the pooled expected value of accepting the RCT is negative. For example, accepting an RCT might be the only possible way to gain access to a new treatment. Alternatively, another reasonable potential participant could decline a trial with a positive average expected value because of excessive fear about the described potential side effects of the new drug ('risk aversion').
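The contrast between decision strategies can be made concrete with a small sketch. The outcome ranges below are hypothetical numbers of our own choosing, and the strategy names (expected value, maximax, maximin) are standard decision-theory labels for the behaviors the text describes, not terms from the original study.

```python
# Hypothetical outcome ranges (e.g. % symptom improvement): worst and best
# cases for accepting the RCT versus declining it (usual care).
accept = [-5, 40]    # new drug might harm, or might give the best result
decline = [25, 30]   # usual care: narrow, predictable range

def expected_value(outcomes):
    # Equal-weight average over the possible outcomes.
    return sum(outcomes) / len(outcomes)

def maximax(outcomes):
    # Optimist's rule: judge each option by its best possible outcome.
    return max(outcomes)

def maximin(outcomes):
    # Risk-averse rule: judge each option by its worst possible outcome.
    return min(outcomes)

for name, rule in [('expected value', expected_value),
                   ('maximax', maximax),
                   ('maximin (risk-averse)', maximin)]:
    choice = 'accept' if rule(accept) > rule(decline) else 'decline'
    print(f'{name}: {choice}')
```

With these numbers, the expected-value and risk-averse rules both decline the trial, but the maximax rule accepts it, because the best possible result (40) is available only inside the RCT – the situation of the reasonable person described above who joins a trial with negative pooled expected value to gain a chance at the best outcome.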
Autonomy implies the ability to make decisions under a personal rather than an externally imposed value system. However, the standard is that of the 'reasonable person'. Autonomy also implies absence of coercion, full disclosure of significant risks, and protection of those with limited mental capacity, as well as the other experimental subject protections described in the Belmont Report [26].
We do not propose that formal decision analysis (expert driven) and utility assignment (patient driven) should be required of prospective participants, although they could well be helpful to institutional review boards. The complexity of the discussion must be appropriate for the patient. However, formalizing the concepts helps to clarify the issues involved. In a similar vein, Lilford [30] recently noted that 'choices should be based both on probabilities of events (which experts might know) and on the value that a patient places on these events (which only the patient can know)'.
In a broader sense, we note that this same decision process should apply to all studies involving human participants, not only RCTs. A forthright invitation, objective explanation of known facts, application of the patient's personal values, and a reasonable and autonomous decision are salient necessities for all such studies.
Equipoise is a false and diverting principle, and should be replaced by 'positive expected value' using the patient's values, with elimination of the internal contradictions that surround the concept of equipoise and with a return to the Belmont principles. In this context, should we worry that a large majority of commercial clinical trials are positive? The answer is 'yes and no'. Certainly, skepticism and vigilance are required with industry-sponsored trials because there are inevitably large financial interests, and omitted results, redundant presentations, and marketing 'spin' do sometimes occur [18]. Studies designed to evaluate noninferiority against a US FDA-approved comparator may represent 'me too' drugs and raise questions about the societal need for the drug. On the other hand, the ability to design clinical trials for success, and therefore to predict outcomes in advance, implies the ability to cull weak drugs early in the approval process (and this frequently occurs), thus exposing fewer patients, improving efficiency in drug development, possibly reducing the cost of pharmaceuticals [32], and identifying those drugs that are most likely to be clinically useful and of positive social value.