Most clinical research findings are false. As for the few studies with results that are true, well, here’s more bad news. Most of those findings are useless. These are but two of the bold statements made by Dr. John Ioannidis in a recent paper in PLoS Medicine.
“I have long been frustrated in seeing that much clinical research seems to be losing its purpose, and it does not really help humans,” Ioannidis, director of the Stanford Prevention Research Center at the Stanford University School of Medicine, said in an email. “I am very optimistic that we can do things better.”
Ioannidis’ article is “fascinating and provocative,” according to Timothy Caulfield, Canada Research Chair in Health Law and Policy and research director of the Health Law Institute at the University of Alberta. In general, Caulfield agrees with the major conclusions in the paper, and noted that quantity tends to trump quality in the research community, a situation that may boost academic careers but does little for patients.
“One could argue that it is not ethical to recruit patients for clinical trials that have little chance of providing a meaningful contribution, particularly if the consent process does not reflect that reality,” Caulfield said in an email. “Many patients likely participate because they believe they are helping to move clinical practice forward. But, as this paper highlights, that is often not the case.”
The word “provocative” also came up in comments on Ioannidis’ ideas from Jonathan Kimmelman, an associate professor in the Biomedical Ethics Unit at McGill University. And though he found the piece to be timely and compelling, Kimmelman expressed doubt about the assertion that most clinical research isn’t useful.
“That’s a bold claim. The article doesn’t really establish it,” Kimmelman said in an email. “What it does is offer various features that make trials useful, and it suggests that many trials fail to reflect these factors. My own research and experience would affirm many trials are not useful — whether it is most, however, I cannot say.”
Another problem with judging the usefulness of research is the difficulty of defining “useful” in this context. According to Ioannidis, useful clinical research “adds to what we already know” and leads to “favorable change in decision-making.” But this doesn’t acknowledge that much of the most useful research confirms current knowledge and clinical practice and may not lead to change, according to Dr. Elizabeth Loder, the acting head of research for BMJ.
“Furthermore, it’s not always possible at the time research is done to fully judge its usefulness,” Loder said in an email. “Sometimes that becomes apparent much later.”
In his paper, Ioannidis provides several features of clinically useful research. To provide more real-world value, he suggests, clinical researchers should address problems with high disease burdens and avoid exaggerating the health threats posed in their fields of study, which he refers to as “disease mongering.” Research should also be patient-centred, pragmatic and preceded by systematic reviews to gauge current knowledge. Other factors to consider include feasibility (many trials are terminated because of futility), transparency (utility increases if data and methods can be verified and used by others) and value for money (especially important in an era of limited resources).
Overall, this is a valuable conceptual framework and a “good lens with which to see clinical research,” according to Dr. Hani El-Gabalawy, the scientific director of the Institute of Musculoskeletal Health and Arthritis at the Canadian Institutes of Health Research (CIHR). Striving to make clinical research more patient-centred, for example, is already a priority at CIHR, as part of its SPOR program (Strategy for Patient-Oriented Research). Some of Ioannidis’ views of the research community, however, appear excessively negative, according to El-Gabalawy.
“He really comes down hard on researchers, saying they do clinical research without knowing what’s out there,” says El-Gabalawy. “With all due respect, people just don’t get funded unless they’ve done their homework. Working for a funding agency, I know that anyone who hasn’t scoured the literature and looked at the novelty of their clinical research simply doesn’t get money.”
Others note that although Ioannidis’ ideas on how to improve the utility of research sound great, there would be many challenges in actually reforming the clinical research system. For one, there’s the academic promotion system, which incentivizes “bad, useless research” and “glorifies research and publications over patient care,” according to Loder. Kimmelman noted that many parties benefit from the status quo, including drug companies that sponsor redundant clinical trials to promote their products and medical centres that earn revenue by running or hosting studies of marginal value.
“I do think tweaks can be made, but given how entrenched many of these incentives are, it won’t be easy to cause a significant shift in a short amount of time,” according to Caulfield.
Ioannidis, however, is more optimistic. He believes the features of useful research he lists in his paper are all feasible and can be realized if people commit to change. More useful research benefits everyone, he suggested. Patients will receive better care. The pharmaceutical industry will produce better drugs and technologies and waste fewer resources on unnecessary research and development.
“Researchers would clearly gain the most,” according to Ioannidis. “I don’t think that anyone is particularly happy deep in one’s heart to feel that the research done is useless. I have nothing against research productivity in terms of publishing more papers, and it does not mean that reform will cut back on the productivity of researchers. It will just make their productivity more likely to be worth it and make a real difference.”