‘The strongest arguments prove nothing so long as the conclusions are not verified by experience. Experimental science is the queen of sciences and the goal of all speculation.’ (R Bacon)
Journals have been important for the advancement of science. The birth of the scientific journal 300 years ago helped to change research communication from a hodgepodge of formats with virtually no quality control into a uniform, peer-reviewed system. Now, however, after three centuries, with the advent of the Internet and other new modes of research communication, we need a way to determine whether journals remain the optimal means of research communication.
Figure 1 presents some of the technology found in modern biotechnology laboratories. Almost all of it was invented after 1995. We do not find state-of-the-art calculators of 1931, such as the one advertised in the Lancet (Figure 2), in modern laboratories: the PCs and laptops that now reside in the laboratory are over a million times more powerful than that pocket calculator, and the Otis King Calculator duly became extinct. In contrast, one 300-year-old technology has survived almost untouched and is still used today in virtually all the laboratories of the world: the scientific journal. Isn't it perplexing that the Lancet and others have survived with little change over centuries, whereas almost all other 300-year-old scientific technologies have died out? Why is this?
We would argue that the primary reason journals have not changed is that they are ‘faith based’: we believe in them, and we dare not question them. Certainly, research is successfully communicated through journals. Most scientists in developed countries have access to the research knowledge of their peers through journals, although the literature remains almost inaccessible in developing countries. We look at journals with the adage ‘if the shoe fits, wear it’. That argument, however, did not save the Otis King 1931 pocket calculator: it worked, but other technologies out-paced and displaced it.
Let us dissect the scientific research process as seen in Figure 3. We are all familiar with this process of research and publication. We first complete our research and then prepare it for publication. The structure is very specific to scientific publication, with the IMRaD organization (Introduction, Methods, Results, and Discussion). Virtually all journals use this structure, with some notable exceptions, such as Science and Nature. The manuscript is then sent out for peer review, with two or three reviewers providing comments, and the article is returned to the editor for a final decision.
The three primary tenets of a scientific journal are IMRaD, peer review and the editorial decision. This model has a long history and has been used millions of times. In 2002 there were 22 000 scientific journals, each publishing on average 154 articles (3 388 000 articles in total); in 1960 there were 2815 journals.1,2 Extrapolating backward, we can conservatively estimate that about 50 000 000 scientific articles have been published, almost all of which have used the model of publication presented here.
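The back-of-the-envelope estimate above can be reproduced. As a sketch (assuming, purely for illustration, exponential growth in journal numbers fitted to the two data points given, and a constant 154 articles per journal per year), the cumulative article count since the first journals of the 1660s comes out in the tens of millions, consistent with a conservative 50 000 000:

```python
import math

# Data points from the text: journal counts in 1960 and 2002,
# and an average of 154 articles per journal per year.
journals_1960, journals_2002 = 2815, 22_000
articles_per_journal = 154

# Assume exponential growth in journal numbers (an assumption, not
# stated in the text) and fit the rate to the two data points.
r = math.log(journals_2002 / journals_1960) / (2002 - 1960)

# Integrate journal-years from the first journals (~1665) to 2002,
# with J(t) = journals_2002 * exp(r * (t - 2002)).
journal_years = journals_2002 / r * (1 - math.exp(-r * (2002 - 1665)))
total_articles = journal_years * articles_per_journal

print(f"growth rate ~{r:.3f}/yr, total articles ~{total_articles / 1e6:.0f} million")
```

Under these assumptions the fitted growth rate is about 5% per year and the cumulative total lands near 70 million articles, the same order of magnitude as the figure quoted above.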
Why haven't peer review, IMRaD, the editorial decision process and the overall journal process evolved into a new form of research communication? We would argue that this is due to the almost non-existent use of the scientific method to question and test the publication process itself. We use the publication process to collect, describe and distribute the results of research conducted with the scientific method, yet we almost never turn that method onto the publication process itself. Combing the literature on Medline, we could find only 13 articles on IMRaD, none of them hypothesis-testing studies of the structure (e.g. whether it is better than other forms of research communication in terms of understandability, interest, recall, etc.). Jefferson recently presented an outstanding review of peer review and could find only 19 studies of peer review that were scientifically sound.3 We could find only 14 articles examining editorial boards and editorial decision making. Thus, with over 50 million articles and 300 years of the traditional journal approach, there has been only a handful of studies questioning or testing the journal process itself. We scientists keep using the process without question, but with no data to show that it is effective; there is thus no evidence-based approach to the science of research communication. Recent studies reveal that peer review often misses major methodological problems in articles.4 No wonder the process has not changed or improved: there are no data questioning it. Hypothesis-testing research and randomized trials could easily and cheaply be initiated to address the ‘grand challenges’ of research communication, but sadly they have not been.
Isn't it strange that three features that are inherent to research communication have not been looked at scientifically? There are several possible reasons for this. The most likely is that we scientists have almost complete faith in the journal process as right and unassailable. We thus take a ‘faith based’ approach to research communications. Faith is defined as a firm belief in something for which there is no proof. Many of us might view questioning of the journal process as an attack on science itself. Clearly, the scientific journal process is not a part of the scientific method. We are taught early in our training about the importance of learning to write articles (e.g. IMRaD), the power of peer review and a belief in the editorial system. We do not question the process, despite the fact that the essence of science is questioning. Questioning peer review is like questioning the Bible, Quran or Torah. One role of science is to help separate science from dogma, which we should now do with journals, and avoid a faith based approach. New approaches need to be taken—you cannot teach dogma new tricks!5
In many ways, scientists in 2006 are similar to Galileo in the early 1600s.6 Galileo had enormous difficulty in trying to publish his classic work Dialogue Concerning the Two Chief World Systems (Figure 4). The book presented a strong argument for a heliocentric universe. Its organization was vastly different from that of other scientific books: a dialogue between three people arguing the merits of different views of the universe, in particular whether the Sun (Galileo's view) or the Earth (the Church's view) was its centre.
The Inquisition board set up by the Pope determined that the Dialogue had major problems. The first fault was the format: the typeface was inappropriate and the organization quite different from that of a scientific book, and the work therefore did not fulfill the definition of a research communication. A parallel problem now would be to submit a Nobel Prize lecture in PowerPoint to the BMJ or the Lancet. It would be rejected in an instant, but should it be?
The major problem with the Dialogue was that Galileo questioned the faith that the Church had in the Earth as the centre of the universe. The situation is similar now with the journal-centric view that the scientific journal is central to all research communication. We and others have pointed out that this is no longer true, with the Internet and even PowerPoint becoming primary tools of research communication.7
It is the scientific method that is central to science, not the scientific journal. The scientific method should also be central to our research communication processes, but it is not, and it has not been used to continuously improve how we communicate research. We are thus forced into a conundrum: we cannot change the process if the process is based upon faith rather than data.
Experience in various fields, including industry, demonstrates that there are forms of quality control besides peer review that could potentially be utilized by biomedical journals. These include Six Sigma, statistical quality control, and web-based, consumer-driven rating systems such as those employed by Amazon, eBay and Slashdot. There are thousands of studies in business and sociology evaluating decision-making processes that could be brought to bear on the editorial decision process, but they have not been used. It would seem very simple to develop randomized trials to determine which system best improves the quality of publication. As Jefferson has pointed out, there are almost no data suggesting that the existing peer review systems work, and none to suggest that they are better than any other system.
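To make one of the alternatives named above concrete, a minimal sketch of statistical quality control: a Shewhart-style control chart flags any batch whose measurement falls outside the mean ± 3 standard deviations of a historical baseline. The data here (methodological errors found per batch of articles) and the application to journals are hypothetical, for illustration only:

```python
import statistics

def control_limits(samples):
    """Shewhart-style control limits: mean +/- 3 standard deviations."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(samples, value):
    """True if `value` falls outside the control limits of `samples`."""
    lo, hi = control_limits(samples)
    return not (lo <= value <= hi)

# Hypothetical baseline: errors found per batch of published articles.
history = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]

print(control_limits(history))       # baseline limits, roughly (1.5, 7.1)
print(out_of_control(history, 12))   # a batch with 12 errors is flagged
```

The appeal of such a scheme is that it is testable: unlike conventional peer review, its false-positive and false-negative rates can be measured and compared in a randomized trial.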
What can be done? We argue that a ‘Science of Research Communication’ needs to be developed: a new discipline defined as ‘that branch of science which assesses the optimal means by which research can be communicated’. It needs to be an interdisciplinary effort driven by scientists, not editors.
With the introduction of the scientific method to the peer review process, we hope to move from the level of a 1931 pocket calculator to that of a supercomputer. Based upon the data, we cannot reject the hypothesis that scientific journals are faith based. We need to increase the power of the design through experimentation to adequately test this hypothesis.
‘The dogmas of the quiet past are inadequate to the stormy present... we must think anew and act anew.’ (A Lincoln)