“Let's try for Science, Nature, or Cell!” exclaim a student/postdoc and his/her advisor. These journals reach a wide audience, as many scientists frequently scan their tables of contents. However, with the availability of search engines such as PubMed, scanning tables of contents has become less important than it was in the past, when journals were retrieved one at a time from the shelves of a library. Thus it is somewhat counterintuitive that the three main journals have remained very powerful at a time when print subscriptions are in decline and most journals can be accessed electronically. The primary reason driving the current frenzied submission rate to these journals is the opportunity for career advancement. Publications in these journals are golden eggs in a curriculum vitae (CV) that can significantly enhance one's chances of getting jobs and grants.
In a meritocracy, evaluation of productivity is necessary, and judgment of quality must come into play. But have we dug ourselves into too deep of a hole by relying so heavily on journals and their associated impact factors for making decisions about quality? Have we “outsourced” too much of our responsibility in peer evaluation to journals?
A rationale for adopting a journal hierarchy as a proxy for quality is that top journals receive many papers; in partnership with scientific reviewers, they invest considerable energy in sorting through submissions to identify the “best science.” While this may seem like a perfect Darwinian selection system, we also are all aware of its flaws (Simons, 2008; Johnston, 2009). Not infrequently, a paper in a “top journal” fades from sight after publication, while the subsequent impact of a paper in a “lesser journal” increases. Journals also look for particularly newsworthy content to enhance their image (which they have the right to do) and not always for the best science. Given the large numbers of submissions, there also is a tendency to accept papers that have a clean bill of health from three or four reviewers, which is not necessarily a metric of outstanding science. The ultimate decision makers also are the journal editors, not the scientists who write the reviews. Thus this peer review system is heavily filtered in a nontransparent (or at least translucent) process that incorporates the goals of a journal. Furthermore, many scientists do not want to waste time on the “journal game,” prefer open-access journals, or seek more page space for their published work. Thus many outstanding studies are never subjected to the “top journal litmus test” in the first place.
There is disgruntlement in our scientific community about the growing emphasis on the where, rather than the what, in evaluating publications. This emphasis is creating more submissions, as a paper is often serially submitted, initially reaching for the top and, if rejected, moving down the journal food chain until it finds a home. This wastes enormous time that could be spent on doing science and creates anxiety among students and postdocs. However, I would argue that the fault does not lie with journal editors and their staff; their job is to make their journals successful. It is our job as a scientific community to evaluate published scientific work. We have created the predicament in which we find ourselves.
If the ball is in the court of the scientific community, why have we clung so tightly to and even reinforced the journal hierarchy? Not uncommonly, scientists who complain about the system succumb to it when it is their turn to write/present a peer review evaluation. Scientists themselves have become seduced by the sparkle of a high-profile paper on a CV. With so many papers and a shortage of time for reading and understanding them, counting high-profile papers on a CV is an easy solution for a scientist with a busy schedule. Reducing complex science to easy impact factors also provides tools for administrators who do not themselves understand the science.
What can be done to dig ourselves out of this rut? The first step is recognizing that peer evaluation is our responsibility. In evaluating qualifications for a grant, a job, or a promotion, it is too simplistic to think that judgment has already been rendered by prior competition for the most prized journal pages. Second, our scientific community might do well to reassert the value of publishing outstanding science in specialty journals. The phrase “better fit for a specialty journal” has become an uncomplimentary, lethal blow in the review process. But this view was not held by previous generations of scientists. While Science and Nature have been the places to publish important and provocative short communications for more than a century, prior generations of scientists often chose to publish their more complete, but still high-impact, studies in journals such as the Journal of General Physiology, the Journal of Biological Chemistry, and the Journal of Cell Biology. More recently, excellent new journals (such as Molecular Biology of the Cell and PLoS) have been added as publication possibilities. However, online supplemental material in Science and Nature (which in reality few read) also has contributed to the lower stature of the longer-format specialty journals, since 5 years of work can now be contained in a 2000-word “print” article along with the larger reservoir of space available as online material. Third, expanding the group of broad-interest, highly ranked journals beyond the present holy trinity might take some of the pressure off the system (the new eLife journal being launched by the Howard Hughes Medical Institute [HHMI], the Wellcome Trust, and the Max Planck Society will hopefully help in this expansion).
Most importantly, the merit of scientific work must be assessed after publication, rather than solely during the journal review process. We do not have a good general scheme for achieving this, and efforts such as Faculty of 1000 have had little influence on evaluations. However, rather than waiting for new schemes, scientists who conduct peer review need to make sufficient effort to assess and articulate the value of scientific studies. We need to restrain ourselves from just using journal names as primary evidence of merit. For example, it is not uncommon for a grant discussion or a promotion letter to say, “In the past five years, the principal investigator (PI) has published six papers, two of which were published in Cell.” Chances are that the work is excellent and the PI highly productive, but we need to explain the science first and kick the habit of resting one's argument for scientific productivity on the name of a high-profile journal, as if this were all the information needed.