Glycobiology. 2016 May; 26(5): 429.
Published online 2016 March 29. doi: 10.1093/glycob/cww036
PMCID: PMC4851720

Novel or reproducible: That is the question

When did novelty become a criterion for stellar science that is rewarded with promotions, funding and tenure? Many so-called high-impact journals require novelty in the papers they publish (e.g. “The criteria for a paper to be sent for peer-review are that the results seem novel, arresting (illuminating, unexpected or surprising)…” (Getting Published in Nature: The Editorial Process, Nature); “Selected papers should present novel and broadly important data, syntheses or concepts” (General Information for Authors, Science magazine)). Interestingly, these are the same journals that struggle with the highest rates of retraction (Fang and Casadevall 2011). As scientists, journals and funding agencies grapple with issues centered around research reproducibility (Prinz et al. 2011; Begley and Ellis 2012; Landis et al. 2012; Collins and Tabak 2014; Bradbury and Plückthun 2015), we should examine the emphasis that is put on novelty and how it could potentially influence the scientific process.

Our mission as scientists is to determine how nature works. We are expected to be unbiased at every stage in the scientific process, including the generation of a question or hypothesis, experimental design, data collection and analysis, and conceptualization of models that best explain the results in as simple a manner as possible. We then design experiments that test the model and, through an iterative process, modify the model to account for subsequent experimental findings. Ultimately, we hope to arrive at a model that most accurately represents nature and how it works. If the most accurate and parsimonious model is one that has never been described before—great! If it is one that has been described previously in another system—great! Either way, we have a better understanding of nature and a firmer foundation on which to build future experiments and gain biological and clinical insights. Whether a particular model is novel is completely irrelevant to this process.

However, if we enter into a study with the mindset that we have to find something never seen before (to publish in a “high-impact” journal), we are no longer performing science in a purely empirical, unbiased fashion—we are now biased to find something novel. This potentially influences how studies are designed; what experimental systems are used; how data are collected, analyzed or prioritized; and how models are formed (Fanelli 2010). There are instances where carefully crafted experiments are designed to prove rather than test a novel hypothesis. Experimental systems can be employed for the express purpose of generating novel observations rather than deciphering how a process works. Data that point to a previously described mechanism may be overlooked because they will not result in the novel findings required for a “high-impact” publication. In an effort to describe novel mechanisms, many scientists may not be operating by the principles of Occam's razor (or the law of parsimony) in developing hypotheses and models. The end result is that we are more likely to have models that do not accurately represent how nature works.

The irony here is that this type of science ends up being the opposite of high-impact. It does not adequately inform future studies and has a higher likelihood of needing correction. It has the net result of moving a field backward, rather than forward, as time and money are wasted trying to build upon findings that may not be reproducible or biologically/clinically relevant (Freedman et al. 2015).

If we are truly concerned about scientific reproducibility, then we need to reexamine the current emphasis on novelty and its role in the scientific process. These types of discussions will hopefully remind people that conducting unbiased, quantitative, well-controlled science that others can build upon (regardless of its novelty) is what will move fields forward and have a long-lasting impact on our understanding of nature. This also includes publishing reproducible, well-controlled studies that have resulted in negative findings, as negative results can be as informative for future studies as positive ones. Praising and rewarding this type of science (as was done many decades ago) will reinforce the indisputable importance of unbiased scientific inquiry. Perhaps then, reproducibility will not be so novel.


References

  • Begley CG, Ellis LM. 2012. Drug development: Raise standards for preclinical cancer research. Nature. 483:531–533.
  • Bradbury A, Plückthun A. 2015. Reproducibility: Standardize antibodies used in research. Nature. 518:27–29.
  • Collins FS, Tabak LA. 2014. Policy: NIH plans to enhance reproducibility. Nature. 505:612–613.
  • Fanelli D. 2010. Do pressures to publish increase scientists’ bias? An empirical support from US States data. PLoS ONE. doi:10.1371/journal.pone.0010271.
  • Fang FC, Casadevall A. 2011. Retracted science and the retraction index. Infect Immun. 79:3855–3859.
  • Freedman LP, Cockburn IM, Simcoe TS. 2015. The economics of reproducibility in preclinical research. PLoS Biol. 13(6):e1002165. doi:10.1371/journal.pbio.1002165.
  • Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, Crystal RG, Darnell RB, Ferrante RJ, Fillit H et al. 2012. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 490:187–191.
  • Prinz F, Schlange T, Asadullah K. 2011. Believe it or not: How much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 10:712.
