PLoS One. 2010; 5(10): e13636.
Published online Oct 18, 2010. doi:  10.1371/journal.pone.0013636
PMCID: PMC2956678
Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research
Yassine Gargouri,1 Chawki Hajjem,1 Vincent Larivière,2 Yves Gingras,3 Les Carr,5 Tim Brody,5 and Stevan Harnad4,5*
1Institut des Sciences Cognitives, Université du Québec à Montréal, Montréal, Québec, Canada
2Observatoire des Sciences et des Technologies, Université du Québec à Montréal, Montréal, Québec, Canada
3Canada Research Chair in the History and Sociology of Science, Université du Québec à Montréal, Montréal, Québec, Canada
4Canada Research Chair in Cognitive Sciences, Université du Québec à Montréal, Montréal, Québec, Canada
5School of Electronics and Computer Science, University of Southampton, Southampton, United Kingdom
Robert P. Futrelle, Editor
Northeastern University, United States of America
* E-mail: harnad/at/
Conceived and designed the experiments: LC SH. Performed the experiments: YG CH. Analyzed the data: YG CH TB. Contributed reagents/materials/analysis tools: VL YG LC TB. Wrote the paper: SH.
Received January 3, 2010; Accepted September 29, 2010.
Articles whose authors have supplemented subscription-based access to the publisher's version by self-archiving their own final draft to make it accessible free for all on the web (“Open Access”, OA) are cited significantly more than articles in the same journal and year that have not been made OA. Some have suggested that this “OA Advantage” may not be causal but just a self-selection bias, because authors preferentially make higher-quality articles OA. To test this we compared self-selective self-archiving with mandatory self-archiving for a sample of 27,197 articles published 2002–2006 in 1,984 journals.
Methodology/Principal Findings
The OA Advantage proved just as high for both. Logistic regression analysis showed that the advantage is independent of other correlates of citations (article age; journal impact factor; number of co-authors, references or pages; field; article type; or country) and highest for the most highly cited articles. The OA Advantage is real, independent and causal, but skewed. Its size is indeed correlated with quality, just as citations themselves are (the top 20% of articles receive about 80% of all citations).
The OA advantage is greater for the more citable articles, not because of a quality bias from authors self-selecting what to make OA, but because of a quality advantage, from users self-selecting what to use and cite, freed by OA from the constraints of selective accessibility to subscribers only. It is hoped that these findings will help motivate the adoption of OA self-archiving mandates by universities, research institutions and research funders.
The 25,000 peer-reviewed journals and refereed conference proceedings that exist today publish about 2.5 million articles per year, across all disciplines, languages and nations. No university or research institution anywhere, not even the richest, can afford to subscribe to all or most of the journals that its researchers may need to use [1]. As a consequence, all articles are currently losing some portion of their potential research impact (usage and citations), because they are not accessible online to all their potential users [2].
This is supported by recent evidence, independently confirmed by many studies, that articles whose authors have supplemented subscription-based access to the publisher's version by self-archiving their own final draft to make it accessible free for all on the web (“Open Access”, OA) are cited significantly more than articles in the same journal and year that have not been made OA. This “OA Impact Advantage” has been found in all fields analyzed so far – physical, technological, biological and social sciences, and humanities [3]–[12].
Hence OA is not just about public access rights or the general dissemination of knowledge: It is about increasing the impact and thereby the progress of research itself. A work's research impact is an indication of how much it contributes to further research by other scientists and scholars – how much it is used, applied and built upon [13]–[17]. That is also why impact is valued, measured and rewarded in researcher performance assessment as well as in research funding [18].
Self-archiving mandates
Only about 15–20% of the 2.5 million articles published annually worldwide are being self-archived by their authors today [8], [19]. Creating an Institutional Repository (IR) and encouraging faculty to self-archive their articles therein is a good first step, but that is not sufficient to raise the self-archiving rate appreciably above its current spontaneous self-selective baseline of 15–20% [20]. Nor are mere requests or recommendations by researchers' institutions or funders, encouraging them to self-archive, enough to raise this 20% figure appreciably, even when coupled with offers of help, rewards, incentives and even offers to do the deposit on the author's behalf [21]. In two international, multidisciplinary surveys, 95% of researchers reported that they would self-archive if (but only if) required to do so by their institutions or funders. (Eighty-one percent reported that, if it was required, they would deposit willingly; 14% said they would deposit reluctantly, and only 5% would not comply with the deposit requirement; [22].) Subsequent studies on actual mandate compliance have gone on to confirm that researchers do indeed do as they reported they would do, with mandated IRs generating deposit rates several times greater than the 20% self-selective baseline and well on the road toward 100% within about two years of adoption [20].
Universities' own IRs are the natural locus for the direct deposit of their own research output: Universities (and research institutions) are the universal providers of all research output, in all scientific and scholarly disciplines; they accordingly have a direct interest in hosting, archiving, monitoring, measuring, managing, evaluating, and showcasing their own research output in their own IRs, as well as in maximizing its uptake, usage, and impact [23], [24]. OA self-archiving mandates hence add visibility and value at both the individual and institutional level [25].
In 2002, the University of Southampton's School of Electronics & Computer Science (ECS) became the first in the world to adopt an official self-archiving mandate. Since then, a growing number of departments, faculties and institutions worldwide (including Harvard, Stanford, and MIT) as well as research funders (including all seven UK Research Funding Councils, the US National Institutes of Health, and the European Research Council) have likewise adopted OA self-archiving mandates. Over 160 mandates had already been adopted, registered, and charted in the Registry of Open Access Repository Material Archiving Policies (ROARMAP) as of summer 2010.
In 2008, mindful of the benefits of mandating OA, the council of the European Universities Association (EUA, consisting of more than 800 universities, in 46 countries) unanimously recommended that all European Universities should create IRs and should require all their research output to be deposited in them immediately upon publication (to be made OA as soon as possible thereafter). The EUA further recommended that these self-archiving mandates be extended to all research results arising from EU research project funding. A similar recommendation was made by EURAB (European Research Advisory Board). In the US, the FRPAA has proposed similar mandates for all research funded by the major US research funding agencies.
Some studies, however, have suggested that the “OA Advantage” might just be a self-selection bias rather than a causal factor, with authors selectively tending to make higher-quality (hence more citable) articles OA [26]–[29]. The present study was carried out to test this hypothesis by comparing self-selected OA with mandated OA on the basis of the research article output of the four institutions with the longest-standing OA mandates: (i) Southampton University (School of Electronics & Computer Science) in the UK (since 2002); (ii) CERN (European Organization for Nuclear Research) in Switzerland (since November 2003); (iii) Queensland University of Technology in Australia (since February 2004); (iv) Minho University in Portugal (since December 2004).
The objective was to compare citation counts – always within the same journal/year – for OA (O) and non-OA (Ø) articles, comparing the O/Ø citation ratios for OA that had been self-selected (S) vs. mandated (M). (The critical comparisons of interest were hence OS/Ø vs. OM/Ø.) The sample covered articles published between 2002 and 2006. The metadata for the articles were collected from the four institutional repositories, as well as from the Thomson-Reuters citation database. (Citation counts were extracted from the Thomson-Reuters database in November 2008. About two years need to elapse for the citations from the most recent year to stabilize.)
The effect of OA on citation impact cannot be reliably tested by comparing OA and non-OA journals because no two journals have identical subject matter, track-records and quality-standards (nor are there as yet enough established OA journals in most fields). The comparison must hence be between OA and non-OA articles published within the same (non-OA) journals [5]. For each of the mandated articles, Mi, deposited in our four mandated IRs, we accordingly collected, as our pool of nonmandated controls for comparison, the Nj articles that had been published in the same journal, volume and year. Our sample of self-archived articles from 2002 to 2006 was distributed across 1,984 non-OA journals in the Thomson-Reuters database (Table 1). (Based on the Directory of Open Access Journals (DOAJ), 2% of journals indexed by Thomson-Reuters in 2006 were OA journals. All articles from these journals were removed from our pool because for them O/Ø comparisons were not possible.)
Table 1. Journal counts per year.
To reduce our nonmandated comparison sample to a reasonable processing size, we restricted the number of journal/year-matched controls to the 10 Øj articles that were semantically closest to their corresponding target Mi (as computed on the basis of shared words in their titles, omitting stop words). This tightening of content similarity also made the control articles even more comparable to their targets than using the full spectrum of same-journal content. The total size of the article sample (6,215 mandated targets plus their 20,982 corresponding controls) from 2002 to 2006 was 27,197. (When more than one M article was published in the same journal/volume/year – the case for 66% of M articles – the 10 controls were selected by keyword matching against just one of those M articles.)
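The title-word matching step can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code: the stop-word list and the article titles are invented for the example, and only raw title-word overlap is scored.

```python
# Illustrative sketch: select the 10 same-journal/year control articles whose
# titles share the most non-stop words with a mandated target article.
STOP_WORDS = {"a", "an", "and", "for", "in", "of", "on", "the", "to", "with"}

def title_words(title):
    """Lower-cased title words with stop words removed."""
    return {w for w in title.lower().split() if w not in STOP_WORDS}

def closest_controls(target_title, candidate_titles, k=10):
    """Return the k candidates with the largest title-word overlap."""
    tw = title_words(target_title)
    scored = sorted(candidate_titles,
                    key=lambda t: len(tw & title_words(t)),
                    reverse=True)
    return scored[:k]

controls = closest_controls(
    "Open access and citation impact in biology",
    ["Citation impact of open access biology articles",
     "Protein folding dynamics in yeast",
     "Impact of access policies on biology citations"])
```

With fewer than k candidates in a journal/year, the function simply returns them all, as here.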
The full-text OA status of the articles in our sample was verified using an automated web-wide search robot [8] as well as an automated Google Scholar search. (Note that any OA articles that our robot missed would reduce any OA Advantage. Hence our estimate of the OA Advantage is conservative.) Figure 1 shows each of our four mandated institutions' verified annual OA article deposits as a percentage of the institution's total published article output for each year, based (only) on those articles published in the journals indexed by the Thomson-Reuters citation database; the resulting estimate of the overall OA mandate compliance rate is about 60% (for publishing years 2002–2006, with deposits up to 2009, when the analysis was conducted). Note also the robot data's confirmation of the approximately 15% baseline for spontaneous, self-selected (i.e., non-mandated) OA self-archiving among the control articles in the same journal/years [19].
Figure 1. Open Access (OA) Self-Archiving Percentages for Institutions With Self-Archiving Mandates Compared to Non-Mandated, Self-Selected Controls.
This mandated deposit rate of 60% is substantially higher than the self-selected deposit rate of 15–20%. Of course, with anything short of 100% compliance it remains logically possible to preserve the hypothesis that the OA citation advantage is solely a self-selection bias, by arguing that, when self-archiving is mandated, the bias toward self-selectively self-archiving one's more citable articles instead takes the form of a selective bias toward noncompliance with the mandate for one's less citable articles. But in that case, if the OA advantage were indeed solely or largely due to self-selection bias, one would reasonably expect at least a substantial reduction in its size once the mandated self-archiving rate is three times the spontaneous rate.
To test whether mandated OA reduces the OA citation advantage, 4 kinds of articles need to be compared:
  • OM: OA, Mandated
  • ØM: Non-OA, Mandated
  • OS: OA, Self-Selected
  • ØS: Non-OA, Self-Selected
The analysis uses the citation counts within each journal/year. Because the date on which the mandate was first adopted varies (from 2002 to 2004) across the four institutions, we analyzed the data for the four institutions jointly as well as individually. The individual analyses show the time-course of mandate compliance more clearly; the global analysis combines the data, enlarges the sample size and smooths out incidental effects of institutional and timing differences.
We compared the following ratios: O/Ø, OM/OS, OS/ØS, OM/ØM, OM/Ø and OS/Ø, using their mean log citation ratios. For example, to compare mandated OA with self-selected OA, we computed the logarithm of the ratio OMj/OSj for each journal j and then took the arithmetic mean of those logarithms across all journals. With OM/OS, a mean log ratio greater than zero indicates an advantage in favor of OM, and one less than zero an advantage in favor of OS:
\[ \overline{\log\!\left(\tfrac{OM}{OS}\right)} \;=\; \frac{1}{N}\sum_{j=1}^{N} \log\!\left(\frac{OM_j}{OS_j}\right) \]
The logarithm is used to normalize the data and to reduce any effect arising from articles that have relatively high citation counts, compared to the whole sample. The comparisons are all within-journal, to minimize between-journal differences in content, quality and average citation levels (“journal impact factor”); OA articles are keyword-matched to their non-OA controls in order to minimize any differences still further.
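As a toy illustration of the mean log citation ratio described above, the following uses invented per-journal mean citation counts (not the study's data) for mandated-OA (OM) and self-selected-OA (OS) articles:

```python
import math

# Hypothetical per-journal mean citation counts, one entry per journal j.
om = [12.0, 8.0, 20.0]   # OM_j: mandated-OA articles
os_ = [10.0, 9.0, 15.0]  # OS_j: self-selected-OA articles

# Log of the within-journal ratio, then the arithmetic mean across journals.
log_ratios = [math.log(a / b) for a, b in zip(om, os_)]
mean_log_ratio = sum(log_ratios) / len(log_ratios)

# mean_log_ratio > 0 indicates an advantage for OM; < 0 an advantage for OS.
```

Taking the logarithm before averaging keeps a single journal with very high citation counts from dominating the mean, which is the normalization rationale stated above.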
Overall, OA articles are cited significantly more than non-OA articles, confirming the repeatedly observed OA Advantage (O/Ø). There is also no evidence at all that mandated OA (OM) has a smaller citation advantage than self-selected OA (OS). Figure 2 shows the results for the four institutions together. Appendix S1 shows each institution separately. The pattern for the individual institutional data is largely the same as for the average across the four institutions.
Figure 2. Log Citation Ratios Comparing the Yearly OA Impact Advantage for Self-Selected vs Mandatory OA, 2002–2006.
For all OA vs Non-OA (O/Ø) comparisons, regardless of whether the OA was Self-Selected (S) or Mandated (M), the mean log citation differences are significantly greater than zero (based on correlated-sample t-tests for within-journal differences; Table 2). There is no detectable reduction in the size of the OA Advantage for Mandated OA (60%) compared to Self-Selected OA (15%). It would require a very complicated argument indeed (“self-selective noncompliance for less citable articles”) to resurrect the hypothesis that the OA Advantage is only or mostly a self-selection bias in the face of these findings. (Such an argument does remain a logical possibility until there is 100% mandate compliance, but an increasingly implausible one.)
Table 2. Paired Samples Test.
Logistic regression
The number of citations an article receives can be correlated with and hence influenced by a variety of variables. Those variables, in turn, could create another kind of bias. For example, older articles tend to have more citations than younger articles simply because there has been more time to cite them. If OA articles tended to be older than non-OA articles, then article age, rather than OA, could be the cause of the OA Advantage. A way to test whether correlates of citation other than OA are responsible for the OA Advantage is to perform a multiple logistic regression analysis to see whether OA alone is still significantly correlated with higher citations when the correlation with other variables has been “factored out.”
In ordinary multiple regression analysis, there might be, say, three “Predictor” variables used to predict a fourth “Target” variable. For example, in weather forecasting, each of (P1) temperature, (P2) pressure, and (P3) humidity is individually correlated with, and hence predictive of, (T) rain. These three pairwise correlations are each examples of simple regression. The prediction is much better, however, if we use all three predictors jointly. This is called multiple regression. It gives each of the predictors a “weight” (β) that estimates how much it contributes independently to predicting rain, with the other two predictors factored out. Multiple regression analysis works if the variables are continuous (like temperature) and normally distributed (i.e., bell-curve-shaped). But if the variables are discrete or not normally distributed, a variant analysis called logistic regression is used, in which the variables are subdivided above and below a cut-off point, and various different models, with different cut-off points, are tested to see which ones predict the target variable best in each range. We use this variant analysis because our variables are not all continuous or normally distributed. The logistic regression weights (Exp(β)) are estimates of the size of the individual contributions of each of our predictor variables to our target variable (citations).
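A minimal sketch of this dichotomize-and-fit procedure, on synthetic data. The two predictors (age, oa), the citation-generating rule, and the cut-off of 5 citations are invented for illustration; the study used fifteen predictors, several cut-offs, and stepwise model selection.

```python
import math
import random

random.seed(0)

# Synthetic articles: citation count grows with age and with an OA flag.
data = []
for _ in range(500):
    age = random.randint(1, 5)
    oa = random.randint(0, 1)
    cites = age + 3 * oa + random.randint(0, 4)
    data.append((age, oa, cites))

# Dichotomous target: 1 if citations >= 5 (the cut-off), else 0.
X = [(1.0, float(a), float(o)) for a, o, _ in data]  # intercept, age, oa
y = [1.0 if c >= 5 else 0.0 for _, _, c in data]

# Plain gradient ascent on the logistic log-likelihood (no library dependency).
beta = [0.0, 0.0, 0.0]
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(b * x for b, x in zip(beta, xi))))
        for j in range(3):
            grad[j] += (yi - p) * xi[j]
    beta = [b + 0.01 * g / len(X) for b, g in zip(beta, grad)]

# Exp(beta) for the OA predictor, read as an odds ratio; Exp(beta)-1 gives the
# polarity-highlighting form used in the paper's tables.
odds_ratio_oa = math.exp(beta[2])
```

Because the synthetic rule adds citations for OA articles, the fitted OA odds ratio comes out above 1, i.e., a positive Exp(β)-1.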
(In Table 3 – and in all the other Tables displaying the Exp(β) weights for our logistic regressions – the relative size of the Exp(β) weight for each of our 15 predictor variables (in each of our models, which vary in their ranges and cut-off points) estimates how much (and in what direction) each predictor contributes to predicting the target (citations); statistically significant contributions are in boldface. To visualize the size and direction of the independent contributions of our predictors, each Table has a corresponding Figure, showing the contributions as color-coded bars.)
Table 3. Set of fourteen variables (plus one interaction) potentially influencing citation counts.
We have accordingly analyzed the following set of variables that potentially influence citations. Variables 1–8 are known to be correlated with citation counts. Variable 9 is OA itself; variable 10 is a measure of the degree to which the relation between OA and Age is non-additive. Variable 11 indicates whether or not the OA is mandated. Variables 12–15 are the four mandating institutions that are our reference points in this study.
All self-citations were subtracted from the citation counts. (About 32% of the articles in our sample have at least 1 self-citation, with an average of about 2 self-citations per article.) As is well known, and evident from Figure 3, citation counts are not normally distributed and instead follow a power-law or stretched-exponential function [19], [30], [31]. We accordingly used binary stepwise logistic regression analysis, with a dichotomous dependent variable, selecting for each test the model that maximizes the chi-square likelihood ratio. To make the interpretation of the coefficients easier, we exponentiated the β coefficients (Exp(β)) and interpreted them as odds ratios (minus 1, to highlight the polarity of any change). For example, we can say for the second model (M2) that for a one-unit increase in OA, the odds of receiving 5–9 citations (versus 1–4 citations) increased by +.323 (i.e., a factor of 1.323). Table 4 and Figure 4 show the (Exp(β)-1) values for each model with “x–y cites vs. y–z cites” as dependent variables (x, y, z ∈ {1, 2, 3, …, 20}), assigning 1 if the citation count (minus self-citations) was between y and z and 0 if it was between x and y. The four models comparing citation ranges are: (M1) zero vs. lo (1–4); (M2) lo (1–4) vs. med-lo (5–9); (M3) lo (1–4) vs. med-hi (10–19); (M4) lo (1–4) vs. hi (20+). (The Exp(β) values of the variables turned out to have the same polarity and to be quite similar in magnitude, whether or not self-citations were subtracted.)
Figure 3. Distribution of citation counts (minus self-citations) for articles.
Table 4. The (Exp(β)-1) values for logistic regressions.
Figure 4. Exp(β)-1 values for logistic regressions.
Figure 4 shows that citations are, as is already well known, positively correlated with the first eight variables listed earlier (Age, Journal Impact Factor, Authors, References, Pages, Science, Review, USA) – as well as with OA. Articles that are made OA have significantly higher citation counts. In this analysis the significant OA advantage is independent of the other variables; it is present in every citation range but highest in the highest citation range (1–4 citations vs 20+ citations): In other words, the OA advantage is strongest for highly cited articles. (The classification as ‘Review’ is derived from the Thomson-Reuters database, which uses the number of references cited as its main criterion for classifying an article as a Review. As the number of references cited is another of our predictor variables, there was probably some confounding of these two non-independent factors in our analysis. Citations came out as negatively correlated with the Review variable for the low-medium citation ranges in our analysis, so it was eliminated in further analyses.)
In our sample, articles by authors at the mandated institutions have higher than average citation counts; this effect is present only in the medium-high citation ranges (and is of course also influenced by the level of author compliance with the institutional Mandate, discussed further below). CERN articles have higher citation counts in the lowest and especially the highest citation range. However, when all CERN articles are excluded from our sample, there is no significant change in the other variables.
There is a significant interaction between Age and OA (Age*OA) for the lowest citation range comparison, zero/lo (0 vs. 1–4 citations), as well as for the highest comparison, lo/hi (1–4 vs. 20+ citations). Both the linear main effects of Age and OA and this nonlinear interaction are statistically significant. Figure 5 illustrates the Age*OA interaction effect for the lo/hi range comparison using the means for OA and Non-OA citation counts for each article age. The pattern again confirms the OA advantage but also shows that in the lo/hi comparison range the advantage increases more for older articles, over and above what would be expected from age alone.
Figure 5. Interaction between OA and article age.
Logistic regression by Impact Factor interval
In order to compare articles published in comparable journals and to see the profile for journals in increasing impact ranges (see distribution, Figure 6), we divided our sample into quartiles of Journal Impact Factor (JIF), each range covering 25% of the articles:
\[ Q_1\colon 0.00 \le \mathrm{JIF} < 0.63,\quad Q_2\colon 0.63 \le \mathrm{JIF} < 1.05,\quad Q_3\colon 1.05 \le \mathrm{JIF} < 1.78,\quad Q_4\colon 1.78 \le \mathrm{JIF} \le 29.96 \]
The top quartile alone contains journals with JIFs ranging from 1.78 to 29.96. As we are also interested in the variability within this top quartile, we further subdivided it into two octiles, each covering 12.5% of the articles. (Subdividing more minutely would make the sample sizes too small to detect effects of interest.) This yielded a total of five ranges for the JIF variable:
\[ R_1\colon 0.00\text{–}0.63,\quad R_2\colon 0.63\text{–}1.05,\quad R_3\colon 1.05\text{–}1.78,\quad R_4\colon 1.78\text{–}2.47,\quad R_5\colon 2.47\text{–}29.96 \]
The same regression is done separately for each JIF range, controlling for all the variables (except JIF). Figures 7, 8, 9, 10 and 11 (and Appendix S2: Tables S2a–S2e) summarize the values of Exp(β)-1 for the controlled variables in each JIF range. (As noted earlier, our Exp(β) values for these variables exhibit the same polarity and pattern whether or not we exclude self-citations from the citation count.)
Figure 6. Distribution of Journal Impact Factors by Journal.
Figure 7. Exp(β)-1 values for logistic regressions (lowest JIF range: 0.00–0.63).
Figure 8. Exp(β)-1 values for logistic regressions (JIF range 0.63–1.05).
Figure 9. Exp(β)-1 values for logistic regressions (JIF range 1.05–1.78).
Figure 10. Exp(β)-1 values for logistic regressions (JIF range 1.78–2.47).
Figure 11. Exp(β)-1 values for logistic regressions (JIF range 2.47–29.96).
When articles are published in a low JIF journal, citation counts for their individual articles are positively correlated with Age, References, Authors, OA and M. The OA advantage is greater in the higher citation ranges. For the lowest range of individual article citations, the Age*OA interaction is significant, but OA itself is not.
For articles in journals with JIFs between 0.63 and 1.05, the pattern is quite similar, except that the Age*OA interaction is absent and OA itself (alongside Age, as separate variables) is significant.
For articles in journals with JIFs between 1.05 and 1.78, the pattern is again quite similar. The USA and Review variables now also correlate with citation increase.
For journals with JIFs between 1.78 and 2.47, longer articles (more pages) have more citations. Here the OA advantage is significant only in the highest citation count ranges. The number of authors is also less correlated with increased citations as the citation range gets higher. CERN and QUT have a citation advantage in this JIF range. However, removing the articles from these institutions does not alter the pattern for the other variables.
For journals with JIFs between 2.47 and 29.96, the OA advantage is again significant for the highest citation ranges. (The increased citations for USA and Review articles also increase in significance.) In this JIF range, CERN has a citation advantage in the medium-high citation ranges. Removing that institution's articles, however, does not change the pattern for the other variables.
Overall, OA is correlated with a significant citation advantage for all journal JIF intervals as well as for the sample as a whole. This advantage is greatest for the highest citation ranges. When regressions are done separately for the different JIF ranges, the Age*OA interaction disappears, but OA and Age (as separate variables) remain significant. (There is no significant effect of a specific institution compared to the rest of the institutions, hence there is no need to exclude any specific institution from our sample.)
This study confirms that the OA advantage is a statistically significant, independent positive increase in citations, even when we control for the independent contributions of many other salient variables (article age, journal impact factor, number of authors, number of pages, number of references cited, Review, Science, USA author). All these other variables are of course correlated with citation counts, so the fact that OA continues to correlate significantly with an independent positive increase in citation counts even when the contributions of all these other correlates are calculated independently means that the OA Advantage is not just a bias arising from either a random or a systematic imbalance in the other correlates of citations.
Moreover, the OA advantage is just as great when the OA is mandated (with mandate compliance rate ~60%) as when it is self-selective (self-selection rate ~15%). This makes it highly unlikely that the OA advantage is either entirely or mostly the result of an author bias toward selectively self-archiving higher quality – hence higher citability – articles. Nor are the main effects the result of institutional citation advantages, as the institutions were among the independent predictor variables in the logistic regression; the outcome pattern and significance is also unaltered by removing CERN, the only one of the four institutions that might conceivably have biased the outcome because its papers were all in one field and tended to be of higher quality, hence higher citability overall.
Since, with the exception of our one unidisciplinary institute – CERN (high energy physics) – the pluridisciplinary articles from the three other mandated institutional repositories are mostly not in fields that habitually self-archive their unrefereed preprints well before publication (as many in high energy physics do), nor in fields that already have effective OA for their published postprints (as astronomy does [9]), it is also unlikely that the OA advantage is either entirely or mostly just an early-access (prepublication) advantage [33], [34]. This will eventually be testable once there are enough reliable data available on deposit date, relative to publication date, for a large enough body of self-archived OA articles. In any case, an early-access advantage in a preprint self-archiving field translates into a generic postpublication OA advantage in the vast majority of fields in which authors do not self-archive their prepublication preprints, so that their published postprints are accessible only to subscribers – unless they have also been self-archived. The OA mandates all apply only to refereed postprints, self-archived upon publication, not to pre-refereeing preprints, self-archived before publication.
This study confirms that the OA advantage is substantially greater for articles that have successfully met the quality standards of higher-impact journals, and it is also greater in the higher citation ranges for individual papers within each journal-impact level. The typical Pareto distribution for citations, whereby the top 10–20% of articles receive about 80–90% of all citations [35], is present in our own sample of 708,219 articles extracted from Thomson-Reuters from 1998 to 2007: about 20% of articles received about 80% of all citations. In addition, 10% of journals receive 90% of all citations.
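The skew described above can be illustrated with a synthetic power-law-like sample (invented counts, not the study's 708,219 articles):

```python
# 100 synthetic articles whose citation counts fall off as a rough power law
# of rank; with this exponent, the top 20% capture roughly 80% of citations.
cites = sorted((int(1000 / rank ** 1.2) for rank in range(1, 101)), reverse=True)

top20_share = sum(cites[:20]) / sum(cites)  # citation share of the top 20%
```

The exact share depends on the exponent chosen; the point is only that a heavy-tailed rank distribution concentrates most citations in a small fraction of articles.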
The implication is that OA itself will not make an unusable (hence uncitable) paper more used and cited (although the proportion of uncited papers has been diminishing with time; [31]). But wherever there are subscription-based constraints on accessibility, providing OA will increase the usage and citation of the more usable and citable papers, probably in proportion to their importance and quality, hence citability. We accordingly infer from our results that the most likely cause of the OA citation advantage is not author self-selection toward making more citable articles OA, but user self-selection toward using and citing the more citable articles – once OA self-archiving has made them accessible to all users, rather than just to those whose institutions could afford subscription access. In other words, we conclude that the OA advantage is a quality advantage, rather than a quality bias: it is not that the higher quality articles – the ones that are more likely to be selectively cited anyway – are more likely to be made OA self-selectively by their authors, but that the higher quality articles that are more likely to be selectively cited are made more accessible, hence more citable, by being made OA.
Our results also suggest the possibility that mandated OA might have some further independent citation advantage of its own, over self-selected OA – but until and unless this effect is replicated, it is more likely that this small, previously unreported effect was due to chance or sampling error. If there does indeed prove to be an independent “mandate advantage” over and above OA itself, a possible interpretation would be the reverse of the self-selection hypothesis: There may be a higher proportion of higher-quality work among the 80% that are not being made OA on a self-selective basis today than among the 20% that are; the result is that the OA mandates serve to help bring this “cream of science” to the top.
It also needs to be noted that some of the factors contributing to the OA advantage are permanent, whereas others will shrink as OA rises from its current 15–20% level and will disappear completely at 100% OA. All competitive advantage of OA over non-OA (because OA is more accessible) will of course vanish at 100% OA (as will the possibility of concurrent measurement of the OA Advantage). Any self-selective bias (whether positive or negative) will likewise disappear at 100% OA. What will remain will be the quality advantage itself (the tendency of researchers to selectively use and cite the best research, if they can access it), but maximized by leveling the playing field, making everything accessible to every user online.
There will continue to be the early-access advantage in fast turnaround fields: It is not that making findings accessible earlier merely gets them their citation “quota” earlier; providing OA earlier significantly increases that quota, probably by both accelerating and broadening the uptake of the findings in further research [33]. And even after the competitive advantage is gone because all articles are OA, the download advantage will continue to be enjoyed by all articles [36], [37] (thereby potentially influencing research even where it does not generate citations), while the quality advantage will see to it that for the best work, increased downloads are translated into uptake, usage and eventual increased citations. (Higher download counts earlier on have been found to be correlated with, hence predictive of, increased citation counts later; [38].)
Summary and Conclusion
The assumption that increasing access to research will increase its usage and impact is the main rationale for the worldwide OA movement. Many prior studies have by now shown across all fields that journal articles that are made freely accessible to all potential users are cited significantly more than articles that are accessible only to subscribers. There is prior evidence for a self-selection bias toward the preferential self-archiving of higher quality articles in a few special fields (such as astronomy and some areas of physics) where most articles are made OA in unrefereed preprint form long before they are refereed and published, and where the published version is effectively accessible to all potential users as soon as it is published. Authors may indeed be more reluctant to make the preprints of papers about which they have doubts freely accessible online before they are refereed [29], [33]. But we have now shown that for most other fields (i) the OA Advantage remains just as high for mandatory self-archiving as for self-selected self-archiving and that (ii) this is not an artifact of systematic biases in other correlates of citation counts. Both the self-archiving and the mandates apply to refereed postprints, upon acceptance for publication, not to unrefereed preprints.
Hence the OA Advantage is real, independent and causal. It is indeed true that the size of the advantage is correlated with quality, just as citations themselves are correlated with quality (the top 20% of articles receiving about 80% of all citations); but we infer that the real cause of the higher OA advantage for the more citable articles is not a quality bias from author self-selection but the quality advantage of the more citable articles, an advantage that OA enhances by maximizing accessibility, and thereby also citability. On a playing field leveled by OA, users can selectively access, use and cite those articles that they judge to be of the highest relevance and quality, no longer constrained by their accessibility.
Overall, only about 15–20% of articles are being spontaneously self-archived today, self-selectively. To reach 100% OA globally, researchers' institutions and funders need to mandate self-archiving, as they are now increasingly beginning to do. We hope that this demonstration that the OA Impact Advantage is real and causal will provide further incentive and impetus for the adoption of OA mandates worldwide in order to ensure that research can at last achieve its full impact potential, no longer constrained by today's needless limits on its accessibility to its intended users [39][42].
To measure that maximized research impact, we and others are already developing new OA metrics for monitoring, analyzing, evaluating, crediting and rewarding research productivity and progress [18], [36], [38], [43][52]. Hence there is no need to have any penalties or sanctions for non-compliance with OA self-archiving mandates. As the experience of Southampton ECS, Minho, QUT and CERN has already demonstrated, OA mandates, together with OA's own intrinsic rewards (enhanced research access, usage and impact), will be enough to reinforce the causal connection between providing access and reaping its impact, through the research community's existing system for evaluating and rewarding research productivity. In the online era, researchers' own “mandate” will no longer just be “publish-or-perish” but “self-archive to flourish.”
Appendix S1
OA Impact Advantage for each Institution. Figure 2 showed the mean log citation ratios for O/Ø, OM/OS, OS/ØS, OM/ØM, OM/Ø and OS/Ø for the four institutions together. The outcome was that the Open Access (OA) citation advantage was present and roughly equal whether the OA was Self-Selective (S) or Mandated (M), showing that the OA Advantage is not merely an artifact of author self-selection. This appendix shows the results for each institution separately. As will be evident, the pattern for the individual institutional data is largely the same as for the average across the four institutions.
(0.21 MB DOC)
Appendix S2
Multiple regression by JIF - Beta values. The multiple logistic regression applied to our total sample of journals is applied here separately to the journals in each JIF (Journal Impact Factor) range, including all 14 other predictor variables apart from JIF itself. Tables S2a–S2e summarize the values of Exp(β)-1 corresponding to the predictor variables for each JIF range. The results were discussed in Figures 7–11. In sum, they show that whereas citation counts grow with an article's age across all the citation-range comparisons for our four models (zero/low, low/medium1, low/medium2, low/high), OA's contribution tends to be at the high-citation end, being greater in the higher JIF ranges (JIF4–JIF5) among journals and in the low/high-range comparisons (M4) among articles.
(0.13 MB DOC)
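As an illustration of how the Exp(β)-1 values in Tables S2a–S2e are read, the sketch below uses made-up coefficients (not the paper's actual estimates): for each one-unit increase in a predictor, the odds of an article falling into the higher citation range change by Exp(β)-1, expressed as a percentage.

```python
import math

# Hypothetical logistic-regression coefficients (not the paper's Table S2
# values), shown only to illustrate the Exp(beta)-1 reading: a one-unit
# increase in the predictor multiplies the odds by exp(beta), i.e. changes
# them by exp(beta)-1.
betas = {"OA": 0.35, "article_age": 0.10, "num_authors": 0.05}

for predictor, beta in betas.items():
    pct_change = (math.exp(beta) - 1) * 100
    print(f"{predictor}: Exp(beta)-1 = {pct_change:+.1f}% change in odds")
```

A β of 0.35, for instance, corresponds to roughly a 42% increase in the odds of being in the higher citation range; a negative β would yield a negative percentage, i.e., reduced odds.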
Competing Interests: The authors have declared that no competing interests exist.
Funding: Social Science and Humanities Research Council, Monitoring, Measuring and Maximizing Research Impact, and Canada Research Chair in Cognitive Sciences. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
1. Odlyzko A. The economic costs of toll access. In: Jacobs N, ed. Open Access: Key Strategic, Technical and Economic Aspects. Oxford: Chandos Publishing; 2006.
2. Hitchcock S. The effect of open access and downloads (‘hits’) on citation impact: a bibliography of studies. 2010. Available:
3. Evans JA. Electronic Publication and the Narrowing of Science and Scholarship. Science. 2008;321(5887):395–399. [PubMed]
4. Evans JA, Reimer J. Open Access and Global Participation in Science. Science. 2009;323(5917):1025. [PubMed]
5. Harnad S, Brody T. Comparing the Impact of Open Access (OA) vs. Non-OA Articles in the Same Journals. D-Lib Magazine. 2004;10(6). Available:
6. Eysenbach G. Citation Advantage of Open Access Articles. PLoS Biology. 2006;4(5) [PMC free article] [PubMed]
7. Giles CL, Bollacker KD, Lawrence S. CiteSeer: An Automatic Citation Indexing System. 3rd ACM Conference on Digital Libraries. 1998:89–98. Available:
8. Hajjem C, Harnad S, Gingras Y. Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. IEEE Data Engineering Bulletin. 2005;28(4):39–47. Available:
9. Kurtz M, Brody T. The impact loss to authors and research. In: Jacobs N, ed. Open Access: Key Strategic, Technical and Economic Aspects. Oxford: Chandos Publishing; 2006.
10. Lawrence S. Free online availability substantially increases a paper's impact. Nature. 2001;411:521. Available: [PubMed]
11. Moed HF. Statistical Relationships Between Downloads and Citations at the Level of Individual Documents Within a Single Journal. Journal of the American Society for Information Science and Technology. 2005;56(10):1088–1097.
12. Norris M, Oppenheim C, Rowland F. The citation advantage of open-access articles. Journal of the American Society for Information Science and Technology. 2008;59(12):1963–1972. Available:
13. Brin S, Page L. The Anatomy of a Large-Scale Hypertextual Web Search Engine. Computer Networks and ISDN Systems. 1998;30:107–117. Available:
14. Garfield E. Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas. Science. 1955;122:108–111. Available: [PubMed]
15. Garfield E. Citation Frequency as a Measure of Research Activity and Performance. Essays of an Information Scientist. 1962–73;1:406–408. Current Contents #5, 1973.
16. Garfield E. Can Researchers Bank on Citation Analysis? Current Comments. 1988;44 Available:
17. Page L, Brin S, Motwani R, Winograd T. The PageRank Citation Ranking: Bringing Order to the Web. 1999. Available:
18. Harnad S. Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics. 2009;79(1) Available:
19. Björk BC, Welling P, Laakso M, Majlender P, Hedlund T, et al. Open Access to the Scientific Journal Literature: Situation 2009. PLoS ONE. 2010;5(6):e11273. doi:10.1371/journal.pone.0011273. Available: [PMC free article] [PubMed]
20. Sale A. The acquisition of open access research articles. First Monday. 2006;11(9). Available:
21. Smith C, Yates C, Chudasama S. Open Research Online: A self-archiving success story. 2010. 5th International Conference on Open Repositories, Madrid, Spain. Available:
22. Swan A. The culture of Open Access: researchers' views and responses. In: Jacobs N, ed. Open Access: Key Strategic, Technical and Economic Aspects. Oxford: Chandos; 2006. pp. 52–59. Available:
23. Holmes A, Oppenheim C. Use of citation analysis to predict the outcome of the 2001 Research Assessment Exercise for Unit of Assessment (UoA). 2001. Library and Information Management 61. Available:
24. Oppenheim C. Do citations count? Citation indexing and the research assessment exercise. Serials. 1996;9:155–61. Available:
25. Swan A, Carr L. Institutions, their repositories and the Web. Serials Review. 2008;34(1) Available:
26. Craig ID, Plume AM, McVeigh ME, Pringle J, Amin M. Do Open Access Articles Have Greater Citation Impact? A critical review of the literature. Publishing Research Consortium, Journal of Informetrics. 2007;1(3):239–248. Available:
27. Davis PM, Fromerth MJ. Does the arXiv lead to higher citations and reduced publisher downloads for mathematics articles? Scientometrics. 2007;71(2) Available:
28. Henneken EA, Kurtz MJ, Eichhorn G, Accomazzi A, Grant C, et al. Effect of E-printing on Citation Rates in Astronomy and Physics. Journal of Electronic Publishing. 2006;9(2) Available:
29. Moed HF. The effect of ‘Open Access’ upon citation impact: An analysis of ArXiv's Condensed Matter Section. Journal of the American Society for Information Science and Technology. 2006;58(13):2145–2156. Available:
30. Larivière V, Gingras Y, Archambault E. The decline in the concentration of citations, 1900–2007. Journal of the American Society for Information Science and Technology. 2009;60(4):858–862. Available:
31. Wallace M, Larivière V, Gingras Y. Modeling a Century of Citation Distributions. Journal of Informetrics. 2009;3(4):296–303.
32. Henneken EA, Kurtz MJ, Accomazzi A, Thomson D, Grant C, et al. Use of Astronomical Literature - A Report on Usage Patterns. Journal of Informetrics. 2008;3(1):1–90. Available:
33. Kurtz MJ, Eichhorn G, Accomazzi A, Grant CS, Demleitner M, et al. The Effect of Use and Access on Citations. Information Processing and Management. 2005;41(6):1395–1402. Available:
34. Kurtz MJ, Henneken EA. Open Access does not increase citations for research articles from The Astrophysical Journal. 2007. Available:
35. Seglen P. The skewness of science. Journal of the American Society for Information Science. 1992;43:628–638.
36. Bollen J, Van de Sompel H, Hagberg A, Chute R. A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE. 2009;4(6):e6022. Available: [PMC free article] [PubMed]
37. Davis PM, Lewenstein BV, Simon DH, Booth JG, Connolly MJL. Open access publishing, article downloads, and citations: randomised controlled trial. British Medical Journal. 2008;337:a568. Available: [PMC free article] [PubMed]
38. Brody T, Harnad S, Carr L. Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST). 2006;57(8):1060–1072.
39. Bernius S, Hanauske M. Open Access to Scientific Literature - Increasing Citations as an Incentive for Authors to Make Their Publications Freely Accessible. 2009. 42nd Hawaii International Conference on System Sciences (HICSS '09). pp. 1–9. Available:
40. Brody T, Carr L, Gingras Y, Hajjem C, Harnad S, et al. Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly. 2007;3(3) Available:
41. Carr L, Harnad S. Offloading Cognition onto the Web. IEEE Intelligent Systems. 2009;24(6) Available:
42. Dror I, Harnad S. Offloading Cognition onto Cognitive Technology. In: Dror I, Harnad S, eds. Cognition Distributed: How Cognitive Technology Extends Our Minds. Benjamins; 2009.
43. Adler N, Harzing AW. When Knowledge Wins: Transcending the sense and nonsense of academic rankings. The Academy of Management Learning & Education. 2009;8(1):72–95. Available:
44. Brody T. Citebase Search: Autonomous Citation Database for e-Print Archives. 2003. ECS Technical Report, University of Southampton. Available:
45. Cronin B. The citation process: the role and significance of citations in scientific communication. London: Taylor; 1984.
46. De Bellis N. Bibliometrics and Citation Analysis: From the Science Citation Index to Cybermetrics. Scarecrow Press; 2009.
47. Diamond AM. What is a Citation Worth? Journal of Human Resources. 1986;21:200–215. Available:
48. Harzing AK, Wal RVD. Google Scholar as a new source for citation analysis? Ethics in Science and Environmental Politics. 2008;8(1):62–71. Available:
49. Jacso P. Testing the Calculation of a Realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. Lancaster. Library Trends. 2006;56(4):784–815. Available:
50. Moed HF. Citation Analysis in Research Evaluation. New York: Springer; 2005.
51. Cronin B, Meho LI. Using the h-index to rank influential information scientists. Journal of the American Society for Information Science and Technology. 2006;57(9):1275–1278.
52. De Robbio A. Analisi citazionale e indicatori bibliometrici nel modello Open Access. Bollettino AIB. 2007;47(2):257–288. Available: