We enjoyed reading the article by Chew et al. (JRSM 2007; 100: 142-150).1 The article clearly highlights the vulnerability and limitations of the impact factor in evaluating the quality of journals; well-informed and careful use of impact data is therefore essential. Thomson Scientific agrees that there are limitations attached to impact factors, and emphasizes that there is no substitute for informed peer review.2
Many scholars have suggested that Thomson should count citations only to original research articles, eliminating the distortion introduced by news stories, editorials, reviews and other kinds of material, which can artificially inflate citation rates. In 2006, Bollen et al. proposed using the PageRank algorithm employed by Google to distinguish the quality of citations and thereby improve impact factor calculations.3
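For readers unfamiliar with PageRank, the following is a minimal sketch of the underlying idea applied to a citation network: an article's rank depends not just on how many citations it receives, but on the rank of the articles citing it. The toy citation graph and the damping factor of 0.85 are illustrative assumptions, not values taken from Bollen et al.

```python
# Minimal PageRank sketch via power iteration on a toy citation graph.
# The graph, damping factor and iteration count are illustrative
# assumptions, not parameters from Bollen et al.

def pagerank(links, d=0.85, iterations=100):
    """links: dict mapping each article to the list of articles it cites."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}  # start with a uniform rank
    for _ in range(iterations):
        new_rank = {}
        for node in nodes:
            # Each citing article passes on its rank, split equally
            # among everything it cites.
            incoming = sum(
                rank[src] / len(targets)
                for src, targets in links.items()
                if node in targets
            )
            new_rank[node] = (1 - d) / n + d * incoming
        rank = new_rank
    return rank

# Toy citation network: A and B both cite C; C cites A.
citations = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(citations)
```

In this example C is cited twice and so outranks A, while A outranks B because it is cited by the highly ranked C; a raw citation count would rate A and B equally, which is precisely the distinction such an approach is meant to capture.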
There is a definite need for other methods of analysing bibliographic material and assessing its quality. Instead of counting citations, as is done in calculating the impact factor, one could ask the peer reviewers to rate an accepted article out of one hundred at the time of review. Since articles are usually evaluated on several quantitative and qualitative parameters (for example, originality, clarity, content, methodology and discussion), this score would give a fair idea of the 'quality' of the article. Scores from two or more blind reviewers would increase the reliability of the score. The score thus calculated could be published alongside the accepted article. Since editorials, reviews, letters, etc., are not original articles, no such score would be calculated for them.
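The proposal above can be illustrated with a short sketch: each blind reviewer marks the article on the listed parameters, the marks sum to a score out of one hundred, and the published quality score is the mean across reviewers. The five equal-weight criteria and the sample marks are assumptions made for illustration only.

```python
# Hypothetical illustration of the proposed reviewer-score scheme.
# The criteria, their equal weighting (20 marks each) and the sample
# ratings are assumptions for this sketch, not a prescribed rubric.
from statistics import mean

CRITERIA = ["originality", "clarity", "content", "methodology", "discussion"]

def reviewer_score(ratings):
    """ratings: dict of criterion -> mark out of 20; total is out of 100."""
    return sum(ratings[c] for c in CRITERIA)

def article_quality_score(reviewer_ratings):
    """Average the out-of-100 scores from two or more blind reviewers."""
    return mean(reviewer_score(r) for r in reviewer_ratings)

reviews = [
    {"originality": 15, "clarity": 18, "content": 16,
     "methodology": 14, "discussion": 17},
    {"originality": 17, "clarity": 16, "content": 15,
     "methodology": 16, "discussion": 16},
]
score = article_quality_score(reviews)  # 80.0 out of 100
```

Averaging over independent blind reviewers, as the letter suggests, dampens the effect of any single reviewer's idiosyncrasy on the published score.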
Competing interests None declared.