Results 1-2 (2)
1.  The "impact factor" revisited 
The number of scientific journals has become so large that individuals, institutions and institutional libraries cannot completely store their physical content. To prioritize the choice of quality information sources, librarians and scientists need reliable decision aids. The "impact factor" (IF) is the most commonly used aid for deciding which journals should receive a scholarly submission or the attention of the research readership. It is also an often misunderstood tool. This narrative review explains how the IF is calculated, how bias is introduced into the calculation, which questions the IF can and cannot answer, and how different professional groups can benefit from using it.
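The abstract does not reproduce the calculation itself; for orientation, the widely used two-year definition from Journal Citation Reports (a standard formula, not quoted from this review) is:

```latex
\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
```

where C_Y(y) is the number of citations received in year Y by items the journal published in year y, and N_y is the number of citable items the journal published in year y.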
doi:10.1186/1742-5581-2-7
PMCID: PMC1315333  PMID: 16324222
2.  Relevance similarity: an alternative means to monitor information retrieval systems 
Background
Relevance assessment is a major problem in the evaluation of information retrieval systems. The work presented here introduces a new parameter, "Relevance Similarity", for measuring the variation of relevance assessment. In a situation where individual assessments can be compared with a gold standard, this parameter is used to study the effect of such variation on the performance of a medical information retrieval system. In such a setting, Relevance Similarity is the ratio of assessors who rank a given document the same as the gold standard to the total number of assessors in the group.
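A minimal sketch of that ratio in Python, assuming binary relevance judgements (the function and variable names are hypothetical, not taken from the paper):

```python
from typing import List

def relevance_similarity(assessments: List[bool], gold: bool) -> float:
    """Fraction of assessors whose judgement of one document
    matches the gold-standard judgement."""
    agreeing = sum(1 for a in assessments if a == gold)
    return agreeing / len(assessments)

# Hypothetical example: six assessors judge one retrieved topic
# that the gold standard marks as relevant (True).
judgements = [True, True, False, True, True, False]
print(relevance_similarity(judgements, gold=True))  # 4/6 ~ 0.67
```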
Methods
The study was carried out on a collection of Critically Appraised Topics (CATs). Twelve volunteers were divided into two groups according to their domain knowledge. They assessed the relevance of topics retrieved by querying a meta-search engine with ten keywords related to medical science. Their assessments were compared to the gold standard assessment, and Relevance Similarities were calculated for each topic as the proportion of assessments concordant with the gold standard.
Results
The similarity comparison among groups showed that a higher degree of agreement exists among evaluators with more subject knowledge. The performance of the retrieval system did not differ significantly as a result of the variations in relevance assessment in this particular query set.
Conclusion
In assessment situations where evaluators can be compared to a gold standard, Relevance Similarity provides an alternative evaluation technique to the commonly used kappa scores, which may give paradoxically low scores in highly biased situations such as document repositories containing large quantities of relevant data.
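The "paradoxically low" kappa behaviour referred to here is the well-known kappa paradox: when a collection is heavily skewed toward relevant documents, chance agreement is estimated to be very high, so even assessors with high raw agreement can receive a near-zero or negative Cohen's kappa. A small worked sketch with hypothetical counts:

```python
def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for a 2x2 agreement table between two assessors:
    a = both judge relevant, d = both judge not relevant,
    b and c = the two disagreement cells."""
    n = a + b + c + d
    p_o = (a + d) / n                       # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)   # chance: both say relevant
    p_no = ((c + d) / n) * ((b + d) / n)    # chance: both say not relevant
    p_e = p_yes + p_no                      # expected agreement by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical heavily biased repository: 90 of 100 documents are
# judged relevant by both assessors, none judged irrelevant by both.
print(cohens_kappa(a=90, b=5, c=5, d=0))  # about -0.05 despite 90% raw agreement
```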
doi:10.1186/1742-5581-2-6
PMCID: PMC1181804  PMID: 16029513
