Bibliometrics: the counter argument

Metrics are being used to assess both individuals and institutions, but it is widely accepted that they are crude, if not brutal, instruments that can be seriously flawed. The pressure to account for public money has led to quantitative measures being used to assess the quality of research. It can be argued that the metrics used are poorly designed, open to misinterpretation, and more suitable to the sciences than to other disciplines. This has led to DORA (the San Francisco Declaration on Research Assessment), which attacks the use of the Journal Impact Factor, and to the Leiden Manifesto, which offers ten principles for the fair and transparent assessment of research.

What is wrong with traditional metrics?

The Journal Impact Factor (JIF) has been an important tool in assessing scientific and technical literature since the 1950s. It has been used to compare the importance of different journals when considering publishing platforms. Although its use is commonplace in the sciences and it is integrated into Thomson Reuters’ Web of Knowledge database, its limitations have been the topic of debate. Because it measures citations at the level of the journal, it cannot reliably be used to draw inferences about the impact of a particular article or author. Even when used for the purpose of ranking journals, it has faced criticism on several counts: it is easy to manipulate, either by gaming what gets counted or through coercive citation (forcing academics to add spurious citations in order to inflate the journal’s impact factor); it is a proprietary and non-transparent system whose rankings cannot be reproduced by independent researchers; and it can lead to excessive emphasis on publishing in high-IF journals.
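The standard two-year impact factor calculation, and the reason a journal-level average says little about any individual article, can be sketched as follows. The figures below are purely illustrative and not drawn from any real journal:

```python
from statistics import mean, median

def journal_impact_factor(citations_in_year, citable_items):
    """Standard two-year JIF: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_in_year / citable_items

# Illustrative journal: 200 citable items from the previous two years
# received 600 citations this year, giving a JIF of 600 / 200 = 3.0.
print(journal_impact_factor(600, 200))  # 3.0

# Citation distributions are highly skewed: a few heavily cited papers
# inflate the journal average, so the typical article sits well below
# the JIF (hypothetical per-article citation counts).
citations_per_article = [120, 45, 30] + [1] * 97
print(mean(citations_per_article))    # 2.92 (the JIF-like average)
print(median(citations_per_article))  # 1.0 (the typical article)
```

The gap between the mean and the median in this sketch is why citing a journal's impact factor as evidence for a single paper's quality is misleading.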
