The Ice Age of impact factor

The formula to measure the quality of a research journal calls for a rethink


The Higher Education Commission has adopted a nearly half-a-century-old measure proposed by bibliographer Eugene Garfield in his article "Citation Analysis as a Tool in Journal Evaluation", published in Science in 1972. Later, in 1992, the information firm Thomson (later Thomson Reuters) took over his Institute for Scientific Information (ISI) and, with it, the right to issue the list of journal impact factors. The impact factor remains one of the most widely used metrics for measuring the quality and prestige of a research journal.

Intuitively, the basic hypothesis behind any prestige metric is that the number of citations to a scientific work reflects its innovation and scholarly contribution: the higher the count, the more impactful the article or journal.

In layman's terms, the impact factor of a journal for a given year is the ratio of the number of citations received that year by the articles the journal published in the preceding two years to the total number of articles it published in those two years. To substantiate, assume Journal X published 100 articles in 2014-2015 and those articles received 500 citations during 2016; its impact factor for 2016 would then be 500/100 = 5.
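As a rough sketch of this arithmetic (the function name and inputs are illustrative only, not any official tool), the two-year impact factor can be computed as follows:

```python
def impact_factor(citations_in_target_year: int, articles_in_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in the target year to articles
    published in the previous two years, divided by the number of those articles."""
    if articles_in_prev_two_years == 0:
        raise ValueError("no citable articles published in the two-year window")
    return citations_in_target_year / articles_in_prev_two_years

# Journal X: 100 articles published in 2014-2015, 500 citations to them during 2016
print(impact_factor(500, 100))  # 5.0
```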

Once this measure became popular and researchers and academics began using it to identify prestigious journals, it also attracted the attention of policy- and decision-making bodies, which always need numeric measures of quality for assessing the impact of research.

Editors of some impact-factor journals ask authors to cite at least one or two articles, relevant to the paper under review, that were published in the same journal. The motivation is to ensure that such citations contribute directly to the impact factor calculation; this unjust demand is rarely resisted because researchers are under pressure to "publish or perish".

Unfortunately, such journals have done irreparable damage to the higher education sector. Hundreds of academics who have published in these fake journals have used them as a vehicle for reward.

Since this threat looms over every publisher, a relatively clever workaround is to publish a group of five to ten journals in a domain and ask authors to cite papers from the sister journals; such coerced citations never count towards the self-citation measure, and this keeps everyone happy. Vice-chancellors keep rewarding these paper-publishing humanoids because they earn a good rank for the university, and the HEC keeps boasting about an exponential increase in the number of impact factor publications.

Measuring the "quality" of a journal through a single metric only and then using it erroneously to conclude the "quality" of a researcher without looking at the temporal impact of publications since their first appearance needs an urgent review.

No one has the time to investigate the quality of these publications, and we create a delusional augmented reality in which these zero-sum publications are projected as the ultimate Himalayas for researchers.

At its core, the impact factor uses only a raw citation count: a very primitive measure once we look around at the arena of advanced intelligence and machine analytics.

The HEC, despite the above-mentioned and well-publicised shortcomings of the impact factor, extended its use to a domain for which it was never designed: computing the impact factor of a researcher.

The impact factor of a researcher is computed by simply summing the impact factors of the journals in which his papers have appeared; the higher this sum, the greater the scholastic stature he now enjoys by virtue of this policy initiative. It should be clear that a journal's impact factor (even when earned on true merit) is a cumulative measure of the impact of all the researchers published in that journal. Using others' work to measure the impact of an individual researcher is not only absurd but defies common sense; a researcher's own papers should be the foundation of his impact pyramid.

An example: suppose a researcher has published 200 papers in a journal over the last 10 years (a rate of 20 papers a year, which in itself suggests something might be wrong) and these papers have received only 10 citations in that period. By the citation measure, these papers, with a high degree of probability, received no attention from his peers.

Luckily for him, if those papers appeared in a journal with an impact factor of 2, he stands near the top of the Impact Factor Everest with an impressive personal score of 200 x 2 = 400. The HEC and the concerned vice-chancellor will present this professor as a role model of "impactful research" for the next generation. By contrast, a professor who has published only 10 research articles in 10 years, with total citations to his work now standing at more than 10,000, ends up with an impact factor of merely 10 x 2 = 20 (again assuming journals with an average impact factor of 2), because the poor fellow chose to publish a smaller number of high-impact articles (as the citation count makes evident).
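A minimal sketch of this contrast (the two professors and their numbers are the hypothetical ones from the example above, and researcher_impact_factor is simply the summation the policy implies):

```python
# Researcher "impact factor" as the policy computes it: the sum of the impact
# factors of the journals in which each of the researcher's papers appeared.
def researcher_impact_factor(journal_if_per_paper):
    return sum(journal_if_per_paper)

# Professor A: 200 papers, all in impact-factor-2 journals, 10 citations in total.
prof_a_score = researcher_impact_factor([2.0] * 200)   # 400.0
prof_a_citations = 10

# Professor B: 10 papers, all in impact-factor-2 journals, over 10,000 citations.
prof_b_score = researcher_impact_factor([2.0] * 10)    # 20.0
prof_b_citations = 10_000

print(prof_a_score, prof_a_citations)  # 400.0 10     -> hailed as a role model
print(prof_b_score, prof_b_citations)  # 20.0 10000   -> penalised despite real impact
```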

Another important criticism is that the impact factor of a journal fluctuates from year to year; as a result, the computed impact of a researcher also changes, through no contribution or misdeed of his own.

Last but not least, researchers working in different knowledge domains have entirely different norms for sharing research. In medicine and the natural sciences, mainstream researchers publish primarily in journals; in computer science and engineering, journal papers are not the only recognised outcome of a research project (conference papers, patents and technology development are also internationally recognised); and in the social sciences, publishing a book with a prestigious publisher is a lifetime achievement. As a consequence, the impact factors of medicine, chemistry, physics and biology journals are very high compared with those of engineering, computer science and social science journals. This puts even bright researchers from the latter fields at a great disadvantage, and their voice gets lost in the powerful jazz and rock of impact factor.

To conclude, measuring the "quality" of a journal through a single metric only and then using it erroneously to conclude the "quality" of a researcher without looking at the temporal impact of publications since their first appearance needs an urgent review.

For a better understanding, let us take an easy-to-understand problem: the government wants to frame a merit policy for engineering universities. One member of the merit committee says, "Look, a quality student is one who gets 98 per cent marks in mathematics." Another member says, "The one with more than 90 per cent marks in physics is the quality student." A third says, "The one with more than 85 per cent marks in chemistry should be given more weight." It would be foolish to adopt a single-metric merit criterion (say, mathematics alone) to define a quality student. If this happens, a minority of hardworking and passionate students will still work hard to get good scores in all subjects, but shortsighted students will focus only on admission, and hence only on mathematics.

Unfortunately, the HEC's forefathers ignored this sensible argument: a high impact factor is a measure of the quality of a research journal, but it is not the only measure. Good governance is all about continuous quality improvement of policies through pervasive and persistent analysis. If we are unable to break this Ice Age of the 'Impact Factor', it will not only block the natural evolution of quality research but will eventually stagnate the whole knowledge ecosystem.