Our aim is twofold. On the one hand, we discuss the limitations of the impact factor as a criterion for assessing mathematical journals and suggest substituting a set of indicators of different types, including the SCImago Journal Rank. On the other, we argue that scientometric indicators such as the impact factor cannot be used alone in evaluating researchers’ work: one needs both a package of metrics as an objective measure and peer review by human beings as a subjective judgement.

In the 1960s, the notion of impact factor was introduced to assist libraries in deciding which journals to purchase. Since the late 1990s, it has been employed as a metric for measuring the quality of scholarly journals.

The Web of Science (WOS), a bibliographical database created by Clarivate Analytics, computes the journal impact factor (JIF) to indicate the relative importance of each journal. To be assigned a JIF, a journal first needs to satisfy certain quality criteria in order to be included in the Journal Citation Reports (JCR). The JCR is a selective list of more than 11,000 journals. The (2-year) impact factor of a journal in a specific year measures the average number of citations received in that year by the papers published in that journal during the previous 2 years. More precisely, the 2-year impact factor of a journal in a year $y$ is computed by the formula

$$\mathrm{JIF}_y = \frac{c_y}{n_{y-1} + n_{y-2}},$$

where $c_y$ denotes the number of citations in the year $y$ of papers published in the journal in the years $y-1$ and $y-2$, and $n_z$ stands for the number of papers published in the journal in the year $z$. A citation of a paper given by the author(s) of the paper is called a self-citation.
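As a concrete illustration, here is a minimal sketch of this computation; the journal and all the counts below are hypothetical, not data from any database.

```python
def two_year_jif(citations: int, papers_prev: int, papers_prev2: int) -> float:
    """2-year JIF: citations received in year y by papers published in the
    years y-1 and y-2, divided by the number of papers published in those
    two years."""
    return citations / (papers_prev + papers_prev2)

# Hypothetical journal: 150 citations in 2020 to its 2018-2019 papers,
# of which 40 appeared in 2019 and 35 in 2018.
print(two_year_jif(150, 40, 35))  # 150 / 75 = 2.0
```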

The SCImago Journal Rank (SJR) of a journal is a 3-year impact factor, based on the Scopus database, that reflects the influence of the journal. It depends not only on the number of citations of its published papers but also on the prestige of the journals in which the citations appear; see [4]. A drawback, however, has been reported regarding Scopus: the database of Scopus journals with an assigned SJR includes about 30,000 journals, a very large number of journals of varying quality.

Furthermore, the WOS provides the Eigenfactor (EF), an indicator that ranks journals in a manner similar to the one used by Google to rank websites. Based on 5-year citation data, it adjusts for citation differences across disciplines. Thus the SJR and EF seem to be well suited for evaluating the quality of a journal; see [7].
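To make the idea of prestige weighting concrete, the following sketch runs a simplified PageRank-style iteration on a toy citation matrix. It only illustrates the principle behind indicators such as the SJR and EF; the providers’ actual algorithms involve further normalizations and self-citation rules, and the matrix below is invented.

```python
import numpy as np

# Toy citation matrix: C[i, j] = citations from journal j to journal i.
# The three journals and all counts are invented for the example.
C = np.array([[0, 3, 1],
              [2, 0, 4],
              [1, 1, 0]], dtype=float)

T = C / C.sum(axis=0)       # each citing journal distributes one unit of prestige
d = 0.85                    # damping factor, as in PageRank
n = C.shape[0]
prestige = np.ones(n) / n   # start from a uniform distribution
for _ in range(100):        # iterate until (approximate) convergence
    prestige = (1 - d) / n + d * T @ prestige

print(prestige)             # citations from prestigious journals weigh more
```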

Each subject category of JCR journals is divided into four quartiles: Q1, Q2, Q3, and Q4, where Q1 denotes the top 25 percent of all journals in terms of their JIF. There are analogous quartiles for the journals in Scopus according to their SJRs.
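A minimal sketch of how such a quartile assignment can be made within a subject category; the journal names and JIF values are made up for the illustration.

```python
def assign_quartiles(jifs: dict) -> dict:
    """Rank journals by descending JIF and split the ranking into four
    equal bands: Q1 = top 25 percent, ..., Q4 = bottom 25 percent."""
    ranked = sorted(jifs, key=jifs.get, reverse=True)
    n = len(ranked)
    return {j: f"Q{4 * i // n + 1}" for i, j in enumerate(ranked)}

# Hypothetical category with eight journals:
jifs = {"A": 3.1, "B": 2.4, "C": 2.0, "D": 1.8,
        "E": 1.5, "F": 1.1, "G": 0.9, "H": 0.4}
print(assign_quartiles(jifs))
# {'A': 'Q1', 'B': 'Q1', 'C': 'Q2', 'D': 'Q2', 'E': 'Q3', ..., 'H': 'Q4'}
```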

Table 1: Scientometric indices as found in databases in the year 2020

Journal                 JIF     SJR    MCQ    EF
Acta Math.              2.458   5.77   3.95   0.007
Iran. J. Fuzzy Syst.    2.276   0.51   0.11   0.001
J. Funct. Anal.         1.496   2.42   1.61   0.035
J. Funct. Spaces        1.896   0.46   0.43   0.001
Amer. J. Math.          1.711   3.28   1.67   0.009
Mathematics             1.747   0.3    NA     NA

Replacement for the impact factor

The JIF has received serious criticism for various reasons, such as lack of statistical significance [9, 10], poor representativeness and robustness [5], insensitivity to field differences [6], insensitivity to the weight of the citing articles [2], and manipulability by editorial strategies [8]. Here is a list of some of its most significant limitations:

  • its numerator counts citations of items that are not included in the denominator of the above formula;

  • its citation window is only 2 years, which is not suitable for evaluating mathematical research, where citations typically accrue slowly;

  • it merely counts citations, without considering their quality. The JIF may therefore push some mathematicians towards topics on which many people are working and can potentially cite their papers; it is easy to find evidence that such topics are mostly outside the mainstream of mathematics;

  • it includes self-citations;

  • it is relatively easy to manipulate the JIF and some other scientometric indicators. There are “mutual citation groups” in which researchers in a certain circle heavily cite each other’s work in order to enhance the JIF of a certain journal and artificially inflate the impact of their own papers.

The SJR aims to fix the above problems by providing a more effective computation formula: a longer period of 3 years for counting citations, different weights attributed to citations, and a limit on self-citations. Some studies show that using the SJR can improve the situation to some extent; it is at any rate a first step towards avoiding some of the limitations of the JIF; see [1, 3].
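As an illustration of the last of these fixes, the following sketch caps the self-citations that may count towards a journal’s score; the one-third cap and the citation counts are assumptions chosen for the example, not SJR’s published parameters.

```python
def capped_citations(external: int, self_cites: int, cap: float = 1 / 3) -> float:
    """Count self-citations only up to a fixed fraction of all citations,
    so that heavy self-citation cannot inflate the score. The one-third cap
    is an assumption for this example, not SJR's published parameter."""
    total = external + self_cites
    return external + min(self_cites, cap * total)

# A journal with 60 external citations and 90 self-citations keeps only
# 50 self-citations (one third of 150), not all 90:
print(capped_citations(60, 90))  # 110.0
```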

To illustrate the drawbacks and inadequacy of the JIF in mathematics, let us take a closer look at the JIF numbers. There are mathematical journals in the 2019 list of JCR-Q1 whose impact factors are “unexpectedly large”. For instance, the Iranian Journal of Fuzzy Systems is ranked 15 in the Mathematics category of the JCR list, almost level with the very prestigious journal Acta Mathematica (launched in 1882), which is ranked 13; moreover, the American Journal of Mathematics and the Transactions of the American Mathematical Society are ranked only 32 and 60, respectively.

However, the SJR for the Iranian Journal of Fuzzy Systems is 0.51, while for Acta Mathematica it is 5.77. Similarly, the Mathematical Citation Quotients (MCQs), 5-year impact factors computed by MathSciNet (an online publication of the American Mathematical Society), for the Iranian Journal of Fuzzy Systems and Acta Mathematica are 0.11 and 3.95, respectively.

This pattern can be seen for other journals as well. For example, the Journal of Function Spaces is ranked 24, while the leading Journal of Functional Analysis is ranked 47! Again, both the SJR and the MCQ of the Journal of Functional Analysis are much greater than those of the Journal of Function Spaces.

There is a similar situation regarding the American Journal of Mathematics, established in 1878, and a recently launched JCR journal named Mathematics.

Some important reasons for such unexpected JIFs are as follows:

  • a high rate of publication on a topic. For instance, “fixed point theory” is a popular topic on which a lot of mathematicians work;

  • a considerable number of researchers working on a topic. For example, the number of mathematicians working on “fuzzy mathematics” is much greater than the number working on “$K$-theory”, and hence the general rate of citation in such topics is high;

  • the open accessibility of a journal;

  • unethical ways of increasing the JIF, used by a few journals. While the term “predatory journal” is debatable, the mere appearance of this term shows that the problem does exist.

The backlog between acceptance and publication in some mathematics journals may exceed two years. Journals with such large backlogs, which are usually good journals, may have unexpectedly low JIFs. Nowadays, some journals have moved to the continuous article publishing (CAP) model, in which every accepted article is published immediately within the current issue.

We think that Clarivate Analytics should improve its formula for computing the JIF. Until then, we suggest that scientific committees consider a package of indicators, such as the JIF, SJR, CiteScore, and Eigenfactor, together.
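To illustrate what considering such a package could look like in practice, the sketch below scales each indicator from Table 1 by its maximum across the listed journals, so that a committee sees the whole profile rather than a single number. This is only one possible illustration, not a recommended scoring formula.

```python
# Indicator profiles taken from Table 1 (2020 values): JIF, SJR, MCQ, EF.
journals = {
    "Acta Math.":           (2.458, 5.77, 3.95, 0.007),
    "Iran. J. Fuzzy Syst.": (2.276, 0.51, 0.11, 0.001),
    "J. Funct. Anal.":      (1.496, 2.42, 1.61, 0.035),
}
names = ("JIF", "SJR", "MCQ", "EF")

# Scale each indicator by its maximum so the four columns are comparable.
maxima = [max(vals[i] for vals in journals.values()) for i in range(4)]
for journal, vals in journals.items():
    profile = ", ".join(f"{n}={v / m:.2f}" for n, v, m in zip(names, vals, maxima))
    print(f"{journal}: {profile}")
# Acta Math. dominates on SJR and MCQ although its JIF barely exceeds that
# of Iran. J. Fuzzy Syst.; no single number tells the whole story.
```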

The scientometric indicators developed for journals, essentially based on citations, should not be applied as a tool to assess the work of individual researchers. In fact, as citation occurs after research, the direction of research should not be affected by any demand for citation. The scientometric data reflect to some extent the quality of a journal, but not so much the actual quality of a single paper, since not all papers in a journal are cited equally.

As we explain in the next section, when a scientific committee uses only scientometric data to evaluate a mathematician’s achievement, without any human assessment, they are using a flawed approach that may result in an unfair judgement.

The role of human assessment

A large number of universities around the world use scientometric tools to evaluate the research of academic staff, postdoctoral researchers, and Ph.D. candidates for promotion, employment, or funding. It seems that such universities have no other reliable sources, and possibly lack any peer-review system in which the content of papers is evaluated by professional mathematicians. In addition, dealing with scientometric data is much easier than reading papers and assessing their content.

There are mathematicians who believe that scientometric data such as the SJR are reliable instruments for judgement, since they make assessments more objective and free them from the crude or biased judgements of human beings. They argue that quantitative indicators help funding organizations, publishers, and policy-makers to gain strategic intelligence that leads toward fairer outcomes and ensures that their budgets are spent in the most effective way.

However, there are others who are against using scientometrics to measure scientific publications, due to their lack of transparency. Scientometrics may cause distortions that have detrimental effects on the development of scientific fields. For example, some supporters of the JIF subscribe to the idea that every paper published in a high-ranked journal must contain excellent mathematics, which is not entirely true in general; one can easily find counterexamples in the literature. Some mathematicians propose that citations are relevant only when dealing with large numbers; applied to small numbers, they can amount to a misuse of statistics. These mathematicians continue to trust evaluation by human beings, even though it may be subjective in the sense that it is influenced by the human dualities of love and hate, good and bad, and true and false. They believe that metrics put the worth and livelihood of our young mathematicians at risk and have undesirable impacts on the scientific life of all mathematicians.

Although citations do not show all the good qualities of a paper, they (in particular, non-self-citations by reputed researchers in prestigious journals) may help experts in evaluating and documenting research work. Papers with no citations over a ‘long period of time’ cannot be regarded as high-level papers; conversely, not all highly cited papers are necessarily high-level papers. Moreover, abuse of scientometric data such as the JIF, and games with numbers, can happen, and may mislead people instead of serving as an indicator.

Conclusion

Scientometric tools can be used, provided that one keeps their disadvantages and distortions in mind, and that they are considered together with the judgement of experts based on the depth and extent of papers. Such experts could be asked to look at a candidate’s self-selected best papers, research programs, and statements of major achievements. No assessment is complete without peer review. Furthermore, the policies of universities, funding organizations, and so on need to be modified to support human assessment.

We hope that the various ideas discussed in this note may help not only mathematicians but the whole of the scientific community to improve their point of view and their assessment guidelines.

Mohammad Sal Moslehian is a Professor of Mathematics at Ferdowsi University of Mashhad and a member of the Academy of Sciences of Iran. He was a member of the Executive Committee of the Iranian Mathematical Society from 2004 to 2012 and a Senior Associate of ICTP in Italy. He is the editor-in-chief of the journals Banach J. Math. Anal., Ann. Funct. Anal., and Adv. Oper. Theory published by Birkhäuser/Springer. moslehian@um.ac.ir

    References

    1. M. R. Elkins, C. G. Maher, R. D. Herbert, A. M. Moseley, and C. Sherrington, Correlation between the Journal Impact Factor and three other journal citation indices. Scientometrics 85, 81–93 (2010)
    2. A. Ferrer-Sapena, E. A. Sánchez-Pérez, L. M. González, F. Peset, and R. Aleixandre-Benavent, Mathematical properties of weighted impact factors based on measures of prestige of the citing journals. Scientometrics 105, 2089–2108 (2015)
    3. E. García-Pachón and R. Arencibia-Jorge, A comparison of the impact factor and the SCImago journal rank index in respiratory system journals. Archivos de Bronconeumología 50, 308–309 (2014)
    4. B. González-Pereira, V. P. Guerrero-Bote, and F. Moya-Anegón, A new approach to the metric of journals’ scientific prestige: The SJR indicator. J. Informetrics 4, 379–391 (2010)
    5. T. Lando and L. Bertoli-Barsotti, Measuring the citation impact of journals with generalized Lorenz curves. J. Informetrics 11, 689–703 (2017)
    6. H. F. Moed, Measuring contextual citation impact of scientific journals. J. Informetrics 4, 265–277 (2010)
    7. H. F. Moed, From Journal Impact Factor to SJR, Eigenfactor, SNIP, CiteScore and Usage Factor. In Applied Evaluative Informetrics: Qualitative and Quantitative Analysis of Scientific and Scholarly Communication, Springer, Cham, 229–244 (2017)
    8. K. Moustafa, The disaster of the impact factor. Science and Engineering Ethics 21, 139–142 (2015)
    9. D. I. Stern, Uncertainty measures for economics journal impact factors. J. Econ. Lit. 51, 173–189 (2013)
    10. J. K. Vanclay, Impact factor: Outdated artefact or stepping-stone to journal certification? Scientometrics 92, 211–238 (2012)
