There are a variety of metrics that attempt to describe the influence and reputation of scholarly journals. Many of these, such as the Journal Impact Factor, are based on bibliometric indicators like citation counts. Other metrics attempt to measure how selective a journal is or how widely it is distributed. Some of the most commonly employed journal metrics are described below.
No matter the metric, it is important to acknowledge its limitations, as well as the broader limitations and dangers of using quantitative methods to describe the value of research.
Journal Impact Factor (aka JIF, IF, or "impact factor") is an indicator produced annually by Clarivate (formerly ISI). It reports the ratio of the number of citations the journal received in the previous two years to the number of citable items published in those years. Citable items include articles, reviews, and proceedings. There is also a five-year impact factor calculated using the previous five years. New numbers come out in the summer of each year.
JIF = (# of citations received in the JCR year to items published in the previous two years) / (# of citable items published in those two years)
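As a worked example, the calculation is a simple ratio. The short Python sketch below uses invented figures; the function name is ours for illustration, and real values come from Journal Citation Reports, not from recomputing this formula yourself:

def journal_impact_factor(citations_in_jcr_year, citable_items_prior_two_years):
    # Citations received in the JCR year to items published in the
    # previous two years, divided by the citable items (articles,
    # reviews, and proceedings) published in those two years.
    return citations_in_jcr_year / citable_items_prior_two_years

# Hypothetical journal: 600 citations in the JCR year to the 150
# citable items it published in the previous two years.
print(journal_impact_factor(600, 150))  # 4.0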
Fun Fact: The Journal Impact Factor was originally developed to aid academic libraries' decisions about which journals to purchase/retain in their collections.
Journal Impact Factor can be found through the University Libraries access to Journal Citation Reports at https://unco.idm.oclc.org/login?url=https://jcr.clarivate.com/. A journal's website or unaffiliated groups that evaluate or review journals may also report a journal's impact factor, but these should be verified for currency and validity.
Similar to Journal Impact Factor, CiteScore measures the citation impact of a journal. It is produced by Elsevier and "is based on the number of citations to documents (articles, reviews, conference papers, book chapters, and data papers) by a journal over four years, divided by the number of the same document types indexed in Scopus and published in those same four years." It is updated annually.
CiteScore = (# of document citations received in past 4 years) / (# of documents published in same 4 years)
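As a worked example (with invented figures): a journal whose counted documents received 1,000 citations over the four-year window, and which published 250 such documents in that window, would have a CiteScore of 1,000 / 250 = 4.0.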
CiteScore can be found freely available on Scopus Preview at https://www.scopus.com/sources. A journal's website or unaffiliated groups that evaluate or review journals may also report CiteScore, but these should be verified for currency and validity.
Source-normalized Impact per Paper (SNIP) is a journal impact metric that accounts for disciplinary differences in order to make comparisons between fields. It compares a journal’s citations per publication with the citation potential of its field. A field is defined as the set of publications citing that journal. Originally conceptualized by Henk Moed, it is now produced by Elsevier via Scopus.
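To illustrate the idea (with invented figures that simplify the actual Scopus calculation): a journal averaging 4 citations per paper in a field with a citation potential of 2 would have a SNIP of 4 / 2 = 2.0, while the same citation rate in a field with a citation potential of 4 would yield a SNIP of only 1.0, reflecting that citations are easier to accumulate in the second field.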
The Eigenfactor Project is responsible for three primary metrics: Eigenfactor Score, Normalized Eigenfactor Score, and Article Influence Score. Journal Citation Reports provides descriptions of each of these metrics.
These metrics can be found through the University Libraries access to Journal Citation Reports at https://unco.idm.oclc.org/login?url=https://jcr.clarivate.com/ by customizing the indicators reported in journal search results. They were previously also reported at eigenfactor.org, but this tool is no longer updated.
While it was developed to describe the work of an individual author, the h-index can also be calculated for groups of authors, including organizations and journals. The h-index compares the number of papers to the number of citations received: the h-index of an author (or other entity) is the largest number h such that h of their papers have at least h citations each (see the example table and the sketch that follows it). Variants of the h-index, such as the g-index, attempt to incorporate additional factors, such as highly cited papers and length of publication history.
Paper number (ranked by citation count) | Number of citations | Citation count ≥ paper number?
1 | 70 | Yes (70 ≥ 1)
2 | 12 | Yes (12 ≥ 2)
3 | 6 | Yes (6 ≥ 3)
4 | 5 | Yes (5 ≥ 4)
5 | 5 | Yes (5 = 5)
6 | 4 | No (4 < 6)
7 | 3 | No (3 < 7)
8 | 3 | No (3 < 8)
In this example, the h-index is 5: five papers have at least 5 citations each, but the sixth-ranked paper has only 4 citations.
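The calculation can be sketched in a few lines of Python. This is a minimal illustration using the citation counts from the table above, not code from any particular bibliometrics package:

def h_index(citation_counts):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

print(h_index([70, 12, 6, 5, 5, 4, 3, 3]))  # 5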
Acceptance rates (or the inverse, rejection rates) report the number of accepted manuscripts compared to the total number of manuscripts submitted to the journal. There are few standards when it comes to measuring and reporting acceptance rates, making them difficult to interpret and compare. Journals that are especially prominent or broad in scope tend to have lower acceptance rates due to submission volume.
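As a worked example (with invented figures): a journal that accepts 120 of the 1,500 manuscripts submitted to it in a year has an acceptance rate of 120 / 1,500 = 8%, or equivalently a rejection rate of 92%.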
Acceptance rates can be difficult to track down and verify; you may even find disagreement between the numbers reported by different sources. Use caution when relying on this information.
Many journal lists exist that attempt to rank or categorize journals by factoring in multiple metrics and/or more qualitative or subjective methods to form a more holistic evaluation. These lists exist at many levels and may be produced by trade associations, publishers, academic societies, or even individual university departments. When using a tool like this, it is important to understand its methodology and limitations.
This page lists only the more commonly discussed metrics; additional metrics can be looked up or browsed in resources dedicated to research metrics.
[Image: comic by xkcd, from https://xkcd.com/927. This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License.]