
Journal Publication Outlets

Guidance on selecting the best journal to publish your research.


An overview of research impact by UNC librarians Jen Mayer and Nicole Webber is available in our workshop recordings at digscholarship.unco.edu/workshops/1/

Understanding Journal Impact

There are a variety of metrics that attempt to describe the influence and reputation of scholarly journals. Many of these, such as the Journal Impact Factor, are based on bibliometric indicators like citation counts. Other metrics attempt to measure how selective a journal is or how widely it is distributed. Some of the most commonly employed journal metrics are described below.

Whatever the metric, it is important to acknowledge its particular limitations, as well as the broader limitations and risks of using quantitative methods to describe the value of research.


Journal Impact Factor

What It Is

Download: "What is Impact Factor" handout

Journal Impact Factor (also JIF, IF, or simply "impact factor") is an indicator produced annually by Clarivate (formerly ISI). It is the ratio of the number of citations a journal received in the JCR year to items it published in the previous two years, divided by the number of citable items published in those two years. Citable items include articles, reviews, and proceedings. A five-year impact factor, calculated over the previous five years, is also available. New numbers are released each summer.

JIF = (# of citations received in the JCR year for items published in previous two years) /
(# of citable items published in previous two years)
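As a worked example, the ratio above can be computed directly. The counts below are hypothetical, for illustration only; real values come from Journal Citation Reports:

```python
# Hypothetical 2023 JIF for a journal (illustrative numbers only).
citations_in_jcr_year = 1200   # 2023 citations to items published in 2021-2022
citable_items = 400            # articles, reviews, proceedings from 2021-2022

jif = citations_in_jcr_year / citable_items
print(f"JIF = {jif:.1f}")  # JIF = 3.0
```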

Fun Fact: The Journal Impact Factor was originally developed to aid academic libraries' decisions about which journals to purchase/retain in their collections.

Where to Find It

Journal Impact Factor can be found through the University Libraries' access to Journal Citation Reports at https://unco.idm.oclc.org/login?url=https://jcr.clarivate.com/. A journal's website or unaffiliated groups that evaluate or review journals may also report a journal's impact factor, but these values should be verified for currency and validity.

CiteScore

What It Is

Similar to Journal Impact Factor, CiteScore measures the citation impact of a journal. It is produced by Elsevier and "is based on the number of citations to documents (articles, reviews, conference papers, book chapters, and data papers) by a journal over four years, divided by the number of the same document types indexed in Scopus and published in those same four years." It is updated annually.

CiteScore = (# of document citations received in past 4 years) / (# of documents published in same 4 years)

Where to Find It

CiteScore is freely available on Scopus Preview at https://www.scopus.com/sources. A journal's website or unaffiliated groups that evaluate or review journals may also report CiteScore, but these values should be verified for currency and validity.

SNIP

What It Is

Source-normalized Impact per Paper (SNIP) is a journal impact metric that accounts for disciplinary differences in citation practices, making comparisons between fields possible. It compares a journal's citations per publication with the citation potential of its field, where a field is defined as the set of publications citing that journal. Originally conceptualized by Henk Moed, it is now produced by Elsevier via Scopus.

Where to Find It

SNIP can be found on Scopus Preview at https://www.scopus.com/sources and on the CWTS Journal Indicators website.

Eigenfactor

What It Is

The Eigenfactor Project is responsible for three primary metrics: Eigenfactor Score, Normalized Eigenfactor Score, and Article Influence Score. Journal Citation Reports provides the following descriptions of each of these metrics:

"The Eigenfactor Score is a reflection of the density of the network of citations around the journal using 5 years of cited content as cited by the Current Year. It considers both the number of citations and the source of those citations, so that highly cited sources will influence the network more than less cited sources. The Eigenfactor calculation does not include journal self-citations."
"The Normalized Eigenfactor Score is the Eigenfactor score normalized, by rescaling the total number of journals in the JCR each year, so that the average journal has a score of 1. Journals can then be compared and influence measured by their score relative to 1."
"The Article Influence Score normalizes the Eigenfactor Score according to the cumulative size of the cited journal across the prior five years. The mean Article Influence Score for each article is 1.00. A score greater than 1.00 indicates that each article in the journal has above-average influence."
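The network idea behind the Eigenfactor Score can be illustrated with a toy power-iteration sketch. This is a simplification for intuition only: the real calculation also weights by article counts and includes a teleportation term, and it runs on the full Journal Citation Reports network. The three-journal citation matrix below is invented:

```python
import numpy as np

# Toy citation matrix: C[i, j] = citations from journal j to journal i.
# The diagonal is zero because the Eigenfactor calculation excludes
# journal self-citations.
C = np.array([[0, 3, 5],
              [8, 0, 2],
              [4, 6, 0]], dtype=float)

# Column-normalize: each journal splits its outgoing citations into shares.
P = C / C.sum(axis=0)

# Power iteration: converges to a weighting in which citations from
# influential (highly cited) journals count for more than citations
# from little-cited journals.
v = np.ones(3) / 3
for _ in range(100):
    v = P @ v
    v /= v.sum()

print(v.round(3))  # relative influence of the three journals
```

The result is the dominant eigenvector of the normalized citation network, which is where the "Eigen" in Eigenfactor comes from.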

Where to Find It

These metrics can be found through the University Libraries' access to Journal Citation Reports at https://unco.idm.oclc.org/login?url=https://jcr.clarivate.com/ by customizing the indicators reported in journal search results. They were previously also reported at eigenfactor.org, but that tool is no longer updated.

H-Index

What It Is

While it was developed to describe the work of an individual author, the h-index can also be calculated for groups of authors, including organizations and journals. An author (or other entity) has an h-index of h when h of their papers have at least h citations each and their remaining papers have no more than h citations each. Variants of the h-index, such as the g-index, attempt to incorporate additional factors, such as highly cited papers and length of publication history.

Example of h-index calculation for an author with 8 papers (h-index = 5):

Publication Year | Paper Rank (by citation count) | Citations | Citations ≥ rank?
2019             | 1                              | 70        | Yes
2018             | 2                              | 12        | Yes
2019             | 3                              | 6         | Yes
2020             | 4                              | 5         | Yes
2021             | 5                              | 5         | Yes
2020             | 6                              | 4         | No
2021             | 7                              | 3         | No
2022             | 8                              | 3         | No

The h-index is the highest rank at which the citation count is still at least the rank value: here, h = 5.

[Graph depicting the calculation of the h-index: Fig. 1 from Hirsch 2005, p. 16570]
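The rule illustrated in the table above can be expressed as a short function; the citation counts are the ones from the example:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

papers = [70, 12, 6, 5, 5, 4, 3, 3]  # citation counts from the table above
print(h_index(papers))  # prints 5
```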

Where to Find It

The h-index can be found on the SCImago Journal & Country Rank website at https://www.scimagojr.com/ and on Google Scholar's Metrics page.

See Also

  • Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572. https://doi.org/10.1073/pnas.0507655102
  • Roldan-Valadez, E., Salazar-Ruiz, S. Y., Ibarra-Contreras, R., & Rios, C. (2019). Current concepts on bibliometrics: A brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics. Irish Journal of Medical Science, 188(3), 939-951. https://doi.org/10.1007/s11845-018-1936-5
  • University of Pittsburgh Library System. (2022). Research impact and metrics: Author metrics. https://pitt.libguides.com/bibliometricIndicators/AuthorMetrics

Acceptance Rate

What It Is

Acceptance rates (or their inverse, rejection rates) report the proportion of manuscripts accepted relative to the total number submitted to the journal. There are few standards for measuring and reporting acceptance rates, making them difficult to interpret and compare. Journals that are especially prominent or broad in scope tend to have lower acceptance rates because of their high submission volume.

Where to Find It

Acceptance rates can be difficult to track down and verify. You may even find disagreement between the numbers reported by different sources. Use caution when relying on this information.

Rankings, Classifications, and Scoring

What It Is

Many journal lists exist that attempt to rank or categorize journals by factoring in multiple metrics and/or more qualitative or subjective methods to form a more holistic evaluation. These lists exist at many levels and may be produced by trade associations, publishers, academic societies, or even individual university departments. When using a tool like this, it is important to understand its methodology and limitations.

Where to Find It

Examples include:

See Also

  • Anderson, V., Elliott, C., & Callahan, J. L. (2021). Power, powerlessness, and journal ranking lists: The marginalization of fields of practice. Academy of Management Learning & Education, 20(1), 89-107. https://doi.org/10.5465/amle.2019.0037
  • Bales, S., Hubbard, D. E., vanDuinkerken, W., Sare, L., & Olivarez, J. (2019). The use of departmental journal lists in promotion and tenure decisions at American research universities. The Journal of Academic Librarianship, 45(2), 153-161. https://doi.org/10.1016/j.acalib.2019.02.005
  • George, J. F. (2019). Journal lists are not going away: A response to Fitzgerald et al. Communications of the Association for Information Systems, 45, 134-138. https://doi.org/10.17705/1CAIS.04508
  • Teixeira da Silva, J. A., & Tsigaris, P. (2018). What value do journal whitelists and blacklists have in academia? The Journal of Academic Librarianship, 44(6), 781-792. https://doi.org/10.1016/j.acalib.2018.09.017

Other Metrics

This page lists only the more commonly discussed metrics. You can look up or browse additional metrics in:

  • Metrics Toolkit - An index that provides information about research metrics, including calculations, sources, strengths, and limitations.
  • Snowball Metrics - A select group of metrics chosen by an international group of universities aiming to standardize metrics for greater reliability, validity, and comparability.
  • Elsevier Research Metrics Guidebook - Developed as documentation for the use of their tools like Scopus and SciVal, the guidebook provides information about a variety of metrics with suggestions regarding their usage.


xkcd comic "How Standards Proliferate" (see: A/C chargers, character encodings, instant messaging, etc.). Frame 1: "Situation: There are 14 competing standards." Frame 2: Stick figure 1: "14?! Ridiculous! We need to develop one universal standard that covers everyone's use cases." Stick figure 2: "Yeah!" Frame 3: "Situation: There are 15 competing standards."

Comic by xkcd from https://xkcd.com/927. This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License.

Image Credits:

Journal Rank Podium adapted from Image by Michał Jamro from Pixabay and Image by Megan Rexazin from Pixabay
Eigenfactor diagram from http://www.eigenfactor.org/about.php
Pyramid image adapted from Vector Vectors by Vecteezy