How to (not) abuse the ABS journal ranking

Marcel Garz
Mar 20, 2024

The ABS list is a widely used journal ranking for assessing the quality of researchers’ work. Unfortunately, some institutions have developed a culture of obsession with this list, where researchers think first about maximizing their ranking and not primarily about advancing science. On social media, this ranking fixation is easy to spot in the way some researchers self-promote their work.

But the problem goes deeper than cringe posts on social media. As an interdisciplinary researcher at a business school, I’ve been struggling with the narrow scope of the ABS list. Even though my school emphasizes the need for cross-discipline collaborations, the formal and informal rules tied to the ABS list hinder interdisciplinary research. An obvious barrier is that (high-quality) publications in certain fields do not “count”.

The ABS list and its limitations
The ABS list groups journals into “fields”, such as accounting and finance, and then rates journals on a scale from 1 (low quality) to 4 (high quality). Surprisingly, this scale cannot be used to make comparisons across fields. In its Methodological Guidelines, the ABS cautions “… against simply counting the number of 3 and 4-rated journals in a particular field, and comparing the result with other fields”. For instance, the requirements for publication at the ABS-3 rank in entrepreneurship (e.g., Small Business Economics) could be very different from those at the same rank in economics (e.g., Journal of Public Economics). Using a non-comparable scale is quite a limitation.

Another mystery is why the ABS list features certain fields and omits others. For example, why is psychology included but not communications research or political science? These disciplines seem equally (un-)important for a business school. Similarly, the ABS list omits some of the leading scientific journals that should be clearly relevant at a business school, such as Nature and Science.

Incentivizing and evaluating researchers
Here are a few examples of how Jönköping International Business School (my employer) uses the ranking:

  • Financial publication incentives: Faculty members receive a publication bonus that is directly linked to the ABS rank of a journal. Hence, researchers in different fields are evaluated on the same scale, even though the ABS advises against comparisons across fields. What’s worse, there is no publication bonus for journals not included in the list. There you go, Nature and Science.
  • Recruitment: The school’s criteria for hiring faculty require ABS publications (or equivalent), the exact number depending on the position. For instance, associate professors require a lower number and rank of ABS publications than full professors. As a member of my school’s recruitment committee, I often read external assessments of job applications where reviewers compare candidates by blindly counting the number of ABS publications, while ignoring achievements outside the ABS list, despite the “or equivalent” criterion.
  • Performance review: For annual development talks and performance reviews, employees are asked to state the ABS rank of their publications. Similarly, when the school compiles annual activity reports, publications are listed according to their ABS rank.

There is probably not a single day when I don’t hear somebody point out the ABS rank of a journal. That’s not surprising, considering how the ABS list is (mis-)used when making decisions that have massive financial and career implications for people. For example, presenters in research seminars have developed a habit of pointing out the ABS rank of their targeted journal. During doctoral feedback sessions, the ABS potential of PhD projects is customarily discussed. In methods courses, undergraduate students are instructed not to cite any journal articles outside the ABS list or below the ABS-2 rank. When researchers announce a new publication on social media, they often include the ABS rank in the announcement. Some announcements don’t even describe the research but merely state the ABS rank of the publication.

Control, devotion, and peer pressure
A cult can be defined as a group that excessively controls its members by requiring unwavering devotion to formal and informal practices, and where vulnerable people join through peer pressure. The ABS fixation meets these requirements. Managers, supervisors, and recruiters want to know the ABS rank of people’s publications on a regular basis and will judge them accordingly (control). Because of that, most research is planned so that it has the best chances for publication in journals ranked high in the list (devotion). It is difficult for individuals to ignore the conventions of a community. This is especially true for people at the beginning of their academic career, most of whom will adapt to the formal and informal rules just because their colleagues comply (peer pressure).

A culture where everything and everybody is constantly benchmarked against the ABS list doesn’t facilitate a healthy research environment. The ABS cult induces people to optimize their research agendas for publication in well-ranked journals, rather than advancing science by creating new and useful knowledge. Sure, well-ranked journals often publish high-quality research, but only within the boundaries of certain conventions. The ABS cult discourages researchers from pursuing novel and critical research because this research may not be compatible with mainstream preferences at high-ranked journals.

The way forward: A healthier, more diverse, and more inclusive research environment
I understand that metrics are necessary to evaluate and incentivize researchers. However, celebrating a cult around a single, flawed ranking does a disservice to the research community. I believe three things can be done. First, any ranking should be used as a supplementary tool only, on top of reading people’s research. Second, different rankings have different flaws. If an organization uses multiple rankings, these flaws will hopefully cancel each other out. Third, senior researchers and academic leaders need to be better role models. They should question whether the rules set up to incentivize and evaluate researchers facilitate novel/critical/diverse research, or whether these rules create an environment that prioritizes creepy self-promotion and excessive control. Make sure that “or equivalent” rules are in place and enforced. Tell us about your research, not where it’s published. Don’t hide behind lazy arguments (“It’s the best ranking we have” or “People can work someplace else if they don’t like the ABS list”). Perhaps we can seek inspiration from places that have dropped out of rankings, such as the University of Zurich.

Being less dogmatic about what counts as valuable research will foster a more inclusive environment, where people with different interests and talents can thrive. Being open towards diverse ideas, skills, and approaches will make it easier to form interdisciplinary research teams that have a chance of finding solutions to complex issues, such as the climate crisis, rising inequality, and migration. If people are not benchmarked against a single ranking, they will be more willing to contribute to interdisciplinary work that may not end up being published in their “home” discipline.

In addition, even in places that favor research specialization by prioritizing some core research area, tolerance and inclusion are of great value. For instance, it is well known that academics disproportionately often suffer from mental health issues, especially at early career stages. Celebrating a cult won’t make people more resilient, but we may be able to reduce stress and pressure by stopping the obsession with the ABS list.

Marcel Garz

Prof at JIBS | Economist & Media Researcher | Data Enthusiast | marcelgarz.de