Short reflection on the 2026 FT50 journal ranking update

On 29 April 2026 the Financial Times published an updated list of the 50 journals publishing the highest-quality research produced by scholars at leading business schools (aka the FT50 list). Three journals were removed and three new ones added, with the justification that the former had become ‘less influential’ since the last update in 2016. https://www.ft.com/ft50-journals

What I find interesting is that I’ve seen a few posts arguing that, as a result of the update, university X or country Y had ‘lost’ Z number of FT50 publications.

To me, this framing reveals just how far the fetishism of rankings has gone in academia – even among those critical of them.

In 2019, I published an article in one of the journals that have now been removed from the list. The article (obviously) has not changed since this week’s update. Yet, by the above logic of ‘lost FT50 publications’, it would no longer be considered an article of FT50 quality.

That, however, would be true only if the update implied that the FT50 list had been wrong all along and has now been corrected. Only in that case would the claim that universities or countries have lost FT50 publications make some sense.

But I don’t think that’s what the people posting about ‘lost FT50 publications’ have in mind. Rather, they – quite rightly of course – assume that the FT50 list will be used diachronically rather than synchronically. Thanks to rampant ranking fetishism, in a couple of years’ time no one will remember when the list was last updated. Research published in the removed journals before their removal will be considered less good than articles published in the newly added journals before their addition.

Such a diachronic use of the list goes against the FT’s own explanation of the changes. Indeed, the FT does not, of course, suggest there were any errors in previous ‘journal quality’ assessments. Rather, it says that sometime between 2016 and 2026 the ‘influence’ of the removed journals declined.

Yet that raises another question: how was the decline in influence assessed? The FT writes that it ‘conducted a poll […] among the leadership of 200 business schools that took part’ in various FT MBA rankings ‘and held a wider series of consultations and analyses.’

What exactly the business school leaders were asked is unclear. Tellingly, the FT uses the ‘influence’ and the ‘quality’ of the underlying research interchangeably. That is in itself problematic (and indeed unscientific). But the assumption is that all 200 business school leaders (intuitively?) agree on the criterion for good-quality research worthy of inclusion or otherwise. That, of course, is a big assumption for all sorts of reasons. Given the lax methodology, alternative interpretations are possible: e.g. that business school leaders’ preferences – or indeed their ability to judge influence (or academic quality) – have changed.

Regardless, what depresses me most in all of this is the irony that a key tool for assessing the quality of academic research is itself based on an opaque methodology that is purely subjective, lacks any scientific rigor, yields non-reproducible results, and probably would not survive any serious peer review process!*

So, here’s my tip: rather than fetishising journal ‘quality metrics’, read the paper and judge for yourself whether it is any good! 😉

(If you want to join me down in the rabbit hole of journal rankings and assessments of academic quality this is for you: https://doi.org/10.1007/s11192-021-03988-x)

*(Note that the 2016 methodology was somewhat less opaque – but no less lacking in academic rigor: https://www.ft.com/content/3405a512-5cbb-11e1-8f1f-00144feabdc0)