To truly judge the quality of research, read it
In recently published research, we explore a phenomenon we term ‘the journal quality perception gap’. This study matters for how we promote and judge scientific progress in society, but it is also of wider interest given the modern metrification of so much of how we live our lives.
Our study is set in the domain of academic research. All over the world, researchers are rewarded in their careers based on the quality of the research they publish in academic journals. This is a good idea: we want the best researchers to be recognized so they can drive the growth of knowledge. One popular way of judging research quality is through journal ranking lists, which assign each journal a rating from excellent to very good to good to okay to awful. A researcher whose work is accepted mostly by the very good and excellent journals is deemed a top researcher.
A problem with this system is that it is very hard to judge the quality of a journal, regardless of what journal ranking systems claim. Yet these rankings hugely influence what research is produced. If you say the Journal of Puppy Studies is an excellent journal and the Journal of Pandemics is a poor one, then a large proportion of researchers will focus on puppies, because that is where the rewards are. A more fundamental issue is that if professors do not believe the system is fair, it is very hard to persuade them to stop playing a rankings game and focus on publishing quality research. If you perceive the system to be flawed, why take it seriously? This has knock-on effects for the knowledge we produce. Our study looked at this latter point.
We surveyed a large number of professors in business schools in the United Kingdom on their opinions of a popular UK journal ranking list (generally called the ‘ABS list’). We found that professors’ rankings of journals disagreed with the official rankings in 40% of the rankings made. So, right from the start, we saw strong community disagreement over whether this way of recognizing good research is fair.
Our study then focused on why this might be the case and why professors perceive research quality differently to national ranking systems. We found that the major drivers of disagreement were personal characteristics, experiences with particular journals, and, importantly, the extent to which people liked the idea of the ranking system or were already winners within it (successful researchers as measured by the system). Those inside the system were far more likely to accept its process and agree that the rankings were fair. But the large group to whom this did not apply took the view that the rankings contain a wide range of mistakes.
Essentially this is a timeless issue, with a modern twist. Winners from a system protect it, and the others stand outside shooting holes in it. The question is what those who are not winners from the system do. Do they become disillusioned and stop producing research? Does this affect their enthusiasm for teaching about their research topics? Does society lose out as a result? There is a lot to consider here, because when you metrify something – anything – you create these two groups, who perceive outcomes in different ways, and this affects participation. We view this as vital to the management of how and why research is produced in a society.
Methodology
Our data come from a survey of a large number of professors in business schools in the United Kingdom on their opinions of a popular UK journal ranking list (generally called the ‘ABS list’). We wanted to see to what extent they agreed that the rankings were fair and correct, so we asked them directly: for every journal you are familiar with, what do you think the ranking should be? Overall we received 20,000 journal rankings and then set out to work out why researchers might agree or disagree with individual rankings.
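The headline disagreement figure is simply the share of survey responses whose proposed rating differs from the official ABS rating. A minimal sketch of that calculation, using invented journal names and ratings purely for illustration (none of this is the actual survey data):

```python
# Hypothetical sketch of the perception-gap calculation described above.
# Journals, official ratings, and responses are illustrative placeholders.

official = {"Journal A": 4, "Journal B": 3, "Journal C": 2}

# Each survey response: (journal, rating the respondent thinks it should have)
responses = [
    ("Journal A", 4),  # agrees with the official rating
    ("Journal A", 3),  # disagrees
    ("Journal B", 3),  # agrees
    ("Journal C", 3),  # disagrees
    ("Journal C", 2),  # agrees
]

# Count responses whose proposed rating differs from the official one
disagreements = sum(1 for journal, rating in responses if rating != official[journal])
gap_rate = disagreements / len(responses)
print(f"Disagreement rate: {gap_rate:.0%}")  # prints "Disagreement rate: 40%"
```

In this toy sample, 2 of 5 responses disagree, giving a 40% rate; the study applied the same idea across the full set of 20,000 rankings.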
Applications and beneficiaries
We can extend this perception gap idea beyond the narrow world of research. Are the same forces at play, for example, in how we judge the quality of companies by relative stock market value, artists by number of music downloads, or politicians by votes received? There is a possibility that one group becomes unquestioningly accepting that the rankings mean everything, while another group insists the rankings mean nothing and we should pay them no attention. We end up with very polarized groups in society along many different cross-sections, each acting within its own isolated island. Our study did not go so far as to show that this happens outside academic research, but it left us pondering whether all ranking systems create this fundamental problem: how we emotionally perceive quality has a huge impact on the extent to which we consider an entire system acceptable. Hopefully future studies can explore this idea further.
Reference to the research
Bryce, Cormac, Dowling, Michael, Lucey, Brian. 2020. The journal quality perception gap. Research Policy, Volume 49, Issue 5, June 2020, 103957.
Link to media
Bryce, Cormac, Dowling, Michael, Lucey, Brian. 2018. To truly judge the quality of research, read it. Times Higher Education magazine, November 2018.