The end of ranking journals

Professor Tan Sri Dato' Dzulkifli Abdul Razak
Learning Curve: Perspective
New Sunday Times - 24-07-2011

AUSTRALIAN Minister for Innovation, Industry, Science and Research Kim Carr recently announced that journals will no longer be assigned rankings, in a radical shake-up of the Excellence in Research for Australia (ERA) initiative.

ERA categorises journals into four tiers, A*, A, B and C, devised and administered by the Australian Research Council (ARC).

The ERA initiative assesses research quality within Australia's higher education institutions, using a range of indicators and other proxies to support the evaluation of research excellence and to assure others that the research conducted is of high quality.

In one exercise, the ARC assessed 330,000 research outputs, including books and journal articles, together with citation data, from 55,000 researchers at 41 institutions. For 2012, the ARC has reportedly refined the journal quality indicator, and the A*, A, B and C rank profiles will no longer be used.

In announcing the change, Carr categorically pointed out: “There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes. (They are also) based on a poor understanding of the actual role of the rankings.”

He added: “One common example was the setting of targets for publication in A and A* journals by institutional research managers.

“In the light of these two factors — that ERA could work perfectly well without the rankings and that their existence was focusing ill-informed, undesirable behaviour in the management of research — I have made the decision to remove the rankings based on the ARC’s expert advice.”

The move earned praise from a leading expert on rankings in higher education.

Professor Ellen Hazelkorn, director of research and enterprise in the higher education policy unit at Dublin Institute of Technology, endorsed the ARC’s decision to drop the A* to C rankings assigned to the 22,000 journals used in the bibliometrics for ERA.

She reportedly expressed doubts about the role of journals in academic culture.

In an interview with The Australian, Hazelkorn noted that journals, their editors and reviewers can be extremely conservative. “They act as gatekeepers and can often unintentionally discourage intellectual risk-taking,” she said. 

She also argued that the soaring number of journals may be a response to the increasing complexity of knowledge: an acknowledgment that there are many legitimate ways of thinking, or perhaps a reaction to the perception that journals were closed to contrary viewpoints or methodologies.

This, in turn, points to “a hierarchy of knowledge and disciplinary values (that) endorses a traditional world order, privileging some researchers and their universities over others”.

Hazelkorn also raised the problem of over-reliance on peer review to measure research impact. “Impact (of an article) is perceived simply as that which is read within the academic community rather than its impact on society,” she said. “Many articles are published, but how many actually have beneficial value to society?”

Assessments, she argued, should go beyond simply reviewing what one academic has written and whether it has been read by another.

“Today, policy-makers and the wider society want to know how research can be used to solve major societal and global challenges.”

Hazelkorn, who wrote one of the most authoritative books on rankings (Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence, 2011), highlighted that “rankings have injected a moral panic into the policy debate, encouraging simple and simplistic correlations between rankings and global competitiveness”.

She observed that academic work has been transformed from “a relatively autonomous profession operating within a self-regulated code of collegiality” into an “organisationally managed workforce comparable to other salaried employees”.

For countries such as Malaysia, where the privilege of autonomy lies within the purview of bureaucrats who understand little of the implications of rankings, the tendency to “organisationally manage” is so severe that the “self-regulated code of collegiality” (the so-called keserakanan) is reduced to command and control, or failing that, ultimatum. It makes a nonsense of what academia is supposed to be!

For a long while now, we have been sucked into an alien “quality” measure, the ISI Impact Factor: a highly controversial, commercially driven and somewhat simplistic metric based on the “average” number of times a journal’s papers are cited by others.
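For context, the conventional two-year impact factor is computed as follows (this is the standard definition of the metric, not something spelled out in this column):

\[
\mathrm{IF}_y = \frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]

On this definition, a 2011 impact factor of 2.0 would mean that articles the journal published in 2009 and 2010 were cited, on average, twice each during 2011. It is a property of the journal, not of any single paper, and it says nothing about a paper’s value to society.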

What impact it really brings is given dismally little attention, if any. Those who have dared to challenge this are known to have been “blacklisted”.

Should we not be as bold as our more experienced Australian counterparts and display similar courage to effect a real “knowledge transformation” in our education system?

* The writer is the Vice-Chancellor of Universiti Sains Malaysia. He can be contacted at vc@usm.my