Metric perversion

The use of journal rankings and citations data throughout the REF would hamstring innovation, argues Hugh Willmott

October 6, 2011

University managers - from vice-chancellors to heads of department - please take note. According to draft guidelines, many research excellence framework subpanels will not use journal rankings or citations data to evaluate the quality of research in the 2014 REF. This fact was helpfully highlighted in the pages of Times Higher Education by Andrew Oswald, professor of economics at the University of Warwick ("Data be damned: REF's blueprint for systemic intellectual corruption", 22 September). Those senior staff who have been "urging" (read: bullying) academics to select certain topics, adopt particular methodologies and acquire a style favoured by highly ranked journals - or risk scholarly excommunication by being declared "research inactive" - please desist.

But Oswald argued that journal rankings and citations data should be made available to all REF panels. He claimed that the metrics act as a "bulwark against [the] subjectivity and bias" of peer review - as if there were no systematic bias in such data. Then he proposed that the subjectivity and bias of human judgement should be applied to ensure that "citations data or journal rankings are not used unthinkingly". Where is the logic in this?

In many fields, the use of such metrics may be defensible. But in others, the decision to exclude them reflects an awareness of their homogenising, divisive effects on research. Oswald's own discipline, economics, exemplifies this loss of diversity. A few US-based journals now dominate the rankings, from which heterodox work is effectively barred. The casualties are innovation, multidisciplinarity and critique. Flat-earthers are alive and well in the pages of these journals. Even the global financial crisis has not diminished their faith.

Lovers of metrics fear that spurning their use is "dangerously subjective" and "anti-evidential". They are fond of attributing only the basest of motives to anyone who dares to point out the pseudo-objectivity or perverse effects of such data. Like the advocates of chastity belts, metrics lovers are confident that only licentious self-interest can explain resistance to their use, and that only "systemic intellectual corruption" can follow from this refusal. But what about the systemic corruption associated with the intense pressures heaped upon publishers, editors and researchers as a result of reliance upon metrics? Publishers anxious for libraries to subscribe to their journals pressurise editors to raise citation counts by fair means or foul. In turn, with the "metrification" of heterogeneous fields, the editors of "top-tier" journals now wield enormous influence over authors, who are obliged to shoehorn their work into the desired mould or see it cast into the outer darkness of publication in "lesser" journals.

So making citations and journal rankings available to all panels would send out all the wrong messages. It would effectively discourage the path-breaking, innovative, multidisciplinary work that fails to fit the mould of top-tier journals.

The enduring virtue of peer review, despite its many imperfections, is that it allows for the content of academic work to be judged on its merits, not simply on the basis of where it appears or how frequently it is cited. Peer review allows for the possibility that good, heterogeneous research is published across a range of journals, and for the idea that articles published in ostensibly top-tier journals may in fact be devoid of originality and significance, even if they are unmatched in their rigour.

REF panel members are accountable to their peers, and the working methods and judgements of each panel must be defensible to their specialist academic community if the outcomes of the panels' evaluations are to have any credibility. In fields of research where journal rankings and citation indices are widely regarded as unreliable and/or perverse indicators of quality, it would be inappropriate and counterproductive for panel members to endorse them, let alone be guided by them.

Reliance on metrics produces perverse effects, encouraging researchers to focus on a narrow range of issues and methodologies. This may be more justifiable in disciplines where there is little diversity, but it is "dangerously objective" in heterogeneous fields.
