Metrics-based mini REF ‘won’t be credible’

Green Paper recommends use of metrics, despite July report’s reservations

10 November 2015
[Image: bog snorkelling contestants, Llanwrtyd Wells, Wales. Source: Alamy. Caption: More measuring: the research landscape may be assessed in between full REFs]

A proposed additional assessment of research quality between research excellence frameworks, based on metrics such as citations rather than peer review, would not be seen as credible, according to one of the authors of a major government-commissioned report on the subject.

Despite The Metric Tide report concluding in July that it was “not currently feasible to assess the quality of research outputs using quantitative indicators alone”, the idea of a “mini REF” that uses metrics has nonetheless made it into the government’s Green Paper on higher education.

The paper suggests “making greater use of metrics and other measures to ‘refresh’ the REF results and capture emerging pockets of research excellence in between full peer review”.

Stephen Curry, a professor of structural biology at Imperial College London and one of the authors of the metrics report, said that he did not think an intermediate assessment based on metrics “would have the credibility and support of the community”.

The “real problem” with metrics was that “on their own they can’t be reliable because we can’t have enough data across all the disciplines”, he said, citing arts and humanities as an area where “the information just isn’t there” in terms of citation coverage.

The Green Paper, released on 6 November, is only a consultation document, and a spokesman for the Department for Business, Innovation and Skills said that it would await responses on the future shape of the REF. The Green Paper says that it will “consider” the findings of The Metric Tide.

Debate will now move towards how exactly metrics will be used in a new REF system. The 2014 REF did use metrics, but only in a small way. Fewer than a third of the subject panels requested citation statistics, and these data were generally used only where there was disagreement among the reviewers over quality.

In the natural sciences, citation metrics are more abundant, Professor Curry said, but running an intermediate assessment involving only some subjects would lead to the excluded disciplines being seen as “second best”. And the more metrics were used, he said, the more universities would attempt to game them.

He added: “I would question whether you would need a mini REF. Does the research landscape really change in two to three years?”

James Wilsdon, professor of science and democracy at the University of Sussex and chair of the group that wrote The Metric Tide, shared Professor Curry’s caution.

“Having looked at the question of metrics in exhaustive detail…I for one, and my committee, are not persuaded that there’s an easy solution here in moving overall from a peer-review process to a metrics process,” he said.

But publishers, which sell a variety of metrics tools, have pushed for their inclusion in the assessment process.

Earlier this year, Nick Fowler, managing director of Elsevier’s research management division, argued in a presentation to the Higher Education Policy Institute that greater use of metrics could drive down costs, and that multiple metrics made gaming the system “very hard”.

david.matthews@tesglobal.com

Postscript

Print headline: Metrics-based mini REF ‘won’t be seen as credible’
