Do Australia’s ERA discipline assessments really measure research excellence?

Frank Larkins calls for more transparency in how the Excellence in Research for Australia exercise uses global benchmarks to measure improvements in science and humanities research

June 17, 2019

Australian governments and universities have used the assessments from the Australian Research Council's (ARC's) Excellence in Research for Australia (ERA) exercises to gauge the international standing of discipline research performances for eight years. 

Outcomes from the four ERA rounds, 2010 to 2018, have also empowered universities to strategically realign their research profiles with reference to other universities and world standard benchmarks. As such, a critical question is: what discipline world standard benchmarks have been used for the 22 fields of research and how have they changed with successive rounds? 

The analyses that I have undertaken and published in two articles at the LH Martin Institute, University of Melbourne ("Research at Australian Universities: Is Excellence Really Excellent?" and "Anomalies in Research Excellence ERA Performances of Australian Universities") highlight several anomalies in the assessment methodologies used and in their application over successive rounds. 

The findings raise questions about the real significance of the research ratings in comparison with universities in other developed countries.

For example, in 2018 a remarkable 259 of 323 science-related (SR) discipline units submitted by universities were rated above world standard. In contrast, in 2012 some 126 of 296 units were judged to be above world standard, meaning 133 more units were rated excellent in 2018. The improvement between the 2012 and 2018 ERA rounds has been remarkable: from 43 per cent of discipline units rated excellent to 80 per cent.

A similar number of units (353) in the humanities and social sciences (HASS) disciplines was evaluated in 2018. However, only 124 units (35 per cent) were given a performance rating of above world standard. Furthermore, there was only a minor improvement of 30 units since 2012, when 94 of 345 units (27 per cent) were assessed to be excellent.

The large disparity between the 2018 research excellence rating of the science-related disciplines and the humanities and social sciences disciplines, and the contrasting performance changes over the rounds raise questions as to the suitability of the methodologies used. 

However, there is a lack of transparency around the evaluation process. The quantitative and qualitative details of the benchmarks used by the ARC and how they have changed since 2012 are not available for independent scrutiny. Furthermore, the ARC does not provide any relative commentary on the assessment of individual disciplines or universities, leaving the research community to speculate on the significance of the findings.

We do know that evaluations for the science-related disciplines rely heavily on metrics. In particular, a university's performance relative to the discipline world-average citation rate per paper can change significantly over time. In the past two decades there has been major growth in scientific publications, mostly from countries in the developing world, whose papers generally attract fewer citations. On average, citations per paper from these countries, while increasing over time, have remained lower than for papers from developed countries. 

Consequently, for some disciplines, the world-average paper citations have declined. This means it is likely that the assessment of some discipline outputs from Australian universities has been artificially raised without an absolute improvement in research performance because of the lower average benchmark globally. 

The humanities and social sciences discipline evaluations, meanwhile, place a stronger emphasis on peer review. While more qualitative, this assessment measure appears to be harsher and more stable, with less variation between ERA rounds, judging by the smaller number of HASS units shown to have improved from 2012 to 2018. 

The anomalies in research excellence performances raise a number of questions that require more information and discussion before they can be satisfactorily answered.

For example, are the HASS discipline performances in non-Group of Eight universities really inferior to their SR discipline performances? Are there fundamental flaws in the world standard benchmarks used in the different methodological assessment approaches? 

Is the case for more funding for science-based disciplines being undermined by ERA findings that 80 per cent of all university research discipline performances are above world standard?

It is true that Australian research performances have benefited from the scrutiny of the ERA exercises. However, the time has come for the ARC to release for independent evaluation the quantitative and qualitative details of the benchmarks used and how they have changed with time, not least because there can be very serious perception and funding consequences for university departments as a result of the ERA discrepancies in SR and HASS discipline assessments.

Moreover, the credibility of the ERA and its future as a viable research assessment tool depends upon fuller disclosure. The national interest of preserving breadth and strength in course and subject offerings is compromised by a flawed ERA process.

Frank Larkins is professor emeritus and former deputy vice-chancellor at the University of Melbourne. A compendium of Australian university performance reviews that he has published is available at https://franklarkins.wordpress.com.
