Metrics would ruin the REF

The REF and TEF should be brought together, but adopting low-quality shortcuts would undermine the whole purpose, says Thom Brooks 

May 5, 2021

At the end of every research excellence framework (REF) cycle, the question is asked whether a more metrics-driven system could simplify and improve what is undeniably a complex, time-intensive exercise. This time around is no exception; since submissions closed at the end of March, there has been a flurry of articles addressing this question, and it is certainly a discussion worth having. However, it seems to me that the answer remains the same: no.

The REF aims, among other things, to identify the relative world-class strengths of UK academic departments, scoring them according to output produced, impact generated and their research environments. It seems likely that high output scores reflect the strongest research environments, in terms of protected research time, availability of research funding and frequency of research leave. So one simplification might be to do away with outputs and score the REF entirely on environment (which is much less labour-intensive to assess than outputs) and impact.

However, the incentives that doing so would introduce might damage quality. Departments might divert scarce resources to supportive measures without maintaining sufficient regard for what those measures actually help to produce. Moreover, the correlation between quality of input and quality of output is hard to prove definitively, and if our aim is to assess the quality of the research produced, anything less than a close look at the outputs themselves would seem second best.

But could the burden of assessing outputs be lightened by switching from peer review to some form of metrics? I think not. Every metrics-driven model I have seen for assessing research quality focuses on what can be counted – and counts are very imperfect proxies for what actually needs to be assessed: quality.


Take journal rankings. Different academic fields disagree (including internally) about whether there is or could be a definitive ranked list of the best journals. In most, if not all, fields in the arts, humanities and social sciences, it is not uncommon for different journals to approach the same referees to assess the same piece of work; specialist expertise is scarce, after all. Sometimes those approached recuse themselves if they have already reviewed the manuscript for another journal. Sometimes they don’t. Either way, the supposed “best” journals certainly don’t have a stranglehold on the best reviewers or the highest standards. Rather, in my experience, the “best” journals are generally considered to be those most commonly available within universities – and those tend to be the oldest titles.

What about other possible metrics? Grant income (and the need for it) varies enormously by field and scholarly approach. Citations, meanwhile, are a particularly unreliable indicator of quality in the humanities. In my own work, for instance, I cite work that I find mistaken as often as, if not more often than, work supporting my points, since I aim to offer something new. Does that matter at an aggregate level? After all, universities are assigned funding based on the entirety of their REF submission, so you might ask how much granularity we really need. However, the minutiae of the rankings do matter enormously for those involved. Cutting corners might save time, but if it short-changes certain departments and institutions, it isn’t worth it, in my opinion.


Perhaps a metrics-based approach might be more relevant to the sciences, and there is nothing to say, a priori, that all disciplines must be judged in the same way; I am aware that, in Australia, for example, a hybrid approach is used depending on the discipline. But if we do want a universal approach, peer review would appear to be the worst option apart from all the others.

While it might be difficult to improve on the REF as a mode of national research assessment, I do worry that it is so utterly divorced from the assessment of teaching quality. This seems particularly misguided for those research-intensive institutions that champion their research-led teaching.

Some might worry that bringing teaching and research together would create more administrative workload. This is an important concern, and I agree with those who argue that we need to focus more on doing research and teaching rather than filling out reports about it. Yet it is also important that we think clearly about what we do and why we do it.

In this case, I don’t think the administrative burden need increase at all. We in the UK already spend a lot of time on teaching quality through assurance exercises, periodic departmental reviews and annual reviews of teaching – not to mention the teaching excellence framework (TEF) itself. How difficult would it be to reorient these exercises so that strategic planning about teaching focuses on research-led teaching excellence? After all, the staff and students who create and benefit from research and research-led teaching are whole human beings, so why should our institutional research and educational strategies be run by separate teams, each with their own future plans?

Of course, any joined-up assessment of research and its contribution to world-class education would probably lend itself even less to a metrics-driven approach than the REF does. But what I propose would, I believe, produce a far more holistic, far more useful measure of departmental and institutional quality.

We can and we should do better than the current approach – but not by grasping for low-cost, low-quality metrics.

Thom Brooks is the dean of Durham Law School and the president of the Society of Legal Scholars. He comments in a personal capacity.

Readers’ comments (5)

As long as scholars behave rationally, they will try to get their work maximum exposure by publishing it in widely read and highly cited journals. This creates a market value for journals, where those with a higher impact factor are more desirable, within any given field. That makes journal impact factors normalised per discipline a good proxy for REF output ratings. Yes, there is stochastic variation if we use them as a proxy: the quality of some articles will be overestimated, and the quality of some will be underestimated. But because we would be averaging over many publications per department, this would still be a consistent estimator. (It may be possible to factor this uncertainty into a confidence interval if one wanted to provide more than a point estimate of departmental quality.)

I have seen some people argue that top journals are not the best journals. But I have not seen any convincing arguments. It's often people who haven't been able to publish in good journals who reject journal metrics.

All that being said, things may be a bit more blurry (i.e., have a higher uncertainty) in the arts and humanities, compared to STEM, natural sciences, and social sciences, because people there tend to publish in books and because quality may be hard to assess objectively when it comes to things that include art or opinion as a component. But even that is not necessarily an argument against metrics, because manual, qualitative panel assessments may be just as error-prone in assessing the quality of the work if it is subjective. With all that in mind, I have yet to see compelling evidence for the claim that journal impact would be a bad aggregate measure of output quality.
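
A minimal sketch of the averaging argument in the comment above, with all numbers invented (a hypothetical 1–4* quality scale, an assumed department of 200 outputs) and assuming, crucially, that the per-article proxy error is unbiased random noise:

```python
# Toy simulation: a noisy per-article quality proxy becomes a much
# tighter estimate once averaged across a whole department.
import numpy as np

rng = np.random.default_rng(seed=0)

n_articles = 200  # assumed department output size
# Hypothetical "true" article qualities on a 1-4* scale.
true_quality = rng.normal(loc=3.0, scale=0.5, size=n_articles)
# Proxy = true quality plus unbiased per-article noise (the assumption
# doing all the work here).
proxy = true_quality + rng.normal(loc=0.0, scale=1.0, size=n_articles)

dept_estimate = proxy.mean()
std_error = proxy.std(ddof=1) / np.sqrt(n_articles)  # shrinks as 1/sqrt(n)

print(f"true departmental mean quality: {true_quality.mean():.3f}")
print(f"proxy-based estimate:           {dept_estimate:.3f}")
print(f"approx. 95% CI half-width:      {1.96 * std_error:.3f}")
```

The caveat is that averaging only washes out unbiased noise: a systematic bias in the proxy – for instance, the field-dependent citation practices the article describes – survives aggregation intact.
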
Best thing would be just to scrap the REF and TEF. Both a complete waste of public money. Give it to the health service.
A few thousand words of missing the point there. Metrics aren't very good at evaluating individuals, but they are good at evaluating *departments*, because individual errors average out. Given they are also cheaper and simpler than peer review, I don't understand why we are still having this argument. I can only assume there are a huge number of people who are perversely invested in the REF process.
There is little to no likelihood of consensus on any assessment scheme to replace the REF and/or TEF and/or KEF. Many of us are frustrated at the amount of time and opportunity cost associated with assessment, particularly because this is diverted from the doing of core academic research, scholarship, learning and teaching, impact and knowledge exchange. It would be possible to model the extent to which metric-driven approaches achieve similar outcomes to the current expert-led process; indeed, previous attempts to do this have been made. If the REF panels didn't anonymise the assessment process at the level of individual outputs, we could see at a much more granular level what is happening. This would resolve the aggregated/disaggregated issues for units of assessment, but of course there is no incentive for the REF panels to do so, and consequently everyone is guessing whether their individual output, with its individual citation/altmetric/etc. data, was or was not well correlated with the expert opinion delivered by the panel. I doubt our students would accept feedback that was only about the aggregate outcomes for their year group; understandably, we are tasked with giving specific feedback and marks for their individual work. It is an odd juxtaposition that our academic community is given relatively high-level, aggregate feedback and denied the chance to learn how their own individual outputs were rated, so that they can understand which types of work, in which types of outlets, will help develop their career.
What would happen if the REF/TEF were scrapped? Would all academics suddenly reduce the quality of their teaching, and would everyone start doing shoddy research? Institutions like Cambridge/Oxford/Harvard are revered around the world – why? Because of the REF? If you know academia, you would know that it is a small world and your reputation travels before you. It is one profession where individual achievements are on public display for everyone to see and judge. What is the REF panel going to tell me that I couldn't learn by reading a paper or looking at a grant document myself? Well, in fact, the REF panel won't even tell you about any individual outcome. Just total nonsense! All it does is keep the rent-seekers and other businesses, like external reviewers for the internal assessments, thriving. Apparently being on the REF panel itself is very prestigious – yes, so go ahead, create a market there as well and enable gatekeepers and the well connected to ensure that those positions also go to a select few. Does it help people decide on the best universities? Given how dependent UK universities are on international student income, get a grip – as one recruitment agent from China commented, everyone knows there are 20 universities in the top 10 in the UK. After the REF, everyone claims to be top of something! So wake up and smell the coffee. This business needs to be wound up pronto.
