Is the benefit of the REF really worth the cost?

Simpler options are imperfect but perhaps no more so than the panels’ unavoidably cursory ‘peer review’ of submissions, says Dorothy Bishop

April 28, 2021
Divergent paths - one direct, the other convoluted, symbolising two approaches to the REF
Source: Getty

As long ago as 1998, Colin Blakemore, then president of the British Neuroscience Association, expressed his reservations about the burden imposed by the UK’s research assessment exercise (RAE) on both institutions and those charged with “peer reviewing” their submissions.

“The changes in ranking that now occur from exercise to exercise are generally small in magnitude and in number,” he noted. “In other words, huge effort and cost are being invested to discover less and less information.”

In 2009, the Treasury proposed a much simpler system that awarded block grants in relation to institutions’ research income. However, this was greeted by howls of anguish from academics. Institutions had invested a lot in preparing for the RAE. Those who had done well out of it were particularly wedded to the status quo. And those with a high proportion of arts and humanities had realistic concerns about losing funding given that grants in these fields are small compared with science. So the new name belied the research excellence framework’s duplication of the RAE’s assessment methodology.

The influential Metric Tide report of 2015 discussed “metrics based” assessment, conceptualised as evaluation of individuals via indicators such as publication or citation volumes. But this was roundly rejected as unable to capture nuances of quality. “Peer review is not perfect, but it is the least worst form of academic governance we have,” the report concluded.

Now, with the UK government on a crusade against red tape and the academic workforce reeling from the pandemic, the REF – whose latest submission deadline passed at the end of last month – is again coming under scrutiny. It is worth reflecting on possible alternatives.

We should start by accepting that any new system will soon fall foul of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” This was recognised in 2009: if grant-based income was to be the metric, just watch that researcher who previously managed to do excellent work with few resources suddenly go cap in hand to the research councils.

We should also ask ourselves what we are trying to achieve. The stated purposes of the REF are to “provide accountability for public investment in research and produce evidence of the benefits”, “provide benchmarking information and establish reputational yardsticks” and “inform the selective allocation of funding for research”.

I’d be happy to ditch the second aim. A healthy higher education sector has a diversity of institutions, including some that may specialise in research on small and local issues. Measuring everyone against a single “world leading” yardstick is as undesirable as it is unrealistic.

The other two goals make sense. Public accountability is vital, and we need a fair way to allocate research funding. Indeed, in countries where corrupt and nepotistic practices are entrenched, the REF is seen as admirably transparent.

I have two objections to the current approach. First, the costs and benefits still seem massively unbalanced. For the past three years, institutions have been engaged in preparing their submissions, with many going through a mock REF in the process. Between May 2021 and February 2022, subject panels will assess the submissions. I did a few back-of-the-envelope calculations for my subject area: assuming each output is doubly assessed, each panel member could have about 500 outputs to assess over those 10 months. And that’s before we get to impact case studies.
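That estimate is easy to sanity-check. As a purely illustrative sketch (the panel size and output count below are assumed for the sake of the arithmetic, not actual REF figures), the back-of-the-envelope calculation runs like this:

```python
# Illustrative workload estimate for a REF subject panel.
# ALL numbers here are assumptions for demonstration, not REF statistics.

panel_members = 20        # assumed size of the subject panel
outputs = 5000            # assumed outputs submitted to that panel
assessors_per_output = 2  # each output is doubly assessed
months = 10               # May 2021 to February 2022

reads_total = outputs * assessors_per_output      # total readings required
reads_per_member = reads_total / panel_members    # share per panel member
reads_per_month = reads_per_member / months       # monthly reading load

print(f"{reads_per_member:.0f} outputs per member, "
      f"about {reads_per_month:.0f} per month")
```

With these assumed inputs the load comes out at 500 outputs per member, about 50 a month on top of a normal academic job, which is the scale of burden the article describes.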

Second, the REF process is not peer review in the sense that this term is usually understood. This is not a slur on the integrity of panel members; nobody, however dedicated, could have the expertise and time to adequately peer review such a large volume of work, much of which may be outside their specific interests. As Derek Sayer asked in 2014, how could a panel of historians evaluate the thoroughness, rigour and accuracy of a monograph on a specific period of eastern European history if none of them had background in that area?

Largely because academics are suspicious of simple metrics, we’ve ended up with a hugely complex system that is intended to give a more detailed and nuanced evaluation of quality but, in effect, just generates an enormous workload while achieving no more valid a result than could be obtained from a simpler system.

Yet what could go in its place? In 2014, I suggested that we could award funds in proportion to the number of active researchers at an institution, weighted, as is currently done, by the expense of research in each discipline. This would be far from perfect, but it would certainly cost a lot less.

It is easy to predict that institutions would proceed to designate everyone a researcher, with no quality control. However, if we anticipate such problems, we should be able to minimise them. The basic rule would be to start out with the simplest system possible and only add more complexity if the benefit demonstrably outweighed the cost.

We already have data from many REF rounds. We could evaluate how different the outcomes would have been if we had used simpler indicators. We could then decide on whether the cost of our current system is justified. Are we really spending all that time and money to achieve a fairer and more precise result? Or would we get an equally defensible outcome from an exercise based on existing indicators that could be completed in a matter of weeks rather than years?

Dorothy Bishop is professor of neurodevelopmental psychology at the University of Oxford.

POSTSCRIPT:

Print headline: Do the benefits of the REF really justify its cost and complexity?

Readers' comments (9)

Getting grants used to be a matter of financial necessity - if a specific (ambitious) research project could not proceed without funding, then we applied for it. Since grant awards became a performance metric, all academics are now pressured to get grants for the REF and for promotions. How can this type of reward structure not lead to financial inefficiency in UKRI's funding?
I agree, and I find this very frustrating. I'm a mathematician - other than my computer, I just need pen and paper. Why do I have, with respect to grant income, the same requirements as someone who employs teams of research assistants, or someone in physics who runs a lab?
The intention of this article is good. More efficiency in the REF would be a good thing. But I am not sure the proposed cure would be suitable. Giving each and every institution the incentive to hire as many bodies as possible would accelerate the monopoly game that is the competition between universities. If my university is assessed not on paper quality but on department size, it will do what it can to recruit more clueless student customers with poor education from developing countries just to get the income required to hire those new bodies for the REF. Instead, we need to maximise quality per department, not quantity. The recent suggestion here on THE to use citation metrics of journals would do the trick. They are quick and easy to deploy, and while they may be off for each particular article, these effects will balance each other out statistically, because there are many submissions and because the journal's impact factor is a consistent estimator of scholarly article impact.
I totally agree with Professor Bishop's premise that the costs of assessment now outweigh the insights gained. We're also now beset by the problems of "gaming" the system at everything from the individual researcher to the institutional level. My own view is that this isn't just about research performance and that similar dramatic simplification is needed in TEF and KEF so that academic staff spend more time doing academic work and less time accounting for the doing of academic work ... https://www.timeshighereducation.com/blog/radical-rethink-uks-excellence-frameworks-needed. If we don't pause and rethink now, another 5-8 years will pass and the opportunity cost will continue to grow.
I have a contact at another Russell Group university who for the past two or three REFs did the following. With a friend/colleague, after work, he went through all of their department's intended REF submission and, using a simple Excel table, calculated a score based on the impact factor of each publication and the citation index. They then printed out the Excel sheets and sealed them in an envelope, which was dated and signed across the flap. These envelopes were not opened until the official score emerged, and in both cases their scores differed only in the second decimal place. Cost of their effort: about three hours and £10 (a six-pack of 'refreshment') - somewhat cheaper and just as effective as the tens of millions for the REF, which has been a massive waste of both money and academic time!
But you see.... think about all those people who are employed, paid, and get reputational benefits from being appointed as REF panellists/leads. What on earth will they do if they lose all those benefits?
I agree with the article completely. The current REF has built a massive internal and external bureaucracy for very little return. Peer review is not really a reliable method of judging articles that have already been through a review process. In addition, the differences between the top departments are marginal. It is just a huge waste of time and effort for a marginal outcome.
REF is a poker game with high stakes for VCs and research directors. The impact on academic life (publish 4* or perish) should not be underestimated. A simpler way of incentivizing academics to achieve research goals would be to ...
In some fields, almost all of the papers that get into the top journals use US data. This is partly to do with the reliability and availability of data, but also because most of these journals are US-based. Is that a proper use of UK taxpayers' funds? Could the REF not give more weight to research that is primarily aimed at studying issues relevant to the UK?
