They saw a star and rejoiced

The REF results can be wrapped up and presented many different ways. THE puts its analysis under the sector’s tree

December 18, 2014

So, how did you do in the REF? Your answer will no doubt depend on which ranking you are looking at. Even as we speak, universities are doubtless scouring the various league tables generated both internally and by the media, looking for one in which they ride highest – before issuing that press release trumpeting their success. And why not?

But what should Times Higher Education, as the sector’s own publication, do to try to bring order and meaning to the data, when the range of possible methodologies is so varied?

One approach would be to do what the Higher Education Funding Council for England itself does: publish the results in an unranked format and leave it to others to squabble over how they should be interpreted. But that wouldn’t make for very palatable reading. Besides, even Hefce will implicitly produce a ranking next year when it settles on a new funding formula to turn the REF results into a list of who gets what in terms of quality-related research funding – the REF’s ultimate raison d’être.
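
To see why, consider a minimal sketch of how any such formula implies a ranking. The weights and figures below are entirely hypothetical – Hefce had settled nothing at the time of writing – but whatever numbers are eventually chosen, sorting institutions by their allocation orders them:

    # Purely illustrative sketch of why a funding formula implies a
    # ranking. The weights (only 4* and 3* work funded, at an assumed
    # 3:1 ratio) and all figures below are hypothetical.

    def qr_allocation(profile: dict[str, float], fte: float) -> float:
        """Weight fundable quality and scale by volume of staff submitted."""
        weights = {"4*": 3.0, "3*": 1.0}  # hypothetical weights
        quality = sum(weights.get(level, 0.0) * pct / 100
                      for level, pct in profile.items())
        return quality * fte

    # Sorting institutions by allocation produces the implicit ranking.
    institutions = {
        "Alpha": ({"4*": 40, "3*": 40, "2*": 20}, 300),
        "Beta": ({"4*": 25, "3*": 50, "2*": 25}, 500),
    }
    for name, (profile, fte) in sorted(institutions.items(),
                                       key=lambda kv: -qr_allocation(*kv[1])):
        print(name, round(qr_allocation(profile, fte), 1))
    # Beta 625.0
    # Alpha 480.0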

So, with continuity in mind – and accepting that no methodology is perfect – we have chosen to stick with the broad methodology we used in relation to the 2008 research assessment exercise.

This involves converting each university’s spread of 4*, 3*, 2* and 1* research into a grade point average of between zero and four, with each star level weighted by its numerical value.
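
As a concrete illustration, here is a minimal sketch in Python of that calculation, assuming a quality profile expressed as percentages of a submission judged at each star level (the figures below are hypothetical, not real REF data):

    def ref_gpa(profile: dict[str, float]) -> float:
        """Average the star levels, weighting each by its numeric value.

        `profile` maps star levels to the percentage of the submission
        judged at that level; the percentages should sum to 100, with
        unclassified work counting as zero.
        """
        weights = {"4*": 4, "3*": 3, "2*": 2, "1*": 1, "u/c": 0}
        return sum(weights[level] * pct for level, pct in profile.items()) / 100

    # A submission judged 30% 4*, 45% 3*, 20% 2* and 5% 1*:
    print(ref_gpa({"4*": 30, "3*": 45, "2*": 20, "1*": 5}))  # 3.0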

Some argue that an approach such as this, which focuses purely on the quality of submissions without taking any account of their size, encourages universities to “game play” – to submit only their very best researchers in the hope of maximising their GPA for reputational purposes (and funding be damned).

But there is a sense in which all the submission strategies amount to game-playing of sorts, given that all universities are obliged to take a view on who to submit, and very few submit all eligible staff.

It is also worth noting that those universities that do well on GPA would doubtless produce and publicise their own GPA rankings even if THE did not.

Nevertheless, we accept that some in the sector feel strongly that our ranking ought also to reflect the proportion of their eligible staff that each university submits.

Unlike in 2008, the Higher Education Statistics Agency has this time – on 18 December – produced more-or-less reliable figures on the number of staff eligible for submission to the REF.

However, since the agency was not able to release those figures before THE went to press, we have not been able to use them in the tables we present in this issue.

We will do so on 1 January, and perhaps producing two different rankings is no bad thing – the sector itself can take a view on which is the more meaningful. In this spirit of “more is more”, we also include scores for “research power” – the product of GPA and the volume of staff submitted – in this week’s ranking.

This measure also disadvantages those whose submissions are highly selective and gives a better indication than GPA of where the bulk of the QR money is likely to flow.
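
A minimal sketch of that measure, taking “research power” to be GPA multiplied by the volume of staff submitted, as described above (the figures are again hypothetical):

    def research_power(gpa: float, fte_submitted: float) -> float:
        """Scale quality (GPA) by volume (FTE staff submitted)."""
        return gpa * fte_submitted

    # A highly selective submission: high GPA, low volume.
    print(research_power(3.4, 50))    # 170.0
    # A larger, more inclusive submission: lower GPA, higher volume.
    print(research_power(2.9, 400))   # 1160.0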

Producing multiple rankings will no doubt fuel the “game playing” by university PR teams, who will see any table in which they perform well as a gift. But it is Christmas, after all.

john.gill@tesglobal.com
