Institutions should work together to develop “next-generation metrics” for “responsible research assessment”, the League of European Research Universities (Leru) says.
In a position paper released this week, Leru warns against the use of “one-size-fits-all” methods of academic evaluation, urging universities to tailor their policies to their own “missions” while collaborating with other institutions and funding agencies to “leverag[e] international expertise”.
The paper sets out one “overarching recommendation” to universities: “Use indicators and metrics that are contextually relevant, that support responsible research evaluation, and that align with your institution’s mission. Institutions should collaborate and reuse existing metrics expertise in order to maximise their efficiency in achieving this goal.”
Next-generation metrics might incorporate both newly adopted indicators – progress towards open science, for instance – and existing metrics applied in “novel ways” or in different contexts, the paper says. Citing limited resources, inconsistent data collection and embedded practices as barriers to universities attempting to evaluate research processes effectively, Leru calls on institutions to employ existing expertise and literature while experimenting with new approaches.
Paul Wouters, chair of the Leru experts writing group and emeritus professor of scientometrics at Leiden University, recommended that members of the university group team up to conduct “experiments” with new research assessment approaches.
“I could imagine, for example, that three or four Leru universities might collaborate on redesigning the way they do the annual performance interview with their researchers,” he said in an online Q&A.
“Then another three or four universities might work together on implementing the CoARA [Coalition for Advancing Research Assessment] principles with respect to assessment of research groups. And another four universities could bring together their computer and library science people to see what data could most easily be harvested for meaningful metrics with respect to, for example, open science practices or the promotion of gender equality.”
Such experiments, Professor Wouters said, would help to familiarise institutions with new approaches to research assessment. “We are not advocating a complete revolution, where we suddenly do away with everything,” he added.
“In chemistry, for example, where the present indicators do say something useful, the steps would be perhaps a bit more gradual than in fields like history or philosophy, where counting citations makes no sense at all.”
Calling institutions’ “obsession” with global university rankings “misplaced”, Professor Wouters said that in the absence of research assessment reform, “we also run the risk of missing important breakthroughs, because researchers don’t have enough time to spend on developing completely new fields”.
Leru also notes that a second expert group should be established to tackle teaching assessment reform. Current metrics such as student satisfaction, Professor Wouters said, often fail to reflect teaching quality. “Someone can be very dissatisfied with a course but still learn a lot and find it very valuable in the long term,” he explained.