In defence of the National Student Survey

John Cater decries the misuse of an NSS that can offer genuine insight into university teaching

March 11, 2017

Seldom does a day pass without the National Student Survey getting another kicking: if not from the sector, then from the Commons; if not from the Commons, then from the House of Lords as it debates the Higher Education and Research Bill.

For certain, the NSS has its flaws, and they are flaws that will be accentuated this year by the ill-judged decision to replace “outcome” questions (on personal development) with “process” questions (on engagement), conveniently forgetting the number of lectures many of us missed while we were students.

And they are flaws further accentuated by the decision to eliminate the one question that focused on that key vehicle for representation and participation: the students’ union. 

But what of the alternative view? Twelve years of time-series data. Surveys completed every year by the majority of the eligible population: a quarter of a million students across 130 universities and on virtually every degree programme. Twenty-three questions, five possible answers, 115 data fields and shedloads of open comments – for the committed reader, the most valuable source of all.

Of course, the administration of the survey may have been abused by some, although the evidence is far less compelling than the innuendo. If this is a problem, it could easily be resolved, in 2017 if not in 2005, by changes in how the survey is administered. 

Far greater abuse takes place at the analysis and reporting stage, with the state, the funding council and much of the media lazily focusing on two possible answers (very/mostly satisfied) to one ill-defined generic question (No 22: overall satisfaction).

Two data fields out of more than a hundred. And only to tell us that the answer to 30 million individual data entries every year is “87” (which is the near-perennial percentage for the number of students who are “very” or “mostly” satisfied, according to Question 22). Reductio ad absurdum. Douglas Adams would have been delighted – and a quarter of a million respondents have the right to feel insulted.  
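To see just how drastic that reduction is, here is a minimal illustrative sketch in Python. It uses invented random responses rather than real NSS data (which is not published respondent by respondent) purely to show how 115 data fields collapse into a single headline number:

```python
# Illustrative only: invented data, not real NSS responses.
import random

NUM_QUESTIONS = 23       # each answered on a 1-5 scale, giving
NUM_RESPONDENTS = 250_000  # 23 x 5 = 115 aggregate data fields

# Simulate one 1-5 answer per question per respondent.
responses = [[random.randint(1, 5) for _ in range(NUM_QUESTIONS)]
             for _ in range(NUM_RESPONDENTS)]

# The headline metric keeps only Question 22 ("overall satisfaction")
# and only two of its five possible answers: 4 ("mostly satisfied")
# and 5 ("very satisfied"). Two fields out of 115.
q22_answers = (r[21] for r in responses)  # question 22, zero-indexed
satisfied = sum(1 for answer in q22_answers if answer >= 4)
headline = round(100 * satisfied / NUM_RESPONDENTS)
print(f"{headline}% satisfied")  # in the real survey, near-perennially 87
```

With uniformly random answers the printed figure will hover around 40 per cent rather than 87; the point is only that everything else the survey captures is discarded on the way to one number.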

So we now approach the teaching excellence framework, and the institutionally critical division of universities into gold, silver and bronze, not (regrettably) “outstanding”, “excellent” and “meets expectations”. And we do so with the most fully developed measure we have of students’ perception of their learning experience denigrated and disparaged, with commentators claiming that responses to questions such as whether “feedback on my work has been prompt” are influenced by the attractiveness of the tutor (hopefully another nebulous concept). 

Indeed, the UK higher education minister Jo Johnson, in his letter to the House of Lords on 3 March, specifically supports this downplaying when he states that “we have explicitly said that the NSS metrics are the least important”. 

The counterweights to the NSS are already there: retention, employment, graduate employment. But none of these seeks to measure teaching excellence, nor is there an alternative measure proposed that does. There is no reference to a Retention, Employment and Graduate Employment Framework in the forthcoming Act, but there is a real danger that the REGEF is what we’ll get.   

John Cater is vice-chancellor of Edge Hill University.
