Scrap that student survey now

December 12, 2003

The proposed national satisfaction poll would be a costly and pointless exercise, says Lee Harvey

The Higher Education Funding Council for England's proposed national student survey is a misleading, intrusive, expensive and ultimately worthless venture.

The original proposal was to survey graduates about their experience, but Hefce now proposes to survey students halfway through their final year.

This is an unacceptable intrusion into university life that will damage existing improvement processes based on internal explorations of student satisfaction.

Many universities conduct institution-wide surveys of the student experience. These are designed to address issues of concern to students and are linked to action cycles.

The proposed survey in January 2005, which will neither contribute to the improvement action process nor provide any useful information, will clash with these important internal processes, also scheduled for the spring term. The result will be low response rates on one or probably both surveys.

This disastrous consequence is unacceptable and will be detrimental to students. Those commissioning the pilot were fully aware of this potential clash and it is staggering that they should support the survey of final-year students mid-term. Vice-chancellors need to oppose this expensive and unhelpful scheme.

And expensive it will be. The pilot has already cost £400,000 by conservative estimates, not taking into account substantial "free" labour.

All this to achieve a response rate of about 40 per cent from 23 institutions.

Apply this to the sector as a whole and we are talking at least £3 million. But that won't provide the credibility needed to convert the results into league tables. This will require response rates of 70 per cent or more.

This will mean not just a doubling of costs but a three- to fourfold increase, because telephoning non-respondents is an extremely costly process. What will all this effort and £10 million a year generate?

Meaningless and pointless statistics of use neither to institutions for improvement purposes, nor to intending students as indicators to help them choose a course suited to their learning style.
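For anyone who wants to check where that £10 million comes from, the arithmetic can be sketched in a few lines. The pilot cost, the 23 pilot institutions and the three to fourfold increase for chasing non-respondents are taken from the figures above; the sector size used below is an illustrative assumption, not a figure from the pilot.

```python
# Rough sketch of the cost extrapolation. The pilot cost, the 23 pilot
# institutions and the three- to fourfold follow-up multiplier come from
# the figures above; the sector size of 170 institutions is an assumption
# made purely for illustration.

PILOT_COST_GBP = 400_000        # conservative estimate, excluding "free" labour
PILOT_INSTITUTIONS = 23
SECTOR_INSTITUTIONS = 170       # assumed, for illustration only

cost_per_institution = PILOT_COST_GBP / PILOT_INSTITUTIONS
sector_cost = cost_per_institution * SECTOR_INSTITUTIONS
print(f"Sector-wide at pilot response rates: about £{sector_cost / 1e6:.1f}m")

# Pushing response rates to 70 per cent or more means telephoning
# non-respondents, taken above as a three- to fourfold increase in cost.
for multiplier in (3, 4):
    annual = sector_cost * multiplier
    print(f"With a {multiplier}x follow-up cost: about £{annual / 1e6:.0f}m a year")
```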

I am all for institutions making their internal feedback available to prospective students. The proposed approach, though, is laughable in its pointlessness. The pilot, for example, assembled nine statements on teaching with which respondents might agree or disagree on a five-point scale. These are averaged to generate a teaching score ranging from one to five - a low score being more positive than a high one. There were five other scales and an overall rating. It is proposed that the post-pilot version will have fewer items per scale.

What do the average scores show? What does 1.5 for teaching mean? Well, it means students quite strongly agree that teaching is... is what? Well, better than if it had scored 3.4, but maybe not quite as good as if it had scored 1.3.
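To be concrete about the mechanics, here is a minimal sketch, with invented responses, of how a figure such as 1.5 is produced from the nine-item, five-point format described above.

```python
# Minimal sketch of the scoring described above: nine agree/disagree items
# on a five-point scale (1 = strongly agree, 5 = strongly disagree),
# averaged into a single "teaching" figure. The responses are invented.

teaching_items = [1, 2, 1, 2, 1, 2, 2, 1, 1]   # one student's nine responses

teaching_score = sum(teaching_items) / len(teaching_items)
print(f"Teaching score: {teaching_score:.1f}")  # 1.4 - "quite strongly agree", but to what?
```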

But what is it about teaching that this score represents? The whole scheme is based on the "interchangeability of indicators" thesis developed, pragmatically rather than theoretically, by the sociologist Paul Lazarsfeld and colleagues in the early 1960s. It assumes that there is a single concept called teaching and that any subset of broadly similar indicators is as good as any other for measuring it.

Various statistical manipulations, such as factor analysis, "prove" this.

But the whole process is based on an invalid presupposition - that the concept "teaching" is unidimensional. If it isn't - and it isn't - the average is meaningless.
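A minimal sketch makes the point, again with invented responses: two students whose experiences of teaching differ radically can produce exactly the same averaged score, so the published figure says nothing about which aspects of teaching were rated well or badly.

```python
# Two invented response profiles across the nine teaching items
# (1 = strongly agree the aspect is good, 5 = strongly disagree).
# Student A finds everything middling; student B finds some aspects
# excellent and others dreadful. The averaged scores are identical.

student_a = [3, 3, 3, 3, 3, 3, 3, 3, 3]   # uniformly lukewarm
student_b = [1, 1, 1, 1, 5, 5, 5, 5, 3]   # sharply polarised

def scale_score(responses):
    """Average item responses into a single scale score, as the pilot does."""
    return sum(responses) / len(responses)

print(scale_score(student_a))  # 3.0
print(scale_score(student_b))  # 3.0 - same number, very different teaching
```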

The point is that no prospective student is going to make a decision on what course to take based on whether a teaching score is 1.5 or 1.8.

We should be encouraging students to opt for courses that suit their learning style, and these aggregate scores entirely fail to provide useful information for that purpose. Now is the time to stop wasting money and scrap this pointless project.

Lee Harvey is director of the Centre for Research and Evaluation, Sheffield Hallam University.
