Listen to learners

December 7, 2001

Student feedback has a crucial role to play in evaluating institutional performance, writes Lee Harvey.

Student feedback surveys have a unique element: they provide an insider view of what is really going on. Student feedback may not tell the whole story, but it gives a good indication of excellence on the one hand and sounds warning signals on the other.

This valuable source of information has been under-exploited. But with the recent publication of Ron Cooke's consultative report on the information that universities will be expected to make available, this could change. It recommends that student-satisfaction surveys be made public in an effort to improve the quality of institutions - although this approach tends to place an emphasis on the student as consumer rather than learner.

The question, though, is what feedback data should be reported. Data have two functions: information and improvement. Data that help inform people's perceptions of a university should be public. To most people, this means data that can be compared across universities and programmes of study. But one could end up in endless debates about what the components of a core set of satisfaction questions should be.

On the basis of long-term wide-ranging research, we have identified a generic set of questions that occur in most, if not all, substantial institution-wide satisfaction surveys, including some overseas examples. This generic set deals primarily with course organisation, the learning process, what students learn, and learning support. It does not, and should not, include questions about teacher performance.

Another concern is over how public data should be reported. There are three dangers: first, that we could overwhelm readers with statistics; second, that we do the opposite by making it too simple; and third, that we throw everything into the mix and provide relatively meaningless summary assessments that do not give the level of information required to make informed choices.

Over the years, the Which? model of information provision has proved popular. Translated to the student satisfaction survey, this would mean that the mean result for each item on the questionnaire would be reported on a five-point scale for programmes and for universities as a whole (rated A-E). Programme-level reporting would be suitable for ratings of learning development and programme organisation. University-level reporting might be more appropriate for student services and campus environment.

Surveys should address the importance that students attach to each item. Something may not be particularly satisfactory, but if it is of no importance to students on a programme then a reader of the information can give it less weight when making an assessment. Satisfaction and importance ratings are combined into a single grading by changing the case of the A-E grade (upper case being very important, lower case less important). Thus any programme can be easily compared with another programme without satisfaction ratings being aggregated in a meaningless way.
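As a purely illustrative sketch of how such a combined grade might be produced, the short Python snippet below maps a mean satisfaction score to an A-E band and uses the mean importance score to set the case of the letter. The band boundaries and the importance threshold are assumptions for the example, not figures from the survey methodology.

```python
# Illustrative only: assumed five-point scales and invented cut-offs.

def grade_item(mean_satisfaction: float, mean_importance: float) -> str:
    """Return a cased A-E grade for one questionnaire item.

    mean_satisfaction: 1 (very dissatisfied) to 5 (very satisfied).
    mean_importance:   1 (unimportant) to 5 (very important).
    Upper case = students rated the item as very important;
    lower case = less important.
    """
    bands = [(4.5, "A"), (3.5, "B"), (2.5, "C"), (1.5, "D"), (0.0, "E")]
    letter = next(grade for cutoff, grade in bands if mean_satisfaction >= cutoff)
    # Assumed convention: importance of 3.5 or above counts as "very important".
    return letter if mean_importance >= 3.5 else letter.lower()

# A highly rated but low-priority item reports as "a";
# a poorly rated, high-priority item reports as "D".
print(grade_item(4.7, 2.0))  # -> "a"
print(grade_item(1.8, 4.6))  # -> "D"
```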

On this basis, not only could results be posted on institutional websites, but it would also be a simple matter to compile reports that compare similar programmes across the sector.

Student data are not just about information; they also play an important role in the improvement process. To continue to do so, institutional satisfaction surveys will need to be tailored to individual institutions' requirements. This would mean augmenting the core questions with a set of locally determined questions. In our experience, these additional questions are most meaningful when derived from qualitative discussions with students, rather than being imposed by management or teaching staff.

Improvement requires integrating student views into a continuous cycle of analysis, reporting, action and feedback. This means professional data collection and clear reports that identify areas for action, together with delegating responsibility for that action, encouraging ownership of action plans and ensuring feedback to those who generated the data. Much data on student views are not used effectively because this action cycle is not embedded in the university's institutional culture.

Lee Harvey is professor and director of the Centre for Research into Quality at the University of Central England.

* Should students' views feed into the quality process? Email soapbox@thes.co.uk
