Wrangles over measuring teaching quality move Down Under

If the Australian government wants to link university funding to student satisfaction, it must ensure that scores reflect more than students’ gender, wealth or ease of passage, says Julie Hare

June 21, 2018
Image: a man spoon-feeding a woman (source: iStock)

In return for their significant public investment in universities, governments increasingly demand transparency and accountability. In Australia, education minister Simon Birmingham has gone so far as to sanction the publication of league tables – a move that would have been considered unthinkable just a few years ago.

The rationale is simple: university applicants can make better informed decisions about what and where to study if armed with information about student engagement, graduate outcomes and employer satisfaction.

The 2017 Student Experience Survey, the most recent edition, was published in May. The 112-page report, with its 60 tables and 83 figures, is a policy nerd’s wet dream, and its contents will be recycled into the government’s Quality Indicators for Learning and Teaching (QILT) website in a form that students are supposedly able to fathom – although I am still unable to do so.

The headline results are simultaneously predictable and surprising. Overall, satisfaction was at a healthy 79 per cent, where it has been hovering for the past six years. No surprises there. However, Australia’s funding models and policy frameworks are geared to promote massive over minute, comprehensive over niche and research over teaching. How, then, do you explain the fact that the top three institutions, all with ratings of more than 90 per cent, are small, niche and teaching-focused?

Two of them – Victoria’s University of Divinity and Queensland’s Bond University – are barely larger than your average Australian high school. The third, the multi-campus University of Notre Dame, is bigger, but its 12,000-student headcount is still minuscule by Australian standards; most universities have double that, and some, such as Monash University, have five times as many. Moreover, both Divinity and Notre Dame are religiously based, with the former’s 1,500 students focused solely on theology and ministry.

Bond, meanwhile, is Australia’s only full-fee private university. Another private institution, Torrens University, part of the US-based for-profit Laureate Education Group, languishes further down the list, but that could just reflect its low response rate.

The anomalies accumulate. Take the University of Melbourne, which has the highest student demand in the country (based on entry scores) and the lowest attrition rates. Yet it is ranked by its students below the median, with an engagement score of 77.5 per cent.

All this is particularly fraught since, from 2020, the government plans to use data from sources such as the Student Experience Survey, the Graduate Outcomes Survey and the Employer Satisfaction Survey to not so much punish as un-reward those institutions that perform below the average. It just doesn’t know quite how yet.

That is a worrying prospect for those at the bottom of those league tables. If their students are genuinely less satisfied than those elsewhere, then they need to get some strategies in place to redress that, quick sticks. But if the league table is being driven by factors outside institutions’ control, then there is potential for policy misadventure.

According to the report, female, mature-age, English-speaking, external and first-generation students are all statistically more likely to rate their university experience highly. The flip side is that young, male, privately schooled students are likely to produce less positive results. And here we need look no further than the University of New South Wales: the only university in the country with a majority of male students, and one that tends to attract students straight out of school.

My point is not to defend New South Wales: it can do that for itself. I am merely observing that if the government is going to un-reward universities for not performing well on criteria such as student engagement, it had better be certain that scores aren’t just artefacts of students’ gender, age or socio-economic status.
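
As a purely hypothetical illustration of what such a check might look like, here is a minimal sketch in Python (the data, institutions and column names are all invented, since the unit-record survey data are not public) comparing raw institution-level satisfaction gaps with gaps estimated after adjusting for student demographics:

    # Hypothetical sketch: do institution-level satisfaction gaps survive once
    # student demographics are controlled for? All data and names are invented.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "institution": rng.choice(["A", "B", "C"], n),
        "female": rng.integers(0, 2, n),
        "mature_age": rng.integers(0, 2, n),
        "first_in_family": rng.integers(0, 2, n),
    })
    # Simulate satisfaction driven partly by demographics, partly by institution
    df["satisfied"] = (
        0.6
        + 0.05 * df["female"]
        + 0.04 * df["mature_age"]
        + 0.03 * df["first_in_family"]
        + df["institution"].map({"A": 0.05, "B": 0.0, "C": -0.05})
        + rng.normal(0, 0.1, n)
    )

    # Raw league-table gaps versus gaps after demographic adjustment
    raw = smf.ols("satisfied ~ C(institution)", df).fit()
    adjusted = smf.ols(
        "satisfied ~ C(institution) + female + mature_age + first_in_family", df
    ).fit()

    # If the institution coefficients shrink markedly once the controls are
    # added, part of the raw gap reflects student mix rather than teaching.
    print(raw.params.filter(like="institution"))
    print(adjusted.params.filter(like="institution"))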

If it values academic rigour, it had also better be sure that engagement scores aren’t just a product of how much students are spoon-fed – both academically and socially – and how generously they are marked.

The arguments that the UK has already had about the use of student satisfaction scores in the teaching excellence framework look destined to be re-run Down Under over the next couple of years.

Julie Hare is a freelance writer and former higher education editor of The Australian.

Postscript

Print headline: A measurement problem shared

Reader's comments (1)

Julie, welcome back. It just goes to show that these sorts of metrics are poor proxies for quality.