AI potential ‘squandered by universities’ risk-focused approach’

Higher education staff missing opportunity to use generative AI tools to relieve them of routine tasks, Australian survey finds

August 12, 2024
People gathered in Sydney for the annual RoboCup event
Source: PETER PARKS/AFP/Getty Images

A risk-based approach to artificial intelligence (AI) has deterred university staff from using it productively, with some avoiding it entirely and others covering their tracks for fear of being unfairly penalised.

A survey of more than 3,400 workers at 28 Australian universities has found that very few are using generative AI to relieve them of routine tasks, while its application to pedagogy is largely confined to teaching students about AI rather than using AI to teach.

Lead researcher Abby Cathcart said academics were more likely than professional staff to be using AI, with 75 per cent of surveyed academics saying they utilised it in their work.

But just 37 per cent used it as a teaching tool, mainly to instruct students about integrity.  

She said few academics employed AI in curriculum design, despite the “advanced use cases” for this application. Fewer than 10 per cent used it frequently to help generate assessment criteria, standards or rubrics.

A handful of Australian academics used AI in “clever” ways, such as getting students to use bots to draft assignments before critiquing and revising the results.

“Our results show that most of the sector is not at that mature level. Until more staff engage with AI more frequently, they will not develop capability in understanding the opportunities and risks,” said Professor Cathcart, director of student success and teaching advancement at Queensland University of Technology.

She said universities and regulators tended to “focus first on risk” rather than treating AI as an opportunity, leaving many staff in a “waiting pattern. The difference is, more students are progressing their use of AI. If we continue to have this wait-and-see approach…we’re not preparing students for the world of work and we’re going to be very much left behind them.”

The survey found that most staff knew about their universities’ AI policies, but few considered them useful. A small subset – 8 per cent – said they concealed their AI use for fear of being misunderstood or punished.

Professor Cathcart’s team found little use of AI for administrative purposes other than composing emails. “Very small percentages” of university staff harnessed it for drafting reports or creating meeting agendas or minutes.


“How much time does everyone in the sector spend in meetings? If we’re attending meetings where the minutes are not being generated by AI, then we’re wasting energy,” she claimed.

“AI’s great promise is that it’s going to release us from the pedestrian administrative tasks that distract us from the parts of the job that we love. But that doesn’t seem to be the way the sector’s moving.”

She said some applications of AI – in producing code, transcribing interviews with research subjects or generating student feedback from bullet points, for example – could be extraordinarily time-saving. But there was little evidence of its use for these purposes.

Only 12 per cent of survey respondents said AI substantially improved their productivity, with slightly more than half claiming it had limited or no productivity impact.

Professor Cathcart said some universities could only afford enterprise-wide licences for AI tools with “quite limited utility. You can waste an awful lot of time messing around with them, trying to get them to do things that don’t seem very helpful at all.”

She said “relentless change and growing workloads” had hindered staff from embracing AI.

The research is due to be published in September.

john.ross@timeshighereducation.com

Reader's comments (2)

Ah yes, because who wouldn’t want to replace their routine tasks with the thrill of troubleshooting AI errors and praying the technology doesn’t randomly hallucinate a solution? Sounds like a dream come true! (Disclaimer: this reply was written by ChatGPT in response to a prompt asking for a sarcastic comment on this article)
I doubt future auditors will accept: "we have no idea who made the decision because we sacked our meeting secretary, the AI hallucinated the minutes and everyone just assumed it was magic so didn't review them properly."