Bespoke robot-set exams can curb cheating, say experts

Academics say creating unique datasets for online exams is preferable to "naive" honour codes or imperfect online proctoring

5 January 2022
[Image: robots on the playing field at the 2013 RoboCup German Open tournament, illustrating an article about online cheating. Source: Getty]



Bespoke exams that create a unique dataset for every student can significantly reduce cheating in online assessments, researchers say, based on a trial at a UK university.

With some studies suggesting that the growing shift to online examinations has led to a sharp rise in student cheating, academics have begun exploring how exams can be designed so that students cannot collude or copy one another's answers.

In one new approach, chemists at the University of Exeter are using computer code to generate 60 different datasets for a class. In the data-analysis test, which counts for 20 per cent of the overall grade, each student sits an assessment based on their own customised dataset.


According to the study, published in the Journal of Chemical Education, the exam script models laboratory instruments to produce realistic data but introduces an element of randomness, so every dataset is different.
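The paper's generation scripts are not reproduced here, but the idea can be illustrated with a minimal sketch, assuming a simple first-order decay "experiment" and a seed derived from each student's identifier; the file names, parameter ranges and helper functions below are illustrative, not the authors' published code.

```python
import csv
import hashlib

import numpy as np


def seed_for(student_id: str) -> int:
    """Stable per-student seed (sha256 gives the same value on every run)."""
    return int.from_bytes(hashlib.sha256(student_id.encode()).digest()[:4], "big")


def make_dataset(student_id: str, n_points: int = 40):
    """Simulate a first-order decay experiment with student-specific values."""
    rng = np.random.default_rng(seed_for(student_id))
    k = rng.uniform(0.05, 0.15)        # rate constant / s^-1, unique per student
    c0 = rng.uniform(0.8, 1.2)         # initial concentration / mol dm^-3
    t = np.linspace(0, 60, n_points)   # sampling times / s
    conc = c0 * np.exp(-k * t) + rng.normal(0, 0.01, n_points)  # instrument noise
    return t, conc


def write_dataset(student_id: str) -> None:
    """Write one student's unique CSV file for distribution."""
    t, conc = make_dataset(student_id)
    with open(f"{student_id}_dataset.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "concentration_M"])
        writer.writerows(zip(t.tolist(), conc.tolist()))


if __name__ == "__main__":
    for sid in (f"student{i:03d}" for i in range(1, 61)):  # e.g. a class of 60
        write_dataset(sid)
```

Because the seed comes from the student identifier, every dataset differs in its numbers but demands exactly the same analysis, and the same file can be regenerated deterministically at marking time.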

Alison Hill, senior lecturer in biosciences at Exeter, who co-authored the study with her colleague Nicholas Harmer, explained: "If your exams carry significant weight towards the degree and they move online, that provides an opportunity, and even an incentive, to cheat."

"This [online cheating] is not just happening in academia. My husband is a chess champion in Devon, and when chess moved online because of the pandemic, there were reports of some players using computers to help them win."

One way to stop students colluding on data-analysis questions, Dr Hill explained, is to limit the exam to one hour. However, such a short window unfairly penalises students with poor internet connections, as well as those in other time zones who often have to start the exam at 3am.

But the 24-hour exam window that students prefer is, she believes, effectively an invitation to share answers.

"We have seen elsewhere that as soon as a paper goes online, WhatsApp groups spring up immediately. People simply see that kind of sharing as a good investment of their time," said Dr Hill, who believes that relying on a university's honour code to deter cheating is "naive".

"We cannot stop cheating entirely, but if every student has a unique dataset for the same questions, the cost-benefit balance of cheating no longer works in their favour, because they would have to solve it all over again [for a classmate] with a different set of information."

According to the paper, this approach of designing cheat-resistant exams by creating distinct datasets can be applied to most data-heavy assessments, with answers generated automatically for each paper.
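Continuing the hypothetical sketch above (again an illustration, not the published method), automatic answers follow naturally: the marker re-runs the same seeded draws to recover each student's "true" parameters rather than re-fitting 60 datasets by hand.

```python
import hashlib

import numpy as np


def seed_for(student_id: str) -> int:
    """Same stable per-student seed used when the dataset was generated."""
    return int.from_bytes(hashlib.sha256(student_id.encode()).digest()[:4], "big")


def mark_scheme(student_id: str) -> dict:
    """Recover the 'true' values behind one student's dataset.

    The draws must mirror the generation order exactly, so each answer key
    matches the dataset that student actually received."""
    rng = np.random.default_rng(seed_for(student_id))
    k = rng.uniform(0.05, 0.15)      # first draw, as in the generation script
    c0 = rng.uniform(0.8, 1.2)       # second draw
    return {
        "rate_constant_per_s": round(k, 4),
        "initial_conc_M": round(c0, 3),
        "half_life_s": round(float(np.log(2) / k), 1),  # an example derived answer
    }


if __name__ == "__main__":
    print(mark_scheme("student001"))  # answer key for that student's paper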

Dr Hill said this exam design is far more effective than some of the proctoring technologies trialled during the pandemic, such as requiring candidates to sit in front of a remote-invigilation webcam, which some students could easily circumvent.

"Students in lockdown will always find ways around that type of requirement," she said.

jack.grove@timeshighereducation.com

This article was translated into Chinese by Liu Jing for Times Higher Education.

Postscript

Print headline: Computer code tops honour code in thwarting cheats



Reader's comments (9)

I really rate and value these kinds of approaches. I've been doing something similar for around eight years now with coursework for a large physiology module. There is still cheating going on in one of the small tasks, where students have several days to complete it but timers on the screen once they start; they share answers and ask questions in the WhatsApp group. My exams for the same module have also been online for several years now, but cheating happens far less because I'm able to package and constrain those far better. A colleague in my department is doing a similar randomised data set approach to this article on his chemistry and maths module using "Numbas". And a buddy in my office ran his genetics exam last year with a series of unique gene sequences, each student getting a different one that changed the outcome of the question, so there was no point sharing information. We need more of this. Great stuff, Alison!
Thank you, Chris. I use NUMBAS for my Medicinal Chemistry module and it is brilliant. Good luck with maintaining standards!
This certainly looks like a feasible solution that will get rid of the benefits of cheating. The most depressing thing is that students cheat at all. They do not seem to see the value in actually doing the work rather than just chasing the marks. The chess anecdote is really sad as players are only cheating themselves since they will one day play face-to-face again and find it more difficult without their computer assistance.
This seems like a great idea, but it will only work for a very limited range of courses that require analysis of numerical data. I suspect it is also hugely time-consuming to prepare such tests and to ensure that they are bug-free.
We also included images and so it is not restricted to numerical data. There is some initial investment but we have provided annotated scripts of the programme files we used and anyone interested can get in touch with us. Once the initial file is written, it can be reused again the following year with new parameters.
It's your classic "a little effort up front saves infinite time later" situation. Yes, the first year is a bit of a faff. But after that? You have more time to dedicate to other innovations, research, and better quality and quantity of feedback on those assessments that cannot be so easily automated and protected. And if you start building different question formats as well as things like branched scenarios to change the outcomes based on single decision points, you can really start to get creative with how these assessments run in different, non-numerical areas.
Nothing new here (apart from the robot). To avoid cheating or rote learning back in the 1970s (before all the cheating mills and online learning), we at Sociology at Birmingham Poly 'personalised' assessment by, for example, asking for a comparison between two case studies from a list of 50 or so. Once a pairing had been selected, no one else could offer the same pairing. Seemed an obvious way to deal with the problem but, of course, it requires more work on the part of the assessor/examiner!