Machine learning software designed to help identify assignments produced by essay mills has the potential to deliver a limited improvement in detection rates, according to an Australian study.
Markers who took part in an experiment with an alpha version of Turnitin’s Authorship Investigate, which compares submissions with students’ previous work to identify anomalies, identified 59 per cent of contract cheating cases when using the tool – compared with 48 per cent without it.
Academics at Deakin University who conducted the test described the results as “very exciting”. However, other experts expressed disappointment that the improvement was not more significant.
For the experiment, detailed in Assessment & Evaluation in Higher Education, 24 experienced markers across a range of disciplines were each given 20 assignments. Each set contained 14 legitimate assignments and six that were purchased from contract cheating websites.
Once they had made an initial judgement on each paper, the markers were allowed to revise their decision on whether or not it came from an essay mill after using Authorship Investigate, which had accessed seven previous assignments written by each student to scrutinise their writing style. The tool compares linguistic attributes such as sentence complexity and length, and flags elements of a submission that fall outside the expected range.
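The general idea can be illustrated with a minimal stylometric sketch. To be clear, this is not Turnitin’s actual method: the single feature (average sentence length), the function names and the z-score threshold below are assumptions for demonstration only.

```python
# Toy stylometric anomaly check: an illustrative sketch only,
# not Turnitin's algorithm. Feature and threshold are assumptions.
import re
import statistics

def sentence_lengths(text):
    """Return the word count of each sentence in the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def avg_sentence_length(text):
    lengths = sentence_lengths(text)
    return sum(lengths) / len(lengths)

def flag_submission(previous_texts, new_text, z_threshold=2.0):
    """Build an expected range from a student's prior assignments and
    flag the new submission if it falls outside that range."""
    history = [avg_sentence_length(t) for t in previous_texts]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)  # needs >= 2 prior texts; the study used 7
    if stdev == 0:
        return False, 0.0  # identical history: no meaningful range to test
    z = (avg_sentence_length(new_text) - mean) / stdev
    return abs(z) > z_threshold, z
```

A real system would combine many such attributes (vocabulary, punctuation habits, syntactic complexity) rather than a single sentence-length statistic, and would present the result as evidence for a human decision rather than a verdict.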
Phillip Dawson, associate director of the Centre for Research in Assessment and Digital Learning at Deakin and one of the paper’s authors, said that the 11 percentage point increase in detection was “very exciting”, adding that more recent versions of Authorship Investigate that are now being used were likely to be even more effective.
He acknowledged that the improvement in detection was smaller than the 24 percentage point rise reported in an experiment conducted with Deakin colleague Wendy Sutherland-Smith – also a co-author on the latest paper – which examined the impact of improving markers’ training in detecting contract cheating.
However, the type of marker training they investigated had involved academics spending three hours examining essay mill submissions similar to those they would be marking, and the pair argued that this would not always be feasible when staff worked across multiple campuses and had other commitments.
“Academics often don’t follow through with accusations of contract cheating because it’s too hard and too time-consuming. [Authorship Investigate] will take some of this work out of their hands,” Dr Dawson said. “This will help markers have confidence to bring the evidence to the committee for discussion.
“It’s always a human judgement about whether contract cheating has happened; we want people to be the decision-makers with input from the systems.”
Irene Glendinning, academic manager for student experience at Coventry University, said that although the results were encouraging, it was “disappointing” that the improvement was not more “startling”. However, “any boost to detection would help provide a deterrent to students cheating in the first place”, she said.
Thomas Lancaster, a senior teaching fellow at Imperial College London, said that while he supported “any technology that made it easier to detect contract cheating”, the first priority “has to be encouraging staff to actively look for contract cheating and making sure that they are supported if they think that they have found it”.
He cautioned that despite the sector’s having better tools to detect this type of cheating, the “essay industry is getting more sophisticated”. “I’ve seen a provider telling students to keep hiring the same writer for all their assignments, so they don’t get caught by tools tracking their writing style. We have to continue to keep developing our response to contract cheating in all the forms that it takes.”