Warnings that robots are coming to steal our jobs may have been understated, following an incident which suggests that automation is going a step further by preventing human discoveries from being published at all.
Jean-François Bonnefon, a research director at the Toulouse School of Economics, told peers of his surprise on learning that a paper he had submitted to an unnamed journal had been “rejected by a robot”.
According to Dr Bonnefon, “the bot detected ‘a high level of textual overlap with previous literature’. In other words, plagiarism.” On closer inspection, however, the behavioural scientist saw that the parts that had been flagged included little more than “affiliations, standard protocol descriptions [and] references” – namely, names and titles of papers that had been cited by others.
“It would have taken two [minutes] for a human to realise the bot was acting up,” he wrote on Twitter. “But there is obviously no human in the loop here. We’re letting bots make autonomous decisions to reject scientific papers.”
Reaction to the post by Dr Bonnefon, who is currently a visiting scientist at the Massachusetts Institute of Technology, suggested that his experience was far from unique. “Your field is catching up,” said Sarah Horst, professor of planetary science at Johns Hopkins University. “This happened to me for the first time in 2013.”
Sally Howells, managing editor of The Journal of Physiology and Experimental Physiology, said that her publications and most others used Turnitin’s iThenticate to detect potential plagiarism.
“However, this is the first time that I have seen a ‘desk rejection’ based solely on the score,” she said.
Ms Howells said that most editors would ask the system to exclude references from a plagiarism scan. “The software is incredibly useful, but must always be checked by a human,” she said. “Thankfully there are still a few of them left.”
Kim Barrett, editor-in-chief of The Journal of Physiology and distinguished professor of medicine at the University of California, San Diego, agreed that anti-plagiarism tools “need to be used appropriately, and they should never be the basis for an automatic rejection”.
Mark Patterson, executive director of the online megajournal eLife, said that his platform did not use software to screen for plagiarism but did conduct “a number of quality control checks…in addition to the scrutiny by the editors”.
“Where computational methods are used at other publishers, staff need to then interpret the findings to avoid situations like the one highlighted,” he said. “In the future, of course, these techniques are likely to get much better.”
POSTSCRIPT:
Print headline: Confused robot says no to paper