
Will ChatGPT change our definitions of cheating?

We can’t yet know if we have a full taxonomy of ChatGPT-enhanced mischief, or whether certain uses should be classed as mischief at all, writes Tom Muir

Tom Muir
Oslo Metropolitan University
2 Nov 2023


We are talking again about ChatGPT’s potential for student dishonesty. A report in Times Higher Education alerts us to ChatGPT’s capacity to respond to images, meaning that students could, in theory, simply take a picture of an exam paper containing images or diagrams, and ChatGPT would be able to answer the questions. Earlier this year, a BBC story emerged documenting uses of ChatGPT by students at Cardiff University, and reporting that a student’s ChatGPT-enhanced essay had been awarded a first – the highest grade that student had obtained.

Something else is worth paying attention to here. We are still talking about spotting quite obvious wrongdoing. There is no intellectual difficulty in working out what has happened in these or similar situations: a student has pretended to do work that they had not in fact done. The challenge is in detecting the dishonesty – but our definitions of dishonesty, cheating and misconduct remain intact. So far, at least.

But we can’t yet know what kinds of cheating ChatGPT might make possible in the future. We can’t yet know if we have a full taxonomy of ChatGPT-enhanced mischief. Perhaps all our definitions will remain intact, or perhaps something new is rumbling down the tracks towards us.

Right now, we might say, any act of academic misconduct falls along a kind of continuum, in part because it is all susceptible to the same types of detection (an alert lecturer noticing a change in writing style, plagiarism-detection software). Along this continuum fall the student who has a chaotic note-taking style and ends up being unable to distinguish between their own words and those of a source; the student who grabs a single paragraph from an internet source and doesn’t attribute it; and the student who buys an essay from an online essay mill.

These would all be clear examples of students (accidentally or deliberately) passing off someone else’s ideas as their own. But using text from ChatGPT is perhaps not quite the same thing – because ChatGPT is interactive, generative and creative. An internet source exists before and outside a decision to steal from it and is unchanged by the act of stealing from it. But ChatGPT only produces text in response to a prompt; one must interact with it.

So far, we might think that ChatGPT-produced text sits clearly on the continuum outlined above – that is, a student including ChatGPT text in (for example) an essay would be passing off another’s words or ideas as their own. But the generative, interactive capacities of ChatGPT take us in another direction. A large language model (LLM) such as ChatGPT producing text in response to a prompt can surprise us – and we might very well want it to.

It’s for this reason that Mike Sharples, in a recent THE piece, says that he intends to use ChatGPT to “augment” his thinking. He means, I think, that its generative capabilities might prompt him to create presentations or papers in ways that he would not have done previously. He wants to be surprised: the prompt you give ChatGPT can always generate text you were not expecting.

What this means is that it’s hard to bind or limit an LLM. We might say to students that they may use it up to a point or that certain limited uses of it are legitimate on certain courses – but its responses could still exceed such limits. It could still surprise us.

Let’s try a thought experiment.

Since tools such as spelling and grammar checkers are legitimate, we decide to allow students to use ChatGPT to go slightly further and improve the surface-level features of a text. A student might then prompt ChatGPT to check the coherence of a text or to make sure that paragraphs have topic sentences.

Here is where the student might be surprised.

The resulting reorganisation of text is such that a new thought, or a new line of argument, crystallises in it. If we accept that thinking and language are related (and I think we must), then this is a possibility with an appropriately sophisticated LLM. This student might now respond to ChatGPT’s text as though they are themselves being prompted and refine the text further. And then they might involve ChatGPT once more, producing a new text, and so on. We are now talking about a student and a machine co-writing and responding to one another, creating a complex, multi-stranded text braided together from the work of both student and machine.

This would be something different from splicing “chunks” of text written by ChatGPT into an essay.

It might very well be misconduct – but I don’t think it falls on the continuum of misconduct I described above. It is the generative capabilities of LLMs, their capacity to surprise us, that would make such a collaboration with ChatGPT possible. We will need to think carefully about how misconduct is defined when collaboration like this – which may very well be an authentic, meaningful learning process – is possible.

I don’t think we are yet at a point where this thought experiment can become reality. But then, this time last year the conundrums we now face with LLMs were not reality, either.

So, how might we, as educators, respond to these new circumstances? I can think of a couple of ways.

One is that we might need to write modules in which we tell students that they can use ChatGPT to their hearts’ content – it will not be classed as misconduct, but they need to document what they are doing. Such an approach would allow us to get an understanding of what students are doing and how they are incorporating LLMs into their own work habits. We could then start refining our definitions of misconduct in the light of what we find out.

Related to this, we might need to prepare for the idea that the use of LLMs could be encouraged in foundation or first year and that students would be expected to decrease their reliance on them as their expertise increased. We could expect text produced by ChatGPT in assignments to be appropriately labelled as such. This would echo some of the things we know about plagiarism, which can be usefully seen as a “stage” immature scholars need to pass through.

Our starting point, in other words, should be understanding how students might benefit from using LLMs at different points in their degrees. From there, we might consider which uses are legitimate and which might be “too much”.

Tom Muir is associate professor of English for academic purposes at OsloMet – Oslo Metropolitan University, Norway.

