
The (AI) sky isn’t falling

Students using generative AI to write their essays is a problem, but it isn’t a crisis, writes Christopher Hallenbrook. We have the tools to tackle the issue of artificial intelligence.

California State University, Dominguez Hills
17 May 2024

In January, the literary world was rocked by the news that novelist Rie Qudan had used ChatGPT to write 5 per cent of the novel that won her Japan’s prestigious Akutagawa Prize. The consternation over this revelation mirrored the conversations that have been taking place in academia since ChatGPT launched in late 2022. Discussions and academic essays since then have consistently spoken of a new wave of cheating on campus, one we are powerless to prevent.

While this reaction is understandable, I disagree with it. Students using AI to write their essays is a problem, but it isn’t a crisis. We have the tools to tackle the issue.

AI is easy to spot

In most cases, AI writing is easy to recognise. If you ask multipart questions, as I do, ChatGPT defaults to using section headings for each component. When a three- to five-page paper arrives with six section headings (something I have experienced), I see a red flag. ChatGPT’s vocabulary reinforces this impression: its word choice does not align with how most undergraduates write. I’ve never seen a student call Publius a “collective pseudonym” in a paper about The Federalist Papers, but ChatGPT frequently does. AI is quick to discuss the “ethical foundations of governance”, “intrinsic equilibrium” and other terms that are rare in undergraduate writing, especially if you haven’t used them in class. Certainly, some students do use such vocabulary, which is why word choice is a signal to look closer, not proof in itself.

You must be careful and know your students. In-class discussions and short response papers can help you get a feel for how your students talk and write. In the worst case, a one-to-one discussion of the paper with the student goes a long way. I’ve asked students to explain what they meant by a certain term; the answer “I don’t know” tells you what you need to know about whether they used AI.

Even when you can’t identify AI writing so readily, you will likely fail the paper on its merits anyway. I’ve found that ChatGPT will frequently engage with the topic but write around the question: the answer is related to what I asked about but doesn’t answer my question. By missing the question, making its points only in brief and omitting the textual evidence that I instruct students to include (an instruction I leave out of the question itself), ChatGPT produces an essay that lacks the most essential elements I grade on. So even if I miss that an essay was AI-generated, I’m still going to give it a poor grade.

The summary is ‘dead and buried’

Careful consideration and structuring of essay prompts also reduce the risk of students getting AI-written work past you. A simple summary of concepts is easy for ChatGPT. Even deep questions of political theory have enough written on them for ChatGPT to rapidly produce a quality summary. Summaries were never the most pedagogically sound take-home essay assignment; now they are dead and buried. 

Creativity in how we ask students to analyse and apply concepts makes it much harder for ChatGPT to answer our questions. When I was an undergraduate, my mentor framed all his questions as “in what manner and to what extent” can something be said to be true. That framework invites nuance, forces students to define their terms and can be used to craft topics that have been written about less widely.

Similarly, when responding to prompts about theories of democratic representation, ChatGPT can effectively summarise the beliefs of Publius, the anti-federalist Brutus or Malcolm X on the nature of representation, but it struggles to answer: “Can Professor Hallenbrook properly represent Carson? Why or why not? Draw on the ideas of thinkers we have read in class to justify your answer.” In fact, it doesn’t always recognise that by “Carson” I am referring to the city where I teach, not a person. Because the prompt doesn’t specify which thinkers, ChatGPT has to pick its own, and in my practice runs it drew almost exclusively on thinkers I had not taught in my American political thought class.

Ask ChatGPT first, then set the essay topic

I select my phrasing after putting different versions of the question through ChatGPT. Running your prompt through ChatGPT before you assign it will both show you whether you’ve created a question that generative AI will struggle with and give you a feel for the tells in its approach, so you can spot them if a student tries to use it. I’d recommend running the prompt multiple times to see different versions of an AI answer and to make note of the tells. It takes a touch more prep time but is well worth it. After all, we should be continually re-examining our prompts anyway.

So, yes, ChatGPT is a potential problem. But it is not insurmountable. As with plagiarism, some uses may escape our detection. But through attention to detail and careful design of our assignments, we can make it harder for students to use ChatGPT to write their papers effectively and easier to spot it when they do.

Christopher R. Hallenbrook is assistant professor of political science and chair of the general education committee at California State University, Dominguez Hills.

