Artificial intelligence and academic integrity: striking a balance
A look at how universities can encourage the ethical and transparent use of artificial intelligence tools to support learning while guarding against misconduct
Generative artificial intelligence (AI) has stirred great debate and discussion across higher education. Opinion pieces, blogs and social media feeds are full of questions with no clear answers, yet educators are having to grapple with AI concerns in their classes. AI-generated text is widely accessible and constantly evolving, so educators must figure out how to adapt and what to adopt.
AI technology has the potential to revolutionise students' education by introducing personalised learning experiences, and it opens avenues for making education more accessible. Nevertheless, its integration raises ethical questions that must be addressed: what is the potential impact on academic integrity? Will the technology encourage cheating? To what extent should AI-generated data or tools be allowed in university teaching and learning? How do we adapt our teaching in a world where most people can turn to technology for a response?
These issues are not novel. Plagiarism, cheating and academic integrity have been at the heart of ethical discussion for years. Unfortunately, AI tools have become associated with academic dishonesty.
Teachers have increasingly embraced technology over the past 10 years, with mixed success, as more information and instruction has moved online. Such tools have extended educators' reach and made instructional delivery more flexible, but challenges remain: uneven computer access and skills, greater isolation and heavier demands on students' self-direction. AI-generated text is the latest shift in higher education, prompting us to reflect on what learning means now and in the future.
AI enhances the availability of information and can handle complex tasks, analyse large volumes of data and make decisions with minimal human involvement. So, when applied to teaching and learning, the key question is: when does the use of AI or technology become a violation of the learning process?
The use of AI and other technological tools does not inherently hinder learning; the critical factor is how the technology is used while maintaining academic integrity. Information generated by AI can be inaccurate or misleading: these tools can fabricate research, invent sources and fail to attribute authors for their work, obscuring students' comprehension of the subject matter and undermining the educational process.
Despite these limitations, it is important to explore the potential of technology to enrich students' productivity and learning experience. Rather than taking a punitive stance, educators should model responsible use of new technology and show how AI can support learning, demonstrating potential structures, approaches or perspectives.
In academic writing, correct syntax and grammar, originality and clarity are paramount. At times, however, selecting the appropriate words and structure can be a hurdle. This is where tools such as QuillBot and Grammarly step in: they help students enhance their written work by improving the expression of language and correcting spelling, grammar, punctuation and awkward phrasing.
Tools built on large language models such as GPT-3 and GPT-4, including ChatGPT, can also assist with effective writing. Students can be asked to assess a tool's response to a major question or concern in their discipline, employing critical-thinking skills to determine the quality and depth of the AI response.
One effective approach is to have students use generative AI to explain theories through real-life events. For example, students could be asked to use generative AI to examine the Syrian migrant crisis through a specific migration theory. By assessing the practical application of a theoretical concept and the related AI response, students are encouraged to analyse the theory from different angles and build comprehension of that subject area.
AI can and should be a complement to student learning, not a substitute. The ability to critically evaluate and assess text is an essential academic skill – one that will become increasingly valuable as students encounter more AI-generated content.
Future efforts should focus on defining the “appropriate use” of AI tools. Students should be educated not only on the capabilities but also on the limits of AI. In this way, universities will nurture a generation of learners who are adept at harnessing technology but also equipped with critical skills for discernment and effective application.
As AI becomes ubiquitous, universities must equip students with the skills and understanding needed to use these tools responsibly. Universities that embrace AI thoughtfully, while preserving human interaction, critical thinking and the value of in-depth education, will likely thrive in this new educational landscape. It is important to strike a balance in which AI enhances and supports, rather than violates, the learning process.
Georgina Chami is a lecturer in the Institute of International Relations at the University of the West Indies.