A recent study has found that academic content generated by ChatGPT, a state-of-the-art Large Language Model (LLM), is relatively formulaic and can be detected by existing AI-detection tools. This finding comes despite ChatGPT being more sophisticated than its predecessors.
Researchers from Plymouth Marjon University and the University of Plymouth, UK, believe that these findings should serve as a wake-up call for university staff to consider new ways to address academic dishonesty and educate students on the potential risks of relying on AI-generated content.
Concerns About Academic Honesty and Plagiarism
ChatGPT, launched in November 2022, has been hailed as a potential game-changer in research and education. However, its capabilities have raised concerns across the education sector about academic honesty and plagiarism.
To investigate these concerns, the researchers prompted ChatGPT to generate academic-style content using a series of questions and prompts. Examples include: “Write an original academic paper, with references, describing the implications of GPT-3 for assessment in higher education” and “How can academics prevent students plagiarising using GPT-3?”
The generated content was then compiled into a manuscript, with genuine references inserted throughout. This process was only revealed to readers in the academic paper’s discussion section, which was authored directly by the researchers without ChatGPT’s involvement.
Challenges and Opportunities for the Academic Community
As AI technology advances, it presents both opportunities and challenges for the academic community. The study’s lead author, Debby Cotton, a professor at Plymouth Marjon University, remarked:
This latest AI development obviously brings huge challenges for universities, not least in testing student knowledge and teaching writing skills – but looking positively, it is an opportunity for us to rethink what we want students to learn and why.
She further added,
I’d like to think that AI would enable us to automate some of the more administrative tasks academics do, allowing more time to be spent working with students.
Adapting to an AI-Driven Paradigm
The researchers acknowledge that banning AI tools like ChatGPT, as was done in New York schools, can only be a short-term solution. AI is becoming increasingly accessible to students outside their institutions, with tech giants such as Microsoft and Google incorporating AI technology into their search engines and office suites.
Peter Cotton, an associate professor at the University of Plymouth and the study’s corresponding author, stated,
The chat (sic) is already out of the bag, and the challenge for universities will be to adapt to a paradigm where the use of AI is the expected norm.
This study, published in the journal Innovations in Education and Teaching International, highlights the need for academic institutions to proactively address the challenges posed by AI-generated content and embrace the potential benefits of AI technology in education.