Quality and Qualifications Ireland (QQI) is planning to extend the state’s guidelines on academic misconduct to include the use of artificial intelligence (AI). The Qualifications and Quality Assurance (Education and Training) (Amendment) Act “empowers QQI to prosecute those who facilitate academic cheating”.

A white paper focusing on academic integrity is being distributed to all higher education institutions and is expected to be published in 2024. QQI will also be asking education providers to examine their “internal policies on academic integrity and ensure they capture these newly evolved risks”.

In particular, they are being asked to focus on the repercussions that should be faced by individuals found using AI. If academic misconduct has occurred “awards or qualifications may be withdrawn”. UCD’s current Student Plagiarism Policy defines plagiarism as “the inclusion, in any form of assessment, of material without due acknowledgment of its original source”.

The University uses Turnitin as its originality checker. As of April 2023, Turnitin includes an AI writing indicator intended to detect AI-generated content. However, some US universities, such as the University of Texas and Michigan State, have decided not to use Turnitin’s AI detector due to fears of false positives. Turnitin acknowledged that the detection software isn’t perfect but stated that the false positive rate was less than one percent.

There are numerous websites that promise to make AI writing seem more human. With some AI detection software capable of flagging the use of ChatGPT and other generative AI tools, an automated content arms race has begun, but the risk of false positives remains.

To test the accuracy of these tools, The College Tribune ran the official UCD plagiarism policy through a series of AI detectors. While the results varied, several of them deemed the policy itself to be AI-generated.

The risk of false positives, therefore, appears to be considerable. Some universities have begun to update their assessment practices in an effort to prevent students from using AI. Maynooth University has a full page of resources to help staff and students better understand generative AI and AI detection tools. The University also has a policy whereby all assessments may be verified by oral examination.

AI Futures is a project that is currently underway in the UCD College of Arts and Humanities. It “aims to support students and teaching staff in navigating teaching, learning, and assessment in the context of new developments in generative AI”.

The College Tribune contacted Dr Fionnuala Walsh, a member of UCD’s AI Futures team. The project aims to make new and curated resources readily accessible to teaching staff and students, and to “coordinate talks and workshops, and facilitate practice-sharing across the college”.

Dr Walsh emphasised that the project is “prioritizing student learning in its consideration of the potential challenges and benefits of generative AI, rather than having an emphasis on disciplinary procedures”. The College Tribune asked UCD students for their thoughts and experiences regarding AI. One third-year architecture student said she had used generative AI to create concept imagery and had not been caught doing so. “I used it for a project brief and presentation to enhance my work in that way, but I know so many people who actually use it to write assignment outlines or to explain concepts to them.”

“Literally everyone uses it,” said Peter, 23, a UCD social sciences student. “I used ChatGPT consistently last semester to write the outline of my essays. For one politics module I even used it to write summaries of my essays and to write my introductions and stuff to my essays, any time I used ChatGPT I ran it through software to make it more human. If your [sic] smart about it, the college can’t catch you.”

One final-year student said they believe the use of generative AI on assignments is “extremely prevalent” and that “most students are using AI to give them a start on an assignment or for help with a particular aspect at the very least. I know people who have used AI for an outline of an essay.”

Asked what UCD could do to mitigate the threat of AI to academic integrity, the student said, “I think examples of people being caught are mostly clear cut, and that those people have just ripped something off of a generator in its entirety. When it comes to using aspects of AI-generated content or using it for help then I don’t think there’s much that UCD can do.”

“Possibly a return to in-person exams, but that would be a step in the wrong direction. I do think AI can be embraced, and if rules are put in place surrounding its use then it could just become part of student life. It’s already potentially going to have a significant role in shaping other aspects of society.”

Another final-year student took a different view: “Yeah, I do reckon it is prevalent in college work – albeit, to varying degrees. While I’m sure some people use it to generate full responses (as I have seen done by friends/classmates), the answers are often honestly just not good. I do think on the whole we are seeing a rise in its use – be it to prompt ideas, provide counterpoints, proofread, etc., it is another tool in the student toolbox – is it any different using generative AI to proofread vs using spellcheck?”

Another student agreed that more presentations and in-person exams may be a possible solution, but asked: “Why should students be punished for the lack of integrity from other students?” A different student explained that they had used AI for college work, but “mostly just to help format my own ideas and to see what way I should approach the question. The ideas are all my own, I just prompt the AI to format it in bullet point form”.

This student emphasised that they use AI for “formatting purposes only. I personally disagree with using it for idea generation, but if I have generated the ideas myself, what makes the use of AI unethical? Usually, you would go to a person to help with formatting, but that isn’t cheating. So what makes using AI cheating?”

When asked whether they believed UCD would be able to detect those using AI, this student answered “yes and no. Blatant plagiarism is going to get caught. AI is derivative from other academic ideas. However, I’ve never been caught for help with formatting and I’ve been using it for over a year”. Not all students believe AI should be used so freely. One UCD student explained that they are “aware of multiple students using it for assignments, and in group project settings and not receiving any penalisation from it when there are other students putting in the actual effort. It’s been used as an easy tool to write essays etc that people couldn’t be bothered to write”.

One student commented that they are “really nervous about using AI for assignments” and that they would only use AI to help “come up with ideas for approaching essays, otherwise, I don’t use it”. However, most students who commented seemed to be in agreement that AI does pose a risk to academic honesty, with one student remarking that it creates a “threat in the workforce” with the danger of some students “not being able to put theory into practice”.

Another student commented that if more students become reliant on AI for essay writing, then “the same style of writing will constantly be produced”. This sentiment was echoed by another student, who commented that the overuse of AI will “stagnate research”.

Ellen Clusker – News Editor
Additional reporting: Hugh Dooley – Co-Editor