Abstract
This study explores AI-giarism, an emergent form of academic dishonesty involving AI and plagiarism, within the higher education context. The objective of this study is to investigate students’ perceptions of adopting generative AI for research and study purposes, their understanding of traditional plagiarism, and their perceptions of AI-giarism. A survey, undertaken by 393 undergraduate and postgraduate students from a variety of disciplines, investigated their perceptions of diverse AI-giarism scenarios. The findings portray a complex landscape of understanding, with clear disapproval of direct AI content generation and ambivalent attitudes towards subtler uses of AI. The study introduces a novel instrument for exploring conceptualisations of AI-giarism, offering a significant tool for educators and policy-makers. This scale facilitates understanding and discussion around AI-related academic misconduct, contributing to pedagogical design and assessment in an era of AI integration. Moreover, it challenges traditional definitions of academic misconduct, emphasising the need to adapt in response to evolving AI technology. The study provides pivotal insights for academics and policy-makers concerning the integration of AI technology in education.
1 Introduction
The integration of AI into higher education offers many potential benefits for enhancing teaching and learning. It can provide personalised educational experiences and encourage independent learning among students (Keleş et al., 2009; Tsai et al., 2020), while for teachers, tools such as automated essay scoring systems and intelligent diagnosis systems reduce the time and costs required for assessing and providing feedback to large groups (Chan & Lee, 2023; Zawacki-Richter et al., 2019). However, the rapid advancement of generative AI technologies also presents a serious threat to higher education. In recent years, the growing popularity of the text-based generative AI platform, ChatGPT (OpenAI, 2022), has sparked discussions in the higher education community about its potential impacts on academic integrity (Chan & Hu, 2023; Gendron et al., 2022; Liebrenz et al., 2023; Stokel-Walker, 2023), particularly with the emergence of AI-giarism. Academic integrity can be briefly defined as honesty in academic endeavours (Löfström et al., 2015), while AI-giarism in the academic context primarily concerns the misuse of AI tools by students and researchers to produce essays, research reports, or even entire theses where the individual presents the AI-generated work as their own (Chan & Tsi, 2023; Cotton et al., 2023). This misuse could reduce the learning process from acquiring, applying, and critiquing knowledge to merely generating output that may be factually incorrect and misleading (Alshurafat et al., 2023). Another significant concern with AI-giarism is the difficulty of differentiating between human-authored text and AI-generated text. While AI software can effectively detect traditional forms of plagiarism, it may fail to identify instances where AI tools have been used to produce a given piece of work. This known limitation could lead to an increase in undetected occurrences of academic dishonesty, further compromising academic integrity (Ahmad et al., 2023).
As we transition into this AI-dominated era, it appears that AI is increasingly and unceasingly becoming integrated into our human behaviours and interactions, raising the question of whether its assistance in academic writing is acceptable. In this study, we will explore this topic by investigating students’ current understanding of traditional plagiarism, as well as what constitutes AI-related plagiarism – AI-giarism.
2 Literature review
2.1 Conceptualisation of plagiarism
Plagiarism is the act of using someone else’s work without proper acknowledgement or permission, passing it off as one’s own original work. Park (2003) defines it as “literary theft,” while Freedman (1994, p. 517) expresses a similar sentiment, believing that plagiarism is “an attack on individuality, on nothing less than a basic human right”. However, researchers and practitioners have noted that the act of plagiarism is complicated and ambiguous as “between imitation and theft, between borrowing and plagiarism, lies a wide, murky borderland” (as cited in Park, 2003, p. 475). In some cases, authors may inadvertently use content without being aware of its original source, treating it as their own work without proper attribution – a scenario which Leatherman (1999) describes as “the point at which an idea passes into general knowledge in a way that no longer requires attribution”. Wager (2014) discusses the ongoing issue of plagiarism in scholarly works and the various factors that influence the definition and handling of plagiarism; factors such as the extent of copying, originality, positioning, referencing, the intention of the authors, their seniority, and the language they are writing in, are all important considerations when defining plagiarism (Wager, 2014). The extent of plagiarism can range from copying a few sentences to entire papers or chapters, the latter of which also breaches copyright laws. Wager’s article also notes that while summarising others’ works is common in scholarly writing, it can be difficult to distinguish between acceptable summarising and outright copying, and that there thus ought to be clearer definitions and guidelines for plagiarism. These issues are further complicated by the increasing use of AI technologies in academic works. As Asamoah et al. (2024) discuss, there is a shift in conversation from the detection and prevention of plagiarism, to using objective knowledge and educational technologies to correct plagiarism.
As AI-powered writing tools become more prevalent, it is important to understand how students perceive and use AI in their academic work, and how AI-integrated technologies can still be used to promote academic integrity.
2.2 Emergence of AI-giarism
The use of AI in academic writing has generated significant interest in AI-related plagiarism. This has led to the emergence of a new term that warrants our attention: AI-giarism. This term, combining “AI” and “plagiarism”, has yet to be widely researched and defined within academic literature. For the present study on AI in education in relation to academic misconduct, the following definition is proposed:
AI-giarism refers to the unethical practice of using artificial intelligence technologies, particularly generative language models, to generate content that is plagiarised either from original human-authored work or directly from AI-generated content, without appropriate acknowledgement of the original sources or AI’s contribution.
AI-giarism can happen when people use AI tools or language models to create written or multimedia content, such as articles, blog posts, images, and videos, without properly attributing the original sources or sufficiently modifying the generated content to establish its originality (Salvagno et al., 2023). Some AI tools generate content by automatically combining or predicting information gathered from different sources such as articles or websites, and paraphrasing them to create fresh content. However, it is not always clear what constitutes AI-giarism – in terms of intellectual property theft, Eke (2023) raises the question of whether original writing generated by AI should be considered plagiarised content in the first place, as no one’s intellectual property is actually stolen. Although this may not be plagiarism in the strictest sense, submitting work that is not one’s own is still a violation of academic integrity and a form of academic misconduct (Perkins, 2023). Compared with plagiarism, academic misconduct is a more general term defined as any form of action that does not comply with the standards of the academic community (Louis et al., 1995), and most universities have zero-tolerance rules on plagiarism as a type of misconduct (e.g., The University of Hong Kong, 2023). In Fig. 1, developed to compare traditional plagiarism and plagiarism related to AI-generated content, the left-hand side shows common zero-tolerance rules on plagiarism found among most universities, from one end of the scale (“completed the work totally by oneself, using proper citations where necessary”) to the other (“using exact words from a source without due acknowledgement of the source”).
Evidently, the use of AI blurs traditional boundaries of authorship and plagiarism, raising new questions about academic integrity in the digital age, especially as the identification of AI-generated work and the accurate detection of plagiarism have become increasingly challenging. While technologies have been developed to detect AI use and relevant plagiarism (Karnalim et al., 2024; Stefanovič et al., 2024), determining the extent to which a course of action or a situation involving AI use upholds academic integrity has become ever more complex. As previously discussed, researchers have raised concerns about the potential threats that AI use can bring to academia and academic integrity (e.g., Cotton et al., 2023; Eke, 2023; Perkins, 2023), particularly in relation to plagiarism (Dehouche, 2021; Salvagno et al., 2023). With the integration of AI into teaching and learning, McKnight (2021) emphasised the need for teachers to guide students in understanding whether the use of AI in their written tasks should be acknowledged, and if so, how to acknowledge it. Similarly, Eke (2023) argues that acknowledgement of AI use should be part of the academic integrity policies of both educational institutions and publishers. Currently, many publishers follow American Psychological Association (APA) and Modern Language Association (MLA) citation guidelines, both of which suggest that it is important to be detailed and descriptive when referencing language models like ChatGPT, providing information such as the specific version of the model used, the organisation responsible for its development, the prompt input to generate any used content, and so on. Referencing AI-generated content can be a complex and nuanced task with many different types of models and applications, each with their own unique characteristics and data sources (Chan & Colloton, 2024), and each with potential limitations and biases inherent within them.
2.3 Measuring student perceptions of plagiarism
Most university plagiarism policies tend to be punitive in nature, imposing penalties on students who are found to have committed plagiarism in their academic works. However, Pecorari and Petrić (2014) contend that punitive measures may not be effective for prevention as some acts of plagiarism are unintentional and due to a lack of understanding. In contrast, approaches that educate and enhance students’ knowledge of plagiarism are believed to increase their skills in handling sources correctly and reduce intentional copying (Leung & Cheng, 2017; Obeid & Hill, 2017). Punitive policies are also referred to as a problem-oriented approach or punitive solutions, whereas the latter is known as a solution-oriented approach or education-based solutions (Wette, 2010; Zhang, 2024). It is important to understand how students view plagiarism and how much they know about the concept in order to devise and implement effective educational measures that foster academic integrity and promote ethical scholarly practices among students.
Although institutions often have their own plagiarism policies that define the related offenses and consequences, how plagiarism is understood and interpreted still varies from individual to individual. Research has found that students from various academic and cultural backgrounds perceive plagiarism differently. Sutton et al. (2014), for example, reported that business students had a more lenient view of plagiarism than students from other faculties, whereas poor referencing was perceived as more serious by UK postgraduate students compared to undergraduate students. In addition, students from high power distance and low individualism cultural contexts, such as Eastern Europe, tend to have negative perceptions of plagiarism policies and practices (Mahmud et al., 2019).
Several studies have found that students only vaguely understand plagiarism as well. In Chien’s (2017) study, students could only articulate a basic definition of plagiarism, but could not recognise or apply this knowledge in their own writing; meanwhile, Childers and Bruton’s (2016) study found that students struggled to identify cases of plagiarism when examples moved beyond the direct lifting of words and sentences. Similarly, Gullifer and Tyson (2010) found that students were not able to distinguish subtle plagiarism behaviours in cases involving the acknowledgement of ideas and paraphrasing of text. These findings demonstrate the conceptual complexity of plagiarism, and such confusion will only be exacerbated by the widespread use of AI in higher education.
Over the years, numerous studies have sought to assess students’ understanding of plagiarism and their ability to avoid it. Some scholars have developed validated psychometric instruments to provide a reliable measurement of students’ perceptions of plagiarism. For instance, Mavrinac et al. (2010) created an instrument to measure students’ positive attitudes, negative attitudes, and subjective norms regarding plagiarism. This instrument was further refined by Howard et al. (2014). Oghabi et al. (2020) contributed to this body of research by developing their Sociocultural Plagiarism Questionnaire, which evaluates students’ theoretical and practical understanding of plagiarism. Furthermore, Cheung et al. (2017) developed a psychometric measure focusing on students’ attitudes and beliefs about authorship as a means to prevent plagiarism.
Apart from these studies, there are others that have used validated surveys and questionnaires to analyse students’ perceptions of plagiarism. They revealed further differences in levels of awareness and knowledge about plagiarism across various countries and disciplines. For instance, studies by Bretag et al. (2014) and Smedley et al. (2015) found that Australian students from the Global North demonstrated a high level of awareness and understanding of plagiarism. However, research on Croatian (Bašić et al., 2019) and Canadian students (Bokosmaty et al., 2019) unveiled troubling results where these students struggled with adhering to referencing rules and avoiding self-plagiarism.
In contrast, research conducted in countries of the Global South highlighted several factors contributing to insufficient awareness and understanding of plagiarism. These factors include language barriers that encourage greater leniency towards plagiarism (Erguvan, 2022), confusion about referencing rules (Hussein, 2022; Ibegbulam & Eze, 2015), cultural differences that foster greater tolerance of plagiarism behaviours, such as when emphasis is placed on personal relations over copyright rules, and an entrenched belief in the merits of memorisation and imitation (Hu & Lei, 2012; Oghabi et al., 2020). There are also misconceptions about contract cheating, which is the practice of paying others to complete academic work on one’s behalf (Romanowski, 2022). Given the fragmented understanding of plagiarism among higher education students, several scholars have emphasised the need for further guidance, policies, and training workshops to enhance students’ research writing and academic referencing skills (Issrani et al., 2021; Rathore et al., 2015).
While several studies have focused on student perceptions of the use of ChatGPT in their academic work (e.g., Burkhard, 2022; Malik et al., 2023; Ngo, 2023), they primarily examine how students use ChatGPT, its benefits and challenges, and barriers to its use. While Tindle et al.’s (2023) and Burkhard’s (2022) studies explored student views of ethical concerns associated with ChatGPT, they did not specifically focus on what students perceived as plagiaristic behaviours associated with AI use. In view of the increasing popularity of AI and its potential impacts on higher education as discussed earlier, it is important to investigate how students view the use of AI in academic work, how they understand plagiarism in relation to various uses of AI, and whether they view various AI-assisted activities as forms of AI-giarism. Insights gained will have practical implications for promoting responsible use of educational and information technologies, allowing for measures to be taken that assist students in utilising AI responsibly and in ways that benefit their learning, and the continued fostering of academic integrity in higher education.
2.4 The study
The objective of this study is to explore students’ perceptions of adopting generative AI for research and study purposes, and investigate their understanding of traditional plagiarism as well as perceptions of AI-giarism. It is critical to examine their understanding of both forms of plagiarism within the same study for several reasons.
Firstly, doing so will provide a point of reference. Traditional plagiarism is a well-established concept within academia, with a comprehensive body of literature that outlines what constitutes plagiarism and its potential consequences. By analysing students’ understanding of traditional plagiarism, the study provides a reference point from which to understand their comprehension of AI-giarism.
Secondly, this will help identify gaps in students’ understanding and knowledge. Examining students’ perceptions of traditional plagiarism and AI-giarism in tandem allows for a comparison between the two. This comparison can highlight areas where students’ understanding of AI-giarism may fall short or differ significantly from their understanding of traditional plagiarism, offering insights that can direct educational initiatives in closing such gaps.
Furthermore, establishing an understanding of students’ perceptions towards both concepts is integral to evolving policies of academic integrity, helping to shape relevant, robust, and comprehensive policies that appropriately address the evolving landscape of higher education.
Finally, taking this approach will also help shed light on broader questions about academic integrity in the digital age. The ways in which students perceive and differentiate between these two forms of plagiarism can provide insights into their general attitudes towards academic integrity, their conceptualisation of originality, and their understanding of the ethical implications of using AI technologies in academia.
Overall, the integration of both traditional plagiarism and AI-giarism into the same study provides a foundation for addressing these challenges effectively and appropriately, with implications for education, policy, and practice in higher education. The research questions for this study are thus:
1. What are students’ perceptions of plagiarism, both in traditional forms and AI-giarism?
2. How do students’ perceptions of AI-giarism compare and contrast with those of traditional forms of plagiarism?
3. How do students’ perceptions of these concepts reflect their understanding of AI-giarism as a violation of academic integrity?
3 Methodology
3.1 Instrument
An online questionnaire was used to explore the understanding, attitudes, and experiences related to plagiarism and AI-giarism among students in higher education in Hong Kong. Given the exploratory nature of this study, a quantitative approach was taken where the use of a survey allowed us to gain general insights into students’ understanding of AI-giarism in Hong Kong, as well as gather data from a large group of participants.
The survey items designed to measure perceptions of traditional plagiarism were informed by an extensive review of relevant literature on academic integrity and the zero-tolerance policies widely adopted by various universities. This approach ensured that the survey reflected both theoretical insights and practical policy considerations. To ensure transparency and substantiate our methodology, we provide specific references for each survey item, from which the individual items themselves were constructed.
Given the nascent stage of research on AI-giarism at the time of our study, we faced significant challenges in developing a measurement tool that could accurately capture students’ understanding of AI-enabled plagiarism. While existing studies (e.g., Burkhard, 2022; Malik et al., 2023; Ngo, 2023) offered insights into students’ perceptions of using AI tools like ChatGPT for academic purposes, there was a notable gap in literature specifically addressing students’ perceptions of what constitutes plagiarism in the context of AI usage. Recognising the importance of this emerging area, we engaged in discussions with researchers, teachers, and students to construct this part of the survey. Although this section was not validated in the traditional empirical sense due to the novel and emerging nature of the topic, it was piloted among a knowledgeable group to ensure its initial relevance and coherence.
The final questionnaire consisted of three parts. The first included basic demographic information such as gender, university name, discipline and major, year of study, age, and an indication of how often the student used generative AI technologies like ChatGPT, rated on a five-point Likert scale (from Never to Always). The second and third parts contained closed questions on traditional plagiarism and AI-giarism. The closed questions pertaining to traditional plagiarism were rated on a five-point Likert scale (from Strongly disagree to Strongly agree), formulated based on the existing plagiarism policy of a university in Hong Kong and current literature on plagiarism. Seven areas were identified from the aforementioned policy, aligning with the seven items constructed for this part of the questionnaire (Items E1 to E7). Each item was further crosschecked with the literature on plagiarism to ensure that it was well supported by authoritative sources. For example, Item E1. “Copying word for word from a source without due acknowledgement of the source” was supported by the works of Armstrong (1993), Martin (1994), Maurer et al. (2006), and Park (2003). Armstrong (1993) in particular defined plagiarism as “direct verbatim lifting of passages without attribution” (p. 479).
The final part of the questionnaire concerned AI-giarism, though as mentioned, there were no existing questionnaires or validated measures to draw from given its emergent nature as an area of research. The author thus developed the survey questions for this section based on a careful study of university forum discussions among students, researchers, and teachers regarding AI in education, which enabled the creation of an original set of eleven statements that reflect current debates and concerns surrounding the use of AI technologies in academic contexts (Items F1 to F11 in the questionnaire, also rated on a five-point Likert scale from Strongly disagree to Strongly agree); these eleven areas were then verified through a comprehensive review of relevant literature in the fields of AI in education (Crompton & Burke, 2023; Kumar, 2023) and policy formation (Chan, 2023). The sources that informed the construction of the questionnaire items can be found in Appendix 1. It should be noted that the instrument was not validated; this limitation will be acknowledged and discussed in a later section of this paper.
3.2 Participants
Participants were selected using convenience sampling, a method where participants self-select to participate in a study (Stratton, 2021). This method is appropriate when it is impossible to draw a random sample from the population (Andrade, 2021); for this study, it was not feasible to draw a sample from all the university students who have used AI in their academic work; therefore, invitations to participate were sent via mass emails to all the students at a comprehensive university in Hong Kong. To ensure a broad understanding of students’ perceptions of AI-giarism and to gather diverse views on the topic, the target population included both undergraduate and postgraduate students from various disciplines. Before accessing the survey, respondents completed an informed consent form on the online survey platform, ensuring their understanding of the study’s objectives and their rights as participants.
3.3 Data analysis
The data gathered were analysed using descriptive statistics. The means, medians, and standard deviations of the quantitative data for each questionnaire item were computed to shed light on the participants’ experiences, perceptions, and comprehension of plagiarism and AI-giarism in academia.
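The descriptive analysis above can be sketched as follows. This is a minimal illustration using Python’s standard `statistics` module with hypothetical Likert-scale responses (the real dataset contained 393 responses per item); the item labels and values are assumptions for demonstration only.

```python
import statistics

# Hypothetical 5-point Likert responses (1 = Strongly disagree, 5 = Strongly agree)
# for two illustrative survey items; the actual study had 393 responses per item.
responses = {
    "E1": [5, 4, 5, 3, 4, 5, 2, 4],
    "E6": [4, 3, 2, 5, 3, 4, 1, 3],
}

def describe(scores):
    """Compute the descriptive statistics reported in the study."""
    return {
        "mean": round(statistics.mean(scores), 2),
        "median": statistics.median(scores),
        "sd": round(statistics.stdev(scores), 3),  # sample standard deviation
    }

summary = {item: describe(scores) for item, scores in responses.items()}
print(summary)
```

An SD above 1 on a 5-point scale, as reported for every item in the findings, indicates that responses were spread across adjacent scale points rather than clustered on a single answer.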
4 Findings
This study involved a total of 393 students from a university in Hong Kong. There was a near-equal split between genders with 198 males and 195 females. The average age of the students was approximately 22 years (SD = 2.59) with an overall range of 17 to 28 years old.
Regarding participants’ fields of study, the most prevalent discipline was Engineering, with 112 students. Other disciplines included Business (n = 54), Science (n = 58), Arts (n = 61), Architecture (n = 30), Education (n = 30), Social Sciences (n = 26), Law (n = 5), and Medicine and Dentistry (n = 12). There were also five students who did not specify their disciplines.
In terms of the level of study, the largest group was undergraduate students, with a total of 244 participants. Among these, the distribution across the years of study was relatively even: there were 81 freshmen, 51 sophomores, 48 juniors, and 58 seniors. A smaller group of six were in their fifth or sixth year of study. The remainder of participants comprised 111 taught postgraduates and 44 research postgraduates. Out of the 393 students surveyed, the average response to the statement “I have used generative AI technologies (GenAI) like ChatGPT” was 2.27 (SD = 1.65), suggesting that most students have relatively little experience with using such technologies at the time of the study. Table 1 shows the descriptive findings.
4.1 Students’ conceptualisation of plagiarism
Students were asked to rate the extent to which they agreed that the statements presented constitute a form of plagiarism, using a 5-point Likert scale. Across the seven items assessing their understanding of traditional plagiarism, participants generally agreed that the various behaviours described were indeed forms of plagiarism, with mean ratings ranging from 3.59 to 4.13.
The lowest level of agreement was observed for item E6, “Submitting part or all of the same assignment for different courses without acknowledging it” (M = 3.59, SD = 1.232); this behaviour seemed to be perceived as less serious or perhaps less clearly defined as a form of plagiarism, compared to the other items. On the other hand, the strongest level of agreement was for E1, “Copying word for word from a source without due acknowledgement of the source” (M = 4.13, SD = 1.197), signifying that participants largely recognised this scenario as a clear act of plagiarism.
Overall, findings from this section of the survey suggest that students generally understand and can identify instances of traditional plagiarism in the academic context; however, there are discrepancies in how the various behaviours were perceived. The standard deviations for all items were greater than 1, suggesting a moderate degree of diversity in perceptions among students for each item. Such variations could be related to individual differences in students’ understanding of plagiarism or their beliefs regarding what actions constitute a breach of academic integrity, pointing to the need for clearer and more comprehensive education on the topic.
These responses further provide a reference for examining how students’ perceptions of traditional plagiarism extend to the newer concept of AI-giarism.
4.2 Students’ perceptions of AI-giarism
Across items F1 to F11, students used a 5-point Likert scale to rate the extent to which they agreed that a given scenario, involving the use of AI in academic writing, was a case of AI-giarism. A higher score reflected a stronger level of agreement, whereas a moderate score may indicate that students viewed the scenario as a form of academic misconduct but not necessarily AI-giarism, and a score at the lowest end of the scale (with 1 indicating strong disagreement) may suggest that students did not perceive an AI-assisted activity as a violation of academic integrity.
The scenarios in F1 and F2, which represent the most direct use of AI tools to generate content for an assignment, were perceived as the most significant or serious instances of AI-giarism among students, with mean scores of 3.44 and 3.37 respectively. This suggests that students generally understand and respect the principles of academic integrity when it comes to the outright use of AI to generate or paraphrase content.
Scenarios in F3 to F7 describe more specific uses of AI in academic writing such as using it to generate initial ideas or refine the student’s own work. These items had somewhat lower mean scores, ranging from 2.35 to 2.75, suggesting that students do not perceive such scenarios as AI-giarism and may not fully understand the potential ethical implications of such uses of AI in academic writing.
Finally, the scenarios in F8 to F11, which describe the use of AI for more ancillary tasks like checking grammar or searching for resources, were perceived as the least problematic, with mean scores all below 2.35. This suggests that students see these uses of AI as legitimate and permissible for supporting their academic writing, rather than as instances of academic misconduct.
In general, these findings suggest that students have a fair understanding of the ethical implications of using AI in academic writing, recognising the potential for misconduct in certain scenarios while also appreciating the utility of AI as a tool to support their academic work. However, the variability in students’ perceptions, particularly for more nuanced scenarios, also indicates that there is room for further education and discussion about the appropriate use of AI in academic writing. For instance, while students may easily recognise that the direct use of AI tools to generate content for an assignment is a form of AI-giarism, they may not consider the use of AI to generate initial ideas as misconduct. Where the boundaries of academic integrity lie with the added involvement of AI may not be immediately clear to students.
As with the previous section on traditional plagiarism, it is noteworthy that the standard deviations for the AI-giarism items were also all over 1, pointing to a high degree of variability in students’ responses. In other words, their perceptions of what constitutes academic misconduct when using AI and AI-generated content are quite diverse, which may reflect the subjectivity of this issue with differing individual interpretations of academic integrity and the ethical uses of AI.
The results from this survey suggest that students have a stronger awareness of traditional plagiarism compared to AI-giarism. The traditional plagiarism items received higher mean scores overall, with consistently high median scores of 4 (“Agree”) or 5 (“Strongly agree”). On the other hand, only two items on the AI-giarism scale – F1 and F2 – received median scores of 4; all others had a median of 3 or lower. Interestingly, the scenarios in F1 and F2 mirrored items E1 and E2 (copying and paraphrasing content), differing only in that they specifically involved the use of AI. The remaining items in the AI-giarism scale diverged further from those in the traditional plagiarism scale, with more variations in the scenarios and acknowledgement of AI use; these AI-giarism items recorded lower scores overall, indicating that there are indeed differences in students’ perceptions of the two forms of plagiarism.
5 Discussion
The concept of AI-giarism is inherently complex, given that it stands at the intersection of two highly dynamic fields: AI technology and academic integrity. This study set out to investigate students’ perceptions of what constitutes academic misconduct when using AI-generated content in higher education. In this discussion, we will reflect on the findings of the study, drawing on the existing literature on plagiarism, AI in education, and academic integrity to contextualise the findings and explore their implications.
5.1 Students’ comprehension of the two forms of plagiarism
This study found that students had a general understanding of traditional forms of plagiarism. The strongest agreement was observed for actions where content was directly copied from a source without proper attribution, demonstrating students’ clear comprehension of plagiarism in its most fundamental form. Taking into account the context of this study, such findings align with those of Li and Flowerdew (2019) who highlighted the criticisms and condemnation of plagiarism within Chinese culture. This strong awareness of plagiarism among Chinese students also contradicts the belief that plagiarism is often unrecognised or is considered acceptable within Chinese academic norms (Liu & Wu, 2020; Sowden, 2005). However, the variability in students’ perceptions of different forms of plagiarism, as reflected by the standard deviations, points to the need for clarification on the nuances of academic misconduct. Specifically, students were most unsure about subtle scenarios of plagiarism involving acknowledgement of ideas and sources such as in E6,
“Submitting part or all of the same assignment for different courses without acknowledging it,” and E4, “Collusion or unauthorised collaboration between students on a piece of work without acknowledging the assistance received”. This echoes the work of Childers and Bruton (2016) and Gullifer and Tyson (2010), who found varied understandings of plagiarism beyond straightforward examples of verbatim copying. Prior research has also shown that discrepancies in understanding plagiarism can be attributed to cultural differences, academic disciplines, and the lack of clear, universal definitions or standards (Gullifer & Tyson, 2014; Rodrigues et al., 2023). While this study did not delve into the rationales behind students’ perspectives, there is evidently a need for further research and targeted educational interventions to improve students’ understanding of appropriate conduct in academia, especially in nuanced situations. Academic institutions and educators should make a concerted effort to clarify and unify definitions of plagiarism to address this gap.
A striking finding from this research is that higher education students still do not fully comprehend traditional plagiarism rules despite the widespread zero-tolerance policies in universities. This finding appears to affirm Pecorari and Petrić’s (2014) argument against the punitive, problem-oriented approach for addressing and preventing plagiarism. AI-giarism, which is a more complex concept, can cause further confusion for students, as reflected in our findings. This raises a significant concern and underscores the importance of solution-oriented approaches or education-based solutions (Wette, 2010; Zhang, 2024) to plagiarism, where relevant concepts can be introduced early in students’ educational journey, potentially creating cultures of integrity for prevention and intervention (Stephens, 2015).
In addition, this study revealed that students may struggle with concepts such as acknowledgement practices and self-plagiarism. Education on academic conduct should thereby focus on such concepts to help students gain an accurate, clear understanding of both plagiarism and AI-giarism. Previous research has demonstrated that students’ formative years are crucial for instilling ethical behaviours and understanding, thus early education could establish a strong foundation for academic integrity (Stephens, 2019). It is worth noting that the challenges of defining originality and identifying plagiarism are not unique to AI-powered writing tools; these are long-standing issues in academia, and the emergence of AI-giarism only further highlights the need for continued discussion and research.
5.2 Navigating the nuances of AI-giarism
While mean scores for AI-giarism in this study were lower than those for traditional plagiarism, students demonstrated a somewhat fair understanding of AI-giarism, which is noteworthy considering the novelty of this concept. The rise of AI has created ambiguity in understandings of plagiarism (Hutson, 2024), but the findings of this study suggest that students can draw upon existing knowledge of plagiarism to understand the ethical implications of AI and its appropriate uses. Students viewed the outright use of AI tools to generate and copy content as a significant instance of AI-giarism, extending their understanding of traditional plagiarism to the use of AI. While detection tools often cannot identify AI-generated text (Steponenaite & Barakat, 2023), students’ extended understanding, as demonstrated in the current study, is a positive sign and can be leveraged to further educate them on the ethical uses of AI, raising awareness of the importance of staying informed and adapting to new educational and information technologies responsibly.
However, compared to the findings on traditional plagiarism, students were more unsure about subtle misuses of AI tools, suggesting that they struggle with the blurred lines between AI as a tool to support academic writing and as a potential enabler of academic misconduct. Such ambivalence demonstrates that defining plagiarism in the context of AI use is more complex and nuanced compared to traditional plagiarism, echoing Eke’s (2023) and Perkins’ (2023) concerns regarding the complexities of AI use in students’ work and the ethical considerations on what is and is not acceptable. More specifically, while previous studies on the use of AI-powered writing tools only highlighted plagiarism as an ethical concern (e.g., Burkhard, 2022), this study identified specific instances of AI-giarism where students had varying perceptions regarding the appropriateness of AI use. These included instances such as generating ideas, paraphrasing texts, and determining when to acknowledge AI use, with students showing some tolerance for certain scenarios and a lack of consensus on potential improper uses of AI as well. These findings align with concerns raised by academics about the emergence of AI-giarism (Chan & Tsi, 2023; Salvagno et al., 2023). Frye (2022) noted that “the degree of originality in academic writing can vary depending on the discipline, topic, and research question” (p.949), which may be a factor in the variation among students’ responses in this study. Apart from contextual factors, questions such as the extent to which AI can be original and whether human presence is required for claims of authorship complicate understandings of academic integrity. Students with the capability to use AI will likely utilise it to complement their academic work (Francke & Alexander, 2019); it is thus essential to set clear guidelines for the responsible and ethical use of AI tools that consider different disciplines and contexts, in order to avoid potential negative impacts.
Technologies that prevent plagiarism and serve as e-learning platforms already exist (Beaudoin & Avanthey, 2023). Similarly, AI technologies can be utilised to enhance students’ understanding of plagiarism and academic integrity, for example by using AI to create scenarios that educate users on possible instances of misconduct and plagiarism, thereby promoting ethical behaviour.
Finally, results from this study suggest that current regulations and standards on traditional plagiarism may not be sufficient for addressing the challenges posed by AI. Educators and institutions must rethink their approaches to detecting and addressing instances of academic misconduct, and develop measures tailored to specific uses of AI tools. The variations in students’ responses in this study highlight the need for clear and specific guidelines on the use of AI in academic work, consistent with the recommendations from Chan (2023). While APA and MLA now offer guidelines for citing AI use, there are challenges as noted by McAdoo (2023); for instance, the same AI model or dataset can be used by different researchers to generate different outputs, making it hard to trace the origin of specific content. In addition to evolving citation guidelines, there is a need to investigate the challenges and ethical implications of AI in academia, and to update and develop strategies and guidelines for maintaining academic integrity in the face of technological advancements. A more unified understanding and set of standards for the use of AI in academic work is crucial. Educational institutions should consider holding discussions and incorporating practices that encourage AI literacy among students and teachers, including providing clear guidelines on how to properly use and cite AI-generated content (Chan & Colloton, 2024; Kong et al., 2021; Long & Magerko, 2020; Ng et al., 2021).
Defining violations of academic integrity when using AI-generated content in higher education proves challenging due to the rapidly evolving nature of generative AI technologies. While it is difficult to set concrete rules, the broad acceptance and integration of AI into education marks an inevitable shift in what is needed to prepare students for future societal demands (Chan & Lee, 2023; Chan & Zhou, 2023).
6 Implications
This study carries several significant implications, extending far beyond its direct findings, for the future of academia, policy-making, and the integration of AI technology into educational processes.
Education and curriculum development
This study reveals that there is a degree of ambiguity and a lack of understanding among students regarding what constitutes AI-giarism. This points to a pressing need for academic institutions to incorporate AI ethics, particularly addressing issues surrounding AI-giarism, within their curricula. Such education should ideally start at an early stage to ensure that students are well-prepared and informed as they enter higher education and beyond, and to ensure that AI is leveraged effectively and responsibly as an educational and information technology.
Questionnaire instrument as policy and educational guidelines
Our survey instrument, as encapsulated in Fig. 1, carries several implications for educators and policymakers. This study has taken the first step in conceptualising AI-giarism by providing a scale that can be used as a guideline for defining what constitutes AI-related academic misconduct. This scale could be instrumental in assisting educators who struggle with comprehending and explaining the complex dimensions of AI-giarism to their students. It can further serve as a guide to facilitate understanding and discussions regarding AI academic misconduct, as well as to design pedagogy and assessment that adopt AI use, thereby fostering an environment that encourages academic integrity in the age of AI. In fact, with the complexity of AI-giarism as demonstrated within the scale, it may be time for us to rethink the definition of academic misconduct with the added integration of AI in higher education.
Policy-making and guidelines
This study highlights a gap in existing academic integrity policies, which, at present, do not adequately cover AI-giarism. The findings from this study, such as the subtle plagiaristic behaviours in which students struggle to identify misconduct, can inform the development of comprehensive guidelines on the ethical use of AI in academic work. Policymakers should consider the nuanced nature of AI-giarism, and guidelines should be flexible enough to accommodate future advancements in AI as well.
AI tool development and transparency
This study draws attention to the need for greater transparency in the use of educational and information technologies, particularly AI tools. AI developers and providers can play a crucial role in promoting ethical AI use and reducing unintentional AI-giarism by, for example, creating features that enable users to automatically cite AI-generated content or ideas. The utilisation of AI itself in raising awareness of and deterring individuals from plagiarism and misconduct should also be explored. Integrating well-developed AI technologies in education will represent a promising step forward; with the ability to provide students with real-time feedback and guidance, AI has the potential to facilitate ethical writing and citation practices, while also encouraging ethical conduct and behaviour to help foster academic integrity.
To conclude, this study is a significant step towards understanding the complex issue of AI-giarism. Crucially, our research highlights a clear need for academic institutions to take proactive measures and educate students on the ethical implications of AI use in their work. By establishing comprehensive and adaptable policies on AI ethics and academic integrity, we can help prepare students for responsible participation in an increasingly AI-integrated future.
7 Limitations and future research
The study has several limitations. First, there is a lack of representativeness due to the convenience sampling method used. A mixed-methods approach could potentially reveal important insights that are not accessible through the quantitative survey alone. For example, qualitative data could shed light on why students have ambiguous attitudes towards the more nuanced uses of AI in academic writing, or why there is such variability in students’ responses. Individual factors shaping these perspectives, such as disciplinary background and personal experience in using AI tools, could then be further investigated; these added insights could be invaluable for enhancing the effectiveness of educational interventions and policy guidelines developed on AI-giarism. Further work is needed to explore the views and practices of students from different disciplines, levels of study, achievement levels, and cultural contexts. Such work could help inform the design of plagiarism education and prevention programmes that cater for the specific needs of students.
Additionally, due to the emerging nature of AI-giarism and the novelty of this study, the survey instrument underwent a preliminary piloting phase rather than a formal validation process. This limitation underscores the need for future research to establish its reliability and validity. The rapidly evolving nature of AI technology also means that the above findings could quickly become outdated. Furthermore, the study’s focus on students’ perceptions leaves the perspectives of other key stakeholders, such as educators and administrators, unexplored. Future research can explore the perceptions of other stakeholders, particularly in terms of how students’ understandings of AI-giarism compare to those of academics and whether they are in line with institutional-level policies on plagiarism.
8 Conclusions
This study represents a pioneering effort to explore the emerging issue of AI-giarism in the higher education context. The findings have shed light on students’ complex attitudes towards and understandings of both traditional plagiarism and AI-giarism. Although students display an understanding of outright unethical uses of AI in academic writing, their perceptions towards more nuanced uses of AI highlight a need for further investigation, education, and comprehensive policy guidelines.
In concluding this paper, several questions that have emerged from our study and warrant further exploration are highlighted below:
- Will human-machine partnership in academic publications be allowed in the future, without being penalised as academic misconduct?
- What should the standards be for preventing (and if needed, disciplining) AI-giarism? Where do we draw the line?
- Should we acknowledge AI use in full across each and every step of our work, in order to maintain transparency and accountability? How much information and detail is needed?
- Can AI-giarism occur unintentionally and be treated less severely in such cases, or should we continue to have a zero-tolerance policy?
- At what point does human input end and AI take over? Is there a possible partnership?
These questions not only delve into the ethical dimensions of AI use in academia, but also probe the future trajectory of academia in an AI-driven world. Though this study provides only an initial exploration of students’ perceptions of AI-giarism, it also opens up numerous avenues for further research which, along with ongoing dialogue and education, are much needed as AI technologies continue to evolve and become increasingly integrated into educational contexts.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Ahmad, N., Murugesan, S., & Kshetri, N. (2023). Generative artificial intelligence and the education sector. Computer, 56(6), 72–76. https://doi.org/10.1109/MC.2023.3263576
Alshurafat, H., Al Shbail, M. O., Hamdan, A., Al-Dmour, A., & Ensour, W. (2023). Factors affecting accounting students’ misuse of ChatGPT: An application of the fraud triangle theory. Journal of Financial Reporting and Accounting. Advance online publication. https://doi.org/10.1108/JFRA-04-2023-0182
Andrade, C. (2021). The inconvenient truth about convenience and purposive samples. Indian Journal of Psychological Medicine, 43(1), 86–88. https://doi.org/10.1177/0253717620977000
Armstrong, J. D., II (1993). Plagiarism: What is it, whom does it offend, and how does one deal with it? American Journal of Roentgenology, 161(3), 479–484. https://doi.org/10.2214/ajr.161.3.83520
Asamoah, P., Margo, J. S., Owuwu-Bio, M. K., & Zokpe, D. (2024). Bridging the gap: Towards guided plagiarism correction strategies. Education and Information Technologies. https://doi.org/10.1007/s10639-024-12475-8
Bašić, Ž., Kružić, I., Jerković, I., Buljan, I., & Marušić, A. (2019). Attitudes and knowledge about plagiarism among university students: Cross-sectional survey at the University of Split, Croatia. Science and Engineering Ethics, 25(5), 1467–1483. https://doi.org/10.1007/s11948-018-0073-x
Beaudoin, L., & Avanthey, L. (2023). How to help digital-native students to successfully take control of their learning: A return of 8 years of experience on a computer science e-learning platform in higher education. Education and Information Technologies, 28, 5421–5451. https://doi.org/10.1007/s10639-022-11407-8
Bokosmaty, S., Ehrich, J., Eady, M. J., & Bell, K. (2019). Canadian university students’ gendered attitudes toward plagiarism. Journal of Further and Higher Education, 43(2), 276–290. https://doi.org/10.1080/0309877X.2017.1359505
Bretag, T., Mahmud, S., Wallace, M., Walker, R., McGowan, U., East, J., Green, M., Partridge, L., & James, C. (2014). Teach us how to do it properly! An Australian academic integrity student survey. Studies in Higher Education, 39(7), 1150–1169. https://doi.org/10.1080/03075079.2013.777406
Burkhard, M. (2022). Student perceptions of AI-powered writing tools: Towards individualized teaching strategies. In D. G. Sampson, D. Ifenthaler, & P. Isaías (Eds.), 19th International Conference on Cognition and Exploratory Learning in Digital Age CELDA 2022 (pp. 73–81). International Association for Development of the Information Society.
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20, 38. https://doi.org/10.1186/s41239-023-00408-3
Chan, C. K. Y., & Colloton, T. (2024). Generative AI in higher education: The ChatGPT effect. Routledge. https://doi.org/10.4324/9781003459026
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20, 43. https://doi.org/10.1186/s41239-023-00411-8
Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10, 60. https://doi.org/10.1186/s40561-023-00269-3
Chan, C. K. Y., & Tsi, L. H. Y. (2023). The AI revolution in education: Will AI replace or assist teachers in higher education? [Preprint]. arXiv. https://arxiv.org/abs/2305.01185
Chan, C. K. Y., & Zhou, W. (2023). An expectancy value theory (EVT) based instrument for measuring student perceptions of generative AI. Smart Learning Environments, 10, 64. https://doi.org/10.1186/s40561-023-00284-4
Cheung, K. Y. F., Stupple, E. J. N., & Elander, J. (2017). Development and validation of the Student attitudes and beliefs about Authorship Scale: A psychometrically robust measure of authorial identity. Studies in Higher Education, 42(1), 97–114. https://doi.org/10.1080/03075079.2015.1034673
Chien, S. C. (2017). Taiwanese college students’ perceptions of plagiarism: Cultural and educational considerations. Ethics & Behavior, 27(2), 118–139. https://doi.org/10.1080/10508422.2015.1136219
Childers, D., & Bruton, S. (2016). Should it be considered plagiarism? Student perceptions of complex citation issues. Journal of Academic Ethics, 14, 1–17. https://doi.org/10.1007/s10805-015-9250-6
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International. Advance online publication. https://doi.org/10.1080/14703297.2023.2190148
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. https://doi.org/10.1186/s41239-023-00392-8
Dehouche, N. (2021). Plagiarism in the age of massive generative pre-trained transformers (GPT-3). Ethics in Science and Environmental Politics, 21, 17–23. https://doi.org/10.3354/esep00195
Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060. https://doi.org/10.1016/j.jrt.2023.100060
Erguvan, I. D. (2022). An attempt to understand plagiarism in Kuwait through a psychometrically sound instrument. International Journal for Educational Integrity, 18(1), 1–17. https://doi.org/10.1007/s40979-022-00120-1
Francke, E., & Bennett, A. (2019, October). The potential influence of artificial intelligence on plagiarism: A higher education perspective. In European conference on the impact of artificial intelligence and robotics (ECIAIR 2019) (Vol.31, pp.131–140). https://doi.org/10.34190/ECLAIR.19.043
Freedman, M. (1994). The persistence of plagiarism, the riddle of originality. The Virginia Quarterly Review, 70(3), 504–518.
Frye, B. L. (2022). Should using an AI text generator to produce academic writing be plagiarism? Fordham Intell Prop Media & Ent LJ, 33(4), 946–968.
Gendron, Y., Andrew, J., & Cooper, C. (2022). The perils of artificial intelligence in academic publishing. Critical Perspectives on Accounting, 87, 102411. https://doi.org/10.1016/j.cpa.2021.102411
Gullifer, J., & Tyson, G. A. (2010). Exploring university students’ perceptions of plagiarism: A focus group study. Studies in Higher Education, 35(4), 463–481. https://doi.org/10.1080/03075070903096508
Gullifer, J. M., & Tyson, G. A. (2014). Who has read the policy on plagiarism? Unpacking students’ understanding of plagiarism. Studies in Higher Education, 39(7), 1202–1218. https://doi.org/10.1080/03075079.2013.777412
Howard, S. J., Ehrich, J. F., & Walton, R. (2014). Measuring students’ perceptions of plagiarism: Modification and Rasch validation of a plagiarism attitude scale. Journal of Applied Measurement, 15(4), 372–393.
Hu, G., & Lei, J. (2012). Investigating Chinese university students’ knowledge of and attitudes toward plagiarism from an integrated perspective. Language Learning, 62(3), 813–850. https://doi.org/10.1111/j.1467-9922.2011.00650.x
Hussein, M. G. (2022). The awareness of plagiarism among postgraduate students at Taif University and its relationship to certain variables. Cogent Social Sciences, 8(1). https://doi.org/10.1080/23311886.2022.2142357
Hutson, J. (2024). Rethinking plagiarism in the era of generative AI. Journal of Intelligent Communication, 4(1), 20–31. https://doi.org/10.54963/jic.v4i1.220
Ibegbulam, I. J., & Eze, J. U. (2015). Knowledge, perception and attitude of Nigerian students to plagiarism: A case study. IFLA Journal, 41(2), 120–128. https://doi.org/10.1177/0340035215580278
Issrani, R., Alduraywish, A., Prabhu, N., Alam, M. K., Basri, R., Aljohani, F. M., Alolait, M. A. A., Alghamdi, A. Y. A., Alfawzan, M. M. N., & Alruwili, A. H. M (2021). Knowledge and attitude of Saudi students towards plagiarism-A cross-sectional survey study. International Journal of Environmental Research and Public Health, 18(23), 12303. https://doi.org/10.3390/ijerph182312303
Karnalim, O., Toba, H., & Johan, M. C. (2024). Detecting AI assisted submissions in introductory programming via code anomaly. Education and Information Technologies. https://doi.org/10.1007/s10639-024-12520-6
Keleş, A., Ocak, R., Keleş, A., & Gülcü, A. (2009). ZOSMAT: Web-based intelligent tutoring system for teaching-learning process. Expert Systems with Applications, 36(2), 1229–1239. https://doi.org/10.1016/j.eswa.2007.11.064
Kumar, A. H. S. (2023). Analysis of ChatGPT tool to assess the potential of its utility for academic writing in biomedical domain. BEMS Reports, 9(1), 24–30. https://doi.org/10.5530/bems.9.1.5
Leatherman, C. (1999). At Texas A&M, conflicting charges of misconduct tear a programme apart. The Chronicle of Higher Education, 46(11), A18–A21.
Leung, C. H., & Cheng, S. C. L. (2017). An instructional approach to practical solutions for plagiarism. Universal Journal of Educational Research, 5(9), 1646–1652. https://doi.org/10.13189/ujer.2017.050922
Li, Y., & Flowerdew, J. (2019). What really is the relationship between plagiarism and culture? Some thoughts from the Chinese context. In D. Pecorari, & P. Shaw (Eds.), Student plagiarism in higher education: Reflections on teaching practice (pp. 140–156). Routledge.
Liebrenz, M., Schleifer, R., Buadze, A., Bhugra, D., & Smith, A. (2023). Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. The Lancet Digital Health, 5(3), e105–e106. https://doi.org/10.1016/S2589-7500(23)00019-5
Liu, M., & Wu, Y. (2020). Chinese undergraduate EFL learners’ perceptions of Plagiarism and use of citations in course papers. Cogent Education, 7(1), 1–14. https://doi.org/10.1080/2331186X.2020.1855769
Löfström, E., Trotman, T., Furnari, M., & Shephard, K. (2015). Who teaches academic integrity and how do they teach it? Higher Education, 69, 435–448. https://doi.org/10.1007/s10734-014-9784-3
Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727
Louis, K. S., Anderson, M. S., & Rosenberg, L. (1995). Academic misconduct and values: The department’s influence. The Review of Higher Education, 18(4), 393–422. https://doi.org/10.1353/rhe.1995.0007
Mahmud, S., Bretag, T., & Foltýnek, T. (2019). Students’ perceptions of plagiarism policy in higher education: A comparison of the United Kingdom, Czechia, Poland and Romania. Journal of Academic Ethics, 17, 271–289. https://doi.org/10.1007/s10805-018-9319-0
Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., Darwis, A., & Marzuki (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. International Journal of Educational Research Open, 5, 100296. https://doi.org/10.1016/j.ijedro.2023.100296
Martin, B. (1994). Plagiarism: A misplaced emphasis. Journal of Information Ethics, 3(2), 36–47.
Maurer, H., Kappe, F., & Zaka, B. (2006). Plagiarism – a survey. Journal of Universal Computer Science, 12(8), 1050–1084.
Mavrinac, M., Brumini, G., Bilić-Zulle, L., & Petrovecki, M. (2010). Construction and validation of attitudes toward plagiarism questionnaire. Croatian Medical Journal, 51(3), 195–201. https://doi.org/10.3325/cmj.2010.51.195
McAdoo, T. (2023, April 7). How to cite ChatGPT. https://apastyle.apa.org/blog/how-to-cite-chatgpt
McKnight, L. (2021). Electric sheep? Humans, robots, artificial intelligence, and the future of writing. Changing English, 28(4), 442–455. https://doi.org/10.1080/1358684X.2021.1941768
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning, 18(17), 4–19. https://doi.org/10.3991/ijet.v18i17.39019
Obeid, R., & Hill, D. B. (2017). An intervention designed to reduce plagiarism in a research methods classroom. Teaching of Psychology, 44(2), 155–159. https://doi.org/10.1177/009862831769
Oghabi, M., Pourdana, N., & Ghaemi, F. (2020). Developing and validating a sociocultural plagiarism questionnaire for assessing English academic writing of Iranian scholars. Applied Research on English Language, 9(2), 277–302. https://doi.org/10.22108/are.2019.118587.1485
OpenAI (2022, November 30). Introducing ChatGPT. https://openai.com/blog/chatgpt
Park, C. (2003). In other (people’s) words: Plagiarism by university students–literature and lessons. Assessment and Evaluation in Higher Education, 28(5), 471–488. https://doi.org/10.1080/02602930301677
Pecorari, D., & Petrić, B. (2014). Plagiarism in second-language writing. Language Teaching, 47(3), 269–302. https://doi.org/10.1017/S0261444814000056
Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2), Article 7. https://doi.org/10.53761/1.20.02.07
Rathore, F. A., Waqas, A., Zia, A. M., Mavrinac, M., & Farooq, F. (2015). Exploring the attitudes of medical faculty members and students in Pakistan towards plagiarism: A cross sectional survey. PeerJ, 3, e1031. https://doi.org/10.7717/peerj.1031
Rodrigues, F., Gupta, P., Khan, A. P., Chatterjee, T., Sandhu, N. K., & Gupta, L. (2023). The cultural context of plagiarism and research misconduct in the Asian region. Journal of Korean Medical Science, 38(12), e88. https://doi.org/10.3346/jkms.2023.38.e88
Romanowski, M. H. (2022). Preservice teachers’ perception of plagiarism: A case from a college of education. Journal of Academic Ethics, 20(3), 289–309. https://doi.org/10.1007/s10805-021-09395-4
Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Can artificial intelligence help for scientific writing? Critical Care, 27(1), 75. https://doi.org/10.1186/s13054-023-04380-2
Smedley, A., Crawford, T., & Cloete, L. (2015). An intervention aimed at reducing plagiarism in undergraduate nursing students. Nurse Education in Practice, 15(3), 168–173. https://doi.org/10.1016/j.nepr.2014.12.003
Sowden, C. (2005). Plagiarism and the culture of multilingual students in higher education abroad. ELT Journal, 59(3), 226–233. https://doi.org/10.1093/elt/cci042
Stefanovič, P., Pliuskuvienė, B., Radvilaitė, U., & Ramanauskaitė, S. (2024). Machine learning model for ChatGPT usage detection in students' answers to open-ended questions: Case of Lithuanian language. Education and Information Technologies. https://doi.org/10.1007/s10639-024-12589-z
Stephens, J. M. (2015). Creating cultures of integrity: A multi-level intervention model for promoting academic honesty. In T. A. Bretag (Ed.), Handbook of academic integrity (pp. 995–1001). Springer.
Stephens, J. M. (2019). Natural and normal, but unethical and evitable: The epidemic of academic dishonesty and how we end it. Change: The Magazine of Higher Learning, 51(4), 8–17. https://doi.org/10.1080/00091383.2019.1618140
Steponenaite, A., & Barakat, B. (2023, July). Plagiarism in AI empowered world. In International Conference on Human-Computer Interaction (pp. 434–442). Springer Nature Switzerland.
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613(7945), 620–621. https://doi.org/10.1038/d41586-023-00107-z
Stratton, S. J. (2021). Population research: Convenience sampling strategies. Prehospital and Disaster Medicine, 36(4), 373–374. https://doi.org/10.1017/S1049023X21000649
Sutton, A., Taylor, D., & Johnston, C. (2014). A model for exploring student understandings of plagiarism. Journal of Further and Higher Education, 38(1), 129–146. https://doi.org/10.1080/0309877X.2012.706807
The University of Hong Kong (2023). Understanding plagiarism. https://tl.hku.hk/plagiarism/understanding-plagiarism
Tindle, R., Pozzebon, K., Willis, R., & Moustafa, A. A. (2023). Academic misconduct and generative artificial intelligence: University students' intentions, usage, and perceptions. PsyArXiv Preprints. https://doi.org/10.31234/osf.io/hwkgu
Tsai, Y. S., Perrotta, C., & Gašević, D. (2020). Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45(4), 554–567. https://doi.org/10.1080/02602938.2019.1676396
Wager, E. (2014). Defining and responding to plagiarism. Learned Publishing, 27, 33–42. https://doi.org/10.1087/20140105
Wette, R. (2010). Evaluating student learning in a university-level EAP unit on writing using sources. Journal of Second Language Writing, 19(3), 158–177. https://doi.org/10.1016/j.jslw.2010.06.002
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16, 39. https://doi.org/10.1186/s41239-019-0171-0
Zhang, Y. (2024). Understanding-oriented pedagogy to strengthen plagiarism-free academic writing: Findings from studies in China. Springer.
Funding
This study was supported by UGC FITE funding.
Ethics declarations
Ethical approval
The study was approved by the Human Research Ethics Committee (HREC) at the University of Hong Kong (Ethical Approval No. EA230079).
Informed consent
Informed consent was obtained from all the participants involved in the study.
Conflict of interest
The author declares no potential conflict of interest.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Chan, C.K.Y. Students’ perceptions of ‘AI-giarism’: investigating changes in understandings of academic misconduct. Educ Inf Technol 30, 8087–8108 (2025). https://doi.org/10.1007/s10639-024-13151-7