AI refers to a computer’s ability to emulate human intelligence and thought. When people refer to AI, they often mean Generative AI (GenAI): a computer’s ability to create new content from synthesized data. GenAI was a novel concept that took the world by storm a little over two years ago, when OpenAI released ChatGPT to the public in November 2022. Since ChatGPT’s release, GenAI has undoubtedly changed not only the tech industry, but also the foundations of the economic market and society’s perceptions of what computers are capable of and can be responsible for (Furze, 2023). GenAI has made its way into so many aspects of our lives that it is becoming increasingly difficult to find an industry or field that is not implementing it in its daily processes and methodologies.
The rise of GenAI has led to the development of several ethical dilemmas, particularly in academia, regarding reliance and ownership. While most would agree that using GenAI to write an assignment is academically dishonest, there is considerable debate over whether and to what extent its use constitutes cheating, as it does not fit traditional definitions of plagiarism (Furze, 2023). As these debates emerged, educators scrambled to identify AI-generated work, leading to the development of detection tools like GPTZero and ZeroGPT. However, this created an entirely new ethical dilemma: recent studies have shown that these tools are unreliable and often produce false accusations of cheating that disproportionately target non-native English speakers and neurodivergent students (Perkins et al., 2024).
With GenAI detection unreliable, we are left with the original dilemmas: How can educators ensure that students develop the critical thinking skills necessary for real-world success if they cannot reliably prevent or detect GenAI content? And if GenAI generates the content, can it truly be considered the student’s work? Together, these two challenges highlight the need for a more nuanced approach to GenAI in education — one that considers both authorship and skill development while addressing the realities of an increasingly GenAI-integrated landscape.
First, let’s explore the issue of student reliance on GenAI. Reliance on GenAI during formative academic years is a problem because students who outsource their cognitive effort to GenAI risk becoming overly dependent on it, potentially failing to develop the critical analytical skills needed to recognize misinformation, synthesize complex ideas, or form independent arguments (Singh, 2024). This is a critical issue in education that could have long-lasting ramifications if not properly addressed by educators.
For many universities — and increasingly, grade-school educators — the solution to this complicated issue lies not in the prohibition of GenAI, but in permitting it under negotiated conditions. The AI Assessment Scale (AIAS) (Perkins et al., 2024) is a proposed educational framework that offers guidelines on when and how to implement GenAI use in the classroom. It outlines contexts where strict no-GenAI policies are appropriate — such as oral debates, where GenAI content would undermine critical thinking — as well as environments where GenAI can be encouraged — such as real-time feedback loops in which students iteratively refine their work based on GenAI suggestions. The reasoning behind this framework is that educators currently have no reliable way of detecting GenAI use, and an outright ban would more likely encourage dishonesty, pushing students to conceal their use of GenAI. Additionally, such a prohibition could disproportionately impact less fortunate students, as wealthier students may have access to more advanced GenAI tools capable of bypassing anti-GenAI policies (Perkins et al., 2024). Whether one approves of GenAI or not, its current influence is undeniable, and it’s likely that skills associated with GenAI will become increasingly necessary as it is implemented across more and more sectors.
The goal of the AIAS is to create a balanced negotiation between educators, students, and administrators, ensuring that GenAI is integrated in ways that enhance learning while also defining contexts where its use is pedagogically inappropriate. By having this conversation, all parties can align their expectations and prevent the age-old reactionary dismissal of new technologies in educational spaces, encouraging thoughtful integration rather than outright resistance and fostering a balance between GenAI’s potential harms and benefits.
Having established that, through careful negotiation between educators and students, GenAI can be a powerful tool for enhancing learning, a pressing question remains: If GenAI generates an end product, did the student truly create it? Or does ownership belong to the AI itself — or even to the creators whose data was used to train it? Copyright infringement and intellectual property (IP) rights violations are among the most contentious issues surrounding GenAI’s development and use.
To build a GenAI model, an immense amount of data, far beyond what any individual could manually process, is required to build its foundational understanding of the world. In an ideal scenario, all training data would come from public domain sources, ensuring ethical and legal compliance. In practice, however, this is often not the case. Many GenAI models, including OpenAI’s ChatGPT, are trained on vast datasets such as Common Crawl, a collection of roughly 12 years’ worth of scraped internet pages, along with other publicly available sources (Brown et al., 2020). While the dataset itself is publicly accessible, much of the content within it is not in the public domain, raising the question: Is GenAI inherently an infringement of copyright and a violation of IP?
From the perspective of creators, their work was scraped and used to train AI models without consent, enabling GenAI to generate outputs that can be computationally similar to their original works. Many see this as a direct violation of their IP rights, leading to a wave of class-action lawsuits filed against GenAI companies (Milton, Enright, & Kim, 2025). On the other hand, GenAI defenders argue that there is no true originality in any creative work: just as human artists and writers are inspired by existing media, so is GenAI. Because GenAI does not replicate existing works but rather learns patterns and generates new content through statistical synthesis, they contend, its use qualifies as fair use.
This raises yet another ethical dilemma: Is there a meaningful distinction between a human creator drawing inspiration from prior works and a GenAI model trained on vast amounts of copyrighted data without permission?
Unlike the AIAS, which offers a structured approach to GenAI use in education, there is no definitive legal or ethical framework governing GenAI content ownership. Many lawsuits remain unresolved, leaving the issue of creator rights in limbo. This uncertainty presents a major challenge for educators: How can they teach students to engage with GenAI responsibly, and minimize the potential harms of IP infringement, when so much uncertainty surrounds what even constitutes IP infringement in the context of GenAI (as evidenced by the numerous ethical dilemmas raised simply in reaching this question)?
While any of the previously discussed ethical dilemmas could be its own essay, this essay will address only the final question posed: the challenge for educators. Thankfully, there is an entire field dedicated to this incredibly complicated topic, even if US law has not yet reached a definitive conclusion: AI literacy. AI literacy refers to the set of skills needed to effectively recognize, evaluate, and use AI systems (including GenAI) in a safe and ethically responsible way (Lee et al., 2024). As educational institutions increasingly implement GenAI tools into their curricula (as suggested by the AIAS), there is a growing need for AI literacy courses to accompany them, similar to the digital literacy courses that emerged following the popularization of computers and the internet. While increasing AI literacy does not directly resolve the question of whether a student owns AI-generated content, it equips students with the knowledge to minimize the risk of their work falling into an ownership gray area. A well-informed student understands that responsible AI use involves active engagement — refining, restructuring, and integrating GenAI assistance rather than passively allowing GenAI to generate their entire work.
By giving students a foundation in AI literacy and teaching them the proper ways to effectively use GenAI to augment their work, educators can prepare students to enter an increasingly GenAI-driven workforce while mitigating the numerous ethical implications associated with it. The reality is that these ethical concerns will never be fully resolved as long as GenAI exists, and it’s unlikely that ChatGPT — or any other GenAI model — will suddenly disappear, taking its ethical dilemmas with it. However, by adopting a utilitarian approach, we can strive to minimize harm by equipping people with the skills and knowledge necessary to navigate and mitigate these challenges as effectively as possible.
Works Cited:
Furze, L. (2023, January 26). Teaching AI Ethics. Leon Furze. https://leonfurze.com/2023/01/26/teaching-ai-ethics/
Milton, D., Enright, H., & Kim, J. (2025). Case Tracker: Artificial Intelligence, Copyrights and Class Actions. Baker & Hostetler Law. https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/
Singh, R. (2024, September 24). Generative AI Can Harm Learning. Teaching Times. https://www.teachingtimes.com/generative-ai-can-harm-learning/
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., & Amodei, D. (2020). Language Models are Few-Shot Learners (No. arXiv:2005.14165). arXiv. https://doi.org/10.48550/arXiv.2005.14165
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(06). https://doi.org/10.53761/q3azde36
Lee, K., Mills, K., Ruiz, P., Coenraad, M., Fusco, J., Roschelle, J., & Weisgrau, J. (2024, June 18). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. Digital Promise. https://digitalpromise.org/2024/06/18/ai-literacy-a-framework-to-understand-evaluate-and-use-emerging-technology/