The ChatGPT AI chatbot produces content that may appear to be created by a human. There are many proposed uses for the technology, but its extraordinary capabilities raise important questions about content ownership.
UK law has a definition for computer-generated works. Under the Copyright, Designs and Patents Act 1988, a computer-generated work is one “generated by computer in circumstances such that there is no human author of the work”. The law suggests that content generated by an artificial intelligence (AI) can be copyrighted. However, the original sources of responses generated by AI chatbots can be difficult to trace and may include copyrighted works.
The first question is whether ChatGPT should be allowed to use original content generated by third parties to generate its own responses. The second is whether only humans can be credited as authors of AI-generated content or whether AI itself can be considered an author, particularly when the output is creative.
Let’s tackle question one. The technology behind ChatGPT is known as a large language model (LLM). To improve its performance, it is trained on large datasets, including vast numbers of websites and books.
But OpenAI, the company behind ChatGPT, says it is up to users to ensure that how they use that content doesn’t violate any laws. Its terms and conditions are also subject to change, so they do not carry the stability and force of a legal right such as copyright.
The only solution will be to clarify laws and policies. Otherwise, each organization will have to take legal action individually, with the aim of proving ownership of the works used by an AI. Furthermore, if governments do not take action, we are approaching a situation where all copyrighted materials will be used by others without the original author’s consent.
Now let’s move on to question two: who can claim copyright on AI-generated content. In the absence of a claim from the owner of the original content used to generate a response, it is possible that the copyright of a chatbot’s output could rest with the individual users or the companies that developed the artificial intelligence.
Copyright law is based on the general principle that only content created by human beings can be protected. The algorithms behind ChatGPT were developed at OpenAI, so the company would appear to hold copyright protection over those. But this may not extend to chatbot responses.
There is another option regarding ownership of AI-generated content: the AI itself. UK law would currently prevent an AI from owning copyright (or even being acknowledged as the creator), as it is not a human being and therefore cannot be treated as an author or owner under the Copyright, Designs and Patents Act. This position is also unlikely to change any time soon, given the UK government’s response to its AI consultation.
Under the same Act, if a work of literature, drama, music or art is created by an employee in the course of their employment, the employer is the first owner of any copyright in the work, unless otherwise agreed.
For now, policy makers are sticking to human creativity as the prism through which copyright is granted. However, as AI develops and is able to do more, policy makers might consider granting legal capacity to AIs themselves. This would represent a fundamental change in how copyright law operates and a revisiting of who (or what) can be classified as author and copyright owner.
Such a change would have implications for businesses as companies integrate AI into their products and services. Microsoft recently announced that it will incorporate its Copilot product – based on ChatGPT – into the company’s software, such as Word, PowerPoint and Excel. Copilot can help users with written communications and summarize large volumes of data.
More developments like this are sure to follow, and early adopters have a chance to capitalize on the current situation, using AI to increase the efficiency of their operations. Companies can often gain an advantage when they are the first to introduce a product or service to a market, a situation called “first mover advantage”.
The UK government recently conducted a consultation on AI and copyright. Two contrasting views emerged. The tech sector believes that the copyright of AI-generated content should belong to users, while the creative sector wants such content to remain outside copyright protection altogether. The UK government did not act on the findings and instead recommended further stakeholder consultations.
If copyright law moves away from its focus on human agency in the future, one could imagine a scenario where an AI is classified as the author and the developers of that AI as the owners of the output. This could create a situation where a handful of powerful AI companies wield colossal influence.
They could end up owning hundreds of thousands of copyrighted works: songs, texts, images and other digital assets. This could lead to a dystopian situation where most newly created works are AI-generated and owned by businesses.
It seems logical that such knowledge should remain in the public domain. Perhaps the solution is for each person or company to declare their contribution when using AI, or for their contribution to be automatically calculated by the software. As a result, they get credits or financial benefits based on the amount of work they contributed.
AI content that is itself based on copyrighted materials remains problematic. Denying AI systems access to copyrighted materials could compromise their ability to respond to end-user requests. But if content is to be based on protected works, perhaps we should embrace a new era of open innovation in which intellectual property rights carry less weight.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The authors do not work for, consult with, own stock in, or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.