We are used to thinking that copyright applies only to creations of the human mind. But images created with the help of AI aren’t entirely beyond its reach. So what exactly does the law say?
Art created by artificial intelligence (AI) has exploded into the mainstream, thanks to a range of platforms and apps such as OpenAI’s DALL·E 2, Stability AI’s Stable Diffusion, and Prisma Labs’ Lensa AI.
Through a combination of machine learning, written prompts, and user-uploaded images, anyone can now quickly generate countless pictures and imitate particular artistic styles, from photorealism to illustrations and cartoons.
The question of how this flurry of AI-enabled digital art impacts image owners has not gone unanswered, with both individual creators and companies taking the matter to the courts.
One group of artists and illustrators has already filed a class-action complaint against Midjourney, DeviantArt, and Stability AI. Media giant Getty Images, meanwhile, has sued the latter for copyright violations and unfair competition.
How does artificial intelligence generate artwork?
Most generative AI art models are trained on existing images and image-text pairs scraped from the internet, using machine learning to build associations between that data and a user’s prompt in order to create new content.
For example, OpenAI’s DALL·E 2 was trained on “hundreds of millions of captioned images from the internet” while Stability AI’s Stable Diffusion was trained on 2.3 billion images.
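The training side of these systems is largely opaque to outsiders, but the user-facing side – turning a written prompt into a finished image – is easy to reproduce with open-source tooling. Below is a minimal sketch using Stability AI’s publicly released Stable Diffusion weights through the Hugging Face diffusers library; the checkpoint name and prompt are illustrative assumptions, not a description of any particular product.

```python
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers

# Load publicly released Stable Diffusion weights (illustrative checkpoint name).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a consumer GPU is enough for single images

# The written prompt steers the associations the model learned during training.
prompt = "a photorealistic portrait of a lighthouse keeper, oil painting style"
image = pipe(prompt).images[0]
image.save("generated.png")
```

That a single sentence and a few lines of code can imitate a recognizable style is exactly what fuels the disputes that follow.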
But the technology has advanced faster than the protections for the art it relies on, triggering a wave of copyright disputes.
Some datasets, for example, have been found to include copyrighted images. And while the exact mechanisms by which individual images are processed and weighted are unclear to most, research has shown that image-generating models can reproduce the data on which they were trained.
Although OpenAI has sought to mitigate what it calls image regurgitation by removing large numbers of visually similar images from its training data, this deduplication does not protect copyrighted images.
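OpenAI has not published the details of its deduplication pipeline, but the general idea of removing visually similar images can be sketched with off-the-shelf perceptual hashing. The snippet below is an illustration only, assuming a local folder of images and the open-source imagehash library; it is not OpenAI’s actual method.

```python
from pathlib import Path
from PIL import Image
import imagehash  # pip install imagehash

# Hypothetical folder of training images (illustrative only).
image_dir = Path("training_images")

seen_hashes = []
kept, dropped = [], []

for path in sorted(image_dir.glob("*.jpg")):
    # Perceptual hash: visually similar images produce nearby hash values.
    h = imagehash.phash(Image.open(path))
    # Treat anything within a small Hamming distance as a near-duplicate.
    if any(h - prev <= 6 for prev in seen_hashes):
        dropped.append(path.name)
    else:
        seen_hashes.append(h)
        kept.append(path.name)

print(f"kept {len(kept)} images, dropped {len(dropped)} near-duplicates")
```

Production pipelines operate at a vastly larger scale and typically rely on learned embeddings rather than simple hashes, but the principle – drop images that are too close to ones already kept – is the same.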
Nor does deduplication stop users from reproducing a particular style – as artists such as Hollie Mengert found out – ultimately putting creators at risk of having their work devalued or losing commissions.
Copyright implications of AI-generated artwork: What you need to know
Each country will establish its own viewpoint as time goes on. But, at present, the UK is one of a handful of countries that protects computer-generated and AI-assisted works.
The law states that any work expressing “original human creativity” benefits from copyright protection, provided a sufficient degree of skill, labor, and creative judgment went into making it.
For AI-assisted work, these criteria could cover AI-based features built into a camera: provided the photographer’s creativity is still evident, the photo will still be protected as an artistic work.
Similarly, the US protects the fruits of intellectual labor as long as they are both original and fixed in tangible form; ideas cannot be copyrighted until they have taken some kind of shape or form.
However, recent legal cases demonstrate that the US Copyright Office deems only works of human authorship worthy of protection – although what constitutes “human authorship” is not always clear-cut.
For example, computer scientist Stephen Thaler repeatedly sought copyright protection for AI-generated artwork (and patent protection for AI-generated inventions), and every application was refused.
On the other hand, the copyright registration for Kristina Kashtanova’s comic book Zarya of the Dawn, which featured artwork generated with Midjourney, was initially granted, then rescinded, and finally reissued to cover the storyline and characters – but not the individual images.
In Kashtanova’s case, the contention lay in defining what counts as “substantial human involvement.” Although the case has been brought to a close, the blurry line between human creation and authorship remains a site of conflict.
What constitutes fair use for AI-generated art?
Copyright exceptions are permitted in both the US and the UK. In many cases, a person or party is allowed to use a copyrighted work without the owner’s permission, as long as this use is limited to specific purposes such as news reporting, comment, criticism, research, scholarship, or teaching.
Still, there are no hard and fast rules when it comes to fair use, with situations usually considered on a case-by-case basis.
Whether a particular use qualifies depends on the purpose and character of the use, the nature of the copyrighted work, how much of the work is used, and the effect on the market for or value of the original.
The UK government held a public consultation on copyright related to text and data mining (TDM) for AI between October 2021 and January 2022. It concluded that TDM constitutes a copyright infringement unless a legal copyright exception or permission is granted for its use. This exception already exists: TDM is permitted if it is limited to use for non-commercial research or educational purposes.
However, the UK government has indicated it might extend this exception to include TDM for any purpose, including commercial use. In that event, rights holders’ content would still be protected by measures such as requirements for lawful access, meaning they would still be able to choose the platforms on which their works are made available and to charge for access.
The US Copyright Office and the US Patent and Trademark Office also hosted a consultation on copyright law and machine learning in the age of AI in October 2021.
At present, human authorship is an essential requirement for copyright protection, but Director Shira Perlmutter expects cases to become more complex as time goes on. The US Copyright Office recently announced that addressing these legal gray areas would be a priority going into 2023.
Cases arguing over the extent of “transformative” use are not new. In Graham v. Prince, where photographer Donald Graham sued artist Richard Prince for using an Instagram screenshot of one of his photos, Prince’s motion to dismiss the complaint was denied because the reproduced work did not fundamentally change the underlying “composition, presentation, scale, color palette, and media” of the photo.
Given the unprecedented scale of current technology, the extent of data scraping, and the opacity of AI algorithms, such disputes are only likely to become more common.
Ethical considerations when AI tools meet the world of art
As with any new technology that has the potential to become embedded in our daily lives, there are ethical points to consider. Within the art world, these considerations range from the previously mentioned issues of authorship and ownership to questions of bias and discrimination.
According to an independent analysis of 12 million images from Stable Diffusion’s dataset (LAION-5B), 47% were sourced from only 100 domains, with the largest number of these taken from Pinterest.
While the sample analyzed accounts for only 0.5% of the 2.3 billion images the model was first trained on, and 2% of the 600 million images used to train the three most recent checkpoints, the analysis revealed several notable patterns.
In general, user-generated content platforms, such as WordPress-hosted blogs, SmugMug, Blogspot, and Flickr, made up a huge proportion of the image data, as did shopping sites and stock image sites.
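An analysis like this can be approximated by anyone, since LAION distributes its index as metadata files listing each image’s source URL rather than the images themselves. A rough sketch, assuming a locally downloaded metadata shard readable by pandas with a url column (the exact file name and schema are assumptions):

```python
from urllib.parse import urlparse
import pandas as pd

# Hypothetical local copy of one LAION metadata shard (parquet format).
df = pd.read_parquet("laion_metadata_shard.parquet")

# Column name assumed; adjust to the shard's actual schema.
domains = df["url"].dropna().map(lambda u: urlparse(u).netloc)
counts = domains.value_counts()

top_100_share = counts.head(100).sum() / counts.sum()
print(counts.head(10))
print(f"Top 100 domains account for {top_100_share:.1%} of sampled images")
```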
Many artists use online platforms like these to promote their work and connect with others. Visibility is a fundamental part of exposure, business, and sales, so artists hoping to protect their intellectual property by keeping their work off social media or behind a paywall may limit reach and networking opportunities.
The study also revealed that some of the images used are copyright protected – and it’s for this reason Getty Images is suing Stability AI. (Incidentally, the stock image company had previously banned the use of AI-generated images from its platform due to copyright concerns.)
On top of that, Sydney-based artist Kim Leutwyler discovered through Have I Been Trained – a search engine that lets people check whether their work appears in training datasets – that “almost every portrait” she had ever shared on the internet had been used to train popular AI models.
Another artist used the site to discover their private medical records in LAION-5B, raising the question of how much other personal data might have been included in these large-scale datasets.
Exploring the intersection of copyright and AI-generated art
The absence of a clearly defined legal framework, coupled with opaque data collection and processing methods, leaves many questions open to ethical interpretation.
An artistic style cannot be granted copyright protection. But style sets individual creators apart – especially considering that it takes years to develop a craft and hone a skill.
The open-source nature and affordability of many models make it easy for everyday users to experiment with digital images, which complicates matters further.
When a specific artist’s style is used as a prompt, the results can cause financial, professional, and reputational damage: search results may become cluttered with AI illustrations that clash with that artist’s brand or values. Such automated generation also undercuts the practice, passion, and purpose of artists who have cultivated their abilities for years.
There is a degree of creativity and originality in the selection of images, the formulation of text prompts, and the post-production and editing that lead to an AI-generated image. Certainly, the creation of new art depends on existing art; as creators develop, they draw on inspiration, reformulating and transforming it into something new.
However, feeding images into an AI model blurs the line between inspiration and appropriation. The same process might be happening in principle, but without the time, learned skill, and unique voice of the artist.
In the future, cases may be decided by weighing the contribution of each component (the original art, the user, the training data, the AI), but it’s easy to see how scraping someone else’s work to generate something “new” and then claiming authorship (and copyright) over it can constitute an ethical violation.
Pros and cons: Do the benefits of AI-generated art outweigh the pitfalls?
Andrey Usoltsev, founder of Prisma Labs, the company behind Lensa AI, believes this “democratization of access” is a breakthrough, promising the company would focus on steering “the use of such technology in a safe and ethical way.”
While it’s true that the tool may be helpful for visualizing screenplay or novel scenes, or for generating reference images, the lack of privacy or compensation inherent to such models must be considered.
As we have seen, the safety and ethics of these tools are already contested. There is also the privacy angle to consider: activists have long argued against the non-consensual use of personal data by large social platforms, and now people are effectively training AI models for free, using both their own and other people’s data.
Artists and users currently have little legal recourse and few means to restrict how their images and information are used in AI models. The outcome of a number of ongoing lawsuits, however, may provide more insight into how such cases will proceed.
Where exactly this all goes from here is unclear. But one thing is certain: as technology, AI, intellectual property, and copyright laws continue to intersect in increasingly complex ways, individuals, tech companies, and publishers alike will have to pay closer attention to the data and images they use.