OpenAI, Studio Ghibli and Copyright Infringement: A Closer Look

On March 25th, 2025, OpenAI launched 4o image generation for ChatGPT, which users quickly found could generate images in the distinctive style of Studio Ghibli. The feature took the world by storm, but it also sparked intense backlash from people who argued that it undermines the creative integrity of Studio Ghibli’s work and raises copyright infringement issues.
OpenAI has generally taken a conservative approach to replicating the work of living artists, outright refusing prompts that request such images – most likely to avoid copyright infringement claims. This makes the decision to allow users to generate Ghibli-style images all the more puzzling. If OpenAI acknowledges the legal risks of replicating a living artist’s work, why does that concern not extend to Studio Ghibli’s distinctive aesthetic? Would they not still face liability for both the images generated and the copyrighted material used to train their AI?
While the Ghibli controversy has brought these questions into public focus, they are far from new. The clash between AI and copyright law has been building for some time now, and despite a growing number of lawsuits and some judicial guidance, a definitive legal framework has yet to emerge. Here’s what we know so far.
Can A Style Be Copyrighted?
The first major point here is that artistic ‘styles’ are not given copyright protection. While copyright law (in both the USA and Japan) safeguards original works of authorship, such as specific illustrations, films, and character designs, it does not extend to the underlying artistic techniques or aesthetics that define a studio’s unique style.
In Studio Ghibli’s case, its distinctive hand-drawn animation, soft watercolour-like backgrounds, and whimsical fantasy elements are integral to its artistic identity, but they are not legally protected. As a result, if the OpenAI image generator produces output in Studio Ghibli’s style and aesthetic, it may be doing so by drawing inspiration from the studio’s work without directly infringing on any copyrighted material.
Fair Use and the Training Data Dilemma
A major legal consideration in this discussion is whether the material used to train the image generator was used with authorisation or under license, especially with respect to Studio Ghibli’s copyrighted works. This issue ties into the multitude of copyright lawsuits filed in recent years, especially against OpenAI. For example, in New York Times v. OpenAI, the New York Times alleged that OpenAI had used millions of its articles (even those behind paywalls) without permission to train its GPT models, and that this amounted to copyright infringement.
Such cases often turn on the application of the ‘fair use’ doctrine to determine whether AI companies are liable for copyright infringement. The fair use doctrine permits the use of copyrighted material for specific purposes, such as academic research, or where the material is transformed into something new. The fair use provision of US copyright law (17 U.S. Code § 107) outlines four factors for courts to evaluate in deciding whether fair use can be claimed: (i) the purpose and character of the use, (ii) the nature of the copyrighted work, (iii) the amount and substantiality of the portion used, and (iv) the effect of the use on the market. Following the New York Times case, OpenAI has maintained that it believes training its models qualifies as fair use under copyright law.
No Legal Clarity
Although this legal issue is quickly becoming a flashpoint between tech companies and creators worldwide, and the lawsuits against OpenAI have become so numerous that the company has moved to consolidate them, we are still far from a definitive legal doctrine.
In the widely cited case of Authors Guild, Inc. v. Google, Inc., the court found that Google’s large-scale copying of books for its online library and algorithmic search engine qualified as fair use due to its transformative nature and minimal market impact. This case played a significant role in establishing the argument that copying is considered “permissible intermediary copying” if it is in furtherance of a transformative, informational use.
Moreover, in late March 2025, a federal judge ruled in favour of Anthropic, deciding that the music publishers seeking an injunction against the AI company (for allegedly copying song lyrics to train its LLM “Claude”) had not demonstrated sufficient business harm.
However, notable movement in the other direction came in Thomson Reuters v. Ross Intelligence, where the court rejected the argument that scraping copyrighted material for AI training was permissible as intermediary copying, reinforcing the idea that mass reproduction of works for training (even if later put to a potentially transformative purpose) can be infringing. The court also ruled that the fair use doctrine would not hold up if AI-generated content was so similar to copyrighted content that it could compete with, and cause commercial harm to, the copyright holder.
Taken together, these and numerous other cases show that courts are still grappling with the different aspects of AI training and copyright law. It seems that until clearer precedent emerges, fair use in the context of AI will continue to be assessed on a highly fact-specific basis.
Is the Output Safe?
Another major legal issue here is whether the output generated (in this case, the Ghibli-style image) by an AI system violates copyright law. For the fair use defence to apply, the outputs must be sufficiently transformative, meaning they must add new expression, meaning, or purpose to the original copyrighted materials. Merely reproducing or closely imitating protected content will not meet this standard.
A few recent U.S. Supreme Court decisions have helped clarify the boundaries of what counts as transformative use. For instance, in Google LLC v. Oracle America, Inc., Oracle sued Google for copying approximately 11,500 lines of Java API code to develop its Android operating system.
The court ruled in favour of Google, holding that the use was transformative since Google's reimplementation of the code served a different purpose, promoted interoperability, and enabled innovation in a new computing environment. Importantly, the court affirmed that transformative use isn't limited to artistic or expressive works; it can also apply to functional works such as software (and, by extension, AI systems) when they are used in a way that creates something fundamentally new.
Conversely, in Andy Warhol Foundation v. Goldsmith, the court found that Warhol’s artwork depicting an altered image of the musician Prince, derived from a photograph by Lynn Goldsmith, was not transformative. Although Warhol added visual changes, the court emphasized that both the original and the derivative image were used for the same purpose, i.e. commercial magazine publication. Although the case did not directly involve AI, the court clarified that simply adding new expression or style is not enough to render a work transformative if it serves a similar function in the same market, a point later echoed in Thomson Reuters v. Ross.
This means that in any legal proceedings between Studio Ghibli and OpenAI, the court will determine whether OpenAI is liable for copyright infringement based on whether the AI-generated images serve a new and distinct purpose or whether they merely repackage Ghibli’s protected works in a way that competes with the original.
The Legal Position For Now
Ultimately, based on our reading of recent case law: if an AI-generated image closely resembles a copyrighted work (without being transformative) and was produced using training data containing copyrighted materials without permission or license (and does not fall under fair use), it could be considered an unauthorized derivative work, and copyright holders would at least have a foundation for legal action. On the other hand, if AI-generated content merely draws inspiration from a style without replicating specific protected elements such as characters or compositions (and the four fair use factors weigh in its favour), it may not count as copyright infringement.
In the years to come, we will most likely get further clarity on these issues as certain major lawsuits make their way through the courts. For example, in Kadrey v. Meta, authors including Sarah Silverman, Richard Kadrey, and Christopher Golden filed a putative class action against Meta, alleging that its AI model “LLaMA” was trained on datasets containing their copyrighted books without consent.
The court’s eventual ruling on whether such training and the resulting outputs of LLaMA amount to direct copyright infringement will be critical, much as Reuters v. Ross is now helping shape early judicial thinking on AI use of copyrighted material.
Studio Ghibli’s case against OpenAI, sadly, is not helped by the fact that Japanese laws enable developers to train AI models using copyright-protected materials. Therefore, any legal challenge with respect to unauthorised training would have to be brought in the USA, where all the legal considerations discussed above would come into play.
Conclusion
Of course, all this legal hand-wringing may soon come to an abrupt and ignoble end under the Trump administration, which has recently included a provision in its federal budget bill to ban states from introducing their own AI regulations for 10 years.
Moreover, OpenAI and Google are already lobbying the US Government aggressively to officially classify AI training on copyrighted data as fair use, leaning on the reliable “China is eating our lunch” narrative. If they succeed, they will have won the AI v. copyright war, and the day of the human creator will effectively be over.
For more reading on AI and copyright, you can visit our website and read our articles about AI and Ownership, major recent copyright lawsuits, OpenAI’s numerous run-ins with copyright law, and much more.