Updated Nov 4
Studio Ghibli and More Demand OpenAI Stop Using Copyrighted Content for AI

Studio Ghibli, Bandai Namco, Square Enix, and other Japanese entertainment giants have called on OpenAI to halt the use of their copyrighted materials in its AI video tool, Sora 2. These companies, through the Content Overseas Distribution Association (CODA), claim that OpenAI's use of their content constitutes copyright infringement under Japanese law. The controversy highlights ongoing global concerns about AI and copyright.

Introduction: The Sora 2 Controversy

The launch and subsequent reception of OpenAI's generative AI tool, Sora 2, have ignited significant controversy within the Japanese entertainment industry. The tool, released on October 1, 2025, allows users to automatically generate short video clips, a feature that quickly found favor among enthusiasts keen on creating content reminiscent of popular Japanese franchises such as Pokémon and Dragon Ball. However, this immediate appeal ushered in a wave of apprehension from major Japanese companies, including Studio Ghibli and Bandai Namco, who are concerned over the unauthorized use of their intellectual property in training Sora 2. According to Variety, this conflict underscores a crucial cultural and legal battleground regarding copyright infringement and AI's expanding influence in media creation.
The controversy surrounding Sora 2 centers on accusations led by the Content Overseas Distribution Association (CODA), a coalition representing several top-tier entertainment companies in Japan. CODA's central claim is that OpenAI used copyrighted anime, gaming, and manga content to train Sora 2 without obtaining the necessary permissions, a move they argue is incompatible with Japanese copyright law. That law emphasizes the need for prior permission, a stance that contrasts sharply with OpenAI's opt-out system, which allows rights holders to remove their content from AI training datasets only after the fact. Variety highlights CODA's position that the existing opt-out mechanism is insufficient and fails to uphold the stringent protection norms that underpin Japan's creative industries.
The implications of this dispute are profound, not just within Japan but globally, as generative AI tools continue to reshape content creation. OpenAI's CEO, Sam Altman, has acknowledged the allure and marketability of Japanese IP for Sora 2's user base but has yet to outline a comprehensive plan addressing CODA's demands. The call from CODA and its member companies for an immediate halt to the use of their content underscores the heightened sensitivity around copyright issues, especially as AI tools make it ever easier to replicate popular cultural products. This situation, as chronicled by Variety, is emblematic of broader concerns in the AI community regarding intellectual property rights and ethical AI deployment.

The Role of CODA in Protecting Japanese IP

The Content Overseas Distribution Association (CODA) plays a pivotal role in safeguarding Japanese intellectual property (IP) against unauthorized use, especially amid rapidly advancing technologies such as artificial intelligence. With influential members like Studio Ghibli, Bandai Namco, and Square Enix, CODA is a formidable force in the global entertainment landscape. Established to combat piracy and promote lawful distribution of Japanese content worldwide, CODA's recent confrontation with OpenAI highlights its commitment to protecting the creative rights of its members. According to this report, these companies have collectively demanded that OpenAI cease using their copyrighted content to train its generative AI video tool, Sora 2.
The CODA versus OpenAI case underscores the broader challenges that arise when traditional intellectual property law intersects with new technology. CODA argues that OpenAI's current opt-out system is insufficient under Japanese copyright law, which requires explicit permission for the use of protected content. This demand not only reinforces CODA's protective stance over Japanese cultural products but also opens a critical dialogue about how global AI companies handle copyrighted materials. In an age where AI-generated content is becoming increasingly sophisticated, organizations like CODA are essential in advocating for stringent IP regulations and ensuring creators' works are used ethically and legally. Such actions reaffirm the value of cultural IP and the need for robust frameworks to address emerging digital issues, as discussed in this article.
CODA's insistence on stricter compliance measures speaks volumes about the respect for authorship inherent in the Japanese entertainment industry. While AI presents opportunities for creativity and innovation, organizations like CODA bring to light the risks of copyright infringement in training AI models. This vigilance is crucial not only for protecting current assets but also for setting precedents in copyright law as AI continues to evolve. The international ripple effect of CODA's actions could lead to a reevaluation of IP laws globally, particularly in how they apply to AI, as detailed in recent reports.
The pressure CODA has exerted on OpenAI represents a larger, global demand for AI developers to treat IP with a respect that matches their legal obligations. CODA's actions convey a strong message: while technological advancement is welcome, it must not compromise the rights and revenues of the content creators who have long defined the global entertainment landscape. As indicated in this Variety report, the outcome of this dispute may well influence international policy, nudging other countries to adopt similar frameworks and ensuring that innovations like Sora 2 respect prevailing IP laws and cultural sensitivities.

Legal Concerns: Japanese Copyright Law vs. AI Training

The intersection of Japanese copyright law and AI training presents a complex legal landscape, particularly for global AI entities like OpenAI. In Japan, copyright infringement is a serious allegation that can lead to significant legal consequences. Japanese law traditionally emphasizes the protection of creators' rights, mandating that any use of copyrighted material, especially for commercial purposes like AI training, requires explicit permission from the rights holders. This approach is evident in the ongoing dispute between OpenAI and major Japanese entertainment companies, which argue that OpenAI's generative AI video tool, Sora 2, violates these principles by using copyrighted content from popular IPs such as Pokémon and Demon Slayer without authorization. The dispute highlights the significant legal risks involved when AI systems use copyrighted content without the requisite permissions, underscoring a potential conflict between AI innovation and strict copyright compliance. More details can be found in this article.
The legal argument put forth by Japan's Content Overseas Distribution Association (CODA) contrasts sharply with OpenAI's practice. CODA asserts that Japanese copyright law requires acquiring prior consent before using any material for AI model training, rather than relying on the opt-out provisions some firms have adopted. This standpoint is backed by the culturally ingrained respect for intellectual property in Japan, making it imperative for AI tools that build on such datasets to first secure licenses. The copyright law's stringent requirements are fundamental in ensuring that creators maintain control over their works and receive due recognition and compensation for their intellectual contributions. This legal framework is not just about safeguarding economic interests but also about preserving the creative spirit that drives the Japanese content industry. For further insights, see Variety.
Unlike more permissive frameworks in other jurisdictions, where opt-out models might suffice, Japan's approach to copyright in the context of AI development signals potential shifts in how AI technologies must adapt to local legal landscapes. The debate centers on whether international AI companies can conform to Japan's strict copyright laws without stifling innovation. If OpenAI complies with CODA's demands to cease using copyrighted Japanese content for AI training, it might set a precedent that reverberates throughout the global AI industry, demanding a reevaluation of how AI models are trained across borders. Exploring these themes further can be critical for understanding the future direction of AI and IP law, as evidenced by this ongoing legal conflict. More information is available at PC Gamer.
The legal concerns surrounding AI training in Japan reflect broader international challenges as intellectual property law grapples with technological advancement. This unfolding situation suggests that global companies may increasingly face localized legal requirements that demand more than compliance with one nation's standards. It raises pivotal questions about whether AI tools can operate legally across various countries while respecting each nation's unique legal traditions. This case between OpenAI and Japanese firms may ultimately lead to new global copyright policies tailored to accommodate the transformative impact of AI, requiring countries to update their legal frameworks to keep pace with AI developments. Further developments in this case could influence international legal discourse, as emphasized by ongoing talks and demands from Japanese industry leaders. Additional context can be found in Nintendo Life.

Public Reactions and Cultural Sentiments

The conflict between major Japanese entertainment companies and OpenAI over the use of copyrighted Japanese content in training its AI video tool Sora 2 has sparked significant public reaction, divided between two main sentiments: support for copyright protection and fascination with technological innovation. Numerous fans of Japanese content, especially those on platforms like Twitter and Reddit, exhibit strong support for companies like Studio Ghibli and Square Enix. They argue that such AI tools undermine these works' value by stripping creators of control and possibly diluting their artistic integrity. On Japanese platforms such as 2channel and Mixi, discussions often emphasize a cultural respect for creators' rights, echoing CODA's push for stricter enforcement of copyright legislation against OpenAI's opt-out system, according to this source.

Global Implications for Generative AI and IP Rights

The emergence of generative AI and its growing application across creative industries pose significant challenges and opportunities at the intersection of technology and intellectual property (IP) rights. This dynamic was starkly highlighted when the Content Overseas Distribution Association (CODA), representing heavyweight Japanese entertainment entities like Studio Ghibli, Bandai Namco, and Square Enix, voiced its concerns about OpenAI's new tool, Sora 2. CODA's grievance underscores the complexities that arise when proprietary content is used without explicit consent, potentially breaching copyright law and impacting IP-derived revenues, as reported here.
CODA's reaction reflects a broader global discourse on how generative AI models are trained on vast datasets, some of which include copyrighted content. Such practices have sparked debates over whether existing legal frameworks can protect intellectual property rights while still fostering innovation in AI technologies. The demand that OpenAI halt its use of Japanese copyrighted works illustrates the need for AI developers to adopt more robust consent mechanisms, as the opt-out systems OpenAI currently employs have been criticized for not aligning with Japanese legal stipulations, as noted in this report.
This dispute does more than highlight legal tensions; it also emphasizes the cultural nuances of protecting intellectual property. In Japan, strong cultural significance is attached to the rights of creators and artists, often reflected in strict IP protection laws. OpenAI's failure to secure the necessary permissions thus reflects a significant cultural misalignment that could influence how other countries interpret IP rights in the age of AI, as discussed here.
On a global scale, this case underlines the urgent need for international harmonization of copyright law concerning AI. As generative AI models continue to advance and cross borders, consistent international guidelines will be critical to mitigating conflicts and ensuring that both innovation and IP rights are respected. The current lack of such standards leaves countries and companies in a grey area, with legal actions likely to increase until clearer regulations are established, as observed in current industry dialogues.

Future Prospects: Legal and Industry Responses

The legal and industry response to OpenAI's use of copyrighted Japanese content via its Sora 2 video tool highlights a critical flashpoint in the ongoing conversation about AI ethics and copyright law. Japanese companies such as Studio Ghibli, Bandai Namco, and Square Enix, represented by the Content Overseas Distribution Association (CODA), have formally demanded that OpenAI cease using their intellectual property without explicit consent. The situation underscores the inadequacy of OpenAI's current opt-out system under Japanese law, which mandates prior permission for the use of copyrighted materials (source).
As the generative AI landscape evolves, industry responses are increasingly shaped by this high-profile legal standoff. Many in the tech community predict that other jurisdictions might follow Japan's lead in pushing for stricter IP controls. The tension between facilitating AI innovation and safeguarding creator rights is being tested by global multimedia companies, which see AI's potential but demand a framework that respects existing intellectual property laws (source). This might result in a wave of new legal standards and an emphasis on opt-in systems for AI training data.
The industry's response to the OpenAI issue also foreshadows potential shifts in how global IP law is applied, with repercussions for how creative content is used in future AI models. While CODA's demand that OpenAI stop training on its members' content points toward a potential legal battleground, it also invites broader industry engagement in drafting mutually beneficial solutions. The stakes are high, with policy experts noting this could set a precedent affecting international AI regulation, compelling nations to standardize their approaches to intellectual property protection in AI contexts (source).
While OpenAI has yet to publicly commit to CODA's demands, the industry is closely monitoring the company's next steps. The global response may dictate the speed of legislative adjustment and the technological adaptations AI companies will need to make. A collaborative approach, mediated through legal and industry negotiations, could not only defuse existing tensions but also establish a new era of AI development that respects intellectual property while fostering innovation (source).
Furthermore, the challenges OpenAI faces in addressing the concerns raised by Japanese companies showcase a broader industry struggle to align AI system development with diverse legal landscapes. This discord is emblematic of the need for robust international dialogue and policy formulation, suggesting that future AI tools may need to be designed for compliance flexibility across borders. The industry anticipates that finding this equilibrium could result not only in improved international relations but also in more ethically guided AI innovations that respect varied intellectual property rights (source).

Conclusion: Balancing Innovation and Protection

The expanding capabilities of generative AI tools such as OpenAI's Sora 2 present a promising horizon for technological innovation, yet they also bring significant challenges in safeguarding intellectual property rights. As major Japanese entertainment companies like Studio Ghibli and Bandai Namco pursue action against OpenAI, the need to balance innovation with protection becomes increasingly apparent. According to recent reports, the dispute centers on the unauthorized use of copyrighted content, highlighting gaps in current AI training practices that must be restructured to respect content owners' rights.
This controversy underscores a global dilemma in the technology industry: how to foster innovation without infringing on intellectual property. The push by Japanese companies to halt OpenAI's use of their content for AI training marks a pivotal moment in exploring this balance. It is not just a matter of compliance but of shaping a future where creators' livelihoods and AI innovation coexist. The struggle for this balance is echoed in public sentiment, where there is strong support for tighter regulation of copyrighted material, ensuring that the cultural and economic value tied to creative works is preserved without stifling the progression of AI technology.
Ultimately, achieving this balance requires a collaborative approach. As highlighted in the discussion, stakeholders must work together to craft policies that embrace technological advancement while safeguarding intellectual property rights. This includes creating more robust frameworks in which AI companies and content creators can collaborate, leading to innovative licensing agreements and the development of synthetic datasets. By addressing these issues collaboratively, the industry can devise solutions that protect creators and allow AI technologies to flourish within legal boundaries.
