
A tangled web of AI, obscured ownership, and questionable ethics

Bellingcat Uncovers Deepfake Debacle: Clothoff and the Dubious ASU Label

Last updated:

Mackenzie Ferguson

Edited By

Mackenzie Ferguson

AI Tools Researcher & Implementation Consultant

Bellingcat's latest investigation reveals unsettling connections between Clothoff, an AI app that enables non-consensual deepfake pornography, and ASU Label, an organization of questionable legitimacy. Clothoff's claimed donations to ASU Label contrast sharply with the harm its deepfakes perpetuate, while ASU Label appears to be a phantom charity offering no real support to victims. With Clothoff's ownership shrouded in mystery, the ethical concerns and potential legal ramifications continue to mount.


Introduction to Clothoff and ASU Label

The investigation into the app Clothoff and the organization ASU Label sheds light on concerning practices at the intersection of AI and digital ethics. Clothoff, an AI-powered platform, has gained notoriety for enabling the creation of non-consensual deepfake pornography. Despite the severe ethical implications and potential legal issues associated with its services, Clothoff continues to operate under a cloud of secrecy. Its links to ASU Label, a dubious organization that claims to support victims of AI harm, have only deepened suspicion about its operations and intentions. The pairing of Clothoff's facilitation of harmful content with ASU Label's questionable practices underscores a significant challenge in regulating AI technology and protecting human rights in the digital age.

ASU Label, which purports to aid victims of AI-related harm, is under scrutiny for its lack of transparency and verifiable contributions to victim support. Bellingcat's investigation found that the organization has no identifiable leadership, no presence in non-profit registries, and no legal documentation, raising doubts about its authenticity and purpose. Such hollow claims of support, coupled with the organization's dubious partnership with Clothoff, point to a deeper problem in the industry, where entities exploiting vulnerable individuals operate without accountability. The situation underscores the pressing need for robust regulatory frameworks that protect against AI exploitation and provide genuine support to those affected.


To further complicate the situation, Clothoff obscures its ownership through an intricate web of shell companies, effectively shielding itself from accountability. The company's silence on ethical issues is troubling: it dismisses any concerns about non-consensual uses of its technology, framing its platform as an avenue for consensual adult exploration. This dismissive attitude not only highlights a stark lack of responsibility but also reflects a worrying trend in which technology progresses faster than the ethical standards and legal regulations meant to govern its use. Legal systems worldwide must adapt quickly to these technological challenges to prevent misuse and secure justice for victims.

The broader implications of such a landscape are significant. The unchecked growth of capabilities like Clothoff's enables the proliferation of non-consensual explicit content, threatening to normalize violations of privacy and consent. Meanwhile, entities like ASU Label exploit these violations under the guise of support without delivering actual assistance, breeding skepticism toward legitimate victim-support initiatives. These dynamics reflect the urgent need for international cooperation on enforcement mechanisms capable of addressing the unique challenges posed by AI and deepfake technologies, both to protect individual rights and to maintain trust in digital and media environments.

Connection Between Clothoff and ASU Label

The connection between the Clothoff AI application and ASU Label reveals a troubling intersection of questionable ethics and potential fraud in artificial intelligence. Clothoff, known for producing non-consensual deepfake pornography, claims to support ASU Label through donations to the organization, which ostensibly works to aid victims of AI-related harms. On closer examination, however, ASU Label's legitimacy is highly suspect, as Bellingcat's investigation highlights. The organization lacks verifiable leadership and legal documentation, casting doubt on whether its stated mission to protect rights in the digital age is anything more than a façade.

Clothoff’s operations are shrouded in secrecy, with ownership obscured through a network of shell companies. This lack of transparency extends to its public persona, where ethical concerns about the non-consensual nature of the content it enables are dismissed under the guise of technological progress. The connection to ASU Label, which collaborates with Clothoff despite its dubious standing, adds another layer of deception and potential exploitation. The investigation suggests that ASU Label may be more involved in perpetuating AI harm than in mitigating it, raising alarms about its true intentions in the field of AI ethics and victim support.


Despite Clothoff's claims of supporting ASU Label, there is no tangible evidence of real victim-support activity. The partnership between these two entities showcases a complex web of deception that thrives in the ethical grey areas of current AI technology. The persistent opacity around Clothoff's true business operations and ASU Label's supposed endeavors highlights the need for more stringent regulation and oversight in the digital space to protect victims and uphold ethical standards. These revelations not only underscore the intricacies of AI-related fraud but also serve as a cautionary tale about how technology can be misused when left unchecked.

Legality of Clothoff's Operations

Clothoff's operations sit within a complex landscape of legal ambiguities and potential infractions, primarily due to the creation and distribution of non-consensual deepfake pornography. While Clothoff publicly positions itself as contributing to communities through donations to ASU Label, investigations such as Bellingcat's unveil an alignment with ethically questionable practices. ASU Label, purportedly an organization created to assist victims of AI harm, has been unable to substantiate its legitimacy, lacking verifiable leadership and transparent operations. Clothoff, meanwhile, continues its activities under a veil of secrecy, using a network of shell companies to obscure ownership and raising significant concerns about the legality of its operations across jurisdictions.

Clothoff’s business model operates in a legal grey area, often skirting the edges of legality. Its creation of non-consensual sexually explicit content is potentially illegal in many jurisdictions. Despite the controversial nature of its products, Clothoff attempts to justify its services as a platform for consensual exploration. Yet this facade is contradicted by its dismissal of ethical concerns, revealing a deeper indifference to the harm its operations cause. Regulatory bodies may find it difficult to hold Clothoff accountable given its complex ownership structure and evasive practices.

Moreover, Clothoff's alliance with ASU Label raises further legal and ethical challenges. ASU Label's own legitimacy is in question: it is absent from non-profit registries and provides no evidence of actual victim-support activity, casting doubt on its declarations. Alliances with such questionable organizations raise alarms about potential fraud and misrepresentation, compounding the suspicion that Clothoff's operations may not merely skirt legal boundaries but blatantly disregard them. This interconnected web of dubious entities suggests a calculated effort to exploit gaps in law and oversight to sustain business operations without facing substantial legal repercussions.

Fake Ownership and Ethics of Clothoff

The recent investigation into Clothoff has uncovered unsettling practices that call the company's ethical stance into question. As Bellingcat's reporting highlights, Clothoff claims to donate to organizations like ASU Label, which supposedly supports victims of AI-related harm. However, the gesture seems more a veneer of responsibility than a genuine effort at ethical business conduct. In reality, Clothoff continues to foster an environment in which non-consensual deepfake pornography thrives, contradicting its claims of ethical awareness. This raises significant ethical questions, especially as Clothoff operates under secretive ownership structures and conceals its true business intentions through a web of shell companies, making it difficult to hold the company accountable for its actions [1](https://www.bellingcat.com/news/2025/02/21/clothoff-ai-deepfakes-asu-label/).

While Clothoff maintains that it merely offers a platform for consensual adult exploration, the ethical implications of facilitating the creation and distribution of non-consensual explicit content cannot be overlooked. Despite framing its service as technological progress, the moral responsibility for user-generated content remains Clothoff's to bear. The company's dismissive attitude toward these concerns reinforces the impression that profit trumps ethics in its operational ethos. Clothoff's ongoing operation in a legal grey area further complicates the picture, as it may be operating illegally in various jurisdictions. The anonymity of its ownership and the dubious nature of affiliations like that with ASU Label further muddy the waters, making it hard to discern the true intentions behind its proclaimed ethical commitments [1](https://www.bellingcat.com/news/2025/02/21/clothoff-ai-deepfakes-asu-label/).


Related Current Events on AI Deepfake Pornography

The recent Bellingcat investigation into Clothoff has shed light on the troubling connections between artificial intelligence and the creation of non-consensual deepfake pornography. Clothoff, an AI app, has come under scrutiny for producing explicit content without individuals' consent, leveraging advanced algorithms to create disturbingly realistic deepfakes. Despite Clothoff's claims of donating to ASU Label, a purported non-profit championing the rights of AI harm victims, the legitimacy of this partnership is questionable. ASU Label lacks verifiable leadership and legal documentation, raising suspicions of fraudulent activity.

In recent developments, various platforms are grappling with the challenge of moderating AI-generated explicit content. Meta's Oversight Board, for instance, has pushed for revising content moderation policies to explicitly address non-consensual AI-generated imagery. The move was sparked by an investigation revealing the widespread availability of sexualized deepfakes on social media platforms like Facebook. The recommendations call for comprehensive guidelines that directly address the nuances of AI-related content creation and dissemination.

In the UK, a government study released in 2025 outlined the impending surge in AI-generated explicit content, emphasizing the urgent need for a regulatory framework. The study predicts a significant increase in the circulation of deepfake images as the technology advances, making clear that existing legal measures may be insufficient to combat this new wave of digital manipulation.

A particularly alarming development was the crisis on the X platform, which resulted in a temporary halt of all Taylor Swift-related searches. The measure was taken to curb the rapid spread of AI-generated explicit imagery involving the celebrity, highlighting the broader content-moderation challenges social media companies face today. The incident underscores the difficulty of containing inappropriate AI-generated content before it becomes widely accessible.

Emerging technological solutions are beginning to address the escalating issue of deepfake pornography. The launch of the Reality Defender platform represents a significant step forward, offering tools designed to detect and analyze AI-generated media. By focusing on identifying deepfake content, initiatives like this are crucial to preventing the misuse of artificial intelligence technologies for unethical purposes.

Public Concerns and Future Implications

The growing concerns surrounding Clothoff and its association with ASU Label highlight significant public apprehension about the intersection of technology and personal privacy. Clothoff, by enabling the creation of non-consensual deepfake pornography, operates in a legal grey area that challenges current regulatory frameworks. Public anxiety is fueled by the platform's dismissive attitude toward ethical criticism and its secretive ownership, which further exacerbates trust issues within digital environments. Additionally, ASU Label's dubious claims of supporting AI harm victims without tangible evidence raise questions about the authenticity and motivations of organizations purporting to offer victim support. As public awareness increases, these concerns underline the urgent need for clearer legal guidelines and for verification of organizations involved in AI ethics and victim support [1](https://www.bellingcat.com/news/2025/02/21/clothoff-ai-deepfakes-asu-label/).


Looking to the future, the implications of non-consensual deepfake pornography are far-reaching and profoundly troubling. Economically, the costs associated with victim support, potential lawsuits, and damage to reputations and brands could soar. The global financial impact of fake and misleading digital content already sits at an astonishing $78 billion annually, a figure likely to rise as the technology becomes more sophisticated. Societal trust in online interactions could deteriorate further, affecting everything from personal relationships to financial transactions. Legal systems worldwide may be pressured to evolve, perhaps drawing inspiration from initiatives like the Take It Down Act, though enforcing such laws across jurisdictions remains an ongoing challenge. Moreover, the mental health ramifications for victims of such deepfakes could escalate, given the pervasive nature of digital harassment and the social stigma often attached to victims, particularly women. It is crucial that future discourse and regulation in AI not only protect individuals but also hold organizations claiming to aid victims, like ASU Label, to rigorous standards of accountability [1](https://www.bellingcat.com/news/2025/02/21/clothoff-ai-deepfakes-asu-label/).

