Updated Jan 13
It's Not AI: The Real Faces Behind Deepfake Porn

Unmasking the human perpetrators behind non-consensual deepfakes.


A new Yahoo News article reveals that the majority of deepfake pornography is not driven by sophisticated AI but by men using accessible face‑swapping tools. With over 98% of non‑consensual deepfake content created using simple software, the story shifts the blame from generative AI to its actual human creators. The piece calls for a reevaluation of the AI panic and urges a focus on the platforms and individuals responsible.

Introduction: Misconceptions About AI and Deepfake Pornography

The intersection of artificial intelligence (AI) and digital media has brought about profound changes, particularly in the creation and distribution of deepfake pornography. One major misconception is that advanced generative AI models like those from OpenAI or Midjourney are predominantly responsible for the rise of non‑consensual deepfake pornography. However, this is not the case. As highlighted in a Yahoo News article, the vast majority of deepfake porn content is created using simple, accessible face‑swapping tools operated by individual men. This revelation challenges the narrative that sophisticated AI is at the core of this troubling trend.
The perception that advanced AI is driving the widespread creation of deepfake pornography can lead to misguided efforts to combat the problem. Instead of focusing solely on the technology, it is crucial to acknowledge the role of the human actors who use older, simpler algorithms to produce non-consensual content. The same article details how basic tools such as DeepFaceLab and Faceswap, built on algorithms dating back to 2017–2019, are responsible for most of these creations. These tools require neither extensive technical know-how nor advanced hardware, making them accessible to nearly anyone with a computer. This democratization of technology highlights the broader societal issue of how digital tools are misused to violate privacy and consent.
The implications of these misconceptions extend beyond the technological focus; they shape legal and regulatory measures as well. By incorrectly identifying AI as the main culprit, efforts to legislate against deepfake pornography may miss their mark by failing to target the users who abuse simple technologies. This understanding is vital for shaping laws and regulations that address the real sources of this content. The societal stakes are also significant, given the enormous emotional distress and reputational harm faced by victims, who are predominantly women. Addressing both the technological and human elements is crucial for a comprehensive approach to fighting this evolving digital crime.

Prevalence and Scale of Deepfake Pornography

The prevalence and scale of deepfake pornography have grown significantly, capturing public and regulatory attention due to its predominantly non-consensual nature. As highlighted in the article, over 90% of deepfake pornography targets women, with hundreds of thousands of videos available online. Sites like MrDeepFakes host millions of clips, demonstrating the sheer volume of the issue. Despite popular belief, less than 2% of these videos are made using advanced artificial intelligence tools. Instead, they are primarily created with older, basic software such as DeepFaceLab and Faceswap, accessible to virtually anyone without deep technical expertise.

Tools Used in Creating Deepfake Pornography

The tools primarily used for creating deepfake pornography are surprisingly basic, yet alarmingly effective. Most deepfakes are generated with face‑swapping technology rather than advanced AI. Programs like DeepFaceLab, Faceswap, and Roop have become the go‑to resources for creating these non‑consensual videos. These tools, developed between 2017 and 2019, are designed to be user‑friendly, requiring no specialized coding skills and running on standard consumer‑grade hardware. This accessibility has fueled a surge in the production of deepfake pornography, with the vast majority generated through these simplistic yet potent tools.
The ease of access and simplicity of these tools have significantly contributed to the prevalence of deepfake pornography. Tools like Roop and DeepFaceLive are available for free on platforms like GitHub, allowing users to create deepfakes in astonishingly little time, often under 30 minutes. Many of these programs require nothing more than uploading a target face photo and choosing an existing pornographic video, making the process disturbingly quick even on devices without a dedicated GPU. The proliferation of online tutorials further lowers the barrier, making the technology accessible to a wider audience eager to exploit it for personal or malicious reasons.

Understanding the Culprits Behind Deepfakes

Deepfakes have emerged as a concerning technological phenomenon, especially in pornographic content. While many attribute the creation of such deepfakes to sophisticated generative AI, the primary culprits are often individual men using basic face‑swapping tools. According to the Yahoo News article, over 98% of deepfake pornography is produced with simple software tools that have been available since 2017–2019, rather than advanced AI models from companies like OpenAI or Midjourney. This challenges the common narrative that powerful AI is predominantly responsible for this invasion of privacy.

Limitations of AI in Deepfake Creation

Despite the advances in artificial intelligence, the creation of deepfakes still largely relies on simpler, more accessible tools. According to a Yahoo News report, only a small fraction of non‑consensual deepfake pornography is created using advanced AI like OpenAI's models or Midjourney. Instead, over 98% of such content is generated using older, less sophisticated face‑swapping technologies that don't require extensive technical skills or high‑powered computing resources. These basic tools, developed between 2017 and 2019, allow individuals to create deepfakes with little more than a personal computer and an internet connection, making them far more accessible and therefore more widely used than cutting‑edge AI applications.

Regulatory Gaps and Platform Failures

The evolving landscape of deepfake pornography highlights significant regulatory gaps and platform failures that continue to exacerbate the problem. Despite advances in generative AI, the vast majority of deepfake pornography is produced using rudimentary face‑swapping tools. These tools, although unsophisticated, are readily accessible and effective at swapping faces onto existing content, resulting in a massive volume of non‑consensual pornography. Platforms like Pornhub have struggled to moderate the influx of such content, often hosting millions of these illicit videos. Regulatory frameworks in various jurisdictions, such as the proposed DEFIANCE Act in the U.S., have yet to be implemented, underscoring the need to target the distributors and creators of deepfakes rather than solely the technological enablers.
Moreover, platform failures continue to compound the issue. Websites that distribute deepfake pornography often lack stringent moderation practices, allowing such content to proliferate unchecked. This lax oversight not only results in the continued victimization of the individuals depicted but also raises ethical concerns about the accountability of platform providers. Efforts to implement stricter removal policies have been largely unsuccessful, with many platforms failing to remove flagged content promptly. This has prompted calls for comprehensive legislation that mandates swift content removal and enforces penalties against non‑compliant platforms.

Social and Emotional Impact on Victims

The emergence of deepfake pornography has profoundly affected its victims, primarily women, due to its non‑consensual nature and the emotional trauma it inflicts. Many women experience feelings of violation and betrayal, akin to a form of 'digital rape,' as their likenesses are manipulated without consent for sexual fantasies or vendettas. The violation of personal dignity and privacy can lead to severe emotional consequences, such as anxiety, depression, and post‑traumatic stress disorder (PTSD). Victims not only deal with the fear of recognition and the spread of these videos online but also grapple with self‑esteem issues as a direct result of being objectified and scrutinized by strangers. According to the Yahoo News article, targeting often aligns with desires for revenge or fandom obsessions, which adds layers of personal violation beyond public degradation.
The societal response and lack of adequate regulatory protection further exacerbate the emotional distress experienced by victims of deepfake pornography. In many instances, victims find themselves in a constant battle to have unauthorized content removed while facing stigmatization and victim‑blaming, which can stem from the misconceptions surrounding these technologies. The article underscores that platforms frequently fail to moderate content effectively, resulting in prolonged exposure and the painful ordeal of continually resurfacing trauma. This lack of timely intervention often leaves victims feeling helpless and vulnerable, mirroring a long‑standing struggle against misogyny and gender‑based violence, as acknowledged in public sentiments discussed in the article.
Furthermore, the repercussions extend beyond emotional and social harm, as deepfakes can also affect victims' careers and personal relationships. Women targeted by deepfake pornography may face unwarranted suspicion or stigmatization in professional settings and within personal circles, affecting their reputation and job prospects. The Yahoo article points to instances where deepfake pornography has led to significant career damage, exemplifying the real‑world impact of these digital crimes. Such harms underscore the need for comprehensive policy reforms and heightened societal awareness to protect individuals from technological abuse.

Strategies for Targeting Human Creators and Enablers

When addressing non‑consensual deepfake pornography, it is essential to focus on the human elements perpetuating this alarming trend. As noted in the Yahoo News article, the majority of deepfakes are not sophisticated AI creations but the result of men using basic, user‑friendly face‑swapping tools. These individuals, often acting alone, demonstrate that the problem lies not with the technology itself but with those who misuse it.
Effective strategies must therefore target the creators and distributors of such content. This involves a combination of legislative action, social education, and stricter content moderation by the platforms hosting these videos. Legislative measures should focus on the accountability of those who create and spread deepfake pornography, taking cues from proposals such as the DEFIANCE Act in the U.S., which aims to curb this misuse by allowing civil suits against perpetrators.
Social education campaigns can also play a pivotal role in changing the narrative around deepfake pornography. By raising awareness of the legal, ethical, and emotional consequences of creating and distributing non‑consensual content, society can begin to shift the perceptions that allow such behavior to proliferate. Platforms hosting this content must also redouble their efforts to moderate and remove offending material swiftly, learning from the criticized practices of sites like Pornhub, which have faced backlash for lax moderation.
Ultimately, focusing on the enablers, those who create, distribute, and consume deepfake pornography, is critical to an effective prevention strategy. By shutting down distribution channels and penalizing misuse, society can better protect individuals from the harms of deepfake technology and ensure that technological advancements are used ethically and responsibly.

Global Legislative Responses to Deepfake Porn

The growing prevalence of deepfake pornography has prompted varied legislative responses worldwide, aiming to curb the distribution of such content and hold creators accountable. In the United States, legislative measures like the DEFIANCE Act and the TAKE IT DOWN Act have been proposed to combat the rise of non‑consensual deepfake pornography by enabling victims to pursue legal action against creators and distributors. The DEFIANCE Act, in particular, aims to provide a legal framework for civil litigation, while the TAKE IT DOWN Act focuses on empowering victims to have their images removed expeditiously from online platforms. Although these efforts represent crucial steps forward, challenges remain in enforcement and consistent application across states, as highlighted in the Yahoo News article.
In Europe, the Digital Services Act (DSA) has taken a proactive stance, enforcing the removal of deepfakes within 24 hours of reporting. This legislative measure is designed to ensure that platforms are held accountable for content moderation, thereby reducing the prevalence of non‑consensual and exploitative deepfakes. The European Union's approach highlights the importance of regulatory oversight in managing technological misuse, a sentiment echoed by various stakeholders concerned with the ethical use of AI technologies.
China has also implemented strict regulations aimed at limiting the creation and sharing of deepfake content. As part of its broader strategy to regulate digital communications, China imposes hefty fines on those found to be distributing harmful deepfakes. This regulatory environment underscores China's commitment to controlling the narrative around digital content and protecting individuals from the psychological and social harm caused by deepfake pornography.
Despite these legislative strides, the implementation and enforcement of laws against deepfake pornography remain inconsistent globally. The challenge of enforcing these laws is exacerbated by the rapid proliferation of simple, accessible technologies that enable individuals to create deepfake content with ease, as discussed in the article. While legislation is necessary to deter misconduct, there is a growing consensus that a multi‑faceted approach involving technology companies, governments, and educational institutions is essential to effectively address the issue.
Moreover, experts urge a focus on targeting not just the creators but also the platforms that host such content, drawing parallels to regulatory frameworks used against illegal gambling sites. The complexity of this problem requires an adaptive legal framework that can swiftly respond to technological advancements and changing societal norms. As mentioned in Yahoo's report, while laws can provide a foundation for redress, collaboration between international entities will be crucial to combat the global challenge of deepfake pornography.

Public and Media Reactions to Deepfake Porn

Public and media reactions to deepfake pornography have been largely characterized by outrage and empathy for victims, as highlighted in various reports. This sentiment stems primarily from the fact that 96–98% of these deepfakes are non‑consensual, predominantly affecting women and often targeting teenagers and celebrities. A survey conducted by Thorn revealed that approximately 1 in 8 individuals aged 13–20 personally knows a deepfake target, underscoring the widespread nature of this issue.
Discussions across social media and forums frequently highlight the emotional trauma experienced by victims, likening the impact of deepfake pornography to forms of digital rape or gender‑based violence. Comments on platforms such as Reddit and Twitter, especially following high‑profile cases like the 2024 Taylor Swift incidents, often express anger over privacy violations and the psychological toll on victims. In particular, cases where teenagers create and share explicit content of their peers have sparked significant public concern.
Calls for stronger regulations and heightened platform accountability echo across public reactions. Users consistently express frustration over the perceived inadequacies of current laws and moderation practices on websites like Pornhub, which continue to host extensive amounts of non‑consensual content despite periodic removals. The enactment of legislative measures such as the U.S. DEFIANCE and TAKE IT DOWN Acts is supported by the public, who demand rigorous enforcement against distributors of deepfake pornography.
The skepticism towards AI hype is noteworthy; many resonate with the realization that less than 2% of deepfake pornography utilizes advanced generative AI, with the majority produced using relatively basic face‑swapping tools. Public discourse often shifts the focus from the technology itself to the creators, predominantly men aged 15–35, as the main issue. This perspective aligns with the article's argument for tackling the human creators and enablers rather than overstating AI's role.
Public discourse is also concerned with the broader implications of deepfake pornography, particularly its role in perpetuating misogyny and incel culture. Multiple platforms have reported heated debates on the motivations behind the creation of such content, reflecting deep societal issues related to gender dynamics and the objectification of women. In some cases, fringe forums downplay the impact of deepfakes as mere fantasy, but mainstream responses tend to emphasize the real‑world harms that these digital fabrications cause.

The Future of Deepfake Technology and its Implications

The rapid advancement of deepfake technology has set the stage for far‑reaching implications across multiple domains. What began as a remarkable breakthrough in artificial intelligence and computer vision has evolved into a tool posing significant ethical and societal challenges. As highlighted in recent analyses, the future of deepfake technology will likely be driven by consumer‑grade tools rather than advanced AI models.
While deepfakes hold potential for positive applications in entertainment and education, their misuse in the creation of non‑consensual adult content presents a grave issue. The dominance of simple face‑swapping tools, rather than sophisticated AI, continues to fuel the creation and distribution of deepfake pornography. This trend underscores the need for robust solutions that extend beyond technical innovation to include comprehensive societal and regulatory measures.
Economic impacts also loom large as industries scramble to mitigate risks associated with deepfakes. The market for detection and moderation tools is projected to grow substantially, pressured by companies' need to protect against fraud and harassment claims. This economic burden underscores the importance of developing scalable and effective moderation technologies that can keep pace with the pervasive growth of deepfakes.
Moreover, the social implications of deepfake proliferation, particularly in gender‑targeted harassment, are profound. Non‑consensual deepfake content exacerbates privacy invasion and online abuse, disproportionately affecting women. These developments necessitate a cultural shift towards greater digital literacy and responsible digital citizenship, aimed at educating individuals, especially youth, about the ethical use of technology.
Politically, the resurgence of interest in deepfakes could drive new legislative efforts aimed at regulating and controlling their spread. While current laws are patchy and enforcement mechanisms sometimes inadequate, ongoing political discourse suggests a coming wave of regulatory frameworks aimed at curbing digital victimization and promoting safer digital interactions. Overall, as deepfake technology continues to evolve, the balance between its beneficial uses and potential harms will increasingly shape the socio‑political climate.
