Children's Commissioner Pushes for Law Reforms
Call to Ban AI Tools 'Nudifying' Kids Gains Momentum in UK
Edited By
Mackenzie Ferguson
AI Tools Researcher & Implementation Consultant
In a significant push for policy change, England's Children's Commissioner, Dame Rachel de Souza, calls on the government to ban AI applications that create explicit images of minors. The current focus on AI-generated child sexual abuse imagery isn't enough, she argues, advocating for all "nudifying" apps to be outlawed. This plea comes amid a staggering 380% surge in reports of AI-driven sexual abuse content from 2023 to 2024.
Introduction: The Growing Threat of AI 'Nudifying' Apps
In the digital age, artificial intelligence (AI) has opened up new avenues for creativity and innovation, yet it also presents alarming risks. One of the most pressing concerns is the rise of so-called 'nudifying' apps, which use AI to generate sexually explicit images, often by manipulating real photographs to create falsely nude versions of the subjects depicted. The potential for harm is immense, with these applications capable of targeting individuals without their consent, leading to widespread repercussions both for personal privacy and societal safety. Such misuse of technology has escalated to the point where it demands immediate attention and intervention.
The controversy surrounding 'nudifying' apps has led public figures like Dame Rachel de Souza, the Children's Commissioner for England, to call for governmental action. De Souza advocates for a ban on these apps, emphasizing their potential to generate harmful content, particularly involving children, which can leave lasting psychological trauma for victims and contribute to a broader climate of digital exploitation. According to a report, there has been a dramatic 380% increase in AI-generated child sexual abuse content from 2023 to 2024, pushing experts and policymakers to recognize the urgency for tighter regulations [source].
The threat posed by AI apps that manipulate images cannot be overstated. While the technology behind these applications is impressive, using it to create non-consensual explicit content reflects a dark side of innovation. The UK is taking steps to combat this, with laws targeting the possession, creation, or distribution of AI tools designed for producing child sexual abuse material [source]. However, as Dame de Souza notes, the existing laws don't cover all applications capable of 'nudifying' images, underscoring the incomplete legal framework.
The fight against AI-generated explicit content also encompasses a global dimension. Operation Cumberland, a multinational law enforcement initiative led by Denmark, illustrates the international effort required to combat the proliferation of AI-generated child sexual abuse material. The operation resulted in multiple arrests across several continents, highlighting the necessity for coordinated global responses and the complex challenges posed by these technologies [source]. This international approach is crucial as the digital landscape knows no national boundaries, making isolated efforts inadequate.
Public concern over the misuse of AI in creating deepfakes is matched by governmental and expert apprehension. While some argue that outright bans might infringe on personal freedoms and technological exploration, the overwhelming consensus is in favor of tighter restrictions to prevent abuse. Educational and support measures equally play a vital role, underscoring the importance of informing the public about potential dangers and offering help to victims. The drive for legislative and protective measures is not merely reactive but proactive, aiming to ensure a safer internet for future generations [source].
Understanding Deepfakes and Their Impact
Deepfakes are a technological marvel that harnesses the power of artificial intelligence to create media content that appears astonishingly real. These manipulated videos and images can depict individuals in scenarios that never actually occurred, opening the door to a myriad of creative, albeit occasionally malicious, applications. For instance, deepfakes can be used in the entertainment industry to revive deceased actors or portray fictional scenes with real-life characters, offering new storytelling possibilities. But these same technologies also raise serious ethical and privacy concerns, particularly when used for creating deceptive or harmful content.

The societal impact of deepfakes is profound, prompting significant legal and ethical debates worldwide. While they can be utilized for innovative purposes, the potential for abuse, especially in creating non-consensual explicit content, cannot be ignored. The ultimate challenge facing regulators and technologists alike is striking a balance between encouraging creative technological use and protecting individuals' rights and privacy. This delicate balancing act is even more critical amid calls from figures like the Children's Commissioner in England to ban AI applications that generate explicit images of children due to their harmful implications.

The increase in AI-generated child sexual abuse material has highlighted the urgent need for comprehensive regulatory frameworks. In the UK, for instance, the legal landscape is evolving with the introduction of laws targeting the distribution and possession of AI technologies designed to create such harmful content. These legislative efforts aim to close gaps and provide clearer legal repercussions for those engaging in the creation and spread of abusive deepfakes. Law enforcement agencies are also working globally, as seen with operations like Cumberland, to tackle the international challenges posed by this technology-driven threat.
Public advocacy and expert opinions overwhelmingly support heightened restrictions on "nudifying" technologies to safeguard vulnerable populations, notably children and women, from digital exploitation. Yet, this drive towards enhanced regulation must consider the dynamic and fast-paced nature of AI development. Policymakers are encouraged to develop agile, forward-thinking approaches that can adapt to new challenges as they arise, ensuring that the protective measures stay relevant amidst rapid technological advancements. Ultimately, embracing the dual nature of deepfakes—as both an innovative tool and a potential threat—requires coordinated efforts between governments, tech developers, and the public. As these technologies continue to evolve, so must our strategies to harness their positive potential while mitigating risks. Engaging in open, cross-sector dialogue and investing in technological safeguards will be crucial in addressing the impact of deepfakes on society.
Current UK Legislation and Its Limitations
The current legal framework in the UK regarding AI-generated content, particularly deepfakes, highlights both progress and significant gaps. Under the Online Safety Act, it is illegal to share or threaten to share explicit deepfake images. However, this legislation does not cover all technological advancements within the realm of AI manipulation. For instance, while the law explicitly targets the distribution of AI-generated child sexual abuse material, it doesn't entirely prevent the use of all 'nudifying' apps. This presents a significant challenge as these apps can still be legally operated if they do not explicitly create or distribute child abuse content. The call for a comprehensive ban seeks to address these loopholes, ensuring that all potentially harmful apps are scrutinized appropriately (BBC News).
Dame Rachel de Souza, the Children's Commissioner for England, argues that the existing legislation does not effectively protect children from the harms posed by certain AI technologies. The rapid pace of technological advancement means that laws struggle to keep up, often leaving significant loopholes unaddressed (BBC News). This concern is exacerbated by the reported 380% increase in AI-generated child sexual abuse reports. Such statistics highlight the urgent need for legislation that does not merely react to technological changes but anticipates and accommodates them, encompassing all types of 'nudifying' technologies, regardless of intent or output.
The current legislative approach focuses primarily on criminalizing specific acts related to AI technology, such as distributing child sexual abuse material or using platforms irresponsibly. However, Dame Rachel de Souza and other advocates suggest this is insufficient. They urge a shift towards preventive measures that would make it illegal to create or operate 'nudifying' apps in any capacity. This includes addressing the conceptual gap in current legislation which does not recognize the potential harm these technologies can cause even if they are not used to create illegal content. The emphasis is on comprehensive legal frameworks that consider both the technology and its potential for misuse (BBC News).
Children's Commissioner's Advocacy for a Comprehensive Ban
Dame Rachel de Souza, the Children's Commissioner for England, is spearheading an urgent campaign for a comprehensive ban on AI applications that generate sexually explicit images, known as 'nudifying' apps, which alter images of real people. As the prevalence of these technologies increases, she insists that the current legislative efforts targeting AI tools used specifically for generating child sexual abuse material are inadequate. Her call to action is bolstered by a staggering 380% increase in reports of AI-generated child sexual abuse content over the past year, illustrating the growing threat these technologies pose. De Souza argues that any app with the capability to undress images — whether involving minors or not — should be prohibited to prevent potential misuse ([BBC News](https://www.bbc.com/news/articles/cr78pd7p42ro)).
De Souza's advocacy for a full ban on nudifying apps is grounded in the principle of preemptive action, as she believes waiting for these technologies to be used in harmful ways is not an option. Her recommendations extend beyond banning these applications; she suggests imposing legal obligations on AI developers to address risks to children effectively. Additionally, De Souza advocates for a streamlined process to remove explicit deepfake images from the internet and recognition of deepfake sexual abuse as a form of violence against women and girls. Her approach seeks to address the underlying tools of abuse rather than the symptoms alone, urging a more robust legislative framework to protect vulnerable individuals ([BBC News](https://www.bbc.com/news/articles/cr78pd7p42ro)).
The push for banning nudifying apps comes amid broader international efforts to combat AI-generated child sexual abuse material, such as Operation Cumberland, which saw the arrest of individuals globally for their role in distributing such content. These international operations underscore the need for a strong, united stance against technologies that contribute to the sexualization of children, even when no real victims are involved. The Children's Commissioner believes that a ban on these applications will set a precedent, demonstrating political will to tackle this issue head-on and protect children's rights and dignity ([BBC News](https://www.bbc.com/news/articles/cr78pd7p42ro)).
The Rise in AI-Generated Child Sexual Abuse Reports
The alarming escalation of AI-generated child sexual abuse reports has triggered significant concern among policymakers, child protection agencies, and advocacy groups worldwide. A staggering 380% rise in such reports from 2023 to 2024 highlights the urgent need to address the proliferation of technologies that facilitate the creation of these harmful materials. The Children's Commissioner for England, Dame Rachel de Souza, is at the forefront, calling for a comprehensive ban on all 'nudifying' apps. These applications, empowered by artificial intelligence, can realistically alter photos to create sexually explicit images of children or adults without their consent, thus presenting a grave risk to individual safety and privacy. The ability of AI to generate highly believable yet fictitious content underscores the critical need for stringent regulatory measures ([BBC News](https://www.bbc.com/news/articles/cr78pd7p42ro)).
While existing laws in countries like the UK target the sharing or distribution of sexually explicit deepfake images, Dame de Souza argues that such measures are insufficient. The current legislative focus often falls short of addressing the tools used in creating these images, leaving potential loopholes that could be exploited by malefactors. Hence, she advocates for a blanket ban on all applications capable of 'nudifying,' as the rapid technological advancements make it challenging to distinguish between innocent and nefarious uses. The impact of AI-generated child sexual abuse materials extends beyond the digital realm, instigating real-world harm and exploitation. It perpetuates the sexualization of minors, even when no actual children are directly harmed in the production of these images ([BBC News](https://www.bbc.com/news/articles/cr78pd7p42ro)).
The global crackdown on AI-generated child sexual abuse content, highlighted by initiatives like Operation Cumberland, underscores the international scope of the problem. This operation, led by Denmark, exemplifies the collaboration needed to combat the spread of such materials, resulting in arrests across multiple countries for the creation and dissemination of AI-generated content. Europol emphasizes the difficulty in tackling this issue due to the seamless realism these fake images present, contributing to the broader challenges of managing and limiting the impact of non-consensual, AI-generated imagery on children and society at large ([CNN](https://www.cnn.com/2025/02/28/world/ai-child-sex-abuse-europol-operation-intl/index.html)).
Public concern over the burgeoning threat of AI-powered 'nudifying' apps is palpable, with surveys indicating overwhelming support for their prohibition among parents, teenagers, and educational leaders alike. The psychological toll and fear these images instigate among young people lead to changes in online behavior, underscoring the need for interventions that also prioritize educational solutions and digital literacy. The NAHT, representing school leaders, stands firm on criminalizing tools that facilitate non-consensual deepfake content, establishing a balanced approach that combines legal reforms with public awareness campaigns aimed at reducing the allure and accessibility of these harmful technologies ([Internet Matters](https://www.internetmatters.org/hub/research/parents-children-say-ban-nudifying-apps/)).
Looking forward, the solutions to this problem necessitate international cooperation and harmonized legislative efforts to keep pace with fast-evolving AI technologies. Countries are urged to update their legal frameworks to close existing loopholes, focusing on the harm inflicted by such content rather than just its point of creation. The collaboration between tech companies and law enforcement is vital in developing preventive measures against the spread of these images. The future landscape will likely require further legislation to both protect children effectively and balance individual freedoms, promoting a safe digital environment for citizens worldwide ([Internet Matters](https://www.internetmatters.org/hub/research/parents-children-say-ban-nudifying-apps/)).
Additional Recommendations from the Children's Commissioner
The Children's Commissioner for England, Dame Rachel de Souza, has been vocal about the urgent need for enhanced measures to protect children from the dangers posed by AI technologies. She has specifically highlighted the pressing issue of AI apps that 'nudify' images, which could be manipulated to create sexually explicit content of children. Dame Rachel argues that the current laws focusing solely on AI tools that produce child sexual abuse material are inadequate. Instead, she calls for a comprehensive ban on all apps capable of 'nudifying' any individual, to prevent potential harm and abuse. This recommendation comes in response to a staggering 380% increase in AI-generated child sexual abuse reports from 2023 to 2024, emphasizing the growing threat of these technologies if left unchecked.
Impact of Related Global Crackdowns and New Legislation
The global landscape is shifting as countries enact new laws and intensify crackdowns on AI-generated child sexual abuse material. In the UK, the evolution of legislation is marked by a determined effort to criminalize the development and distribution of software designed to produce such abuse material. The introduction of new legal measures seeks to close loopholes previously overlooked by the Online Safety Act, addressing the tools themselves, not just the content created [0](https://www.bbc.com/news/articles/cr78pd7p42ro). This legislative update is seen as crucial to curbing the pervasive nature of AI-generated imagery and reflects an acknowledgment of the rapid technological advancements that continue to challenge legal frameworks.
Globally, initiatives like Operation Cumberland signal a coordinated international response to the threat posed by AI-generated child sexual abuse content. Led by Danish authorities and coordinated through Europol, this operation has already resulted in the arrest of multiple individuals across several countries, demonstrating the efficacy of collaborative international law enforcement actions [1](https://www.cnn.com/2025/02/28/world/ai-child-sex-abuse-europol-operation-intl/index.html). These crackdowns highlight the need for comprehensive strategies that include cooperation between nations, as well as robust technical measures to monitor and mitigate the creation and dissemination of harmful AI-generated content.
Despite the intent and progress reflected in these legislative and enforcement efforts, challenges remain. Legal experts point out that outright bans, while protective, could impose constraints on internet freedom and innovation. Furthermore, rapidly evolving AI technologies and their widespread accessibility make enforcement complex and resource-intensive [3](https://www.pbs.org/newshour/nation/law-enforcement-cracking-down-on-creators-of-ai-generated-child-sex-abuse-images). This underscores a critical need for governments to develop adaptive policies and foster partnerships with technology companies to ensure responsible AI development and deployment.
In this evolving scenario, public advocacy plays a crucial role. The call from the Children's Commissioner for a ban on all 'nudifying' apps, irrespective of their current legality, reflects societal demands for more proactive measures [5](https://www.childrenscommissioner.gov.uk/news-and-blogs/press-notice-childrens-commissioner-calls-for-immediate-ban-of-ai-apps-that-enable-deepfake-sexual-abuse-of-children/). However, the government's more measured approach, focusing on legal reforms targeting specific abuses, reveals a complex balancing act between safeguarding vulnerable individuals and maintaining technological freedoms [6](https://www.theguardian.com/society/2025/apr/28/commissioner-calls-for-ban-on-apps-that-make-deepfake-nude-images-of-children).
Expert Opinions: Balancing Protection and Freedom
The debate around banning AI apps that modify images to create objectionable content, particularly those targeting children, highlights an ongoing struggle between safeguarding individuals and preserving freedom. Experts from various fields, including legal and ethical, are currently evaluating the implications of imposing such bans. According to Dame Rachel de Souza, Children's Commissioner for England, banning these apps would provide a significant step towards reducing the alarming rise in AI-generated child sexual abuse reports. She points out that the status quo, which primarily penalizes tools that overtly create such content, is not enough in a rapidly evolving digital environment. Experts likewise highlight the accessibility and potential misuse of these apps as a growing concern, arguing that leaving such technologies unchecked could facilitate abuse and exploitation without proper legislative action.
At the same time, there is a legitimate concern that an outright ban on AI "nudifying" apps may lead to broader restrictions on technological freedom. Law enforcement and legal experts caution against measures that could infringe on internet freedoms, suggesting the need for a more balanced approach. Critics mention that prohibitive actions may stifle technological experimentation, particularly among younger demographics who are naturally inclined to explore new technologies. There is also the practical challenge of enforcing such laws, given the rapid advancement of AI and the persistence of earlier models on the web.
Further complicating the issue is the fact that constructing a comprehensive legal framework for AI misuse in the context of child protection requires not just technology-specific bans, but also robust measures to safeguard against misuse at the systemic level. Many advocate for a coalition between tech companies and government bodies to ensure responsible AI development and deployment. Such partnerships could lead to better platform accountability, ensuring that the efforts to safeguard minors from AI-generated abuse are not undermined by piecemeal solutions. This sentiment is echoed in discussions where the government is encouraged to pursue a multilayered strategy that incorporates platform responsibility, user education, and resource allocation for effective law enforcement monitoring. Commentators emphasize the multi-faceted strategies under consideration, which go beyond merely penalizing the creators of abusive content.
Meanwhile, public sentiment is notably in favor of stricter laws on image-altering apps, as evident from surveys indicating overwhelming support from children, parents, and educators for a ban on these tools. The general apprehension about the misuse of AI technology denotes a societal push towards prioritizing safety over unfettered access to potentially harmful digital tools. According to surveys, significant majorities among parents and teenagers support robust actions against "nudifying" apps, reflecting a societal consensus that views such technology as a threat to public safety and personal privacy. This collective approach to internet safety highlights an important shift, showing that communities are eager to collaborate with authorities to ensure digital environments are conducive to safety and respect for all individuals.
Public Reactions and Support for the Ban
The proposed ban on AI apps that create sexually explicit images of children has sparked a variety of reactions from the public, revealing widespread support among certain demographics. Children, who find themselves adapting their online behavior to avoid becoming targets of AI-generated abuse, have expressed significant concern. This fear of victimization underscores the necessity of protective measures against such invasive technologies.
Parents have shown overwhelming support for the ban, driven by the desire to safeguard their children from potential exploitation. A survey conducted by Internet Matters shows that a substantial 80% of parents are in favor of prohibiting these 'nudifying' tools. This level of support highlights a collective concern for children's safety in the digital age and a demand for stringent controls over the creation and distribution of potentially harmful AI applications.
Teenagers, too, are advocating for the ban, with 84% of those surveyed by Internet Matters backing the initiative. Their proactive stance is indicative of their awareness of the threats posed by these technologies and their desire for a safer online environment. This demographic's overwhelming support reflects the urgent need for measures that protect privacy and uphold digital integrity for future generations.
Educational leaders, represented by organizations like the NAHT, have also voiced their support for the ban, emphasizing the necessity for legal reforms to criminalize the non-consensual creation and distribution of deepfake content. Their endorsement signifies the educational sector's commitment to safeguarding students from technological misuse and ensuring a respectful digital landscape.
Despite broad support from these groups, the government has expressed reluctance to adopt a complete ban on all 'nudifying' apps. This cautious approach reflects concerns about potential infringements on internet freedoms and the challenges of enforcing such prohibitions. As the dialogue continues, balancing technological innovation with ethical responsibilities remains a key consideration in the formulation of future policies.
Future Implications: Economic, Social, and Political Dimensions
The advancement of artificial intelligence technologies, particularly those capable of generating deepfakes, has begun to reshape multiple aspects of society, including economic, social, and political dimensions. Economically, as these AI tools become more accessible, industries will likely experience a significant demand for enhanced cybersecurity measures and privacy protection technologies. This demand stems from both individuals and corporations seeking to shield themselves from potential digital exploitation. However, with the looming threat of AI-generated explicit content, there could be a reduction in the willingness of individuals to share personal information online, negatively impacting sectors that rely heavily on data collection and processing.
On a social level, the burgeoning misuse of AI technologies like 'nudifying' apps could severely undermine public trust in digital innovations, leading to a general wariness of online interactions. This situation highlights the urgent need for comprehensive digital literacy programs aimed at educating the public, especially younger and more susceptible demographics, about the risks and responsibilities associated with digital technology use. Furthermore, the psychological toll on victims of AI-generated content calls for the establishment of robust support systems to aid recovery and ensure mental well-being.
Politically, the global reach and ease of dissemination of AI-generated child sexual abuse material (CSAM) necessitate an international cooperative approach to combat this issue effectively. Governments are under increasing pressure to adapt existing legal frameworks to consider the unique challenges posed by AI technologies. This includes criminalizing the production and distribution of AI-generated CSAM and addressing any legislative gaps. There is also a growing need for international harmonization of laws to ensure a unified global response that prioritizes the protection of victims over the methods used to create harmful content.