Taking a Stand Against Revenge Porn
Microsoft Brings Relief to Deepfake Porn Victims with Bing Image Scrubbing Tool

Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant
Microsoft has partnered with StopNCII to help victims of deepfake and revenge porn by deploying a tool that scrubs explicit images from Bing's search results. Using digital fingerprints of these images, Bing can now block their spread, providing much-needed relief to victims.
The rise of generative AI tools has introduced a serious new challenge on the internet: the spread of synthetic nude images that closely resemble real individuals. On Thursday, Microsoft moved to assist victims of revenge porn by unveiling a tool that prevents Bing's search engine from returning these manipulated images.
Microsoft's announcement was made in collaboration with StopNCII, an organization dedicated to supporting victims of non-consensual intimate image abuse. StopNCII allows affected individuals to generate a digital fingerprint, or 'hash,' of explicit images on their devices, whether these images are real or fabricated. This digital fingerprint is then used by StopNCII's partner organizations, including Microsoft’s Bing, Facebook, Instagram, Threads, TikTok, Snapchat, Reddit, Pornhub, and OnlyFans, to remove these images from their platforms.
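To make the mechanism concrete, here is a minimal sketch in Python of the hash-and-match idea described above. Everything in it is illustrative: the function names and the in-memory blocklist are hypothetical, and a production system would use a perceptual hash (PDQ is one open-source example) rather than SHA-256, so that resized or re-encoded copies of an image still match.

```python
import hashlib
from pathlib import Path

def fingerprint(image_path: Path) -> str:
    """Compute a digital fingerprint (hash) of an image file on-device.

    NOTE: SHA-256 stands in purely for illustration; it only matches
    byte-identical files. A real system would use a perceptual hash
    (e.g. PDQ), which also matches visually similar copies.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

# Hypothetical shared hash list: only fingerprints are distributed to
# partner platforms, never the images themselves.
blocklist: set[str] = set()

def register(image_path: Path) -> None:
    """Victim-side step: add the image's fingerprint to the shared list."""
    blocklist.add(fingerprint(image_path))

def should_suppress(candidate_path: Path) -> bool:
    """Platform-side step: check an indexed image against the list."""
    return fingerprint(candidate_path) in blocklist
```

The design point the sketch preserves is that only the fingerprint leaves the victim's device; partner platforms compare hashes rather than exchanging the explicit images themselves.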
In a blog post, Microsoft disclosed that, by matching against StopNCII's database, it had already taken action on 268,000 explicit images in Bing's image search through the end of August. According to Microsoft, the previous approach of direct user reporting could not keep pace with the scale of the problem.
Microsoft stated in the blog post, “We have heard concerns from victims, experts, and other stakeholders that user reporting alone may not scale effectively for impact or adequately address the risk that imagery can be accessed via search.” This statement underscores the limitations of relying solely on user reports to combat the widespread problem of synthetic explicit content.
One can only speculate how much more severe the issue is on a considerably more popular search engine like Google. Although Google offers its own tools for reporting and removing explicit images from its search results, it has been criticized for not partnering with StopNCII. According to a Wired investigation, Google users in South Korea have reported 170,000 links to unwanted sexual content on Google Search and YouTube since 2020.
The synthetic nude problem is pervasive. While StopNCII's tools are limited to individuals over 18, 'undressing' sites are already causing issues for high school students across the United States. Unfortunately, the U.S. lacks a comprehensive federal law governing AI-generated explicit content, resulting in a patchwork of state and local regulations to address the problem.
In August, San Francisco prosecutors filed a lawsuit to shut down 16 leading 'undressing' sites. According to Wired's tracker for deepfake porn laws, 23 states in the U.S. have enacted laws to tackle non-consensual deepfakes, whereas nine states have rejected such legislative proposals.