Fraud Fighters Turn to Tech
AI Granny Scammers Busted: Meet 'Daisy', the Chatbot Hero!
Edited By
Mackenzie Ferguson
AI Tools Researcher & Implementation Consultant
The Guardian reports on an innovative AI chatbot 'Daisy,' developed by O2 and Jim Browning, that mimics an elderly woman to thwart phone scammers. By engaging fraudsters in lengthy calls about recipes and knitting, Daisy helps prevent scams by keeping the scammers busy, recording up to 40 minutes of conversation for analysis.
Introduction to Daisy: The AI Granny Chatbot
Daisy is an AI chatbot built as a strategic measure to combat phone scams by engaging fraudsters in lengthy, frivolous conversations. Developed by O2 in collaboration with renowned scam baiter Jim Browning, Daisy cleverly mimics the persona of an elderly grandmother, discussing benign topics such as knitting or traditional recipes to waste the time of would-be scammers. According to reports from The Guardian, Daisy can captivate fraudsters' attention for nearly 40 minutes, significantly disrupting their attempts to reach real victims.
As a proof-of-concept project, Daisy was introduced to showcase the potential of AI technology in thwarting fraud. The developers took a strategic approach by seeding phone numbers across various platforms to attract scammers. Meanwhile, conversations with these deceitful callers were meticulously recorded to help analyze interactions that could further refine Daisy's conversational tactics. Despite the impressive feats achieved, there are limitations. For instance, some astute scammers have managed to recognize they were dealing with an AI, diminishing its effectiveness over time. However, this initiative marks a significant step in leveraging AI for social good, particularly in fraud prevention scenarios.
One of the primary questions about Daisy is its operational mechanism. The AI chatbot generates natural-sounding conversations by responding to scammer prompts while maintaining its elderly facade. Employing deft delay tactics, it can hold scammers at bay, ensuring they pour their efforts into a fruitless endeavor rather than targeting real, vulnerable individuals.
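The delay-and-deflect loop described above can be sketched in miniature. This is purely illustrative: O2 has not published Daisy's implementation, and the persona lines, delay tactics, and `reply` function below are all invented for the example.

```python
import random

# Hypothetical sketch of the pattern described above: a fixed
# "grandmother" persona plus deliberate delay tactics. Every name
# and phrase here is illustrative, not O2's actual system.

PERSONA_TANGENTS = [
    "Oh, before I forget, have you ever tried my lemon drizzle recipe?",
    "Hold on dear, I've just dropped a stitch in my knitting.",
    "My grandson usually helps me with the computer, he's very clever.",
]

DELAY_TACTICS = [
    "Sorry love, could you say that again? The line is ever so crackly.",
    "Let me just find my glasses, one moment...",
    "Which button did you say? I only see the big round one.",
]

def reply(scammer_utterance: str, turn: int) -> str:
    """Stay in persona and never comply: stall on even turns,
    wander off-topic on odd ones, so the call keeps inching along."""
    if turn % 2 == 0:
        return random.choice(DELAY_TACTICS)
    return random.choice(PERSONA_TANGENTS)

transcript = []
for turn, utterance in enumerate([
    "Ma'am, your computer has a virus. I need remote access.",
    "Please install the program I sent you.",
]):
    transcript.append((utterance, reply(utterance, turn)))

for scammer, daisy in transcript:
    print(f"Scammer: {scammer}\nDaisy:   {daisy}\n")
```

A production system would replace the canned phrases with a voice-cloned language model, but the core design choice is the same: the bot only ever stalls or digresses, never supplies information a scammer could use.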
Daisy's innovative approach not only underscores the capabilities of artificial intelligence in fraud prevention but also encourages a broader use of such technology. With applications across banking, insurance, travel, and tax domains, AI systems are proving indispensable in detecting unauthorized activities and suspicious patterns that signal fraudulent behavior. This aligns with broader efforts to incorporate AI in safeguarding and enhancing the security of various facets of life.
How Daisy Engages with Phone Scammers
Daisy, an AI chatbot developed by O2 and scam baiter Jim Browning, has proven to be a disruptive force in the world of phone scams. Known for its ability to mimic an elderly woman's voice and mannerisms, Daisy engages with scammers by drawing them into long-winded conversations. Its ultimate goal is to waste the time of these fraudsters, reducing the number of real potential victims they might otherwise contact. During calls, Daisy cleverly interjects with mundane anecdotes about knitting patterns and cherished family recipes, holding some scammers on the line for as long as 40 minutes. This strategy not only halts their immediate efforts but also serves as a proof-of-concept that AI can indeed play a vital role in fraud prevention.
The project team behind Daisy plants phone numbers on various websites, effectively baiting scammers into making the first move. Once a call is made, Daisy picks up with a warm, grandmotherly greeting, designed to put the scammer at ease and pique their interest. Through carefully crafted conversational tactics, Daisy responds to prompts while maintaining its created persona, discussing everything from the weather to favorite pastimes, like gardening. Meanwhile, these interactions are recorded and later analyzed to refine Daisy's capabilities. However, as realistic as Daisy might sound, some scammers eventually become suspicious, discerning the AI's true nature. Nevertheless, Daisy's ability to tie up scammer resources remains a valuable asset in the field of cyber defense.
Despite its success in engaging scammers, Daisy is not without limitations. The AI is currently not available for public use, limiting its widespread application against scammers. Furthermore, Daisy's impersonations are constrained largely to a single elderly female persona, which can lead to ethical debates regarding the stereotyping of older individuals. Moreover, savvy scammers are increasingly capable of detecting when they are speaking to an AI, potentially diminishing Daisy's effectiveness over time. Nonetheless, Daisy highlights the innovative potential of AI in scam prevention and the need for continued investment and technological evolution to stay ahead of fraudsters.
In the broader context of fraud prevention, AI technologies like Daisy are part of a burgeoning trend in which artificial intelligence is deployed to combat financial fraud, identity theft, and other criminal activities. From banks employing machine learning to detect unauthorized transactions, to insurance companies using similar technology to verify claims, AI is becoming an increasingly instrumental tool in the fight against fraud. Significant strides have been made with AI-enhanced protocols, like the FCC's "STIR/SHAKEN," which dramatically reduced robocalls in the U.S. by authenticating caller IDs. As more sophisticated scams evolve, leveraging AI's adaptability will be crucial. However, deploying AI for fraud prevention must be balanced with ethical considerations and the protection of privacy.
Effectiveness and Limitations of Daisy
The effectiveness of Daisy, the AI chatbot developed by O2 in partnership with scam baiter Jim Browning, largely stems from its ability to engage fraudsters in prolonged conversations, thus preventing them from reaching real potential victims. As reported by The Guardian, Daisy can hold scammers' attention for up to 40 minutes by engaging in conversations on mundane topics such as knitting and recipes, which helps in disrupting their attempts to defraud actual people. This makes Daisy a valuable tool in illustrating how AI technology can be exceptionally effective in mitigating phone fraud threats by consuming the scammers' time and resources. The adoption of Daisy is a prime example of innovative application of AI in fraud prevention, showing significant promise in disabling malicious activities through strategic engagement [1].
However, Daisy is not without its limitations. One of its key drawbacks is that it is not yet available for public use, which limits the scope of its impact in real-world settings. Furthermore, Daisy's reliance on a singular persona—an elderly woman—raises concerns regarding the potential for stereotyping and reduces the diversity of conversations that could potentially deceive a wider array of scammers. Additionally, there are challenges associated with its voice mimicry capability, as some experienced scammers have been able to detect the AI nature during interactions. These limitations highlight the need for further refinement and diversity in AI personas to enhance the tool's effectiveness and adaptability [1].
Broader Applications of AI in Fraud Prevention
In the realm of fraud prevention, the deployment of artificial intelligence is not only innovative but also effective. One intriguing development is the AI chatbot known as "Daisy," which poses as an older woman to engage and distract fraudsters. By maintaining a friendly and realistic dialogue, Daisy has managed to occupy scammers for extended periods, thereby preventing them from reaching actual victims. This project, developed by O2 and fraud-baiter Jim Browning, highlights AI's potential to mimic social interactions skillfully, which is essential in preventing fraudulent calls.
Beyond just engaging scam callers, AI's vast applications extend into other industries and sectors. In banking, AI is increasingly being harnessed to swiftly identify unauthorized transactions and unusual account activities. Meanwhile, the insurance sector benefits from AI's ability to verify claims efficiently, which helps to combat fraudulent insurance activities. Similarly, in the travel industry, AI systems monitor booking patterns to detect and prevent suspicious activities. Tax fraud also sees reductions as AI identifies anomalies in tax filings, demonstrating its vital role in maintaining fiscal integrity .
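To give a flavor of the transaction-monitoring idea mentioned above, the sketch below flags charges that sit far outside an account's historical spread. Real banking systems use trained models over many features; the simple z-score rule, the sample history, and the threshold here are assumptions chosen purely for demonstration.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return amounts sitting more than `threshold` sample standard
    deviations above the account's historical mean. Real systems tune
    thresholds per account and use far richer features than amount alone."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma > 0 and (a - mu) / sigma > threshold]

# A week of ordinary card charges, plus one charge that breaks the pattern.
history = [42.0, 18.5, 63.0, 27.0, 55.0, 31.0, 4999.0]
print(flag_anomalies(history))  # [4999.0]
```

Even this toy version shows the key trade-off the article alludes to: a stricter threshold misses fraud, a looser one annoys legitimate customers, which is why banks continually retrain these models.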
Notably, the power of AI in fraud prevention is increasingly becoming a focus for technological innovation in policy and practice. The Federal Communications Commission (FCC), for instance, has implemented "STIR/SHAKEN" protocols, enhanced by AI, to significantly reduce robocalls across major U.S. carriers. This AI-driven approach leverages machine learning to authenticate calls and block spoofed numbers, representing a powerful application of AI in securing communications.
Moreover, collaboration efforts, such as those between Microsoft and major banks, utilize AI to detect and prevent sophisticated banking scams. Their shared database initiative has resulted in a significant improvement in identifying fraud patterns, reflecting the growing trend of using AI for comprehensive fraud prevention across financial institutions. These examples underscore AI's transformative impact on traditional and modern fraud prevention tactics, thereby bridging the gap between technology and security.
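For readers curious how STIR/SHAKEN authentication feeds into call handling, here is a deliberately simplified sketch. In real deployments the terminating carrier first verifies a signed PASSporT token (a JWT carried in the SIP Identity header) against the originating carrier's certificate; that cryptographic step is omitted here, and the routing policy is a hypothetical carrier policy, not part of the standard.

```python
# Simplified, assumption-laden sketch: the PASSporT payload is treated
# as already signature-verified, and the handling decisions below are
# one hypothetical carrier policy rather than anything mandated.

ATTESTATION_POLICY = {
    "A": "deliver",           # full: carrier knows the customer and the number
    "B": "deliver",           # partial: customer known, number not verified
    "C": "flag_as_spam_risk", # gateway: call merely entered the network here
}

def route_call(passport_payload: dict) -> str:
    """Map a (pre-verified) PASSporT payload to a handling decision."""
    return ATTESTATION_POLICY.get(passport_payload.get("attest"), "reject_unsigned")

print(route_call({"attest": "A", "orig": {"tn": "15551230001"}}))  # deliver
print(route_call({"attest": "C", "orig": {"tn": "15559990000"}}))  # flag_as_spam_risk
print(route_call({}))                                              # reject_unsigned
```

The three attestation levels (A, B, C) are the part defined by the SHAKEN framework; everything a carrier does with them, including the machine-learning scoring the article mentions, is layered on top.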
Public Reactions to Daisy's Efforts
Public reaction to Daisy, an AI chatbot designed to outsmart phone scammers by using the guise of a chatty elderly lady, has been a mix of amusement, admiration, and critical reflection. Social media users have largely cheered the project, finding humor and satisfaction in seeing scammers duped by Daisy's innocent yet effective conversation tactics. This has been particularly noticeable in communities like Reddit, where users have lauded Daisy's capability to stall potential fraudsters by engaging them in mundane discussions.
Future Implications of AI in Law Enforcement
The future implications of artificial intelligence (AI) in law enforcement are vast and multifaceted. AI's capabilities to analyze extensive datasets quickly make it an invaluable tool for detecting patterns and preventing crimes. With advancements similar to "Daisy," the AI chatbot developed to combat fraud by engaging scammers in prolonged conversations, law enforcement can harness AI to reduce the incidence of scams significantly. AI systems could potentially automate the monitoring of suspicious activities, freeing up human officers for more strategic tasks.
One significant economic implication of AI in law enforcement is the potential to substantially reduce financial fraud costs. By preventing scams before they reach potential victims, as seen with O2's innovative approach, AI can save corporations and individuals millions. However, this advancement requires a significant investment in AI technologies and maintenance, pushing organizations to weigh cost-effective solutions against traditional methods in their adoption strategies.
Socially, AI's integration into law enforcement heralds a transformative period. It fosters public trust in AI's ability to solve societal issues efficiently. Yet, challenges like the ethical concerns over the portrayal of personas in AI chatbots such as Daisy point to the need for a balanced technological approach. Strategies combining AI applications with human oversight and public awareness campaigns may offer the most comprehensive solutions.
In the realm of policy and governance, there is a clear need for regulatory frameworks to carefully govern AI's use in law enforcement. As AI becomes more prevalent, governments are increasingly called to enhance their investments in AI-driven initiatives. This is seen in measures to improve cybercrime detection and prevention capabilities through AI, reflective of increased international cooperation for global safety.
Despite the promising advantages, the future implications of AI in law enforcement aren't without challenges. There is an ongoing need for AI systems to continually adapt, keeping pace with scamming techniques that evolve rapidly. As highlighted in the discussions around Daisy, maintaining privacy and data security are critical, along with establishing ethical guidelines to prevent AI misuse and ensure beneficial outcomes for society.