AI-powered phone scams target the elderly in China
In China, a new wave of phone scams is exploiting AI voice cloning to impersonate relatives and target the elderly. Fraudsters use the technology to clone a loved one's voice and request urgent money transfers, posing a serious challenge for law enforcement and a growing concern for families worldwide.
- Introduction to AI Voice Cloning Scams
- How AI Voice Cloning Technology Works
- Impact on Victims: Stories and Statistics
- Preventive Measures and Protection Tips
- Law Enforcement and Government Responses
- The Growing Global Phenomenon of AI Scams
- Expert Insights: Psychological and Technological Factors
- Public Outrage and Reaction to Scams
- Future Implications of AI Voice Cloning
- Conclusion: Combating the Threat of AI Scams
Sources
- SCMP article (scmp.com)