AI Poisoning: How Fake Support Scams Are Infiltrating AI Search Tools
As internet users shift their attention, cybercriminals are quick to follow. While Google Search remains dominant, a growing number of Americans—between 20% and 26%—are turning to AI tools like ChatGPT, Claude, and Perplexity for information. Scammers are capitalizing on this trend, moving away from traditional SEO poisoning in Google results to a more insidious threat: AI poisoning.
What Is AI Poisoning?
AI poisoning involves feeding AI search tools convincing but fake information, such as fraudulent customer support listings. These listings redirect victims to scam phone numbers, websites, and agents posing as legitimate companies. Unlike Google Search, where users can evaluate multiple sources, AI chatbots often provide a single, confident answer—one that can be incorrect between 2% and 35% of the time, depending on the topic and model.
How AI Search Poisoning Scams Work
Attackers manipulate AI search tools by injecting false but plausible customer support entries for banks, airlines, and tech companies. When a user requests a support number, the AI model may suggest a fake one, initiating scams such as:
- Payment fraud, where the "agent" demands payment via gift cards, wire transfers, or cryptocurrency.
- Remote-access takeovers, where the caller is persuaded to install software that gives the scammer control of their computer.
- Credential and card theft, where victims are asked to read out full card numbers or account details.
Unlike traditional phishing, these scams start directly within the AI’s response, making them harder to detect.
Why AI Is Easier to Poison Than Traditional Search
AI search tools are more vulnerable to poisoning for two main reasons:
- AI chatbots typically return a single, confident answer rather than a ranked list of sources the user can compare and evaluate.
- AI models ingest web content with fewer of the trust and ranking signals that traditional search engines use to filter out false information.
Who’s Most at Risk?
Anyone who asks an AI tool for a support number and dials it without checking is exposed, and users who treat a single AI answer as authoritative are particularly at risk. In short, anyone using AI search tools could fall victim to these scams.
Warning Signs of a Scam Support Line
If you encounter any of the following during a support call, hang up immediately:
- The agent requests payment via gift cards, wire transfers, or cryptocurrency.
- You’re pressured to “act now” or told your account will be closed.
- The agent requests remote access to your computer for a “routine” issue.
- You’re asked for a full credit card number for a non-billing issue.
- The support representative fails to verify basic account information.
If any of these red flags appear, hang up and call the official number from the company’s website.
Advice for AI Search Users
To protect yourself from AI search poisoning scams, follow these guidelines:
- Treat any phone number, URL, or email address an AI tool provides as unverified until confirmed.
- Cross-check contact details against the company's official website before calling or clicking.
- When in doubt, navigate to the vendor's site directly rather than following AI-supplied links.
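The cross-checking advice above amounts to an allowlist check: normalize the number you were given and compare it against numbers published on the company's official site. A minimal sketch in Python (the `OFFICIAL_NUMBERS` values are illustrative placeholders, not real contact details):

```python
import re

# Hypothetical allowlist of a company's published support numbers,
# e.g. scraped from its official "Contact us" page (illustrative values).
OFFICIAL_NUMBERS = {"+18005550100", "+18005550199"}

def normalize(number: str) -> str:
    """Reduce a phone number to a +<digits> form for comparison."""
    digits = re.sub(r"\D", "", number)
    if len(digits) == 10:          # assume a US number missing its country code
        digits = "1" + digits
    return "+" + digits

def is_official(number: str) -> bool:
    """Return True only if the number matches a published official one."""
    return normalize(number) in OFFICIAL_NUMBERS

print(is_official("(800) 555-0100"))   # matches the allowlist: True
print(is_official("1-800-555-0123"))   # unknown number: False, do not trust
```

The key design point is the direction of the check: a number is trusted only if it appears on an official source, rather than distrusted only if it appears on a blocklist—poisoned numbers are too easy to mint for blocklisting to work.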
What Companies Should Do
While companies cannot entirely prevent AI poisoning, they can take steps to protect their customers:
- Publish official support numbers and URLs prominently and consistently, so customers have an authoritative source to check against.
- Monitor AI tools for impersonation of their brand and report fake listings when found.
- Remind customers that legitimate support will never demand gift cards, wire transfers, or cryptocurrency.
Conclusion
As users migrate to AI search tools, scammers are increasingly targeting these platforms due to their lack of stringent controls against false information. Many users remain unaware that AI outputs cannot always be trusted. The solution is straightforward: always verify contact information through traditional search engines and official vendor websites, and approach all AI-generated information with caution until confirmed.