
AI Poisoning: How Fake Support Scams Are Infiltrating AI Search Tools

As internet users shift their attention, cybercriminals are quick to follow. While Google Search remains dominant, a growing share of Americans—between 20% and 26%—are turning to AI tools like ChatGPT, Claude, and Perplexity for information. Scammers are capitalizing on this trend, moving away from traditional SEO poisoning of Google results toward a more insidious threat: AI poisoning.

What Is AI Poisoning?

AI poisoning involves feeding AI search tools convincing but fake information, such as fraudulent customer support listings. These listings redirect victims to scam phone numbers, websites, and agents posing as legitimate companies. Unlike Google Search, where users can evaluate multiple sources, AI chatbots often provide a single, confident answer—one that can be incorrect 2% to 35% of the time, depending on the topic and model.

How AI Search Poisoning Scams Work

Attackers manipulate AI search tools by injecting false but plausible customer support entries for banks, airlines, and tech companies. When a user requests a support number, the AI model may suggest a fake one, initiating scams such as:

  • Payment Information Theft: Phony agents attempt to extract payment details.
  • Remote Access Scams: Users are tricked into installing remote access tools to “fix” non-existent problems.
  • Refund Scams: Victims are coerced into sending money back under false pretenses.
  • Account Takeover Attempts: Scammers begin hijacking accounts the moment a call is answered.

Unlike traditional phishing, these scams start directly within the AI’s response, making them harder to detect.

Why AI Is Easier to Poison Than Traditional Search

AI search tools are more vulnerable to poisoning for two main reasons:

  • Lack of Verification: AI models do not verify phone numbers, business listings, or URLs against authoritative sources the way traditional search engines do. Attackers exploit this by:
      ◦ Publishing fake but legitimate-looking business data.
      ◦ Creating SEO-optimized websites that AI models scrape.
      ◦ Submitting false support contacts to smaller directories.
      ◦ Generating entire scam ecosystems using AI tools.
  • Self-Reinforcing Cycle: Users who receive incorrect information may unknowingly republish it, creating a cycle in which AI models consume and spread the false data, lending it credibility.

Who’s Most at Risk?

The following groups are particularly vulnerable to AI search poisoning scams:

  • Older adults who rely on AI virtual assistants.
  • Anyone using AI to search for customer support phone numbers.
  • Users trying to contact airlines, delivery companies, app stores, banks, or subscription platforms.

In short, anyone using AI search tools could fall victim to these scams.

Warning Signs of a Scam Support Line

If you encounter any of the following during a support call, hang up immediately:

  • The agent requests payment via gift cards, wire transfers, or cryptocurrency.
  • You’re pressured to “act now” or told your account will be closed.
  • The agent requests remote access to your computer for a “routine” issue.
  • You’re asked for a full credit card number for a non-billing issue.
  • The support representative fails to verify basic account information.

If any of these red flags appear, hang up and call the official number from the company’s website.

Advice for AI Search Users

To protect yourself from AI search poisoning scams, follow these guidelines:

  • Never trust a support number from AI search without verifying it on the vendor’s official website.
  • Avoid calling numbers from random blogs, PDFs, Reddit posts, or forums.
  • Bookmark the official support pages of companies you frequently contact.
  • Enable Multi-Factor Authentication (MFA) to add an extra layer of security to your accounts.
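The first guideline above—verify any AI-suggested number against the vendor's official website—can be automated in a small way. The sketch below, with hypothetical placeholder vendors and 555 phone numbers, normalizes formatting differences (dashes, parentheses, country codes) so an AI-suggested number can be compared against a trusted allowlist you maintain yourself:

```python
import re

# Hypothetical allowlist of official support numbers, copied from each
# vendor's own website (the 555 numbers below are placeholders).
OFFICIAL_SUPPORT_NUMBERS = {
    "examplebank": "+18005550100",
    "exampleair": "+18005550123",
}

def normalize_number(raw: str) -> str:
    """Strip formatting so '+1 (800) 555-0100' and '1-800-555-0100' compare equal."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:  # assume a US number given without the country code
        digits = "1" + digits
    return "+" + digits

def is_official(vendor: str, candidate: str) -> bool:
    """Accept a candidate number only if it matches the vendor's published one."""
    official = OFFICIAL_SUPPORT_NUMBERS.get(vendor)
    return official is not None and normalize_number(candidate) == official

# A number suggested by an AI chatbot should be rejected unless it matches:
print(is_official("examplebank", "+1 (800) 555-0100"))  # True
print(is_official("examplebank", "1-888-555-9999"))     # False
```

The key design point is normalization before comparison: scammers often publish real-looking numbers whose only difference from the official one is formatting, which a naive string comparison would miss in the other direction.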

What Companies Should Do

While companies cannot entirely prevent AI poisoning, they can take steps to protect their customers:

  • Register official support numbers with Google Business Profile and other directory services.
  • Train staff to verify support channels before contacting vendors.
  • Include examples of fake support numbers in phishing and social engineering awareness programs.
  • Publish official support contacts clearly and consistently.
  • Monitor the web for fraudulent listings, typo-squatted domains, or look-alike websites.
  • Implement DMARC, DKIM, and SPF email-authentication records in DNS so spoofed support emails fail verification.
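On the last point, a DMARC policy is published as a TXT record at `_dmarc.<domain>`, and only a policy of `quarantine` or `reject` actually blocks spoofed mail—`p=none` merely reports it. A minimal sketch of checking this, assuming the TXT record has already been fetched (the `example.com` record below is illustrative):

```python
def parse_dmarc(txt_record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in txt_record.split(";"):
        key, _, value = part.strip().partition("=")
        if value:
            tags[key.strip()] = value.strip()
    return tags

def dmarc_is_enforcing(txt_record: str) -> bool:
    """True only if the record is valid DMARC and its policy blocks spoofed mail."""
    tags = parse_dmarc(txt_record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
print(dmarc_is_enforcing(record))              # True
print(dmarc_is_enforcing("v=DMARC1; p=none"))  # False
```

Companies running such a check periodically can catch misconfigurations—such as a monitoring-only `p=none` policy left in place indefinitely—that leave their support address spoofable.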

Conclusion

As users migrate to AI search tools, scammers are increasingly targeting these platforms due to their lack of stringent controls against false information. Many users remain unaware that AI outputs cannot always be trusted. The solution is straightforward: always verify contact information through traditional search engines and official vendor websites, and approach all AI-generated information with caution until confirmed.
