×
AI-powered search makes phone scams easier—here’s how to protect yourself
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Artificial intelligence has fundamentally changed how people search for information online, but this technological leap has created an unexpected vulnerability: scammers are now exploiting AI-powered search results to steal money from unsuspecting users looking for customer service numbers.

Unlike traditional search engines that display multiple results for verification, AI systems like Google’s AI Overviews and ChatGPT often present a single, authoritative-seeming answer. This streamlined approach, while convenient, creates a perfect storm for fraud when criminals manage to inject fake contact information into these AI responses.

How scammers are exploiting AI search

The mechanics of this scam are deceptively simple yet sophisticated. Criminals have discovered ways to manipulate AI systems into displaying fraudulent phone numbers when users search for customer service contacts. When someone searches for “Royal Caribbean customer service” or “Amazon support number,” they might receive what appears to be an official response from the AI—complete with a fake phone number controlled by scammers.

Alex Rivlin, owner and CEO of real estate firm Rivlin Group, learned this lesson the hard way. Despite considering himself cautious about online security, Rivlin fell victim to a Royal Caribbean scam that began with what seemed like a legitimate phone number from Google’s AI search results.

“I pride myself on being cautious,” Rivlin shared in a Facebook post. “I don’t click links, I don’t give personal info over the phone, and I always verify. But I still got caught in a very sophisticated scam—and it all started with what looked like a legit phone number for Royal Caribbean I found on Google.”

The scammers demonstrated remarkable preparation, providing accurate pricing information, industry terminology, and specific details about shuttle services. Only after discovering fraudulent charges on his credit card statement did Rivlin realize he’d been duped.

A similar incident involved Swiggy Instamart, an Indian food delivery service. When a customer’s order arrived incomplete, they searched Google for “Swiggy customer care number” and called the number that appeared in the results. The fake customer service representative asked legitimate-sounding questions before requesting the caller’s WhatsApp number and asking them to share their screen—red flags that prompted the customer to end the call. Notably, Swiggy doesn’t actually offer phone support, relying instead on chat-based assistance.

Why AI makes this problem worse

Traditional search engines present users with multiple results from various sources, naturally encouraging comparison and verification. However, AI-powered search systems are designed to provide definitive answers, often presenting a single response that appears authoritative and complete. This design philosophy, while improving user experience in legitimate cases, inadvertently increases the likelihood that users will trust and act on fraudulent information.

The problem extends beyond Google’s systems. Scammers have also successfully manipulated ChatGPT and other AI platforms using similar techniques. Security experts at Odin and ITBrew recently demonstrated how hackers can use “prompt injection”—essentially feeding specific commands to AI systems—to force platforms like Google Gemini to include scam messages and fake customer service numbers in their responses.

When an AI system encounters these injected commands, it treats them as legitimate instructions, incorporating the fraudulent information into what appears to be a standard, helpful response to a user’s query.

Company responses and ongoing challenges

Google acknowledges the problem and claims to have “strong protections and policies to prevent scams from appearing in AI Overviews or ranking highly on Search.” The company states its systems are “effective at surfacing official customer service information for the queries people search most” and that it has “taken action on several of the examples shared.”

Similarly, OpenAI reports that many pages containing fake numbers referenced by ChatGPT have been removed, though the company notes that such updates can take time to implement across all systems.

However, the cat-and-mouse nature of this problem means that as companies close one avenue of attack, scammers adapt and find new methods to exploit AI systems.

Protecting yourself from AI-powered scams

Bypass AI search entirely
The most reliable protection is avoiding AI-powered search results when looking for customer service information. Add “–AI” to your Google search query to access traditional search results that show multiple sources for comparison. Better yet, navigate directly to the company’s official website to find contact information.

Verify before you call
Before calling any customer service number found through search, cross-reference it with the official company website. Many businesses don’t actually offer phone support, relying instead on email, chat, or online support systems.

Recognize common scam tactics
Legitimate customer service representatives rarely ask customers to share their screen, provide WhatsApp numbers, or request immediate payment information without proper verification procedures. Be particularly suspicious of agents who seem overly knowledgeable about pricing and services but ask for unusual forms of contact or payment.

Check website authenticity
When visiting websites found through search results, look for signs of legitimacy: proper spelling and grammar, professional formatting, secure HTTPS connections, and official company branding. Suspicious websites often contain odd formatting, unusual fonts, or unexpected characters.

Use Google’s verification tools
Click the three dots next to search results to access Google’s “About this result” feature, which provides information about the source before you visit the website or use contact information.

For business owners
Companies should actively monitor how their customer support information appears in AI search results and work with search engines to ensure accurate contact details are prominently displayed. Consider creating structured data markup on your website to help AI systems identify and display correct customer service information.

The broader implications

This emerging threat highlights a fundamental challenge in the AI era: the same technologies that make information more accessible also create new vulnerabilities for exploitation. As AI systems become more sophisticated and widely adopted, the potential impact of successful manipulations grows correspondingly larger.

The problem is particularly concerning because it targets a basic trust relationship between users and technology. When people search for customer service information, they’re typically experiencing a problem that needs resolution—making them more vulnerable to exploitation and less likely to scrutinize results carefully.

For businesses, this trend represents both a security challenge and a customer service issue. Companies must now actively monitor how their brand appears in AI search results and ensure customers can easily distinguish between legitimate and fraudulent contact information.

As AI continues to reshape how people access information online, the responsibility for security increasingly falls on both technology companies to improve their systems and users to maintain healthy skepticism—even when dealing with seemingly authoritative AI responses.

Scammers have infiltrated Google's AI responses - how to spot them

Recent News

Iowa teachers prepare for AI workforce with Google partnership

Local businesses race to implement AI before competitors figure it out too.

Fatalist attraction: AI doomers go even harder, abandon planning as catastrophic predictions intensify

Your hairdresser faces more regulation than AI companies building superintelligent systems.

Microsoft brings AI-powered Copilot to NFL sidelines for real-time coaching

Success could accelerate AI adoption across other major sports leagues and high-stakes environments.