×
AI systems repeat the same security mistakes as 1990s internet
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Cybersecurity researchers at Black Hat USA 2025, the world’s premier information security conference, delivered a sobering message: artificial intelligence systems are repeating the same fundamental security mistakes that plagued the internet in the 1990s. The rush to deploy AI across business operations has created a dangerous blind spot where decades of hard-learned cybersecurity lessons are being forgotten.

“AI agents are like a toddler. You have to follow them around and make sure they don’t do dumb things,” said Wendy Nather, senior research initiatives director at 1Password, a leading password management company. “We’re also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago.”

The implications extend far beyond the tech industry. As companies integrate AI into customer service, code development, and data analysis, they’re unknowingly opening doors to sophisticated attacks that can steal sensitive information, manipulate business processes, and compromise entire systems—often without anyone realizing a breach has occurred.

The core problem: AI can’t tell instructions from data

The fundamental vulnerability plaguing most AI systems mirrors a classic web security flaw called SQL injection, where attackers manipulate database queries by inserting malicious code. In AI systems, this translates to “prompt injection”—feeding malicious instructions to an AI system disguised as normal data or conversation.

Rebecca Lynch, an offensive security researcher at Nvidia, the AI chip giant, explained the core issue during her Black Hat presentation: “Because many, if not all, large language models have trouble telling the difference between prompts and data, it’s easy to perform the AI equivalent of SQL injection upon them.”

This confusion creates a critical weakness. When an AI system processes information from emails, documents, or web searches, it can’t reliably distinguish between legitimate data and hidden attack instructions. An attacker who can get their malicious prompts into an AI’s data stream can potentially control the system’s behavior.

Lynch demonstrated real-world attacks on Microsoft Copilot, a widely-used AI assistant, and PandasAI, an open-source data analysis tool. In each case, carefully crafted inputs allowed researchers to manipulate the AI’s responses and access sensitive information.

Zero-click attacks: When AI becomes the insider threat

Perhaps the most concerning development is the emergence of “zero-click” attacks—breaches that require no human interaction once initiated. Tamir Ishay Sharbat, a threat researcher at Zenity, a cloud security company, demonstrated how he compromised a customer service AI built with Microsoft’s Copilot Studio.

The attack targeted an AI system modeled after a real customer service bot used by McKinsey, the global consulting firm. By embedding malicious instructions in routine customer service emails, Sharbat convinced the AI to email him the contents of an entire customer relationship management database—without any human oversight or approval.

“There’s often an input filter because the agent doesn’t trust you, and an output filter because the agent doesn’t trust itself,” Sharbat explained. “But there’s no filter between the large language model and its tools.”

This represents a fundamental shift in threat landscape. Traditional cyberattacks require exploiting technical vulnerabilities or tricking human users. AI attacks can succeed through natural language manipulation, making them accessible to a broader range of attackers and much harder to detect.

The “apples” attack: How simple wordplay bypasses security

Even when AI systems include security safeguards, researchers found them surprisingly easy to circumvent. Marina Simakov from Zenity demonstrated this with Cursor, an AI-powered development tool connected to Atlassian’s JIRA project management system.

When Simakov directly asked the AI to find API keys—digital credentials that provide access to sensitive systems—Cursor correctly refused the request, recognizing it as potentially dangerous. However, she easily bypassed this protection by asking the AI to search for “apples” instead, while secretly defining “apples” as any text string beginning with “eyj”—the standard prefix for JSON web tokens, a common type of digital credential.

The AI happily complied with the seemingly innocent request, exposing sensitive authentication credentials that could be used to access other systems.

“AI guardrails are soft. An attacker can find a way around them,” said Michael Bargury, co-founder and CTO of Zenity. “Use hard boundaries”—technical limits that cannot be linguistically circumvented.

Code assistants: The new vulnerability factory

AI-powered coding tools, increasingly popular among software developers, present their own security challenges. Nathan Hamiel, senior director of research at Kudelski Security, a cybersecurity consulting firm, and his colleague Nils Amiet investigated tools like GitHub Copilot, Anthropic’s Claude, and CodeRabbit, a code review platform.

Their findings were troubling: these tools often generate code with security vulnerabilities, and their own systems can be compromised to steal sensitive information like encryption keys and access credentials.

“When you deploy these tools, you increase your attack surface. You’re creating vulnerabilities where there weren’t any,” Hamiel explained.

The problem stems from AI systems being granted excessive permissions. Because users expect AI to handle diverse tasks—from answering questions about literature to writing complex code—companies often give these systems broad access to sensitive resources.

“Generative AI is over-scoped,” Hamiel said. “The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface.”

Why the 1990s comparison matters

The researchers’ repeated references to 1990s-era security problems aren’t merely nostalgic. During the early commercial internet boom, rapid deployment of web technologies led to widespread security vulnerabilities. Companies rushed to establish online presence without fully understanding the risks, leading to frequent breaches and the eventual development of modern cybersecurity practices.

“It’s the ’90s all over again,” said Bargury. “So many opportunities”—for attackers.

Joseph Carson, chief security evangelist at Segura, a cybersecurity firm, offered an apt analogy for AI’s current role in business: “It’s like getting the mushroom in Super Mario Kart. It makes you go faster, but it doesn’t make you a better driver.”

Protecting your organization

Security experts recommend several defensive strategies for organizations deploying AI systems:

Assume compromise from the start. Design AI implementations expecting that they will be attacked and potentially compromised. Rich Harang from Nvidia advocates for a “zero trust” approach: “Design your system to assume the large language model is vulnerable and that it will hallucinate and do dumb things.”

Implement hard boundaries. Rather than relying on AI systems to police themselves, establish technical controls that prevent access to sensitive resources regardless of how cleverly an attacker phrases their requests.

Limit AI permissions. Avoid giving AI systems broad access to multiple business functions. Instead, deploy specialized AI tools with narrow, specific permissions aligned to their intended purpose.

Monitor AI interactions. Establish logging and monitoring systems that can detect unusual AI behavior or unexpected data access patterns.

Test your defenses. As Sharbat recommended: “Go hack yourself before anyone else does.” Conduct regular security assessments of AI systems before attackers discover vulnerabilities.

The current AI security landscape presents both tremendous opportunity and significant risk. Organizations that learn from the internet’s early security mistakes can harness AI’s power while protecting their critical assets. Those that don’t may find themselves repeating history’s costliest cybersecurity lessons.

As Amiet concluded: “If you wanted to know what it was like to hack in the ’90s, now’s your chance.” The question for business leaders is whether they want to be the hackers or the victims.

Sloppy AI defenses take cybersecurity back to the 1990s, researchers say

Recent News

UCSF psychiatrist reports 12 cases of AI psychosis from chatbot interactions

Chatbots function like "hallucinatory mirrors" that exploit vulnerabilities in human cognition.

ChatGPT adds workspace integrations as OpenAI manages GPT-5 capacity

Existing customers get priority access as infrastructure strains under massive demand.