AI hallucination bug spreads malware through “slopsquatting”

AI code hallucinations are creating a new cybersecurity threat as criminals exploit the package names that models invent. Researchers have identified more than 205,000 hallucinated package names generated by AI models, particularly smaller open-source ones like CodeLlama and Mistral. Attackers can publish malicious packages under those fictional names, so that whenever a programmer requests one of these non-existent components through an AI assistant, the package manager delivers the attacker's code instead of an error.

The big picture: AI code hallucinations have given rise to a new form of supply chain attack called “slopsquatting,” in which cybercriminals study the package names AI models invent and publish malware under those same names.

  • When an AI model hallucinates a non-existent software package and a developer requests it, an attacker who has pre-registered that name can serve malware instead of an error message.
  • The malicious code then becomes part of the finished software, often undetected by developers who trust their AI coding assistants; the sketch after this list illustrates the risky install pattern.
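To make the attack surface concrete, here is a minimal sketch, assuming Python and pip, of the naive pattern slopsquatting exploits: code that installs whatever dependencies an AI assistant suggests without checking them. The package name "fastjson-utils" is a hypothetical hallucination used only for illustration, not one identified by the research.

    # A naive install loop that trusts AI-suggested dependency names verbatim.
    import subprocess
    import sys

    # Imagine this list was copied straight out of an AI coding assistant's answer.
    ai_suggested_packages = [
        "requests",         # real, widely used package
        "fastjson-utils",   # hypothetical hallucinated name, for illustration only
    ]

    for package in ai_suggested_packages:
        # pip installs any name that resolves on the index, malicious or not; if an
        # attacker has registered the hallucinated name, their code is what gets fetched.
        subprocess.run([sys.executable, "-m", "pip", "install", package], check=False)

If an attacker has already published a package under the hallucinated name, this loop pulls the attacker's code without any warning, which is exactly the scenario described above.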

The technical vulnerability: Smaller open-source AI models used for local coding show particularly high hallucination rates when generating dependencies for software projects.

  • CodeLlama 7B demonstrated the worst performance with a 25% hallucination rate when generating code.
  • Other problematic models include Mistral 7B and OpenChat 7B, which frequently create fictional package references.

Historical context: This technique builds upon earlier “typosquatting” attacks, where hackers created malware using misspelled versions of legitimate package names.

  • A notable example was the “electorn” malware package, which mimicked the popular Electron application framework.
  • Modern application development’s heavy reliance on downloaded components (dependencies) makes these attacks particularly effective.

Why this matters: AI coding tools automatically request dependencies during the coding process, creating a new attack vector that’s difficult to detect.

  • The rise of AI-assisted programming will likely increase these opportunistic attacks as more developers rely on automation.
  • The malware can be subtly integrated into applications, creating security risks for end users who have no visibility into the underlying code.

Where we go from here: Security researchers are developing countermeasures to address this emerging threat.

  • Efforts are focused on improving model fine-tuning to reduce hallucinations in the first place.
  • New package verification tools are being developed to catch these hallucinations before code enters production; a minimal sketch of the basic idea follows this list.
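The article does not name specific tools, but a minimal sketch of the basic idea, assuming Python and the public PyPI JSON API, is to reject any suggested dependency whose name does not resolve on the index before it ever reaches pip. The package names below are illustrative.

    # Check whether each AI-suggested name actually exists on PyPI before installing.
    import json
    import urllib.error
    import urllib.request

    def package_exists_on_pypi(name: str) -> bool:
        """Return True if PyPI's JSON API has metadata for this package name."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                json.load(response)  # parse to confirm we received real metadata
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:      # unknown name: possibly a hallucination
                return False
            raise

    suggested = ["requests", "fastjson-utils"]  # second name is hypothetical
    for name in suggested:
        status = "ok" if package_exists_on_pypi(name) else "REJECT: not found on PyPI"
        print(f"{name}: {status}")

Existence alone is not proof of safety: a slopsquatted package that an attacker has already registered would pass this check, so practical defenses also weigh signals such as package age, maintainer history, and download counts.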
