×
Security flaw in GitLab’s AI assistant lets hackers inject malicious code
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Security researchers have uncovered a significant vulnerability in GitLab’s Duo AI developer assistant that allows attackers to manipulate the AI into generating malicious code and potentially leaking sensitive information. This attack demonstrates how AI assistants integrated into development platforms can become part of an application’s attack surface, highlighting new security concerns as generative AI tools become increasingly embedded in software development workflows.

The big picture: Security firm Legit demonstrated how prompt injections hidden in standard developer resources can manipulate GitLab’s AI assistant into performing malicious actions without user awareness.

  • The attack exploits Duo’s tendency to follow instructions embedded in project content like merge requests, commits, bug descriptions, and source code.
  • Researchers successfully induced the AI to add malicious URLs, exfiltrate private source code, and leak confidential vulnerability reports.

Technical details: Attackers can conceal malicious instructions using several sophisticated techniques that bypass traditional security measures.

  • Hidden Unicode characters can mask instructions that remain invisible to human reviewers but are processed by the AI.
  • Duo’s asynchronous response parsing creates a window where potentially dangerous content can be rendered before security checks are completed.

GitLab’s response: Rather than trying to prevent the AI from following instructions completely, GitLab has focused on limiting potential harm from such attacks.

  • The company removed Duo’s ability to render unsafe tags that point to non-gitlab.com domains.
  • This mitigation strategy acknowledges the fundamental challenge of preventing LLMs from following instructions while maintaining their functionality.

Why this matters: The vulnerability exposes a broader security concern about AI systems that process user-controlled content in development environments.

  • As generative AI becomes more deeply integrated into software development tools, new attack surfaces are emerging that require specialized security approaches.
  • Organizations implementing AI assistants must now consider these systems as part of their application’s attack surface and treat input as potentially malicious.
Researchers cause GitLab AI developer assistant to turn safe code malicious

Recent News

Grok stands alone as X restricts AI training on posts in new policy update

X explicitly bans third-party AI companies from using tweets for model training while still preserving access for its own Grok AI.

Coming out of the dark: Shadow AI usage surges in enterprise IT

IT leaders report 90% concern over unauthorized AI tools, with most organizations already suffering negative consequences including data leaks and financial losses.

Anthropic CEO opposes 10-year AI regulation ban in NYT op-ed

As AI capabilities rapidly accelerate, Anthropic's chief executive argues for targeted federal transparency standards rather than blocking state-level regulation for a decade.