Surfshark report reveals alarming data collection by AI chatbots

AI-powered chatbots have become essential tools for information gathering and content creation, but they come with significant privacy trade-offs. A new Surfshark analysis reveals striking differences in data collection practices among popular AI services, with some platforms collecting more than 90% of the data types studied. The findings highlight the hidden costs of “free” AI assistance and underscore the importance of privacy awareness when selecting AI tools.

The big picture: All 10 popular AI chatbots analyzed by Surfshark collect some form of user data, with the average service collecting 13 out of 35 possible data types.

  • Nearly half (45%) of the examined AI apps gather location data from users.
  • Almost 30% of these AI services track user information for targeted advertising purposes.

Behind the numbers: Meta AI emerged as the most aggressive data collector, harvesting 32 of the 35 possible data types, roughly 91% of all potential user information.

  • Google Gemini follows as the second most data-hungry AI, collecting 22 different data types.
  • Other significant collectors include Poe (14 data types), Claude (13 data types), and Microsoft Copilot (12 data types).
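As a quick sanity check on the percentages, each service’s share of the 35 tracked data types follows directly from the counts above (a minimal sketch using only the figures cited in this article):

```python
# Share of Surfshark's 35 tracked data types collected per service,
# using the counts reported in the article.
TOTAL_TYPES = 35

collected = {
    "Meta AI": 32,
    "Google Gemini": 22,
    "Poe": 14,
    "Claude": 13,
    "Microsoft Copilot": 12,
}

for service, count in collected.items():
    share = count / TOTAL_TYPES * 100
    # e.g. Meta AI: 32/35 = 91%
    print(f"{service}: {count}/{TOTAL_TYPES} = {share:.0f}%")
```

By the same arithmetic, the average service’s 13 data types works out to about 37% of the categories tracked.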

Key details: Surfshark’s analysis examined privacy information from Apple’s App Store alongside privacy policies for services like DeepSeek and ChatGPT to create a comprehensive picture of data collection practices.

  • The study tracked 35 distinct categories of user information, including sensitive data like contact details, health information, financial data, location, and biometric identifiers.
  • Particularly concerning is the collection of “sensitive info,” which can include racial data, sexual orientation, pregnancy information, religious beliefs, and political opinions.

Why this matters: While data collection is standard practice across digital platforms, the extensive harvesting by AI chatbots raises significant privacy concerns as these tools become increasingly embedded in daily workflows and personal assistance.

  • Users often trade personal data for “free” AI services without fully understanding the scope of information being collected.
  • This data can potentially be used for targeted advertising, algorithmic profiling, or shared with third parties without explicit user awareness.

Reading between the lines: The dramatic variation in data collection practices between different AI providers suggests that extensive data harvesting isn’t technically necessary for providing AI assistant services.

  • Services collecting fewer data types demonstrate that AI functionality doesn’t inherently require the level of surveillance implemented by the most aggressive collectors.
