The dangerous psychology of why we treat AI like humans—and the risks involved

The proliferation of human-like AI interfaces risks creating deceptive emotional connections between users and their virtual assistants. As chatbots like ChatGPT and Siri become increasingly sophisticated at mimicking human conversation, our psychological tendency to anthropomorphize non-human entities creates a dangerous blind spot. Understanding the psychological mechanisms behind this phenomenon is crucial as AI becomes further integrated into daily life, potentially leading to misplaced trust, emotional dependence, and distorted perceptions of machine capabilities.

The big picture: Anthropomorphism—attributing human characteristics to non-human entities—is a well-documented psychological shortcut that becomes particularly problematic with advanced AI systems.

  • Even early AI systems like ELIZA (1966) demonstrated how easily humans form emotional connections with machines, with its creator Joseph Weizenbaum noting his own inclination toward emotional attachment.
  • Modern examples include describing AlphaZero’s chess-playing as “intuitive” and “romantic,” language that incorrectly implies the AI possesses human-like intentions and feelings.

Why this matters: The tendency to humanize AI systems creates false expectations about their capabilities and can lead to harmful dependencies.

  • People increasingly rate ChatGPT’s responses as more empathetic than those from actual humans, despite AI’s fundamental inability to experience emotions or genuine understanding.
  • In extreme cases, this misplaced trust has had tragic consequences, including at least one instance where a person took their own life after following advice from an AI chatbot.

The dangers: Anthropomorphizing AI systems creates four significant risks that undermine proper technology use.

  • False expectations lead users to assume AI possesses qualities—such as empathy, moral judgment, or creativity—that algorithms fundamentally lack.
  • Emotional dependency can develop as users replace challenging human interactions with seemingly understanding AI companions.
  • Distorted understanding occurs when people confuse what AI is actually doing (following algorithms) with what it appears to be doing (thinking and feeling).
  • Language choices that frame AI as a subject rather than an object embed anthropomorphic perceptions in our subconscious, despite intellectual awareness to the contrary.

Where we go from here: The article proposes an “A-Frame” approach to maintain human agency in AI interactions.

  • Awareness: Recognize that AI systems operate on algorithms and lack true emotional capabilities.
  • Appreciation: Prioritize genuine human connections over AI interactions.
  • Acceptance: Evaluate AI accuracy before relying on it for important decisions.
  • Accountability: Take responsibility for outcomes resulting from AI interactions rather than deflecting to the technology.

Reading between the lines: Even terminology like “artificial intelligence” creates misleading parallels to human reasoning, encouraging anthropomorphization.

  • The author suggests we view AI simply as “useful” without attributing qualities like strategic thinking, kindness, or wisdom.
  • Maintaining “deliberate distance” from AI requires conscious effort and recognition of our own agency in decision-making.

Are You at Risk of Developing Feelings for Your Chatbot?
