New research suggests that AI models, however sophisticated their language abilities appear, lack a full human-level grasp of sensory and physical concepts because they are disembodied. The finding has significant implications for AI development: multimodal training that incorporates sensory information may be crucial to building systems with more human-like comprehension.
The big picture: Researchers at Ohio State University discovered a fundamental gap between how humans and large language models understand concepts related to physical sensations and bodily interactions.
Key details: Because the models are trained only on text, the research found, they interpret sensory concepts in ways that diverge markedly from human judgments.
What they’re saying: “They just differ so much from humans,” notes lead researcher Qihui Xu, pointing to the limitations of text-based training for understanding sensory concepts.
Promising developments: AI models trained on multiple types of data, such as images alongside text, showed a more human-like understanding of these concepts.
Why this matters: The findings suggest that embodiment could be crucial for developing more human-like artificial intelligence.
Potential challenges: University of Maryland researcher Philip Feldman warns that giving AI robots physical bodies presents significant safety concerns.