Google DeepMind has released Gemini Robotics-ER 1.5, a new embodied reasoning model designed specifically for robotics that can perform spatial reasoning, multi-step planning, and the orchestration of task execution in physical environments. The model represents a significant shift toward separating reasoning from execution in robotics AI, a split that could make it easier to adapt advanced AI capabilities across different hardware platforms and accelerate the development of general-purpose robots.
What you should know: Gemini Robotics-ER 1.5 is now available in preview through Google AI Studio and the Gemini API, offering developers a new approach to robotic intelligence.
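In practice, that means the model is called like any other Gemini model. The sketch below is a minimal example assuming the google-genai Python SDK and the preview identifier `gemini-robotics-er-1.5-preview`; the prompt and setup are illustrative rather than taken from Google's documentation.

```python
# pip install google-genai
from google import genai

# The client reads the GEMINI_API_KEY environment variable by default.
client = genai.Client()

# Ask the embodied reasoning model for a step-by-step plan; the model
# name below is an assumed preview identifier.
response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",
    contents="Plan, as numbered steps, how a robot should clear two mugs "
             "from the table and load them into the dishwasher.",
)
print(response.text)
```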
How it works: The system uses a dual-model architecture that separates high-level reasoning from physical execution: Gemini Robotics-ER 1.5 acts as the high-level "brain" that interprets a scene and breaks a task into steps, while a separate vision-language-action model translates those steps into motor commands.
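To make the division of labor concrete, here is a hypothetical sketch of the loop such an architecture implies. None of these names come from Google's API; `plan_task` stands in for the reasoning model and `execute_step` for the separate action model or robot controller.

```python
from dataclasses import dataclass


@dataclass
class Step:
    description: str  # natural-language subgoal produced by the reasoner
    target: str       # object the step acts on


def plan_task(instruction: str) -> list[Step]:
    """High-level reasoning (the ER model's role). A real system would
    call the Gemini API here; this stub returns a canned plan."""
    return [
        Step("locate the red block", "red block"),
        Step("pick up the red block", "red block"),
        Step("place the block in the bin", "bin"),
    ]


def execute_step(step: Step) -> bool:
    """Low-level execution (a separate VLA model or controller). The
    stub just reports success; real hardware would return feedback."""
    print(f"executing: {step.description}")
    return True


def run(instruction: str) -> None:
    # Reasoning and actuation stay decoupled: swapping execute_step
    # retargets the same plan to different hardware.
    for step in plan_task(instruction):
        if not execute_step(step):
            break  # a full system would replan here


run("put the red block in the bin")
```

Because only `execute_step` touches the hardware, the same planner can in principle drive a dual-arm lab robot or a humanoid, which is the portability argument behind the split.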
Performance metrics: DeepMind, Google’s AI research division, reports that the system achieved state-of-the-art performance across 15 robotics benchmarks and demonstrated compatibility with platforms ranging from dual-arm lab robots to humanoids.
Who else is involved: Partners like Apptronik, a humanoid robotics company, and more than 60 testers are currently working with the system, indicating significant industry interest in the technology.
Competitive landscape: Gemini Robotics-ER differs from other large models applied to robotics, such as Nvidia's vision-language-action (VLA) models, by emphasizing controllable reasoning depth and safety mechanisms rather than focusing solely on direct perception-to-action mapping.
What they’re saying: Industry experts see transformational potential in the general-purpose approach.
Mixed reactions: While robotics engineers highlighted the tunable reasoning budget as a useful lever for trading accuracy against response time, some raised concerns that separating reasoning from actuation could add latency to robot operations.
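The tradeoff both camps describe comes down to a single knob. A minimal sketch, assuming the thinking-budget configuration that the google-genai SDK exposes for Gemini 2.5 models also applies to the robotics preview; the model name and budget values are illustrative.

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment


def ask(prompt: str, budget: int) -> str:
    # thinking_budget caps the tokens spent on internal reasoning:
    # small budgets favor latency-sensitive control, large budgets
    # favor accuracy on complex multi-step plans.
    response = client.models.generate_content(
        model="gemini-robotics-er-1.5-preview",  # assumed preview name
        contents=prompt,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=budget),
        ),
    )
    return response.text


fast = ask("Is the gripper holding anything right now?", budget=0)
deep = ask("Plan how to sort this bin of mixed recycling.", budget=2048)
```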