New Delhi: Robots folding laundry, sorting trash or even packing your suitcase sound like a scene out of sci-fi. But Google DeepMind says that future may be closer than expected. On September 26, 2025, the company unveiled Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, two new AI models designed to help robots think, plan and act more like humans.
The announcement marks an important shift. Robots are no longer just machines waiting for single, direct instructions. With this update, they can reason through multi-step problems, look up information online if needed, and then carry out tasks in a more general way.
New Gemini Robotics 1.5 models will enable robots to better reason, plan ahead, use digital tools like Search, and transfer learning from one kind of robot to another. Our next big step towards general-purpose robots that are truly helpful — you can see how the robot reasons as… pic.twitter.com/kw3HtbF6Dd
— Sundar Pichai (@sundarpichai) September 25, 2025
How Gemini Robotics 1.5 works
The new system comes in two parts. Gemini Robotics 1.5 is a vision-language-action model that turns instructions into motor commands. It “thinks before acting” by breaking down a complex job into smaller steps. DeepMind researchers call this embodied thinking. For example, if asked to sort laundry, the model will reason step by step: identify a red sweater, decide which bin it belongs in, then move its robotic hand to pick it up.
The second model, Gemini Robotics-ER 1.5, acts like the high-level brain. It plans tasks, reasons about physical environments, and can even use digital tools such as web search. Carolina Parada, Head of Robotics at DeepMind, explained, “With this update, we’re now moving from one instruction to actually genuine understanding and problem-solving for physical tasks.”
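To make that division of labour concrete, here is a minimal, purely illustrative Python sketch of how a high-level planner like Gemini Robotics-ER 1.5 might hand sub-steps to a vision-language-action executor like Gemini Robotics 1.5. The class and method names (EmbodiedReasoner, VLAExecutor, plan, execute_step) are hypothetical placeholders invented for this example, not DeepMind’s actual API.

```python
# Illustrative sketch only: the classes and methods below are hypothetical
# stand-ins for the planner/executor split described in the article,
# not DeepMind's actual interface.

from dataclasses import dataclass
from typing import List


@dataclass
class Step:
    description: str   # e.g. "pick up the red sweater"
    done_check: str    # how the executor knows the step succeeded


class EmbodiedReasoner:
    """High-level 'brain': plans tasks and may consult digital tools such as web search."""

    def plan(self, instruction: str, scene_description: str) -> List[Step]:
        # In a real system this would call a reasoning model; here we hard-code a plan.
        if "laundry" in instruction:
            return [
                Step("identify the red sweater in the pile", "sweater located"),
                Step("decide which bin the red sweater belongs in", "bin chosen"),
                Step("move the arm to grasp the sweater and drop it in the bin",
                     "sweater in correct bin"),
            ]
        return [Step(instruction, "task complete")]


class VLAExecutor:
    """Vision-language-action model: turns one small step into motor commands."""

    def execute_step(self, step: Step) -> bool:
        # A real VLA model would emit joint and gripper commands from camera input.
        print(f"executing: {step.description}")
        return True  # pretend the step succeeded


def run_task(instruction: str, scene: str) -> None:
    reasoner, executor = EmbodiedReasoner(), VLAExecutor()
    for step in reasoner.plan(instruction, scene):
        if not executor.execute_step(step):
            break  # a real system would re-plan here


if __name__ == "__main__":
    run_task("sort the laundry", "a pile of clothes next to white and dark bins")
```

The point of the split is that the planner reasons in words about what to do next, while the executor only ever has to solve one short, concrete step at a time.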
Robots that can learn across different machines
One striking breakthrough is that skills learned on one robot can transfer to another. A task practiced on the ALOHA 2 robot can now also work on Apptronik’s humanoid Apollo or the bi-arm Franka robot. This cross-embodiment learning is made possible through a training recipe called Motion Transfer, which helps align knowledge between different robot types.
DeepMind says this could save years of training, as robots with very different shapes and sensors no longer need to start from scratch.
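DeepMind has not published the details of the Motion Transfer recipe in this announcement, but the general idea of cross-embodiment transfer can be sketched as learning a skill in a shared, robot-agnostic form and then mapping it onto each robot’s own hardware. The sketch below is an assumption-laden illustration of that idea only; the EMBODIMENTS table, the retarget function and the shared skill format are invented for the example and are not DeepMind’s method.

```python
# Assumption-laden illustration of cross-embodiment transfer: a skill is
# represented in a robot-agnostic way (end-effector poses plus gripper state)
# and "retargeted" onto robots with different grippers. This is NOT DeepMind's
# Motion Transfer recipe, just a sketch of the general idea.

from typing import Dict, List, Tuple

# A skill in a shared, embodiment-agnostic form:
# a sequence of (end-effector position xyz, gripper open fraction).
SharedSkill = List[Tuple[Tuple[float, float, float], float]]

# Hypothetical robot descriptions; real systems differ in arms, joints and sensors.
EMBODIMENTS: Dict[str, Dict] = {
    "ALOHA 2": {"arms": 2, "gripper": "parallel"},
    "Apollo":  {"arms": 2, "gripper": "hand"},
    "Franka":  {"arms": 2, "gripper": "parallel"},
}


def retarget(skill: SharedSkill, robot: str) -> List[Dict]:
    """Map the shared skill onto one robot's command format (toy version)."""
    spec = EMBODIMENTS[robot]
    commands = []
    for (x, y, z), grip in skill:
        commands.append({
            "robot": robot,
            "target_xyz": (x, y, z),
            # A hand-style gripper might interpret the same 0..1 signal differently.
            "grip": grip if spec["gripper"] == "parallel" else grip * 0.8,
        })
    return commands


# A "pick and place" skill defined once, reused on every embodiment.
pick_and_place: SharedSkill = [
    ((0.4, 0.0, 0.20), 1.0),  # reach above the object, gripper open
    ((0.4, 0.0, 0.05), 0.0),  # descend and close the gripper
    ((0.1, 0.3, 0.20), 0.0),  # carry the object toward the bin
    ((0.1, 0.3, 0.10), 1.0),  # release
]

if __name__ == "__main__":
    for robot in EMBODIMENTS:
        print(robot, "->", len(retarget(pick_and_place, robot)), "commands")
```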
Real-world examples
So what can these robots actually do? Early demos show them:
- Sorting trash into compost, recycling and landfill bins by checking local guidelines online.
- Packing a suitcase based on live weather forecasts in London or New York.
- Handling fragile or soft objects more carefully by adjusting grip strength and motion.
These may sound like modest chores, but for robotics researchers they represent a leap toward robots that can handle the messy, unpredictable nature of everyday environments.
What about safety?
DeepMind also highlighted safety. The models are tested against ASIMOV-2.0, a benchmark for semantic safety, which checks whether robots understand risks like heavy boxes or slippery floors. Gemini Robotics-ER 1.5 achieved state-of-the-art results in these tests, showing that safety is now built into the reasoning process.
Why this matters
For years, robots have struggled with multi-step reasoning. They could follow one instruction at a time but failed at longer tasks. By combining embodied reasoning with vision and action, Gemini Robotics 1.5 represents a step toward general-purpose robots that don’t just react but can plan, adapt and learn.
Whether it’s helping at home, in warehouses, or in hospitals, the implications are huge. The road to human-like robots is still long, but this feels like one of those milestone updates that push the industry forward.