UC Berkeley researchers are working on a new type of robot that can predict the future consequences of its own actions. How? Through a learning technology that lets machines imagine the outcomes of their actions, so they can work out how to manipulate objects they have never encountered before. In the future, this technology could, for example, help self-driving cars anticipate events on the road, or produce more capable robotic assistants for domestic chores. For now, the initial prototype focuses on simple manual skills: the robot predicts what its camera will see if it performs a specific sequence of movements. These robotic imaginings are still relatively simple, but they are enough for the robot to work out how to move objects around on a table without hitting obstacles.
The robot learns to perform these tasks without any help from humans and without prior knowledge of physics or of the objects and environment around it: its visual imagination is learned entirely from scratch, through autonomous exploration of the space within its reach. After this phase, the robot has built a predictive model of the world that it can later use to manipulate new objects it has never seen before. Recent improvements have enabled the machine to learn increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects on the same surface.
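The idea described above, predicting the outcome of a candidate sequence of movements and then choosing the sequence whose imagined result looks best, can be sketched as a simple planning loop. The sketch below is only an illustration under strong assumptions: a toy 2D point stands in for the predicted camera view, and a trivial additive function stands in for the learned video-prediction network; the function names (`predicted_next_state`, `plan`) are hypothetical, not from the Berkeley system.

```python
import numpy as np

rng = np.random.default_rng(0)

def predicted_next_state(state, action):
    # Stand-in for the learned predictive model: in the real system this
    # would be a network that imagines the next camera frame given an action.
    return state + action

def plan(state, goal, horizon=5, n_candidates=200):
    """Random-shooting planner: sample candidate action sequences, roll each
    one through the predictive model, and keep the sequence whose imagined
    final state lands closest to the goal."""
    best_seq, best_cost = None, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        s = state
        for a in seq:                      # "imagine" the whole rollout
            s = predicted_next_state(s, a)
        cost = np.linalg.norm(s - goal)    # how far from the desired outcome
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

# Usage: plan a short push from the origin toward a target position.
seq, cost = plan(np.zeros(2), np.array([2.0, -1.0]))
print(seq.shape, cost)
```

The design point this illustrates is that the planner never needs hand-coded physics: all of its "knowledge" lives in the predictive model, which in the actual research is learned from autonomous interaction.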
Just as children learn about the world by playing with toys, moving them around in spaces they have never seen before, the robots of Professor Sergey Levine’s team discover on their own how the world works through autonomous interaction.