The team demonstrated their system by having a two-fingered robot, RiceGrip, reshape deformable foam into a desired shape, much as a chef might shape sushi rice. The robot used a depth camera and object recognition to locate the foam, then used the learned model to represent it as a dynamic graph of deformable particles. Although the model started with an idea of how the particles would react, it adapted on the fly whenever the "sushi" behaved in ways it did not expect.
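To make that predict-and-adapt loop concrete, here is a minimal, hypothetical sketch of the idea: the foam is a set of particles, a single made-up "stiffness" parameter stands in for the learned dynamics model, and the parameter is nudged whenever the model's predicted particle motion disagrees with what is observed. None of the names or numbers come from the MIT system; this is only an illustration of online model adaptation, not the team's actual particle-based network.

```python
# Toy sketch only (not the authors' code): the object is a set of particles,
# a one-parameter "stiffness" model predicts how they move when squeezed,
# and the parameter is adapted online when predictions disagree with
# the observed (here, simulated) particle positions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical particle state: N particles with 2-D positions (e.g. a foam cross-section).
particles = rng.uniform(-1.0, 1.0, size=(32, 2))
stiffness = 0.5        # the model's initial guess at the material parameter
true_stiffness = 0.8   # the "real" foam, used only to simulate camera observations


def predict(positions, grip_force, stiffness):
    """Predict next particle positions: each particle is pushed toward the
    centroid in proportion to the grip force and the assumed stiffness."""
    centroid = positions.mean(axis=0)
    return positions + stiffness * grip_force * (centroid - positions)


def adapt(stiffness, predicted, observed, positions, grip_force, lr=0.5):
    """Least-squares style update nudging stiffness toward the value that
    best explains the observed motion (a stand-in for model adaptation)."""
    centroid = positions.mean(axis=0)
    direction = grip_force * (centroid - positions)  # sensitivity of the prediction to stiffness
    residual = predicted - observed
    step = np.mean(residual * direction) / (np.mean(direction ** 2) + 1e-8)
    return stiffness - lr * step


# Simulated squeeze-observe-adapt loop: the stiffness estimate drifts toward
# the true value as prediction errors are fed back into the model.
for i in range(5):
    grip_force = 0.3
    predicted = predict(particles, grip_force, stiffness)
    observed = predict(particles, grip_force, true_stiffness)  # stands in for the depth-camera reading
    stiffness = adapt(stiffness, predicted, observed, particles, grip_force)
    particles = observed
    print(f"squeeze {i}: stiffness estimate = {stiffness:.3f}")
```

Run as-is, the stiffness estimate climbs from 0.5 toward the simulated true value of 0.8 over a few squeezes, which is the same spirit as the robot refining its model of the foam each time the material responds differently than expected.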
It is still early days, and the researchers want to extend the approach to partially observable situations (such as predicting how a pile of boxes will fall without seeing every box) and to have it work directly from images. If and when that happens, it would be a breakthrough for robots, making it far easier to manipulate virtually all types of objects, including liquids and soft solids whose behavior is otherwise hard to predict in advance. Robots may not replace sushi chefs anytime soon, but MIT's learning method makes the prospect much more realistic.