“There is no computer graphics, video acceleration or scripted playback in the demonstration. Everything is controlled by a neural network. At 1X speed”.
This is how the Norwegian artificial intelligence and robotics company 1X describes the work of its androids in a new video. OpenAI previously backed 1X with a $25 million Series A round; the subsequent $100 million Series B showed just how much the backing of ChatGPT's developer counts for these days.
Compared to the versions Tesla or Agility are working on, the 1X robots look somewhat “unarmed”: the humanoid, named Eve, does not yet have dexterous, human-like hands (though it has something resembling claws), nor legs (the robot simply rolls along on a pair of drive wheels, balancing on a small third wheel at the back).
1X does, in fact, already have a bipedal version, Neo, with seemingly well-designed hands. The company apparently believes that basic general-purpose work does not require hands with aesthetically pleasing “musical” fingers, and that on the concrete floors of factory warehouses it is better to ride on wheels.
At the same time, neither bipedal walking nor delicate object manipulation is the main obstacle to putting working androids into service. The real challenge is learning tasks quickly and then performing them autonomously (Tesla's Optimus, for example, raised questions on exactly this point when it folded a shirt in a recent demo).
It is in this context that the advantage 1X demonstrates in the video is worth noting:
The tasks shown are not extremely difficult, and the robots handle them quite successfully and autonomously: grabbing objects or picking them up from the floor, putting them into boxes, and the like. They also open doors, drive up to charging stations, and plug themselves in.
Essentially, the company trained 30 Eve bots on a set of individual tasks using video-based imitation training and teleoperation. The resulting behavior model was then adapted to the demands of a specific environment (warehouse tasks, general door manipulation, and so on) and, finally, the bots were taught the specific jobs they were expected to perform.
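For illustration, here is a minimal sketch of what such a staged training recipe might look like, assuming a simple behavior-cloning setup in PyTorch. The model architecture, data shapes, and the `make_policy` / `clone_behavior` / `fake_demos` helpers are hypothetical stand-ins, not 1X's actual pipeline:

```python
# A minimal sketch of the staged training recipe described above, assuming a
# behavior-cloning setup; sizes, stages and helpers are hypothetical.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 8  # placeholder observation/action sizes

def make_policy() -> nn.Module:
    # Tiny MLP standing in for the real visuomotor behavior model.
    return nn.Sequential(
        nn.Linear(OBS_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, ACT_DIM),
    )

def clone_behavior(policy, demos, epochs=5, lr=1e-4):
    # Behavior cloning: regress the demonstrated action from the observation.
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, act in demos:
            opt.zero_grad()
            loss_fn(policy(obs), act).backward()
            opt.step()
    return policy

def fake_demos(n_batches=10, batch=32):
    # Synthetic stand-in for teleoperation / video-derived demonstrations.
    return [(torch.randn(batch, OBS_DIM), torch.randn(batch, ACT_DIM))
            for _ in range(n_batches)]

# Stage 1: base model trained on pooled demonstrations from many robots/tasks.
policy = clone_behavior(make_policy(), fake_demos())

# Stage 2: fine-tune on one environment's data (e.g. warehouse, doors),
# with a smaller learning rate to avoid overwriting the base skills.
policy = clone_behavior(policy, fake_demos(), lr=1e-5)

# Stage 3: final fine-tune on the specific job the robot will actually do.
policy = clone_behavior(policy, fake_demos(), lr=1e-5)
```

The point of the staging is that each step reuses the same weights, so environment- and task-specific data only needs to adjust the base skills rather than rebuild them from scratch.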