Figure unveils first-of-its-kind brain for humanoid robots after shunning OpenAI
Helix introduces a novel approach to upper-body manipulation control.
In a significant move in the AI world, California-based Figure has revealed Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome several longstanding challenges in robotics.
Brett Adcock, founder of Figure, said that Helix is the most significant AI update in the company’s history.
“Helix thinks like a human… and to bring robots into homes, we need a step change in capabilities. Helix can generalize to virtually any household item,” Adcock said in a social media post.
“We’ve been working on this project for over a year, aiming to solve general robotics. Like a human, Helix understands speech, reasons through problems, and can grasp any object – all without needing training or code. In testing, Helix can grab almost any household object,” he added.
The launch of Helix follows Figure’s announcement of its separation from OpenAI in early February.
Adcock stated at that time, “Figure has achieved a significant breakthrough in fully end-to-end robot AI, developed entirely in-house. We are excited to reveal something that no one has ever seen before in a humanoid within the next 30 days.”
A series of world-first capabilities
According to Figure, Helix introduces a novel approach to upper-body manipulation control.
It offers high-rate continuous control of the entire humanoid upper body, including the wrists, torso, head, and individual fingers.
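To make the idea concrete: "high-rate continuous control" typically means a policy emitting one flat vector of joint targets for the whole upper body, streamed to the robot on a fixed clock. Figure has not published Helix's interface, so the names, joint counts, and control rate below are purely illustrative assumptions, not the company's API:

```python
import time
from dataclasses import dataclass

# Hypothetical sketch only -- Helix's real interface is not public.
# It illustrates a single continuous action vector spanning head,
# torso, wrists, and individual fingers, refreshed at a fixed rate.

@dataclass
class UpperBodyAction:
    head: list     # e.g. [pan, tilt] targets in radians (assumed layout)
    torso: list    # torso joint targets
    wrists: list   # left/right wrist pose targets
    fingers: list  # per-finger joint targets for both hands

    def as_vector(self) -> list:
        """Flatten into the single continuous vector a policy would emit."""
        return self.head + self.torso + self.wrists + self.fingers

def control_loop(policy, send_command, hz: float = 200.0, steps: int = 5):
    """Fixed-rate loop: query the policy and stream commands `hz` times/sec."""
    period = 1.0 / hz
    for _ in range(steps):
        start = time.monotonic()
        action = policy()                 # policy returns an UpperBodyAction
        send_command(action.as_vector())  # stream the flat vector downstream
        # Sleep off the remainder of the tick to hold the control rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```

A dummy policy returning a constant `UpperBodyAction` is enough to exercise the loop; in a real system the policy call would be the learned model's forward pass and `send_command` the robot's actuator bus.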