
Figure robot shows OpenAI-integrated conversation skills


Figure Robotics has shared a demonstration video of the conversation capabilities of its Figure 01 robot after integrating OpenAI speech reasoning.

The video demonstrates two key capabilities:

  • Speech-to-Speech Reasoning (see the sketch after this list)
  • End-to-End Neural Networks
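
To make the first item concrete, here is a minimal sketch of what a speech-to-speech reasoning pipeline can look like using OpenAI's public Python client: transcribe audio with Whisper, reason over the transcript with a chat model, and synthesize the spoken reply with a TTS model. This illustrates the general pattern only; Figure has not published the internals of its integration, and the model names and system prompt below are assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def speech_to_speech(audio_path: str, out_path: str = "reply.mp3") -> str:
    """Illustrative speech-to-speech loop: transcribe, reason, synthesize."""
    # 1. Speech -> text (Whisper transcription)
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        )

    # 2. Text -> reply (the LLM reasoning step)
    chat = client.chat.completions.create(
        model="gpt-4",  # assumed model; the actual model used is not public
        messages=[
            {"role": "system", "content": "You are a helpful humanoid robot."},
            {"role": "user", "content": transcript.text},
        ],
    )
    reply = chat.choices[0].message.content

    # 3. Reply text -> speech (TTS)
    speech = client.audio.speech.create(
        model="tts-1", voice="alloy", input=reply
    )
    speech.stream_to_file(out_path)
    return reply

On a robot, the audio input would come from onboard microphones and the synthesized reply would play through onboard speakers, but the transcribe-reason-synthesize structure is the same.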

Two weeks ago, OpenAI announced an investment in Figure AI, and the two firms entered into a strategic partnership.

OpenAI also announced that it would provide AI and large language model (LLM) support to Figure under the newly formed partnership.


The video shows the Figure 01 robot holding a conversation with a demonstrator, driven end-to-end by neural networks. Brett Adcock, CEO of Figure, said: “There is no teleop and it was filmed at 1.0x speed and shot continuously.”

Figure Robot (Image Credit: Figure)

He explained that the robot is performing actions quickly and that the company is using human-like movement speed as its benchmark.

The robot’s onboard cameras feed images into a large vision-language model (VLM) trained by OpenAI. The neural networks take in camera images at 10 Hz and output 24-degree-of-freedom actions at 200 Hz.
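
Those two rates imply a classic two-rate control loop: a slow perception/policy step at 10 Hz produces fresh 24-DOF targets, while a fast loop streams interpolated commands at 200 Hz. Below is a minimal Python sketch of that pattern; the function names (fake_policy_step, send_to_actuators) are hypothetical stand-ins, not Figure's code.

import time
import numpy as np

NUM_DOF = 24        # action dimensionality reported for the robot
POLICY_HZ = 10      # image/policy rate from the article
CONTROL_HZ = 200    # action output rate from the article
STEPS_PER_POLICY = CONTROL_HZ // POLICY_HZ  # 20 control ticks per policy step

def fake_policy_step(image):
    """Stand-in for the OpenAI-trained VLM policy: image -> 24-DOF target."""
    return np.random.uniform(-1.0, 1.0, NUM_DOF)

def send_to_actuators(command):
    """Hypothetical hardware interface; a real robot would stream this out."""
    pass

def run(duration_s=2.0):
    prev_target = np.zeros(NUM_DOF)
    target = np.zeros(NUM_DOF)
    for t in range(int(duration_s * CONTROL_HZ)):
        if t % STEPS_PER_POLICY == 0:
            # Slow path (10 Hz): grab a camera frame and run the policy.
            image = None  # placeholder for an onboard camera frame
            prev_target, target = target, fake_policy_step(image)
        # Fast path (200 Hz): interpolate from the old target to the new one.
        alpha = (t % STEPS_PER_POLICY + 1) / STEPS_PER_POLICY
        send_to_actuators((1 - alpha) * prev_target + alpha * target)
        time.sleep(1.0 / CONTROL_HZ)

if __name__ == "__main__":
    run()

The interpolation keeps actuator commands smooth even though fresh targets only arrive every 100 ms.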


Another thing to note is that the hand manipulation is noticeably improved in this video compared to past demonstrations, which is substantial progress for Figure Robotics.

Beyond that, the new end-to-end neural network gives Figure room to keep experimenting with the robot-to-human conversation interface.
