New Robot Mastering Tough Terrains with Advanced Depth Perception

The Future of Robotics: Training Four-Legged Robots to See in 3D

Enhancing Robots’ Understanding of Their Surroundings

Researchers at the University of California San Diego have developed a new model that improves four-legged robots’ ability to perceive the world in three dimensions. This advance lets the robots navigate challenging terrain such as stairs, rocky ground, and paths with gaps, while avoiding obstacles along the way.

Presentation at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR)

The team will present the work at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR), held June 18-22 in Vancouver, Canada. CVPR is one of the leading gatherings of experts in computer vision and pattern recognition.

Improving 3D Perception Through Innovative Technology

To enhance the robot’s 3D perception, the researchers equipped it with a forward-facing depth camera positioned at an angle that allows it to capture both the surroundings and the terrain below. They then developed a model that translates the 2D images from the camera into a 3D space, enabling the robot to better comprehend its environment.
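As a rough illustration of that 2D-to-3D step, the sketch below back-projects a depth image into a point cloud in the camera frame using a standard pinhole model. The image size and the intrinsics (fx, fy, cx, cy) are placeholder values for illustration, not parameters from the paper.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W), in meters, into an (H*W, 3) point cloud
    in the camera frame using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a forward-facing depth frame tilted toward the ground (made-up values)
depth = np.random.uniform(0.2, 3.0, size=(60, 80))
points = depth_to_points(depth, fx=100.0, fy=100.0, cx=40.0, cy=30.0)
print(points.shape)  # (4800, 3)
```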

The model analyzes a short video sequence consisting of the current frame and a few previous frames. From each 2D frame it extracts 3D information, together with information about the robot’s leg movements (joint angle, joint velocity, and distance from the ground), and compares the previous frames with the current one to estimate the 3D transformation between the past and the present.
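The paper’s actual architecture is not detailed here, but a minimal sketch of the idea might pair features from a past frame and the current frame with a proprioception vector and regress a 6-DoF relative transform. The RelativePoseHead module, the feature size, and the 36-dimensional proprioception vector are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelativePoseHead(nn.Module):
    """Toy head that predicts a 6-DoF transform (translation plus axis-angle
    rotation) between a past frame and the current frame, conditioned on
    proprioception such as joint angles and velocities."""
    def __init__(self, feat_dim=128, proprio_dim=36):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + proprio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 6),  # [tx, ty, tz, rx, ry, rz]
        )

    def forward(self, feat_past, feat_now, proprio):
        return self.mlp(torch.cat([feat_past, feat_now, proprio], dim=-1))

# Example with made-up sizes: batch of 4 clips, 128-d frame features
head = RelativePoseHead()
pose = head(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 36))
print(pose.shape)  # torch.Size([4, 6])
```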

The model fuses all of this information into a single representation of the scene and uses that representation to synthesize the previous frames. As the robot moves, it compares the synthesized frames with the frames the camera actually captured. If they match, the model knows it has learned the correct 3D representation; if not, it adjusts the representation until the two agree.
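In spirit, this is a self-supervised reconstruction objective: the mismatch between re-rendered and observed frames becomes a loss whose gradients drive the adjustments. A minimal PyTorch sketch, with made-up tensor shapes, could look like this:

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(synth_frames, observed_frames):
    """Self-supervised objective: penalize the mismatch between frames the model
    re-renders from its 3D representation and the frames the camera captured."""
    return F.mse_loss(synth_frames, observed_frames)

# Example: a batch of 4 clips, each with 3 past depth frames of size 60x80
synth = torch.randn(4, 3, 60, 80, requires_grad=True)
observed = torch.randn(4, 3, 60, 80)
loss = reconstruction_loss(synth, observed)
loss.backward()  # gradients supply the "necessary adjustments" to the representation
```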

Utilizing 3D Representation for Enhanced Robot Movement

The 3D representation obtained is used to control the robot’s movement. By utilizing visual information from the past, the robot can remember previously observed scenes and the corresponding leg movements, enabling it to make informed decisions for future actions.
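A hedged sketch of how such a memory feature might feed a locomotion controller is shown below. The MemoryConditionedPolicy class, the 128-dimensional memory feature, and the 12 joint targets are assumptions for illustration, not the authors’ implementation.

```python
import torch
import torch.nn as nn

class MemoryConditionedPolicy(nn.Module):
    """Toy locomotion policy: concatenate a feature summarizing the short-term
    3D memory with the current proprioceptive state and output joint targets
    (12 joints for a typical quadruped)."""
    def __init__(self, memory_dim=128, proprio_dim=36, n_joints=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(memory_dim + proprio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_joints),
        )

    def forward(self, memory_feat, proprio):
        return self.net(torch.cat([memory_feat, proprio], dim=-1))

policy = MemoryConditionedPolicy()
action = policy(torch.randn(1, 128), torch.randn(1, 36))
print(action.shape)  # torch.Size([1, 12])
```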

According to study senior author Xiaolong Wang, this approach empowers the robot to construct a short-term memory of its 3D surroundings, enhancing its overall performance.

Broadening the Robot’s Capabilities

This latest study builds on the team’s previous work, where they developed algorithms combining computer vision with proprioception. This integration enabled the four-legged robot to walk and run on uneven ground, skillfully avoiding obstacles. By combining improved 3D perception with proprioception, the researchers demonstrate that the robot is now capable of traversing even more challenging terrains.

Wang explains the significance of the development: “We have created a better understanding of the 3D surroundings, empowering the robot’s versatility across different scenarios.”

Limitations and Future Directions

The current model does not guide the robot to a specific destination. Instead, the robot follows a straight path and, when it encounters an obstacle, avoids it by walking away along another straight path. The team recognizes this limitation and plans to incorporate planning techniques and complete the navigation pipeline in future work.

The research paper, titled “Neural Volumetric Memory for Visual Locomotion Control,” lists Ruihan Yang of UC San Diego and Ge Yang of the Massachusetts Institute of Technology as co-authors.

The work was supported by the National Science Foundation, an Amazon Research Award, Qualcomm, and other sources.

For a video demonstration, visit: https://youtu.be/vJdt610GSGk

