Modern robots are being taught to recognize when they do not understand something, improving both safety and efficiency. Engineers at Princeton University and Google have developed a method that uses large language models to help robots gauge their own uncertainty in complex environments and decide when to ask for help. The technique lets users set a target degree of task success and a corresponding uncertainty threshold, tuned to the type of task the robot is performing.
The researchers tested the method on simulated robotic arms and on several types of physical robots in different settings. The system achieved high accuracy while requesting less human help than comparable methods. It combines a statistical approach with user-specified success criteria to trigger a request for human assistance whenever the robot is unsure of the best course of action.
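The trigger described above can be sketched in a few lines. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes the language model assigns a confidence score to each candidate action, calibrates a score threshold on a set of held-out tasks so that a user-chosen success rate is met, and has the robot ask for help whenever more than one action survives the threshold. All function names and the exact quantile rule are assumptions for illustration.

```python
def calibrate_threshold(calib_scores, target_success=0.9):
    """Pick a score threshold from held-out calibration tasks.

    calib_scores: the confidence the model assigned to the *correct*
    action on each calibration task. The cutoff is chosen (in the
    spirit of conformal prediction) so that the correct action's
    score clears it in roughly a target_success fraction of tasks.
    """
    scores = sorted(calib_scores)
    n = len(scores)
    # Index of the (1 - target_success) empirical quantile, with a
    # simple finite-sample correction; clamped to a valid index.
    k = int((1 - target_success) * (n + 1))
    k = max(0, min(k, n - 1))
    return scores[k]


def plan_or_ask(option_scores, threshold):
    """Keep every candidate action whose score clears the threshold.

    A single surviving option means the robot acts autonomously;
    zero or multiple survivors mean it is uncertain and asks a
    human to disambiguate.
    """
    prediction_set = [i for i, s in enumerate(option_scores) if s >= threshold]
    if len(prediction_set) == 1:
        return "act", prediction_set
    return "ask_for_help", prediction_set
```

Raising `target_success` lowers the threshold, so more options survive and the robot asks for help more often; lowering it makes the robot act autonomously more of the time. This is the lever the user-specified success criterion provides.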
This advance matters for ensuring that robots can operate safely and efficiently across a variety of environments and tasks. The researchers also point out that the physical limitations robots face yield insights unavailable in purely abstract systems: while large language models can navigate a conversation, they cannot defy the laws of physics.
The collaboration between Princeton and Google has produced new methods for calibrating how much help a robot should ask for, opening promising directions for future robotics research. The engineers are now extending this work to address the challenges of estimating uncertainty, and of deciding when to trigger a request for help, for robots in a different set of tasks.