
Autonomous Robots Learn to Scoop Samples on Extraterrestrial Bodies Without Human Commands


Teams of human experts on Earth guide Mars rovers, instructing them on what actions to take. Robots on lander missions to the moons of Saturn or Jupiter, however, are too far away to receive real-time commands from Earth. To address this challenge, researchers from the University of Illinois Urbana-Champaign’s Departments of Aerospace Engineering and Computer Science have developed a learning-based method that enables robots on extraterrestrial bodies to make autonomous decisions about collecting terrain samples.

According to Pranay Thangeda, a Ph.D. student in the Department of Aerospace Engineering, rather than trying to simulate scooping for every possible type of rock or granular material, the team took a different approach: a method that allows autonomous landers to learn quickly how to scoop new materials as they encounter them. The robots also learn to adapt to changing landscapes and to the properties of the materials they find.

Thangeda explains that using this method, a robot can learn how to scoop a new material with only a few attempts. If it fails several times, it learns not to scoop in that specific area and tries a different spot instead.
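
In rough terms, that behaviour amounts to trying the spot the learned model currently rates as most promising, counting failures, and abandoning a spot after a few unsuccessful attempts. The Python sketch below only illustrates the idea; the site list, the scoring function, the scooping call, and the failure threshold are placeholders for this article, not details taken from the paper.

```python
# Illustrative sketch of the adapt-or-move behaviour described above.
# score_site and execute_scoop are hypothetical callables standing in for
# the lander's learned model and its scooping hardware.
MAX_FAILURES = 3  # assumed threshold, not taken from the paper

def scoop_somewhere(sites, score_site, execute_scoop):
    failures = {site: 0 for site in sites}
    candidates = list(sites)
    while candidates:
        # Try the spot the learned model currently rates as most scoopable.
        site = max(candidates, key=score_site)
        if execute_scoop(site):        # True when a sample was collected
            return site
        failures[site] += 1            # record the failed attempt
        if failures[site] >= MAX_FAILURES:
            candidates.remove(site)    # give up on this spot, move elsewhere
    return None                        # no scoopable site found
```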

The researchers propose a deep Gaussian process model trained on an offline database using deep meta-learning. The training set is split into a mean-training subset and a kernel-training subset: mean models are fit on the first, and the kernel parameters are then learned to minimize the residuals those mean models leave on the second. During deployment, the decision-maker uses the trained model and adapts it to the data acquired online.
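
At a high level, this pairs a learned mean model with a Gaussian process over its residuals, then conditions that process on the handful of outcomes observed at the new site. The sketch below is a simplified illustration under those assumptions, using synthetic data and off-the-shelf scikit-learn components rather than the authors' implementation; the features, splits, and library choices are assumptions for illustration only.

```python
# A minimal, hedged sketch (not the authors' code): fit a mean model on one
# split of an offline dataset, fit a Gaussian process to the residuals on a
# second split, and adapt online by conditioning the residual GP on newly
# observed scooping outcomes. All features and targets here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Stand-in offline database: terrain/action features -> scooped volume.
X = rng.uniform(size=(600, 4))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=600)

# Split the offline data into mean-training and kernel-training subsets.
X_mean, y_mean = X[:400], y[:400]
X_kern, y_kern = X[400:], y[400:]

# 1) Fit the mean model offline.
mean_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                          random_state=0).fit(X_mean, y_mean)

# 2) Learn kernel hyperparameters on the residuals left by the mean model.
residuals = y_kern - mean_model.predict(X_kern)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_kern, residuals)

def predict(X_new, online_X=None, online_y=None):
    """Mean prediction plus GP correction, optionally adapted to online data."""
    model = gp
    if online_X is not None:
        # Online adaptation: condition a GP with the already-learned kernel on
        # the handful of scooping outcomes observed at the new site.
        online_resid = online_y - mean_model.predict(online_X)
        model = GaussianProcessRegressor(kernel=gp.kernel_,
                                         optimizer=None).fit(online_X, online_resid)
    correction, std = model.predict(X_new, return_std=True)
    return mean_model.predict(X_new) + correction, std
```

Keeping the learned kernel fixed during the online update keeps adaptation cheap, a useful property for a battery-limited lander.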

One of the main challenges in this research is how little is known about ocean worlds such as Europa. Thangeda points out that the best available image of Europa has a resolution of only 256 to 340 meters per pixel, which is not detailed enough to determine its surface features.

Melkior Ornik, Thangeda’s advisor, emphasizes that all they know is that the surface of Europa is made of ice, but they are unsure whether it consists of big blocks of ice or finer particles resembling snow. Additionally, they have no knowledge about what lies beneath the ice.

In some trials, the team concealed one layer of material beneath another. The robot could see only the top layer and would try scooping it, judging it potentially suitable; when it reached the bottom layer, it learned that the material there was unscoopable and moved to a different area.

NASA aims to send battery-powered rovers rather than nuclear-powered ones to Europa due to concerns about potentially harmful materials contaminating ocean worlds. Thangeda explains that while nuclear power supplies have an operational lifetime of months, batteries only last around 20 days. Therefore, it is crucial to minimize the time spent on messaging back and forth. This highlights the importance of the robot’s ability to make autonomous decisions.

The learning method is also distinctive in that it enables the robot to achieve high-quality scooping actions on unfamiliar terrains using vision and minimal online experience, outperforming both non-adaptive methods and other state-of-the-art meta-learning methods.

To create their dataset, the team examined 12 materials and terrains of various compositions.

To conduct their experiments, the team used a robot from the Department of Computer Science at the University of Illinois. The robot resembles a lander’s arm and is equipped with sensors to collect scooping data from a wide range of materials, including 1-millimeter grains of sand, 8-centimeter rocks, shredded cardboard, and packing peanuts. The resulting database consists of 100 knowledge points for each of 67 different terrains, for a total of 6,700 points.

Thangeda states that they are the first to openly share a large-scale dataset on granular media. Furthermore, they have provided code that allows easy access to the dataset for others to use in their applications.

The team plans to deploy the model they created at NASA’s Jet Propulsion Laboratory’s Ocean World Lander Autonomy Testbed.

Ornik highlights the team’s interest in developing autonomous robotic capabilities for extraterrestrial surfaces, especially challenging ones. This method will help NASA further its exploration of ocean worlds.

The value of this work lies in its adaptability and the transferability of knowledge and methods from Earth to extraterrestrial bodies. Since substantial information will not be available prior to the lander’s arrival, and due to the short battery lifespan, Thangeda stresses the importance of learning and decision-making autonomy.

The open-source dataset is accessible at: drillaway.github.io/scooping-dataset.html.
