MIT researchers have developed a robotic hand that can accurately identify an object after grasping it just once. Inspired by the human finger, the design relies on high-resolution touch sensing technology.
A More Efficient Design
Traditional robotic hands need multiple grasps to identify an object because their high-resolution sensors are confined to the fingertips, while designs that spread lower-resolution sensors along the finger capture too little detail, likewise necessitating regrasps. The MIT team addressed this trade-off by building a robotic finger with multiple high-resolution sensors embedded under its transparent “skin.” The sensors, each built around a camera and LEDs, provide continuous sensing along the finger’s entire length, allowing accurate object identification after a single grasp.
Impressive Accuracy and Versatility
Using this design, the researchers built a three-fingered robotic hand that identified objects with about 85 percent accuracy after a single grasp. The hand’s rigid skeleton lets it lift heavy items, while the soft skin grips pliable objects securely without damaging them. This combination of rigid and soft elements suits a range of applications, including at-home care robots that assist elderly individuals with tasks like lifting heavy objects or helping with personal care.
Designing the Robotic Finger
The researchers created the robotic finger by placing a rigid, 3D-printed endoskeleton in a mold and encasing it in a transparent silicone “skin.” This mold-based manufacturing process eliminates the need for fasteners or adhesives to hold the silicone in place. The finger’s curved resting shape, similar to a human finger’s, reduces wrinkling in the silicone and enables smooth grasping. GelSight sensors, each consisting of a camera and three colored LEDs, are embedded into the top and middle sections of the finger’s endoskeleton to provide detailed touch sensing along its entire length.
To identify objects, the GelSight sensors capture images as the finger grasps an object and the LEDs illuminate the skin from the inside. An algorithm then uses the illuminated contours on the soft skin to map the contours of the object’s surface. The researchers trained a machine-learning model to identify objects using the raw camera image data.
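The idea behind this kind of illuminated-skin sensing can be sketched with photometric stereo: if three LEDs shine from known directions and the skin reflects roughly diffusely, the per-pixel red, green, and blue intensities constrain the local surface normal, from which contours can be mapped. The sketch below is a minimal illustration under a Lambertian reflectance assumption; the light directions are invented for the example, not the sensor’s actual calibrated geometry, and it is not the researchers’ pipeline, which trains a model directly on raw camera images.

```python
import numpy as np

# Illustrative unit light directions for three colored LEDs.
# Real GelSight-style sensors are calibrated; these values are assumptions.
L = np.array([
    [0.50,  0.000, 0.866],   # "red" LED
    [-0.25, 0.433, 0.866],   # "green" LED
    [-0.25, -0.433, 0.866],  # "blue" LED
])

def normals_from_rgb(rgb):
    """Recover per-pixel surface normals from an (H, W, 3) intensity image.

    Under a Lambertian model, each channel satisfies I_c = L_c . n,
    so n solves the 3x3 linear system L @ n = I, then is normalized.
    """
    h, w, _ = rgb.shape
    # Solve for all pixels at once: columns of the right-hand side are pixels.
    n = np.linalg.solve(L, rgb.reshape(-1, 3).T).T
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return n.reshape(h, w, 3)

# Synthetic check: a flat patch of skin whose true normal points straight up.
flat_intensities = np.tile(L @ np.array([0.0, 0.0, 1.0]), (4, 4, 1))
recovered = normals_from_rgb(flat_intensities)
```

Explicit normal recovery like this is just one way to see how the illuminated contours encode shape; feeding the raw images to a learned classifier, as the researchers did, sidesteps the reconstruction step entirely.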
The researchers plan to further improve the robotic hand’s design by reducing wear and tear in the silicone over time and adding more actuation to the thumb for a wider range of tasks. They also suggest that adding tactile sensing to the palm could enhance the hand’s ability to make tactile distinctions. This research was funded by the Toyota Research Institute, the Office of Naval Research, and the SINTEF BIFROST project.