Robots working in kitchens can benefit from knowing what materials the objects they handle are made of. Researchers at MIT and Adobe Research have developed a machine-learning technique that identifies every pixel in an image representing a given material, even when objects vary in shape and size or when lighting conditions change.

Although the model was trained entirely on synthetic data, it works effectively on real indoor and outdoor scenes it has never seen before. The approach also extends to video, allowing the model to track objects made of the same material across frames.

The model transforms generic visual features into material-specific features and computes a material similarity score for every pixel in the image. By clicking on a single pixel, a user can select all other regions of the image made of the same material. In evaluations, the method outperformed competing approaches at predicting same-material regions, reaching 92% accuracy. In the future, the researchers aim to improve the model's ability to capture fine details in images, which should further boost accuracy.

The technique has potential applications in scene understanding for robotics, image editing, computational systems, and material-based web recommendation systems. It could also help consumers and designers visualize how different materials would look in a space, for example when reupholstering furniture or replacing a carpet.
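The click-to-select step can be pictured as a per-pixel similarity query against the clicked pixel's feature vector. The sketch below is a hypothetical illustration, not the researchers' actual model: it assumes a precomputed map of material-specific feature vectors (a toy NumPy array stands in for the network's output) and uses cosine similarity with a fixed cutoff, both of which are assumptions.

```python
import numpy as np

def material_mask(features, click_yx, threshold=0.5):
    """Return a boolean mask of pixels whose material-specific
    feature vector resembles the clicked pixel's.

    features : (H, W, D) array of per-pixel feature vectors
               (stand-ins for a model's material features).
    click_yx : (row, col) of the pixel the user clicked.
    threshold: cosine-similarity cutoff (an assumed value).
    """
    q = features[click_yx]  # feature vector at the clicked pixel
    # Cosine similarity between every pixel's feature and the query.
    dots = features @ q
    norms = np.linalg.norm(features, axis=-1) * np.linalg.norm(q)
    sims = dots / np.clip(norms, 1e-8, None)
    return sims >= threshold

# Toy 2x2 feature map: top row is "material A"-like,
# bottom row is "material B"-like.
feats = np.array([[[1.0, 0.0], [0.9, 0.1]],
                  [[0.0, 1.0], [0.1, 0.9]]])
mask = material_mask(feats, (0, 0))
# mask selects both top-row pixels and neither bottom-row pixel
```

In this toy example, clicking the top-left pixel selects the other top-row pixel as well, because their feature vectors point in nearly the same direction, while the bottom-row pixels fall below the similarity cutoff.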