Researchers from UCLA and the United States Army Research Laboratory have devised a new way to improve artificial intelligence (AI)-powered computer vision technologies: adding physics-based awareness to data-driven techniques. This hybrid methodology aims to improve how AI-based machines, such as autonomous vehicles and robots, operate in real time.
Computer vision allows AIs to understand their surroundings by analyzing data and inferring properties of the physical world from images. Traditional computer vision methods have focused on data-based machine learning, while physics-based research has explored the principles behind computer vision challenges. Combining physics and data is challenging in neural networks, where billions of nodes process massive image datasets to gain an understanding of what they “see.” However, promising research seeks to integrate physics awareness into data-driven networks.
The UCLA study seeks to create a hybrid AI by combining the power of data and physics, which could lead to advances in areas like autonomous driving and surgical robots. The research team outlined three ways in which physics and data are being combined in computer vision AI: incorporating physics into AI datasets, network architectures, and loss functions. These approaches have already shown promising results in improving computer vision, such as more precise object tracking and higher-resolution image generation in inclement weather conditions.
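To make the third integration route concrete, here is a minimal sketch of a physics-informed loss function: a hybrid objective that adds a physics-consistency term (here, an illustrative free-fall constraint, y″ = −g) to an ordinary data-fitting loss. The function names, the specific constraint, and the weighting factor are assumptions for illustration, not the study's actual method.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2), assumed constraint for this sketch

def data_loss(pred, target):
    """Ordinary data-fitting term: mean squared error against observations."""
    return np.mean((pred - target) ** 2)

def physics_loss(pred, dt):
    """Physics-consistency term: penalize deviation from free fall (y'' = -g).
    The second derivative is approximated with central finite differences."""
    accel = (pred[2:] - 2.0 * pred[1:-1] + pred[:-2]) / dt ** 2
    return np.mean((accel + G) ** 2)

def total_loss(pred, target, dt, lam=0.1):
    """Hybrid objective: data fit plus weighted physics consistency.
    `lam` (assumed here) trades off the two terms during training."""
    return data_loss(pred, target) + lam * physics_loss(pred, dt)
```

A network trained against `total_loss` is pulled toward predictions that both match the data and obey the stated dynamics; a perfect free-fall trajectory, for instance, drives the physics term to (numerically) zero.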
With further progress, deep learning-based AIs may even learn the laws of physics on their own. The study’s authors include researchers from the Army Research Laboratory and UCLA. The research was supported by grants from various institutions, including the Army Research Laboratory, the National Science Foundation, and Amazon.