Improving Computer Vision by Mimicking Brain’s Neural Network Processing

From cameras to self-driving cars, many technologies today rely on artificial intelligence (AI) to make sense of visual information. At the core of these systems are artificial neural networks, which process visual images. According to researchers from MIT and IBM, computer vision can be improved by training these networks to mimic the way the brain's biological neural networks process visual images.

In a study presented at the International Conference on Learning Representations, the researchers trained an artificial neural network using neural activity patterns recorded from the brain's inferior temporal (IT) cortex. Training on neural data made the network better at identifying objects in images, and its interpretations of images more closely matched those of humans.

Artificial neural networks used in computer vision already resemble the brain circuits that process visual information. When trained for specific tasks, these networks process visual input and determine which objects are depicted in images. As a result, they also serve as scientific models of the neural mechanisms underlying human vision, offering insights to neuroscientists studying the brain.

Despite their potential, computer vision systems are not perfect models of human vision. To improve them, the researchers incorporated brain-like features into the models. They trained a computer vision model using neural data from the monkey IT cortex, a part of the primate ventral visual pathway that recognizes objects. By simulating the behavior of primate vision-processing neurons, the model processed visual information differently than standard computer vision models.
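The article does not spell out how neural data enters training. One common approach in this line of work is to add a penalty that pushes the model's internal activations toward the recorded neural responses. The sketch below is a minimal illustration of that idea under assumed details: the function names (`rdm`, `alignment_penalty`, `total_loss`), the use of representational dissimilarity matrices, and the penalty weight are all illustrative choices, not the study's actual method.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def alignment_penalty(model_acts, neural_acts):
    """Mismatch between the model's and the brain's stimulus geometry:
    mean squared difference of the two RDMs (0 = identical geometry)."""
    d = rdm(model_acts) - rdm(neural_acts)
    return float(np.mean(d ** 2))

def total_loss(task_loss, model_acts, neural_acts, weight=0.1):
    """Ordinary task loss plus a weighted neural-alignment term."""
    return task_loss + weight * alignment_penalty(model_acts, neural_acts)

# Toy check: identical response geometries incur zero penalty.
rng = np.random.default_rng(0)
acts = rng.normal(size=(8, 50))        # 8 stimuli x 50 model units
print(alignment_penalty(acts, acts))   # 0.0
```

During training, the alignment term would be minimized alongside the classification objective, nudging the model's intermediate representations toward the geometry measured in the monkey IT cortex.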

Comparison with a model trained without neural data showed that the biologically informed model was a better match for neural data from the IT cortex. It even matched data from a different monkey, suggesting that the neural alignment approach can lead to improved models of the primate IT cortex.

The neurally aligned model also exhibited more human-like behavior and was more resistant to adversarial attacks. Adversarial attacks introduce small distortions into images to mislead computer vision systems. The neurally aligned model correctly identified more images even in the presence of these attacks.
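To make the idea of an adversarial attack concrete, here is a toy fast-gradient-sign-style perturbation against a tiny linear classifier. Everything here is a simplified sketch: the weight matrix, the input, and the helper names (`scores`, `fgsm_perturb`) are invented for illustration; real attacks target deep networks and much smaller, imperceptible distortions.

```python
import numpy as np

def scores(W, x):
    """Class scores of a linear classifier for input x."""
    return W @ x

def fgsm_perturb(W, x, true_label, eps=0.1):
    """Nudge x by eps in the sign of the gradient that raises the
    strongest rival class's score over the true class's score."""
    others = [i for i in range(W.shape[0]) if i != true_label]
    rival = max(others, key=lambda i: scores(W, x)[i])
    grad = W[rival] - W[true_label]   # gradient of (rival - true) margin
    return x + eps * np.sign(grad)

W = np.array([[ 1.0, 0.0],
              [-1.0, 0.0]])           # two classes, 2-D inputs
x = np.array([0.05, 0.3])             # weakly class-0 input
print(int(np.argmax(scores(W, x))))           # 0: correct before attack
x_adv = fgsm_perturb(W, x, true_label=0, eps=0.2)
print(int(np.argmax(scores(W, x_adv))))       # 1: flipped by the attack
```

A small, targeted shift flips the prediction even though the input barely changed; a more robust model, like the neurally aligned one described above, would require a larger distortion before its prediction flips.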

The researchers are now exploring the limits of adversarial robustness in humans and plan to combine different approaches to further improve computer vision models. This work demonstrates how collaboration between neuroscience and computer science can drive progress in both fields.
