The Brain’s Intuition: How Self-Supervised Learning Shapes Our Understanding of the World


To navigate the world around us, our brains must develop an intuitive model of the physical world. Scientists believe this intuition may arise through a process called “self-supervised learning,” which is also used in artificial intelligence (AI) to build more efficient computer vision models. Two studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT provide evidence supporting this hypothesis.

The researchers trained neural networks using self-supervised learning and found that the resulting models generated activity patterns similar to those recorded in the brains of animals performing the same tasks. This suggests that the models learn representations of the physical world and use them to make accurate predictions, much as the mammalian brain does. The work has implications for both fields: it sheds light on how the brain builds such representations, and it points toward more capable AI systems.

The researchers used a type of self-supervised learning called contrastive self-supervised learning, which allows an algorithm to learn to classify objects based on their similarities and differences without external labels. This type of learning is powerful because it can be applied to large datasets, such as videos, to obtain flexible representations.
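To make the idea concrete, here is a minimal sketch of a contrastive objective in the InfoNCE style, written with NumPy. The function name and details are illustrative, not the code from the studies: each "anchor" embedding is pulled toward its matching "positive" (e.g., another view of the same scene) and pushed away from every other embedding in the batch, with no external labels involved.

```python
import numpy as np

def normalize(x):
    # Project each embedding onto the unit sphere so the dot product
    # below is a cosine similarity.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def infonce_loss(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: anchor i should be most similar
    to positive i and dissimilar to every other row in the batch."""
    a = normalize(anchors)
    p = normalize(positives)
    logits = a @ p.T / temperature                    # pairwise similarities
    # Log-softmax over each row; the "correct class" for row i is column i.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()
```

Minimizing this loss over many unlabeled pairs is what lets the network discover which inputs are similar and which differ, without anyone labeling the data.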

Neural networks consist of thousands or millions of connected processing units. As the network analyzes data, the strengths of the connections between these units change, allowing the network to learn. The activity patterns of different units in the network can be measured and represented as firing patterns, similar to the firing patterns of neurons in the brain.
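The analogy between unit activity and neural firing can be illustrated with a toy two-layer network (the sizes and weights here are arbitrary, chosen only for demonstration). Each hidden unit's output for a given stimulus is its "activity," loosely analogous to a neuron's firing rate, and the weight matrices play the role of connection strengths that change during learning.

```python
import numpy as np

rng = np.random.default_rng(42)

# Connection strengths: input -> hidden and hidden -> output.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(x):
    # ReLU keeps hidden activity non-negative, like a firing rate.
    hidden = np.maximum(0, x @ W1)
    return hidden, hidden @ W2

stimuli = rng.normal(size=(5, 4))          # five input "stimuli"
activity, output = forward(stimuli)
# activity has one row per stimulus: a firing pattern over 8 units,
# directly comparable to a recorded population response.
```

Recording `activity` across many stimuli is exactly the kind of measurement that lets researchers compare a model's population responses with neural recordings.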

In the first study, the researchers trained self-supervised models on naturalistic videos to predict the future state of their environment. After training, they tested the models on a task called Mental-Pong, in which the player must estimate the trajectory of a ball that disappears before reaching the paddle. The model tracked the hidden ball's trajectory with accuracy comparable to neurons in the mammalian brain, and its internal activation patterns resembled those recorded in the brains of animals playing the game.
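The prediction problem itself is easy to state. The hand-coded sketch below is not the trained network from the study; it only illustrates what the model must implicitly learn, under a simplifying constant-velocity assumption: estimate the ball's velocity from the last visible frames, then roll the trajectory forward through the occluded region.

```python
def extrapolate_ball(positions, n_hidden_steps):
    """Given visible (x, y) samples at uniform time steps, estimate
    velocity from the last two frames and extrapolate through the
    occluded region (constant-velocity assumption; no walls or bounces)."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0
    x, y = x1, y1
    track = []
    for _ in range(n_hidden_steps):
        x, y = x + vx, y + vy
        track.append((x, y))
    return track
```

The trained models solve a far harder version of this, learning the dynamics directly from raw video rather than from explicit coordinates.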

The second study focused on specialized neurons called grid cells, which support navigation. The researchers trained a contrastive self-supervised model on a path integration task, in which the model had to predict an animal's next location from its starting point and velocity. The trained model represented space efficiently, with coding properties similar to those of grid cells.
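Path integration reduces to accumulating velocity over time from a known start. The helper below is a hypothetical minimal version of the task, not the study's model, which is a neural network that must learn this computation from experience.

```python
import numpy as np

def path_integrate(start, velocities, dt=1.0):
    """Predict successive positions by accumulating velocity signals
    from a known starting point: position[t+1] = position[t] + v[t] * dt."""
    positions = [np.asarray(start, dtype=float)]
    for v in velocities:
        positions.append(positions[-1] + dt * np.asarray(v, dtype=float))
    return np.stack(positions)
```

An animal (or model) that can carry out this running sum can keep track of where it is even in the dark, which is precisely the ability grid cells are thought to support.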

The findings of these studies suggest that self-supervised learning in AI can help us understand the workings of the brain, and that emulating natural intelligence can in turn improve AI systems, with implications ranging from better robots to a deeper understanding of cognition.

In conclusion, self-supervised learning allows computational models to learn representations of the physical world and make accurate predictions about it, much as the mammalian brain does, a finding that benefits neuroscience and AI alike.
