
Unlocking the Brain’s Intuitive Understanding: Self-Supervised Learning and Neural Networks


Understanding the Physical World: AI and the Brain

In two new studies, scientists at MIT present evidence that the brain may build an intuitive understanding of the physical world through a process similar to self-supervised learning, a type of machine learning widely used in AI.

Self-supervised learning allows neural networks, a class of computational models, to learn about visual scenes from their similarities and differences alone, without labels or other annotations. In both studies, models trained this way generated activity patterns similar to those observed in the brains of animals performing the same tasks.

The researchers believe that these models can learn representations of the physical world, which enable them to make accurate predictions about what will happen in that world. They also suggest that the mammalian brain may be using a similar strategy.

According to Aran Nayebi, a postdoc at MIT and lead author of one of the studies, AI developed to build better robots can also help researchers understand the brain, although it remains to be seen whether this applies to the brain as a whole. He carried out the work with other researchers at MIT.

Early computer vision models relied heavily on supervised learning, in which models are trained to classify labeled images, an approach that requires large amounts of human-annotated data. To sidestep this bottleneck, researchers turned to contrastive self-supervised learning, which lets models learn useful representations of objects from their similarities and differences alone, with no labels at all.
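
A rough sketch can make this concrete: a contrastive objective such as InfoNCE pulls together the embeddings of two augmented views of the same image and pushes apart embeddings of different images, so the pairing itself supplies the supervision. The PyTorch snippet below is a minimal illustration with stand-in inputs; the batch size and embedding dimension are placeholders, not the setups used in the MIT studies.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss: row i of z1 and row i of z2 are
    embeddings of two augmented views of the same image (a positive
    pair); every other pairing is a negative. No human labels needed."""
    z1 = F.normalize(z1, dim=1)           # unit-length embeddings
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature      # pairwise cosine similarities
    targets = torch.arange(z1.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: stand-in embeddings for two views of a batch of 8 images.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```

Because the positive pairs come from data augmentation rather than human labels, the same objective scales to large unlabeled datasets, including video.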

This type of learning enables the use of large-scale datasets, especially videos, and the creation of flexible representations. Neural networks consist of interconnected processing units, or nodes; as a network analyzes vast amounts of data, it adjusts the strengths of the connections between nodes until it performs its task effectively.
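
In the simplest case, "adjusting the strengths of connections" means nudging each weight in the direction that reduces the network's error, which gradient descent does automatically. A minimal, hypothetical sketch (the tiny network and random data are invented for illustration):

```python
import torch

# A tiny network: the "connection strengths" are the weight matrices.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x, y = torch.randn(32, 4), torch.randn(32, 1)   # made-up data
for _ in range(100):
    loss = torch.nn.functional.mse_loss(net(x), y)
    opt.zero_grad()
    loss.backward()   # gradients w.r.t. every connection weight
    opt.step()        # strengthen/weaken connections to reduce error
```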

Previous work by Nayebi and others showed that self-supervised models of vision generated activities similar to those in the visual processing system of mammalian brains. The researchers aimed to explore whether self-supervised models could show similarities to the mammalian brain in other cognitive functions.

In one study, the researchers trained self-supervised models on naturalistic videos to predict the future state of their environment. They then evaluated the models' ability to track the trajectory of a hidden ball in a Pong-like game called Mental-Pong. The models tracked the hidden ball with accuracy comparable to that of neurons in the mammalian brain, and their activation patterns resembled those recorded in the dorsomedial frontal cortex.
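
Although the published models are far more elaborate, the core idea of predictive self-supervision can be sketched as training a recurrent network to roll its state forward and predict what comes next; tracking an occluded ball then amounts to continuing the rollout when the input disappears. Everything below (the GRU, state sizes, and the synthetic straight-line trajectory) is a hypothetical stand-in, not the models from the paper:

```python
import torch

# Hypothetical sketch: a GRU learns simple ball dynamics from visible
# frames, then keeps predicting the trajectory once the ball is hidden.
rnn = torch.nn.GRU(input_size=2, hidden_size=32, batch_first=True)
readout = torch.nn.Linear(32, 2)
params = list(rnn.parameters()) + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

# Synthetic straight-line trajectory: input is position at time t,
# training target is position at time t + 1 (self-supervision).
t = torch.arange(20, dtype=torch.float32)
traj = torch.stack([0.05 * t, 0.03 * t], dim=1).unsqueeze(0)  # (1, 20, 2)

for _ in range(500):
    out, _ = rnn(traj[:, :-1])
    loss = torch.nn.functional.mse_loss(readout(out), traj[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# "Mental-Pong" step: feed 10 visible frames, then roll forward blind.
with torch.no_grad():
    out, h = rnn(traj[:, :10])
    pos = readout(out[:, -1:])
    for _ in range(5):                 # the ball is now behind the occluder
        out, h = rnn(pos, h)
        pos = readout(out)
        print(pos.squeeze().tolist())  # extrapolated hidden-ball position
```

The final loop is the "Mental-Pong" moment: once the ball is hidden, the network's only resource is its learned internal model of the dynamics.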

The other study focused on grid cells, which work together with place cells to help animals navigate. Grid cells fire when an animal reaches any of a set of locations arranged in a triangular lattice. Previous models reproduced grid-cell activity but relied on privileged information about absolute space, which animals do not have.

The MIT team trained a contrastive self-supervised model to perform path integration, the task of predicting an animal’s next location based on its starting point and velocity. The model learned to distinguish between similar and different positions, representing space efficiently.
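
Path integration itself is easy to state: accumulate velocity over time to estimate position from a known starting point, as in the short sketch below. The study's contrastive component, not shown here, would additionally train a network so that trajectories ending at the same place receive similar internal codes and those ending elsewhere receive dissimilar ones; the function name and data are hypothetical.

```python
import numpy as np

def path_integrate(start, velocities, dt=0.1):
    """Dead reckoning: estimate position after each step by
    accumulating velocity readings from a known starting point."""
    steps = np.asarray(velocities) * dt
    return np.asarray(start) + np.cumsum(steps, axis=0)

start = [0.0, 0.0]
vels = [[1.0, 0.0], [1.0, 0.5], [0.0, 0.5]]   # made-up velocity readings
print(path_integrate(start, vels))            # estimated location per step
```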

Together, the studies suggest that self-supervised learning in AI models can provide insight into the brain's workings while bringing researchers closer to artificial systems that emulate natural intelligence, opening new avenues for research at the intersection of artificial intelligence and neuroscience.
