Understanding the Emergence of Number Sense through Biologically Inspired Neural Architecture

Number sense, the ability to understand and work with quantities, is fundamental to mathematical cognition. Yet how numerical representations emerge in the human brain remains poorly understood. Researchers at Stanford Human-Centered Artificial Intelligence (HAI) have proposed that biologically inspired neural architectures can shed light on this phenomenon.

Using a neural architecture modeled on visual cortical areas V1, V2, and V3 together with the intraparietal sulcus (IPS), researchers can study how neural representations change with learning. These areas correspond to the visual processing stream in the human brain, running from early visual cortex to the parietal regions implicated in numerical processing. By investigating how a neural code for quantity emerges during learning, researchers can gain insight into how number sense develops.

In their experiments, the HAI researchers found that deep neural networks, specifically convolutional neural networks trained to categorize objects in the standard ImageNet dataset, spontaneously developed quantity-sensitive neurons. Using a more biologically plausible architecture, the number-DNN (nDNN) model, they were able to observe how visual numerosity arises from the statistical properties of natural images.
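The core analysis idea, finding units whose activity is tuned to how many items appear in an image, can be illustrated with a toy numpy sketch. This is not the HAI nDNN model: the random dot stimuli, the single untrained convolutional layer, and all sizes below are illustrative assumptions, used only to show how one would measure each unit's "preferred numerosity."

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

def dot_image(n, size=32):
    """A crude non-symbolic numerosity stimulus: n 'on' pixels at random positions."""
    img = np.zeros(size * size)
    img[rng.choice(size * size, size=n, replace=False)] = 1.0
    return img.reshape(size, size)

def conv_activations(img, filters):
    """One untrained convolutional layer: convolution -> ReLU -> global average pool."""
    k = filters.shape[-1]
    patches = sliding_window_view(img, (k, k))                   # (H', W', k, k)
    out = np.tensordot(patches, filters, axes=([2, 3], [1, 2]))  # (H', W', units)
    return np.maximum(out, 0.0).mean(axis=(0, 1))                # one value per unit

numerosities = [1, 2, 4, 8, 16, 32]
filters = rng.standard_normal((64, 5, 5)) / 5.0                  # fixed random filters

# Mean response of every unit at each numerosity, averaged over 20 random images.
tuning = np.array([
    np.mean([conv_activations(dot_image(n), filters) for _ in range(20)], axis=0)
    for n in numerosities
])                                                               # shape (6, 64)

# A unit's "preferred numerosity" is the quantity that drives it most strongly.
preferred = np.array(numerosities)[tuning.argmax(axis=0)]
print(np.unique(preferred, return_counts=True))
```

In the actual studies the filters are learned from natural images rather than random, and numerosity-tuned units are validated against behavioral signatures such as ratio-dependent discrimination; the sketch only shows the tuning-curve measurement itself.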

Most real-life images contain non-symbolic stimuli that must be mapped onto quantity representations. The researchers found that training on numerosity tasks reshapes the spontaneously tuned neurons, organizing them into a hierarchy. They also used representational similarity analysis to examine how distributed representations of numerical quantity emerge across processing stages, mirroring how the brain processes images.
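Representational similarity analysis compares representations not activation-by-activation but via representational dissimilarity matrices (RDMs): how dissimilar each pair of stimulus conditions is within a given layer or brain region. A minimal numpy sketch follows; the log-scaled "ideal" numerosity code and the noise level are illustrative assumptions, not the paper's data.

```python
import numpy as np

def rdm(patterns):
    """RDM: 1 - Pearson correlation between response patterns of condition pairs.

    patterns has shape (conditions, units); the result is (conditions, conditions).
    """
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation of the off-diagonal entries, a standard RSA comparison."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks_a = rdm_a[iu].argsort().argsort().astype(float)
    ranks_b = rdm_b[iu].argsort().argsort().astype(float)
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

rng = np.random.default_rng(1)
numerosities = np.array([1, 2, 4, 8, 16, 32])

# Hypothetical "ideal" numerosity code: 40 units with Gaussian tuning on a log axis.
centers = np.linspace(0, 5, 40)
ideal = np.exp(-(np.log2(numerosities)[:, None] - centers[None, :]) ** 2)

# A noisy "measured" representation (e.g., a network layer or an fMRI region).
measured = ideal + 0.3 * rng.standard_normal(ideal.shape)

score = compare_rdms(rdm(ideal), rdm(measured))
print(f"RDM similarity (Spearman): {score:.2f}")
```

Because both networks and brains can be reduced to RDMs, the same score lets researchers ask whether a model layer and a cortical region represent quantity in a geometrically similar way.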

Additionally, the researchers explored numerical skills in children and how children map non-symbolic representations onto abstract symbolic ones. These mappings are crucial for developing numerical problem-solving skills and rely on distinct neural systems. Children often learn small numbers by mapping them directly onto non-symbolic representations, and large numbers through counting and arithmetic principles. The researchers also found a positive correlation between neural representational similarity and arithmetic skill in children.

Traditionally, such neuropsychological studies have been conducted in animals, but animal brains differ from ours in important ways, and it is unclear whether animals represent quantity as humans do. By training deep neural networks to perform cognitive and mathematical reasoning tasks, as HAI has done, researchers can gain complementary insight into how number sense develops and how children learn numerosity representations.

In conclusion, biologically inspired neural architectures and deep neural networks provide valuable tools for understanding number sense. By studying how neural representations change with learning and by examining numerical skills in children, researchers can begin to unravel how numerical representations emerge in the human brain. These findings have important implications for cognitive development and mathematics learning.
