Researchers have found that computational models are getting closer to replicating human auditory function. In a new study, MIT researchers examined how well such models reproduce human brain responses to different types of sounds. They found that models trained on auditory input that included background noise best mimicked human brain activation patterns, suggesting that machine-learning models are a promising route toward modeling how the brain processes sound.
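The idea of training on sound with background noise can be illustrated with a small sketch. This is not the study's actual pipeline; the tone, noise source, and signal-to-noise ratio below are assumptions chosen only to show how noise is typically mixed into clean audio before training.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Mix background noise into a clean waveform at a target SNR in dB.

    Both inputs are 1-D float arrays of equal length; the noise is scaled
    so that clean power / scaled-noise power equals 10**(snr_db / 10).
    """
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Toy example: a 440 Hz tone corrupted by Gaussian background noise at 10 dB SNR.
sr = 16000  # samples per second
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).normal(size=sr)
noisy = mix_at_snr(tone, noise, snr_db=10.0)
```

In a training setup, each clean clip would be replaced by such a noisy mixture before being fed to the network, so the model learns representations that are robust to realistic listening conditions.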
Deep neural networks are models built from many layers of information-processing units that can be trained to perform specific tasks, and they are now widely used across applications. Past research has shown that when trained on auditory tasks, these models can reproduce human brain responses to sound. In the latest study, the researchers analyzed a wide variety of models to test whether the ability to approximate human brain representations is a general property of such networks. They found that models trained on more than one task, and models trained on noisy input, generated internal representations most similar to those of the human brain.
The study also supports the idea that the human auditory cortex is hierarchically organized: models trained on different tasks were better at replicating different aspects of hearing. The lab now plans to use these findings to develop models that reproduce human brain responses even more closely. The ultimate goal is a computer model that can predict both brain responses and behavior, which could have significant implications for the design of hearing aids, cochlear implants, and brain-machine interfaces.