Recent advances in machine learning (ML) and artificial intelligence (AI) have transformed many industries. These systems have become possible through progress in computing power, access to large-scale data, and improved training techniques. Large language models (LLMs) are one result: models that generate human-like language and support a wide range of applications.
A recent study from MIT and Harvard University sheds new light on how the human brain responds to language. The researchers report what may be the first AI model able to both drive and suppress responses in the brain's language network, a set of regions in the frontal and temporal lobes whose workings remain only partially understood.
The researchers set out to determine how well LLMs can predict the brain's responses to different language inputs, and to explore which kinds of stimuli drive or suppress activity within the human language network. Using a GPT-style LLM, they built an encoding model to predict how the human brain would respond to arbitrary sentences. The model was trained on brain responses recorded from five participants as they read sentences, and achieved a correlation coefficient of r = 0.38 between predicted and observed responses.
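The core idea of such an encoding model can be illustrated in a few lines: regress measured brain responses onto sentence embeddings, then score held-out predictions with a Pearson correlation, the same style of r-based metric the study reports. The sketch below uses synthetic data and closed-form ridge regression; the shapes, noise level, and regularization strength are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (assumptions): 1000 sentences, 64-dim embeddings
# standing in for LLM hidden states; one averaged response per sentence.
n_sentences, emb_dim = 1000, 64
X = rng.standard_normal((n_sentences, emb_dim))          # sentence embeddings
true_w = rng.standard_normal(emb_dim)                    # simulated mapping
y = X @ true_w + 0.5 * rng.standard_normal(n_sentences)  # noisy "brain response"

# Hold out the last 200 sentences for evaluation.
X_tr, X_te = X[:800], X[800:]
y_tr, y_te = y[:800], y[800:]

# Closed-form ridge regression: w = (X^T X + alpha * I)^-1 X^T y
alpha = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(emb_dim), X_tr.T @ y_tr)

# Score held-out predictions with a Pearson correlation coefficient.
pred = X_te @ w
r = np.corrcoef(pred, y_te)[0, 1]
print(f"held-out correlation r = {r:.2f}")
```

In practice the regression would map embeddings to fMRI responses voxel-by-voxel (or per language-network region), and the reported r = 0.38 reflects the much noisier reality of neural data compared with this toy example.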
The researchers subjected the model to multiple tests to assess its robustness. Whether they used alternative methods for obtaining sentence embeddings or incorporated embeddings from other LLM architectures, the model maintained high predictive performance. They also found that the encoding model remained accurate when applied specifically to the brain's language regions.
The study's implications are broad, for both neuroscience research and everyday applications. The ability to modulate neural responses in the language network opens the door to fresh insights into language processing and to new approaches for treating language disorders. Such models could also benefit language processing technologies like virtual assistants and chatbots.
All in all, this study is a significant new step in understanding the connection between AI and the human brain. Researchers will continue to use LLMs to uncover more about language processing and develop innovative strategies for affecting neural activity. Expect more exciting discoveries in this field as AI and ML evolve.
For more details, read the full paper.
Rachit Ranjan is a consulting intern at MarktechPost. He is a B.Tech student at the Indian Institute of Technology (IIT) Patna and is passionate about exploring the fields of Artificial Intelligence and Data Science.