MIT PhD students working with the MIT-IBM Watson AI Lab are focused on improving natural language models for AI systems. Athul Paul Jacob, Maohao Shen, Victor Butoi, and Andi Peng are each tackling a different challenge in making these models more dependable and accurate.
Game Theory and Language Understanding
Jacob’s research explores the use of game theory to improve natural language models. He aims to understand human behavior and use this insight to build better AI systems. By studying the strategy game “Diplomacy,” Jacob’s team developed a system that predicts human behaviors and negotiates strategically, with the goal of making natural language models more truthful and reliable.
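To make the game-theoretic angle concrete, here is a minimal sketch of regret matching, a textbook algorithm for computing a Nash equilibrium through self-play, shown on rock-paper-scissors. This is an illustration of the general technique, not the Diplomacy system itself.

```python
# Regret matching in self-play on rock-paper-scissors (illustrative only).
# Each player tracks how much it "regrets" not having played each action,
# then mixes actions in proportion to positive regret; the time-averaged
# strategies converge to the Nash equilibrium (1/3, 1/3, 1/3).

PAYOFF = [            # row player's utility: PAYOFF[a][b]
    [0, -1, 1],       # rock     vs rock, paper, scissors
    [1, 0, -1],       # paper
    [-1, 1, 0],       # scissors
]

def regret_matching(payoff, iterations=20000):
    n = len(payoff)
    # Small asymmetric seed (for the 3-action game) so play does not
    # start exactly at the equilibrium fixed point.
    regret = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    strat_sum = [[0.0] * n for _ in range(2)]

    def strategy(reg):
        pos = [max(r, 0.0) for r in reg]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1.0 / n] * n

    for _ in range(iterations):
        mix = [strategy(regret[0]), strategy(regret[1])]
        for p in range(2):
            for a in range(n):
                strat_sum[p][a] += mix[p][a]
        for p in range(2):
            opp = mix[1 - p]
            # Expected utility of each pure action against the opponent's
            # mix; the column player's payoff is the negated transpose.
            if p == 0:
                util = [sum(payoff[a][b] * opp[b] for b in range(n))
                        for a in range(n)]
            else:
                util = [sum(-payoff[b][a] * opp[b] for b in range(n))
                        for a in range(n)]
            ev = sum(mix[p][a] * util[a] for a in range(n))
            for a in range(n):
                regret[p][a] += util[a] - ev

    # Return the average strategy of each player over all iterations.
    return [[x / iterations for x in strat_sum[p]] for p in range(2)]

avg = regret_matching(PAYOFF)
```

Running this, both players’ average strategies approach the uniform mixture, the game’s unique equilibrium; systems like the one described above apply far richer game models, but rest on the same idea of converging toward mutually consistent strategies.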
Uncertainty Quantification in Language Models
Shen and his team are working on recalibrating language models that are poorly calibrated, meaning their stated confidence does not match their actual accuracy. They use uncertainty quantification to determine whether a model is over- or under-confident in its predictions, ultimately ensuring that a model’s confidence aligns with its accuracy.
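One standard diagnostic for this kind of miscalibration is expected calibration error (ECE): predictions are grouped into confidence bins, and ECE is the weighted average gap between each bin’s confidence and its accuracy. The sketch below is this common metric in general, not necessarily the team’s specific method.

```python
# Expected calibration error (ECE): a well-calibrated model that says
# "90% confident" should be right about 90% of the time.

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: the model's top-class probabilities in [0, 1].
    correct: 1 if the corresponding prediction was right, else 0."""
    bins = [[] for _ in range(n_bins)]
    for conf, hit in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, hit))
    n = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(h for _, h in bucket) / len(bucket)
        ece += (len(bucket) / n) * abs(avg_conf - accuracy)
    return ece

# A toy overconfident model: 90% stated confidence, 60% actual accuracy.
confs = [0.9] * 10
hits = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
ece = expected_calibration_error(confs, hits)  # ≈ 0.3 confidence-accuracy gap
```

An ECE near zero means confidence tracks accuracy; a large gap, as in the toy example, signals the over- or under-confidence that recalibration aims to remove.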
Vision-Language Models and Compositional Reasoning
Butoi’s team is designing techniques that allow vision-language models to reason about what they see. By training models to understand relationships between objects, such as relative directions, the team aims to improve the models’ compositional reasoning abilities and their overall performance in real-world scenarios.
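As a simple illustration of the kind of spatial relationship such models must get right, the hypothetical sketch below (not the team’s actual system) checks directional relations between detected objects from their bounding boxes; a pipeline could use predicates like these to verify a claim such as “the mug is left of the laptop” against detector output.

```python
# Directional relations between bounding boxes (illustrative example).

def center(box):
    """Center point of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def relation(box_a, box_b):
    """Dominant directional relation of box_a relative to box_b,
    in image coordinates (y grows downward)."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    dx, dy = ax - bx, ay - by
    if abs(dx) >= abs(dy):
        return "right of" if dx > 0 else "left of"
    return "below" if dy > 0 else "above"

# Hypothetical detections as (x1, y1, x2, y2) pixel boxes.
mug = (10, 40, 50, 80)
laptop = (120, 30, 260, 110)
print("mug is", relation(mug, laptop), "laptop")  # prints: mug is left of laptop
```

Hard-coded rules like this do not scale to open-ended scenes, which is why the compositional-reasoning work described above trains the model itself to capture such relations.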
In summary, MIT PhD students are making significant advancements in natural language models for AI systems. Their work aims to enhance models’ reliability, accuracy, and overall performance, ultimately making AI systems more effective tools for various applications.