Researchers at MIT and the MIT-IBM Watson AI Lab have designed a system to teach a user when to collaborate with an AI assistant. The training method may help in cases where the AI model cannot be fully trusted.
The onboarding system is fully automated, learning to build its training process from data that record how the human and the AI each perform on a specific task. Because it also adapts to different tasks, it can be applied in a wide range of settings.
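As a rough illustration of what that interaction data might look like, the sketch below assumes each training example records the task instance, the AI's prediction, the human's final decision, and the correct answer. The `Interaction` schema is hypothetical, not the study's actual data format.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One record of a human and an AI tackling the same task instance.

    This schema is an assumption for illustration; the article does not
    specify the researchers' actual data format.
    """
    embedding: list[float]  # feature representation of the task instance
    ai_prediction: str      # the AI model's answer
    human_decision: str     # the human's final answer after seeing the AI
    ground_truth: str       # the correct answer
```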
The researchers envision this onboarding process being an important part of training for professionals in various fields, including the medical industry. “Doctors making treatment decisions with the help of AI will first have to do training similar to what we propose,” says senior author David Sontag, a professor of electrical engineering and computer science (EECS).
The onboarding system can also evolve over time, learning from new data and adapting to changes in the user’s perception of the AI model’s capabilities. It is built from a dataset containing many instances of a task; an algorithm then identifies regions of that data where the human collaborates incorrectly with the AI, such as trusting a prediction the model gets wrong, and those regions form the basis of the onboarding lessons.
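One plausible way to realize this region-finding step, sketched below under loose assumptions, is to collect the instances where the human's reliance decision was wrong (accepting an incorrect AI answer, or overriding a correct one) and cluster them in feature space, treating each cluster as a candidate region to teach. The k-means clustering choice and the `Interaction` fields from the earlier sketch are illustrative; this is not the published algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_error_regions(interactions: list[Interaction], n_regions: int = 5):
    """Cluster instances where the human collaborated incorrectly with the AI.

    An instance counts as incorrect collaboration if the human accepted a
    wrong AI prediction or rejected a correct one. Each resulting cluster
    is a candidate region to turn into an onboarding lesson. This is a
    sketch of the idea, not the researchers' actual method.
    """
    mistakes = [
        ex for ex in interactions
        # agreed-with-AI XOR AI-was-correct => the reliance decision was wrong
        if (ex.human_decision == ex.ai_prediction) != (ex.ai_prediction == ex.ground_truth)
    ]
    if not mistakes:
        return {}
    k = min(n_regions, len(mistakes))
    X = np.array([ex.embedding for ex in mistakes])
    model = KMeans(n_clusters=k, n_init=10).fit(X)
    # Group the mistaken instances by region; each group could then be
    # summarized and shown to the user as an onboarding exercise.
    regions = {i: [] for i in range(k)}
    for ex, label in zip(mistakes, model.labels_):
        regions[label].append(ex)
    return regions
```

An onboarding tutorial could then draw a handful of examples from each region, asking the user to decide whether to trust the AI before revealing the correct answer.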
The researchers tested the onboarding system with users on tasks such as detecting traffic lights and answering multiple-choice questions. It significantly improved users’ accuracy, particularly on the traffic light prediction task.
Next, the researchers hope to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also aim to leverage unlabeled data when building the onboarding process.
Ultimately, the researchers see onboarding as a crucial part of human-AI collaboration: a tutorial that helps the user learn when to trust the AI’s predictions and when to rely on their own judgment.