Introducing Human-Guided Exploration in AI
To teach an AI agent a new task, researchers often rely on reinforcement learning, a trial-and-error process in which the agent tries different actions and is rewarded for those that move it closer to the goal. The catch is that a human expert must first design a reward function that provides this incentive, and writing a good one is difficult and slow, particularly for complex tasks.
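To make that difficulty concrete, here is a minimal sketch of the kind of reward function an expert might hand-design for a simple block-pushing task. The state layout, weights, and success threshold below are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def expert_reward(state: np.ndarray, goal: np.ndarray) -> float:
    """Hand-designed reward shaping for a hypothetical block-pushing task.

    state = [gripper_x, gripper_y, block_x, block_y]
    goal  = [target_x, target_y]
    """
    gripper, block = state[:2], state[2:]
    # Term 1: encourage the gripper to approach the block.
    reach_cost = np.linalg.norm(gripper - block)
    # Term 2: encourage the block to approach the target.
    push_cost = np.linalg.norm(block - goal)
    # Term 3: a sparse bonus once the block is "close enough".
    success_bonus = 5.0 if push_cost < 0.05 else 0.0
    # The relative weights usually need careful manual tuning; getting them
    # wrong can let the agent exploit the shaping terms instead of solving
    # the task, which is part of why reward design is slow.
    return -0.5 * reach_cost - 1.0 * push_cost + success_bonus

# Example call with an arbitrary state and goal.
print(expert_reward(np.array([0.1, 0.2, 0.4, 0.4]), np.array([0.45, 0.42])))
```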
Researchers from MIT, Harvard University, and the University of Washington have introduced a new reinforcement learning approach that uses crowdsourced feedback instead of expert-designed rewards to teach the AI agent.
This new approach enables the AI agent to learn tasks quickly, while the feedback that guides it can be gathered from non-experts around the world.
The Noisy Feedback Dilemma
Methods that learn from human feedback typically collect simple binary labels and use them to optimize a reward function. When those labels come from non-experts, they are often wrong, producing a noisy reward function that can lead the agent to get stuck and never reach the goal.
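The toy simulation below (not the researchers' code) illustrates the failure mode under assumed conditions: binary "good/bad" labels on states are flipped 30% of the time to mimic non-expert noise, and a greedy agent that follows the resulting reward estimate can stall far from the goal.

```python
import numpy as np

# Twenty states lie on a line; the true goal is state 19, so "good" labels
# should become more likely closer to the goal. Each state gets a handful of
# binary labels, and each label is flipped with 30% probability.
rng = np.random.default_rng(0)
n_states, labels_per_state, flip_prob = 20, 5, 0.3

true_quality = np.linspace(0.0, 1.0, n_states)  # higher means closer to goal
labels = rng.random((n_states, labels_per_state)) < true_quality[:, None]
flips = rng.random((n_states, labels_per_state)) < flip_prob
noisy_labels = np.where(flips, ~labels, labels)

# Estimated reward = fraction of "good" labels per state.
reward_estimate = noisy_labels.mean(axis=1)

# A greedy agent that always moves to the neighboring state with the higher
# estimated reward can stall at a spurious local maximum far from the goal.
state = 0
for _ in range(100):
    neighbors = [s for s in (state - 1, state + 1) if 0 <= s < n_states]
    best = max(neighbors, key=lambda s: reward_estimate[s])
    if reward_estimate[best] <= reward_estimate[state]:
        break  # stuck: no neighbor looks better under the noisy estimate
    state = best

print("estimated rewards:", np.round(reward_estimate, 2))
print("agent stopped at state", state, "of", n_states - 1)
```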
A New Approach
The Human Guided Exploration (HuGE) method aims to address this problem by separating the process into two parts, each guided by its own algorithm: a goal selector, continually updated with crowdsourced feedback, that steers the agent toward promising states, and a self-supervised learning process in which the agent explores and improves on its own, as sketched below.
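The following schematic sketch shows that decoupling under simplifying assumptions; the one-dimensional environment, the pairwise feedback model, and its 30% noise rate are placeholder stand-ins, not the authors' implementation. The intuition is that noisy comparisons only influence which previously visited state the agent explores from next, while the policy learns self-supervised from the states it actually reaches.

```python
import random

def crowd_prefers(state_a, state_b, goal):
    """Stand-in for noisy crowd feedback: 'which of these two visited
    states looks closer to the goal?' Answers are wrong 30% of the time."""
    noisy = random.random() < 0.3
    closer = abs(state_a - goal) < abs(state_b - goal)
    return closer != noisy  # flip the honest answer when noisy

def train(goal=10, episodes=200):
    visited = [0]        # states the agent has reached so far
    policy_data = []     # (intermediate goal, reached state) training pairs
    for _ in range(episodes):
        # Part 1: use crowd comparisons to pick a promising frontier state.
        # Noise here only makes exploration less efficient; it never
        # corrupts the policy's learning signal.
        a, b = random.choice(visited), random.choice(visited)
        frontier = a if crowd_prefers(a, b, goal) else b

        # Part 2: explore around the chosen frontier, then learn
        # self-supervised from whatever states were actually reached
        # (hindsight-style: every reached state is a valid target).
        state, trajectory = frontier, []
        for _ in range(3):
            state += random.choice([-1, 1])  # random walk near the frontier
            trajectory.append(state)
        visited.extend(trajectory)
        policy_data.extend((frontier, reached) for reached in trajectory)

        if goal in visited:
            break
    return policy_data

data = train()
print("collected", len(data), "self-supervised training pairs")
```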
Faster Learning
The researchers found that the HuGE method helped AI agents learn to achieve the goal faster than other methods in both real-world and simulated experiments.
Future Applications
The researchers plan to improve HuGE so that the agent can learn from natural language and physical interactions, and they are interested in expanding this method to teach multiple agents at once.
Conclusion
This innovative method is promising for scalable robot learning and may revolutionize the way AI agents are taught.