
Advancing Generalisability in Artificial Intelligence Research at ICLR 2022


Working toward greater generalisability in artificial intelligence

The Tenth International Conference on Learning Representations (ICLR 2022) begins today and is being held virtually from April 25-29, 2022. Researchers from around the world have come together to share their latest work in fields including artificial intelligence, data science, machine vision, and robotics.

On the first day of the conference, Pushmeet Kohli, head of our AI for Science and Robust and Verified AI teams, will deliver a talk on how AI can dramatically improve solutions to a range of scientific problems, from genomics and structural biology to quantum chemistry and even pure mathematics.

Beyond sponsoring the event and regularly organizing workshops, our research teams are presenting a total of 29 papers, including 10 collaborations. Here is a preview of our upcoming oral, spotlight, and poster presentations:

Optimizing learning

We have several key papers that emphasize our efforts to make the learning process of our AI systems more efficient. This involves improving performance, advancing few-shot learning, and developing data-efficient systems to reduce computational costs.

In the paper titled “Bootstrapped meta-learning,” which received an ICLR 2022 Outstanding Paper Award, we propose an algorithm that lets an agent learn how to learn by teaching itself. We also introduce a policy improvement algorithm for AlphaZero, our system that taught itself to master chess, shogi, and Go, enabling it to keep improving even when trained with a small number of simulations. In addition, we present a regularizer that mitigates the risk of capacity loss across a broad range of reinforcement learning agents and environments, along with an improved architecture for training attention models more efficiently.
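To give a flavour of the core idea behind bootstrapped meta-learning — updating a meta-parameter toward a target that the learner itself produces by taking a few more steps — here is a deliberately simplified toy sketch. It is not the paper's algorithm: it meta-learns a single learning rate `eta` on a one-dimensional quadratic loss and uses a finite-difference meta-gradient so that it needs no libraries. All names and constants are illustrative assumptions.

```python
# A minimal, illustrative sketch (not the paper's algorithm): meta-learn a
# learning rate `eta` by matching a "bootstrapped" target that the learner
# produces for itself by taking a few extra plain-SGD steps.

def loss(theta):
    return 0.5 * (theta - 3.0) ** 2        # toy 1-D objective

def grad(theta):
    return theta - 3.0

def inner_update(theta, eta, steps):
    """Run `steps` gradient steps with the meta-learned rate `eta`."""
    for _ in range(steps):
        theta = theta - eta * grad(theta)
    return theta

def bootstrap_target(theta, steps, base_lr=0.1):
    """Target: where the learner would be after a few extra plain steps."""
    for _ in range(steps):
        theta = theta - base_lr * grad(theta)
    return theta

theta, eta = 10.0, 0.01                    # initial parameter and meta-parameter
meta_lr, eps = 0.01, 1e-4                  # meta step size; finite-difference width

for _ in range(200):
    # Meta-objective: squared distance between the updated parameter and the
    # bootstrapped target built from the learner's own future trajectory.
    def meta_objective(e):
        updated = inner_update(theta, e, steps=5)
        return (updated - bootstrap_target(updated, steps=3)) ** 2

    # Finite-difference meta-gradient keeps the sketch dependency-free.
    meta_grad = (meta_objective(eta + eps) - meta_objective(eta - eps)) / (2 * eps)
    eta = min(max(eta - meta_lr * meta_grad, 1e-4), 1.5)

    theta = inner_update(theta, eta, steps=5)   # the learner keeps training

print(f"learned eta = {eta:.3f}, final loss = {loss(theta):.6f}")
```

The full method learns a far richer update rule and matches the bootstrapped target under a chosen metric, but the self-generated target is the ingredient this sketch is meant to highlight.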

Exploration

Curiosity plays a crucial role in human learning, aiding in the advancement of knowledge and skills. Similarly, exploration mechanisms enable AI agents to go beyond existing knowledge and discover the unknown or try new approaches.

In our research on the question “When should agents explore?”, we investigate when agents should switch into exploration mode, at what timescales it makes sense to switch, and which signals best determine how long and how often to explore. In another paper, we propose an information-gain exploration bonus that allows agents to overcome the limitations of intrinsic rewards in reinforcement learning and learn a broader range of skills.
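To make the idea of an exploration bonus concrete, here is a minimal tabular sketch of an information-gain-style bonus: the agent keeps a Dirichlet belief over next-state dynamics and is rewarded by how much each new transition changes that belief. This is a generic illustration under assumed names (`InfoGainBonus`, the 0.1 bonus weight), not the construction from the paper, and it relies on SciPy's gamma and digamma functions.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha_new, alpha_old):
    """KL( Dir(alpha_new) || Dir(alpha_old) ), standard closed form."""
    a_new, a_old = np.sum(alpha_new), np.sum(alpha_old)
    kl = gammaln(a_new) - gammaln(a_old)
    kl -= np.sum(gammaln(alpha_new) - gammaln(alpha_old))
    kl += np.sum((alpha_new - alpha_old) * (digamma(alpha_new) - digamma(a_new)))
    return kl

class InfoGainBonus:
    """Tabular information-gain bonus: the KL between the agent's belief
    over next-state dynamics before and after seeing a transition."""

    def __init__(self, n_states, n_actions, prior=1.0):
        # Dirichlet pseudo-counts over next states for every (state, action).
        self.alpha = np.full((n_states, n_actions, n_states), prior)

    def bonus(self, s, a, s_next):
        alpha_old = self.alpha[s, a].copy()
        self.alpha[s, a, s_next] += 1.0   # update belief with the new transition
        return dirichlet_kl(self.alpha[s, a], alpha_old)

# Usage: add the bonus to the environment reward before learning.
bonus_model = InfoGainBonus(n_states=10, n_actions=4)
r_int = bonus_model.bonus(s=0, a=1, s_next=3)   # larger for rarely seen transitions
total_reward = 0.0 + 0.1 * r_int                # 0.1 is an assumed bonus weight
```

Because the pseudo-counts grow as a transition is revisited, the bonus naturally decays for familiar parts of the environment and stays high where the agent still has something to learn.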

Robust AI

To be deployed effectively in the real world, machine learning models must perform well when moving from training data to test data and on to new, unseen datasets. Understanding the causal mechanisms behind this kind of generalisation is essential, as they explain why some systems adapt to new conditions while others fail.

Expanding on our research in this area, we introduce an experimental framework that enables fine-grained analysis of a model's robustness to distribution shifts. Robustness also helps protect against harm, whether unintended or adversarial. In the case of image corruptions, for instance, we propose a technique that optimizes the parameters of image-to-image models to reduce the effects of blurring, fog, and other common corruptions.
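As a sense of what such an analysis can look like in practice, the sketch below evaluates an arbitrary classifier under a synthetic corruption applied at increasing severities. It is a generic robustness-evaluation loop, not the framework from the paper; `predict_fn`, `test_images`, and `test_labels` are hypothetical placeholders, and the blur is a deliberately crude stand-in for real corruption benchmarks.

```python
import numpy as np

def gaussian_blur(images, severity):
    """Crude stand-in corruption: replace each pixel with the mean of its
    four neighbours, `severity` times (real benchmarks use richer corruptions)."""
    blurred = images.astype(float)
    for _ in range(severity):
        blurred = 0.25 * (np.roll(blurred, 1, axis=-1) + np.roll(blurred, -1, axis=-1)
                          + np.roll(blurred, 1, axis=-2) + np.roll(blurred, -1, axis=-2))
    return blurred

def accuracy_under_shift(predict_fn, images, labels, corruption, severities):
    """Evaluate a classifier on progressively stronger corruptions."""
    results = {}
    for severity in severities:
        preds = predict_fn(corruption(images, severity))
        results[severity] = float(np.mean(preds == labels))
    return results

# Usage with a hypothetical `predict_fn` (any callable mapping images -> labels):
# report = accuracy_under_shift(predict_fn, test_images, test_labels,
#                               gaussian_blur, severities=[0, 1, 2, 4, 8])
```

Sweeping severity in this way exposes how quickly accuracy degrades under shift, which is the kind of fine-grained signal a robustness analysis aims to surface.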

Emergent communication

Studying how agents develop their own communication methods to accomplish tasks does more than help machine learning researchers understand those agents: it can also offer insights into linguistic behaviors within populations and, ultimately, lead to more interactive and useful AI.

In collaboration with researchers from Inria, Google Research, and Meta AI, we explore the influence of diversity within human populations on shaping language and partially resolve a contradiction observed in computer simulations involving neural agents. Additionally, we delve into the importance of scaling up datasets, task complexity, and population size as independent factors when improving language representations in AI. Moreover, we study the tradeoffs between expressivity, complexity, and unpredictability in multiplayer games where multiple agents communicate to achieve a shared goal.

To explore our complete range of work at ICLR 2022, visit the event page here.

