Exploring AI for Real-World Challenges: Safety, Adaptability, and Efficiency

Next week marks the start of the International Conference on Machine Learning (ICML 2023), taking place 23-29 July in Honolulu, Hawai’i. ICML brings together the AI community to share new ideas, tools, and datasets, and to make connections that advance the field. Researchers from around the world will be presenting their latest advances in areas such as computer vision and robotics.

Shakir Mohamed, our director for science, technology & society, will give a talk at the conference on machine learning with social purpose, addressing challenges in healthcare and climate, taking a sociotechnical view of AI, and exploring how to strengthen global communities.

We are proud to be a Platinum Sponsor of the conference and to continue working together with our long-term partners LatinX in AI, Queer in AI, and Women in Machine Learning.

During the conference, we will also be showcasing demos on AlphaFold, our advances in fusion science, and new models like PaLM-E for robotics and Phenaki for generating video from text.

Google DeepMind researchers will be presenting over 80 new papers at ICML this year. As many papers were submitted before Google Brain and DeepMind joined forces, papers initially submitted under a Google Brain affiliation will be included in a Google Research blog, while this blog features papers submitted under a DeepMind affiliation.

AI in the (simulated) world

The success of AI that can read, write, and create is underpinned by foundation models – AI systems trained on vast datasets that can learn to perform many tasks. Our latest research explores how we can translate these efforts into the real world, and lays the groundwork for more generally capable and embodied AI agents that can better understand the dynamics of the world, opening up new possibilities for more useful AI tools.

In an oral presentation, we introduce AdA, an AI agent that can adapt to solve new problems in a simulated environment, like humans do. AdA can quickly take on challenging tasks such as combining objects in novel ways, navigating unseen terrains, and cooperating with other players.

Similarly, we show how vision-language models could help train embodied agents – for example, by telling a robot what it’s doing.

The future of reinforcement learning

To develop responsible and trustworthy AI, we have to understand the goals at the heart of these systems. In reinforcement learning, one way this can be defined is through reward.

In an oral presentation, we aim to settle the reward hypothesis first posited by Richard Sutton, which states that all goals can be thought of as maximizing expected cumulative reward. We explain the precise conditions under which it holds and clarify the kinds of objectives that can – and cannot – be captured by reward in a general form of the reinforcement learning problem.
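As a generic illustration of the objective the reward hypothesis refers to (this sketch is not taken from the presentation itself), the expected cumulative reward is usually written as a discounted return, summing each step's reward weighted by a discount factor:

```python
def discounted_return(rewards, gamma=0.9):
    """Cumulative discounted return: G = r_0 + gamma*r_1 + gamma^2*r_2 + ...

    `rewards` is the sequence of per-step rewards from one episode;
    `gamma` in [0, 1) weights near-term rewards above distant ones.
    """
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A three-step episode: 1 + 0.9*0 + 0.9^2 * 2
print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # ≈ 2.62
```

Under the hypothesis, any goal an agent might pursue would correspond to some choice of reward signal whose expected discounted return the agent maximizes; the paper pins down exactly when such a reward exists.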

When deployed, AI systems need to be robust enough for the real world. We look at how to better train reinforcement learning algorithms within constraints, as AI tools often have to be limited for safety and efficiency.
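One common way to express such constraints in reinforcement learning (a generic Lagrangian-style sketch, not the specific method in the work above) is to track a separate cost signal alongside the reward and penalize the objective by a multiplier-weighted total cost:

```python
def penalized_return(rewards, costs, lam):
    """Constrained-RL style objective: total reward minus
    a Lagrange-multiplier-weighted total constraint cost.

    `rewards` and `costs` are per-step signals from one episode;
    `lam` >= 0 controls how harshly constraint violations are punished.
    """
    return sum(rewards) - lam * sum(costs)

# An episode earning 2.0 reward while incurring 0.5 safety cost,
# with the constraint weighted at lam = 2.0: 2.0 - 2.0 * 0.5
print(penalized_return([1.0, 1.0], costs=[0.5, 0.0], lam=2.0))  # 1.0
```

Raising `lam` steers the agent toward behaviors that satisfy the constraint even at the expense of raw reward, which is one way safety or efficiency limits are folded into training.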

In our research, which was recognized with an ICML 2023 Outstanding Paper Award, we explore how we can teach models complex long-term strategy under uncertainty with imperfect information games. We share how models can play to win two-player games even without knowing the other player’s position and possible moves.

Challenges at the frontier of AI

Humans can easily learn, adapt, and understand the world around us. Developing advanced AI systems that can generalize in human-like ways will help create AI tools we can use in our everyday lives and to tackle new challenges.

One way that AI adapts is by quickly changing its predictions in response to new information. In an oral presentation, we look at plasticity in neural networks, how it can be lost over the course of training – and ways to prevent that loss.

We also present research that could help explain the type of in-context learning that emerges in large language models by studying neural networks meta-trained on data sources whose statistics change spontaneously, such as in natural language prediction.

In an oral presentation, we introduce a new family of recurrent neural networks (RNNs) that perform better on long-term reasoning tasks to unlock the promise of these models for the future.

Finally, in ‘quantile credit assignment’, we propose an approach to disentangle luck from skill. By establishing a clearer relationship between actions, outcomes, and external factors, AI can better understand complex, real-world environments.
