Reflections and Lessons on Our Breakthrough in AI: A Responsible Approach
Introducing DeepMind’s Commitment to Responsible AI
DeepMind is dedicated to advancing science and benefiting humanity through its mission to solve intelligence. However, with this mission comes great responsibility. We understand the importance of evaluating the ethical implications of our research and its potential applications. Building on our commitment to responsible governance, research, and impact, we have set clear principles that prioritize the benefits of artificial intelligence (AI) while mitigating its risks.
Contributing to AI Community Standards and Operating Principles
At DeepMind, we believe that responsible innovation is a collective effort. That’s why we have actively contributed to AI community standards, such as Google’s AI Principles, the Partnership on AI, and the OECD. Our Operating Principles, which have been central to our decision-making since our inception, define our commitment to widespread benefit and guide our decisions about which areas of research and application we will, and will not, pursue.
From Principles to Practice
While written principles are important, their translation into practice is crucial. We understand that applying responsible governance, research, and impact to complex AI projects presents challenges. To overcome these challenges, we have developed internal toolkits, published papers on sociotechnical issues, and supported efforts to enhance deliberation and foresight within the AI community.
Empowering Responsible Innovation through Institutional Review
To ensure responsible development and safeguard against harm, our interdisciplinary Institutional Review Committee (IRC) meets regularly to evaluate DeepMind projects, papers, and collaborations. This committee comprises experts from various disciplines, including machine learning researchers, ethicists, safety experts, engineers, security professionals, and policy specialists. Through their diverse perspectives, they identify ways to expand the benefits of our technologies, suggest areas of research and applications to reconsider, and determine when external consultation is necessary.
Lessons from our AlphaFold Project: Solving Protein Structure Prediction
One of DeepMind’s most significant breakthroughs is our AlphaFold AI system, which successfully solved the challenge of protein structure prediction. This achievement has accelerated progress in fields like sustainability, food security, drug discovery, and human biology.
Understanding the Potential and Risks
Before releasing AlphaFold to the wider community, we carefully analyzed the practical opportunities and risks it presented. To gain external input, we sought advice from over 30 experts in biology research, biosecurity, bioethics, and human rights. Some consistent themes emerged from our discussions:
1. Balancing Benefit and Risk: We recognized the importance of minimizing potential harm while maximizing the benefits of AlphaFold. Although AlphaFold itself does not significantly increase the risk of protein-related harm, we acknowledged the need to evaluate future advances carefully.
2. Accurate Confidence Measures: Experimental biologists stressed the importance of providing well-calibrated and usable confidence metrics for AlphaFold’s predictions. This helps users determine when to trust a prediction and when alternative approaches may be necessary.
3. Equitable Benefit Distribution: To avoid worsening disparities within the scientific community, we prioritized supporting underfunded fields and partnering with organizations working in neglected areas, such as tropical diseases.
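To make the second theme concrete: AlphaFold reports a per-residue confidence score called pLDDT (on a 0–100 scale), and in its output PDB files this score is stored in the B-factor column. The sketch below, a minimal illustration rather than official tooling, shows how a user might read those scores and flag low-confidence regions; the 70.0 cut-off is one commonly used threshold, chosen here for illustration.

```python
# Minimal sketch: extract per-residue pLDDT confidence scores from an
# AlphaFold-style PDB file, where pLDDT occupies the B-factor column.

def per_residue_plddt(pdb_text, ca_only=True):
    """Map residue number -> pLDDT, read from the B-factor column (cols 61-66)."""
    scores = {}
    for line in pdb_text.splitlines():
        if not line.startswith("ATOM"):
            continue
        atom_name = line[12:16].strip()
        if ca_only and atom_name != "CA":
            continue  # one score per residue is enough; use the C-alpha atom
        res_num = int(line[22:26])
        plddt = float(line[60:66])  # B-factor column holds pLDDT in AlphaFold output
        scores[res_num] = plddt
    return scores

def low_confidence_residues(scores, threshold=70.0):
    """Residues whose pLDDT falls below a chosen confidence threshold."""
    return sorted(r for r, s in scores.items() if s < threshold)

# Two illustrative ATOM records in fixed-column PDB format.
sample = (
    "ATOM      1  CA  MET A   1      11.104  13.207   2.100  1.00 92.50           C\n"
    "ATOM      2  CA  ALA A   2      12.000  14.000   3.000  1.00 55.30           C\n"
)
scores = per_residue_plddt(sample)
flagged = low_confidence_residues(scores)
```

A user could treat residues in `flagged` as candidates where a prediction should not be trusted on its own and experimental methods may be preferable.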
Our Approach to Release
Based on external input and IRC endorsement, we adopted a release approach that addressed various needs:
1. Peer-reviewed Papers and Open Source Code: We published two papers on AlphaFold in Nature, accompanied by open source code. This enables researchers to implement and improve upon AlphaFold easily. Additionally, we provided a Google Colab option for users to input a protein sequence and receive predicted structures without running the code themselves.
2. Partnership with EMBL-EBI: We collaborated with EMBL’s European Bioinformatics Institute to release a comprehensive dataset of protein structure predictions. EMBL-EBI, as a public institution, allows easy access to protein structure predictions for researchers and the general public.
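The AlphaFold Protein Structure Database hosted with EMBL-EBI serves each prediction as a downloadable file named by UniProt accession. As a rough sketch, not an official client, the helper below builds such a URL and fetches the file; the `model_v4` version suffix is an assumption that may change as the database is updated.

```python
# Hedged sketch: fetch a predicted structure from the AlphaFold Protein
# Structure Database (EMBL-EBI). File names follow the pattern
# AF-<UniProt accession>-F1-model_v<N>.<format>; the version number is
# an assumption here and may differ for newer database releases.

import urllib.request

ALPHAFOLD_DB = "https://alphafold.ebi.ac.uk/files"

def prediction_url(uniprot_accession, version=4, fmt="pdb"):
    """Build the download URL for one predicted structure."""
    return f"{ALPHAFOLD_DB}/AF-{uniprot_accession}-F1-model_v{version}.{fmt}"

def fetch_prediction(uniprot_accession, path, version=4):
    """Download a predicted structure to a local file and return the path."""
    url = prediction_url(uniprot_accession, version)
    urllib.request.urlretrieve(url, path)
    return path
```

For example, `prediction_url("P69905")` points at the predicted structure for human hemoglobin subunit alpha, which any researcher can retrieve without running the model themselves.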
Continual Learning and Improvement
While we have made significant progress, responsible AI is an ongoing journey. We remain committed to learning and iterating our approach. We welcome feedback as we strive to contribute to the responsible AI community. To learn more about our process and the reflections on our AlphaFold project, please visit our detailed documentation.
In conclusion, DeepMind’s commitment to responsible AI is evident in our principles, practices, and project reflections. By prioritizing benefits, evaluating risks, and seeking external input, we aim to pioneer responsibly and contribute to the advancement of AI for the betterment of society.