AI Safety Fund: Advancing Research and Funding to Ensure Future Security

The AI Safety Fund: Advancing Research and Promoting Safety in Artificial Intelligence

In recent years, the field of artificial intelligence (AI) has advanced rapidly across many industries. As progress continues, independent academic research focused on AI safety becomes increasingly important. To address this need, the Forum, together with philanthropic partners, is establishing the AI Safety Fund. The fund will support independent researchers around the world who are affiliated with academic institutions, research organizations, and startups. Initial funding comes from Anthropic, Google, Microsoft, and OpenAI, along with generous contributions from philanthropic partners including the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn. Total initial funding exceeds $10 million, and we anticipate further contributions from additional partners in the future.

Commitment to AI Safety and the Global Knowledge Base

Earlier this year, the Forum’s members pledged their commitment to AI safety at the White House, including a commitment to facilitate third-party discovery and reporting of vulnerabilities in AI systems. The AI Safety Fund is an integral part of fulfilling that pledge, as it provides external researchers with the funding needed to evaluate and better understand cutting-edge systems. By bringing a wider range of voices and perspectives into the global discussion on AI safety and into the general AI knowledge base, we aim to foster a more comprehensive understanding of this crucial field.

Focus and Goals of the Fund

The primary focus of the AI Safety Fund is to support the development of new model evaluations and techniques for red teaming AI models, aiding the development and testing of evaluation methods for potentially risky capabilities in advanced AI systems. We believe that increased funding in this area will help raise safety and security standards and provide industry, governments, and civil society with valuable insights into mitigating and controlling the challenges presented by AI systems.

In the coming months, the Fund will announce a call for proposals. The Meridian Institute will oversee the administration of the fund, supported by an advisory committee consisting of independent external experts, experts from AI companies, and individuals with grantmaking experience.

