News

New AI Safety Fund Launched By Frontier Model Forum


The Frontier Model Forum, a collective of leading AI companies including Anthropic, Google, Microsoft, and OpenAI, has announced the establishment of a new fund to support research on tools for testing and evaluating advanced AI models. With an initial pledge of $10 million, the fund aims to advance the development of evaluation techniques for potentially dangerous capabilities of frontier AI systems.

Key Takeaway

The Frontier Model Forum has announced a $10 million fund to support research on evaluation techniques for advanced AI models. While the pledge is relatively small compared to the commercial investments made by the Forum's members, it aims to address the potential hazards of frontier AI systems. The fund will be administered by the Meridian Institute, and further contributions are anticipated. However, some experts argue that this initial tranche may not be sufficient to make a significant impact on AI safety research.

Investing in AI Safety Research

The Frontier Model Forum, in collaboration with the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, former Google CEO Eric Schmidt, and Estonian billionaire Jaan Tallinn, plans to support researchers from academia, research institutions, and startups. The fund will be administered by the Meridian Institute, based in Washington, D.C. The Institute, working with an advisory committee of external experts, AI industry specialists, and experienced grant makers, will issue a call for proposals in the coming months.

According to the Frontier Model Forum, the primary focus of the fund will be on developing and testing evaluation techniques for the potentially hazardous capabilities of advanced AI models such as GPT-4 and ChatGPT. While the $10 million pledge is not insignificant, some have commented that it appears conservative compared to the vast amounts of money invested by Frontier Model Forum members in their commercial AI ventures.

Comparative Perspectives

For instance, Anthropic recently received investments worth billions from Amazon and Google, while Microsoft pledged $10 billion to OpenAI. OpenAI itself, with annual revenues exceeding $1 billion, is also reportedly in talks to sell shares that could value the company as high as $90 billion. In terms of AI safety grants, the Frontier Model Forum’s fund also appears relatively small compared to the commitments made by organizations such as Open Philanthropy, The Survival and Flourishing Fund, and the U.S. National Science Foundation.

Regardless, the cost of developing even smaller AI models for testing can be significant, ranging from hundreds of thousands to millions of dollars. Additionally, salaries for the research teams need to be considered. The Frontier Model Forum suggests the possibility of a larger fund in the future, which could have a greater impact on AI safety research. However, concerns about potential undue influence from for-profit backers on the research process remain.
