
OpenAI and Anthropic will share large models with the U.S. government 

Generative AI giants OpenAI and Anthropic have signed agreements with the U.S. government to share access to major models prior to release.

The U.S. Artificial Intelligence Safety Institute announced that the agreements cover collaboration on AI safety research, testing, and evaluation of new models released by both companies.


The institute, established in 2023 and managed by the Department of Commerce’s National Institute of Standards and Technology (NIST), aims to secure the development, testing, and deployment of AI technologies for public users.

Under the agreements, the AI firms have established a framework to share access to new large models both before and after public availability.


In simple terms, new models will be verified and validated for public safety before their launch and monitored during their general release.

The collaboration will also allow the AI Safety Institute to conduct joint research on best practices for new AI features, evaluate safety risks, and explore ways to mitigate those risks.


Furthermore, the AI Safety Institute will provide feedback to Anthropic and OpenAI on safety improvements to their new models.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”


For now, OpenAI and Anthropic have not shared specific details of the models they’ll provide to the AI Safety Institute.

(source)
