Tech giants Google LLC and Microsoft Corp., along with AI startups OpenAI LLC and Anthropic PBC, have joined forces to establish the Frontier Model Forum, a collaborative initiative aimed at developing safeguards for AI technology. By acting now, the companies are getting ahead of government regulations that could mandate such measures.
The Frontier Model Forum’s main objectives include advancing AI safety research; identifying best practices for the responsible development and deployment of frontier models; engaging with policymakers, academics, civil society, and companies to address trust and safety concerns; and supporting efforts to develop applications that tackle society’s greatest challenges.
The forum will leverage the technical and operational expertise of its member companies to benefit the entire AI ecosystem. This will involve advancing technical evaluations and benchmarks, as well as creating a public library of solutions to promote industry best practices and standards.
To encourage broader participation, the four founding companies have outlined criteria for prospective members: organizations interested in joining the forum must be involved in developing or deploying the large-scale machine learning models that power generative artificial intelligence platforms.
By forming this supergroup, Google, Microsoft, OpenAI, and Anthropic are taking a proactive approach to AI regulation, collaborating to ensure responsible and safe AI development while working to address societal challenges through innovative AI applications.