
Google, Microsoft, OpenAI & Anthropic form industry body to ensure responsible AI 

Anthropic, Google, Microsoft and OpenAI have come together to form the Frontier Model Forum, a new industry body focused on ensuring the safe and responsible development of frontier AI models. The Forum defines frontier models as large-scale machine-learning models that exceed the capabilities of the most advanced existing models and can perform a wide variety of tasks. 

According to a blog post on Google’s website, the Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as by advancing technical evaluations and benchmarks and developing a public library of solutions to support industry best practices and standards. 

The post states four core objectives for the Forum: 

  1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
  2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
  3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
  4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.


The official statement from Google highlighted the need to extend the existing efforts to mitigate the risks associated with AI. 

“Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.

To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.”

Brad Smith, Vice Chair & President, Microsoft, said, “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.” 
