AI PRINCIPLES

Our principles

Our approach to developing and harnessing the potential of AI is grounded in our founding mission — to organize the world’s information and make it universally accessible and useful. We believe our approach to AI must be both bold and responsible. Bold in rapidly innovating and deploying AI in groundbreaking products used by and benefiting people everywhere, contributing to scientific advances that deepen our understanding of the world, and helping humanity address its most pressing challenges and opportunities. And responsible in developing and deploying AI that addresses both user needs and broader responsibilities, while safeguarding user safety, security, and privacy.

We approach this work together, by collaborating with a broad range of partners to make breakthroughs and maximize the broad benefits of AI, while empowering others to build their own solutions.


Created by Aurora Mititelu as part of the Visualising AI project launched by Google DeepMind.

Our AI Principles in action

Our AI Principles guide the development and deployment of our AI systems. These Principles inform our frameworks and policies, such as the Secure AI Framework for security and privacy, and the Frontier Safety Framework for evolving model capabilities. Our governance process covers model development, application deployment, and post-launch monitoring. We identify and assess AI risks through research, external expert input, and red teaming. We then evaluate our systems against safety, privacy, and security benchmarks. Finally, we build mitigations with techniques such as safety tuning, security controls, and robust provenance solutions.

  • Open Research on Responsible AI

  • Google’s AI products and services: Guided by our AI Principles

  • The value of a shared understanding of AI models