How Microsoft AI approaches responsible AI to build a more accessible, sustainable and innovative world
At Microsoft, we believe that when you create powerful technologies, you must also ensure that the technology is used responsibly. For more than six years, Microsoft has invested in a cross-company program to ensure that our AI systems are responsible by design. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
In 2017, we launched the Aether Committee, bringing together researchers, engineers and policy experts to focus on responsible AI issues and to help craft the AI principles we adopted in 2018. We then created the Office of Responsible AI to coordinate responsible AI governance, and published the first version of our Responsible AI Standard, a framework for translating our high-level principles into actionable guidance for our engineering teams. In 2021, we described the key building blocks for operationalizing this program, including an expanded governance structure, training to equip our employees with new skills, and processes and tooling to support implementation. And in 2022, we strengthened our Responsible AI Standard with its second version, which sets out how we will build AI systems using practical approaches for identifying, measuring and mitigating harms ahead of time, and for ensuring that controls are engineered into our systems from the outset.
As we look to the future, we will do even more. As AI models continue to advance, we know we will need to address new and open research questions, close measurement gaps and design new practices, patterns and tools. We’ll approach the road ahead with humility and a commitment to listening, learning and improving every day. But our own efforts and those of other like-minded organizations won’t be enough. This transformative moment for AI calls for a wider lens on the impacts of the technology – both positive and negative – and a much broader dialogue among stakeholders. We need to have wide-ranging and deep conversations and commit to joint action to define the guardrails for the future.
We believe we should focus on three key goals.
- First, we must ensure that AI is built and used responsibly and ethically.
- Second, we must ensure that AI advances international competitiveness and national security.
- Third, we must ensure that AI serves society broadly, not narrowly.
Our goal is to develop and deploy AI that will have a beneficial impact and earn trust from society.
We are committed to sharing our own learnings, innovations and best practices with decision makers, researchers, data scientists, developers and others, and we will continue to participate in broader societal conversations about how AI should be used responsibly.
Read more here: Meeting the AI moment: advancing the future through responsible AI
Microsoft is also focused on helping organizations take full advantage of AI, and we are investing heavily in programs that provide technology, resources and expertise to empower those working to create a more sustainable, safe and accessible world. Microsoft AI is helping customers solve some of society’s greatest challenges, whether that’s making farming more sustainable, protecting vulnerable communities from climate change, studying endangered species, or cleaning up the world’s oceans. Our cloud and AI services help businesses cut energy consumption, reduce their physical footprints, and design more sustainable products. Using AI and Azure, customers can manage their businesses in a way that protects communities, biodiversity and the planet.
Microsoft Research is helping build a more resilient society through mission-driven research and applied technology, and Microsoft has committed more than $185 million through our AI for Good initiative, which provides funding, technology and expertise to individuals, nonprofits and other organizations so they can tackle these challenges.