Introducing Microsoft’s Responsible AI Standard
Microsoft has publicly released its Responsible AI Standard, a framework that guides the development of AI systems. This is a crucial step towards creating more trustworthy AI. The purpose of sharing this standard is to invite feedback, contribute to the discussion on building better norms and practices around AI, and share what Microsoft has learned.
Guiding AI development towards responsible outcomes
AI systems are the result of many decisions made by those who develop and deploy them. It is important to proactively steer these decisions towards outcomes that benefit society and uphold fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The Responsible AI Standard outlines how Microsoft aims to build AI systems that uphold these values and earn society’s trust. It offers practical guidance that goes beyond high-level principles, setting out specific goals and requirements that developers must follow throughout the lifecycle of AI systems. The Standard also provides resources and tools to aid implementation.
The need for practical guidance in the AI landscape
As AI becomes more prevalent in our lives, there is a growing need for practical guidance. Laws and regulations have not caught up with the unique risks posed by AI, so it is essential to address those risks by designing AI systems responsibly. Microsoft acknowledges this responsibility and has refined its internal policies and drawn lessons from its product experience to develop this second version of the Responsible AI Standard.
Addressing fairness in speech-to-text technology
One recognized harm associated with AI systems is their potential to exacerbate biases and inequities. A study revealed that speech-to-text technology disproportionately produced errors for Black and African American users compared to white users. Microsoft recognized the need to improve and took steps to address this issue, including engaging an expert sociolinguist and expanding data collection efforts. The Responsible AI Standard includes goals and requirements to mitigate fairness harms and guide developers in creating more equitable speech-to-text technology.
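The disparity described above is typically surfaced by comparing word error rate (WER), the standard accuracy metric for speech-to-text, across speaker groups. The sketch below is a minimal illustration of that kind of audit, not Microsoft's actual evaluation pipeline; the function names and sample structure are assumptions, and a real fairness review would use a large, representative transcript corpus.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over word sequences via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def group_wer(samples):
    """Average WER per speaker group.

    samples: iterable of (group, reference_transcript, system_transcript).
    A large gap between groups signals a fairness harm to investigate.
    """
    per_group = {}
    for group, ref, hyp in samples:
        per_group.setdefault(group, []).append(word_error_rate(ref, hyp))
    return {g: sum(rates) / len(rates) for g, rates in per_group.items()}
```

Comparing the per-group averages returned by `group_wer` is what makes a disparity like the one found in the study measurable, and therefore actionable through expanded data collection.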
Implementing appropriate use controls
Microsoft’s Custom Neural Voice technology allows for the creation of synthetic voices that closely resemble the original speaker’s voice. While this technology has numerous beneficial applications, it also has the potential for misuse. Microsoft implemented a layered control framework to restrict access, define acceptable use cases, and establish technical guardrails. Similar controls will be applied to facial recognition services.
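To make the idea of a "layered control framework" concrete, here is a minimal sketch of how such gating can compose: each layer can independently deny a request. All names here (the approved use-case set, the consent flag, the request fields) are hypothetical illustrations, not Microsoft's actual access-control implementation.

```python
from dataclasses import dataclass

# Hypothetical registry of use cases vetted as acceptable.
APPROVED_USE_CASES = {"accessibility_narration", "branded_voice_assistant"}

@dataclass
class VoiceRequest:
    customer_id: str
    use_case: str
    has_talent_consent: bool  # consent on record from the original voice talent

def is_request_allowed(request: VoiceRequest, registered_customers: set) -> bool:
    """Layered gate: every layer must pass before synthesis is permitted."""
    if request.customer_id not in registered_customers:
        return False  # layer 1: access restricted to registered customers
    if request.use_case not in APPROVED_USE_CASES:
        return False  # layer 2: only defined acceptable use cases
    if not request.has_talent_consent:
        return False  # layer 3: technical guardrail requiring recorded consent
    return True
```

The value of layering is that no single check carries the whole burden: removing any one layer still leaves the others to block a misuse path.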
Designing AI systems fit for purpose
To be trustworthy, AI systems need to be appropriate solutions to the problems they aim to solve. As part of aligning the Azure Face service with the Responsible AI Standard, Microsoft is retiring capabilities that infer emotional states and identity attributes. Citing the lack of scientific consensus on the definition of emotions, the challenge of generalizing such inferences across use cases and demographics, and privacy concerns, Microsoft will not provide open-ended API access to technology that claims to infer emotional states from facial expressions. The Responsible AI Standard includes goals and requirements that help assess system validity and provide guidance for high-impact use cases.
Supporting resources for responsible AI implementation
Microsoft has made available resources to support the Responsible AI Standard. These include an Impact Assessment template and guide, which help teams explore the impact of their AI systems, and Transparency Notes, which disclose the capabilities and limitations of core building block technologies. These resources enable responsible deployment choices.
A collaborative and iterative approach
The development of the Responsible AI Standard involved input from various Microsoft technologies, professions, and geographic locations. It is a significant step forward in Microsoft’s commitment to responsible AI.