Google’s Responsible AI research is driven by collaboration among diverse teams, researchers, and the wider community. The Perception Fairness team combines expertise in computer vision and ML fairness to design inclusive systems guided by Google’s AI Principles. Its research targets fairness and inclusion in multimodal ML systems, spanning classification, localization, captioning, and generative models for image and video editing.

The team aims to use ML to model human perception and to address system biases. To reduce representational harms, it studies societal context and develops scalable solutions informed by sociology and social psychology. Its tools analyze media collections for representation patterns and biases, in partnership with academic researchers and brands.

The team is extending ML fairness concepts beyond images of people to illustrations and abstract depictions. It also analyzes the bias properties of perceptual systems beyond summary statistics, measuring nuanced system behavior and balancing fairness metrics against other product metrics. This work includes democratizing fairness analysis tooling, developing benchmarks and datasets, and advancing novel fairness analytics.

The team also collaborates with product teams to improve algorithmic performance and mitigate biases; it has launched upgraded components in Google Photos and improved ranking algorithms in Google Images, and is developing generative AI models that produce inclusive and controllable outputs. The field of perception fairness still has room for breakthrough techniques, and the team continues to contribute technical advances backed by interdisciplinary scholarship. Closing the gap between measuring images and understanding human identity and expression will require complex media analytics solutions.
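The measurement ideas above — representation patterns in a media collection, per-group system behavior rather than a single summary statistic, and the disparity gap between groups — can be sketched in a few lines. This is a minimal illustration under assumed inputs (simple group labels and predictions), not Google’s actual fairness tooling; all function names here are hypothetical.

```python
from collections import Counter

def representation_stats(group_labels):
    """Share of each group across a labeled media collection."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def per_group_error_rates(y_true, y_pred, groups):
    """Error rate computed per group -- more revealing than one
    overall accuracy number, which can hide uneven behavior."""
    errors, totals = Counter(), Counter()
    for true, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if true != pred:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

def max_disparity(rates):
    """Gap between the worst- and best-served groups; one simple
    fairness metric to weigh against other product metrics."""
    return max(rates.values()) - min(rates.values())
```

In practice, per-slice evaluation like this is the starting point; nuanced analysis would look at many metrics and intersectional slices rather than a single gap.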