Home AI News Navigating the Ethical Landscape of GenAI: Google’s Proactive Approach

Navigating the Ethical Landscape of GenAI: Google’s Proactive Approach

Navigating the Ethical Landscape of GenAI: Google’s Proactive Approach

Building Responsible AI & Data Systems with Google Research

The Responsible AI and Human-Centered Technology (RAI-HCT) team within Google Research is focused on bringing responsible human-centered AI to billions of users. They are working on advancing the theory and practice of responsible AI through culturally-aware research. One specific team within RAI-HCT, BRAIDS (Building Responsible AI Data and Solutions), is simplifying the adoption of responsible AI practices by creating scalable tools and processes. Their focus is currently on addressing the unique challenges associated with generative AI (GenAI).

The unprecedented capabilities of GenAI models have led to a surge in innovative applications. While GenAI has brought many benefits, it also presents risks for disinformation, bias, and security. To tackle these challenges, Google has implemented a comprehensive risk assessment framework and internal governance structures based on their AI Principles, emphasizing the prevention of harm.

The BRAIDS team is focused on creating tools and techniques for identifying ethical and safety risks in GenAI products. One key approach is adversarial testing, which systematically evaluates how models behave with potentially harmful inputs across various scenarios. Their research is centered on scaled adversarial data generation, automated test set evaluation, and rater diversity.

Scaled adversarial data generation focuses on creating test sets that stress the model capabilities under adverse circumstances. The BRAIDS team prioritizes identifying societal harms to the diverse user communities impacted by their models.

Automated test set evaluation and community engagement involve scaling the testing process for quick, efficient evaluations of model responses and involving community insights to identify “unknown unknowns”.

Rater diversity focuses on understanding and mitigating subjective safety assessments of GenAI outputs. The team explores the intersectional effects of rater demographics and content characteristics on safety perceptions.

Overall, the proactive adversarial testing program by the BRAIDS team aims to identify and mitigate GenAI risks to ensure inclusive model behavior. This comprehensive approach, along with constant collaboration with diverse user communities and industry experts, is helping to address the challenges of building responsibly with GenAI.

Source link


Please enter your comment!
Please enter your name here