
Comprehensively Evaluating Social and Ethical Risks of AI Systems: A New Framework


Introducing a Context-Based Framework for Evaluating the Social and Ethical Risks of AI Systems

Generative AI systems are increasingly capable and are already being used across many fields, from writing books and creating designs to assisting medical practitioners. Evaluating the social and ethical risks these systems pose is therefore essential to their responsible development and deployment.

In our new paper, we propose a three-layered framework for assessing the social and ethical risks of AI systems. This framework includes evaluations of the system’s capabilities, human interaction, and systemic impacts.

We have identified three main gaps in the current state of safety evaluations: context, risk-specific evaluations, and multimodality. To address these gaps, we suggest repurposing existing evaluation methods and adopting a comprehensive, layered approach to evaluation. Our case study on misinformation shows how this approach can provide insight both into how likely a system is to produce misinformation and into how that misinformation might spread once the system is in use.

Context plays a critical role in evaluating AI risks. A system’s capabilities are an important indicator of the risks it may pose: for example, a system that produces inaccurate or misleading outputs is more likely to contribute to misinformation and, in turn, to erode public trust. However, evaluating capabilities alone is not enough to ensure safety. It is also essential to consider the context in which the AI system is used, such as the users’ goals and the system’s intended function.

Our framework extends beyond capability evaluation to include human interaction and systemic impact. Human interaction evaluation focuses on how people use the AI system and whether it performs as intended. Systemic impact evaluation examines the broader structures in which the AI system is embedded, such as social institutions and labor markets. By integrating evaluations at these different layers, we can comprehensively assess the safety of AI systems.
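To make the layered structure concrete, here is a minimal, purely illustrative sketch of how evaluations could be tagged by layer and grouped into a suite for a single risk area such as misinformation. The class names, field names, and evaluation names are hypothetical and are not taken from the framework itself.

```python
# Purely illustrative sketch: the paper describes the three layers conceptually,
# so the class names, fields, and evaluation names below are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Layer(Enum):
    CAPABILITY = "capability"                # what the system can produce in isolation
    HUMAN_INTERACTION = "human interaction"  # how people actually use the system
    SYSTEMIC_IMPACT = "systemic impact"      # downstream effects on institutions and markets


@dataclass
class Evaluation:
    name: str
    layer: Layer
    risk_area: str  # e.g. "misinformation"


# A misinformation assessment might combine evidence from all three layers.
misinformation_suite = [
    Evaluation("factual_accuracy_benchmark", Layer.CAPABILITY, "misinformation"),
    Evaluation("user_belief_shift_study", Layer.HUMAN_INTERACTION, "misinformation"),
    Evaluation("information_ecosystem_monitoring", Layer.SYSTEMIC_IMPACT, "misinformation"),
]

for evaluation in misinformation_suite:
    print(f"{evaluation.layer.value}: {evaluation.name}")
```

Tagging each evaluation with its layer makes it easy to see, for any given risk area, whether evidence is being gathered beyond capability alone.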

Ensuring the safety of AI systems is a shared responsibility. AI developers, application developers, designated public authorities, and broader public stakeholders all have a role to play in evaluating and mitigating risks. Each actor is positioned to perform evaluations at different layers of the framework.

Our review of existing safety evaluations for generative AI systems confirms these three gaps. Most evaluations probe only the capabilities of AI systems and overlook human interaction and systemic impact. They also tend to cover a narrow set of risk areas and often fail to consider newer modalities such as image, audio, and video.

To address these gaps, we are compiling a list of safety evaluations for generative AI systems. Repurposing existing evaluations and leveraging large models themselves as evaluators can be a practical first step, but more rigorous evaluation methods still need to be developed for human interaction and systemic impact. Collaboration between AI developers, public actors, and other parties will be essential to building a robust evaluation ecosystem.
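As a rough illustration of what auditing such a list might involve, the hypothetical sketch below checks an evaluation registry against the layer and modality dimensions discussed above and reports uncovered combinations. The registry entries and field names are invented for the example and do not reflect the paper’s actual evaluation list.

```python
# Hypothetical coverage audit over an evaluation registry; all names and entries
# are invented for illustration, not taken from the paper's evaluation list.
from itertools import product

LAYERS = ["capability", "human_interaction", "systemic_impact"]
MODALITIES = ["text", "image", "audio", "video"]

# Each registered evaluation declares the layer and the modality it covers.
registry = [
    {"name": "toxicity_benchmark", "layer": "capability", "modality": "text"},
    {"name": "privacy_leak_probe", "layer": "capability", "modality": "text"},
]

covered = {(entry["layer"], entry["modality"]) for entry in registry}

# Report every layer/modality combination with no evaluation at all.
for layer, modality in product(LAYERS, MODALITIES):
    if (layer, modality) not in covered:
        print(f"gap: no {layer} evaluation for {modality}")
```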

In summary, evaluating the social and ethical risks of AI systems is essential to their responsible development and deployment. Our three-layered framework offers a comprehensive approach that addresses gaps in current safety assessments. By taking context into account and involving a broad range of stakeholders, we can work toward the safer use of widely deployed generative AI systems.
