Bridging the Gap Between AI and Societal Context Understanding
AI products and technologies are developed and used in a societal context, which includes social, cultural, historical, political, and economic factors. However, translating this complex context into quantitative representations for machine learning (ML) poses many challenges. The problem understanding phase of AI development is crucial because it shapes how problems are formulated and how ML systems solve them. Without a deep understanding of societal context, ML solutions can be biased and fragile.
Unfortunately, AI developers often lack the tools and knowledge to consider societal context effectively. They tend to simplify and abstract away this complex context, resulting in a shallow understanding of the problems they aim to solve. On the other hand, users and stakeholders who are immersed in the context have a more qualitative understanding of these problems. This disconnect between developers and users is known as the problem understanding chasm.
The problem understanding chasm has real-world consequences. For example, a widely used healthcare algorithm designed to select patients for special care programs was found to exhibit racial bias in its recommendations. Its designers had failed to consider critical factors such as unequal access to healthcare and bias in diagnosis, so the algorithm relied on healthcare spending as a proxy for need and produced biased predictions. This case highlights the importance of bridging the problem understanding chasm responsibly.
To bridge this gap, AI product developers need tools to access community-validated and structured knowledge of societal context. That’s where Societal Context Understanding Tools and Solutions (SCOUTS) comes in. SCOUTS is a research team within Google Research’s Responsible AI and Human-Centered Technology (RAI-HCT) team. Their mission is to empower developers with scalable and trustworthy societal context knowledge to create responsible AI solutions for complex societal problems.
Last year, Jigsaw, Google’s technology incubator, used SCOUTS’ structured societal context knowledge approach to mitigate bias in their Perspective API toxicity classifier. SCOUTS focuses on the problem understanding phase of AI product development, aiming to bridge the problem understanding chasm.
Bridging the problem understanding chasm requires two key ingredients: a reference frame for organizing structured societal context knowledge, and participatory methods to elicit community expertise. SCOUTS has made innovative progress in both areas.
They have developed a taxonomic reference frame for societal context in collaboration with other RAI-HCT teams, Google DeepMind, and external experts. This taxonomic model considers agents (individuals or institutions), precepts (beliefs and biases), and artifacts (language, data, technologies, societal problems, and products). Precepts are especially critical to understanding societal context. For example, in the case of racial bias in healthcare algorithms, the designers’ preconception that increased healthcare spending indicates complex healthcare needs led to biased predictions for Black patients, who face systemic barriers to healthcare access.
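To make the agents–precepts–artifacts taxonomy concrete, here is a minimal sketch of how such structured knowledge might be represented in code. The class and field names are our own illustration, not SCOUTS’ actual schema; the example instance encodes the healthcare case discussed above.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An individual or institution acting within the societal context."""
    name: str

@dataclass
class Precept:
    """A belief or bias held by an agent."""
    description: str
    held_by: Agent

@dataclass
class Artifact:
    """Language, data, a technology, a societal problem, or a product."""
    name: str
    shaped_by: list[Precept] = field(default_factory=list)

# Illustrative instance: the healthcare-algorithm example from the text.
designers = Agent("algorithm designers")
spending_proxy = Precept(
    "higher healthcare spending implies more complex healthcare needs",
    held_by=designers,
)
risk_model = Artifact("patient-selection risk model", shaped_by=[spending_proxy])
```

Structuring the knowledge this way makes the precepts behind a product explicit and queryable, rather than leaving them as unstated assumptions.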
SCOUTS also employs community-based system dynamics (CBSD) to foster collaboration and knowledge building within historically marginalized communities. CBSD helps communities articulate their complex problems and causal theories in a visual and quantitative manner. This approach is particularly useful in healthcare, where there is a lack of training data from marginalized groups. SCOUTS has collaborated with the Data 4 Black Lives community to develop causal theories that address the data gap problem, incorporating factors like cultural memory of death and trust in medical care.
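A CBSD causal theory can be sketched as a signed directed graph, where each edge records whether an increase in one factor increases or decreases another. The two factor names below ("cultural memory of death", "trust in medical care") come from the article; the rest of the chain and the edge signs are illustrative assumptions, not the community’s actual model.

```python
# A causal theory as a signed directed graph: each edge (cause, effect, sign)
# says whether an increase in `cause` increases (+1) or decreases (-1) `effect`.
causal_links = [
    ("cultural memory of death", "trust in medical care", -1),
    ("trust in medical care", "participation in data collection", +1),
    ("participation in data collection", "training data from the community", +1),
    ("training data from the community", "quality of care", +1),
]

def downstream_effect(links, start, end):
    """Multiply edge signs along the (unique) path from start to end.

    Returns +1 or -1 for the net direction of influence, or None if no path.
    """
    sign, node = 1, start
    while node != end:
        edge = next((e for e in links if e[0] == node), None)
        if edge is None:
            return None
        node, sign = edge[1], sign * edge[2]
    return sign
```

Under these assumed links, `downstream_effect(causal_links, "cultural memory of death", "quality of care")` is `-1`: the model makes quantitative the community’s theory of how historical experience propagates into the data gap and, ultimately, into care quality.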
Overall, SCOUTS is committed to integrating societal context knowledge into all phases of AI product development, starting with problem understanding. By bridging the problem understanding chasm, AI developers can create more responsible and effective solutions to complex societal problems.