Artificial Intelligence in Surveys: Matching Humans in Accuracy
A recent study conducted at BYU explores the potential of artificial intelligence (AI) in survey research. The study tests the accuracy of this approach using a GPT-3 language model, which can emulate the complex relationships among the ideas, attitudes, and sociocultural contexts of human subpopulations.
Testing AI’s Voting Accuracy
In one experiment, the researchers simulated artificial personas with specific characteristics, such as race, age, ideology, and religious beliefs. They then compared the voting patterns of these personas with those of real humans in the 2012, 2016, and 2020 U.S. presidential elections. Surprisingly, there was a high correspondence between the AI and human votes.
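The general approach of conditioning a language model on persona attributes can be sketched as follows. This is an illustrative reconstruction, not the study's actual prompt wording: the attribute names, phrasing, and the idea of ending the prompt mid-sentence for the model to complete are all assumptions.

```python
# Hypothetical sketch: render a persona's attributes as a first-person
# "backstory" prompt a language model could complete. Wording and field
# names are illustrative, not taken from the study.

def build_backstory(persona: dict) -> str:
    """Turn persona attributes into a first-person backstory string."""
    lines = [
        f"Racially, I am {persona['race']}.",
        f"I am {persona['age']} years old.",
        f"Ideologically, I am {persona['ideology']}.",
        f"My religion is {persona['religion']}.",
        # Left unfinished so the model supplies the candidate's name.
        "In the 2016 presidential election, I voted for",
    ]
    return " ".join(lines)

persona = {"race": "white", "age": 54, "ideology": "conservative",
           "religion": "evangelical Christian"}
prompt = build_backstory(persona)
print(prompt)
```

The completion the model produces for such a prompt would then stand in for that persona's "vote," to be compared against real respondents with the same characteristics.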
David Wingate, a computer science professor at BYU and co-author of the study, expressed his astonishment at the model’s accuracy. He noted that the AI was not specifically trained in political science but was simply exposed to a vast amount of text from the internet. Despite this, the AI’s responses closely aligned with how people actually voted.
Similarity in Survey Responses
In another experiment, the researchers prompted artificial personas to choose responses from a list of options in an interview-style survey. They compared these AI responses with the nuanced patterns in human responses recorded in the American National Election Studies (ANES) database. The results showed a high similarity between AI and human responses.
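One simple way to quantify the kind of correspondence described above is proportion agreement between paired responses. This is an illustrative metric, not the study's actual analysis (which examined more nuanced response patterns), and the sample data below is made up:

```python
# Illustrative only: measure how often simulated responses match
# paired human responses. Not the study's actual methodology or data.

def agreement(simulated: list, human: list) -> float:
    """Fraction of paired responses that are identical."""
    if len(simulated) != len(human):
        raise ValueError("response lists must be the same length")
    matches = sum(s == h for s, h in zip(simulated, human))
    return matches / len(simulated)

# Made-up example pairs for five matched personas/respondents.
human_votes     = ["Trump", "Clinton", "Clinton", "Trump", "Clinton"]
simulated_votes = ["Trump", "Clinton", "Trump",   "Trump", "Clinton"]
print(agreement(simulated_votes, human_votes))  # 4 of 5 match -> 0.8
```

Richer comparisons would look beyond simple agreement, for example at whether the simulated responses reproduce the correlations among attitudes seen within human subgroups.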
Implications for Research and Surveying
The findings hold exciting prospects for researchers, marketers, and pollsters. AI can play a valuable role in crafting better survey questions, making them more accessible and representative. It can also simulate hard-to-reach populations and be used for pre-testing surveys and messaging. By augmenting human abilities this way, AI could make the work of understanding people faster and cheaper across many fields of study.
However, researchers also raise concerns about the implications of AI’s capabilities. Questions arise regarding the extent of AI’s knowledge, its impact on different populations, and the potential for misuse by scammers and fraudsters.
Defining Ethical Boundaries
The study proposes a set of criteria that future researchers can use to assess the accuracy of AI models in different subject areas. Nevertheless, the researchers emphasize that surveying artificial personas should not replace the need to survey real people. They call for collaboration among academics and experts to establish ethical boundaries in AI surveying related to social science research.
In conclusion, AI has the potential to enhance surveys and research, unlocking new capabilities in understanding human behavior. Alongside these expected benefits, however, the accuracy, biases, and ethical concerns associated with AI implementation still need to be addressed.