
The Impact of Advanced AI Systems on Trust in Communication: A Study

AI’s Impact on Trust

As AI technology continues to advance, it raises concerns about the trustworthiness of our interactions. Researchers from the University of Gothenburg have examined how sophisticated AI systems affect our trust in the people we communicate with.

Suspecting AI: The Problem of Trust

In one scenario, a potential scammer believes he is speaking with an elderly man, when in reality, he is interacting with a computer system that uses pre-recorded loops to communicate. The scammer spends a significant amount of time trying to deceive the “man” by patiently listening to his bewildering and repetitive stories. According to Oskar Lindwall, a communication professor at the University of Gothenburg, it often takes a while for people to realize that they are conversing with an AI system.

Examining Trust Issues

Collaborating with informatics professor Jonas Ivarsson, Lindwall delves into this subject in their article “Suspicious Minds: The Problem of Trust and Conversational Agents.” The article explores how individuals interpret and respond to situations where one of the participants might be an AI agent. The authors emphasize the adverse effects of harboring suspicion towards others, such as the damage it can cause to relationships.

Ivarsson illustrates this with an example of a romantic relationship where trust issues arise, leading to jealousy and an increased tendency to look for signs of deception. The authors argue that when individuals can’t fully trust their conversation partner’s intentions and identity, they may become excessively suspicious even without valid reasons.

AI’s Human-Like Features and Their Consequences

During their study, the researchers found that certain human behaviors were misinterpreted as signs that the speaker was actually a robot, even in interactions between two humans.

They also suggest that AI’s increasing resemblance to humans is driven by a prevailing design perspective. While this might be appealing in some cases, it can also create problems, especially when it is unclear who or what we are communicating with. Ivarsson questions whether AI systems should have such human-like voices, since these voices create a sense of intimacy and lead people to form impressions based on the voice alone.

For instance, in the case of the scammer calling the “elderly man,” the fraud is exposed only after a long time, because the human voice is so believable and the confused behavior is attributed to old age. Once an AI has a voice, we infer attributes like gender, age, and socio-economic background, making it harder to tell whether we are conversing with a computer or a human.

Promoting Transparency in AI Design

The researchers propose developing AI systems with well-functioning and eloquent voices that are still distinctly synthetic, increasing transparency about their non-human nature.

The Impact on Communication

What is at stake in communication is not only the risk of deception but also the building of relationships and shared meaning. Not knowing whether one is conversing with a human or a machine affects this side of communication as well. While it may not matter in some situations, such as cognitive behavioral therapy, other forms of therapy that depend on a stronger human connection could suffer negative consequences.

To gather data, Ivarsson and Lindwall analyzed conversations and audience feedback on YouTube. They studied three types of interactions: a robot calling a person to schedule a hair appointment without the person knowing they were speaking to a robot, a person calling another person for the same purpose, and telemarketers being transferred to a computer system with pre-recorded speech.

