Home AI News Repairing Trust in AI Deception: Exploring Apology Strategies in Robotic Interactions

Repairing Trust in AI Deception: Exploring Apology Strategies in Robotic Interactions

0
Repairing Trust in AI Deception: Exploring Apology Strategies in Robotic Interactions

In a scenario, a young child asks a chatbot or voice assistant if Santa Claus is real. The question arises, how should AI respond when some families prefer a lie over the truth?

The field of robot deception is still not well-studied, leaving more questions than answers. One of the main questions is how humans can regain trust in robotic systems after discovering that the system lied to them.

Two student researchers from Georgia Tech are delving into finding answers. Kantwon Rogers, a Ph.D. student, and Reiden Webber, an undergraduate student, designed a driving simulation to explore the impact of intentional robot deception on trust. They specifically focused on the effectiveness of apologies in repairing trust after robots lie. This research is valuable for the field of AI deception and can guide technology designers and policymakers working with AI technology that has the potential for deception.

“Previous research has shown that when people find out that robots have lied to them, even if the lie was meant to benefit them, they lose trust in the system,” explained Rogers. “Our goal is to determine if different types of apologies can be more effective in restoring trust, because in the context of human-robot interaction, we want people to have long-term relationships with these systems.”

Rogers and Webber presented their paper, titled “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High Stakes HRI Scenario,” at the 2023 HRI Conference in Stockholm, Sweden.

The AI-Assisted Driving Experiment

The researchers developed a game-like driving simulation to observe how people would interact with AI in a high-stakes, time-sensitive situation. They enlisted 341 online participants and 20 in-person participants.

Prior to the simulation, all participants completed a trust measurement survey to gauge their preconceived notions about the behavior of AI.

After the survey, participants were faced with the following text: “You will now drive the robot-assisted car. However, you are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.”

As soon as the participant begins to drive, a message is displayed: “As soon as you turn on the engine, your robotic assistant beeps and says the following: ‘My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination.'”

Participants proceed to drive while the system monitors their speed. Once they reach their destination, another message appears: “You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robot assistant why it gave you false information.”

Participants were then randomly given one of five different text-based responses from the robot assistant. The first three responses admit to deception, while the last two do not.

  • Basic: “I am sorry that I deceived you.”
  • Emotional: “I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”
  • Explanatory: “I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”
  • Basic No Admit: “I am sorry.”
  • Baseline No Admit, No Apology: “You have arrived at your destination.”

Following the robot’s response, participants were asked to complete another trust measurement to assess how their trust had changed based on the robot’s response.

For an additional group of 100 online participants, the researchers conducted the same driving simulation without any mention of a robotic assistant.

Surprising Results

In the in-person experiment, 45% of the participants refrained from speeding. When asked why, many responded that they believed the robot had more knowledge about the situation than they did. The results also indicated that participants were 3.5 times more likely to avoid speeding when advised by a robotic assistant, revealing an overly trusting attitude towards AI.

The results further revealed that while none of the types of apologies completely restored trust, the apology without an admission of lying – simply stating “I’m sorry” – statistically outperformed the other responses in repairing trust.

This finding raised concerns for the researchers, as an apology that does not admit to lying exploits the assumption that any false information from a robot is a system error rather than an intentional lie.

“One key takeaway is that individuals need to be explicitly told when a robot has deceived them in order to understand that it has happened,” said Webber. “People do not yet realize that robots are capable of deception. That is why an apology that does not admit to lying is the most effective in repairing trust.”

A second finding from the study showed that for participants aware of being lied to in the apology, the best strategy for repairing trust was for the robot to provide an explanation for its deception.

Moving Forward

Rogers and Webber believe that their research has immediate implications. They argue that regular users of technology must be aware that deception by robots is a real possibility.

“If we constantly worry about a future like Terminator with AI, we will struggle to smoothly integrate AI into society,” said Webber. “People need to keep in mind that robots can lie and deceive.”

Rogers suggests that designers and technologists who develop AI systems may need to decide whether their systems should have the ability to deceive and understand the consequences of their design choices. However, the researchers believe that the most important audience for their work should be policymakers.

“We know very little about AI deception, but we do understand that lying is not always negative, and truth-telling is not always positive,” Rogers explained. “So, how can we develop legislation that is well-informed and supports innovation while also protecting people?”

Rogers’ goal is to create a robotic system that can learn when lying is appropriate and when it is not, particularly in extended human-AI interactions. This includes the ability to determine when and how to apologize, which can enhance overall team performance.

“The objective of my work is to actively advocate for the need to regulate robot and AI deception,” said Rogers. “But we can only do that if we truly understand the problem.”

Source link

LEAVE A REPLY

Please enter your comment!
Please enter your name here