Robot Deception: Some Lies Accepted, Others Rejected

Summary: A new study examined how humans perceive different types of deception by robots, revealing that people accept some lies more than others. Researchers presented nearly 500 participants with scenarios where robots engaged in external, hidden, and superficial deceptions in medical, cleaning, and retail settings.

Participants disapproved most of hidden deceptions, such as a cleaning robot secretly filming, while external lies, like sparing a patient from emotional pain, were viewed more favorably. The study highlights the ethical complexities surrounding robots and deception, suggesting the need for regulation as robots become more integrated into human life.

Key Facts:

  • Hidden state deceptions, like secret filming, were the most disapproved.
  • External state deceptions, like lying to spare feelings, were more accepted.
  • The study suggests regulation is needed to protect users from robot deception.

Source: Frontiers

Honesty is the best policy… most of the time. Social norms help humans understand when we need to tell the truth and when we shouldn’t, to spare someone’s feelings or avoid harm. But how do these norms apply to robots, which are increasingly working with humans?

To understand whether humans can accept robots telling lies, scientists asked almost 500 participants to rate and justify different types of robot deception. 


“I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” said Andres Rosero, PhD candidate at George Mason University and lead author of the article in Frontiers in Robotics and AI.

 “With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”  

Three kinds of lie 

The scientists selected three scenarios reflecting situations where robots already work — medical, cleaning, and retail settings — and three different deception behaviors: external state deceptions, which lie about the world beyond the robot; hidden state deceptions, where a robot’s design conceals its capabilities; and superficial state deceptions, where a robot’s design overstates its capabilities.

In the external state deception scenario, a robot working as a caretaker for a woman with Alzheimer’s lies that her late husband will be home soon. In the hidden state deception scenario, a woman visits a house where a robot housekeeper is cleaning, unaware that the robot is also filming.

Finally, in the superficial state deception scenario, a robot working in a shop as part of a study on human-robot relations untruthfully complains of feeling pain while moving furniture, causing a human to ask someone else to take the robot’s place. 

What a tangled web we weave 

The scientists recruited 498 participants and asked each to read one of the scenarios and then answer a questionnaire. It asked participants whether they approved of the robot’s behavior, how deceptive it was, whether it could be justified, and whether anyone else was responsible for the deception. The researchers coded these responses to identify common themes and then analyzed them.

The participants disapproved most of the hidden state deception, the housecleaning robot with the undisclosed camera, which they considered the most deceptive. While they considered the external state deception and the superficial state deception to be moderately deceptive, they disapproved more of superficial state deception, where a robot pretended it felt pain. This may have been perceived as manipulative. 

Participants approved most of the external state deception, where the robot lied to a patient. They justified the robot’s behavior by saying that it protected the patient from unnecessary pain — prioritizing the norm of sparing someone’s feelings over honesty.  

The ghost in the machine 

Although participants were able to present justifications for all three deceptions — for instance, some people suggested the housecleaning robot might film for security reasons — most participants declared that the hidden state deception could not be justified.

Similarly, about half the participants responding to the superficial state deception said it was unjustifiable. Participants tended to blame these unacceptable deceptions, especially hidden state deceptions, on robot developers or owners.  

“I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities, because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” said Rosero.

“We’ve already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions.” 

However, the scientists cautioned that this research needs to be extended to experiments which could model real-life reactions better — for example, videos or short roleplays.  

“The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner,” explained Rosero.

“Vignette studies provide baseline findings that can be corroborated or disputed through further experimentation. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these robot deception behaviors.” 

About this robotics and psychology research news

Author: Angharad Brewer Gillham
Source: Frontiers
Contact: Angharad Brewer Gillham – Frontiers
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Exploratory Analysis of Human Perceptions of Social Robot Deception Behaviors” by Andres Rosero et al. Frontiers in Robotics and AI


Abstract

Exploratory Analysis of Human Perceptions of Social Robot Deception Behaviors

Introduction: 

Robots are being introduced into increasingly social environments. As these robots become more ingrained in social spaces, they will have to abide by the social norms that guide human interactions. At times, however, robots will violate norms and perhaps even deceive their human interaction partners.

This study provides some of the first evidence for how people perceive and evaluate robot deception, especially three types of deception behaviors theorized in the technology ethics literature: External state deception (cues that intentionally misrepresent or omit details from the external world: e.g., lying), Hidden state deception (cues designed to conceal or obscure the presence of a capacity or internal state the robot possesses), and Superficial state deception (cues that suggest a robot has some capacity or internal state that it lacks).

Methods: 

Participants (N = 498) were assigned to read one of three vignettes, each corresponding to one of the deceptive behavior types. Participants provided responses to qualitative and quantitative measures, which examined to what degree people approved of the behaviors, perceived them to be deceptive, found them to be justified, and believed that other agents were involved in the robots’ deceptive behavior.

Results: 

Participants rated hidden state deception as the most deceptive and approved of it the least among the three deception types. They considered external state and superficial state deception behaviors to be comparably deceptive; but while external state deception was generally approved, superficial state deception was not. Participants in the hidden state condition often implicated agents other than the robot in the deception.

Conclusion: 

This study provides some of the first evidence for how people perceive and evaluate the deceptiveness of robot deception behavior types. It found that people distinguish among the three types of deception behaviors, perceive them as differently deceptive, and approve of them to different degrees. They also see at least the hidden state deception as stemming more from the designers than from the robot itself.
