Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents

2020 ◽  
Vol 117 (12) ◽  
pp. 6370-6375 ◽  
Author(s):  
Margaret L. Traeger ◽  
Sarah Strohkorb Sebo ◽  
Malte Jung ◽  
Brian Scassellati ◽  
Nicholas A. Christakis

Social robots are becoming increasingly influential in shaping the behavior of the humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot–human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect of modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
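The "equality of conversation" finding can be illustrated with a Gini-style index over per-member speaking time. This is a hedged sketch: the function name, the example values, and the choice of the Gini coefficient as the equality measure are assumptions for illustration, not the paper's exact metric.

```python
def gini(values):
    """Gini coefficient over speaking times: 0 = perfectly equal,
    approaching 1 = one member dominates the conversation."""
    values = sorted(values)
    n = len(values)
    total = sum(values)
    if total == 0:
        return 0.0
    # Standard rank-weighted formulation of the Gini coefficient.
    weighted = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical speaking times (minutes) for the three human group members.
equal = gini([10, 10, 10])   # evenly distributed conversation -> 0.0
skewed = gini([25, 4, 1])    # one member dominates -> higher inequality
```

A lower index for the vulnerable-robot groups than for the control groups would correspond to the "somewhat more equal" distribution the abstract reports.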


2011 ◽  
Vol 103 ◽  
pp. 513-517
Author(s):  
Zi Qing Ye ◽  
Xiao Yi Yu

In order to find optimal policies for governing agent societies, artificial agents have been deployed to simulate social and economic phenomena. However, as the complexity of agents' internal behaviors and social interactions increases, modeling social behavior and deriving optimal policies in closed mathematical form becomes intractable. In this paper, a repeated-evaluation genetic algorithm is used to find optimal policies for deterring criminals, reducing the social cost of crime in an artificial society characterized by multiple equilibria and noisy parameters. Sampling evaluation is used to score every candidate solution. Experimental results show that the genetic algorithm quickly finds optimal solutions.
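The core idea of a repeated-evaluation (sampling-evaluation) genetic algorithm can be sketched as follows. Everything here is an assumption for illustration: the one-dimensional "deterrence policy" parameter, the noisy `social_cost` objective with its stand-in optimum, and all hyperparameter values stand in for the paper's unspecified crime-cost simulation.

```python
import random

def social_cost(policy, noise=0.2):
    # Hypothetical noisy objective: quadratic cost with a known
    # optimum at policy = 0.7, plus Gaussian measurement noise.
    return (policy - 0.7) ** 2 + random.gauss(0, noise)

def sampled_fitness(policy, n_samples=50):
    # Repeated evaluation: average several noisy samples so that
    # selection is not misled by a single lucky or unlucky draw.
    return sum(social_cost(policy) for _ in range(n_samples)) / n_samples

def evolve(pop_size=30, generations=40, mutation=0.05, seed=1):
    random.seed(seed)
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=sampled_fitness)   # lower cost = fitter
        parents = scored[: pop_size // 2]           # truncation selection
        # Refill the population with mutated copies of the parents,
        # clamped to the valid policy range [0, 1].
        pop = parents + [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, mutation)))
            for _ in range(pop_size - len(parents))
        ]
    return min(pop, key=sampled_fitness)

best_policy = evolve()
```

Averaging over repeated samples is what lets selection operate reliably in a noisy environment; with single evaluations, noise of this magnitude would swamp the fitness differences between nearby policies.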


2021 ◽  
pp. 17-27
Author(s):  
Wolf Singer

This chapter identifies the differences between natural and artificial cognitive systems. Benchmarking robots against brains may suggest that organisms and robots both need to possess an internal model of the restricted environment in which they act, and both need to adjust their actions to the conditions of the respective environment in order to accomplish their tasks. However, the computational strategies for coping with these challenges differ between natural and artificial systems. Many specifically human qualities cannot be deduced from the neuronal functions of individual brains alone but owe their existence to cultural evolution. Social interactions between agents endowed with the cognitive abilities of humans generate immaterial realities, addressed as social or cultural realities. Intentionality, morality, responsibility, and certain aspects of consciousness such as the qualia of subjective experience belong to the immaterial dimension of social realities. It is premature to discuss whether artificial systems can acquire functions that we consider intentional and conscious, or whether artificial agents can be considered moral agents with responsibility for their actions.


2018 ◽  
Author(s):  
Ruud Hortensius ◽  
Emily S. Cross

Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape the behavioural and brain mechanisms supporting social interactions between humans and artificial agents. We review how the visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We synthesize these findings into an integrative theoretical account based on the "like me" hypothesis, and discuss the key role played by the Theory-of-Mind network, especially the temporoparietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long-term interactions with artificial agents on the behavioural and brain mechanisms of attributing socialness to these agents.


2016 ◽  
Vol 371 (1686) ◽  
pp. 20150075 ◽  
Author(s):  
Emily S. Cross ◽  
Richard Ramsey ◽  
Roman Liepelt ◽  
Wolfgang Prinz ◽  
Antonia F. de C. Hamilton

Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature demonstrating that cortical networks associated with person perception, action observation, and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person-perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self–other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of preconceived beliefs while optimizing human-like design.

