The Application and Extension of the Human-Animal Team Model to Better Understand Human-Robot Interaction: Recommendations for Further Research

Author(s):  
Katelynn A. Kapalo ◽  
Elizabeth Phillips ◽  
Stephen M. Fiore

In order to create effective human-robot teams, robots must possess social capabilities that match the expectations of their human teammates. However, the ability of robots to approximate human capacities is limited by technological constraints. Human-animal teams have therefore been suggested as a suitable analog for modeling teaming between humans and non-humans. Because animals have a limited capacity to express their intentions, human-animal relationships can provide a basic framework for understanding how humans interpret information from teammates with limited social faculties. The purpose of this paper is to provide research recommendations identifying specific areas in which human-animal teams can be used to model human-robot teams, and to suggest ways of investigating this model empirically in the context of social interaction.

Author(s):  
Mark Tee Kit Tsun ◽  
Lau Bee Theng ◽  
Hudyjaya Siswoyo Jo ◽  
Patrick Then Hang Hui

This chapter summarizes the findings of a study of robotics research and applications for assisting children with disabilities between 2009 and 2013. These disabilities include impairments of motor skills, locomotion, and social interaction that are commonly attributed to children with Autistic Spectrum Disorders (ASD) and Cerebral Palsy (CP). Whereas assistive technologies for disabilities largely address the restoration of physical capabilities, children with disabilities also require dedicated rehabilitation for social interaction and mental health. The breadth of this study therefore covers existing efforts in the rehabilitation of both the physical and socio-psychological domains that involve Human-Robot Interaction. Topics covered include assisted locomotion training, passive stretching and active movement rehabilitation, upper-extremity motor function, social interactivity, therapist-mediators, active play encouragement, as well as several life-long assistive robots currently in use. The chapter concludes by drawing attention to ethical and adoption issues that may hinder the field's effectiveness.


2019 ◽  
Vol 374 (1771) ◽  
pp. 20180033 ◽  
Author(s):  
Birgit Rauchbauer ◽  
Bruno Nazarian ◽  
Morgane Bourhis ◽  
Magalie Ochs ◽  
Laurent Prévot ◽  
...  

We present a novel functional magnetic resonance imaging paradigm for second-person neuroscience. The paradigm compares a human social interaction (human–human interaction, HHI) to an interaction with a conversational robot (human–robot interaction, HRI). The social interaction consists of 1 min blocks of live bidirectional discussion between the scanned participant and the human or robot agent. A final sample of 21 participants is included in the corpus comprising physiological (blood oxygen level-dependent, respiration and peripheral blood flow) and behavioural (recorded speech from all interlocutors, eye tracking from the scanned participant, face recording of the human and robot agents) data. Here, we present the first analysis of this corpus, contrasting neural activity between HHI and HRI. We hypothesized that independently of differences in behaviour between interactions with the human and robot agent, neural markers of mentalizing (temporoparietal junction (TPJ) and medial prefrontal cortex) and social motivation (hypothalamus and amygdala) would only be active in HHI. Results confirmed significantly increased response associated with HHI in the TPJ, hypothalamus and amygdala, but not in the medial prefrontal cortex. Future analysis of this corpus will include fine-grained characterization of verbal and non-verbal behaviours recorded during the interaction to investigate their neural correlates. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.


2021 ◽  
Author(s):  
Elef Schellen ◽  
Francesco Bossi ◽  
Agnieszka Wykowska

As the use of humanoid robots proliferates, an increasing number of people may find themselves face-to-“face” with a robot in everyday life. Although there is a wealth of information available on facial social cues and how we interpret them in human-human social interaction, we cannot assume that these findings transfer flawlessly to human-robot interaction. More research on facial cues in human-robot interaction is therefore required. This study investigated deception in a human-robot interaction context, focusing on the effect that eye contact with a robot has on honesty towards that robot. In an iterative task, participants could assist a humanoid robot by providing it with correct information, or potentially secure a reward for themselves by providing it with incorrect information. Results show that participants become more honest after the robot establishes eye contact with them, but only if this occurs in response to deceptive behavior. Behavior is not influenced by eye contact if the participant is already behaving honestly. These findings support the notion that humanoid robots can be perceived as, and treated like, social agents, since the effect described here mirrors one present in human-human social interaction.


2009 ◽  
Vol 6 (3-4) ◽  
pp. 369-397 ◽  
Author(s):  
Kerstin Dautenhahn ◽  
Chrystopher L. Nehaniv ◽  
Michael L. Walters ◽  
Ben Robins ◽  
Hatice Kose-Bagci ◽  
...  

This paper provides a comprehensive introduction to the design of the minimally expressive robot KASPAR, which is particularly suitable for human–robot interaction studies. A low-cost design using off-the-shelf components was developed from a novel, multi-disciplinary viewpoint, drawing inspiration from comics design and Japanese Noh theatre. The design rationale of the robot and its technical features are described in detail. Three research studies that have used KASPAR extensively are then presented. First, we present its application in robot-assisted play and therapy for children with autism. Second, we illustrate its use in human–robot interaction studies investigating the role of interaction kinesics and gestures. Last, we describe a study in the field of developmental robotics on computational architectures based on interaction histories for robot ontogeny. The three areas differ in how the robot is operated and in its role in the social interaction scenarios. Each is introduced briefly, and examples of the results are presented. We also reflect on the specific design features of KASPAR that were important in these studies, and discuss the lessons learnt from them concerning the design of humanoid robots for social interaction. An assessment of the robot in terms of the utility of its design for human–robot interaction experiments concludes the paper.


2020 ◽  
Author(s):  
Shelly Bagchi ◽  
Murat Aksu ◽  
Megan Zimmerman ◽  
Jeremy A. Marvel ◽  
Brian Antonishek ◽  
...  

AI Magazine ◽  
2017 ◽  
Vol 37 (4) ◽  
pp. 19-31 ◽  
Author(s):  
Gabriel Skantze

When humans interact and collaborate with each other, they coordinate their turn-taking behaviors using verbal and nonverbal signals, expressed in the face and voice. If robots of the future are supposed to engage in social interaction with humans, it is essential that they can generate and understand these behaviors. In this article, I give an overview of several studies that show how humans in interaction with a humanlike robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures and hesitation sounds.


Author(s):  
Alexander Vadimovich Timofeev

The article analyzes the problem of human-robot cognitive interaction. To this end, we consider the individual and joint cognitive skills that arise in this process, propose models for their practical implementation, and situate them within a holistic, original deliberative architecture of human-robot interaction. Special attention is paid to planning tasks that take the human factor into account, as well as to the joint accomplishment of tasks in the human-robot system.


Author(s):  
Yuan Feng ◽  
Giulia Perugia ◽  
Suihuai Yu ◽  
Emilia I. Barakova ◽  
Jun Hu ◽  
...  

Engaging people with dementia (PWD) in meaningful activities is key to promoting their quality of life. Design towards a higher level of user engagement has been studied extensively within the human-computer interaction community; however, few studies extend to PWD. It is generally considered that increased richness of experience can lead to enhanced engagement. This paper therefore explores the effects of rich interaction, in terms of the role of system interactivity and multimodal stimuli, by engaging participants in context-enhanced human-robot interaction activities. The interaction with a social robot was considered context-enhanced due to the additional responsive sensory feedback from an augmented reality display. A field study was conducted in a Dutch nursing home with 16 residents. The study followed a two-by-two mixed factorial design with one within-subject variable (multimodal stimuli) and one between-subject variable (system interactivity). A mixed method of video coding analysis and observational rating scales was adopted to assess user engagement comprehensively. Results show that when an additional auditory modality was included alongside the visual-tactile stimuli, participants scored significantly higher on attitude, showed more positive behavioral engagement during the activity, and displayed a higher percentage of communications. The multimodal stimuli also promoted social interaction between participants and the facilitator. The findings provide substantial evidence for the significant role of multimodal stimuli in promoting PWD’s engagement, which could potentially be used as a motivation strategy in future research to improve emotional aspects of activity-related engagement and social interaction with the human partner.

