An EEG investigation on planning human-robot handover tasks

2020 ◽  
Author(s):  
Sara Cooper ◽  
Stuart Gow ◽  
Samuel F. P. Fensome ◽  
Mauro Dragone ◽  
Dimitrios Kourtis

Human-robot joint action is a key requirement in many advanced robotic applications, where robots are expected not only to work alongside humans but also to collaborate with them in the performance of physical tasks. Robots are already programmed to model and predict human actions in order to ensure smooth collaboration and overall task efficiency. However, little is known about how humans represent and account for a robot's actions as part of their own plans. This paper presents a first joint psychological and HRI user study designed to answer this question in the context of human-robot handover scenarios. Our analysis showed that the participants had a positive user experience of the interaction and adopted gaze patterns similar, to a large extent, to those observed in human-to-human handover tasks. The EEG analysis suggests that, compared to solo action, the human participants were in a state of higher motor readiness when they prepared to hand over the object to the robot, either because they represented the robot's action in advance or because they anticipated that passing the object to the robot would be a more effortful action, thus highlighting the increased demands of planning a human-to-robot interaction. Our findings highlight the value of gaze as a positive method of non-verbal communication in HRI and provide new insights into the neural mechanisms that allow a person to plan an effective interaction with a robot.
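Motor readiness of the kind described here is usually quantified from slow movement-preceding potentials (the readiness potential) over central electrodes. The snippet below is a minimal sketch of such a solo-vs-handover contrast using the open-source MNE-Python library; the file name, trigger codes, epoch windows, and electrode choice are illustrative assumptions, not the authors' actual pipeline.

```python
import mne

# Hypothetical recording and event codes; not the study's actual data or analysis.
raw = mne.io.read_raw_fif("participant01_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)            # keep slow motor-readiness components

events = mne.find_events(raw)
event_id = {"solo": 1, "handover": 2}          # assumed trigger codes

# Epoch relative to movement onset; readiness builds up over the preceding seconds.
epochs = mne.Epochs(raw, events, event_id, tmin=-2.0, tmax=0.5,
                    baseline=(-2.0, -1.8), preload=True)

evokeds = {cond: epochs[cond].average() for cond in event_id}

# Compare the pre-movement negativity at a central electrode (e.g., Cz).
mne.viz.plot_compare_evokeds(evokeds, picks="Cz")
```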


Robotics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 51
Author(s):  
Misbah Javaid ◽  
Vladimir Estivill-Castro

Typically, humans interact with a humanoid robot with apprehension. This lack of trust can seriously affect the effectiveness of a team of robots and humans. We can create effective interactions that generate trust by augmenting robots with an explanation capability, where explanations provide justification and transparency for the robot's decisions. To demonstrate such effective interaction, we tested this in an interactive, partial-information, game-playing environment that requires team collaboration, using the game Spanish Domino. We partnered a robot with a human to form a pair, and this team opposed a team of two humans. We performed a user study with sixty-three human participants in different settings, investigating the effect of the robot's explanations on the humans' trust and perception of the robot's behaviour. Our explanation-generation mechanism produces natural-language sentences that translate the decision taken by the robot into human-understandable terms. We video-recorded all interactions to analyse factors such as the participants' relational behaviours with the robot, and we also used questionnaires to measure the participants' explicit trust in the robot. Overall, our main results demonstrate that explanations enhanced the participants' understanding of the robot's decisions, as we observed a significant increase in the participants' level of trust in their robotic partner. These results suggest that explanations, stating the reason(s) for a decision, combined with the transparency of the decision-making process, facilitate collaborative human–humanoid interactions.
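One common way to translate a game decision into a human-understandable sentence is a template-based generator. The sketch below is a hypothetical, heavily simplified stand-in for such a mechanism; the move representation, rationale labels, and templates are invented for illustration and are not the paper's actual system.

```python
from dataclasses import dataclass

# Hypothetical, simplified template-based explainer for a domino-playing robot.
@dataclass
class Move:
    tile: tuple          # e.g. (6, 4)
    end: str             # which end of the line the tile was played on
    reason: str          # machine-readable decision rationale

TEMPLATES = {
    "block_opponent": "I played {tile} on the {end} end to block the opposing team.",
    "shed_high_pips": "I played {tile} on the {end} end to get rid of a high-value tile.",
    "only_legal_move": "I played {tile} on the {end} end because it was my only legal move.",
}

def explain(move: Move) -> str:
    """Translate the robot's decision into a natural-language justification."""
    template = TEMPLATES.get(move.reason, "I played {tile} on the {end} end.")
    return template.format(tile=move.tile, end=move.end)

print(explain(Move(tile=(6, 4), end="left", reason="block_opponent")))
```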


2015 ◽  
Vol 9 (4) ◽  
Author(s):  
Songpo Li ◽  
Xiaoli Zhang ◽  
Fernando J. Kim ◽  
Rodrigo Donalisio da Silva ◽  
Diedra Gustafson ◽  
...  

Laparoscopic robots have been widely adopted in modern medical practice. However, explicitly interacting with these robots may increase the physical and cognitive load on the surgeon. An attention-aware robotic laparoscope system has been developed to free the surgeon from the technical limitations of visualization through the laparoscope. This system can implicitly recognize the surgeon's visual attention by interpreting the surgeon's natural eye movements using fuzzy logic, and then automatically steer the laparoscope to focus on that viewing target. Experimental results show that this system can make surgeon–robot interaction more effective and intuitive, and that it has the potential to make the execution of the surgery smoother and faster.
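As a rough illustration of how natural eye movements can be mapped to a steering decision with fuzzy logic: the abstract does not give the system's rule base or membership functions, so the thresholds, inputs, and the single rule below are purely illustrative assumptions.

```python
# Hypothetical fuzzy attention estimator; not the system described in the paper.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def attention_level(fixation_ms, gaze_dispersion_deg):
    # Fuzzify the inputs.
    long_fixation = tri(fixation_ms, 300, 800, 1500)
    stable_gaze = tri(gaze_dispersion_deg, -1.0, 0.0, 2.0)
    # Single illustrative rule: attention is high when the fixation is long AND stable.
    return min(long_fixation, stable_gaze)

if attention_level(fixation_ms=650, gaze_dispersion_deg=0.8) > 0.5:
    print("Steer laparoscope toward the current gaze target")
```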


Author(s):  
Mauro Dragone ◽  
Joe Saunders ◽  
Kerstin Dautenhahn

Abstract Enabling robots to operate seamlessly as part of smart spaces is an important and long-standing challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving user acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer and, to a limited degree, the final user(s) control over personalization and product customisation features. On the other hand, technologically driven initiatives build impersonal but intelligent systems that are able to pro-actively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction, and may thus lead to poor user experience and user acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares the different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities.


2013 ◽  
Vol 14 (3) ◽  
pp. 390-418 ◽  
Author(s):  
Tian Xu ◽  
Hui Zhang ◽  
Chen Yu

When humans address multiple robots with informative speech acts (Clark & Carlson 1982), their cognitive resources are shared between all the participating robot agents. At each moment, the user's behavior is determined not only by the actions of the robot they are directly gazing at, but is also shaped by the behaviors of all the other robots in the shared environment. We define cooperative behavior as the actions performed by the robots that are not capturing the user's direct attention. In this paper, we are interested in how human participants adjust and coordinate their own behavioral cues when the robot agents perform different cooperative gaze behaviors. A novel gaze-contingent platform was designed and implemented, in which the robots' behaviors were triggered by the participant's attentional shifts in real time. Results showed that the human participants were highly sensitive to the different cooperative gaze behaviors performed by the robot agents.

Keywords: human-robot interaction; multi-robot interaction; multiparty interaction; eye gaze cue; embodied conversational agent
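A gaze-contingent platform of this kind essentially closes a loop between the eye tracker and the robots' behaviour controllers. The sketch below is a minimal, hypothetical version of that loop; the eye-tracker and robot interfaces are stand-in stubs, not the platform used in the study.

```python
import time

# Hypothetical gaze-contingent loop: the robot that is NOT currently being looked
# at performs a cooperative gaze behaviour toward the user.

class StubEyeTracker:
    def sample(self):
        return 0.3                       # normalized horizontal gaze coordinate in [0, 1]

class StubRobot:
    def __init__(self, name):
        self.name = name
    def look_at_user(self):
        print(f"{self.name}: cooperative gaze toward user")

def attended_robot(gaze_x):
    """Map the gaze coordinate to the robot currently being looked at."""
    return "robot_left" if gaze_x < 0.5 else "robot_right"

def run(eye_tracker, robots, n_steps=3, hz=30):
    for _ in range(n_steps):
        target = attended_robot(eye_tracker.sample())
        for name, robot in robots.items():
            if name != target:
                robot.look_at_user()     # non-attended robot reacts to the attentional shift
        time.sleep(1.0 / hz)

run(StubEyeTracker(), {"robot_left": StubRobot("robot_left"),
                       "robot_right": StubRobot("robot_right")})
```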


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257378
Author(s):  
Fernanda Dantas Bueno ◽  
André Mascioli Cravo

Studies investigating the neural mechanisms of time perception often measure brain activity while participants perform a temporal task. However, several of these studies are based exclusively on tasks in which time is relevant, making it hard to dissociate activity related to decisions about time from other task-related patterns. In the present study, human participants performed a temporal or a color discrimination task on visual stimuli. In different blocks, participants were informed which magnitude they would have to judge either before or after the presentation of the two stimuli (S1 and S2). Our behavioral results showed, as expected, that performance was better when participants knew beforehand which magnitude they would judge. Electrophysiological (EEG) data were analysed using Linear Discriminant Contrasts (LDC) and a Representational Similarity Analysis (RSA) approach to investigate whether and when information about time and color was encoded. During the presentation of S1, we did not find consistent differences in EEG activity as a function of the task. During S2, on the other hand, we found that temporal and color information was encoded in a task-relevant manner. Taken together, our results suggest that task goals strongly modulate decision-related information in EEG activity.
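At its core, RSA on EEG compares a time-resolved neural dissimilarity matrix against a model dissimilarity matrix. The sketch below illustrates this on synthetic data; it uses a simple correlation distance in place of the cross-validated Linear Discriminant Contrasts used in the study, and the data shapes and the duration-based model RDM are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Synthetic stand-in data: (n_conditions, n_channels, n_times).
rng = np.random.default_rng(0)
n_conditions, n_channels, n_times = 8, 64, 200
X = rng.standard_normal((n_conditions, n_channels, n_times))

# One neural representational dissimilarity matrix (condensed form) per time point.
rdms = np.stack([pdist(X[:, :, t], metric="correlation") for t in range(n_times)])

# A model RDM coding, e.g., differences in stimulus duration between conditions.
durations = np.linspace(0.4, 1.2, n_conditions)
model_rdm = pdist(durations[:, None], metric="euclidean")

# Time course of model fit: rank correlation between neural and model RDMs.
fit = np.array([spearmanr(rdms[t], model_rdm).correlation for t in range(n_times)])
```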


2020 ◽  
Author(s):  
Lukas Lengersdorff ◽  
Isabella Wagner ◽  
Claus Lamm

Humans learn quickly which actions cause them harm. As social beings, we also need to learn to avoid actions that hurt others. It is currently unknown whether humans are as good at learning to avoid harming others (prosocial learning) as they are at learning to avoid harming themselves (self-relevant learning). Moreover, it remains unclear how the neural mechanisms of prosocial learning differ from those of self-relevant learning. In this fMRI study, 96 male human participants learned to avoid painful stimuli either for themselves or for another individual. We found that participants performed more optimally when learning for the other than for themselves. Computational modeling revealed that this could be explained by an increased sensitivity to the subjective values of choice alternatives during prosocial learning. Increased value sensitivity was further associated with empathic traits. At the neural level, higher value sensitivity during prosocial learning was associated with stronger engagement of the ventromedial prefrontal cortex (VMPFC) during valuation. Moreover, the VMPFC exhibited higher connectivity with the right temporoparietal junction during prosocial, compared to self-relevant, choices. Our results suggest that humans are particularly adept at learning to protect others from harm. This ability appears to be implemented by neural mechanisms overlapping with those supporting self-relevant learning, but with the additional recruitment of structures associated with the social brain. Our findings contrast with recent proposals that humans are egocentrically biased when learning to obtain monetary rewards for themselves or others. Prosocial tendencies may thus trump the egocentric bias in learning when another person's physical integrity is at stake.
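"Value sensitivity" in models of this kind is typically the inverse-temperature parameter of a softmax choice rule layered on a simple prediction-error learner. The sketch below shows the general idea on simulated choices; the model structure, parameter values, and pain probabilities are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Hypothetical Rescorla-Wagner learner with a softmax choice rule: higher beta
# (value sensitivity) makes choices track learned values more closely.
def simulate(beta, alpha=0.3, n_trials=200, p_pain=(0.2, 0.8), seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                                  # expected "pain" value per option
    pain_count = 0
    for _ in range(n_trials):
        p = np.exp(-beta * q) / np.exp(-beta * q).sum()   # prefer low expected pain
        choice = rng.choice(2, p=p)
        pain = rng.random() < p_pain[choice]
        q[choice] += alpha * (pain - q[choice])          # prediction-error update
        pain_count += pain
    return pain_count

# Higher value sensitivity (as reported for prosocial learning) yields fewer painful outcomes.
print("low beta :", simulate(beta=1.0))
print("high beta:", simulate(beta=5.0))
```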


2021 ◽  
Vol 12 (1) ◽  
pp. 402-422
Author(s):  
Kheng Lee Koay ◽  
Matt Webster ◽  
Clare Dixon ◽  
Paul Gainer ◽  
Dag Syrdal ◽  
...  

Abstract When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are issues related to behaviour verification, behaviour interference and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider the issue of behaviour interference caused by situations where newly taught robot behaviours may affect, or be affected by, existing behaviours, and thus those behaviours will not, or might not ever, be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed as well as the formal behaviour checking methods used. The online use of behaviour checking, based on static analysis of behaviours during the operation of the robot, is demonstrated and evaluated in a user study. We conducted a proof-of-concept human–robot interaction study with an autonomous, multi-purpose robot operating within a smart home environment. Twenty participants individually taught the robot behaviours according to instructions they were given, some of which caused interference with other behaviours. A mechanism for detecting behaviour interference gave participants feedback and suggestions on how to resolve those conflicts. We assessed the participants' views on detected interference as reported by the behaviour teaching system. Results indicate that interference warnings given to participants during teaching promoted an understanding of the issue. We did not find a significant influence of participants' technical background. These results highlight a promising path towards verification and validation of assistive home companion robots that allow end-user personalisation.
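To make the notion of interference concrete, one simple case is a taught behaviour that can never run because a higher-priority behaviour always fires on a subset of its trigger conditions. The sketch below checks for exactly that pattern; it is a deliberately simplified, hypothetical stand-in for the formal behaviour-checking methods used in the study, and the behaviour names and conditions are invented.

```python
from dataclasses import dataclass

@dataclass
class Behaviour:
    name: str
    triggers: frozenset   # boolean conditions that must all hold for the behaviour to fire
    priority: int

def shadowed_by(low: Behaviour, high: Behaviour) -> bool:
    """A behaviour is shadowed if a higher-priority one triggers on a subset of its conditions."""
    return high.priority > low.priority and high.triggers <= low.triggers

behaviours = [
    Behaviour("remind_medication", frozenset({"time_is_evening"}), priority=2),
    Behaviour("evening_tv_alert", frozenset({"time_is_evening", "user_in_living_room"}), priority=1),
]

# Warn the user, during teaching, about behaviours that may never execute.
for b in behaviours:
    for other in behaviours:
        if b is not other and shadowed_by(b, other):
            print(f"Warning: '{b.name}' may never execute; it is shadowed by '{other.name}'.")
```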


2021 ◽  
Vol 12 (1) ◽  
pp. 258
Author(s):  
Marek Čorňák ◽  
Michal Tölgyessy ◽  
Peter Hubinský

The concept of “Industry 4.0” relies heavily on the utilization of collaborative robotic applications. As a result, the need arises for effective, natural, and ergonomic interfaces, as more workers will be required to work with robots. Designing and implementing natural forms of human–robot interaction (HRI) is key to ensuring efficient and productive collaboration between humans and robots. This paper presents a gestural framework for controlling a collaborative robotic manipulator using pointing gestures. The core principle lies in the user’s ability to send the robot’s end effector to the location towards which they point with their hand. The main idea is derived from the concept of so-called “linear HRI”. The framework utilizes a UR5e collaborative robotic arm and the state-of-the-art Leap Motion hand-tracking sensor; the user is not required to wear any equipment. The paper describes the framework’s core method and provides the necessary mathematical background. An experimental evaluation of the method is provided, and the main influencing factors are identified. A unique collaborative robotic workspace called the Complex Collaborative HRI Workplace (COCOHRIP) was designed around the gestural framework to evaluate the method and provide the basis for the future development of HRI applications.
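The geometric core of such pointing-based ("linear") control is a ray-plane intersection: extend a ray from the tracked hand along the pointing direction and intersect it with the workspace plane to obtain the end-effector target. The sketch below shows that computation only; calibration, coordinate frames, and the actual UR5e and Leap Motion interfaces are omitted, and the coordinates are illustrative assumptions.

```python
import numpy as np

def pointing_target(hand_pos, pointing_dir, plane_point, plane_normal):
    """Intersect the pointing ray with the workspace plane (all in the robot base frame)."""
    d = np.asarray(pointing_dir, dtype=float)
    d /= np.linalg.norm(d)
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-6:
        return None                      # ray parallel to the plane: no valid target
    t = np.dot(plane_normal, np.asarray(plane_point) - np.asarray(hand_pos)) / denom
    return None if t < 0 else np.asarray(hand_pos) + t * d

# Illustrative values: hand 0.5 m above a table whose surface is the z = 0 plane.
target = pointing_target(hand_pos=[0.2, 0.1, 0.5],
                         pointing_dir=[0.3, 0.1, -1.0],
                         plane_point=[0.0, 0.0, 0.0],
                         plane_normal=[0.0, 0.0, 1.0])
print("send end effector to:", target)
```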

