Is it me or is it you? Behavioral and electrophysiological effects of oxytocin administration on self-other integration during joint task performance

Cortex ◽  
2015 ◽  
Vol 70 ◽  
pp. 146-154 ◽  
Author(s):  
Margit I. Ruissen ◽  
Ellen R.A. de Bruijn


2021 ◽
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Madelene Alanenpää ◽  
Ginevra Castellano

Over the past few years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time, and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results show that gaze aversion during a social chat is an indicator of a robot's uncanniness, and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions reveal that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community, as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.


Author(s):  
Tiziana Casciaro ◽  
Miguel Lobo ◽  
Hendrik Wilhelm ◽  
Michael Wittland

2018 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Helen Joanne Wall ◽  
Bernhard Hommel

A central issue in the study of joint task performance has been whether co-acting individuals perform their partner's part of the task as if it were their own. The present study addressed this issue by using joint task switching. A pair of actors shared two tasks that were presented in a random order, whereby the relevant task and actor were cued on each trial. Responses produced action effects that were either shared or separate between co-actors. When co-actors produced separate action effects, switch costs were obtained within the same actor (i.e., when the same actor performed consecutive trials) but not between co-actors (i.e., when different actors performed consecutive trials), implying that actors did not perform their co-actor's part. When the same action effects were shared between co-actors, however, switch costs were also obtained between co-actors, implying that actors did perform their co-actor's part. The results indicate that shared action effects induce task-set sharing between co-acting individuals.


2019 ◽  
Author(s):  
Basil Wahn ◽  
Jill A. Dosso ◽  
Alan Kingstone

In daily life, humans constantly process information from multiple sensory modalities (e.g., vision, audition). Information across sensory modalities may (or may not) be combined to form the perception of a single event via the process of multisensory integration. Recent research suggests that performing a spatial crossmodal congruency task jointly with a partner affects multisensory integration. To date, it has not been investigated whether multisensory integration in other crossmodal tasks is also affected by performing a task jointly. To address this point, we investigated whether joint task performance also affects perceptual judgments in a crossmodal motion discrimination task and a temporal order judgment task. In both tasks, pairs of participants were presented with auditory and visual stimuli that might or might not be perceived as belonging to a single event. Each participant in a pair was required to respond to stimuli from one sensory modality only (e.g., visual stimuli only). Participants performed both individual and joint conditions. Replicating earlier multisensory integration effects, we found that participants' perceptual judgments were significantly affected by stimuli in the other modality for both tasks. However, we did not find that performing a task jointly modulated these crossmodal effects. Taking this together with earlier findings, we suggest that joint task performance affects crossmodal results in a manner dependent on how these effects are quantified (i.e., via response times or accuracy) and the specific task demands (i.e., whether tasks require processing stimuli in terms of location, motion, or timing).


2018 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Helen Joanne Wall ◽  
Bernhard Hommel

Studies on joint task performance have proposed that co-acting individuals co-represent the shared task context, which implies that actors integrate their co-actor's task components into their own task representation as if they were all their own task. Evidence for this proposal has come from results of joint tasks in which each actor is assigned a single response, so that selecting a response is equivalent to selecting an actor. The present study used joint task switching, which has previously shown switch costs on trials following the actor's own trial (intrapersonal switch costs) but not on trials that followed the co-actor's trial (interpersonal switch costs), suggesting that there is no task co-representation. We examined whether interpersonal switch costs can be obtained when action selection and actor selection are confounded, as in previous joint task studies. The present results confirmed that they can, demonstrating that switch costs occur within a single actor as well as between co-actors when there is only a single response per actor, but not when there are two responses per actor. These results indicate that task co-representation is not necessarily implied even when effects occur across co-acting individuals, and that how the task is divided between co-actors plays an important role in determining how the actors represent the divided task components.


2015 ◽  
Vol 10 (10) ◽  
pp. 1365-1372 ◽  
Author(s):  
J. de la Asuncion ◽  
C. Bervoets ◽  
M. Morrens ◽  
B. Sabbe ◽  
E. R. A. De Bruijn

1980 ◽  
Vol 51 (3) ◽  
pp. 807-812 ◽  
Author(s):  
Gene A. Berry ◽  
Richard L. Hughes ◽  
Linda D. Jackson

Spatial and sequential tasks, performed both independently and jointly, were compared for 40 college undergraduates grouped by sex and dominant hand. The two tasks were selected based on prior research suggesting that each is controlled in a different brain hemisphere. When the two tasks were performed independently, there was no handedness effect, nor any sex-hand interaction; there was an expected slight spatial superiority of males. However, when both tasks were performed simultaneously, there was a significant advantage for right-handers and again a slight advantage for males. Results were attributed to the hemispheric interference left-handers experienced on the joint task due to their less distinct hemispheric specialization.
