THE WAY WE MAKE EACH OTHER FEEL: RELATIONAL AFFECT AND JOINT TASK PERFORMANCE

Author(s):  
Tiziana Casciaro ◽  
Miguel Lobo ◽  
Hendrik Wilhelm ◽  
Michael Wittland
2020 ◽  
Vol 24 (1) ◽  
pp. 340-360
Author(s):  
Carina da Silva Santos ◽  
Ingrid Finger

The present study aimed to investigate the relationship between bilingualism and numerical cognition, more specifically, how English-Portuguese bilinguals solve simple mathematical problems presented in different formats (digits, English, and Portuguese) and whether their language background has any effect on this behavior. The main results suggest that bilinguals are faster and more accurate both in solving mathematical problems presented in digit format and in solving problems presented in written format when the language of the stimuli was the one in which they learned basic arithmetic. In addition, the participants' language background had no significant effect on their task performance.


2021 ◽  
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Madelene Alanenpää ◽  
Ginevra Castellano

Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time, and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this question by involving participants in three interaction sessions separated by multiple days of zero exposure. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and their engagement with it and with the joint task using questionnaires. Results show that gaze aversion in a social chat is an indicator of a robot's uncanniness, and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, analyses of gaze patterns in repeated interactions reveal that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community, as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat, and of their engagement and task performance in a joint task.
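
As a rough illustration of how such gaze measures can be derived, the Python sketch below aggregates annotated eye-tracker fixations into per-target gaze proportions. The target labels, field layout, and numbers are illustrative assumptions, not the authors' pipeline.

    from collections import defaultdict

    def gaze_proportions(fixations):
        # fixations: iterable of (target, duration_s) pairs, where target is
        # a label such as "robot" or "shared_object" assigned when annotating
        # the eye-tracker's scene video.
        totals = defaultdict(float)
        for target, duration in fixations:
            totals[target] += duration
        grand_total = sum(totals.values())
        return {t: d / grand_total for t, d in totals.items()}

    # Toy session: the share of time on the object of shared attention is
    # the quantity the study found most predictive of joint-task engagement.
    session = [("robot", 2.4), ("shared_object", 7.9), ("elsewhere", 1.1)]
    print(gaze_proportions(session)["shared_object"])  # about 0.69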


2019 ◽  
Vol 3 (1) ◽  
pp. 53-80
Author(s):  
Zhan Wang ◽  
Peter Skehan ◽  
Gaowei Chen

This study investigated L2 speaking performance under three types of task-related time pressure: a control (Control) group narrated a video at its normal playing rate, an online planning (OP) group narrated the video at a slowed playing rate, and a hybrid online planning (HOP) group combined online planning (a slowed playing rate) with content preparedness (through pre-watching the video). The results show that the HOP group outperformed the Control group in speech accuracy and complexity, suggesting that this form of online planning, combined with content preparedness, helps improve speech accuracy and complexity. In addition, of all the performance measures, only speech accuracy was significantly predicted by L2 proficiency. The implications of these findings for language teaching and learning are discussed, particularly their relevance to the importance of a Conceptualiser-Formulator balance and to how proficiency can best be mobilised within task performance.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mohammad R. Saeedpour-Parizi ◽  
Shirin E. Hassan ◽  
Ariful Azad ◽  
Kelly J. Baute ◽  
Tayebeh Baniasadi ◽  
...  

This study examined how people choose their path to a target, and the visual information they use for path planning. Participants avoided stepping outside an avoidance margin between a stationary obstacle and the edge of a walkway as they walked to a bookcase and picked up a target from different locations on a shelf. We provided an integrated explanation for path selection by combining avoidance margin, deviation angle, and distance to the obstacle. We found that the combination of right and left avoidance margins accounted for 26%, deviation angle accounted for 39%, and distance to the obstacle accounted for 35% of the variability in decisions about the direction taken to circumvent an obstacle on the way to a target. Gaze analysis findings showed that participants directed their gaze to minimize the uncertainty involved in successful task performance and that gaze sequence changed with obstacle location. In some cases, participants chose to circumvent the obstacle on a side for which the gaze time was shorter, and the path was longer than for the opposite side. Our results of a path-selection judgment test showed that the threshold for participants abandoning their preferred side for circumventing the obstacle was a target location of 15 cm to the left of the bookcase shelf center.
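
The reported percentages amount to a decomposition of decision variability over three spatial predictors. The sketch below shows one generic way to estimate such relative contributions, using a logistic regression plus permutation importance on synthetic data; it is a hedged stand-in under stated assumptions, not the authors' actual analysis, and the data and weights are fabricated for demonstration only.

    import numpy as np
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    # Columns: combined avoidance margin, deviation angle, obstacle distance
    # (synthetic stand-ins for the paper's predictors).
    X = rng.normal(size=(n, 3))
    # Synthetic decision rule loosely mirroring the reported 26/39/35 split.
    y = (0.26 * X[:, 0] + 0.39 * X[:, 1] + 0.35 * X[:, 2]
         + rng.normal(scale=0.3, size=n)) > 0

    model = LogisticRegression().fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
    shares = result.importances_mean / result.importances_mean.sum()
    for name, share in zip(["margin", "angle", "distance"], shares):
        print(f"{name}: {share:.0%} of explained variability")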


2018 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Helen Joanne Wall ◽  
Bernhard Hommel

A central issue in the study of joint task performance has been whether co-acting individuals perform their partner's part of the task as if it were their own. The present study addressed this issue by using joint task switching. A pair of actors shared two tasks that were presented in a random order, whereby the relevant task and actor were cued on each trial. Responses produced action effects that were either shared or separate between co-actors. When co-actors produced separate action effects, switch costs were obtained within the same actor (i.e., when the same actor performed consecutive trials) but not between co-actors (when different actors performed consecutive trials), implying that actors did not perform their co-actor's part. When the same action effects were shared between co-actors, however, switch costs were also obtained between co-actors, implying that actors did perform their co-actor's part. The results indicated that shared action effects induce task-set sharing between co-acting individuals.
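
For readers unfamiliar with the measure, the sketch below computes switch costs (mean RT on task-switch trials minus mean RT on task-repeat trials) from a time-ordered trial log, split into intrapersonal and interpersonal transitions as in the design above. The field names and toy trial values are assumptions for illustration, not the authors' data format.

    def switch_costs(trials):
        # trials: time-ordered list of dicts with keys "actor", "task",
        # and "rt" (response time in ms).
        buckets = {(p, t): [] for p in ("intra", "inter")
                   for t in ("switch", "repeat")}
        for prev, cur in zip(trials, trials[1:]):
            person = "intra" if cur["actor"] == prev["actor"] else "inter"
            transition = "switch" if cur["task"] != prev["task"] else "repeat"
            buckets[(person, transition)].append(cur["rt"])

        def mean(xs):
            return sum(xs) / len(xs) if xs else float("nan")

        return {p: mean(buckets[(p, "switch")]) - mean(buckets[(p, "repeat")])
                for p in ("intra", "inter")}

    trials = [{"actor": "A", "task": "color", "rt": 520},
              {"actor": "A", "task": "color", "rt": 500},  # intra repeat
              {"actor": "A", "task": "shape", "rt": 610},  # intra switch
              {"actor": "B", "task": "shape", "rt": 540},  # inter repeat
              {"actor": "B", "task": "color", "rt": 650},  # intra switch
              {"actor": "A", "task": "shape", "rt": 700}]  # inter switch
    print(switch_costs(trials))  # {'intra': 130.0, 'inter': 160.0}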


2019 ◽  
Author(s):  
Basil Wahn ◽  
Jill A. Dosso ◽  
Alan Kingstone

In daily life, humans constantly process information from multiple sensory modalities (e.g., vision, audition). Information across sensory modalities may (or may not) be combined to form the perception of a single event via the process of multisensory integration. Recent research suggests that performing a spatial crossmodal congruency task jointly with a partner affects multisensory integration. To date, it has not been investigated whether multisensory integration in other crossmodal tasks is also affected by performing a task jointly. To address this point, we investigated whether joint task performance also affects perceptual judgments in a crossmodal motion discrimination task and a temporal order judgment task. In both tasks, pairs of participants were presented with auditory and visual stimuli that might or might not be perceived as belonging to a single event. Each participant in a pair was required to respond to stimuli from one sensory modality only (e.g., visual stimuli only). Participants performed both individual and joint conditions. Replicating earlier multisensory integration effects, we found that participants' perceptual judgments were significantly affected by stimuli in the other modality for both tasks. However, we did not find that performing a task jointly modulated these crossmodal effects. Taking these results together with earlier findings, we suggest that joint task performance affects crossmodal effects in a manner dependent on how these effects are quantified (i.e., via response times or accuracy) and on the specific task demands (i.e., whether tasks require processing stimuli in terms of location, motion, or timing).
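
The final sentence distinguishes two ways a crossmodal effect can be quantified. As a minimal sketch, assuming a simple per-trial record, the code below computes both: a response-time cost and an accuracy cost between congruent and incongruent audiovisual trials. The field names are hypothetical, and each condition is assumed to contain at least one correct trial.

    def congruency_effects(trials):
        # trials: list of dicts with keys "congruent" (bool), "rt" (ms),
        # and "correct" (bool) for one participant's judgments.
        def summarise(subset):
            correct_rts = [t["rt"] for t in subset if t["correct"]]
            mean_rt = sum(correct_rts) / len(correct_rts)
            accuracy = sum(t["correct"] for t in subset) / len(subset)
            return mean_rt, accuracy

        congruent = summarise([t for t in trials if t["congruent"]])
        incongruent = summarise([t for t in trials if not t["congruent"]])
        # Positive values indicate a crossmodal congruency effect.
        return {"rt_cost_ms": incongruent[0] - congruent[0],
                "accuracy_cost": congruent[1] - incongruent[1]}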


2018 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Helen Joanne Wall ◽  
Bernhard Hommel

Studies on joint task performance have proposed that co-acting individuals co-represent the shared task context, which implies that actors integrate their co-actor's task components into their own task representation as if the whole task were their own. This proposal has been supported by results of joint tasks in which each actor is assigned a single response, so that selecting a response is equivalent to selecting an actor. The present study used joint task switching, which has previously shown switch costs on trials following the actor's own trial (intrapersonal switch costs) but not on trials following the co-actor's trial (interpersonal switch costs), suggesting that there is no task co-representation. We examined whether interpersonal switch costs can be obtained when action selection and actor selection are confounded, as in previous joint task studies. The present results confirmed that they can, demonstrating that switch costs occur both within a single actor and between co-actors when there is only a single response per actor, but not when there are two responses per actor. These results indicate that task co-representation is not necessarily implied even when effects occur across co-acting individuals, and that how the task is divided between co-actors plays an important role in determining how the actors represent the divided task components.


2015 ◽  
Vol 10 (10) ◽  
pp. 1365-1372 ◽  
Author(s):  
J. de la Asuncion ◽  
C. Bervoets ◽  
M. Morrens ◽  
B. Sabbe ◽  
E. R. A. De Bruijn