Competitive Game Play Attenuates Self-Other Integration during Joint Task Performance

2016 ◽  
Vol 7 ◽  
Author(s):  
Margit I. Ruissen ◽  
Ellen R. A. de Bruijn


Author(s):  
Jessica Elam ◽  
Nick Taylor

The rise of live-streaming platforms, and the related surge in popularity of esports, remind us that there is a politics of watching play. This article extends intensified scholarly interest in game spectatorship, offering a materialist consideration of the embodied work involved in spectating competitive game play. Most readily associated with the discursive alignments between competitive gaming and/as sport, the active camera mode used by esports competitors and “shoutcasters” facilitates analyzing the highly kinetic action of team-based combat in first-person shooters and multiplayer online battle arenas. Here, we draw from microanalyses of audio-visual recordings taken as individual participants spectated a Dota 2 match. Examining the cognitive and perceptual competencies they draw from, we argue that participants are incorporated into apparatuses of perception associated with militarized optical media. While at a discursive level esports spectators may be watching sports, at a material level they are playing with the logics of drones.


PLoS ONE ◽  
2014 ◽  
Vol 9 (7) ◽  
pp. e100318 ◽  
Author(s):  
J. Matias Kivikangas ◽  
Jari Kätsyri ◽  
Simo Järvelä ◽  
Niklas Ravaja

2021 ◽  
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Madelene Alanenpää ◽  
Ginevra Castellano

Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results show that gaze aversion in a social chat is an indicator of a robot's uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions reveal that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
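Gaze measures of the kind described above are typically derived as each target's share of total fixation time. A minimal sketch of that computation (the function name, trial data, and target labels are hypothetical illustrations, not the study's actual coding scheme):

```python
def gaze_proportions(fixations):
    """Return each gaze target's share of total fixation time.

    fixations: list of (target_label, duration_ms) tuples, one per fixation.
    """
    total = sum(duration for _, duration in fixations)
    shares = {}
    for target, duration in fixations:
        shares[target] = shares.get(target, 0) + duration
    return {target: duration / total for target, duration in shares.items()}

# Hypothetical session: fixation durations in milliseconds.
fixations = [
    ("robot", 400), ("task_object", 900),
    ("robot", 200), ("elsewhere", 100),
]
props = gaze_proportions(fixations)
# props["task_object"] == 0.5625, props["robot"] == 0.375 -> in this toy
# example the shared task object, not the robot, dominates gaze time.
```

Comparing such proportions against questionnaire scores across sessions is one way the paper's kind of implicit engagement measure can be operationalized.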


Author(s):  
Tiziana Casciaro ◽  
Miguel Lobo ◽  
Hendrik Wilhelm ◽  
Michael Wittland
2018 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Helen Joanne Wall ◽  
Bernhard Hommel

A central issue in the study of joint task performance has been one of whether co-acting individuals perform their partner’s part of the task as if it were their own. The present study addressed this issue by using joint task switching. A pair of actors shared two tasks that were presented in a random order, whereby the relevant task and actor were cued on each trial. Responses produced action effects that were either shared or separate between co-actors. When co-actors produced separate action effects, switch costs were obtained within the same actor (i.e., when the same actor performed consecutive trials) but not between co-actors (when different actors performed consecutive trials), implying that actors did not perform their co-actor’s part. When the same action effects were shared between co-actors, however, switch costs were also obtained between co-actors, implying that actors did perform their co-actor’s part. The results indicated that shared action effects induce task-set sharing between co-acting individuals.
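A switch cost in this paradigm is the difference in mean response time between trials where the task changes and trials where it repeats. A minimal sketch of that computation (the trial sequence and RT values are hypothetical, and for simplicity it ignores the actor dimension that the joint version of the paradigm adds):

```python
def switch_cost(trials):
    """Mean RT on task-switch trials minus mean RT on task-repeat trials.

    trials: list of (task, rt_ms) tuples in presentation order. A trial
    counts as a switch when its task differs from the preceding trial's.
    """
    switch_rts, repeat_rts = [], []
    for prev, cur in zip(trials, trials[1:]):
        (switch_rts if cur[0] != prev[0] else repeat_rts).append(cur[1])
    return (sum(switch_rts) / len(switch_rts)
            - sum(repeat_rts) / len(repeat_rts))

# Hypothetical sequence of two tasks, A and B:
trials = [("A", 500), ("A", 480), ("B", 560), ("B", 470), ("A", 590)]
# Switch trials: 560, 590 (mean 575); repeat trials: 480, 470 (mean 475).
cost = switch_cost(trials)  # 100.0 ms
```

In the joint version, the same difference would be computed separately for trial pairs performed by the same actor (intrapersonal switch costs) and by different actors (interpersonal switch costs).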


2019 ◽  
Author(s):  
Basil Wahn ◽  
Jill A. Dosso ◽  
Alan Kingstone

In daily life, humans constantly process information from multiple sensory modalities (e.g., vision, audition). Information across sensory modalities may (or may not) be combined to form the perception of a single event via the process of multisensory integration. Recent research suggests that performing a spatial crossmodal congruency task jointly with a partner affects multisensory integration. To date, it has not been investigated whether multisensory integration in other crossmodal tasks is also affected by performing a task jointly. To address this point, we investigated whether joint task performance also affects perceptual judgments in a crossmodal motion discrimination task and a temporal order judgment task. In both tasks, pairs of participants were presented with auditory and visual stimuli that might or might not be perceived as belonging to a single event. Each participant in a pair was required to respond to stimuli from one sensory modality only (e.g., visual stimuli only). Participants performed both individual and joint conditions. Replicating earlier multisensory integration effects, we found that participants' perceptual judgments were significantly affected by stimuli in the other modality for both tasks. However, we did not find that performing a task jointly modulated these crossmodal effects. Taking this together with earlier findings, we suggest that joint task performance affects crossmodal effects in a manner dependent on how these effects are quantified (i.e., via response times or accuracy) and the specific task demands (i.e., whether tasks require processing stimuli in terms of location, motion, or timing).


2018 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Helen Joanne Wall ◽  
Bernhard Hommel

Studies on joint task performance have proposed that co-acting individuals co-represent the shared task context, which implies that actors integrate their co-actor’s task components into their own task representation as if they were all their own task. This proposal has been supported by results of joint tasks in which each actor is assigned a single response, so that selecting a response is equivalent to selecting an actor. The present study used joint task switching, which has previously shown switch costs on trials following the actor’s own trial (intrapersonal switch costs) but not on trials that followed the co-actor’s trial (interpersonal switch costs), suggesting that there is no task co-representation. We examined whether interpersonal switch costs can be obtained when action selection and actor selection are confounded, as in previous joint task studies. The results confirmed this possibility, demonstrating that switch costs can occur within a single actor as well as between co-actors when there is only a single response per actor, but not when there are two responses per actor. These results indicate that task co-representation is not necessarily implied even when effects occur across co-acting individuals and that how the task is divided between co-actors plays an important role in determining how the actors represent the divided task components.

