Inferring task performance and confidence from displays of eye movements

2020 ◽  
Vol 34 (6) ◽  
pp. 1430-1443
Author(s):  
Selina N. Emhardt ◽  
Margot van Wermeskerken ◽  
Katharina Scheiter ◽  
Tamara van Gog
2020 ◽  
Vol 127 (3) ◽  
pp. 571-586
Author(s):  
Ikumi Tochikura ◽  
Daisuke Sato ◽  
Daiki Imoto ◽  
Atsuo Nuruki ◽  
Koya Yamashiro ◽  
...  

Previous studies have reported that baseball players have higher-than-average visual information processing abilities and outstanding motor control. Because the speed and position of both the ball and the batter are constantly changing, skilled players must acquire highly accurate visual information processing and decision-making. This study sought to clarify how eye movements are associated with baseball players' superior coincident-timing task performance. We recruited 15 right-handed baseball players and 15 age-matched track-and-field athletes. In a computer-based coincident-timing task, participants were instructed to stop a moving on-screen target by pressing a button when it reached a designated point. Bidirectional moving targets were presented in random order at angular velocities of 100, 83, 71, 63, 56, 50, and 46 deg/s. Participants completed 168 repetitions (42 reps × 4 sets) of the task while we recorded their eye movements using pupil-centre corneal reflection. Mixed-design analysis of variance revealed group effects favoring baseball players, who showed lower timing absolute error, as predicted from prior research on visual processing and decision-making in baseball players. However, in contrast to prior research, smooth-pursuit onset latency was significantly shorter in elite baseball players, and there were no significant group differences in saccade onset and offset latencies. This may be explained by our paradigm, in which moving targets were presented randomly at various velocities from the left and right. Our data show that baseball players achieve superior coincident timing when making decisions and movements based on visual information, even under laboratory conditions with randomly moving targets.
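As an illustration of the timing measures used in such coincident-timing tasks, the sketch below computes a signed (constant) and an absolute timing error for one trial, assuming the ideal press time is simply the travel angle divided by the target's angular velocity. The study's exact scoring procedure is not given in the abstract, so the function name, parameters, and error definitions here are illustrative assumptions.

```python
def timing_errors(angular_velocity_deg_s, travel_angle_deg, press_time_s):
    """Timing errors for one coincident-timing trial.

    Assumes the ideal press time is when a constant-velocity target
    has swept `travel_angle_deg` to reach the designated stop point
    (an illustrative definition, not necessarily the study's scoring).
    """
    ideal_time_s = travel_angle_deg / angular_velocity_deg_s
    constant_error = press_time_s - ideal_time_s  # signed: positive = late press
    absolute_error = abs(constant_error)
    return constant_error, absolute_error

# Example: a 50 deg/s target travelling 30 deg (ideal press at 0.6 s),
# with the button pressed at 0.63 s -> 30 ms late.
ce, ae = timing_errors(50, 30, 0.63)
```

Absolute error averaged over trials would then index overall timing accuracy, while the signed error distinguishes early from late responses.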


2018 ◽  
Author(s):  
Rachel N. Denison ◽  
Shlomit Yuval-Greenberg ◽  
Marisa Carrasco

Abstract Our visual input is constantly changing, but not all moments are equally relevant. Temporal attention, the prioritization of visual information at specific points in time, increases perceptual sensitivity at behaviorally relevant times. The dynamic processes underlying this increase are unclear. During fixation, humans make small eye movements called microsaccades, and inhibiting microsaccades improves perception of brief stimuli. Here we asked whether temporal attention changes the pattern of microsaccades in anticipation of brief stimuli. Human observers (female and male) judged brief stimuli presented within a short sequence. They were given either an informative precue to attend to one of the stimuli, which was likely to be probed, or an uninformative (neutral) precue. We found strong microsaccadic inhibition before the stimulus sequence, likely due to its predictable onset. Critically, this anticipatory inhibition was stronger when the first target in the sequence (T1) was precued (task-relevant) than when the precue was uninformative. Moreover, the timing of the last microsaccade before T1 and the first microsaccade after T1 shifted, such that both occurred earlier when T1 was precued than when the precue was uninformative. Finally, the timing of the nearest pre- and post-T1 microsaccades affected task performance. Directing voluntary temporal attention therefore impacts microsaccades, helping to stabilize fixation at the most relevant moments, over and above the effect of predictability. Just as saccading to a relevant stimulus can be an overt correlate of the allocation of spatial attention, precisely timed gaze stabilization can be an overt correlate of the allocation of temporal attention.

Significance statement: We pay attention at moments in time when a relevant event is likely to occur. Such temporal attention improves our visual perception, but how it does so is not well understood. Here we discovered a new behavioral correlate of voluntary, or goal-directed, temporal attention. We found that the pattern of small fixational eye movements called microsaccades changes around behaviorally relevant moments in a way that stabilizes the position of the eyes. Microsaccades during a brief visual stimulus can impair perception of that stimulus. Therefore, such fixation stabilization may contribute to the improvement of visual perception at attended times. This link suggests that in addition to cortical areas, subcortical areas mediating eye movements may be recruited with temporal attention.


2020 ◽  
Vol 34 (1) ◽  
pp. 17-47
Author(s):  
Minke J. de Boer ◽  
Deniz Başkent ◽  
Frans W. Cornelissen

Abstract The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect human observers to utilize specific perceptual strategies to process emotions and to handle their multimodal and dynamic nature. However, our present knowledge of these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation and have instead used static and/or unimodal stimuli with few emotion categories. To address this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and the accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio only, video only, or audiovisual. In terms of perceptual strategies, the eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio was added. Notably, in terms of task performance, audio-only performance was in most cases significantly worse than video-only and audiovisual performance, whereas the latter two conditions often did not differ. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the information available for emotion recognition, and that these changes can be comprehensively quantified with eye tracking.


2004 ◽  
Vol 35 (1) ◽  
pp. 1582 ◽  
Author(s):  
Erik Viirre ◽  
Karl Van Orden ◽  
Shawn Wing ◽  
Bradley Chase ◽  
Christopher Pribe ◽  
...  

2020 ◽  
Vol 10 (13) ◽  
pp. 4508 ◽  
Author(s):  
Armel Quentin Tchanou ◽  
Pierre-Majorique Léger ◽  
Jared Boasen ◽  
Sylvain Senecal ◽  
Jad Adam Taher ◽  
...  

Gaze convergence of multiuser eye movements during simultaneous collaborative use of a shared system interface has been proposed as an important albeit sparsely explored construct in the human-computer interaction literature. Here, we propose a novel index for measuring the gaze convergence of user dyads and address its validity through two consecutive eye-tracking studies. Eye-tracking data of user dyads were synchronously recorded while they simultaneously performed tasks on shared system interfaces. Results indicate the validity of the proposed gaze convergence index for measuring the gaze convergence of dyads. Moreover, as expected, our gaze convergence index was positively associated with dyad task performance and negatively associated with dyad cognitive load. These results suggest the utility of theoretical and practical applications such as synchronized gaze convergence displays in diverse settings. Further research, particularly into the construct's nomological network, is warranted.
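The abstract does not reproduce the index itself, but a minimal sketch of one way to score dyad gaze convergence is shown below: one minus the mean Euclidean distance between time-synchronized gaze samples, normalized by the screen diagonal, so that 1 means identical gaze positions and 0 means maximally divergent gaze. The function name and normalization choice are assumptions for illustration, not the published index.

```python
import math

def gaze_convergence(gaze_a, gaze_b, screen_diag_px):
    """Hypothetical dyad gaze convergence index.

    gaze_a, gaze_b: equal-length lists of (x, y) gaze points,
    time-synchronized across the two users of the dyad.
    Returns 1 - (mean pairwise distance / screen diagonal):
    1.0 = identical gaze, 0.0 = gaze a full screen diagonal apart.
    """
    distances = [math.dist(a, b) for a, b in zip(gaze_a, gaze_b)]
    return 1.0 - (sum(distances) / len(distances)) / screen_diag_px
```

Averaging such a sample-level score over a task window yields a single per-dyad value that could then be correlated with task performance or cognitive load, as in the study.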


2019 ◽  
Vol 12 (1) ◽  
Author(s):  
José A. León ◽  
José David Moreno ◽  
Inmaculada Escudero ◽  
Johanna K. Kaakinen

Comprehension and summarizing are closely related: more strategic and selective processing during reading should be reflected in higher-quality summaries. The aim of this study was therefore to use eye movement patterns to analyze how readers who produce good-quality summaries process texts. Forty undergraduate students were instructed to read six expository texts in order to answer a causal question introduced at the end of the first paragraph. After reading, participants produced an oral summary of the text. Based on the quality of the summaries, participants were divided into three groups: High, Medium, and Low Quality Summaries. The results revealed that readers who produced High Quality Summaries made significantly more and longer fixations and regressions in the question-relevant parts of the texts than the other two summary groups. These results suggest that summary task performance could be a good predictor of the strategies utilized during reading.


Author(s):  
Satoru Tokuda ◽  
Evan Palmer ◽  
Edgar Merkle ◽  
Alex Chaparro

This study proposes a new method to quantify mental workload (MWL) automatically, without interfering with the operator's primary task performance. An unobtrusive Tobii eye tracker recorded eye movements while participants performed a cognitively demanding N-back task. Original algorithms automatically analyzed the eye data, detected specific eye-deviation movements called saccadic intrusions (SIs), and quantified the eye deviation accounted for by SIs. This SI measure was strongly correlated with task difficulty level in the N-back task and with pupil diameter, suggesting that the SI measure reflects MWL and may be used as a measure of MWL.
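The study's own detection algorithms are not described in the abstract, but the general idea of quantifying the deviation accounted for by intrusion-like eye movements can be sketched with a simple point-to-point velocity threshold. The sampling rate, threshold value, and function name below are illustrative assumptions, not the authors' algorithm.

```python
def saccadic_intrusion_deviation(gaze_x_deg, fs_hz, vel_thresh_deg_s):
    """Sum the horizontal position change (deg) accounted for by
    candidate saccadic intrusions, detected as samples whose
    point-to-point velocity exceeds vel_thresh_deg_s.

    gaze_x_deg: horizontal gaze positions in degrees, sampled at fs_hz.
    Illustrative sketch only; real detectors also filter noise and
    impose minimum-duration and amplitude criteria.
    """
    total_deviation = 0.0
    for prev, cur in zip(gaze_x_deg, gaze_x_deg[1:]):
        step = cur - prev
        if abs(step) * fs_hz > vel_thresh_deg_s:  # instantaneous velocity, deg/s
            total_deviation += abs(step)
    return total_deviation
```

Summed over a task block, such a deviation total would rise with the frequency and size of intrusions, which is the kind of quantity the abstract reports as tracking N-back difficulty.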


1997 ◽  
Vol 77 (2) ◽  
pp. 761-774 ◽  
Author(s):  
Synnöve Carlson ◽  
Pia Rämä ◽  
Heikki Tanila ◽  
Ilkka Linnankoski ◽  
Heikki Mansikka

Carlson, Synnöve, Pia Rämä, Heikki Tanila, Ilkka Linnankoski, and Heikki Mansikka. Dissociation of mnemonic coding and other functional neuronal processing in the monkey prefrontal cortex. J. Neurophysiol. 77: 761–774, 1997. Single-neuron activity was recorded in the prefrontal cortex of three monkeys during the performance of a spatial delayed alternation (DA) task and during the presentation of a variety of visual, auditory, and somatosensory stimuli. The aim was to study the relationship between mnemonic neuronal processing and other functional neuronal responsiveness at the single-neuron level in the prefrontal cortex. Recordings were performed in both experimental situations from 152 neurons. The majority of the neurons (92%) were recorded in the prefrontal cortex; nine neurons were recorded in the dorsal bank of the anterior cingulate sulcus and two in the premotor cortex. Of the neurons recorded in the prefrontal area, 32% fired in relation to DA task performance and 39% were responsive to sensory stimulation or to the monkey's movements outside of the memory task context. Altogether, 42% of the recorded neurons were activated neither by the various stimuli nor by DA task performance. Three types of task-related neuronal activity were recorded: delay related, delay and movement related, and movement related. The majority of the task-related neurons (n = 33, 73%) fired in relation to the delay period. Of the delay-related neurons, 26 (79%) were spatially selective; spatially selective delay-related neurons thus made up 18% of the whole recorded population. Twelve task-related neurons (27%) fired in relation to the response period of the DA task. Five of these changed their firing rate during the delay period and were classified as delay/movement-related neurons. In contrast to the delay-related neurons, fewer than half (42%) of the response-related neurons were spatially selective.
The majority (70%) of the delay-related neurons could not be activated by any of the sensory stimuli used and did not fire in relation to the monkey's movements. The remaining delay-related neurons were activated by stationary and moving visual stimuli or by visual fixation of an object. In contrast to the delay-related neurons, the majority (66%) of the task-related neurons firing in relation to the movement period were also responsive to sensory stimulation outside of the task context. Most of these neurons responded to visual stimulation, visual fixation of an object, or tracking eye movements; one neuron gave a somatomotor and another a polysensory response. The majority (n = 37, 67%) of all neurons responding to stimulation outside of the task context did not fire in relation to DA task performance, and most of their responses were elicited by visual stimuli or were related to visual fixation of an object or to eye movements. Only six neurons fired in relation to auditory, somatosensory, or somatomotor stimulation. This study provides further evidence of the significance of the dorsolateral prefrontal cortex in spatial working memory processing. Although a considerable number of DA task-related neurons responded to visual, somatosensory, and auditory stimulation or to the monkey's movements, most delay-related neurons engaged in the spatial DA task did not respond to extrinsic sensory stimulation. These results indicate that most prefrontal neurons firing selectively during the delay phase of the DA task are highly specialized and process only task-related information.


2010 ◽  
Vol 3 (5) ◽  
Author(s):  
Mauro Cherubini ◽  
Marc-Antoine Nüssli ◽  
Pierre Dillenbourg

Little is known about the interplay between deixis and eye movements in remote collaboration. This paper presents quantitative results from an experiment in which participant pairs collaborated at a distance using chat tools that differed in how messages could be enriched with spatial information from the map in the shared workspace. We studied how the availability of what we define as an Explicit Referencing (ER) mechanism affected the coordination of participants' eye movements. Manipulating the availability of ER did not produce any significant difference in gaze coupling. However, we found a primary relation between pairs' recurrence of eye movements and their task performance. Implications for design are discussed.
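Gaze recurrence between collaborators can be scored in several ways. A minimal lag-zero sketch counts the fraction of time-aligned samples in which the two partners' gaze points fall within some radius of each other; the paper's actual measure may use a full cross-recurrence analysis over time lags, so the radius parameter and function name here are assumptions for illustration.

```python
import math

def gaze_recurrence(gaze_a, gaze_b, radius_px):
    """Fraction of time-aligned gaze samples where two collaborators'
    gaze points lie within radius_px of each other (lag-0 recurrence rate).

    Assumes both gaze streams are synchronized, of equal length,
    and expressed in the same shared-workspace coordinates.
    """
    if len(gaze_a) != len(gaze_b) or not gaze_a:
        raise ValueError("gaze streams must be non-empty and of equal length")
    hits = sum(1 for a, b in zip(gaze_a, gaze_b)
               if math.dist(a, b) <= radius_px)
    return hits / len(gaze_a)
```

A per-pair recurrence rate of this kind is the sort of quantity that could then be correlated with task performance, as the experiment reports.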


2019 ◽  
Vol 2 (1) ◽  
pp. 1-34 ◽  
Author(s):  
Moritz J. Schaeffer ◽  
Sandra L. Halverson ◽  
Silvia Hansen-Schirra

Abstract We assume that visual feedback from the written trace during translation plays an important role in monitoring the emerging translation. In this study, 44 participants translated with and without visual feedback from the target text (TT). Numerous measures were used to explore differences between the texts created in the two conditions and the characteristics of task performance in each condition. Analyses of the impact of source text (ST)–TT semantic and syntactic relationships showed differences on two of three behavioural measures across conditions. Comparing features of the translation process, we found that ST reading times were longer without visual feedback, while increased translational choice (implying more monitoring) affected eye movements on the ST in the same way in both conditions. Without visual feedback, translators faced with more translational options read the ST less linearly. Participants were more likely to look at the TT screen, or to read the TT, the longer they read the ST and the more translational options the ST offered, even when the TT window was blank.

