At first sight: robots’ subtle eye movement parameters affect human attentional engagement, spontaneous attunement and perceived human-likeness

2020 ◽  
Author(s):  
Davide Ghiglino ◽  
Cesco Willemse ◽  
Davide De Tommaso ◽  
Francesco Bossi ◽  
Agnieszka Wykowska

Human-robot interaction research could benefit from knowing how various parameters of robotic eye movement control affect specific cognitive mechanisms of the user, such as attention or perception. In the present study, we systematically teased apart control parameters of Trajectory Time of robot eye movements (rTT) between two joint positions and Fixation Duration (rFD) on each of these positions of the iCub robot. We showed recordings of these behaviors to participants and asked them to rate each video on how human-like the robot’s behavior appeared. Additionally, we recorded participants’ eye movements to examine whether the different control parameters evoked different effects on cognition and attention. We found that slow but variable robot eye movements yielded relatively higher human-likeness ratings. On the other hand, the eye-tracking data suggest that the human range of rTT is most engaging and evoked spontaneous involvement in joint attention. The pattern observed in subjective ratings was paralleled only by one measure in the implicit objective metrics, namely the frequency of spontaneous attentional following. These findings provide significant clues for controller design to improve the interaction between humans and artificial agents.
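The rTT/rFD manipulation can be illustrated with a brief sketch. This is not the authors’ actual iCub controller code; the parameter levels, names, and jitter mechanism below are hypothetical, meant only to show how trajectory time and fixation duration can be varied independently between two gaze targets.

```python
import random

# Hypothetical parameter levels (seconds); illustrative only, not the study's values.
RTT_LEVELS = [0.2, 0.5, 1.0, 2.0]   # trajectory time between the two joint positions
RFD_LEVELS = [0.5, 1.0, 2.0, 4.0]   # fixation duration held at each position

def gaze_sequence(n_shifts, rtt, rfd, jitter=0.0, rng=random):
    """Return a list of (target, trajectory_time, fixation_duration) tuples
    alternating between two gaze targets. `jitter` adds trial-to-trial
    variability around the nominal rTT/rFD values."""
    seq = []
    for i in range(n_shifts):
        target = "left" if i % 2 == 0 else "right"
        tt = max(0.05, rtt + rng.uniform(-jitter, jitter))
        fd = max(0.05, rfd + rng.uniform(-jitter, jitter))
        seq.append((target, tt, fd))
    return seq

# One hypothetical condition: slow but variable eye movements.
trial = gaze_sequence(6, rtt=1.0, rfd=2.0, jitter=0.3)
```

Such a generator makes the two factors of the design explicit: rTT controls how fast the eyes travel, rFD how long they dwell, and jitter whether the behavior looks mechanical or variable.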

Vol 11 (1) ◽  
pp. 31-39


In chapter 1 we describe the method of eye-tracking and how interest in studying eye movements has developed over time. We describe how modern eye-tracking devices work, including several of the systems most commonly used in cognitive research (SR-Research, SMI, Tobii). We also give some general information about eye movement parameters during reading and a brief overview of the main models of eye movement control in reading (SWIFT, E-Z Reader). These models account for a significant amount of empirical data and simulate the interaction of oculomotor and cognitive processes involved in reading. Differences between the models, as well as different interpretations allowed within the same model, reflect the complexity of reading and the ongoing discussion about the processes involved in it. The chapter ends with the pros and cons of using LCD and CRT displays in eye-tracking studies.


2009 ◽  
Vol 101 (2) ◽  
pp. 934-947 ◽  
Author(s):  
Masafumi Ohki ◽  
Hiromasa Kitazawa ◽  
Takahito Hiramatsu ◽  
Kimitake Kaga ◽  
Taiko Kitamura ◽  
...  

The anatomical connection between the frontal eye field and the cerebellar hemispheric lobule VII (H-VII) suggests a potential role of the hemisphere in voluntary eye movement control. To reveal the involvement of the hemisphere in smooth pursuit and saccade control, we made a unilateral lesion around H-VII and examined its effects in three Macaca fuscata that were trained to visually pursue a small target. In response to the step (3°)–ramp (5–20°/s) target motion, the monkeys usually showed an initial pursuit eye movement at a latency of 80–140 ms and a small catch-up saccade at 140–220 ms, followed by a postsaccadic pursuit eye movement that roughly matched the ramp target velocity. After unilateral cerebellar hemispheric lesioning, the initial pursuit eye movements were impaired, and the velocities of the postsaccadic pursuit eye movements decreased. The onsets of 5° visually guided saccades to the stationary target were delayed, and their amplitudes tended toward increased trial-to-trial variability but never became hypo- or hypermetric. Similar tendencies were observed in the onsets and amplitudes of catch-up saccades. The adaptation of open-loop smooth pursuit velocity, tested by a step increase in target velocity for a brief period, was impaired. These lesion effects were observed in all directions, but particularly in the ipsiversive direction. A recovery was observed at 4 wk postlesion for some of these effects. These results suggest that the cerebellar hemispheric region around lobule VII is involved in the control of smooth pursuit and saccadic eye movements.


1983 ◽  
Vol 27 (8) ◽  
pp. 728-732 ◽  
Author(s):  
Ted Megaw ◽  
Tayyar Sen

Bahill and Stark (1975) suggested that visual fatigue can be identified by changes in some of the saccadic eye movement parameters: increases in the frequency of glissades and overlapping saccades, and reductions in the peak velocity and duration of saccades. In their study, fatigue was induced by the same step tracking task that was used to evaluate the changes in saccadic parameters. However, there is evidence that subjects experience extreme feelings of fatigue while performing such a task and that the task is in some ways unnatural. The present study was designed to assess whether there are any differences in the various saccadic parameters obtained while subjects perform a step tracking task and a cognitive task involving the comparison of number strings. Both tasks were presented on a VDU screen. The second objective was to establish whether there are any changes in the parameters for either task as a result of prolonged performance. The results showed no major differences in the saccadic eye movements between the two tasks and no consistent changes resulting from prolonged performance.
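The saccadic parameters in question (duration, peak velocity) are conventionally extracted from a sampled eye-position trace with a velocity-threshold criterion. A minimal sketch of that idea, assuming a 1 kHz monocular position signal in degrees; the 30°/s threshold is an illustrative placeholder, not the criterion used by Bahill and Stark or by the present study:

```python
def saccade_metrics(positions, sample_rate_hz=1000.0, velocity_threshold=30.0):
    """Detect saccades in a 1-D eye-position trace (degrees) and return a list
    of (duration_s, peak_velocity_deg_s) tuples, using a simple
    velocity-threshold (I-VT style) criterion."""
    dt = 1.0 / sample_rate_hz
    # Instantaneous velocity by first-order finite differences.
    velocities = [(positions[i + 1] - positions[i]) / dt
                  for i in range(len(positions) - 1)]
    saccades, start = [], None
    for i, v in enumerate(velocities):
        if abs(v) >= velocity_threshold and start is None:
            start = i                     # saccade onset
        elif abs(v) < velocity_threshold and start is not None:
            segment = velocities[start:i]  # saccade offset reached
            saccades.append((len(segment) * dt, max(abs(x) for x in segment)))
            start = None
    if start is not None:                  # saccade still open at end of trace
        segment = velocities[start:]
        saccades.append((len(segment) * dt, max(abs(x) for x in segment)))
    return saccades
```

Real analyses would also smooth the velocity signal and classify glissades and overlapping saccades separately, which this sketch does not attempt.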


2020 ◽  
Author(s):  
Davide Ghiglino ◽  
Cesco Willemse ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

Artificial agents are on their way to interacting with us daily. Thus, the design of embodied artificial agents that can easily cooperate with humans is crucial for their deployment in social scenarios. Endowing artificial agents with human-like behavior may boost individuals' engagement during the interaction. We tested this hypothesis in two screen-based experiments. In the first, we compared the attentional engagement displayed by participants while they observed the same set of behaviors displayed by an avatar of a humanoid robot and by a human. In the second experiment, we assessed individuals' tendency to attribute anthropomorphic traits to the same agents displaying the same behaviors. The results of both experiments suggest that individuals need less effort to process and interpret an artificial agent's behavior when it closely resembles that of a human being. Our results support the idea that including subtle hints of human-likeness in artificial agents' behaviors would ease communication between them and their human counterparts during interactive scenarios.


Author(s):  
Maryam A. AlJassmi ◽  
Kayleigh L. Warrington ◽  
Victoria A. McGowan ◽  
Sarah J. White ◽  
Kevin B. Paterson

AbstractContextual predictability influences both the probability and duration of eye fixations on words when reading Latinate alphabetic scripts like English and German. However, it is unknown whether word predictability influences eye movements in reading similarly for Semitic languages like Arabic, which are alphabetic languages with very different visual and linguistic characteristics. Such knowledge is nevertheless important for establishing the generality of mechanisms of eye-movement control across different alphabetic writing systems. Accordingly, we investigated word predictability effects in Arabic in two eye-movement experiments. Both produced shorter fixation times for words with high compared to low predictability, consistent with previous findings. Predictability did not influence skipping probabilities for (four- to eight-letter) words of varying length and morphological complexity (Experiment 1). However, it did for short (three- to four-letter) words with simpler structures (Experiment 2). We suggest that word-skipping is reduced, and affected less by contextual predictability, in Arabic compared to Latinate alphabetic reading, because of specific orthographic and morphological characteristics of the Arabic script.


2019 ◽  
Vol 50 (2) ◽  
pp. 500-512
Author(s):  
Li Zhang ◽  
Guoli Yan ◽  
Li Zhou ◽  
Zebo Lan ◽  
Valerie Benson

Abstract The current study examined eye movement control in autistic (ASD) children. Simple targets were presented either in isolation or simultaneously with central, parafoveal, or peripheral distractors. Sixteen children with ASD (47–81 months) and nineteen age- and IQ-matched typically developing children were instructed to look at the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (time taken to initiate eye movements), followed by the parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, indicating potential atypical voluntary attentional control in ASD children.


2005 ◽  
Vol 15 (3) ◽  
pp. 149-160
Author(s):  
Jelte E. Bos ◽  
Jan van Erp ◽  
Eric L. Groen ◽  
Hendrik-Jan van Veen

This paper shows that tactile stimulation can override vestibular information regarding spinning sensations and eye movements. However, we conclude that the current data do not support the hypothesis that tactile stimulation controls eye movements directly. To this end, twenty-four subjects were passively disoriented by an abrupt stop after an increase in yaw velocity, about an Earth-vertical axis, up to 120°/s. Immediately thereafter, they had to actively maintain a stationary position despite a disturbance signal. Subjects wore a tactile display vest with 48 miniature vibrators, applied in different combinations with visual and vestibular stimuli. Their performance was quantified by RMS body velocity during self-control. Fast eye movement phases were analyzed by counting samples exceeding a velocity limit, and slow phases by a novel method applying a first-order model. Without tactile and visual information, subjects returned to a previous level of angular motion. Tactile stimulation decreased RMS self-velocity considerably, though less than vision. No differences were observed between conditions in which the vest was active during the recovery phase only or during the disorienting phase as well. All effects of tactile stimulation found on the eye movement parameters could be explained by the vestibular stimulus.
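Two of the analysis steps mentioned, quantifying performance as RMS body velocity and indexing fast eye-movement phases by counting samples above a velocity limit, reduce to very small computations. A sketch under the assumption of plain velocity traces in °/s; the 50°/s limit is a placeholder, not the study's actual criterion:

```python
import math

def rms(values):
    """Root-mean-square of a velocity trace, used here as a
    performance measure during the self-control phase."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def fast_phase_sample_count(eye_velocities, limit=50.0):
    """Count samples whose absolute velocity exceeds a limit,
    a crude index of fast (saccadic/nystagmus) eye-movement phases."""
    return sum(1 for v in eye_velocities if abs(v) > limit)
```

The slow-phase analysis, by contrast, requires fitting the first-order model the authors describe and is not reproducible from the abstract alone.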


2018 ◽  
Vol 71 (1) ◽  
pp. 20-27 ◽  
Author(s):  
Manuel Perea ◽  
Ana Marcet ◽  
Beatriz Uixera ◽  
Marta Vergara-Martínez

The examination of how we read handwritten words (i.e., the original form of writing) has typically been disregarded in the literature on reading. Previous research using word recognition tasks has shown that lexical effects (e.g., the word-frequency effect) are magnified when reading difficult handwritten words. To examine this issue in a more ecological scenario, we recorded participants' eye movements while they read handwritten sentences that varied in degree of legibility (i.e., sentences composed of words in an easy vs. a difficult handwritten style). For comparison purposes, we included a condition with printed sentences. Results showed a larger reading cost for sentences with difficult handwritten words than for sentences with easy handwritten words, which in turn showed a reading cost relative to sentences with printed words. Critically, the effect of word frequency was greater for difficult handwritten words than for easy handwritten or printed words in total times on a target word, but not in first-fixation durations or gaze durations. We examine the implications of these findings for models of eye movement control in reading.


2021 ◽  
Author(s):  
Anna Izmalkova ◽  
Anastasia Rzheshevskaya

The study explores the effects of graphological and semantic foregrounding on speech and gaze behavior during textual information construal in subjects with higher and lower impulsivity. Eye movements of sixteen participants were recorded as they read drama texts with interdiscourse switching (semantic foregrounding) and with typeface features distinct from the surrounding text (graphological foregrounding). Discourse modification patterns were analyzed and processed in several steps: specification of participant/object/action/event/perspective modification, parametric annotation of participants' discourse responses, and contrastive analysis of modification parameter activity and of parameter-synchronized activity. Significant distinctions were found in eye movement parameters (gaze count and initial fixation duration) between subjects with higher and lower impulsivity when reading parts of the text with graphological foregrounding. Impulsive subjects tended to visit these areas more often, and with longer initial fixations, than reflective subjects, which is explained in terms of stimulus-driven attention associated with bottom-up processes. However, these differences in gaze behavior did not result in pronounced distinctions in discourse responses, which were only slightly mediated by impulsivity/reflectivity.

