The Effect of an Adaptive Simulated Inner Voice on User’s Eye-gaze Behaviour, Ownership Perception and Plausibility Judgement in Virtual Reality

Author(s):  
Ding Ding ◽  
Mark A Neerincx ◽  
Willem-Paul Brinkman

Abstract: Virtual cognitions (VCs) are a stream of simulated thoughts people hear while immersed in a virtual environment, e.g. a simulated inner voice presented as a voice-over. As previous studies have shown, they can enhance people’s self-efficacy and knowledge about, for example, social interactions. Ownership and plausibility of these VCs are regarded as important for their effect, and enhancing both might therefore be beneficial. A potential strategy for achieving this is synchronizing the VCs with people’s eye fixations using eye-tracking technology embedded in a head-mounted display. This paper tests this idea in the context of a pre-therapy for spider and snake phobia, examining the ability to guide people’s eye fixations. An experiment with 24 participants was conducted using a within-subjects design. Each participant was exposed to two conditions: one where the VCs were adapted to the participant’s eye gaze, and a control condition where they were not. The findings of a Bayesian analysis suggest that credibly more ownership was reported and more eye-gaze shift behaviour was observed in the eye-gaze-adapted condition than in the control condition. Compared to the alternative of no or negative mediation, the findings also lend some credibility to the hypothesis that ownership, at least partly, positively mediates the effect eye-gaze-adapted VCs have on eye-gaze shift behaviour. Only weak support was found for plausibility as a mediator. These findings help improve insight into how VCs affect people.

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1471
Author(s):  
Yongxiang Wang ◽  
William Clifford ◽  
Charles Markham ◽  
Catherine Deegan

Distractions external to a vehicle contribute to visual attention diversion that may cause traffic accidents. As a low-cost and efficient advertising solution, billboards are widely installed along roadsides, especially on motorways. However, the effect of billboards on driver distraction, eye gaze, and cognition has not been fully investigated. This study utilises a customised driving simulator and a synchronised electroencephalography (EEG) and eye-tracking system to investigate the cognitive processes involved in the processing of driver visual information. A distinction is made between eye-gaze fixations on stimuli that assist driving and on others that may be a source of distraction. The study compares the driver’s cognitive responses to fixations on billboards with fixations on the vehicle dashboard. The measured eye-fixation-related potential (EFRP) shows that the P1 components are similar; however, the subsequent N1 and P2 components differ. In addition, an EEG motor response is observed when the driver adjusts driving speed when prompted by speed limit signs. The experimental results demonstrate that the proposed measurement system is a valid tool for assessing driver cognition and suggest that the driver’s level of cognitive engagement with a billboard is likely a precursor to distraction. The experimental results are compared with the human information processing model found in the literature.


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1051
Author(s):  
Si Jung Kim ◽  
Teemu H. Laine ◽  
Hae Jung Suk

Presence refers to the emotional state of users in which their motivation for thinking and acting arises from the perception of entities in a virtual world. The immersion level of users can vary when they interact with different media content, which may result in different levels of presence, especially in a virtual reality (VR) environment. This study investigates how user characteristics, such as gender, immersion level, and emotional valence in VR, are related to three elements of presence effects (attention, enjoyment, and memory). A VR story was created and used as an immersive stimulus in an experiment; it was presented through a head-mounted display (HMD) equipped with an eye tracker that collected the participants’ eye-gaze data during the experiment. A total of 53 university students (26 females, 27 males), aged 20 to 29 years (mean 23.8), participated in the experiment. A set of pre- and post-questionnaires was used as a subjective measure to support the evidence of relationships among the presence effects and user characteristics. The results showed that user characteristics, such as gender, immersion level, and emotional valence, affected the level of presence; however, there was no evidence that attention is associated with enjoyment or memory.


2018 ◽  
Vol 71 (9) ◽  
pp. 1860-1872 ◽  
Author(s):  
Stephen RH Langton ◽  
Alex H McIntyre ◽  
Peter JB Hancock ◽  
Helmut Leder

Research has established that a perceived eye gaze produces a concomitant shift in a viewer’s spatial attention in the direction of that gaze. The two experiments reported here investigate the extent to which the nature of the eye movement made by the gazer contributes to this orienting effect. On each trial in these experiments, participants were asked to make a speeded response to a target that could appear in a location toward which a centrally presented face had just gazed (a cued target) or in a location that was not the recipient of a gaze (an uncued target). The gaze cues consisted of either fast saccadic eye movements or slower smooth-pursuit movements. Cued targets were responded to faster than uncued targets, and this gaze-cued orienting effect was found to be equivalent for each type of gaze shift, both when the gazes were unpredictive of target location (Experiment 1) and when they were counterpredictive of target location (Experiment 2). The results offer no support for the hypothesis that motion speed modulates gaze-cued orienting. However, they do suggest that motion of the eyes per se, regardless of the type of movement, may be sufficient to trigger an orienting effect.


Author(s):  
Kristiina Jokinen ◽  
Päivi Majaranta

In this chapter, the authors explore possibilities to use novel face and gaze tracking technology in educational applications, especially in interactive teaching agents for second language learning. They focus on non-verbal feedback that provides information about how well the speaker has understood the presented information, and how well the interaction is progressing. Such feedback is important in interactive applications in general, and in educational systems, it is effectively used to construct a shared context in which learning can take place: the teacher can use feedback signals to tailor the presentation appropriate for the student. This chapter surveys previous work, relevant technology, and future prospects for such multimodal interactive systems. It also sketches future educational systems which encourage the students to learn foreign languages in a natural and inclusive manner, via participating in interaction using natural communication strategies.


Author(s):  
Chandni Parikh

Eye movements and gaze direction have been used to make inferences about perception and cognition since the 1800s. The driving factor behind recording overt eye movements stems from the fundamental idea that one’s gaze provides tremendous insight into the information processing that takes place early on during development. One of the key deficits seen in individuals diagnosed with Autism Spectrum Disorders (ASD) involves eye gaze and social attention processing. The current chapter focuses on the use of eye-tracking technology with high-risk infants who are siblings of children diagnosed with ASD, in order to highlight potential bio-behavioral markers that can inform the ascertainment of red flags and atypical behaviors associated with ASD within the first few years of development.


2020 ◽  
Vol 10 (5) ◽  
pp. 1668 ◽  
Author(s):  
Pavan Kumar B. N. ◽  
Adithya Balasubramanyam ◽  
Ashok Kumar Patil ◽  
Chethana B. ◽  
Young Ho Chai

Over the years, gaze input has become an accessible and increasingly sought-after human–computer interaction (HCI) modality for various applications. Research on gaze-based interactive applications has advanced considerably, as HCIs are no longer constrained to traditional input devices. In this paper, we propose a novel immersive eye-gaze-guided camera (called GazeGuide) that can seamlessly control the movements of a camera mounted on an unmanned aerial vehicle (UAV) from the eye gaze of a remote user. The video stream captured by the camera is fed into a head-mounted display (HMD) with a binocular eye tracker. The user’s eye gaze is the sole input modality used to maneuver the camera. A user study considering static and moving targets of interest in three-dimensional (3D) space was conducted to evaluate the proposed framework. GazeGuide was compared with a state-of-the-art input modality, a remote controller. The qualitative and quantitative results showed that the proposed GazeGuide performed significantly better than the remote controller.


2012 ◽  
Vol 4 (2) ◽  
pp. 99-114 ◽  
Author(s):  
Catherine I. Phillips ◽  
Christopher R. Sears ◽  
Penny M. Pexman

Abstract: The present research examines the effects of body-object interaction (BOI) on eye-gaze behaviour in a reading task. BOI measures perceptions of the ease with which a human body can physically interact with a word’s referent. A set of high-BOI words (e.g. cat) and a set of low-BOI words (e.g. sun) were selected, matched on imageability and concreteness (as well as other lexical and semantic variables). Facilitatory BOI effects were observed: gaze durations and total fixation durations were shorter for high-BOI words, and participants made fewer regressions to high-BOI words. The results provide evidence of a BOI effect on non-manual responses and in a situation that taps normal reading processes. We discuss how the results (a) suggest that stored motor information (as measured by BOI ratings) is relevant to lexical semantics, and (b) are consistent with an embodied view of cognition (Wilson, 2002).


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6623
Author(s):  
Luisa Lauer ◽  
Kristin Altmeyer ◽  
Sarah Malone ◽  
Michael Barz ◽  
Roland Brünken ◽  
...  

Augmenting reality via head-mounted displays (HMD-AR) is an emerging technology in education. The interactivity provided by HMD-AR devices is particularly promising for learning, but it presents a challenge to human activity recognition, especially with children. Recent technological advances in the speech and gesture recognition of Microsoft’s HoloLens 2 may address this prevailing issue. In a within-subjects study with 47 elementary school children (2nd to 6th grade), we examined the usability of the HoloLens 2 using a standardized tutorial on multimodal interaction in AR. The overall system usability was rated “good”. However, several behavioral metrics indicated that specific interaction modes differed in their efficiency. The results are of major importance for the development of learning applications in HMD-AR, as they partially deviate from previous findings. In particular, the well-functioning recognition of children’s voice commands that we observed represents a novelty. Furthermore, we found different interaction preferences in HMD-AR among the children. We also found the use of HMD-AR to have a positive effect on children’s activity-related achievement emotions. Overall, our findings can serve as a basis for determining general requirements, possibilities, and limitations of the implementation of educational HMD-AR environments in elementary school classrooms.


2021 ◽  
Author(s):  
Mariya Cherkasova ◽  
Eve Limbrick-Oldfield ◽  
Luke Clark ◽  
Jason J. S. Barton ◽  
A. Jon Stoessl ◽  
...  

The incentive sensitization theory of addiction proposes that, through repeated associations with addictive rewards, addiction-related stimuli acquire a disproportionately powerful motivational pull on behaviour. Animal research suggests trait-like individual variation in the degree of incentive salience attribution to reward-predictive cues, defined phenotypically as sign-tracking (high incentive salience attribution) and goal-tracking (low incentive salience attribution). While these phenotypes have been linked to addiction features in rodents, their translational validity has been little studied. Here, we examined whether sign- and goal-tracking in healthy human volunteers modulates the effects of reward-paired cues on cost-benefit decision making. Sign-tracking was measured in a Pavlovian conditioning paradigm as the amount of eye-gaze fixation on the reward-predictive cue versus the location of impending reward delivery. In Study 1 (Cherkasova et al., 2018), participants were randomly assigned to perform a two-choice lottery task in which rewards were either accompanied (cued, n=63) or unaccompanied (uncued, n=68) by money images and casino jingles. In Study 2, participants (n=58) performed cued and uncued versions of the task in a within-subjects design. Across both studies, cues promoted riskier choice, and both studies yielded evidence of goal-tracking being associated with greater risk-promoting effects of cues. These findings are at odds with the notion that sign-trackers are preferentially susceptible to the influence of reward cues on behaviour and point to the role of mechanisms besides incentive salience in mediating such influences.

