Oculomotor capture during real-world scene viewing depends on cognitive load

2011 ◽  
Vol 51 (6) ◽  
pp. 546-552 ◽  
Author(s):  
Michi Matsukura ◽  
James R. Brockmole ◽  
Walter R. Boot ◽  
John M. Henderson

Author(s):  
Dibyanshu Jaiswal ◽  
Debatri Chatterjee ◽  
Rahul Gavas ◽  
Ramesh Kumar Ramakrishnan ◽  
Arpan Pal

Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 68
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

In gaze-based Human-Robot Interaction (HRI), it is important to determine human visual intention for interacting with robots. A typical HRI scenario is one in which a human selects an object by gaze and a robotic manipulator picks up that object. In this work, we propose an approach, GazeEMD, that can be used to detect whether a human is looking at an object for HRI applications. We use Earth Mover’s Distance (EMD) to measure the similarity between the hypothetical gazes at objects and the actual gazes. The similarity score is then used to determine whether the human’s visual intention is on the object. We compare our approach with a fixation-based method and HitScan with a run length in the scenario of selecting daily objects by gaze. Our experimental results indicate that the GazeEMD approach has higher accuracy and is more robust to noise than the other approaches. Hence, users can lessen cognitive load by using our approach in real-world HRI scenarios.
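The paper presumably computes EMD over 2-D gaze distributions; as a simplified one-dimensional sketch (pure Python; the `select_object` helper, the Gaussian hypothetical-gaze model, and the similarity threshold are illustrative assumptions, not the authors' implementation), the core idea might look like:

```python
import random

def emd_1d(a, b):
    """1-D Earth Mover's Distance between two equal-length samples with
    uniform weights: the mean absolute difference of the sorted samples."""
    assert len(a) == len(b)
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_object(actual_gaze_x, object_centers, sigma=1.0, threshold=3.0, seed=0):
    """Return the object whose hypothetical gaze cloud (Gaussian around the
    object centre -- an assumed model) is most similar to the actual gaze
    trace, or None if no object passes the similarity threshold."""
    rng = random.Random(seed)
    scores = {}
    for name, cx in object_centers.items():
        hypo = [rng.gauss(cx, sigma) for _ in actual_gaze_x]
        scores[name] = emd_1d(hypo, actual_gaze_x)
    best = min(scores, key=scores.get)
    return best if scores[best] < threshold else None

# A gaze trace hovering near x = 10 should match the object centred there.
gaze = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2]
print(select_object(gaze, {"cup": 10.0, "phone": 25.0}))
```

Because EMD compares whole distributions rather than individual samples, a few stray gaze points shift the score only slightly, which is one way to read the robustness-to-noise claim.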


2010 ◽  
Vol 3 (3) ◽  
Author(s):  
Hsueh-Cheng Wang ◽  
Alex D. Hwang ◽  
Marc Pomplun

During text reading, the durations of eye fixations decrease with greater frequency and predictability of the currently fixated word (Rayner, 1998, 2009). However, it has not been tested whether those results also apply to scene viewing. We computed object frequency from both linguistic corpora and visual scene analysis (LabelMe; Russell et al., 2008), and applied Latent Semantic Analysis (Landauer et al., 1998) to estimate predictability. In a scene-viewing experiment, we found that, for small objects, linguistics-based frequency, but not scene-based frequency, had effects on first fixation duration, gaze duration, and total time. Both linguistic and scene-based predictability affected total time. As in reading, fixation duration decreased with higher frequency and predictability. For large objects, the direction of the effects was the inverse of that found in reading studies. These results suggest that the recognition of small objects in scene viewing shares some characteristics with the recognition of words in reading.
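LSA estimates predictability from co-occurrence statistics reduced by SVD. The study's corpus and weighting choices are not given here; a minimal sketch, assuming a made-up object-by-scene count matrix and a plain-NumPy truncated SVD (all object names and counts hypothetical):

```python
import numpy as np

# Toy object-by-scene co-occurrence counts (rows: objects, cols: scenes).
# Real studies derived such counts from corpora like LabelMe; these are invented.
objects = ["stove", "kettle", "pillow"]
counts = np.array([
    [4, 0, 3],   # stove:  kitchen scenes
    [3, 0, 2],   # kettle: kitchen scenes
    [0, 5, 0],   # pillow: bedroom scenes
], dtype=float)

# LSA: truncated SVD of the (log-weighted) count matrix.
U, s, Vt = np.linalg.svd(np.log1p(counts), full_matrices=False)
k = 2
vecs = U[:, :k] * s[:k]          # k-dimensional object vectors

def predictability(i, j):
    """Cosine similarity between two object vectors in the reduced space:
    a proxy for how predictable object j is given object i's context."""
    a, b = vecs[i], vecs[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A kettle should be more predictable from a stove than a pillow is.
print(predictability(0, 1) > predictability(0, 2))  # → True
```

The truncation to k dimensions is what lets LSA generalize: objects that never co-occur directly can still end up close if they share scene contexts.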


2004 ◽  
Author(s):  
James R. Brockmole ◽  
Michael L. Mack ◽  
Monica S. Castelhano ◽  
Aude Oliva ◽  
John M. Henderson

2012 ◽  
Vol 12 (9) ◽  
pp. 564-564
Author(s):  
L. Loschky ◽  
R. Ringer ◽  
A. Larson ◽  
G. Hughes ◽  
K. Dean ◽  
...  

2021 ◽  
Author(s):  
Candace Elise Peacock ◽  
Elizabeth Hall ◽  
John M. Henderson

Although the physical salience of objects has previously been demonstrated to guide attention in real-world scene perception, it is unknown whether objects are also prioritized based on their meaning. To answer this question, we computed the average meaning and the average physical salience of objects in scenes. Using eye movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than low-meaning objects while controlling for object salience. The results demonstrated that fixations are more likely to be directed to high-meaning objects than low-meaning objects regardless of object salience. Furthermore, the influence of object salience was progressively reduced as object meaning increased and was eliminated at the highest levels of meaning. Overall, these findings provide the first evidence that objects are prioritized by meaning for attentional selection during active scene viewing.
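"Controlling for salience" typically means entering both predictors into one model. The study's actual statistical models are not reproduced here; a minimal sketch of the idea, assuming synthetic data and a plain-NumPy logistic regression fit by gradient ascent (all values invented):

```python
import numpy as np

# Synthetic data: fixation probability driven by meaning only (assumed toy setup).
rng = np.random.default_rng(1)
n = 2000
meaning = rng.uniform(0, 1, n)
salience = rng.uniform(0, 1, n)
p_true = 1 / (1 + np.exp(-(3 * meaning - 1.5)))   # meaning-only ground truth
fixated = (rng.random(n) < p_true).astype(float)

# Logistic regression with both predictors entered jointly: if meaning drives
# fixation, its coefficient stays large while the salience coefficient
# shrinks toward zero.
X = np.column_stack([np.ones(n), meaning, salience])
w = np.zeros(3)
for _ in range(3000):                 # plain gradient ascent on the log-likelihood
    pred = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (fixated - pred) / n

print(w[1] > w[2])   # meaning coefficient dominates the salience coefficient
```

Fitting both coefficients at once is what separates "high-meaning objects attract fixations" from "high-meaning objects merely happen to be salient."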


2017 ◽  
Vol 3 (3) ◽  
pp. 226-245 ◽  
Author(s):  
Naomi Vingron ◽  
Jason W. Gullifer ◽  
Julia Hamill ◽  
Jakob Leimgruber ◽  
Debra Titone

Abstract: In daily life, we experience dynamic visual input referred to as the “linguistic landscape” (LL), composed of images and text such as signs and billboards (Gorter, 2013; Landry & Bourhis, 1997; Shohamy, Ben-Rafael, & Barni, 2010). While much is known about LLs descriptively, less is known about what people notice when viewing LLs. Building upon the bilingual eye movement reading literature (e.g., Whitford, Pivneva, & Titone, 2016) and the scene viewing literature (e.g., Henderson & Ferreira, 2004), we report a preliminary study of French-English bilinguals’ eye movements as they viewed LL images from Montréal. These preliminary data suggest that eye tracking is a promising new method for investigating how people with different language backgrounds process real-world LL images.


2021 ◽  
Vol 20 (2) ◽  
pp. 1-20
Author(s):  
Ali Akbari ◽  
Jonathan Martinez ◽  
Roozbeh Jafari

Annotating activities of daily living (ADL) is vital for developing machine learning models for activity recognition. It is also critical for self-reporting purposes, such as in assisted living, where users are asked to log their ADLs. However, data annotation becomes extremely challenging in real-world data collection scenarios, where users have to provide annotations and labels on their own. Methods such as self-reports that rely on users’ memory and compliance are prone to human error and become burdensome because they increase users’ cognitive load. In this article, we propose a lightweight yet effective context-aware change point detection algorithm that is implemented and run on a smartwatch to facilitate data annotation for high-level ADLs. The proposed system detects the moments of transition from one activity to another and prompts the users to annotate their data. We leverage freely available Bluetooth Low Energy (BLE) information broadcast by various devices to detect changes in environmental context. This contextual information is combined with a motion-based change point detection algorithm, which utilizes data from wearable motion sensors, to reduce false positives and enhance the system’s accuracy. Through real-world experiments, we show that the proposed system improves the quality and quantity of labels collected from users by reducing human errors while easing users’ cognitive load and facilitating the data annotation process.
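The abstract does not spell out the fusion rule; one plausible minimal sketch of combining BLE context change with motion-based change detection (the Jaccard distance over device sets, the variance-ratio test, and all thresholds are assumptions, not the authors' method):

```python
def jaccard_change(prev_devices, cur_devices, threshold=0.5):
    """Flag a context change when the set of visible BLE devices shifts."""
    prev, cur = set(prev_devices), set(cur_devices)
    if not prev and not cur:
        return False
    jaccard = len(prev & cur) / len(prev | cur)
    return (1 - jaccard) > threshold

def motion_change(window_a, window_b, ratio=2.0):
    """Flag a motion change when accelerometer variance shifts sharply
    between two adjacent windows."""
    def var(w):
        m = sum(w) / len(w)
        return sum((x - m) ** 2 for x in w) / len(w)
    va, vb = var(window_a), var(window_b)
    lo, hi = min(va, vb), max(va, vb)
    return hi > ratio * max(lo, 1e-9)

def should_prompt(prev_devices, cur_devices, win_a, win_b):
    """Prompt the user to annotate only when BOTH signals agree; requiring
    agreement is one way to suppress either sensor's false positives."""
    return jaccard_change(prev_devices, cur_devices) and motion_change(win_a, win_b)

# Moving from the kitchen (stationary) to the office (walking): both fire.
print(should_prompt(["tv", "fridge"], ["monitor", "printer"],
                    [0.0, 0.1, 0.0, 0.1], [1.0, -1.2, 0.9, -1.1]))  # → True
```

Gating the prompt on both signals trades a little sensitivity for far fewer spurious interruptions, which matches the stated goal of keeping the annotation burden low.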

