Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1949
Author(s):  
Xiang Li ◽  
Rabih Younes ◽  
Diana Bairaktarova ◽  
Qi Guo

The difficulty level of learning tasks is a concern that often needs to be considered in the teaching process. Teachers usually adjust the difficulty of exercises dynamically according to students’ prior knowledge and abilities to achieve better teaching results. In e-learning, where no teacher is involved, the difficulty of a task often exceeds a student’s ability. In attempts to solve this problem, several researchers have investigated the problem-solving process using eye-tracking data. However, although most e-learning exercises take the form of fill-in-the-blank or multiple-choice questions, previous research focused on building cognitive models from eye-tracking data collected on more flexible problem formats, which may lead to impractical results. In this paper, we build models to predict the difficulty level of spatial visualization problems from eye-tracking data collected on multiple-choice questions. We use eye tracking and machine learning to investigate (1) differences in eye movements across questions of different difficulty levels and (2) the possibility of predicting a problem’s difficulty level from eye-tracking data. Our models achieved an average accuracy of 87.60% on eye-tracking data from questions the classifier had seen before and an average of 72.87% on questions it had not yet seen. The results confirm that eye movements, especially fixation duration, contain essential information about question difficulty and are sufficient for building machine-learning models that predict difficulty level.
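The abstract’s core claim, that fixation-duration statistics alone can separate easy from hard questions, can be sketched with a toy classifier. The following is an illustrative example on synthetic data, not the authors’ actual pipeline: the feature names, distributions, and nearest-centroid model are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fixation durations (ms): assume harder questions evoke
# longer fixations on average. One feature vector per trial:
# [mean duration, std of durations, max duration].
def make_trials(n, mean_ms, label):
    durations = rng.normal(mean_ms, 40.0, size=(n, 30))  # 30 fixations/trial
    feats = np.stack(
        [durations.mean(1), durations.std(1), durations.max(1)], axis=1
    )
    return feats, np.full(n, label)

X_easy, y_easy = make_trials(50, 220.0, 0)  # label 0 = easy
X_hard, y_hard = make_trials(50, 320.0, 1)  # label 1 = hard
X = np.vstack([X_easy, X_hard])
y = np.concatenate([y_easy, y_hard])

# Nearest-centroid classifier: predict the class whose mean feature
# vector is closest in Euclidean distance.
centroids = np.stack([X[y == c].mean(0) for c in (0, 1)])

def predict(feats):
    dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(1)

accuracy = (predict(X) == y).mean()
```

Even this crude model separates the two synthetic classes almost perfectly, which illustrates why fixation duration is such a strong signal; the paper’s reported 72.87–87.60% accuracies reflect the harder setting of real, noisy gaze data.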

2018 ◽  
Vol 51 (1) ◽  
pp. 451-452
Author(s):  
Raimondas Zemblys ◽  
Diederick C. Niehorster ◽  
Kenneth Holmqvist

2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Nachiappan Valliappan ◽  
Na Dai ◽  
Ethan Steinberg ◽  
Junfeng He ◽  
Kantwon Rogers ◽  
...  

Abstract Eye tracking has been widely used for decades in vision, language, and usability research. However, most prior work has focused on large desktop displays and specialized eye trackers that are expensive and do not scale. Little is known about eye movement behavior on phones, despite their pervasiveness and the large amount of time users spend on them. We leverage machine learning to demonstrate accurate smartphone-based eye tracking without any additional hardware. We show that the accuracy of our method is comparable to that of state-of-the-art mobile eye trackers that are 100x more expensive. Using data from over 100 opted-in users, we replicate key findings from previous eye movement research on oculomotor tasks and saliency analyses during natural image viewing. In addition, we demonstrate the utility of smartphone-based gaze for detecting reading comprehension difficulty. Our results show the potential for scaling eye movement research by orders of magnitude to thousands of participants (with explicit consent), enabling advances in vision research, accessibility, and healthcare.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jolanta Korycka-Skorupa ◽  
Izabela Gołębiowska

Abstract Multivariate mapping is a technique in which multivariate data are encoded into a single map. Design solutions for multivariate mapping vary in the number of phenomena mapped, the map type, and the visual variables applied. Unlike other authors, who have mainly evaluated bivariate maps, in our empirical study we compared three solutions for mapping four variables: two types of multivariate maps (intrinsic and extrinsic) and a simple univariate alternative (serving as a baseline). We analysed usability performance metrics (answer time, answer accuracy, subjective rating of task difficulty) and eye-tracking data. The results suggest that experts used all the tested maps with similar answer times and accuracy, even the four-variable intrinsic maps, which are considered a challenging solution. However, the eye-tracking data revealed more nuanced differences in the cognitive effort evoked by the tested maps across task types.


Geografie ◽  
2019 ◽  
Vol 124 (2) ◽  
pp. 163-185 ◽  
Author(s):  
Jan Brus ◽  
Michal Kučera ◽  
Stanislav Popelka

The understanding of uncertainty, or the difference between a real geographic phenomenon and the user’s understanding of that phenomenon, is essential for those who work with spatial data. From this perspective, map symbols can be used as a tool for providing information about the level of uncertainty. Nevertheless, communicating uncertainty to the user in this way can be a challenging task. The main aim of the paper is to propose intuitive symbols to represent uncertainty. This goal is achieved by user testing of specially compiled point symbol sets. Emphasis is given to the intuitiveness and easy interpretation of the proposed symbols. The symbols are part of a user-centered eye-tracking experiment designed to evaluate the suitability of the proposed solutions. Eye-tracking data are analyzed to determine the subjects’ performance in reading the map symbols. The analyses include the evaluation of observed parameters, user preferences, and cognitive metrics. Based on these, the most appropriate methods for designing point symbols are recommended and discussed.


2002 ◽  
Vol 55 (1) ◽  
pp. 225-240 ◽  
Author(s):  
Simon P. Liversedge ◽  
Kevin B. Paterson ◽  
Emma L. Clayes

We report an eye movement experiment investigating the influence of the focus operator only on the syntactic processing of “long” relative clause sentences. Paterson, Liversedge, and Underwood (1999) found that readers were garden-pathed by “short” reduced relative clause sentences containing the focus operator only. They argued that, due to thematic differences between “short” and “long” relative clause sentences, the garden path effect might not occur when “long” reduced relative clause sentences are read. Eye-tracking data show that the garden path effects found during initial processing of the disambiguating verb of “long” reduced sentences without only were absent or delayed in their counterparts with only. We discuss our results in terms of current theories of sentence processing.

