A Cognitive Load Assessment Method Considering Individual Differences in Eye Movement Data

Author(s):  
Jun Chen ◽  
Qilin Zhang ◽  
Long Cheng ◽  
Xudong Gao ◽  
Lin Ding


2016 ◽ 
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Xin Liu ◽  
Tong Chen ◽  
Guoqiang Xie ◽  
Guangyuan Liu

Cognitive overload not only contributes to physical and mental illness but also impairs work efficiency and safety. Hence, measuring cognitive load has been an important part of cognitive load theory. In this paper, we propose a method to identify the state of cognitive load from eye movement data in a noncontact manner. We designed a visual experiment to elicit high and low cognitive load states in two light-intensity environments and recorded eye movement data throughout the process. Twelve salient eye movement features were selected using statistical tests. Algorithms for processing some of these features are proposed to increase the recognition rate. Finally, we used a support vector machine (SVM) to classify high and low cognitive load. The experimental results show that the method achieves 90.25% accuracy under the light-controlled condition.
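The classification step described above can be sketched with a standard SVM pipeline. This is a minimal illustration on synthetic data: the three feature names, their distributions, and the RBF kernel are assumptions for the sketch, not the paper's twelve selected features or its actual experimental data.

```python
# Sketch: binary SVM classification of high vs. low cognitive load from
# eye-movement features. Synthetic data stand in for the real recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: fixation duration (ms), pupil diameter (mm), blink rate (/min)
low  = rng.normal([250, 3.0, 15], [30, 0.2, 3], size=(n, 3))  # low-load samples
high = rng.normal([320, 3.6, 10], [30, 0.2, 3], size=(n, 3))  # high-load samples
X = np.vstack([low, high])
y = np.array([0] * n + [1] * n)  # 0 = low load, 1 = high load

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes the held-out accuracy is high; the paper's 90.25% figure comes from its own light-controlled recordings, not from data like these.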


Information ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 170
Author(s):  
Jian Lv ◽  
Xiaoping Xu ◽  
Ning Ding

To address the problem of objectively obtaining the threshold of a user's cognitive load in a virtual reality interactive system, a method for quantifying user cognitive load based on an eye movement experiment is proposed. Eye movement data were collected during the virtual reality interaction process using an eye tracker. Taking the number of fixation points, the average fixation duration, the average saccade length, and the number of first-mouse-click fixation points as independent variables, and the number of backward-looking (regression) times and the user cognitive load value as dependent variables, a cognitive load evaluation model was established based on a probabilistic neural network. The model was validated using eye movement data and subjective cognitive load data. The results show that the absolute error and relative mean square error were 6.52%–16.01% and 6.64%–23.21%, respectively. Therefore, the model is feasible.
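The validation metrics quoted above can be computed as in the sketch below. The abstract does not give formulas, so the exact definitions of "absolute error" and "relative mean square error", and the load values themselves, are assumptions for illustration.

```python
# Sketch of the validation step: comparing model-predicted cognitive load
# against subjective ratings. Values and formulas are illustrative only.
import numpy as np

predicted  = np.array([0.62, 0.48, 0.71, 0.55])  # hypothetical model outputs
subjective = np.array([0.60, 0.52, 0.68, 0.50])  # hypothetical subjective ratings

# Per-sample absolute error as a percentage of the subjective value
abs_err_pct = 100 * np.abs(predicted - subjective) / subjective
# Relative mean square error as a percentage
rel_mse_pct = 100 * np.mean(((predicted - subjective) / subjective) ** 2)

print(abs_err_pct.round(2), round(rel_mse_pct, 2))
```

A probabilistic neural network would supply the `predicted` values here; the error computation itself is independent of the model.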


2015 ◽  
pp. 902-917
Author(s):  
Yuan-Cheng Lin ◽  
Ming-Hsun Shen ◽  
Chia-Ju Liu

This study adopted Cognitive Load Theory (CLT) to investigate the influence of multimedia presentations on science learning achievement and the correlations between eye movement patterns under distinct multimedia combinations and learner-controlled modes. Three units from the Science Education Website established by the Ministry of Education (Tainan) to assist student learning were employed: “Air and Combustion”, “Heat Effects toward Substances”, and “Healthy Diet.” This multifunctional website offers teaching resources, interesting experiments, inquiry experiments, virtual animations, multi-assessments, and supplementary materials, which are highly interactive and simulative. Six classes of fifth graders (n = 192) participated in this study. Our findings showed that the combination of multimedia elements clearly influenced students' performance; the “animation + narration” group performed markedly better than the “animation + subtitles” group. Even when the animated subject matter was presented in small segments following the Segmentation Principle, the multimedia presentation still affected learning achievement, suggesting that the modality effect on students' learning persists. Regarding eye movement patterns, this study focused mainly on the “active-control mode” and “multimedia combination forms”. The eye movement data supplemented the evidence used to identify the relevant results. In conclusion, inappropriate multimedia combinations may interfere with learning; more functions and information inputs do not guarantee better learning effects.


2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing eye movement data from young (Mage = 21 years) and old (Mage = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccades). Estimates of the standardized mean difference, d, between the age groups were obtained for all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 across measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis statistically confirms the most common patterns observed in previous research; therefore, eye movements seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses proposed to explain the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
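The per-study standardized mean difference d and a fixed-effect combination of such values can be sketched as follows. The study numbers are invented for illustration and are not values from the 22 experiments analyzed.

```python
# Sketch: Cohen's d per study, then an inverse-variance weighted
# (fixed-effect) combined effect size, as used in meta-analysis.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def combined_effect(ds, ns):
    """Inverse-variance weighted mean of per-study d values."""
    weights = []
    for d, (n1, n2) in zip(ds, ns):
        # Standard large-sample approximation to the variance of d
        var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        weights.append(1 / var)
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)

# Invented example: old readers' mean fixation duration 260 ms (SD 25, n=24)
# vs. young readers' 230 ms (SD 25, n=24), and a second hypothetical study.
d1 = cohens_d(260, 25, 24, 230, 25, 24)
d2 = cohens_d(255, 30, 30, 235, 30, 30)
print(round(combined_effect([d1, d2], [(24, 24), (30, 30)]), 2))
```

A positive combined d here means longer fixations for the older group, matching the direction of the effects reported in the abstract.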


2014 ◽  
Author(s):  
Bernhard Angele ◽  
Elizabeth R. Schotter ◽  
Timothy Slattery ◽  
Tara L. Chaloukian ◽  
Klinton Bicknell ◽  
...  

2020 ◽  
Vol 10 (5) ◽  
pp. 92
Author(s):  
Ramtin Zargari Marandi ◽  
Camilla Ann Fjelsted ◽  
Iris Hrustanovic ◽  
Rikke Dan Olesen ◽  
Parisa Gazerani

The affective dimension of pain contributes to pain perception, and cognitive load may influence pain-related feelings. Eye tracking has proven useful for objectively detecting cognitive load effects through relevant eye movement characteristics. In this study, we investigated whether eye movement characteristics differ in response to pain-related feelings under low and high cognitive loads. A set of validated control and pain-related sounds was applied to provoke pain-related feelings. Twelve healthy young participants (six females) performed a cognitive task at two load levels, once with the control sounds and once with the pain-related sounds, in randomized order. During the tasks, eye movements and task performance were recorded. Afterwards, the participants filled out questionnaires on their pain perception under the applied cognitive loads. Our findings indicate that increased cognitive load was associated with decreased saccade peak velocity, saccade frequency, and fixation frequency, as well as increased fixation duration and pupil dilation range. Among the oculometrics, pain-related feelings were reflected only in the pupillary responses under low cognitive load. Performance decreased and perceived cognitive load increased with the task load level; neither was influenced by the pain-related sounds. Pain-related feelings were lower when performing the task than when no task was performed in an independent group of participants, which might be due to cognitive engagement during the task. This study demonstrates that cognitive processing can moderate the feelings associated with pain perception.


Author(s):  
Ayush Kumar ◽  
Prantik Howlader ◽  
Rafael Garcia ◽  
Daniel Weiskopf ◽  
Klaus Mueller

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been used to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye movement data and the saliency map are presented to analysts either as separate views or as merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues representing visual attention are analyzed to reveal which saliency features are prominent for visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.


1976 ◽  
Vol 43 (2) ◽  
pp. 555-561 ◽  
Author(s):  
Richard A. Wyrick ◽  
Vincent J. Tempone ◽  
Jack Capehart

The relationship between attention and incidental learning during discrimination training was studied in 30 children, aged 10 to 11. A polymetric eye-movement recorder measured direct visual attention. Consistent with previous findings, recall of incidental stimuli was greatest during the initial and terminal stages of intentional learning. Contrary to previous explanations, however, visual attention to incidental stimuli was not related to training. While individual differences in attention to incidental stimuli were predictive of recall, attention to incidental stimuli was not related to level of training. Results suggested that changes in higher order information processing rather than direct visual attention were responsible for the curvilinear learning of incidental stimuli during intentional training.


1972 ◽  
Vol 35 (1) ◽  
pp. 103-110
Author(s):  
Phillip Kleespies ◽  
Morton Wiener

This study explored (1) whether there is evidence of visual input at so-called “subliminal” exposure durations, and (2) whether the response, if any, is a function of the thematic content of the stimulus. Thematic content (threatening versus non-threatening) and stimulus structure (angular versus curved) were varied independently under “subliminal,” “part-cue,” and “identification” exposure conditions. With Ss' reports and the frequency and latency of first eye movements (the “orienting reflex”) as input indicators, there was no evidence of input differences as a function of thematic content at any exposure duration, and the “report” data were consistent with the eye movement data.

