The influence of text difficulty level and topic on eye-movement behavior and pupil size during reading

Author(s):  
Akram Bayat ◽  
Marc Pomplun
PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e3783 ◽  
Author(s):  
Yousri Marzouki ◽  
Valériane Dusaucy ◽  
Myriam Chanceaux ◽  
Sebastiaan Mathôt

Negative correlations between pupil size and the tendency to look at salient locations have been reported in recent studies (e.g., Mathôt et al., 2015). It has been hypothesized that this negative correlation reflects the mental effort participants invest in the task, which in turn leads to pupil dilation. Here we present an exploratory study on the effect of expertise on eye-movement behavior. Because no standard tool is available for evaluating WoW players’ expertise, we built an off-game questionnaire testing players’ knowledge of WoW and acquired skills, such as completed raids, highest rated battlegrounds, Skill Points, etc. Experts (N = 4) and novices (N = 4) in the massively multiplayer online role-playing game World of Warcraft (WoW) viewed 24 designed video segments from the game that differed in their content (i.e., informative locations) and visual complexity (i.e., salient locations). Consistent with previous studies, we found a negative correlation between pupil size and the tendency to look at salient locations (experts, r = −.17, p < .0001; novices, r = −.09, p < .0001). This correlation has been interpreted in terms of mental effort: people are inherently biased to look at salient locations (sharp corners, bright lights, etc.), but are able to overcome this bias (as experts do) if they invest sufficient mental effort. Crucially, we observed that this correlation was stronger for expert WoW players than for novice players (Z = −3.3, p = .0011). This suggests that experts have learned to improve control over their eye-movement behavior by guiding their eyes toward informative but potentially low-salience areas of the screen. These findings may contribute to our understanding of what makes an expert an expert.
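The reported difference between the two correlations (Z = −3.3) is the kind of result a Fisher r-to-z comparison yields. A minimal sketch of that computation, assuming independent samples; the per-group fixation counts used below are hypothetical placeholders, since the abstract does not report them:

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Z statistic for the difference between two independent correlations."""
    z1, z2 = fisher_z(r1), fisher_z(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Correlations from the abstract; the sample sizes (numbers of fixations
# per group) are invented for illustration only.
z = compare_correlations(-0.17, 5000, -0.09, 5000)
```

A more negative Z here indicates that the experts' (more negative) correlation differs reliably from the novices'.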


2009 ◽  
Author(s):  
Polina M. Vanyukov ◽  
Erik D. Reichle ◽  
Tessa Warren

2019 ◽  
Vol 59 ◽  
pp. 254-258 ◽  
Author(s):  
Hui Liu ◽  
Ruwei Ou ◽  
Qianqian Wei ◽  
Yanbing Hou ◽  
Bei Cao ◽  
...  

2017 ◽  
Vol 13 (7S_Part_14) ◽  
pp. P709-P710
Author(s):  
Marta Luisa Goncalves de Freitas Pereira ◽  
Marina von Zuben de Arruda Camargo ◽  
Jéssica dos Santos ◽  
Fátima L.S. Nunes ◽  
Orestes Vicente Forlenza

2020 ◽  
Author(s):  
Šimon Kucharský ◽  
Daan Roelof van Renswoude ◽  
Maartje Eusebia Josefa Raijmakers ◽  
Ingmar Visser

Describing, analyzing, and explaining patterns in eye-movement behavior is crucial for understanding visual perception. Further, eye movements are increasingly used to inform cognitive process models. In this article, we start by reviewing basic characteristics of and desiderata for models of eye movements. Specifically, we argue that there is a need for models combining spatial and temporal aspects of eye-tracking data (i.e., fixation durations and fixation locations), that formal models derived from concrete theoretical assumptions are needed to inform our empirical research, and that custom statistical models are useful for detecting specific empirical phenomena that are to be explained by said theory. We then develop a conceptual model of eye movements, specifically of fixation durations and fixation locations, and from it derive a formal statistical model, meeting our goal of crafting a model useful in both the theoretical and the empirical research cycle. We demonstrate the use of the model on an example of infant natural-scene viewing, showing that the model can explain different features of the eye-movement data and showcasing how to identify when the model needs to be adapted because it does not agree with the data. We conclude with a discussion of potential future avenues for formal eye-movement models.
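The abstract does not specify the model's concrete form. Purely as a toy illustration of "combining spatial and temporal aspects" in one generative model, the sketch below jointly samples fixation locations and durations; the exponential durations and uniform locations are placeholder assumptions, not the authors' model:

```python
import random

random.seed(1)  # reproducibility of the illustration

def sample_scanpath(n_fixations, mean_duration_ms=250.0):
    """Jointly sample fixation locations (x, y) on a unit screen and
    fixation durations (ms), illustrating a model that couples the
    spatial and temporal aspects of eye-tracking data."""
    scanpath = []
    for _ in range(n_fixations):
        x, y = random.random(), random.random()           # spatial component
        dur = random.expovariate(1.0 / mean_duration_ms)  # temporal component
        scanpath.append((x, y, dur))
    return scanpath
```

A real model of this kind would replace both placeholder distributions with theoretically motivated ones and fit their parameters to observed scanpaths.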


2021 ◽  
Author(s):  
Zezhong Lv ◽  
Qing Xu ◽  
Klaus Schoeffmann ◽  
Simon Parkinson

Eye-movement behavior, which supports the acquisition and processing of visual information, plays an important role when human beings perform sensorimotor tasks, such as driving, in everyday life. When performing sensorimotor tasks, eye movement arises through a specific coordination of head and eye in gaze changes, with head motion preceding eye movement. Notably, we believe that this coordination in essence indicates a kind of causality. In this paper, we investigate transfer entropy to establish a quantity for measuring a unidirectional causality from head motion to eye movement. A normalized version of the proposed measure, demonstrated in virtual-reality-based psychophysical studies, behaves very well as a proxy of driving performance, suggesting that quantitative exploitation of the coordination of head and eye may be an effective behaviometric of sensorimotor activity.
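As a rough illustration of the measure named in the abstract, the sketch below estimates lag-1 transfer entropy from a discretized source series (e.g., head motion) to a target series (e.g., eye movement) with a simple plug-in estimator. The bin count, the lag, and the authors' normalization scheme are not given in the abstract, so these choices are assumptions:

```python
import math
import random
from collections import Counter

def transfer_entropy(source, target, bins=4):
    """Plug-in estimate (in bits) of lag-1 transfer entropy from a source
    series to a target series, after equal-width discretization."""
    def discretize(x):
        lo, hi = min(x), max(x)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in x]

    s, t = discretize(source), discretize(target)
    n = len(t) - 1
    # Counts over (t_next, t_now, s_now) and the needed marginals.
    c_tts = Counter((t[i + 1], t[i], s[i]) for i in range(n))
    c_ts = Counter((t[i], s[i]) for i in range(n))
    c_tt = Counter((t[i + 1], t[i]) for i in range(n))
    c_t = Counter(t[i] for i in range(n))

    te = 0.0
    for (tn, tc, sc), cnt in c_tts.items():
        p = cnt / n
        # p(t_next | t_now, s_now) versus p(t_next | t_now)
        te += p * math.log2((cnt / c_ts[(tc, sc)]) / (c_tt[(tn, tc)] / c_t[tc]))
    return te

# Hypothetical demo: the target copies the source with a one-step lag,
# so information flows strongly from source to target.
random.seed(0)
src = [random.random() for _ in range(2000)]
tgt = [0.0] + src[:-1]
te_forward = transfer_entropy(src, tgt)
```

In this demo, `te_forward` is large, while the reverse-direction estimate stays near zero, matching the intended asymmetry of the measure.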


Author(s):  
Hayward J. Godwin ◽  
Michael C. Hout ◽  
Katrín J. Alexdóttir ◽  
Stephen C. Walenchok ◽  
Anthony S. Barnhart

Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers, both experienced and new to the field, when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.


2013 ◽  
Vol 89 ◽  
pp. 32-38 ◽  
Author(s):  
William Poynter ◽  
Megan Barber ◽  
Jason Inman ◽  
Coral Wiggins

2018 ◽  
Vol 24 (3) ◽  
pp. 338-363 ◽  
Author(s):  
Michael Yeldham

This study examined the influence of formulaic language on second language (L2) listeners’ lower-level processing, in terms of their ability to accurately identify the words in texts. On the one hand, there were reasons to expect the presence of the formulas to benefit the learners, because the learners would process these formulaic words more holistically than the surrounding non-formulaic words. On the other hand, because formulas are commonly uttered in a more reduced fashion than their surrounding non-formulaic words, and L2 learners commonly face challenges understanding reduced speech, it was possible that the formulas would negatively impact the learners’ processing. The participants listened to four texts, which were paused intermittently for them to transcribe the final stretch of words they had heard prior to each pause. The researcher had previously categorized these words as formulaic or non-formulaic through corpus analysis. By comparing the listeners’ identification of the formulaic and the non-formulaic language, the study found that formulaic language facilitated their lower-level listening. The degree of advantage, however, varied with text difficulty level and listener proficiency level. Based on the findings, implications for L2 listening instruction are discussed.

