Generative processing of animated partial depictions fosters fish identification skills: eye tracking evidence

2017 ◽  
Vol 80 (4) ◽  
pp. 367 ◽  
Author(s):  
Jean-Michel Boucheix ◽  
Richard K. Lowe


2021 ◽  
pp. 204275302199531
Author(s):  
Fang Zhao

Interviews with scholars and experts are increasingly popular as e-learning materials, yet how an interview video should be edited is mostly a matter of personal preference rather than rigorous scientific research. This study therefore tested whether showing the interviewer in educational interview videos affects learning outcomes. Two interview learning materials on two topics (eye tracking and text–picture integration) were produced by the author and edited into two versions: one with the interviewer and one without, the latter having the interviewer’s image and voice edited out. Psychology students (N = 180) watched either the video with or the video without the interviewer and answered the corresponding questions. An online experiment yielded better learning outcomes for the video without the interviewer than for the video with the interviewer. It is probable that the absence of the interviewer protects participants from extraneous processing and a split-attention effect. The without-interviewer video, segmented by displaying interview questions as keywords on slides, seemed to help participants manage essential processing. The absence of the interviewer may also avoid the confusion of multiple instructors, thereby fostering generative processing. This study offers practical and pedagogical implications and suggests that removing the image and voice of the interviewer is likely to promote learning.


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Ting-Kwei Wang ◽  
Jing Huang ◽  
Pin-Chao Liao ◽  
Yanmei Piao

Augmented reality (AR) has been proposed as an efficient tool for learning in construction. However, few researchers have quantitatively assessed the efficiency of AR from a cognitive perspective in the context of construction education. Based on the cognitive theory of multimedia learning (CTML), we evaluated a predesigned AR-based learning tool using eye-tracking data. In this study, we tracked, compared, and summarized learners’ visual behaviors in text-graph- (TG-) based, AR-based, and physical model- (PM-) based learning environments. Compared to the TG-based material, both the AR-based and PM-based materials reduced extraneous processing and thus further promoted generative processing, resulting in better learning performance. The results show no significant differences between the AR-based and PM-based learning environments, elucidating the advantages of AR. This study lays a foundation for problem-based learning, which is worthy of further investigation.
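The kind of per-condition comparison described above can be sketched in a few lines. This is a hypothetical illustration only: the function name `mean_dwell_by_condition` and the sample values are not from the study, which reports its own eye-tracking metrics.

```python
# Hypothetical sketch: mean total dwell time per learner in each
# learning environment (TG, AR, PM). Values are illustrative only,
# not data from the study.
from collections import defaultdict
from statistics import mean

def mean_dwell_by_condition(records):
    """records: iterable of (condition, participant_dwell_ms) pairs.

    Returns the mean dwell time per condition."""
    groups = defaultdict(list)
    for condition, dwell in records:
        groups[condition].append(dwell)
    return {cond: mean(vals) for cond, vals in groups.items()}

data = [("TG", 4200), ("TG", 3800), ("AR", 3100), ("AR", 2900), ("PM", 3000)]
print(mean_dwell_by_condition(data))  # {'TG': 4000, 'AR': 3000, 'PM': 3000}
```

A real analysis would of course use per-AOI fixation measures and inferential statistics rather than raw means, but the grouping step is the same.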


2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies owing to different sensory experiences at an early age, limitations of the physical device, the developmental gap in language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded with eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject group and language had significant influences. Specifically, the normal-hearing and deaf participants mainly gazed at the speaker's eyes and mouth, respectively; moreover, while the normal-hearing participants stared longer at the speaker's mouth when confronted with the unfamiliar language Modern Uygur, the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
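The attention-allocation measure described above, i.e., how gaze is distributed across areas of interest (AOIs) such as the speaker's eyes and mouth, can be sketched as a dwell-time proportion. This is an illustrative sketch, not the study's actual pipeline; the function name and fixation records are hypothetical.

```python
# Hypothetical sketch: each AOI's share of total fixation duration,
# as in comparing gaze to a speaker's eyes vs. mouth. Records are
# illustrative (aoi_name, duration_ms) pairs, not study data.
from collections import defaultdict

def aoi_dwell_proportions(fixations):
    """Return each AOI's proportion of total fixation duration.

    fixations: iterable of (aoi_name, duration_ms) tuples.
    """
    totals = defaultdict(float)
    for aoi, duration in fixations:
        totals[aoi] += duration
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    return {aoi: t / grand_total for aoi, t in totals.items()}

# Example: a participant who mostly fixates the mouth
sample = [("mouth", 620.0), ("eyes", 250.0), ("mouth", 130.0)]
print(aoi_dwell_proportions(sample))  # {'mouth': 0.75, 'eyes': 0.25}
```

Group-level contrasts (e.g., normal-hearing vs. deaf participants, familiar vs. unfamiliar language) would then compare these proportions statistically.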


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken-language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as predicted by proponents of the immediate focusing account ( Greene & McKoon, 1995 ; Koornneef & Van Berkum, 2006 ; Van Berkum, Koornneef, Otten, & Nieuwland, 2007 ). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information persisted as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.


Author(s):  
Paul A. Wetzel ◽  
Gretchen Krueger-Anderson ◽  
Christine Poprik ◽  
Peter Bascom
