Does Augmented Reality Effectively Foster Visual Learning Process in Construction? An Eye-Tracking Study in Steel Installation

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Ting-Kwei Wang ◽  
Jing Huang ◽  
Pin-Chao Liao ◽  
Yanmei Piao

Augmented reality (AR) has been proposed as an effective tool for learning in construction. However, few researchers have quantitatively assessed the efficiency of AR from a cognitive perspective in the context of construction education. Based on the cognitive theory of multimedia learning (CTML), we evaluated a predesigned AR-based learning tool using eye-tracking data. In this study, we tracked, compared, and summarized learners’ visual behaviors in text-graph- (TG-) based, AR-based, and physical model- (PM-) based learning environments. Compared to the TG-based material, we find that both AR-based and PM-based materials reduce extraneous processing and thus further promote generative processing, resulting in better learning performance. The results show no significant differences between the AR-based and PM-based learning environments, indicating that AR can match the benefits of physical models. This study lays a foundation for problem-based learning, which is worthy of further investigation.

2017 ◽  
Vol 55 (8) ◽  
pp. 1053-1068 ◽  
Author(s):  
Han-Chin Liu

In multimedia learning, students’ dependence on information from the outside world can affect their ability to identify and locate information from multiple resources in learning environments and thereby affect the construction of mental models. Field dependence-independence has been used to assess the ability to extract essential information from the environment. This study utilized eye-tracking technology to explore whether field-dependent and field-independent (FI) learners differed in their visual searching efficiency and multimedia learning performance. The FI learners outperformed field-dependent learners on posttest indices. In addition, FI learners were better able to identify visual cues and demonstrated efficient visual search patterns when learning from different information formats. The research findings echoed previous findings: dependence on information in the context of learning can affect learners’ visual search efficiency and learning performance. The findings of this study suggest that adaptable learning environments that provide a rich variety of media may benefit learners with different levels of information dependence. Applying eye-tracking technology made it possible to map the learners’ information processing. However, additional research techniques, such as think-aloud exercises, would enable a deeper understanding of how learners construct mental models of a knowledge base in a multimedia learning environment.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research in, for example, the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye-tracking toolkit for reliable gaze-data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n=21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
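The angular accuracy and precision figures above are standard gaze-quality metrics. The abstract does not give the exact formulas the authors used; a minimal sketch of one common definition (accuracy as the mean angular offset from the target, precision as the RMS of sample-to-sample angular deviations), over hypothetical gaze-direction vectors, could look like this:

```python
import math

def angle_deg(v1, v2):
    # Angle between two 3D direction vectors, in degrees.
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def accuracy_deg(gaze_dirs, target_dir):
    # Accuracy: mean angular offset between gaze samples and the true target direction.
    return sum(angle_deg(g, target_dir) for g in gaze_dirs) / len(gaze_dirs)

def precision_rms_deg(gaze_dirs):
    # Precision: RMS of angular differences between successive gaze samples.
    diffs = [angle_deg(a, b) for a, b in zip(gaze_dirs, gaze_dirs[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

A perfectly steady gaze aimed exactly at the target would yield 0 degrees for both metrics; real recordings, as in the study above, report small nonzero values.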


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components with our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether the type of attended target (real or virtual) can be inferred by using machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain-computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
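The abstract names a multimodal late-fusion approach but does not specify the fusion rule. A common late-fusion scheme combines the per-class probabilities emitted by each unimodal classifier, for example with a weighted average; the sketch below is an illustration of that generic scheme, not the authors' implementation, and the `w_eeg` weight and class labels are assumptions:

```python
def late_fusion(p_eeg, p_gaze, w_eeg=0.5):
    # Weighted average of per-class probabilities from two unimodal classifiers.
    return [w_eeg * a + (1 - w_eeg) * b for a, b in zip(p_eeg, p_gaze)]

def predict(probs, labels=("real", "virtual")):
    # Pick the class with the highest fused probability.
    return labels[max(range(len(probs)), key=probs.__getitem__)]
```

With `p_eeg = [0.6, 0.4]` and `p_gaze = [0.3, 0.7]`, the fused probabilities are `[0.45, 0.55]`, so the gaze evidence flips the decision to "virtual" even though the EEG classifier alone favored "real".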


2021 ◽  
Author(s):  
Martin Ohrndorf

Explainer videos now play a significant role in school contexts as well. However, which learning and comprehension processes are relevant when watching explainer videos has not yet been researched. The present work takes a first step toward making cognitive comprehension processes measurable by examining gaze behaviour with the method of eye tracking. The medium of the explainer video is first situated within instructional-psychology research by examining its role in school teaching-learning processes on the basis of supply-use models (Angebots-Nutzungs-Modelle). This classification establishes an analytical separation between the explainer video as an offering and students' use of this medium, so that both perspectives become empirically investigable. In the next step, an explainer video from the domain of functions is examined as a learning offering, by way of example. This is done on the basis of a catalogue of quality criteria for analysing learning-supportive explainer videos, which is developed and justified with reference to the Cognitive Theory of Multimedia Learning and subject-didactic quality criteria. Subsequently, the measurability of cognitive comprehension processes in functional thinking is investigated in a case study on the use of the explainer video. For this purpose, gaze movements and retrospective think-aloud utterances are analysed, among other things on the basis of a differentiation of the Anderson-Krathwohl taxonomy for functional thinking. The present work provides insight into current cognitive-psychological research on explainer videos for functional learning and demonstrates that cognitive remembering and comprehension processes can be made visible at various points, among other means by eye tracking.


2021 ◽  
Vol 4 (4) ◽  
pp. 605-623
Author(s):  
Tarık Talan

Augmented reality applications in STEM education have gained increasing importance in recent years, and scientific studies on this subject have noticeably gained momentum in the literature. The purpose of this research is to conduct a bibliometric analysis of studies in the literature on the use of augmented reality applications in STEM education. The Web of Science database was used to collect the data. A total of 741 studies were identified after various screening steps. Content analysis and bibliometric analysis were used to analyze the data. The research examined the distribution of publications by year and country, as well as the most prolific authors, journals, and countries. In terms of the institutions with which the authors are affiliated, "National Taiwan University of Science and Technology" ranked near the top for the number of citations and "National Taiwan Normal University" ranked near the top for the number of publications among the most productive institutions. "Wu, H.-K." and "Chang, H.-Y." were found to be the most influential and productive researchers. According to the journal-level analysis, "Computers & Education" and "Interactive Learning Environments" were the journals that contributed the most to this subject. The analysis also found that the co-authorship network structure is most pronounced in England and Spain. Concepts that stand out as clusters in the co-occurrence analysis are "augmented reality", "virtual reality", "mobile learning", "science education" and "mixed reality".
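The co-occurrence analysis mentioned above rests on counting how often pairs of keywords appear together in the same record; clusters are then derived from those counts. As a minimal sketch of the counting step (the record data and keyword strings here are hypothetical, not drawn from the study):

```python
from collections import Counter
from itertools import combinations

def cooccurrences(keyword_lists):
    # Count how often each unordered keyword pair appears together in one record.
    counts = Counter()
    for kws in keyword_lists:
        # De-duplicate within a record and sort so each pair has a canonical order.
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts
```

Tools such as VOSviewer perform this counting internally before building the cluster map; the sketch only shows the underlying idea.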


Author(s):  
Hedda Martina Šola ◽  
Fayyaz Hussain Qureshi ◽  
Sarwar Khawaja

In recent years, the newly emerging discipline of neuromarketing, which applies brain (emotion and behaviour) research in an organisational context, has grown in prominence in the academic and practice literature. With the rapid growth of online teaching, COVID-19 left higher education institutions no option but to go online. As a result, students attending an online course are more prone to losing focus and attention, resulting in poor academic performance. Therefore, the primary purpose of this study is to observe learners' behaviour while they use an online learning platform. This study applies neuromarketing to enhance students' learning performance and motivation in an online classroom. Using a web camera, we used facial coding and eye-tracking techniques to study students' attention, motivation, and interest in an online classroom. In collaboration with Oxford Business College's marketing team, the Institute for Neuromarketing distributed video links to 297 students over the course of five days via email, a student representative from Oxford Business College, a WhatsApp group, and a newsletter developed explicitly for that purpose. To keep the research both realistic and feasible, the instructors in the videos were different, and students were randomly allocated to either a video link lasting 90 seconds (n=142) or a second one lasting 10 minutes (n=155). Tobii Sticky, a self-service online platform, was used to measure facial coding and eye tracking. During the 90-second online lecture, participants' gaze behaviour was tracked over time to gather data on their attention distribution, and their emotions were evaluated using facial coding. The 10-minute video, in contrast, was used to examine emotional involvement. The findings show that students lose their listening focus when no supporting visual material or virtual board is used, even during a brief presentation.
Furthermore, when they are exposed to a single piece of shareable content for longer than 5.24 minutes, their motivation and mood decline; however, when new shareable material or a class activity is introduced, their motivation and mood rise. JEL: I20; I21

