instructional application
Recently Published Documents

TOTAL DOCUMENTS: 16 (five years: 1)
H-INDEX: 4 (five years: 0)

10.28945/4737 ◽ 2021 ◽ Vol 20 ◽ pp. 137-171
Author(s): Alex Fegely, Todd S. Cherner

Aim/Purpose: This article presents a comprehensive rubric for evaluating educational virtual reality (VR) experiences for mobile devices. It systematically analyzes research on the quality of VR experiences in mobile applications, extending the instructional application rubric of Lee and Cherner (2015).

Background: Ratings in proprietary mobile application stores (e.g., the App Store and Google Play) are generic and do not provide meaningful evaluations of VR experiences. Drawing on research in virtual reality and education, this article presents a comprehensive rubric for evaluating educational VR in mobile applications, advancing previously published, research-based rubrics.

Methodology: The methodology followed a systematic, multi-stage process: first, locating pre-existing rubrics for virtual reality; second, reviewing the literature on virtual reality; and third, developing and vetting a research-supported rubric for evaluating educational VR.

Contribution: This article fills a gap in the literature by presenting a criterion-referenced, research-supported rubric for evaluating the quality of educational VR for mobile devices (e.g., smartphones, tablets, and app-connected goggles).

Findings: The findings comprise the domains, dimensions, and criterion-referenced Likert-scale indicators that make up the rubric. The evaluative domains are (1) Positioning of the EduVR, (2) Avatar Level, (3) Virtual Environment, and (4) Virtual Experience.

Recommendations for Practitioners: The rubric is a tool for instructional coaches, teacher educators, and instructional technologists to use when recommending VR experiences for instructional purposes.

Recommendations for Researchers: Researchers can use the rubric to monitor the quality of educational VR being developed for classroom use, to examine the educational VR experiences they employ in their studies, and to evaluate how those experiences affect student learning, engagement, and collaboration.

Impact on Society: We foresee the rubric aiding the development, selection, and purchase of educational VR by educational institutions, educators, researchers, edtech developers, and edu-philanthropists, thereby raising the quality of, and expectations for, educational VR experiences.

Future Research: Future researchers can strengthen the rubric's validity by collecting data from a large, diverse set of end users and stakeholders. Subsequent rubrics for evaluating augmented reality and extended reality are additional research avenues.
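The four evaluative domains lend themselves to a simple scoring structure. As an illustrative sketch only: the domain names below come from the abstract, but the 1-5 Likert range, the equal weighting, and the `score_experience` helper are hypothetical, not the article's actual rubric.

```python
from statistics import mean

# The four evaluative domains named in the abstract; the 1-5 Likert
# range and equal weighting are illustrative assumptions.
DOMAINS = [
    "Positioning of the EduVR",
    "Avatar Level",
    "Virtual Environment",
    "Virtual Experience",
]

def score_experience(ratings: dict[str, int]) -> float:
    """Average the per-domain Likert ratings (1 = lowest, 5 = highest)."""
    for domain in DOMAINS:
        if not 1 <= ratings[domain] <= 5:
            raise ValueError(f"rating for {domain!r} must be 1-5")
    return mean(ratings[d] for d in DOMAINS)

ratings = {
    "Positioning of the EduVR": 4,
    "Avatar Level": 3,
    "Virtual Environment": 5,
    "Virtual Experience": 4,
}
print(score_experience(ratings))  # 4.0
```

A weighted variant would simply replace the plain mean with a weighted one if some domains mattered more for a given instructional purpose.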


2019 ◽ Vol 8 (2) ◽ pp. 1802-1809

COALESCENCE is intended to help Chemistry students learn and to serve as a teaching tool for Chemistry teachers. The application uses Augmented Reality to present information and 3D models of laboratory equipment, periodic elements, compound elements, and planet elements. Trivia and interactive quizzes are also provided to assess the user's knowledge. The Iterative Model was applied in project development, and the application was created using Photoshop, Unity 3D, Blender, and Maya. The minimum requirement to run the application is Android 6.0 (Marshmallow) or above. The test instruments used were Functionality, Compatibility, and Conformance tests aligned with the Android Core App Quality standards, and the application received a 100% rating on all three. For evaluation, the Mobile Application Rating Scale was used, yielding an average mean of 3.44 and a standard deviation of 0.17, interpreted as "Highly Acceptable" feedback from the evaluators in all aspects of the application.
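The reported evaluation figures are a mean and sample standard deviation over evaluator ratings. As a minimal sketch of that arithmetic: the rating values below are invented for illustration (the abstract does not publish the raw data), so only the mean matches the reported 3.44.

```python
from statistics import mean, stdev

# Hypothetical per-aspect Mobile Application Rating Scale ratings;
# invented values, chosen so the mean matches the reported 3.44.
ratings = [3.2, 3.4, 3.5, 3.6, 3.5]

avg = mean(ratings)      # average mean across aspects
spread = stdev(ratings)  # sample standard deviation

print(round(avg, 2), round(spread, 2))
```

With the paper's actual raw ratings, the same two calls would reproduce both reported figures.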


2019 ◽ Vol 77 ◽ pp. 58-69
Author(s): Chao-Hung Wang, Ni-Hsin Tsai, Jun-Ming Lu, Mao-Jiun J. Wang

2018 ◽ Vol 8 (2) ◽ pp. 1
Author(s): Abby Deng-Huei Lee, Richard Jenn-Rong Wu

We explored using multiple-choice cloze (MCC) tests for classroom instruction. The practice of "testing leading teaching" is frequently criticized because it can distort the original teaching objectives. Our primary emphasis is not on how to get high scores; instead, we show how to use testing techniques and teaching activities to provide feedback that energizes teaching methods and increases learning effectiveness. We analyzed MCC test-taking strategies, which include leading students to: 1) skim the first and last sentences of cloze passages; 2) read the whole cloze passage to grasp its general idea; 3) look for contextual clues; 4) orally express ("think out loud") their reasons for choosing one MCC test item over another; and 5) conduct group discussions. Finally, 6) teachers guided the entire class, discussed contextual and situational clues, and provided feedback on students' choices and reasons. The experimental design primarily compared the performance of two groups, Experimental and Control. Differences in cloze scores between the two groups were significant, but differences in reading comprehension scores were not. After six 25-minute MCC test lessons, Experimental group students had better MCC test scores than Control group students. These findings supported our hypothesis that MCC instruction, even over a short period, would improve performance on a cloze test. We also discuss how to use MCC tests to teach strategies for answering MCC test items.
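The Experimental-vs-Control comparison described above is a standard two-independent-samples test. As a hedged sketch: the cloze scores below are invented, the abstract does not say which test the authors ran, and Welch's t-statistic is shown here only as one common way to compare two group means.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t-statistic for two independent samples
    (does not assume equal group variances)."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(variance(a) / na + variance(b) / nb)

# Invented cloze scores for illustration only.
experimental = [14, 15, 13, 16, 15, 14]
control = [11, 12, 10, 13, 11, 12]

print(round(welch_t(experimental, control), 2))
```

A full analysis would also compute degrees of freedom and a p-value to judge significance, as the study reports.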


2016 ◽ Vol 6 (2) ◽ pp. 65-76
Author(s): Karyn Cooper, Rebecca Hughes, Aliyah Shamji

This article reports on a study that engaged graduate students from one Canadian university in a knowledge creation project, which produced new evidence and insights regarding pressing socio-political issues of our time. The study resulted in the creation of an instructional application known as the IIF (the Interpretive Imagination Forum), a collaborative video research application for use in higher education courses across the disciplines (e.g., anthropology, history, media studies, philosophy, queer studies, sociology, women's studies). The study also produced a technology-mediated, hermeneutic tagging technique. The IIF was developed as an open-source platform for conducting video research. In keeping with open-source curriculum (OSC) objectives, a curriculum framework was developed that can be used in graduate-level courses (e.g., curriculum foundations, qualitative methodology, critical inquiry). Student participants were invited to add, delete, and modify text annotations, or tags, which not only broadened understandings of the themes, theories, and concepts within the videotaped content but also yielded a creative and innovative instructional and learning tool. The overarching objective of the study was to circumvent linear or normative qualitative analysis and instead facilitate non-linear, creative, and organic approaches to understanding, analyzing, representing, and disseminating theories and concepts derived from video scholarship.

