Embodied information behavior, mixed reality and big data

Author(s): Ruth West, Max J. Parola, Amelia R. Jaycen, Christopher P. Lueg

2021
Author(s): Timothy Voss

<p>This thesis explores applications of Mixed Reality, commonplace technologies and representation techniques in embodied and interactive design, through the development of an airport wayfinding system. The research is motivated by the proposition that airports can be difficult to navigate and struggle to foster social connections, along with the challenge of providing users with a spatial interface to Big Data. The development of personalised spatial wayfinding techniques provides methods for using location and big data to ergonomically and spatially represent users’ navigation of space. By connecting people virtually within a single physical location using a unified design language, the social implications of space are enhanced and extended. Finally, a space that functions efficiently provides real-time feedback. Key theory in Human Computer Interaction and Embodied Design informs the research, through mixed reality, technology and data-form translations. The research proceeds in two stages: the first explores data inputs from users and represents them in 2D graphics; the second develops three separate design elements that together create a spatial wayfinding system allowing user engagement — a virtual projection, a set of physical forms and a set of wearable device applications. Design development happens through iterations within each experiment, each informed by previous work. The result is an inhabitable data space with seamless embodied design exploring the localisation of large sets of data.</p>


2021
Vol 2021
pp. 1-15
Author(s): Fatmah Abdulrahman Baothman

Artificial intelligence (AI) is progressively changing techniques of teaching and learning. In the past, the objective was to provide an intelligent tutoring system that could enhance skills, control, knowledge construction, and intellectual engagement without intervention from a human teacher. This paper proposes a definition of AI focused on enhancing the learning capabilities and interactions of the humanoid agent Nao. The aim is to increase Nao’s intelligence using big data by activating multisensory perception, such as visual and auditory stimuli modules and speech-related stimuli, as well as various movements. The method is to develop a toolkit that enables Arabic speech recognition and implements the Haar algorithm for robust image recognition, improving Nao’s capabilities during interactions with a child in a mixed reality system using big data. The experiment design and testing processes were conducted by implementing an AI design principle, namely, the three-constituent principle. Four experiments were conducted to boost Nao’s intelligence level with 100 children in different environments: class, lab, home, and mixed reality with the Leap Motion Controller (LMC). An objective function and an operational time cost function were developed to improve Nao’s learning experience in the different environments, achieving a best result of 4.2 seconds per number recognition. The experiments’ results showed an increase in Nao’s intelligence from the level of a 3-year-old to that of a 7-year-old child learning simple mathematics, with the best communication achieving a kappa value of 90.8%, a corpus exceeding 390,000 segments, and a 93% success rate when both the auditory and vision modules of the agent Nao were activated.
The developed toolkit uses Arabic speech recognition and the Haar algorithm in a mixed reality system with big data, enabling Nao to achieve a 94% learning success rate at a distance of 0.09 m; when using the LMC in mixed reality, hand sign gestures were recognised with the highest accuracy of 98.50% using the Haar algorithm. The results show that Nao gradually achieved a higher learning success rate as the environment changed and multisensory perception increased. This paper also proposes a cutting-edge research direction for fostering child–robot education in real time.
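The abstract names the Haar algorithm but does not detail it. At its core, Haar-like features are differences of rectangle sums computed in constant time from an integral image; a cascade of thresholded features then accepts or rejects image windows. A minimal pure-Python sketch of that mechanism (function names are illustrative, not taken from the paper's toolkit):

```python
def integral_image(img):
    """Build an integral image: ii[y][x] = sum of all pixels above and left of (y, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle, in O(1) via four integral-image lookups."""
    b, r = top + height, left + width
    return ii[b][r] - ii[top][r] - ii[b][left] + ii[top][left]

def haar_two_rect_feature(ii, top, left, height, width):
    """A two-rectangle (vertical edge) Haar feature: left half minus right half."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))
```

On a uniform image the feature is zero; a vertical edge yields a large magnitude, which is what each stage of a Haar cascade thresholds to decide whether a window may contain the target object.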




Author(s): Jacqueline A. Towson, Matthew S. Taylor, Diana L. Abarca, Claire Donehower Paul, Faith Ezekiel-Wilder

Purpose: Communication between allied health professionals, teachers, and family members is a critical skill when addressing and providing for the individual needs of patients. Graduate students in speech-language pathology programs often have limited opportunities to practice these skills prior to or during externship placements. The purpose of this study was to investigate a mixed reality simulator as a viable option for speech-language pathology graduate students to practice interprofessional communication (IPC) skills when delivering diagnostic information to different stakeholders, compared to traditional role-play scenarios.

Method: Eighty graduate students (N = 80) completing their third semester in one speech-language pathology program were randomly assigned to one of four conditions: mixed-reality simulation with or without coaching, or role play with or without coaching. Data were collected on students' self-efficacy, IPC skills pre- and postintervention, and perceptions of the intervention.

Results: The students in the two coaching groups scored significantly higher than the students in the noncoaching groups on observed IPC skills. There were no significant differences in students' self-efficacy. Students' responses on social validity measures showed that both interventions, including coaching, were acceptable and feasible.

Conclusions: Findings indicated that coaching paired with either mixed-reality simulation or role play is a viable method for improving IPC skills in speech-language pathology graduate students. These findings are particularly relevant given the recent approval for students to obtain clinical hours in simulated environments.
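The four-condition assignment described above (80 students across a 2 × 2 design: simulation vs. role play, with vs. without coaching) can be sketched as a balanced randomization. The condition labels below are paraphrased from the abstract; the procedure itself is an assumption, not the study's documented protocol:

```python
import random

CONDITIONS = [
    "mixed-reality simulation + coaching",
    "mixed-reality simulation, no coaching",
    "role play + coaching",
    "role play, no coaching",
]

def assign_balanced(participants, conditions, seed=None):
    """Randomly assign participants to equal-sized groups, one per condition."""
    if len(participants) % len(conditions) != 0:
        raise ValueError("group sizes would be unequal")
    rng = random.Random(seed)          # seeded RNG for reproducible assignment
    shuffled = list(participants)
    rng.shuffle(shuffled)
    size = len(shuffled) // len(conditions)
    return {c: shuffled[i * size:(i + 1) * size]
            for i, c in enumerate(conditions)}
```

With 80 participants this yields four groups of 20, every participant appearing in exactly one group.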


ASHA Leader
2013
Vol 18 (2)
pp. 59-59

Find Out About 'Big Data' to Track Outcomes

