Visual Echolocation Concept for the Colorophone Sensory Substitution Device Using Virtual Reality

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 237
Author(s):  
Patrycja Bizoń-Angov ◽  
Dominik Osiński ◽  
Michał Wierzchoń ◽  
Jarosław Konieczny

Detecting the characteristics of 3D scenes is considered one of the biggest challenges for visually impaired people, yet this ability is crucial for orientation and navigation in the natural environment. Although several Electronic Travel Aids aim at enhancing orientation and mobility for the blind, only a few of them convey both 2D and 3D information, including colour. Moreover, existing devices either focus on a small part of an image or allow interpretation of only a few points in the field of view. Here, we propose a concept of visual echolocation with integrated colour sonification as an extension of Colorophone, an assistive device for visually impaired people. The concept aims at mimicking the process of echolocation and thus provides 2D, 3D and, additionally, colour information for the whole scene. Even though the final implementation will be realised with a 3D camera, it is first simulated, as a proof of concept, using VIRCO, a Virtual Reality training and evaluation system for Colorophone. The first experiments showed that it is possible to sonify the colour and distance of the whole scene, which opens up the possibility of implementing the developed algorithm on a hardware-based stereo camera platform. An introductory user evaluation of the system was conducted to assess the effectiveness of the proposed solution for perceiving the distance, position and colour of objects placed in Virtual Reality.
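The abstract does not specify the sonification mapping, but the core idea of echolocation-style colour-and-distance sonification can be illustrated with a minimal, hypothetical sketch: hue drives pitch, proximity drives loudness, and distance sets an echo-like delay. The function name, parameter ranges, and mappings below are assumptions for illustration, not the Colorophone algorithm.

```python
def sonify_point(hue_deg, distance_m, max_distance_m=5.0):
    """Map one scene point's hue and distance to audio parameters.

    A hypothetical mapping: hue (0-360 degrees) spans one octave of
    pitch, nearer points are louder (like a stronger echo), and the
    delay mimics the round-trip time of sound at ~343 m/s.
    """
    frequency_hz = 220.0 + (hue_deg % 360) / 360.0 * 220.0
    clamped = min(distance_m, max_distance_m)
    amplitude = 1.0 - clamped / max_distance_m  # 1.0 at 0 m, 0.0 at max
    delay_s = 2.0 * distance_m / 343.0          # echolocation-style delay
    return frequency_hz, amplitude, delay_s
```

Sweeping such a mapping across every pixel of a depth-and-colour frame would yield a soundscape of the whole scene, which is the behaviour the abstract describes.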

The evolution of technology takes education to the next level, making the learning process more interesting and attractive, and virtual reality plays an important role in this evolution. The main aim of this work is to enhance students' learning ability in a virtual environment by developing an education-based game. In this work, the Wii Remote, a virtual-reality input device, is used for the learning process and also for answering the questions at the different levels of the game. The learning process also involves speech synthesis, which helps visually impaired people learn without assistance from others and motivates even average students to participate more actively in the learning process. The game is divided into easy, medium and difficult levels, so the learning ability of each student can be easily tested and further steps can be taken to motivate them and optimise their learning skills. Thus, this work motivates students to learn and enhances their learning ability.


Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 697 ◽  
Author(s):  
Jinqiang Bai ◽  
Zhaoxiang Liu ◽  
Yimin Lin ◽  
Ye Li ◽  
Shiguo Lian ◽  
...  

Assistive devices for visually impaired people (VIP) that support daily traveling and improve social inclusion are developing fast. Most of them try to solve the problem of navigation or obstacle avoidance, while other works focus on helping VIP recognize their surrounding objects. However, very few of them couple both capabilities (i.e., navigation and recognition). Aiming at the above needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments, and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), which are mounted on a pair of eyeglasses, and a smartphone. The device leverages the continuity of ground height among adjacent image frames to segment the ground accurately and rapidly, and then searches for a traversable moving direction along the ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and support the navigation system. It can provide semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human–machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing semantic information about the surroundings). We evaluated the performance of the proposed system through many experiments conducted in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
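The ground-height-continuity idea can be sketched in miniature: given a per-pixel height map (which an RGB-D camera plus IMU could provide), grow a ground label upward from the bottom of the image, admitting a pixel only if its height is close to the ground pixel below it. This is a simplified toy version under assumed inputs, not the paper's actual segmentation pipeline.

```python
def segment_ground(height_map, tol=0.05):
    """Label pixels as ground by growing upward column by column.

    height_map: rows x cols list of estimated heights in metres,
    with image row 0 at the top. A pixel joins the ground region if
    the pixel below it is ground and their heights differ by at most
    `tol` metres; a larger jump (an obstacle) stops the column.
    """
    rows, cols = len(height_map), len(height_map[0])
    ground = [[False] * cols for _ in range(rows)]
    for c in range(cols):
        ground[rows - 1][c] = True  # assume the bottom image row is ground
        for r in range(rows - 2, -1, -1):
            if abs(height_map[r][c] - height_map[r + 1][c]) <= tol:
                ground[r][c] = True
            else:
                break  # height discontinuity: obstacle above this point
    return ground
```

The free moving direction could then be chosen as the image column (or sector) whose ground region extends furthest upward, i.e. furthest away from the user.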


2017 ◽  
Vol 111 (2) ◽  
pp. 148-164 ◽  
Author(s):  
Oana Bălan ◽  
Alin Moldoveanu ◽  
Florica Moldoveanu ◽  
Hunor Nagy ◽  
György Wersényi ◽  
...  

Introduction
As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory and haptic cues have been shown to be an effective approach towards creating a rich spatial representation of the environment, so they are considered for inclusion in the development of assistive tools that would enable people with visual impairments to acquire knowledge of the surrounding space in a way close to the visually based perception of sighted individuals. However, achieving efficiency through a sensory substitution device requires extensive training for visually impaired users to learn how to process the artificial auditory cues and convert them into spatial information.
Methods
Considering all the potential advantages game-based learning can provide, we propose a new method for training sound localization and virtual navigational skills of visually impaired people in a 3D audio game with hierarchical levels of difficulty. The training procedure is focused on a multimodal (auditory and haptic) learning approach in which the subjects have been asked to listen to 3D sounds while simultaneously perceiving a series of vibrations on a haptic headband that corresponds to the direction of the sound source in space.
Results
The results we obtained in a sound-localization experiment with 10 visually impaired people showed that the proposed training strategy resulted in significant improvements in auditory performance and navigation skills of the subjects, thus ensuring behavioral gains in the spatial perception of the environment.


2020 ◽  
Vol 4 (4) ◽  
pp. 79
Author(s):  
Julian Kreimeier ◽  
Timo Götzelmann

Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to help disadvantaged people, such as blind or visually impaired people. Virtual objects and environments that can be spatially explored are particularly beneficial because they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspectives of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback ('small scale') in particular have been strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically walkable ('medium scale') or avatar-walkable ('large scale') egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and today's consumer-grade VR components represent a promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches for both technical and methodological aspects.


Author(s):  
Puru Malhotra and Vinay Kumar Saini

The paper is aimed at the design of a mobility assistive device to help the visually impaired. The traditional walking stick has its own drawbacks and limitations. Our research is motivated by the difficulty visually impaired people face in ambulating, and we have made an attempt to restore their independence and spare them the trouble of carrying a stick around. We offer hands-free wearable glasses that find their utility in real-time navigation. The design of the smart glasses includes the integration of various sensors with a Raspberry Pi. The paper presents a detailed account of the various components and the structural design of the glasses. The novelty of our work lies in providing a complete pipeline for analysing the surroundings in real time, and hence a better solution for navigating during day-to-day activities, using audio instructions as output.
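A common pattern in such sensor-driven wearables is to convert an obstacle's measured distance into the pacing of an audio alert. The abstract does not give the authors' scheme, so the thresholds, ranges, and function below are purely illustrative assumptions.

```python
def beep_interval(distance_cm, min_cm=30, max_cm=200):
    """Convert an obstacle distance into a pause between beeps.

    Hypothetical mapping: obstacles beyond max_cm produce no alert
    (returns None); distances in [min_cm, max_cm] map linearly onto
    pauses of 0.1 s (very close) to 1.0 s (far); anything nearer
    than min_cm beeps at the fastest rate.
    """
    if distance_cm > max_cm:
        return None  # nothing close enough to warn about
    clamped = max(distance_cm, min_cm)
    return 0.1 + 0.9 * (clamped - min_cm) / (max_cm - min_cm)
```

On a device like the one described, a loop would read the distance sensor, compute this interval, and drive a buzzer or speech output accordingly.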


Author(s):  
Syed Tehzeeb Alam ◽  
Sonal Shrivastava ◽  
Syed Tanzim Alam ◽  
R. Sasikala

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1619
Author(s):  
Otilia Zvorișteanu ◽  
Simona Caraiman ◽  
Robert-Gabriel Lupu ◽  
Nicolae Alexandru Botezatu ◽  
Adrian Burlacu

For most visually impaired people, simple tasks such as understanding the environment or moving safely around it represent huge challenges. The Sound of Vision system was designed as a sensory substitution device, based on computer vision techniques, that encodes any environment into a naturalistic representation through audio and haptic feedback. This paper presents a study on the usability of the system for visually impaired people in relevant environments. The aim of the study is to assess how well the system helps the perception and mobility of visually impaired participants in real-life environments and circumstances. The testing scenarios were devised to allow assessment of the added value of the Sound of Vision system compared to traditional assistive instruments, such as the white cane. Various data were collected during the tests to allow for a better evaluation of performance: system configuration, completion times, electro-dermal activity, video footage, and user feedback. With minimal training, the system could be successfully used in outdoor environments to perform various perception and mobility tasks. The benefits of the Sound of Vision device compared to the white cane were confirmed by the participants and by the evaluation results: the device provides early feedback about static and dynamic objects, as well as feedback about elevated objects, walls, negative obstacles (e.g., holes in the ground) and signs.

