VES: A Mixed-Reality System to Assist Multisensory Spatial Perception and Cognition for Blind and Visually Impaired People

2020, Vol 10 (2), pp. 523
Author(s): Santiago Real, Alvaro Araujo

In this paper, the Virtually Enhanced Senses (VES) system is described. It is an ARCore-based, mixed-reality system meant to assist the navigation of blind and visually impaired people. VES operates in indoor and outdoor environments without any prior in-situ installation. It provides users with specific, runtime-configurable stimuli according to their pose, i.e., position and orientation, and to the information about the environment recorded in a virtual replica. It implements three output data modalities: wall-tracking assistance, an acoustic compass, and a novel sensory substitution algorithm, Geometry-based Virtual Acoustic Space (GbVAS). The multimodal output of this algorithm takes advantage of how human perception naturally encodes spatial data. Preliminary experiments with GbVAS were conducted with sixteen subjects in three different scenarios, demonstrating basic orientation and mobility skills after six minutes of training.
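The abstract leaves GbVAS's internals unspecified, but the core idea of driving stimuli from the user's pose and a virtual replica of the environment can be illustrated with a minimal sketch. Everything below (the Pose fields, the panning and attenuation rules) is an assumption for illustration, not the published algorithm:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading_rad: float  # orientation in the horizontal plane

def spatial_cue(pose: Pose, obj_x: float, obj_y: float):
    """Map an object in the virtual replica to a stereo audio cue.

    Returns (pan, gain): pan in [-1, 1] (left..right) derived from the
    bearing relative to the user's heading, gain falling off with distance.
    """
    dx, dy = obj_x - pose.x, obj_y - pose.y
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - pose.heading_rad
    # Wrap bearing to (-pi, pi] so panning is symmetric around straight ahead.
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    pan = max(-1.0, min(1.0, bearing / (math.pi / 2)))
    gain = 1.0 / (1.0 + distance)  # simple inverse-distance attenuation
    return pan, gain

print(spatial_cue(Pose(0, 0, 0), 2.0, 2.0))  # object off to one side, ~2.8 m away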

Sensors, 2021, Vol 21 (18), pp. 6275
Author(s): Santiago Real, Alvaro Araujo

Herein, we describe the Virtually Enhanced Senses (VES) system, a novel and highly configurable wireless sensor-actuator network conceived as a development and test-bench platform for navigation systems adapted for blind and visually impaired people. It immerses its users in “walkable”, purely virtual or mixed environments with simulated sensors, allowing navigation system designs to be validated prior to prototype development. The haptic, acoustic, and proprioceptive feedback supports state-of-the-art sensory substitution devices (SSDs); three SSDs were integrated into VES as examples, including the well-known “The vOICe”. Additionally, the data throughput, latency, and packet loss of the wireless communication can be controlled to observe their impact on the spatial knowledge conveyed and on the resulting mobility and orientation performance. Finally, the system was validated by testing a combination of two previous visual-acoustic and visual-haptic sensory substitution schemes with 23 normally sighted subjects. The recorded data include the output of a “gaze-tracking” utility adapted for SSDs.
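The abstract's mention of controllable throughput, latency, and packet loss suggests a configurable link-impairment layer between sensors and actuators. Below is a toy sketch of such a layer; the class name, parameters, and queue-based model are assumptions, not VES's actual implementation:

```python
import random
import time
from collections import deque

class ImpairedChannel:
    """Toy wireless-link model with configurable latency, packet loss,
    and throughput, in the spirit of the knobs the VES test bench exposes."""

    def __init__(self, latency_s=0.05, loss_prob=0.02, max_pkts_per_s=100):
        self.latency_s = latency_s
        self.loss_prob = loss_prob
        self.min_gap = 1.0 / max_pkts_per_s
        self.queue = deque()
        self.last_send = 0.0

    def send(self, packet) -> bool:
        now = time.monotonic()
        if now - self.last_send < self.min_gap:   # throughput cap
            return False
        self.last_send = now
        if random.random() < self.loss_prob:      # random packet loss
            return False
        self.queue.append((now + self.latency_s, packet))
        return True

    def receive(self):
        now = time.monotonic()
        if self.queue and self.queue[0][0] <= now:  # delivery after the latency
            return self.queue.popleft()[1]
        return None
```

Sweeping these three parameters while subjects navigate would let an experimenter observe how degraded feedback timing translates into degraded orientation performance.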


2009, Vol 18 (03), pp. 379-397
Author(s): James Coughlan, Roberto Manduchi

We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which the phone detects automatically. The color marker signs are specially designed to be detected in real time in cluttered environments by computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e., how he/she pans the phone left and right to find color markers) affects the system's ability to detect the markers given the limitations imposed by motion blur, which is a possibility whenever a camera is in motion. Finally, we describe experiments in which blind and visually impaired volunteers tested our system, demonstrating their ability to reliably use it to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users.
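The paper's own range computation is not reproduced here, but estimating the distance to a marker of known physical size from its apparent width in the image is classically done with the pinhole-camera model, Z = f·W/w. A minimal sketch under that assumption (the numbers are illustrative):

```python
def marker_distance(focal_px: float, marker_width_m: float, width_in_image_px: float) -> float:
    """Pinhole-camera range estimate: once segmentation has located the
    marker's borders, its apparent width in pixels gives the distance."""
    return focal_px * marker_width_m / width_in_image_px

# e.g. a 0.10 m wide marker imaged 50 px wide by a camera with f = 1000 px:
print(marker_distance(1000.0, 0.10, 50.0))  # -> 2.0 metres
```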


Author(s): John Nicholson, Vladimir Kulyukin

Limited sensory information about a new environment often requires people with a visual impairment to rely on sighted guides to show or describe routes around the environment. However, route descriptions provided by other blind independent navigators (e.g., over a cell phone) can also be used to guide a traveler along a previously unknown route. A visually impaired guide can often describe a route as well as, or better than, a sighted person, since the guide is familiar with the issues of blind navigation. This chapter introduces the Collaborative Route Information Sharing System (CRISS), a collaborative online environment where visually impaired and sighted people will be able to share and manage route descriptions for indoor and outdoor environments. It then describes the system's Route Analysis Engine module, which uses information extraction techniques to find landmarks in natural-language route descriptions written by independent blind navigators.
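The Route Analysis Engine's extraction models are not detailed in this abstract; a deliberately simplified keyword-matching stand-in for landmark spotting might look like the following (the vocabulary and function names are hypothetical):

```python
import re

# Hypothetical landmark vocabulary; the actual Route Analysis Engine applies
# information extraction techniques rather than a fixed keyword list.
LANDMARKS = ("door", "elevator", "hallway", "stairs", "water fountain")

def extract_landmarks(route_description: str):
    """Return the landmark terms mentioned in a natural-language route description."""
    text = route_description.lower()
    return [lm for lm in LANDMARKS if re.search(r"\b" + re.escape(lm) + r"\b", text)]

route = ("Go past the water fountain, turn left at the elevator, "
         "and follow the hallway to the third door.")
print(extract_landmarks(route))  # ['door', 'elevator', 'hallway', 'water fountain']
```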


Sensors, 2021, Vol 21 (16), pp. 5274
Author(s): Ricardo Tachiquin, Ramiro Velázquez, Carolina Del-Valle-Soto, Carlos A. Gutiérrez, Miguel Carrasco, ...

This paper reports on the progress of a wearable assistive technology (AT) device designed to enhance the independent, safe, and efficient mobility of blind and visually impaired pedestrians in outdoor environments. The device exploits the smartphone's positioning and computing capabilities to locate and guide users through urban settings. The navigation instructions needed to reach a destination are encoded as vibrating patterns, which are conveyed to the user via a foot-placed tactile interface. To determine the performance of the proposed AT device, two user experiments were conducted. In the first, a group of 20 normally sighted volunteers was asked to recognize the feedback provided by the tactile-foot interface; recognition rates exceeded 93%. The second experiment involved two blind volunteers, who were assisted in finding target destinations along public urban pathways. The results show that the subjects successfully accomplished the task and suggest that blind and visually impaired pedestrians may find the AT device and its approach useful, friendly, fast to master, and easy to use.
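The mapping from navigation instructions to vibrating patterns is not specified in the abstract; the sketch below shows one plausible shape for such an encoding. The motor layout, instruction set, and timings are all assumptions:

```python
# Hypothetical encoding of navigation instructions as vibration patterns for a
# foot-placed interface with four motors (front, back, left, right); the actual
# motor layout and timings of the reported device are not given in the abstract.
PATTERNS = {
    "go_straight": [("front", 0.3)],
    "turn_left":   [("left", 0.2), ("left", 0.2)],
    "turn_right":  [("right", 0.2), ("right", 0.2)],
    "stop":        [("front", 0.1), ("back", 0.1), ("front", 0.1), ("back", 0.1)],
}

def play_pattern(instruction: str, drive_motor):
    """Send each (motor, seconds) step of the pattern to the actuator driver."""
    for motor, seconds in PATTERNS[instruction]:
        drive_motor(motor, seconds)

# Example with a stand-in driver that just logs the actuation steps:
play_pattern("turn_left", lambda m, s: print(f"vibrate {m} for {s:.1f}s"))
```

The high recognition rates reported suggest that a small, strongly distinguishable pattern set like this is the kind of vocabulary users can master quickly.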


Electronics, 2021, Vol 10 (14), pp. 1619
Author(s): Otilia Zvorișteanu, Simona Caraiman, Robert-Gabriel Lupu, Nicolae Alexandru Botezatu, Adrian Burlacu

For most visually impaired people, simple tasks such as understanding the environment or moving safely around it represent huge challenges. The Sound of Vision system was designed as a sensory substitution device, based on computer vision techniques, that encodes any environment in a naturalistic representation through audio and haptic feedback. This paper presents a study on the usability of the system for visually impaired people in relevant environments. The aim of the study is to assess how well the system helps the perception and mobility of visually impaired participants in real-life environments and circumstances. The testing scenarios were devised to allow assessment of the added value of the Sound of Vision system compared with traditional assistive instruments, such as the white cane. Various data were collected during the tests to allow for a better evaluation of performance: system configuration, completion times, electrodermal activity, video footage, and user feedback. With minimal training, the system could be used successfully in outdoor environments to perform various perception and mobility tasks. The benefits of the Sound of Vision device over the white cane, confirmed by the participants and by the evaluation results, consist in providing early feedback about static and dynamic objects, as well as feedback about elevated objects, walls, negative obstacles (e.g., holes in the ground), and signs.


Sensors, 2019, Vol 19 (12), pp. 2771
Author(s): Simona Caraiman, Otilia Zvoristeanu, Adrian Burlacu, Paul Herghelegiu

The development of computer-vision-based systems that help visually impaired people to perceive the environment, orientate, and navigate has been the main research subject of many works in recent years. A significant ensemble of resources has been employed to support the development of sensory substitution devices (SSDs) and electronic travel aids for the rehabilitation of the visually impaired. The Sound of Vision (SoV) project used a comprehensive approach to develop such an SSD, tackling all the challenging aspects that have so far restrained large-scale adoption of such systems by the intended audience: wearability, real-time operation, pervasiveness, usability, and cost. This article presents the artificial-vision-based component of the SoV SSD that performs scene reconstruction and segmentation in outdoor environments. In contrast with the indoor use case, where the system acquires depth input from a structured-light camera, outdoors SoV relies on stereo vision to detect the elements of interest and provide an audio and/or haptic representation of the environment to the user. Our stereo-based method is designed to work with wearable acquisition devices and still provide a real-time, reliable description of the scene despite unreliable depth input from the stereo correspondence and the complex 6-DOF motion of the head-worn camera. We quantitatively evaluate our approach on a custom benchmarking dataset acquired with SoV cameras and provide the highlights of the usability evaluation with visually impaired users.
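As a rough sketch of the stereo stage described above (metric depth from rectified image pairs), the snippet below uses OpenCV's semi-global block matching; the calibration constants are placeholders, and SoV's actual matcher, filtering, and ego-motion handling are considerably more elaborate:

```python
import cv2
import numpy as np

# Placeholder calibration values, not SoV's.
FOCAL_PX = 700.0      # focal length in pixels (assumed)
BASELINE_M = 0.12     # stereo baseline in metres (assumed)

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Rectified grayscale pair in, metric depth map out (NaN where unreliable)."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mark failed correspondences
    return FOCAL_PX * BASELINE_M / disparity    # Z = f * B / d
```

The NaN masking mirrors the paper's central difficulty: stereo correspondence on a head-worn, constantly moving camera yields patchy depth, so downstream segmentation must tolerate missing measurements.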


2019, Vol 1, pp. 1-1
Author(s): Jakub Wabiński, Albina Mościcka

Abstract. Much has been done regarding the automatic generation of topographic maps within National Mapping Agencies (NMAs), and there are examples of successful implementations of such projects. The main issue in automatic map production is cartographic generalization, which is mainly used to transform the original spatial dataset into maps of smaller scale. Everyone who has ever worked on map generalization knows how laborious and time-consuming this process is, which is why a lot of effort is being put into automating it. Automatic map production is very difficult, and it gets even more complicated when we consider the automatic production of tactile maps: maps that are read with the sense of touch and, to a limited extent, also with the eyes.

On average, a person without visual impairment is capable of distinguishing two points as separate if they are, according to different sources, 0.2–0.3 millimetres apart. To achieve the same with the sense of touch, a distance of 2.4–3.0 millimetres is necessary. This is enough to show how intense the generalization process has to be when transforming the scales of tactile maps (see the short calculation after this abstract). It also brings up a question: “What are the algorithms and solutions for tactile spatial data generalization, and to what extent can this process be automated?”. The answer to this question is the main point of the research presented here.

During the presentation, the results of a systematic literature review on this topic, based on primary studies from the last decade, will be presented. Automatic map generation is nothing new, but this field of research lacks a systematic review summarizing the existing literature. This review, although about automatic map generation in general, focuses on tactile maps. Therefore, answers to the following questions will be presented:

1. What are the generalization methods and models for automatic (tactile) map generation?
2. What are the existing systems and solutions allowing automatic (tactile) map generation?
3. How should a spatial database for automatic map generation be properly designed?

The presented research will form a significant part of Jakub Wabiński's PhD dissertation, whose main goal is to create a methodology that would allow blind users to create on-demand thematic maps, with different levels of detail and scales, out of publicly available spatial data. Because the European Union's INSPIRE Directive (Infrastructure for Spatial Information in the European Community) requires member countries to provide their citizens with current spatial data, and also aims to define common standards for describing and sharing spatial data, it is possible to create a universal methodology for the whole European Union. The problem is that these data first have to be adapted for use by blind and visually impaired people.

There is high demand for tactile maps and atlases, but unfortunately their production is very expensive. Not all schools for the blind and visually impaired can afford to buy them, not to mention individuals. Traditional tactile map production methods, such as thermoforming, are cost-effective only for large-scale production, yet individual map sheets are often required to present a certain phenomenon. Fortunately, there are cheap and efficient alternatives, namely 3D printing and swell paper, which individual users can successfully use at home. We believe that a platform allowing the blind and visually impaired to generate easy-to-use, unique thematic and topographic maps that comply with the requirements for tactile cartographic sign design would be highly appreciated. Similar solutions already exist, but only in the field of orientation and navigation maps, and they have their limitations. Thematic tactile maps are very important for perceiving the various information provided by spatial data, and we would like to focus on them in our presentation.
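Using the two-point thresholds quoted in the abstract, a quick back-of-the-envelope calculation shows why tactile generalization must be so aggressive (the midpoint values are our own choice of illustration):

```python
# How much farther apart must two map features be printed to stay
# distinguishable by touch rather than by sight?
VISUAL_MIN_MM = 0.25   # midpoint of the 0.2-0.3 mm visual two-point threshold
TACTILE_MIN_MM = 2.7   # midpoint of the 2.4-3.0 mm tactile threshold

linear_factor = TACTILE_MIN_MM / VISUAL_MIN_MM
print(f"linear separation factor: {linear_factor:.1f}x")        # ~10.8x
print(f"area per symbol grows by: {linear_factor ** 2:.0f}x")   # ~117x
# A tactile map of the same sheet size can thus carry roughly two orders of
# magnitude fewer distinguishable features, hence the heavy generalization.
```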


2018, Vol 2018, pp. 1-9
Author(s): Charalampos Saitis, Mohammad Zavid Parvez, Kyriaki Kalimeri

Reliable detection of cognitive load would benefit the design of intelligent assistive navigation aids for visually impaired people (VIP). Ten participants with various degrees of sight loss navigated unfamiliar indoor and outdoor environments while their electroencephalogram (EEG) and electrodermal activity (EDA) signals were recorded. In this study, the cognitive load of the tasks was assessed in real time based on a modification of the well-established event-related (de)synchronization (ERD/ERS) index. We present an in-depth analysis of the environments that most challenge people with certain categories of sight loss, together with an automatic classification of the perceived difficulty at each time instance, inferred from the participants' biosignals. Given the limited size of our sample, our findings suggest that there are significant differences across the environments for the various categories of sight loss. Moreover, we exploit cross-modal relations to predict the cognitive load in real time from features extracted from the EDA. This possibility paves the way for the design of less invasive, wearable assistive devices that take the well-being of the VIP into consideration.
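The study uses a modified ERD/ERS index; the modification itself is not reproduced here, but for reference the classical band-power form it builds on is ERD% = (A − R)/R × 100, where R is the band power in a reference interval and A the band power during the activity:

```python
import numpy as np

def erd_ers_percent(activity_power: np.ndarray, reference_power: float) -> np.ndarray:
    """Classical band-power ERD/ERS index: (A - R) / R * 100.
    Negative values indicate desynchronization (ERD), positive values
    synchronization (ERS); the paper applies a modified variant of this."""
    return (activity_power - reference_power) / reference_power * 100.0

# e.g. alpha-band power samples during navigation vs. a resting baseline of 4.0:
print(erd_ers_percent(np.array([3.0, 4.4, 2.5]), 4.0))  # -> [-25.   10.  -37.5]
```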


Sensors, 2019, Vol 19 (21), pp. 4670
Author(s): Xiaochen Zhang, Hui Zhang, Linyue Zhang, Yi Zhu, Fei Hu

This paper presents the analysis and design of a new wearable orientation guidance device for modern travel aid systems for blind and visually impaired people. The four-stage double-diamond design model was applied in the design process to achieve human-centric innovation and to ensure technical feasibility and economic viability. As a result, a sliding tactile feedback wristband was designed and prototyped. Furthermore, a Bezier-curve-based adaptive path planner is proposed to guarantee collision-free planned motion. Proof-of-concept experiments in both virtual and real-world scenarios were conducted. The evaluation results confirm the efficiency and feasibility of the design and suggest its considerable potential for spatial perception rehabilitation.
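The planner's details are not given in the abstract; the snippet below shows only the cubic Bezier evaluation such a planner would build on, with obstacle-aware placement of the inner control points left as the planner's job (all names are illustrative):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
    Moving the inner control points p1, p2 away from detected obstacles is
    one way an adaptive planner can keep the resulting path collision-free."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

path = cubic_bezier([0, 0], [1, 2], [3, 2], [4, 0])  # start, two control points, goal
print(path[:3])  # first few waypoints along the smooth curve
```

Because the curve is smooth and stays within the convex hull of its control points, guidance cues derived from it avoid the abrupt heading changes a piecewise-linear path would produce.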


PLoS ONE, 2021, Vol 16 (4), pp. e0250281
Author(s): Galit Buchs, Benedetta Haimler, Menachem Kerem, Shachar Maidenbaum, Liraz Braun, ...

Sensory substitution devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof of concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these ends, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous and interleaved, and a control group that received no training. At baseline, before any EyeMusic training, participants' identification of SSD-encoded objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend towards an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning and suggest that for these initial stages, unisensory training, which is easily implemented for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
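For readers unfamiliar with EyeMusic's general scheme: images are swept column by column from left to right, with vertical position mapped to pitch and brightness to loudness (EyeMusic additionally maps color to musical instrument timbre, which the grayscale toy below omits). The frequency range and names here are assumptions, not the published parameters:

```python
import numpy as np

def column_scan(image: np.ndarray, freqs=None):
    """Toy visual-to-auditory sweep in the spirit of EyeMusic / The vOICe:
    each column becomes a time slot, row height maps to pitch, and pixel
    brightness maps to amplitude."""
    rows, cols = image.shape
    if freqs is None:
        freqs = np.geomspace(1500.0, 300.0, rows)  # top rows sound higher (assumed range)
    events = []
    for c in range(cols):
        for r in range(rows):
            if image[r, c] > 0:
                events.append((c, freqs[r], image[r, c] / 255.0))  # (time slot, Hz, gain)
    return events

img = np.zeros((8, 8), dtype=np.uint8)
img[2, :] = 255  # a bright horizontal line -> a steady high tone across the sweep
print(column_scan(img)[:3])
```

Mappings like pitch-for-height are exactly the cross-modal correspondences the authors credit for the above-chance baseline performance before any training.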

