Android-Based Visual Tag Detection for Visually Impaired Users

Author(s):  
Hao Dong ◽  
Jieqi Kang ◽  
James Schafer ◽  
Aura Ganz

In this paper, the authors introduce PERCEPT-V, an indoor navigation system for the blind. PERCEPT-V extends the PERCEPT system by enabling visually impaired users to navigate open indoor spaces that differ in size and lighting conditions. The authors deploy visual tags at specific landmarks in the environment and introduce a visual tag detection algorithm that uses a sampling probe and a cascading approach. They provide guidelines for visual tag size as a function of environmental and usage scenarios, which differ in lighting, dimensions of the indoor environment, and angle of use. The authors also developed a smartphone-based user interface for visually impaired users that builds on Android accessibility features.
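The abstract does not give the authors' exact sizing guideline, but a minimal pinhole-camera sketch (all parameter names hypothetical) shows how a minimum tag size could be derived from viewing distance, camera field of view, image resolution, and the pixel footprint the detector needs:

```python
import math

def min_tag_size_m(distance_m, hfov_deg, image_width_px, min_tag_px):
    """Smallest physical tag width (meters) that still covers
    min_tag_px pixels when viewed head-on from distance_m.
    Pinhole-camera approximation; real guidelines would also
    account for lighting and viewing angle."""
    # Width of the scene covered by the full image at this distance.
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    # Meters per pixel, scaled by the required pixel footprint.
    return scene_width_m / image_width_px * min_tag_px

# Example: 5 m away, 60-degree horizontal FOV, 1920-px-wide image,
# detector needs the tag to span at least 64 px -> roughly 0.19 m.
size = min_tag_size_m(5.0, 60.0, 1920, 64)
```

As expected from the geometry, the required size grows linearly with viewing distance, which is consistent with the abstract's point that tag size depends on the dimensions of the indoor environment.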

Ophthalmology ◽  
2018 ◽  
pp. 317-334


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6238
Author(s):  
Payal Mahida ◽  
Seyed Shahrestani ◽  
Hon Cheung

Wayfinding and navigation can present substantial challenges to visually impaired (VI) people. Some of the significant aspects of these challenges arise from the difficulty of knowing the location of a moving person with enough accuracy. Positioning and localization in indoor environments require unique solutions. Furthermore, positioning is one of the critical aspects of any navigation system that can assist a VI person with their independent movement. The other essential features of a typical indoor navigation system include pathfinding, obstacle avoidance, and capabilities for user interaction. This work focuses on positioning a VI person with enough precision for use in indoor navigation. We aim to achieve this by utilizing only the capabilities of a typical smartphone. More specifically, our proposed approach is based on the use of the accelerometer, gyroscope, and magnetometer of a smartphone. We consider the indoor environment to be divided into microcells, with the vertex of each microcell being assigned two-dimensional local coordinates. A regression-based analysis is used to train a multilayer perceptron neural network to map the inertial sensor measurements to the coordinates of the vertex of the microcell corresponding to the position of the smartphone. In order to test our proposed solution, we used IPIN2016, a publicly available multivariate dataset that divides the indoor environment into cells tagged with the inertial sensor data of a smartphone, to generate the training and validation sets. Our experiments show that our proposed approach can achieve a remarkable prediction accuracy of more than 94%, with a 0.65 m positioning error.
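The abstract does not specify the microcell geometry or network architecture; the sketch below (cell size and coordinates hypothetical) only illustrates the discretization step the regression targets, i.e. snapping a continuous 2-D position to the nearest microcell vertex, which the MLP is then trained to predict from inertial sensor features:

```python
def nearest_vertex(x, y, cell_size):
    """Snap a continuous 2-D local position (meters) to the nearest
    microcell vertex on a square grid of the given cell size.
    These vertex coordinates are the regression targets for the MLP."""
    return (round(x / cell_size) * cell_size,
            round(y / cell_size) * cell_size)

# A smartphone estimated at (3.4, 7.9) m on a 1 m grid snaps to (3.0, 8.0);
# the residual between the two is the positioning error.
vertex = nearest_vertex(3.4, 7.9, 1.0)
```

With a finer grid the maximum snapping error shrinks, at the cost of more output classes of vertices for the network to distinguish.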


Indoor navigation systems are gaining importance these days. They are particularly useful for locating places inside a large university campus, airport, railway station, or museum. Many mobile applications using different techniques have been developed recently. The work proposed in this paper focuses on the needs of visually challenged people navigating indoor environments. The approach proposed here implements the system using Bluetooth beacons. The application developed with the system gives the user audio guidance for navigation.
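The abstract does not detail the beacon ranging method; a common approach (sketched here with hypothetical path-loss parameters that would need on-site calibration) estimates distance from a beacon's received signal strength using the log-distance path-loss model, then announces the nearest landmark:

```python
def rssi_to_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance to a BLE beacon from its RSSI using the
    log-distance path-loss model. tx_power_dbm is the calibrated
    RSSI at 1 m; path_loss_exp depends on the environment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def nearest_landmark(readings):
    """Pick the landmark whose beacon appears closest.
    readings: dict mapping landmark name -> RSSI in dBm."""
    return min(readings, key=lambda name: rssi_to_distance_m(readings[name]))

# Example: the elevator beacon is stronger, so it is announced first.
landmark = nearest_landmark({"entrance": -80, "elevator": -62})
```

In an application like the one described, the selected landmark name would then be spoken via the phone's text-to-speech service.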


Author(s):  
G. Sithole

<p><strong>Abstract.</strong> The conventional approach to path planning for indoor navigation is to infer routes from a subdivided floor map of the indoor space. The floor map describes the spatial geometry of the space and contains logical units called subspaces. For the purpose of path planning, the possible routes between the subspaces have to be modelled. Typically these models employ a graph structure, or skeleton, in which the interconnected subspaces (e.g., rooms, corridors, etc.) are represented as linked nodes, i.e., a graph.</p><p>This paper presents a novel method for creating generalised graphs of indoor spaces that does not require the subdivision of indoor space. The method creates the generalised graph by gradually simplifying/in-setting the floor map until a graph is obtained, a process described here as chained deflation. The resulting generalised graph allows more flexible and natural paths to be determined within the indoor environment. Importantly, the method allows the indoor space to be encoded, encrypted, and supplied to users in a way that emulates the use of physical keys in the real world. Another important novelty of the method is that the space described by the graph is adaptable: it can be deflated or inflated according to the needs of the path planning. Finally, the proposed method can be readily generalised to the third dimension.</p><p>The concept and logic of the method are explained. A full implementation of the method will be discussed in a future paper.</p>
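The chained-deflation algorithm itself is not given in the abstract; the sketch below only illustrates the conventional baseline it improves on, i.e. interconnected subspaces represented as linked nodes and a route found by breadth-first search (room names are hypothetical):

```python
from collections import deque

# Conventional model: subspaces (rooms, corridors) as linked graph nodes.
floor_graph = {
    "lobby":    ["corridor"],
    "corridor": ["lobby", "room_101", "room_102"],
    "room_101": ["corridor"],
    "room_102": ["corridor"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search over subspaces; returns the node sequence."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

A route found this way can only hop between predefined subspaces, which is exactly the rigidity the paper's subdivision-free generalised graph is meant to avoid.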


2019 ◽  
Vol 37 (2) ◽  
pp. 140-153 ◽  
Author(s):  
Watthanasak Jeamwatthanachai ◽  
Mike Wald ◽  
Gary Wills

Many visually impaired people are discouraged from going out for social activities and interactions by mishaps encountered during navigation. In contrast to outdoors, traveling inside public spaces is a different story, as many environmental cues cannot be used, and such spaces have their own set of difficulties. Some technologies have come into play to help these people navigate freely (e.g., accessible maps, indoor navigation systems, and wearable computing devices). However, technologies like accessible maps or indoor navigation systems are insufficient to close the independent-navigation gap, as additional information is required (obstacles, barriers, and accessibility). To promote indoor navigation and make better use of technologies for visually impaired people, it is essential to understand the actual problems they experience and the behaviors and strategies they use to overcome them; these concerns led to this study. In all, 30 visually impaired people and 15 experts were interviewed about the behaviors and strategies used to navigate indoor spaces, especially public spaces, for example, universities, hospitals, malls, museums, and airports. The findings from this study reveal that navigating inside buildings and public spaces full of unfamiliar features is too difficult to attempt for the first time for a number of reasons, reducing confidence in independent navigation.


2020 ◽  
Vol 9 (2) ◽  
pp. 66 ◽  
Author(s):  
Seula Park ◽  
Kiyun Yu ◽  
Jiyoung Kim

The increasing complexity of modern buildings has challenged the mobility of people with disabilities (PWD) in the indoor environment. To help overcome this problem, this paper proposes a data model that can be easily applied to indoor spatial information services for people with disabilities. In the proposed model, features are defined based on relevant regulations that stipulate significant mobility factors for people with disabilities. To validate the model’s capability to describe the indoor spaces in terms that are relevant to people with mobility disabilities, the model was used to generate data in a path planning application, considering two different cases in a shopping mall. The application confirmed that routes for people with mobility disabilities are significantly different from those of ordinary pedestrians, in a way that reflects features and attributes defined in the proposed data model. The latter can be inserted as an IndoorGML extension, and is thus expected to facilitate relevant data generation for the design of various services for people with disabilities.
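The model's actual features follow accessibility regulations and are defined in the paper, not the abstract; as a hedged illustration of why routes for people with mobility disabilities diverge from ordinary pedestrian routes, the sketch below (node names and attributes hypothetical) marks stair edges as inaccessible so that wheelchair routing must take a longer ramp:

```python
import heapq

# Each edge: (target, length_m, wheelchair_accessible)
indoor_graph = {
    "entrance": [("stairs", 5.0, False), ("ramp", 12.0, True)],
    "stairs":   [("hall", 3.0, False)],
    "ramp":     [("hall", 4.0, True)],
    "hall":     [],
}

def route_length(graph, start, goal, wheelchair):
    """Dijkstra over edges, skipping inaccessible ones for PWD users."""
    heap = [(0.0, start)]
    best = {start: 0.0}
    while heap:
        dist, node = heapq.heappop(heap)
        if node == goal:
            return dist
        for nxt, length, accessible in graph[node]:
            if wheelchair and not accessible:
                continue
            nd = dist + length
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None
```

The pedestrian route here is 8 m via the stairs, while the wheelchair route is 16 m via the ramp, mirroring the abstract's finding that PWD routes differ significantly from those of ordinary pedestrians.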


Author(s):  
Louis Lecrosnier ◽  
Redouane Khemmar ◽  
Nicolas Ragot ◽  
Benoit Decoux ◽  
Romain Rossi ◽  
...  

This paper deals with the development of an Advanced Driver Assistance System (ADAS) for a smart electric wheelchair, intended to improve the autonomy of disabled people. Our use case, built from a formal clinical study, is based on the detection, depth estimation, localization, and tracking of objects in the wheelchair's indoor environment, namely doors and door handles. The aim of this work is to provide a perception layer for the wheelchair, enabling the detection of these key points in its immediate surroundings and the construction of a short-lifespan semantic map. First, we present an adaptation of the YOLOv3 object detection algorithm to our use case. Then, we present our depth estimation approach using an Intel RealSense camera. Finally, as the third and last step of our approach, we present our 3D object tracking approach based on the SORT algorithm. To validate all the developments, we carried out different experiments in a controlled indoor environment. Detection, distance estimation, and object tracking are evaluated on our own dataset, which includes doors and door handles.
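The paper's exact depth-estimation procedure is not given in the abstract; a common way to pair a detector with an RGB-D camera such as the RealSense (sketched here with NumPy, array shapes and values assumed) is to take a robust statistic of the depth pixels inside each detected bounding box:

```python
import numpy as np

def object_distance_m(depth_map, bbox):
    """Median depth (meters) inside a detection box (x1, y1, x2, y2).
    The median is robust to background pixels, and zero-valued pixels
    (unknown depth in RealSense frames) are excluded."""
    x1, y1, x2, y2 = bbox
    patch = depth_map[y1:y2, x1:x2]
    valid = patch[patch > 0]
    return float(np.median(valid)) if valid.size else None

# Example: a synthetic 480x640 depth map with a "door handle" at 1.2 m.
depth = np.zeros((480, 640), dtype=np.float32)
depth[100:200, 300:400] = 1.2
dist = object_distance_m(depth, (300, 100, 400, 200))
```

The resulting per-detection distance, combined with the box center, gives the 3D position that a SORT-style tracker would then associate across frames.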

