A Vision-Based Localization Algorithm for an Indoor Navigation App

Author(s):  
Oscar Deniz ◽  
Julio Paton ◽  
Jesus Salido ◽  
Gloria Bueno ◽  
Janahan Ramanan
Author(s):  
Zhuorui Yang ◽  
Aura Ganz

In this paper, we introduce an egocentric landmark-based guidance system that enables visually impaired users to interact with indoor environments. The user, who wears Google Glass, captures the surroundings within his or her field of view. Using this information, we provide the user with an accurate landmark-based description of the environment, including his or her relative distance and orientation to each landmark. To achieve this functionality, we developed a near-real-time, accurate, vision-based localization algorithm. Since the users are visually impaired, our algorithm accounts for images captured with Google Glass that exhibit severe blurriness, motion blur, low illumination, and crowd obstruction. We tested the algorithm's performance in a 12,000 ft² open indoor environment. With pristine (mint-condition) query images, our algorithm achieves a mean location accuracy within 5 ft, a mean orientation accuracy of less than 2 degrees, and reliability above 88%. After applying deformation effects to the query images, such as blurriness, motion blur, and illumination changes, the reliability remains above 75%.
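The abstract does not detail how the landmark-based description is produced. As a minimal illustrative sketch (not the authors' implementation), assuming the localization step already yields the user's 2D position and heading, the relative distance and bearing to each landmark could be derived as follows; the landmark map, coordinates, and function names are hypothetical.

```python
import math

# Hypothetical landmark map: name -> (x, y) position in feet (illustrative only).
LANDMARKS = {
    "elevator": (40.0, 12.0),
    "restroom": (75.0, 30.0),
    "exit": (5.0, 55.0),
}

def describe_landmarks(user_x, user_y, user_heading_deg, landmarks=LANDMARKS):
    """Return (name, distance_ft, relative_bearing_deg) tuples, nearest first.

    Relative bearing is measured clockwise from the user's heading, so
    0 degrees means the landmark is straight ahead.
    """
    descriptions = []
    for name, (lx, ly) in landmarks.items():
        dx, dy = lx - user_x, ly - user_y
        distance = math.hypot(dx, dy)
        absolute_bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = +y axis
        relative_bearing = (absolute_bearing - user_heading_deg) % 360.0
        descriptions.append((name, distance, relative_bearing))
    return sorted(descriptions, key=lambda d: d[1])

if __name__ == "__main__":
    # Example: user at (50, 20) ft, facing 90 degrees (the +x direction).
    for name, dist, bearing in describe_landmarks(50.0, 20.0, user_heading_deg=90.0):
        print(f"{name}: {dist:.1f} ft at {bearing:.0f} degrees")
```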


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1326
Author(s):  
Jatin Upadhyay ◽  
Abhishek Rawat ◽  
Dipankar Deb ◽  
Vlad Muresan ◽  
Mihaela-Ligia Unguresan

A robotic navigation system operates flawlessly when an adequate GPS signal is available, whereas indoor navigation systems rely on simultaneous localization and mapping (SLAM) or other vision-based localization systems. The sensors used in such indoor navigation systems are not suitable for low-power, small-scale robotic systems. Wireless local area network transmitters have fixed transmission power, and receivers observe different signal strength values depending on the surrounding environment. In the proposed method, the received signal strength indicator (RSSI) values of three fixed transmitter units are measured every 1.6 m on a mesh grid and analyzed by classifiers, so that the robot's position can be mapped within the indoor area. After navigation, the robot analyzes objects and detects and recognizes human faces with the help of object recognition and facial recognition-based classification methods, respectively. The robot can thus detect an intruder, together with its current position, in an indoor environment.
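The abstract states only that the RSSI values surveyed every 1.6 m on a mesh are analyzed by classifiers, without naming the classifier. The sketch below shows one common choice for such fingerprint-based positioning, a k-nearest-neighbour classifier over the surveyed mesh cells; the RSSI fingerprints, cell labels, and the use of scikit-learn are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative RSSI fingerprints (dBm) from three fixed transmitters,
# collected every 1.6 m on a mesh grid; labels name the surveyed grid cell.
fingerprints = np.array([
    [-40, -65, -70],   # cell (0, 0)
    [-45, -60, -72],   # cell (0, 1)
    [-55, -50, -68],   # cell (1, 0)
    [-60, -48, -62],   # cell (1, 1)
])
cells = ["cell_0_0", "cell_0_1", "cell_1_0", "cell_1_1"]

# Offline phase: fit a k-NN classifier on the survey data (k=1 for this tiny example).
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(fingerprints, cells)

# Online phase: map a live RSSI reading to the nearest surveyed mesh cell.
live_reading = np.array([[-44, -61, -71]])
print(clf.predict(live_reading)[0])  # -> "cell_0_1"
```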


2018 ◽  
pp. 1483-1499
Author(s):  
Zhuorui Yang ◽  
Aura Ganz


