Optical Correlator based Optical Flow Processor for Real Time Visual Navigation

10.5772/4990
2007
Author(s): Valerij Tchernykh, Martin Beck, Klaus Janschek
Author(s): A. V. Bratulin, M. B. Nikiforov, A. I. Efimov, ...

2021
Vol 17 (2)
pp. 1-22
Author(s): Jingao Xu, Erqun Dong, Qiang Ma, Chenshu Wu, Zheng Yang

Existing indoor navigation solutions usually require pre-deployed, comprehensive location services with precise indoor maps and, more importantly, all rely on dedicated or existing infrastructure. In this article, we present Pair-Navi, an infrastructure-free indoor navigation system that circumvents all these requirements by reusing a previous traveler's (i.e., leader's) trace experience to navigate future users (i.e., followers) in a peer-to-peer mode. Our system leverages advances in visual simultaneous localization and mapping (SLAM) on commercial smartphones. Visual SLAM systems, however, are vulnerable to environmental dynamics, which degrade precision and robustness, and involve intensive computation that prohibits real-time applications. To combat environmental changes, we propose to cull non-rigid contexts and keep only the static and rigid contents in use. To enable real-time navigation on mobile devices, we decouple and reorganize the highly coupled SLAM modules for leaders and followers. We implement Pair-Navi on commodity smartphones and validate its performance in three diverse buildings and on two standard datasets (TUM and KITTI). Our results show that Pair-Navi achieves an immediate navigation success rate of 98.6%, which remains 83.4% even two weeks after the leaders' traces were collected, outperforming state-of-the-art solutions by more than 50%. Being truly infrastructure-free, Pair-Navi sheds light on practical indoor navigation for mobile users.
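The abstract describes culling non-rigid contexts so that only static, rigid content feeds the SLAM front end. The following is a minimal Python sketch of that idea, assuming a semantic segmentation mask of dynamic object classes is already available; the function name, class list, and label ids are illustrative and are not taken from the Pair-Navi implementation.

    import numpy as np

    # Hypothetical set of non-rigid / movable classes; the actual label set used
    # by Pair-Navi is not specified in the abstract.
    DYNAMIC_CLASSES = {"person", "chair", "door"}

    def cull_non_rigid_keypoints(keypoints, semantic_mask, id_to_name):
        # keypoints:     (N, 2) array of (x, y) pixel coordinates from the feature detector
        # semantic_mask: (H, W) array of integer class ids from any off-the-shelf segmenter
        # id_to_name:    dict mapping class id -> class name
        kept = []
        for x, y in keypoints.astype(int):
            label = id_to_name.get(int(semantic_mask[y, x]), "static")
            if label not in DYNAMIC_CLASSES:
                kept.append((x, y))          # feature lies on static, rigid content
        return np.asarray(kept)

    # Illustrative usage: one keypoint falls on a person and is discarded before
    # the SLAM front end tracks or maps anything.
    keypoints = np.array([[10, 20], [40, 50]])
    mask = np.zeros((64, 64), dtype=int)
    mask[50, 40] = 1                          # pixel (x=40, y=50) labeled "person"
    names = {0: "static", 1: "person"}
    print(cull_non_rigid_keypoints(keypoints, mask, names))   # -> [[10 20]]

Only the surviving keypoints would be passed on to tracking and mapping, so features on people or movable furniture never enter the leader's trace.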


2008
Vol 05 (03)
pp. 223-233
Author(s): Rong Liu, Max Q. H. Meng

Time-to-contact (TTC) provides vital information for obstacle avoidance and for the visual navigation of a robot. In this paper, we present a novel method to estimate the TTC of a moving object for monocular mobile robots. Specifically, the contour of the moving object is first extracted using an active contour model; then the height of the motion contour and its temporal derivative are evaluated to generate the desired TTC estimates. Compared with conventional techniques that employ the first-order derivatives of optical flow, the proposed estimator is less prone to optical flow errors. Experiments on real-world images demonstrate that the developed method can estimate TTC with an average relative error (ARVE) of 0.039 using a single calibrated camera.
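As a worked illustration of the height-based estimator described above: for an object approaching the camera, time-to-contact follows TTC ≈ h / (dh/dt), where h is the projected contour height. The Python sketch below discretizes that relation over two frames; the function name and the sample numbers are illustrative only and do not come from the paper.

    def ttc_from_contour_heights(h_prev, h_curr, dt):
        # For an approaching object, TTC = h / (dh/dt); with frame interval dt
        # this discretizes to TTC ≈ h_curr * dt / (h_curr - h_prev).
        dh = h_curr - h_prev
        if dh <= 0:
            return float("inf")          # object receding or unchanged: no imminent contact
        return h_curr * dt / dh

    # Illustrative usage: the tracked contour grows from 120 to 124 pixels between
    # frames captured 1/30 s apart, giving roughly 1.03 s to contact.
    print(ttc_from_contour_heights(120.0, 124.0, 1.0 / 30.0))

Because only the contour height and its change between frames are needed, the estimate avoids differentiating a dense optical flow field, which is the error source the paper argues against.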

