A two-camera-based vision system for image feature identification, feature tracking and distance measurement by a mobile robot

Author(s):  
Avishek Chatterjee ◽  
N. Nirmal Singh ◽  
Olive Ray ◽  
Amitava Chatterjee ◽  
Anjan Rakshit

2015 ◽
Vol 27 (6) ◽  
pp. 681-690 ◽  
Author(s):  
Hayato Hagiwara ◽  
Yasufumi Touma ◽  
Kenichi Asami ◽  
Mochimitsu Komori

[Figure: Mobile robot with a stereo vision] This paper describes an autonomous mobile robot stereo vision system that uses gradient feature correspondence and local image feature computation on a field-programmable gate array (FPGA). Among the interest point detectors and descriptors studied for mobile robot navigation are the Harris operator and the scale-invariant feature transform (SIFT). Most of these require heavy computation, however, and can overburden embedded processors. Our purpose here is to present an interest point detector and a descriptor suitable for FPGA implementation. Results show that a detector using gradient variance inspection runs faster than SIFT or speeded-up robust features (SURF), and is more robust to illumination changes than any other method compared in this study. A descriptor with a hierarchical gradient structure has a simpler algorithm than the SIFT and SURF descriptors, and its stereo matching achieves better performance than SIFT or SURF.
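The core idea of scoring interest points by the variance of local gradients can be illustrated in a few lines. The following is a minimal, hypothetical sketch of that idea in NumPy; the paper's actual FPGA pipeline and its exact scoring rule are not reproduced here, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def gradient_variance_keypoints(image, window=5, top_k=10):
    """Score each pixel by the variance of gradient magnitudes in a
    local window; high variance suggests a corner-like interest point.
    A hypothetical sketch of gradient-variance inspection, not the
    paper's FPGA implementation."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    h, w = image.shape
    r = window // 2
    scores = np.zeros_like(mag)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = mag[y - r:y + r + 1, x - r:x + r + 1]
            scores[y, x] = patch.var()
    # Return the top_k highest-scoring (row, col) coordinates
    idx = np.argsort(scores.ravel())[::-1][:top_k]
    return [tuple(divmod(int(i), w)) for i in idx]
```

Because the score needs only first-order gradients and a running variance per window, it maps naturally onto a streaming FPGA datapath, unlike the multi-scale pyramids of SIFT and SURF.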


2021 ◽  
Vol 11 (8) ◽  
pp. 3360
Author(s):  
Huei-Yung Lin ◽  
Chien-Hsing He

This paper presents a novel self-localization technique for mobile robots based on image feature matching from omnidirectional vision. The proposed method first constructs a virtual space with synthetic omnidirectional imaging to simulate a mobile robot equipped with an omnidirectional vision system in the real world. In the virtual space, a number of vertical and horizontal lines are generated according to the structure of the environment. They are imaged by the virtual omnidirectional camera using the catadioptric projection model. The omnidirectional images derived from the virtual and real environments are then used to match the synthetic lines and real scene edges. Finally, the pose and trajectory of the mobile robot in the real world are estimated by the efficient perspective-n-point (EPnP) algorithm based on line feature matching. In our experiments, the effectiveness of the proposed self-localization technique was validated by the navigation of a mobile robot in a real-world environment.
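The final pose-estimation step above recovers the camera pose from 3D-2D correspondences. As a minimal stand-in for EPnP, the sketch below estimates the pose with a direct linear transform (DLT) from point correspondences and known intrinsics; it is an illustration of the pose-from-correspondences idea, not the paper's EPnP line-matching pipeline, and all names and parameters are assumptions.

```python
import numpy as np

def pnp_dlt(points_3d, points_2d, K):
    """Estimate camera pose (R, t) from >= 6 non-coplanar 3D-2D
    correspondences via a direct linear transform. A simplified
    stand-in for EPnP, for illustration only."""
    # Normalize pixel coordinates with the known intrinsics K
    ones = np.ones((len(points_2d), 1))
    pts = np.linalg.solve(K, np.hstack([points_2d, ones]).T).T
    A = []
    for (X, Y, Z), (u, v, _) in zip(points_3d, pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The projection matrix is the null vector of A (last row of V^T)
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # Fix scale so the rotation part has unit-norm rows, and fix sign
    P /= np.linalg.norm(P[2, :3])
    if np.linalg.det(P[:, :3]) < 0:
        P = -P
    # Project the estimated rotation block onto SO(3)
    U, _, Vt2 = np.linalg.svd(P[:, :3])
    return U @ Vt2, P[:, 3]
```

EPnP improves on plain DLT by expressing the 3D points in terms of four control points, which makes the solve O(n) and better conditioned; the interface (correspondences in, pose out) is the same.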


Author(s):  
Gamma Aditya Rahardi ◽  
Khairul Anam ◽  
Ali Rizal Chaidir ◽  
Devita Ayu Larasati

2021 ◽  
Author(s):  
Maximillian Van Wyk de Vries ◽  
Shashank Bhushan ◽  
David Shean ◽  
Etienne Berthier ◽  
César Deschamps-Berger ◽  
...  

On the 7th of February 2021, a large rock-ice avalanche triggered a debris flow in Chamoli district, Uttarakhand, India, resulting in over 200 dead or missing and widespread infrastructure damage. The rock-ice avalanche originated from a steep, glacierized, north-facing slope with a history of instability, most recently a 2016 ice avalanche. In this work, we assess whether the slope exhibited any precursory displacement prior to collapse. We evaluate monthly slope motion over the 2015-2021 period through feature tracking of high-resolution optical satellite imagery from Sentinel-2 (10 m ground sampling distance) and PlanetScope (3-4 m ground sampling distance). Assessing displacement of the underlying rock is complicated by the presence of glaciers over a portion of the collapse area, which show surface displacements due to internal ice deformation. We overcome this by tracking motion over ice-free portions of the slide area and evaluating the spatial pattern of velocity changes in glaciated areas. Preliminary results show that the rock-ice avalanche block slipped over 10 m in the 5 years prior to collapse, with particularly rapid slip in the summers of 2017 and 2018. These results provide insight into the precursory conditions of the deadly rock-ice avalanche, and highlight the potential of high-resolution optical satellite image feature tracking for monitoring the stability of high-risk slopes.
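Feature tracking between repeat satellite acquisitions typically means finding, for a patch of the earlier image, the displacement that maximizes a similarity score in the later image. The sketch below does this with integer-pixel normalized cross-correlation; it is a minimal illustration under assumed names and parameters, whereas operational pipelines add subpixel refinement, co-registration, and orthorectification.

```python
import numpy as np

def track_offset(ref, search, patch_yx, patch_size, max_shift):
    """Find the integer (dy, dx) displacement of a reference patch in a
    later image by maximizing normalized cross-correlation. A minimal
    sketch of optical-image feature tracking."""
    y0, x0 = patch_yx
    p = ref[y0:y0 + patch_size, x0:x0 + patch_size].astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12)  # zero-mean, unit-variance
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            q = search[y0 + dy:y0 + dy + patch_size,
                       x0 + dx:x0 + dx + patch_size].astype(float)
            if q.shape != p.shape:  # candidate window left the image
                continue
            q = (q - q.mean()) / (q.std() + 1e-12)
            score = float((p * q).mean())  # NCC in [-1, 1]
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx
```

Dividing the recovered pixel offset by the time between acquisitions and multiplying by the ground sampling distance gives a surface velocity, which is how multi-metre precursory slip becomes detectable in 3-10 m imagery.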


2021 ◽  
Author(s):  
Md Forhad Ebn Anwar

Collisions between vehicles on highways are very frequent. Because of high speeds (more than 100 km/h), the momentum at collision is so high that it leads to severe casualties. An automatic driving assistance system can help vehicle operators make decisions based on realistic, practical safety calculations. It is always better to have a third eye working in parallel with the human driver to avoid road accidents. Several technologies are used to build driving assistance systems that achieve high accuracy in the detection, identification, and distance measurement of obstacles; vision-based systems are one of them. A mono-vision system provides a cheaper and faster solution than stereo vision. This project was conducted with the objective of understanding the computational complexity of implementing mono-vision camera-based object detection, where the system generates a warning if a detected object is moving toward the target. Processing and analysis of the captured video image is the core mechanism of the implementation, and an internal image-generator module is used to mimic an actual video camera. The apparent size of the object's shape is used for decision making. The simulated image pattern can change its dimensions to represent vehicle movement in one direction (back and forth). In this work, the on-chip car image generation subsystem was proposed, designed, and partially implemented on an FPGA, using a Xilinx Zynq-7010 (ZYNQ XC7Z010-1CLG400C) FPGA development board. Keywords: computer vision, mono vision, image processing on FPGA, automatic driving assistance, vehicle detection.
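The size-based decision rule described above follows from the pinhole camera model: an object of known real width W imaged w pixels wide with focal length f (in pixels) lies at roughly Z = f * W / w, and a growing apparent width between frames signals an approaching object. The sketch below illustrates both ideas; the function names, the known-width assumption, and the threshold are illustrative, not the thesis's FPGA design.

```python
def distance_from_width(pixel_width, real_width_m, focal_px):
    """Pinhole-model range estimate Z = f * W / w for an object of
    known real width (a common mono-vision ranging heuristic)."""
    return focal_px * real_width_m / pixel_width

def approaching(prev_width_px, curr_width_px, threshold=1.02):
    """Flag a potential collision risk when the apparent width grows
    by more than a small factor between frames (assumed threshold)."""
    return curr_width_px >= prev_width_px * threshold
```

For example, a 1.8 m wide car imaged 100 px wide by a camera with a 700 px focal length is about 12.6 m away; if its apparent width then grows frame over frame, the warning fires without ever needing an absolute range.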

