Mobile robot self-location using model-image feature correspondence

1996 ◽  
Vol 12 (1) ◽  
pp. 63-77 ◽  
Author(s):  
R. Talluri ◽  
J.K. Aggarwal

2015 ◽  
Vol 27 (6) ◽  
pp. 681-690 ◽  
Author(s):  
Hayato Hagiwara ◽  
Yasufumi Touma ◽  
Kenichi Asami ◽  
Mochimitsu Komori

[Figure: Mobile robot with a stereo vision]

This paper describes an autonomous mobile robot stereo vision system that uses gradient feature correspondence and computes local image features on a field programmable gate array (FPGA). Among the interest point detectors and descriptors studied for mobile robot navigation are the Harris operator and the scale-invariant feature transform (SIFT). Most of these, however, require heavy computation and may overburden some computers. Our purpose here is to present an interest point detector and a descriptor suitable for FPGA implementation. Results show that a detector using gradient variance inspection runs faster than SIFT or speeded-up robust features (SURF), and is more robust against illumination changes than any other method compared in this study. A descriptor with a hierarchical gradient structure has a simpler algorithm than the SIFT and SURF descriptors, and its stereo matching results outperform those of SIFT and SURF.
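The gradient-variance idea can be sketched in plain software as follows. This is a minimal illustrative version only: the window size, threshold, and non-maximum suppression are assumptions, and the paper's actual FPGA pipeline is not reproduced here.

```python
import numpy as np

def gradient_variance_keypoints(img, win=5, thresh=1.0):
    """Score each pixel by the variance of the gradient magnitude in a
    local win x win window and keep thresholded local maxima.  A
    simplified software sketch of gradient-variance inspection."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    # Local variance over all win x win windows (valid region only).
    windows = np.lib.stride_tricks.sliding_window_view(mag, (win, win))
    var = windows.var(axis=(-1, -2))
    # Non-maximum suppression: keep points that dominate their 3x3
    # neighborhood and exceed the variance threshold.
    pts = []
    for i in range(1, var.shape[0] - 1):
        for j in range(1, var.shape[1] - 1):
            if var[i, j] > thresh and var[i, j] == var[i-1:i+2, j-1:j+2].max():
                pts.append((i + win // 2, j + win // 2))
    return pts
```

Because the score is a local statistic of gradients, additive brightness offsets cancel in the differentiation step, which is consistent with the illumination robustness the abstract reports.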


2015 ◽  
Vol 27 (4) ◽  
pp. 392-400 ◽  
Author(s):  
Keita Kurashiki ◽  
Mareus Aguilar ◽  
Sakon Soontornvanichkit

[Figure: Mobile robot with a stereo camera]

Autonomous mobile robot navigation has recently been an active research area. In Japan, the Tsukuba Challenge has been held annually since 2007 to realize autonomous mobile robots that coexist safely with human beings in society. Through the technological incentives of this effort, laser range finder (LRF) based navigation has improved rapidly. A remaining technical issue is reducing the required prior information, because most of these techniques need a precise 3D model of the environment, which is poor in both maintainability and scalability. On the other hand, despite intensive studies on vision-based navigation using cameras, no robot in the Challenge has achieved full camera navigation. In this paper, an image-based control law to follow the road boundary is proposed. This method is part of a topological navigation scheme that reduces prior information and enhances the scalability of the map. Because the controller is designed from the interaction model between the robot motion and the image features in the front view, the method is robust to camera calibration error. The proposed controller is tested through several simulations and indoor/outdoor experiments to verify its performance and robustness. Finally, our results in Tsukuba Challenge 2014 using the proposed controller are presented.
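An image-based boundary-following law of this kind maps image-feature errors directly to steering, with no 3D reconstruction in the loop. The sketch below is a bare proportional controller under assumed feature definitions (boundary angle `theta` and lateral offset `d` in the front image); the gains and the interaction-model-based design of the paper are not reproduced.

```python
def boundary_follow_control(theta, d, theta_ref=0.0, d_ref=0.0,
                            k_theta=1.5, k_d=0.8, v=0.5):
    """Map road-boundary image features -- apparent angle theta (rad)
    and lateral offset d (normalized image units) -- to a constant
    forward speed v and a corrective yaw rate omega.  Gains k_theta
    and k_d are illustrative values."""
    omega = -k_theta * (theta - theta_ref) - k_d * (d - d_ref)
    return v, omega
```

Because the feedback is closed on the image features themselves, moderate calibration error perturbs the effective gains rather than the regulated equilibrium, which is one intuition behind the robustness claim.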


2021 ◽  
Vol 11 (8) ◽  
pp. 3360
Author(s):  
Huei-Yung Lin ◽  
Chien-Hsing He

This paper presents a novel self-localization technique for mobile robots based on image feature matching from omnidirectional vision. The proposed method first constructs a virtual space with synthetic omnidirectional imaging to simulate a mobile robot equipped with an omnidirectional vision system in the real world. In the virtual space, a number of vertical and horizontal lines are generated according to the structure of the environment. They are imaged by the virtual omnidirectional camera using the catadioptric projection model. The omnidirectional images derived from the virtual and real environments are then used to match the synthetic lines to real scene edges. Finally, the pose and trajectory of the mobile robot in the real world are estimated by the efficient perspective-n-point (EPnP) algorithm based on the line feature matches. In our experiments, the effectiveness of the proposed self-localization technique was validated by the navigation of a mobile robot in a real-world environment.
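The synthetic imaging step rests on a central catadioptric projection (unified sphere) model, under which straight scene lines map to curves in the omnidirectional image. A minimal sketch follows; the mirror parameter `xi`, focal length, and principal point are illustrative defaults, not the paper's calibration.

```python
import numpy as np

def catadioptric_project(X, xi=0.9, f=300.0, c=(320.0, 240.0)):
    """Central catadioptric projection (unified sphere model): project
    a 3D point onto the omnidirectional image plane.  The point is
    first mapped to the unit sphere, then perspectively projected from
    a center shifted by the mirror parameter xi."""
    X = np.asarray(X, dtype=float)
    Xs = X / np.linalg.norm(X)          # project onto the unit sphere
    x, y, z = Xs
    denom = z + xi                      # shifted projection center
    return np.array([f * x / denom + c[0], f * y / denom + c[1]])

def project_line(P0, P1, n=20, **kw):
    """Sample a 3D line segment and project each sample, as done when
    imaging the synthetic vertical/horizontal environment lines."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1.0 - t) * np.asarray(P0, float) + t * np.asarray(P1, float)
    return np.array([catadioptric_project(p, **kw) for p in pts])
```

The projected curves are what get matched against real scene edges before the pose is recovered from the correspondences.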


2012 ◽  
Vol 2012 ◽  
pp. 1-8
Author(s):  
Lejla Banjanovic-Mehmedovic ◽  
Dzenisan Golic ◽  
Fahrudin Mehmedovic ◽  
Jasna Havic

This paper presents a visual/motor behavior learning approach based on neural networks. We propose the Behavior Chain Model (BCM) as a way of learning behaviors. The task of our behavior-based system is for a mobile robot to detect a target and drive/act towards it. First, the mapping relations between the image feature domain of the object and the robot action domain are derived. Second, a multilayer neural network is used for offline learning of these mapping relations. Through the training process, this learning structure connects the visual perceptions to the motor sequence of actions needed to grip a target. Last, using behavior learning through an observed action chain, we can predict mobile robot behavior for a variety of similar tasks in similar environments. The prediction results suggest that the methodology is adequate and could serve as an idea for designing various kinds of mobile robot behavior assistance.
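Offline learning of visual-to-motor mapping relations can be illustrated with a small multilayer network trained by gradient descent. The feature and action domains below are toy stand-ins (target offset and apparent size mapped to turn rate and speed), not the paper's actual image features, network topology, or gripping actions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mapping relations: image features (target's horizontal offset,
# apparent size) -> actions (turn rate, forward speed).
X = rng.uniform(-1.0, 1.0, (200, 2))
Y = np.column_stack([-0.8 * X[:, 0],            # steer toward target
                     0.5 * (1.0 - X[:, 1])])    # slow down when close

# One hidden tanh layer, trained offline by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
lr, losses = 0.1, []
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                    # forward pass
    P = H @ W2 + b2
    E = P - Y
    losses.append(float((E ** 2).mean()))
    dH = (E @ W2.T) * (1.0 - H ** 2)            # backpropagation
    W2 -= lr * (H.T @ E) / len(X); b2 -= lr * E.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(axis=0)
```

After training, the network can be queried on unseen but similar feature vectors, which is the sense in which the learned chain predicts behavior for similar tasks in similar environments.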


2020 ◽  
Vol 29 ◽  
pp. 3506-3519
Author(s):  
Chen Zhao ◽  
Zhiguo Cao ◽  
Jiaqi Yang ◽  
Ke Xian ◽  
Xin Li
