Mobile Robot Playback Navigation Based on Robot Pose Calculation Using Memorized Omnidirectional Images

2002 ◽  
Vol 14 (4) ◽  
pp. 366-374 ◽  
Author(s):  
Lixin Tang ◽  
Shin'ichi Yuta

We propose a method of autonomous navigation for mobile robots in indoor environments based on a teaching and playback scheme. During teaching, an operator guides the robot manually. While moving, the robot memorizes its motion measured by odometry together with an environmental image taken by an omnidirectional camera at each time interval, and regards the places where images were taken as target positions. When navigating autonomously, the robot plays back the memorized motion to track each target position and corrects its position by computing its relative pose from the current and memorized images, so as to follow the taught route. Vertical edges in the environment are used as landmarks to calculate the robot position, and an evaluation function we define is used to find corresponding vertical edges between two images. The robot can thus navigate robustly in real building environments. The method also mitigates the problem of the operator occluding part of the environment in the images captured during teaching.
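The edge-correspondence step can be sketched as follows. The evaluation function here, a weighted sum of bearing difference and relative edge-length difference, and its acceptance threshold are illustrative assumptions, not the function actually defined in the paper:

```python
import math

def match_vertical_edges(current, memorized, w_bearing=1.0, w_length=0.5):
    """Greedily match vertical edges between two omnidirectional images.

    Each edge is (bearing_rad, length_px). Lower scores of the (assumed)
    evaluation function mean a better correspondence.
    """
    def score(a, b):
        # Wrapped angular difference between the two edge bearings
        d_bearing = abs(math.atan2(math.sin(a[0] - b[0]), math.cos(a[0] - b[0])))
        # Relative difference in apparent edge length
        d_length = abs(a[1] - b[1]) / max(a[1], b[1])
        return w_bearing * d_bearing + w_length * d_length

    pairs, used = [], set()
    for i, e in enumerate(current):
        best = min(((score(e, m), j) for j, m in enumerate(memorized) if j not in used),
                   default=None)
        if best is not None and best[0] < 0.5:  # acceptance threshold (assumed)
            pairs.append((i, best[1]))
            used.add(best[1])
    return pairs

# Two edges seen from a slightly rotated viewpoint
cur = [(0.10, 40.0), (1.60, 80.0)]
mem = [(0.05, 42.0), (1.55, 78.0)]
print(match_vertical_edges(cur, mem))  # → [(0, 0), (1, 1)]
```

The matched pairs can then feed the relative-pose calculation that corrects the odometry drift between the current and memorized positions.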

2019 ◽  
Vol 9 (3) ◽  
pp. 377 ◽  
Author(s):  
Sergio Cebollada ◽  
Luis Payá ◽  
Walterio Mayol ◽  
Oscar Reinoso

This paper presents an extended study of the compression of topological models of indoor environments. The performance of two clustering methods is tested to assess their utility both for building a model of the environment and for solving the localization task. Omnidirectional images are used to create the compact model as well as to estimate the robot position within the environment. These images are characterized through global appearance descriptors, since such descriptors constitute a straightforward mechanism to build a compact model and estimate the robot position. To evaluate the goodness of the proposed clustering algorithms, several datasets are considered, composed of either panoramic or omnidirectional images captured in several environments under real operating conditions. The results confirm that compressing the visual information leads to a more efficient localization process, saving computation time while maintaining relatively good accuracy.
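A minimal sketch of the compression-and-localization pipeline, assuming plain k-means as a stand-in for the clustering methods compared in the paper and Euclidean distance between global-appearance descriptors:

```python
import random

def kmeans(descriptors, k, iters=20, seed=0):
    """Plain k-means over global-appearance descriptors (lists of floats).

    A generic stand-in for the clustering methods compared in the paper;
    returns centroids and the cluster label of each descriptor.
    """
    rng = random.Random(seed)
    centroids = [list(d) for d in rng.sample(descriptors, k)]
    labels = [0] * len(descriptors)
    for _ in range(iters):
        for i, d in enumerate(descriptors):
            labels[i] = min(range(k),
                            key=lambda c: sum((x - y) ** 2
                                              for x, y in zip(d, centroids[c])))
        for c in range(k):
            members = [descriptors[i] for i in range(len(descriptors)) if labels[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

def localize(query, centroids, labels, descriptors):
    """Coarse-to-fine localization: nearest centroid, then nearest image in that cluster."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    c = min(range(len(centroids)), key=lambda j: dist(query, centroids[j]))
    members = [i for i in range(len(descriptors)) if labels[i] == c]
    return min(members, key=lambda i: dist(query, descriptors[i]))

# Two well-separated groups of 2-D toy descriptors
data = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]
cents, labs = kmeans(data, k=2)
print(localize([4.9, 5.0], cents, labs, data))  # → 2
```

The efficiency gain comes from comparing the query against k centroids first, instead of against every image in the map.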


Author(s):  
JUAN ANDRADE-CETTO ◽  
ALBERTO SANFELIU

A system that builds and maintains a dynamic map for a mobile robot is presented. A learning rule associated with each observed landmark is used to compute its robustness. The position of the robot during map construction is estimated by combining sensor readings, motion commands, and the current map state by means of an Extended Kalman Filter. The combination of landmark strength validation and Kalman filtering for map updating and robot position estimation allows for robust learning of moderately dynamic indoor environments.
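A toy sketch of such a landmark-strength learning rule; the reinforcement/decay form and the pruning threshold are assumptions for illustration, not the rule from the paper:

```python
def update_strength(strength, observed, gain=0.2, decay=0.1):
    """Hypothetical landmark-robustness rule: reinforce a landmark's strength
    when it is re-observed, decay it when it is expected but missed.
    Strength stays in [0, 1]."""
    if observed:
        return min(1.0, strength + gain * (1.0 - strength))
    return max(0.0, strength - decay)

def prune(landmarks, threshold=0.2):
    """Keep only landmarks robust enough to belong in the quasi-static map."""
    return {lid: s for lid, s in landmarks.items() if s >= threshold}

marks = {"door": 0.5, "chair": 0.5}
for _ in range(4):  # the door is re-observed four times, the chair is missed four times
    marks["door"] = update_strength(marks["door"], observed=True)
    marks["chair"] = update_strength(marks["chair"], observed=False)
print(prune(marks))  # only 'door' survives pruning
```

Landmarks that fade below the threshold (moved furniture, people) drop out of the map, while stable structure keeps being used by the EKF for position estimation.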


2013 ◽  
Vol 20 (4) ◽  
pp. 40-48 ◽  
Author(s):  
Shaojie Shen ◽  
Nathan Michael ◽  
Vijay Kumar

2021 ◽  
Vol 15 (03) ◽  
pp. 337-357
Author(s):  
Alexander Julian Golkowski ◽  
Marcus Handte ◽  
Peter Roch ◽  
Pedro J. Marrón

For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale and impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is not much general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. We vary the three main parameters of the physical camera setup, namely the angle and distance between the cameras and the field of view, as well as a softer parameter, the image resolution. Based on the results, we derive several guidelines on how to choose the parameters for an application.
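The effect of the physical setup on distance estimation can be illustrated with the ideal rectified-stereo model Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. This is the textbook simplification, not the paper's experimental software setup:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Ideal rectified stereo: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid match")
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, disparity_px, disp_error_px=1.0):
    """First-order depth uncertainty: |dZ| ≈ Z² / (f·B) · |dd|.

    Shows why a wider baseline or a longer focal length (narrower field of
    view at a given resolution) improves far-range accuracy."""
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    return z * z / (focal_px * baseline_m) * disp_error_px

print(depth_from_disparity(800.0, 0.25, 50.0))  # → 4.0
```

The quadratic growth of the error with depth is one reason the distance between the cameras matters so much in the experiments.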


Agriculture ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 954
Author(s):  
Abhijeet Ravankar ◽  
Ankit A. Ravankar ◽  
Arpit Rawankar ◽  
Yohei Hoshino

In recent years, autonomous robots have extensively been used to automate several vineyard tasks. Autonomous navigation is an indispensable component of such field robots. Autonomous and safe navigation has been well studied in indoor environments and many algorithms have been proposed. However, unlike structured indoor environments, vineyards pose special challenges for robot navigation. Particularly, safe robot navigation is crucial to avoid damaging the grapes. In this regard, we propose an algorithm that enables autonomous and safe robot navigation in vineyards. The proposed algorithm relies on data from a Lidar sensor and does not require a GPS. In addition, the proposed algorithm can avoid dynamic obstacles in the vineyard while smoothing the robot’s trajectories. The curvature of the trajectories can be controlled, keeping a safe distance from both the crop and the dynamic obstacles. We have tested the algorithm in both a simulation and with robots in an actual vineyard. The results show that the robot can safely navigate the lanes of the vineyard and smoothly avoid dynamic obstacles such as moving people without abruptly stopping or executing sharp turns. The algorithm performs in real-time and can easily be integrated into robots deployed in vineyards.
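A much-simplified sketch of lane keeping from lidar data: balance the average range to the left and right vine rows and steer proportionally with saturation. The angular windows, gain, and saturation limit are illustrative assumptions, not the paper's algorithm:

```python
def lane_center_offset(ranges, angles_deg):
    """Estimate lateral offset from the lane center using a 2-D lidar scan.

    Averages the range to the left row (positive angles) and the right row
    (negative angles); a positive offset means the robot has drifted toward
    the right row and should steer left."""
    left = [r for r, a in zip(ranges, angles_deg) if 30 <= a <= 90]
    right = [r for r, a in zip(ranges, angles_deg) if -90 <= a <= -30]
    return (sum(left) / len(left) - sum(right) / len(right)) / 2.0

def steering_command(offset_m, gain=0.8, max_rate=0.5):
    """Proportional steering with saturation, so corrections stay smooth
    and the robot never executes a sharp turn inside the row."""
    return max(-max_rate, min(max_rate, gain * offset_m))

angles = [-60, -45, -30, 30, 45, 60]
ranges = [0.8, 0.9, 1.0, 1.4, 1.5, 1.6]  # right row nearer than left
off = lane_center_offset(ranges, angles)
print(off, steering_command(off))  # robot has drifted right, steers left
```

Dynamic obstacles would additionally mask out scan sectors and adjust the command, which this sketch omits.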


2021 ◽  
Author(s):  
Ahmet TOP ◽  
Muammer GÖKBULUT

In this study, a Bluetooth-based Android application interface is developed for manual and automatic control of a four-wheel-driven mobile robot designed for education, research, health, military, and many other fields. The application, built with MIT App Inventor, consists of three components: the main screen, the manual control screen, and the automatic control screen. On the main screen, the user selects the control mode (manual or automatic) and establishes the Bluetooth connection between the mobile robot and the Android phone. When the robot is operated manually, for calibration or manual positioning purposes, the manual control screen is used to set the desired robot movement and speed by hand. When automatic motion control is needed, the desired robot position and speed data are sent to the mobile robot processor through the automatic control screen. At the first stage of the work, the proposed Android application is developed with the design and block editors of MIT App Inventor, and the compiled application is installed on the Android phone. Next, communication is established between the Android application and the Arduino microcontroller that controls the robot over the Bluetooth protocol. The accuracy of the data dispatched to the Arduino is tested on the serial connection screen, validating that data from the Android application are transferred to the Arduino smoothly. Finally, the manual and automatic controls of the proposed mobile robot are performed experimentally, and the successful coordination between the Android application and the mobile robot is demonstrated.
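The data exchange between the app and the Arduino can be illustrated with a hypothetical text-frame format; the paper does not specify the actual framing, so the `M:`/`A:` commands below are assumptions for illustration only:

```python
def parse_command(frame):
    """Parse a hypothetical command frame as it might be sent from the
    App Inventor screens over Bluetooth serial:

        M:<left_speed>,<right_speed>   manual control
        A:<x>,<y>,<speed>              automatic position control
    """
    kind, _, payload = frame.strip().partition(":")
    values = [float(v) for v in payload.split(",")]
    if kind == "M" and len(values) == 2:
        return {"mode": "manual", "left": values[0], "right": values[1]}
    if kind == "A" and len(values) == 3:
        return {"mode": "auto", "x": values[0], "y": values[1], "speed": values[2]}
    raise ValueError(f"malformed frame: {frame!r}")

print(parse_command("A:1.5,2.0,0.3"))
```

On the robot side, the equivalent parser would run in the Arduino sketch's serial loop; validating frames like this is what the serial connection screen test in the study checks end to end.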


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guangbing Zhou ◽  
Jing Luo ◽  
Shugong Xu ◽  
Shunqing Zhang ◽  
Shige Meng ◽  
...  

Purpose
Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation depends on a single sensor to perform autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments.

Design/methodology/approach
Multiple sensor data, i.e. readings from an inertial measurement unit, an odometer and a laser radar, are used. An extended Kalman filter (EKF) then incorporates these data, and the mobile robot can perform autonomous localization according to the proposed EKF-based MDF method in complex indoor environments.

Findings
The proposed method has been experimentally verified in different indoor environments, i.e. an office, a passageway and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness during navigation.

Originality/value
Indoor localization precision depends largely on the data collected from multiple sensors. The proposed method incorporates these collected data reasonably and can guide the mobile robot to perform autonomous navigation (AN) in indoor environments. Therefore, the output of this paper can be used for AN in complex and unknown indoor environments.
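The predict/correct cycle behind such fusion can be illustrated with a minimal one-dimensional linear Kalman filter; the paper uses a full EKF over the robot pose, so this is a deliberate simplification in which odometry/IMU data drive the prediction and a laser-radar fix drives the correction:

```python
def predict(x, P, u, Q):
    """Prediction step: apply the odometry/IMU motion input u (1-D state),
    growing the uncertainty P by the process noise Q."""
    return x + u, P + Q

def update(x, P, z, R):
    """Correction step: fuse a position measurement z (e.g. from the laser
    radar) with measurement noise R via the Kalman gain."""
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

# Fuse a noisy odometry step with a lidar position fix
x, P = 0.0, 1.0
x, P = predict(x, P, u=1.0, Q=0.1)  # odometry says we moved 1 m
x, P = update(x, P, z=1.2, R=0.4)   # lidar says we are at 1.2 m
print(round(x, 3), round(P, 3))  # → 1.147 0.293
```

The fused estimate lands between the two sources, weighted by their uncertainties, and the posterior variance shrinks below either input alone, which is the essence of the MDF approach.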


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 220 ◽  
Author(s):  
Ruibin Guo ◽  
Keju Peng ◽  
Dongxiang Zhou ◽  
Yunhui Liu

Orientation estimation is a crucial part of robotics tasks such as motion control, autonomous navigation, and 3D mapping. In this paper, we propose a robust visual-based method to estimate robots’ drift-free orientation with RGB-D cameras. First, we detect and track hybrid features (i.e., plane, line, and point) from color and depth images, which provide reliable constraints even in challenging environments with low texture or no consistent lines. Then, we construct a cost function based on these features and, by minimizing this function, we obtain the accurate rotation matrix of each captured frame with respect to its reference keyframe. Furthermore, we present a vanishing direction-estimation method to extract the Manhattan World (MW) axes; by aligning the current MW axes with the global MW axes, we refine the aforementioned rotation matrix of each keyframe and achieve drift-free orientation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for orientation estimation. In addition, we have applied our proposed visual compass to pose estimation, and the evaluation on public sequences shows improved accuracy.
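The drift-removal idea can be sketched in one dimension: snap the estimated yaw to the nearest Manhattan World axis when the deviation is small. This is a deliberate simplification of the paper's full alignment of the current MW axes with the global ones:

```python
import math

def refine_yaw_to_manhattan(yaw_rad, max_correction=math.radians(10)):
    """Drift-correction sketch: snap the estimated yaw to the nearest
    Manhattan World axis (a multiple of 90 degrees) when the deviation is
    small; otherwise keep the frame-to-frame estimate. The 10-degree
    correction window is an assumed tuning parameter."""
    nearest = round(yaw_rad / (math.pi / 2)) * (math.pi / 2)
    if abs(yaw_rad - nearest) <= max_correction:
        return nearest
    return yaw_rad

print(round(math.degrees(refine_yaw_to_manhattan(math.radians(87.0))), 1))  # → 90.0
```

Because each keyframe is re-anchored to the same global axes, small per-frame rotation errors can no longer accumulate over long trajectories.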


Micromachines ◽  
2018 ◽  
Vol 9 (7) ◽  
pp. 351 ◽  
Author(s):  
Luca Brayda ◽  
Fabrizio Leo ◽  
Caterina Baccelliere ◽  
Elisabetta Ferrari ◽  
Claudia Vigini

Autonomous navigation in novel environments still represents a challenge for people with visual impairment (VI). Pin array matrices (PAM) are an effective way to display spatial information to VI people in educative/rehabilitative contexts, as they provide high flexibility and versatility. Here, we tested the effectiveness of a PAM in VI participants in an orientation and mobility task. They haptically explored a map showing a scaled representation of a real room on the PAM. The map further included a symbol indicating a virtual target position. Then, participants entered the room and attempted to reach the target three times. While a control group only reviewed the same, unchanged map on the PAM between trials, an experimental group also received an updated map representing, in addition, the position they previously reached in the room. The experimental group significantly improved across trials by having both reduced self-location errors and reduced completion time, unlike the control group. We found that learning spatial layouts through updated tactile feedback on programmable displays outperforms conventional procedures on static tactile maps. This could represent a powerful tool for navigation, both in rehabilitation and everyday life contexts, improving spatial abilities and promoting independent living for VI people.


Author(s):  
Primo Zingaretti ◽  
Andrea Ascani ◽  
Adriano Mancini ◽  
Emanuele Frontoni

Monte Carlo Localization (MCL) is a common method for self-localization of a mobile robot under the assumption that a map of the environment is available. In addition to laser scanners and sonar sensors, localization approaches using vision sensors have also recently been developed with good results. In this paper we present two variations that improve the standard implementation of the MCL algorithm. The first is a new strategy for generating particles, both at initialization and at the resampling stage: new particles are generated near the positions of images in the learning dataset or, respectively, in the neighborhood of the particles with the highest weights in the previous estimate. The second is a new two-step approach to estimating the robot position: particles are first clustered, and the estimate is then taken as the center of the cluster with the highest total weight, computed as a weighted mean of its particles. The improved MCL algorithm described in this paper is compared with the standard MCL algorithm in terms of localization accuracy. In particular, tests were performed using local feature matching of omnidirectional images implemented on a real robot system operating in large outdoor environments with high dynamic content. The results show that the localization accuracy of the improved MCL algorithm is more than twice that of the standard MCL algorithm.
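The second variation can be sketched as follows, using one-dimensional particles for clarity; the greedy clustering rule and its radius are illustrative assumptions:

```python
def cluster_particles(particles, radius=0.5):
    """Greedy proximity clustering of (position, weight) particles:
    consecutive sorted particles within `radius` join the same cluster."""
    clusters = []
    for x, w in sorted(particles):
        if clusters and x - clusters[-1][-1][0] <= radius:
            clusters[-1].append((x, w))
        else:
            clusters.append([(x, w)])
    return clusters

def estimate_pose(particles, radius=0.5):
    """Improved-MCL-style estimate: take the cluster with the highest total
    weight and return its weighted mean, instead of averaging all particles."""
    clusters = cluster_particles(particles, radius)
    best = max(clusters, key=lambda c: sum(w for _, w in c))
    total = sum(w for _, w in best)
    return sum(x * w for x, w in best) / total

# A heavy cluster near x=2 and a light outlier cluster near x=10
parts = [(1.9, 0.3), (2.0, 0.4), (2.1, 0.2), (10.0, 0.1)]
print(round(estimate_pose(parts), 3))  # → 1.989
```

A plain weighted mean over all particles would be dragged toward the outlier at x=10 (giving about 2.79 here); restricting the estimate to the dominant cluster removes that bias, which matters in multimodal belief distributions.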

