A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors

2017 ◽  
Vol 2017 ◽  
pp. 1-20 ◽  
Author(s):  
L. Payá ◽  
A. Gil ◽  
O. Reinoso

Nowadays, the field of mobile robotics is evolving quickly, and a variety of autonomous vehicles are available to solve different tasks. Advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser rangefinders. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is the improvement of the autonomy of mobile robots. To this end, building robust models of the environment and solving the localization and navigation problems are important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems: how researchers have addressed them by means of omnidirectional vision, the main frameworks they have proposed, and how these have evolved in recent years.

Author(s):  
Alauddin Yousif Al-Omary

In this chapter, the benefit of equipping a robot with odor sensors is investigated. The chapter addresses the types of tasks mobile robots can accomplish with the help of olfactory sensing, the technical challenges in mobile robot olfaction, and the current status of the field. It also covers the simple and complex electronic olfaction sensors used in mobile robotics, the challenges of using chemical sensors, the many types of algorithms applied to robot olfaction, and future research directions in the field.


Robotics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 109
Author(s):  
Uwe Jahn ◽  
Daniel Heß ◽  
Merlin Stampa ◽  
Andreas Sutorma ◽  
Christof Röhrig ◽  
...  

Mobile robotics is a widespread field of research whose differentiation from general robotics is often based only on the ability to move. However, mobile robots need unique capabilities, such as navigation, and there are limiting factors, such as a typically limited energy supply, that must be considered when developing a mobile robot. This article deals with the definition of an archetypal mobile robot, represented in the form of a taxonomy. Types and fields of application are defined, and a systematic literature review is carried out to identify typical capabilities and implementations, considering reference systems, textbooks, and literature references.


2016 ◽  
Vol 2016 ◽  
pp. 1-21 ◽  
Author(s):  
L. Payá ◽  
O. Reinoso ◽  
Y. Berenguer ◽  
D. Úbeda

Nowadays, the design of fully autonomous mobile robots is a key discipline. Building a robust model of the unknown environment is an important ability the robot must develop. Using this model, the robot must be able to estimate its current position and to navigate to target points. Omnidirectional vision sensors are commonly used to solve these tasks. When using this source of information, the robot must extract relevant information from the scenes both to build the model and to estimate its position. Possible frameworks include the classical approach of extracting and describing local features and working with the global appearance of the scenes, which has emerged as a conceptually simple and robust solution. While feature-based techniques have been studied extensively in the literature, appearance-based ones require a full comparative evaluation to reveal the performance of the existing methods and to tune their parameters correctly. This work carries out a comparative evaluation of four global-appearance techniques in map building tasks, using omnidirectional visual information as the only source of data from the environment.
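The abstract does not name the four techniques evaluated, but the global-appearance idea can be illustrated with one of its simplest instances: block-averaging a panoramic image into a coarse grid and comparing the resulting vectors. The descriptor choice and grid size below are illustrative assumptions, not the paper's methods.

```python
import numpy as np

def global_descriptor(img, size=(8, 32)):
    """Holistic descriptor: block-average a grayscale panoramic image to a
    coarse grid and flatten it into one normalized vector."""
    h, w = img.shape
    gh, gw = size
    h2, w2 = (h // gh) * gh, (w // gw) * gw          # crop to grid multiples
    small = img[:h2, :w2].reshape(gh, h2 // gh, gw, w2 // gw).mean(axis=(1, 3))
    v = small.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def localize(query, map_descriptors):
    """Return the index of the stored map image whose descriptor is closest
    to the query descriptor (nearest neighbour in Euclidean distance)."""
    dists = [np.linalg.norm(query - d) for d in map_descriptors]
    return int(np.argmin(dists))
```

Localization then reduces to computing the descriptor of the current view and finding the nearest stored descriptor in the map, which is the core of appearance-based map building and matching.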


2021 ◽  
pp. 69-72
Author(s):  
Aryan Verma

Presently, computer vision is among the most active topics in Artificial Intelligence and is being extensively used in robotics, object detection, image classification, autonomous vehicles and tracking, and semantic segmentation, along with photo correction in various apps. In self-driving vehicles, vision remains the main source of information for detecting lanes, traffic lights, pedestrian crossings, and other visual features. [2]


2020 ◽  
Vol 12 (1–3) ◽  
pp. 1-308 ◽  
Author(s):  
Joel Janai ◽  
Fatma Güney ◽  
Aseem Behl ◽  
Andreas Geiger

Author(s):  
Ulrich Nehmzow

Mobile robots can be a useful tool for the life scientist in that they combine perception, computation, and action, and are therefore comparable to living beings. They have, however, the distinct advantage that their behaviour can be manipulated by changing their programs and/or their hardware. In this chapter, quantitative measurements of mobile robot behaviour and a theory of robot-environment interaction that can easily be applied to the analysis of the behaviour of mobile robots and animals are presented. Interestingly, such an analysis is based on chaos theory.
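The chapter's chaos-theoretic analysis rests on quantities such as Lyapunov exponents, which measure how fast nearby trajectories diverge. As a minimal illustration of how such an exponent is estimated from an orbit (using the logistic map as a stand-in for a robot trajectory, not the chapter's own data), one can average the log-derivative along the orbit:

```python
import math

def lyapunov_logistic(r, x0=0.3, burn_in=1000, n=200_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)| along the orbit."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n
```

A positive estimate indicates chaotic, diverging behaviour; for the fully chaotic case r = 4 the exact value is ln 2.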


Author(s):  
Lorenzo Fernández Rojo ◽  
Luis Paya ◽  
Francisco Amoros ◽  
Oscar Reinoso

Mobile robots have extended to many different environments, where they have to move autonomously to fulfill an assigned task. With this aim, it is necessary that the robot build a model of the environment and estimate its position using this model. These two problems are often faced simultaneously. This process is known as SLAM (simultaneous localization and mapping) and is very common, since when a robot begins moving in a previously unknown environment it must start generating a model from scratch while estimating its position at the same time. This chapter focuses on the use of computer vision to solve this problem. The main objective is to develop and test an algorithm to solve the SLAM problem using two sources of information: (1) the global appearance of omnidirectional images captured by a camera mounted on the mobile robot and (2) the robot's internal odometry. A hybrid metric-topological approach is proposed to solve the SLAM problem.
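The two information sources can be sketched in isolation: odometry supplies a metric pose estimate by dead reckoning, while appearance distances decide when the topological map gains a new node. The motion model and the node-creation threshold below are illustrative assumptions, not the chapter's actual formulation.

```python
import numpy as np

def integrate_odometry(pose, d_trans, d_rot):
    """Dead-reckon a planar pose (x, y, theta) from one odometry increment:
    translate along the current heading, then rotate (one common convention)."""
    x, y, th = pose
    x += d_trans * np.cos(th)
    y += d_trans * np.sin(th)
    th = (th + d_rot + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return np.array([x, y, th])

def maybe_add_node(nodes, descriptor, pose, thresh=0.3):
    """Add a topological node (appearance descriptor + metric pose) only when
    the new view differs enough from every stored node."""
    if all(np.linalg.norm(descriptor - d) > thresh for d, _ in nodes):
        nodes.append((descriptor, pose.copy()))
    return nodes
```

In a hybrid map of this kind, the nodes carry metric poses refined by odometry while the graph of appearance-linked nodes provides the topological layer used for loop closing.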


2015 ◽  
Vol 27 (4) ◽  
pp. 318-326 ◽  
Author(s):  
Shin'ichi Yuta

[Figure: Autonomous mobile robot in RWRC 2014] The Tsukuba Challenge, an open experiment for autonomous mobile robotics researchers, lets mobile robots travel in a real, populated city environment. Following the challenge in 2013, the task of Tsukuba Challenge 2014 required the mobile robots to navigate autonomously to their destination while looking for and finding specific persons sitting in the environment. A total of 48 teams (54 robots) sought success in this complex challenge.


2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Rodrigo Munguía ◽  
Carlos López-Franco ◽  
Emmanuel Nuño ◽  
Adriana López-Franco

This work presents a method for implementing a visual simultaneous localization and mapping (SLAM) system using omnidirectional vision data, with application to autonomous mobile robots. In SLAM, a mobile robot operates in an unknown environment using only on-board sensors to simultaneously build a map of its surroundings, which it uses to track its position. SLAM is perhaps one of the most fundamental problems to solve in robotics in order to build truly autonomous mobile robots. The visual sensor used in this work is an omnidirectional vision sensor, which provides a wide field of view; this is advantageous for a mobile robot in an autonomous navigation task. Since the sensor is monocular, a method to recover the depth of the features is required. To estimate the unknown depth, we propose a novel stochastic triangulation technique. The proposed system can be applied to indoor or cluttered environments to perform visual navigation when a GPS signal is not available. Experiments with synthetic and real data are presented in order to validate the proposal.
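The paper's stochastic triangulation technique is not detailed in this abstract; as background, the deterministic problem it generalizes can be sketched. Given two camera positions and the bearing-only rays to a feature, classical midpoint triangulation recovers the depth by finding the 3D point closest to both rays:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: return the 3D point closest to both viewing
    rays c1 + t1*d1 and c2 + t2*d2 (cameras at c1, c2, bearings d1, d2)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares ray parameters minimizing |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.stack([d1, -d2], axis=1)            # 3x2 system matrix
    b = c2 - c1
    (t1, t2), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = c1 + t1 * d1                          # closest point on ray 1
    p2 = c2 + t2 * d2                          # closest point on ray 2
    return 0.5 * (p1 + p2)
```

A stochastic formulation additionally models the uncertainty of the rays and of the resulting depth estimate; the sketch above only shows the geometric core that any monocular depth-recovery scheme must solve.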


1999 ◽  
Vol 18 (3-4) ◽  
pp. 275-285
Author(s):  
J. Batlle ◽  
P. Ridao

Mobile robot applications play a preponderant role in industrial and social environments and, more specifically, in helping human beings carry out difficult tasks in hostile environments. From teleoperated systems to autonomous robots, there is a wide variety of possibilities requiring a high technological level. Many concepts, such as perception, manipulator design, grasping, and dynamic control, are involved in the field of industrial mobile robots. In this context, human–robot interaction has been one of the most widely studied topics of the last few years, together with computer vision techniques and virtual reality tools. In all these technical fields, a common goal is pursued: bringing robots closer to human skills. In this paper, first, some important research projects and contributions on mobile robots in industrial environments are reviewed. Second, a proposal for the classification of mobile robot architectures is described. Third, results achieved in two specific application areas of mobile robotics are reported. The first is the teleoperation of a mobile robot called ROGER by means of a TCP/IP network; the control system of the robot is built as a distributed system, using CORBA-compatible distributed object-oriented software. The second is the teleoperation of an underwater robot called GARBI. (Research project coordinated with the Polytechnic University of Catalonia (Prof. Josep Amat) and financed by the Spanish Government.) The utility of this kind of prototype is demonstrated in tasks such as welding in underwater environments, inspection of dam walls, etc. Finally, an industrial project involving the use of intelligent autonomous robots is presented, showing how the experience gained in robotics has been applied.
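The ROGER system uses CORBA-based distributed objects, which the sketch below does not reproduce; it only illustrates the underlying idea of teleoperation over a TCP/IP network with a minimal line-oriented command protocol. The command format and acknowledgement scheme are assumptions for illustration.

```python
import socket
import threading

def parse_command(line):
    """Parse a 'CMD value' teleoperation message, e.g. 'FORWARD 0.5'."""
    cmd, _, arg = line.strip().partition(" ")
    return cmd.upper(), float(arg) if arg else 0.0

def serve_once(host="127.0.0.1", port=0):
    """Start a one-connection command server in a background thread and
    return the port it listens on; each command line is acknowledged."""
    srv = socket.create_server((host, port))
    port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as f:
            for line in f:                     # one command per line
                cmd, val = parse_command(line)
                f.write(f"ACK {cmd} {val}\n")  # here a real robot would act
                f.flush()
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return port
```

An operator-side client simply connects, writes command lines, and reads the acknowledgements; a CORBA deployment would instead expose the robot as a remote object with typed method calls.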

