Development of Autonomous Mobile Robot “MML-05” Based on i-Cart Mini for Tsukuba Challenge 2015

2016 ◽  
Vol 28 (4) ◽  
pp. 461-469 ◽  
Author(s):  
Tomoyoshi Eda ◽  
Tadahiro Hasegawa ◽  
Shingo Nakamura ◽  
Shin’ichi Yuta

[Figure: Autonomous mobile robots entered in the Tsukuba Challenge 2015]

This paper describes a self-localization method for autonomous mobile robots entered in the Tsukuba Challenge 2015. Accurate self-localization is one of the key issues for autonomous mobile robots. An occupancy grid map, created manually before a run, has typically been used to estimate a robot's pose; however, it is difficult to create an accurate map of a complex course by hand. We created an occupancy grid map by combining local grid maps built with a laser range finder (LRF) and wheel odometry. In addition, the robot's pose was calculated by integrating the self-localization estimated by map matching with the wheel-odometry information. The experimental results in the final run of the Tsukuba Challenge 2015 showed that the mobile robot traveled autonomously up to the 600 m point of the course, where the occupancy grid map ended.
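The abstract describes integrating a map-matching pose estimate with wheel odometry. As an illustration only (the paper's exact fusion rule is not given here), a minimal complementary-filter sketch of such an integration might look like the following; `fuse_pose`, the pose tuples, and the weight `alpha` are all hypothetical names:

```python
import math

def fuse_pose(odom_pose, match_pose, alpha=0.3):
    """Blend a wheel-odometry pose (x, y, theta) with a map-matching
    estimate. alpha is the weight given to map matching; this is an
    illustrative complementary filter, not the authors' exact method."""
    x = (1 - alpha) * odom_pose[0] + alpha * match_pose[0]
    y = (1 - alpha) * odom_pose[1] + alpha * match_pose[1]
    # Blend headings on the circle to avoid wrap-around errors at +-pi.
    dth = math.atan2(math.sin(match_pose[2] - odom_pose[2]),
                     math.cos(match_pose[2] - odom_pose[2]))
    th = odom_pose[2] + alpha * dth
    return (x, y, th)
```

In practice the weight would depend on the confidence of the map match; when the robot leaves the mapped region (as at the 600 m point), the estimate degrades to pure odometry.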

Author(s):  
Gintautas Narvydas ◽  
Vidas Raudonis ◽  
Rimvydas Simutis

In the control of autonomous mobile robots there are two types of control: global and local. Accordingly, both global and local tasks must be solved. This chapter concentrates on local tasks and shows that robots can learn to cope with some of them within minutes. The main idea of the chapter is to show that, when creating intelligent control systems for autonomous mobile robots, the beginning is most important: we have to transfer as much human knowledge and human expert-operator skill as possible into the intelligent control system. A successful transfer ensures fast and good results. One of the most advanced techniques in robotics is on-line learning by an autonomous mobile robot from experts' demonstrations, and this technique is briefly described in this chapter. Wall following is taken as an example of a local task. The main goal of our experiment is to teach the autonomous mobile robot, within 10 minutes, to follow the wall of a maze as fast and as precisely as possible. This task can also be transformed into circumnavigating an obstacle on the left or on the right. The main part of the suggested control system is a small feed-forward artificial neural network. In some particular cases, i.e., critical situations, "If-Then" rules take over control, but our goal is to minimize the possibility of these rules controlling the robot. The aim of the experiment is to implement the proposed technique on a real robot. This technique reaches the desired control capabilities much faster than they would be reached using evolutionary or genetic algorithms, or by creating the control system by hand using "If-Then" rules or fuzzy logic. To evaluate the quality of the intelligent control system, we calculate objective function values and the percentage of the robot's work loops in which the "If-Then" rules control the robot.
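The control scheme described above, a small feed-forward network with "If-Then" rules for critical situations, can be sketched as follows. This is a hedged illustration under assumed names (`nn_forward`, `control`, the weight layout, and the `safe_dist` threshold are not from the chapter); the real network weights would be learned from the expert-operator demonstrations:

```python
import math

def nn_forward(distances, W1, b1, W2, b2):
    """One-hidden-layer feed-forward pass mapping range readings to a
    steering command; the weights would come from demonstration data."""
    h = [math.tanh(sum(w * d for w, d in zip(row, distances)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * x for w, x in zip(W2, h)) + b2

def control(distances, weights, safe_dist=0.1):
    """Use the network unless a critical situation triggers an If-Then rule."""
    if min(distances) < safe_dist:   # critical: about to hit the wall
        return "turn_away"           # the rule takes over
    return nn_forward(distances, *weights)
```

Counting how often the `if` branch fires over a run gives exactly the percentage of work loops under rule control that the chapter uses as a quality measure.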


Author(s):  
KS Nagla ◽  
Moin Uddin ◽  
Dilbag Singh

Sensor-based perception of the environment is an emerging area of mobile-robot research in which sensors play a pivotal role. For autonomous mobile robots, the fundamental requirement is the conversion of range information into a high-level internal representation. An internal representation in the form of an occupancy grid is commonly used in autonomous mobile robots because of its various advantages. Several sensors, such as vision sensors, laser range finders, and ultrasonic and infrared sensors, play roles in mapping. However, sensor information failure, sensor inaccuracies, noise, and slow response are the major causes of mapping error. To improve the reliability of mobile-robot mapping, multisensor data fusion is considered an optimal solution. This paper presents a novel sensor-fusion framework architecture in which a dedicated filter (DF) is proposed to increase the robustness of the occupancy grid in indoor environments. The technique has been experimentally verified for different indoor test environments, and the proposed configuration shows improvement in the occupancy grid with the implementation of dedicated filters.
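The abstract does not specify the dedicated filter's internals, so the following is only a plausible sketch of the general pattern: pre-filter each sensor's range stream before fusing it into the grid via the standard log-odds occupancy update. The median pre-filter and all names here are illustrative assumptions, not the paper's DF:

```python
import math
import statistics

def median_prefilter(readings, window=3):
    """A simple stand-in for a 'dedicated filter': median-filter each
    sensor's range stream to suppress spurious spikes before fusion."""
    out = []
    for i in range(len(readings)):
        lo = max(0, i - window // 2)
        hi = min(len(readings), i + window // 2 + 1)
        out.append(statistics.median(readings[lo:hi]))
    return out

def update_cell(log_odds, p_occ):
    """Standard log-odds Bayesian occupancy update for one grid cell,
    given the inverse-sensor-model probability p_occ for this reading."""
    return log_odds + math.log(p_occ / (1.0 - p_occ))
```

The design point is that outlier rejection happens per sensor, before fusion, so a single faulty reading never corrupts the shared grid.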


2020 ◽  
Vol 17 (4) ◽  
pp. 535-542
Author(s):  
Ravinder Singh ◽  
Akshay Katyal ◽  
Mukesh Kumar ◽  
Kirti Singh ◽  
Deepak Bhola

Purpose
Sonar-sensor-based mobile robot mapping is an efficient and low-cost technique for applications such as localization, autonomous navigation, SLAM, and path planning. In multi-robot systems, many sonar sensors are used, and the sound waves from one sonar interact with those of the others, causing wave interference. Because of this interference, the generated sonar grid maps become distorted, which reduces the reliability of mobile-robot navigation on those maps. This study focuses on removing the effect of wave interference from sonar mapping to achieve robust navigation.

Design/methodology/approach
The incorrect perception (occupancy grid map) of the environment caused by cross-talk/wave interference is eliminated by randomizing the triggering time of each sonar, i.e., by varying each sensor's delay/sleep time. A software-based randomized triggering technique (RTT) is designed in the Laboratory Virtual Instrument Engineering Workbench (LabVIEW) that randomizes the triggering times of the sonar sensors to eliminate the effect of wave interference/cross-talk when multiple sonars are placed in face-forward directions.

Findings
To check the reliability of the RTT, various real-world experiments were performed, and a 64.8% improvement in the probabilities of the generated occupancy grid map was obtained compared with conventional approaches.

Originality/value
The proposed RTT may be implemented for SLAM, reliable autonomous navigation, optimal path planning, efficient robot vision, consistent multi-robot systems, etc.
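The core of the RTT idea, an independent random delay before each sonar fires so that face-forward sensors rarely emit simultaneously, can be sketched in a few lines. The original is a LabVIEW design; this Python translation is an assumed illustration, and `sonar.trigger()` is a hypothetical driver call:

```python
import random
import time

def fire_sonars(sonars, max_jitter_s=0.02):
    """Randomized triggering sketch: sleep a random interval before each
    sonar fires, de-synchronising emissions so that echoes from one
    sensor are unlikely to be picked up by another (cross-talk)."""
    readings = {}
    for sonar in sonars:
        time.sleep(random.uniform(0.0, max_jitter_s))  # random delay/sleep time
        readings[sonar.name] = sonar.trigger()         # hypothetical driver call
    return readings
```

The trade-off of this design is a slightly lower and non-deterministic scan rate in exchange for grid maps that are not corrupted by interfering echoes.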


2018 ◽  
Vol 30 (4) ◽  
pp. 540-551 ◽  
Author(s):  
Shingo Nakamura ◽  
Tadahiro Hasegawa ◽  
Tsubasa Hiraoka ◽  
Yoshinori Ochiai ◽  
...  

The Tsukuba Challenge is a competition in which autonomous mobile robots run on a route set on a public road in a real environment. The task includes not only simple running but also finding multiple specific persons at the same time. This study proposes a method for person search. While many person-search algorithms use a laser sensor and a camera in combination, our method uses only an omnidirectional camera. The search target is detected using a convolutional neural network (CNN) that classifies the search target. Training a CNN requires a large amount of data, for which pseudo images created by composition are used. Our method was implemented on an autonomous mobile robot, and its performance was verified in the Tsukuba Challenge 2017.
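The training-data trick above, composing pseudo images rather than collecting real ones, can be illustrated with a minimal sketch: paste a target patch at a random position onto a background frame. This toy version treats images as 2-D lists of pixels; a real pipeline would also vary scale, rotation, and lighting, and none of these names come from the paper:

```python
import random

def compose_pseudo_image(background, target_patch):
    """Create one pseudo training image by composition: copy the
    background and paste the target patch at a random position."""
    img = [row[:] for row in background]       # do not mutate the input
    h, w = len(target_patch), len(target_patch[0])
    y = random.randint(0, len(img) - h)
    x = random.randint(0, len(img[0]) - w)
    for dy in range(h):
        for dx in range(w):
            img[y + dy][x + dx] = target_patch[dy][dx]
    return img
```

Generating many such composites from a few target photos and varied backgrounds is what makes CNN training feasible without a large hand-labeled dataset.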


2011 ◽  
Vol 121-126 ◽  
pp. 2416-2420 ◽  
Author(s):  
Yan Fen Mao ◽  
Hans Wiedmann ◽  
Ming Chen

The paper describes an autonomous mobile robot named Robotino® and how it is used to educate Bachelor students in the majors AES (Automotive Engineering & Service) and MT (Mechatronics) at CDHAW, Tongji University. A fine-positioning project using image processing is introduced, and vision-based functions from Robotino®View are presented. It is also sketched out how this system can be used as a research platform for automotive assistance systems.


Author(s):  
Casi Setianingsih ◽  
Kusprasapta Mutijarsa ◽  
Muhammad Ary Murti

An autonomous robot is a robot that can work independently without direct human control. The robot operates based on its sensors, making its own decisions to complete a mission in its working environment. In the real world, a robot's working environment is highly dynamic, constantly changing, and unstructured. Modeling an unstructured environment is very difficult, and obtaining an exact mathematical model of such an environment is nearly impossible. To build autonomous mobile robots that can operate in unstructured, dynamic environments, a method that is adaptive and capable of learning is required. Based on this problem, this research designs an autonomous mobile robot with a behavior-based architecture that can learn and work independently in an unstructured environment using the Reinforcement Learning method. The aim of applying this method is to enable the robot to learn and adapt to an unstructured environment. The robot is further developed to complete the mission of finding a target at a specific position based on information obtained from its sensors. Simulation results show that the Reinforcement Learning algorithm was successfully applied to the behavior-based control architecture of the autonomous mobile robot with an accuracy of 85.71%.
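The abstract names Reinforcement Learning on a behavior-based architecture without specifying the algorithm; as an assumed illustration, a tabular Q-learning update over a small set of behaviors is the classic fit. The behavior set `ACTIONS`, the state labels, and all function names below are hypothetical:

```python
import random

ACTIONS = ["forward", "left", "right"]   # hypothetical behaviour set

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q(s, a) toward the reward plus
    the discounted value of the best behaviour in the next state."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def choose_action(Q, s, epsilon=0.1):
    """Epsilon-greedy selection: mostly exploit, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))
```

Running many episodes of sensing, acting, and rewarding progress toward the target would let the robot adapt to an unstructured environment without an explicit model of it.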


Author(s):  
Evangelos Georgiou ◽  
Jian S. Dai ◽  
Michael Luck

In small mobile robot research, autonomous platforms are severely constrained in navigation environments by the limited availability of accurate sensory data for performing critical path-planning, obstacle-avoidance, and self-localization tasks. The motivation for this work is to equip small autonomous mobile robots with a local stereo vision system that provides an accurate reconstruction of the navigation environment for critical navigation tasks. This paper presents the KCLBOT, a small autonomous mobile robot with a stereo vision system, developed at King's College London's Centre for Robotic Research.

