Person Searching Through an Omnidirectional Camera Using CNN in the Tsukuba Challenge

2018 ◽  
Vol 30 (4) ◽  
pp. 540-551 ◽  
Author(s):  
Shingo Nakamura ◽  
Tadahiro Hasegawa ◽  
Tsubasa Hiraoka ◽  
Yoshinori Ochiai ◽  
...  

The Tsukuba Challenge is a competition in which autonomous mobile robots travel along a route set on a public road in a real environment. Their task includes not only simple navigation but also finding multiple specific persons at the same time. This study proposes a method to realize such person searching. While many person-searching algorithms use a laser sensor and a camera in combination, our method uses only an omnidirectional camera. The search target is detected by a convolutional neural network (CNN) that performs classification of the search target. Training a CNN requires a large amount of data, for which pseudo images created by composition are used. Our method was implemented on an autonomous mobile robot, and its performance was verified in the Tsukuba Challenge 2017.
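The pseudo-image composition mentioned above can be sketched as pasting a crop of the search target onto varied background images to synthesize labeled training data. The array sizes, the paste helper, and the white stand-in crop below are all hypothetical; the abstract does not give these details.

```python
import numpy as np

def composite(background, target, top, left):
    """Paste a target crop onto a background image at (top, left).
    A hypothetical helper sketching the pseudo-image composition."""
    out = background.copy()
    h, w = target.shape[:2]
    out[top:top + h, left:left + w] = target
    return out

rng = np.random.default_rng(0)
bg = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in background
person = np.full((16, 8, 3), 255, dtype=np.uint8)             # stand-in target crop
sample = composite(bg, person, top=10, left=20)
label = 1  # composited images are labeled "target present" for CNN training
```

Varying the paste position and background across many samples would give the CNN examples of the target in different surroundings.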

Author(s):  
Gintautas Narvydas ◽  
Vidas Raudonis ◽  
Rimvydas Simutis

In the control of autonomous mobile robots there are two types of control, global and local, and the robot must accordingly solve both global and local tasks. This chapter concentrates on local tasks and shows that robots can learn to cope with some of them within minutes. The main idea of the chapter is to show that, when creating intelligent control systems for autonomous mobile robots, the beginning is most important, since as much human knowledge and expert-operator skill as possible must be transferred into the intelligent control system; a successful transfer ensures fast and good results. One of the most advanced techniques in robotics is on-line learning by an autonomous mobile robot from an expert's demonstrations, and this technique is briefly described in the chapter. Wall following is taken as an example of a local task. The main goal of our experiment is to teach the autonomous mobile robot, within 10 minutes, to follow the wall of a maze as fast and as precisely as possible. This task can also be transformed into circling an obstacle on the left or on the right. The main part of the suggested control system is a small Feed-Forward Artificial Neural Network. In some particular cases, namely critical situations, "If-Then" rules take over the control, but our goal is to minimize the possibility that these rules start controlling the robot. The aim of the experiment is to implement the proposed technique on a real robot. This technique makes it possible to reach the desired control capabilities much faster than Evolutionary or Genetic Algorithms, or than creating the control system by hand using "If-Then" rules or Fuzzy Logic. To evaluate how well the intelligent control system controls the autonomous mobile robot, we calculate objective function values and the percentage of robot work loops in which the "If-Then" rules control the robot.
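The architecture described above, a small feed-forward network with an "If-Then" rule fallback for critical situations, can be sketched as follows. The network sizes, the three-sensor input, the critical-distance threshold, and the escape rule are all assumed for illustration; the chapter does not specify them.

```python
import numpy as np

rng = np.random.default_rng(1)
# Tiny feed-forward net (hypothetical sizes: 3 distance sensors -> 5 hidden -> steering)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)
w2, b2 = rng.normal(size=5), 0.0

def net_steering(sensors):
    h = np.tanh(W1 @ np.asarray(sensors) + b1)
    return float(np.tanh(w2 @ h + b2))        # steering command in [-1, 1]

CRITICAL = 0.1  # hypothetical distance below which "If-Then" rules take over

def control_step(sensors):
    if min(sensors) < CRITICAL:               # critical situation: rule-based escape
        return ("rule", 1.0 if sensors[0] < sensors[2] else -1.0)
    return ("net", net_steering(sensors))     # normal case: learned controller

# The quality measure from the chapter: share of work loops controlled by rules
log = [control_step(s) for s in ([0.5, 0.4, 0.6], [0.05, 0.3, 0.4], [0.7, 0.8, 0.9])]
rule_share = sum(1 for kind, _ in log if kind == "rule") / len(log)
```

Training would then adjust `W1`, `b1`, `w2`, `b2` from the expert demonstrations so that `rule_share` stays as small as possible.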


2016 ◽  
Vol 28 (4) ◽  
pp. 461-469 ◽  
Author(s):  
Tomoyoshi Eda ◽  
Tadahiro Hasegawa ◽  
Shingo Nakamura ◽  
Shin’ichi Yuta

[Figure: Autonomous mobile robots entered in the Tsukuba Challenge 2015] This paper describes a self-localization method for autonomous mobile robots entered in the Tsukuba Challenge 2015. One of the important issues for autonomous mobile robots is accurate self-localization. An occupancy grid map, created manually before a run, has typically been utilized to estimate the self-localization of autonomous mobile robots; however, it is difficult to create an accurate map of a complex course. We created an occupancy grid map by combining local grid maps built using a laser range finder (LRF) with wheel odometry. In addition, the self-localization of the mobile robot was calculated by integrating the estimate obtained by matching against the map with wheel odometry information. The experimental results of the final run of the Tsukuba Challenge 2015 showed that the mobile robot traveled autonomously up to the 600 m point of the course, where the occupancy grid map ended.
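The integration of a map-matching estimate with wheel odometry can be sketched, in its simplest form, as a weighted blend of the two pose estimates. The blend weight, the example poses, and the state layout (x, y, heading) below are assumptions for illustration; the paper's actual fusion method is not specified in the abstract.

```python
import numpy as np

def fuse_pose(odom_pose, map_pose, alpha=0.8):
    """Blend a wheel-odometry pose with a map-matching estimate.
    alpha is a hypothetical trust weight for the map match,
    not a value from the paper."""
    return alpha * np.asarray(map_pose) + (1 - alpha) * np.asarray(odom_pose)

odom = [10.2, 5.1, 0.30]   # x [m], y [m], heading [rad] from wheel odometry
match = [10.0, 5.0, 0.28]  # pose from matching the LRF scan to the grid map
fused = fuse_pose(odom, match)
```

A real system would typically use a probabilistic filter instead of a fixed weight, but the blend conveys the idea of correcting odometry drift with the map.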


2011 ◽  
Vol 121-126 ◽  
pp. 2416-2420 ◽  
Author(s):  
Yan Fen Mao ◽  
Hans Wiedmann ◽  
Ming Chen

The paper describes an autonomous mobile robot named Robotino®, and how it is used in the education of Bachelor students majoring in AES (Automotive Engineering & Service) and in MT (Mechatronics) at CDHAW, Tongji University. A fine-positioning project using image processing is introduced, and vision-based functions from Robotino®View are presented. It is also sketched out how this system can be used as a research platform for automotive assistance systems.


Author(s):  
Casi Setianingsih ◽  
Kusprasapta Mutijarsa ◽  
Muhammad Ary Murti

An autonomous robot is a robot that can work independently without direct human control. The robot acts on the basis of its sensors, making its own decisions to complete a mission in its working environment. In the real world, a robot's working environment is highly dynamic, constantly changing, and unstructured. Modeling an unstructured environment is very difficult, and obtaining an exact mathematical model of such an environment is nearly impossible. To build autonomous mobile robots that can operate in unstructured and dynamic environments, a method that is adaptive and capable of learning is required. Based on this problem, this research designs an autonomous mobile robot with a behavior-based architecture that can learn and work independently in an unstructured environment, using the Reinforcement Learning method. The goal of applying this method is to enable the robot to learn and adapt to an unstructured environment. The robot is further developed to complete the mission of finding a target at a certain position based on information obtained from its sensors. Simulation results show that the Reinforcement Learning algorithm was successfully applied to the behavior-based control architecture of the autonomous mobile robot, with an accuracy of 85.71%.
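The Reinforcement Learning approach described above can be sketched with tabular Q-learning on a toy target-finding task. Everything below is a hypothetical stand-in, a one-dimensional corridor with the target at one end, epsilon-greedy exploration, and made-up learning parameters; none of it is from the paper, which uses a behavior-based architecture on a simulated robot.

```python
import random

random.seed(0)
# States 0..4, target at state 4; actions: 0 = move left, 1 = move right.
Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3           # hypothetical learning parameters

for _ in range(300):                         # training episodes
    s = random.randrange(4)                  # random start, anywhere but the target
    for _ in range(50):                      # step cap per episode
        if random.random() < EPS:            # epsilon-greedy exploration
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: Q[(s, act)])
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0          # reward only for reaching the target
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2
        if s == 4:
            break

# Greedy policy after learning: the action preferred in each non-target state
policy = [max((0, 1), key=lambda act: Q[(s, act)]) for s in range(4)]
```

After training, the greedy policy moves toward the target from every state, which is the "learn to find the target" behavior the research aims at, in miniature.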


Author(s):  
Evangelos Georgiou ◽  
Jian S. Dai ◽  
Michael Luck

In small mobile robot research, autonomous platforms are severely constrained in navigation environments by the lack of accurate sensory data for performing critical path planning, obstacle avoidance, and self-localization tasks. The motivation for this work is to equip small autonomous mobile robots with a local stereo vision system that provides an accurate reconstruction of the navigation environment for critical navigation tasks. This paper presents the KCLBOT, a small autonomous mobile robot with a stereo vision system, developed at King’s College London’s Centre for Robotic Research.
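The core of stereo reconstruction is recovering depth from the disparity between the two camera views via the pinhole relation Z = f·B/d. The function below is a minimal sketch of that relation; the calibration values (focal length in pixels, baseline in metres) are hypothetical, not the KCLBOT's.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth from disparity: Z = f * B / d.
    focal_px and baseline_m are hypothetical calibration values."""
    return focal_px * baseline_m / disparity_px

# A feature seen 16 px apart in the two images of a 6 cm-baseline rig:
z = stereo_depth(disparity_px=16.0, focal_px=640.0, baseline_m=0.06)
```

Applying this per matched pixel pair over the whole image yields the 3-D reconstruction of the navigation environment that the robot's planning tasks consume.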


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogleNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.
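Transfer training as used above means keeping a pre-trained backbone frozen and learning only a new classification head on its features. The sketch below illustrates that split with a fixed random projection standing in for the frozen GoogleNet backbone and a logistic head trained by gradient descent; all sizes, data, and labels are toy assumptions, not the article's.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in for the frozen pre-trained backbone: a fixed projection + ReLU
# whose weights are never updated during transfer training.
W_frozen = rng.normal(size=(16, 64)) / np.sqrt(64)

X = rng.normal(size=(200, 64))             # toy stand-ins for spotlight images
F = np.maximum(0.0, X @ W_frozen.T)        # frozen features, shape (200, 16)
true_w = rng.normal(size=16)
y = (F @ true_w > 0).astype(float)         # toy class labels, not real beam-angle data

# Transfer training: only the new classification head (w, b) is learned.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # logistic head
    g = p - y                               # gradient of the cross-entropy loss
    w -= 0.5 * F.T @ g / len(X)
    b -= 0.5 * g.mean()

acc = float((((F @ w + b) > 0) == (y == 1)).mean())
```

In the article's setting the same idea applies with GoogleNet's convolutional layers as the frozen part and a new final layer over the spotlight classes.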


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have one of two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, so that the image must be rotated before the text can be read. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the direction of the text before recognizing it directly. The article describes the development of a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
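Synthetic data for this two-orientation task can be generated by pairing each upright text image with its 180-degree rotation, one label per orientation. The helper and the tiny integer array standing in for a rendered text image are illustrative assumptions; the article does not describe its generation code.

```python
import numpy as np

def make_pair(upright):
    """Synthetic training pair: the upright text image (label 0, readable
    left to right) and its 180-degree rotation (label 1, image must be
    rotated before reading). A hypothetical data-generation helper."""
    return [(upright, 0), (np.rot90(upright, 2), 1)]

img = np.arange(12).reshape(3, 4)          # stand-in for a rendered text image
pairs = make_pair(img)
# At inference time, a label-1 prediction means: rotate 180 degrees to read.
restored = np.rot90(pairs[1][0], 2)
```

Because every upright sample yields its own rotated counterpart, the two classes stay perfectly balanced by construction.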


2020 ◽  
Vol 14 ◽  
Author(s):  
Lahari Tipirneni ◽  
Rizwan Patan

Abstract: Breast cancer causes millions of deaths all over the world every year and has become the most common type of cancer in women. Early detection helps prognosis and increases the chance of survival. Automating the classification using Computer-Aided Diagnosis (CAD) systems can make the diagnosis less prone to errors. Both multi-class and binary classification of breast cancer are challenging problems. A single convolutional neural network architecture extracts specific feature descriptors from images, which cannot represent the different types of breast cancer; this leads to false positives in classification, which are undesirable in disease diagnosis. This paper presents an ensemble convolutional neural network for multi-class and binary classification of breast cancer. The feature descriptors from each network are combined to produce the final classification. Histopathological images are taken from the publicly available BreakHis dataset and classified into 8 classes. The proposed ensemble model performs better than the methods proposed in the literature, and the results show that it could be a viable approach for breast cancer classification.
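Combining the member networks' feature descriptors before the final classification, as the abstract describes, can be sketched as concatenating per-network feature vectors and feeding them to one shared head. The random projections standing in for trained CNN feature extractors, the feature sizes, and the linear head are all hypothetical; only the 8-class BreakHis setting comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical stand-ins for the member networks' feature extractors
# (random projections with ReLU; the real members would be trained CNNs).
members = [rng.normal(size=(32, 128)) for _ in range(3)]

def ensemble_descriptor(x):
    """Concatenate every member network's feature descriptor before
    the final classification."""
    return np.concatenate([np.maximum(0.0, W @ x) for W in members])

x = rng.normal(size=128)                   # stand-in for a histopathology image
f = ensemble_descriptor(x)                 # 3 x 32 = 96-dimensional combined descriptor
W_head = rng.normal(size=(8, 96))          # linear head over the 8 BreakHis classes
pred = int(np.argmax(W_head @ f))
```

Concatenation lets the shared head weigh complementary descriptors against each other, which is what allows the ensemble to cover cancer types that no single member represents well.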

