Visually Guided Picking Control of an Omnidirectional Mobile Manipulator Based on End-to-End Multi-Task Imitation Learning

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 1882-1891
Author(s):  
Chi-Yi Tsai ◽  
Yung-Shan Chou ◽  
Ching-Chang Wong ◽  
Yu-Cheng Lai ◽  
Chien-Che Huang
2020 ◽  
Vol 1 ◽  
pp. 6
Author(s):  
Alexandra Vedeler ◽  
Narada Warakagoda

The task of obstacle avoidance for maritime vessels, such as Unmanned Surface Vehicles (USVs), has traditionally been solved using specialized modules that are designed and optimized separately. However, this approach requires deep insight into the environment, the vessel, and their complex dynamics. We propose an alternative method using Imitation Learning (IL) through Deep Reinforcement Learning (RL) and Deep Inverse Reinforcement Learning (IRL), and present a system that learns an end-to-end steering model capable of mapping radar-like images directly to steering actions in an obstacle avoidance scenario. The USV used in this work is equipped with a radar sensor, and we studied the problem of generating a single action parameter, heading. We apply an IL algorithm known as Generative Adversarial Imitation Learning (GAIL) to develop an end-to-end steering model for a scenario where avoidance of an obstacle is the goal. The performance of the system was studied for different design choices and compared to that of a system based on pure RL. The IL system produces results indicating that it is able to grasp the concept of the task and that are in many ways on par with those of the RL system. We deem this promising for future use in tasks that are not as easily described by a reward function.  
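The GAIL approach described in this abstract trains a discriminator to tell expert (state, action) pairs from the policy's, then uses the discriminator's confusion as a surrogate reward for the policy update. A minimal NumPy sketch of that inner loop on toy stand-in data (all shapes, distributions, and learning rates here are illustrative assumptions, not the paper's radar setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 2-D "radar feature" states and a 1-D heading action,
# concatenated into (state, action) vectors.
expert_sa = np.hstack([rng.normal(1.0, 0.3, (64, 2)), rng.normal(0.5, 0.1, (64, 1))])
policy_sa = np.hstack([rng.normal(0.0, 0.3, (64, 2)), rng.normal(-0.5, 0.1, (64, 1))])

w = np.zeros(3)   # discriminator weights (logistic regression stands in for a network)
b = 0.0

def disc(sa):
    """Probability that a (state, action) pair came from the expert."""
    return 1.0 / (1.0 + np.exp(-(sa @ w + b)))

# Discriminator update (GAIL's inner loop): gradient ascent on the
# log-likelihood that pushes D(expert) -> 1 and D(policy) -> 0.
for _ in range(500):
    grad_w = expert_sa.T @ (1 - disc(expert_sa)) - policy_sa.T @ disc(policy_sa)
    grad_b = np.sum(1 - disc(expert_sa)) - np.sum(disc(policy_sa))
    w += 0.01 * grad_w / 64
    b += 0.01 * grad_b / 64

# GAIL's surrogate reward for the policy step: high wherever the
# discriminator mistakes policy behaviour for the expert's.
reward = -np.log(1.0 - disc(policy_sa) + 1e-8)
```

In the full algorithm this reward feeds a policy-gradient update (e.g. TRPO/PPO) and the two players alternate; the sketch shows only one discriminator fit.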


Author(s):  
Yunpeng Pan ◽  
Ching-An Cheng ◽  
Kamil Saigol ◽  
Keuntaek Lee ◽  
Xinyan Yan ◽  
...  

Author(s):  
Sagar Gubbi Venkatesh ◽  
Raviteja Upadrashta ◽  
Shishir Kolathaya ◽  
Bharadwaj Amrutur

2021 ◽  
Vol 9 (5) ◽  
pp. 33-43
Author(s):  
Ashraf Nabil ◽  
Ayman Kassem

Autonomous driving is one of the most difficult problems in automotive applications. It is currently restricted by laws that prevent cars from being fully autonomous for fear of accidents. Researchers try to improve the accuracy and safety of their models with the aim of easing these legal restrictions. Autonomous driving is a sought-after capability that is not easily achieved with classical approaches. Deep learning is considered a strong artificial intelligence paradigm that can teach machines how to behave in difficult situations. It has proven successful in many different domains, but its adoption in automotive applications is still maturing. The presented work uses end-to-end deep learning to build a fully autonomous driving model that behaves correctly in different scenarios. The CARLA simulator is used to train and test the deep neural networks. The results show not only the performance of the end-to-end solution in the CARLA simulator, but also how the same approach can be applied to one of the most popular real-world automotive datasets, which contains camera images paired with the driver's corresponding control actions.
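The end-to-end approach described here regresses directly from camera frames to the driver's control action (behavioral cloning). A minimal sketch of that idea with a linear least-squares model standing in for the CNN, on synthetic toy data (the dataset, image size, and the tanh-generated labels are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a driving dataset: 8x8 grayscale "camera frames"
# paired with a steering command in [-1, 1].
images = rng.normal(size=(200, 8, 8))
true_w = rng.normal(size=64)
steering = np.tanh(images.reshape(200, -1) @ true_w * 0.1)

# Behavioural cloning reduces to supervised regression: flatten each
# frame to a feature vector and fit a map from pixels to steering.
X = images.reshape(200, -1)
w, *_ = np.linalg.lstsq(X, steering, rcond=None)

pred = X @ w
mse = np.mean((pred - steering) ** 2)
print(f"training MSE: {mse:.4f}")
```

A real pipeline would replace the linear map with a convolutional network and minimize the same mean-squared error over (image, action) pairs recorded from the driver.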


Author(s):  
Joshua SUPRATMAN ◽  
Yasuo HAYASHIBARA ◽  
Kiyoshi IRIE

2020 ◽  
Vol 12 (5) ◽  
pp. 15-27
Author(s):  
Fenjiro Youssef ◽  
Benbrahim Houda

The self-driving car is one of the most impressive applications and most active research areas of artificial intelligence. It uses end-to-end deep learning models to make steering and speed decisions, mainly using convolutional neural networks for computer vision, connected to a fully connected network that outputs control commands. In this paper, we introduce the self-driving car domain and the CARLA simulation environment with a focus on the lane-keeping task. We then present the two main end-to-end models used to solve this problem, beginning with deep imitation learning (IL), specifically the Conditional Imitation Learning (COIL) algorithm, which learns from expert-labeled demonstrations by trying to mimic the expert's behavior, and thereafter describing deep reinforcement learning (DRL), specifically DQN and DDPG (Deep Q-Network and Deep Deterministic Policy Gradient, respectively), which learn by trial and error while modeling driving as a Markov decision process (MDP) to obtain the best policy for the driver agent. In the last chapter, we compare the IL and DRL algorithms based on a new approach, using metrics from deep learning (loss during the training phase) and from self-driving cars (episode duration before a crash and average distance from the road center during the testing phase). The results of training and testing on the CARLA simulator reveal that the IL algorithm performs better than the DRL algorithm when the agents are already trained on a given circuit, but DRL agents show better adaptability on new roads.
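The trial-and-error DRL side described here rests on the temporal-difference update that DQN scales up with a neural network. A tabular Q-learning sketch on a toy lane-keeping MDP (the states, actions, and reward here are illustrative assumptions, not CARLA's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D lane-keeping MDP: state = lateral offset bucket (0..4, 2 = centre),
# actions = steer left / straight / right. Reward favours staying centred.
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(s, a):
    s2 = int(np.clip(s + (a - 1), 0, n_states - 1))  # a=0 left, 1 straight, 2 right
    return s2, 1.0 if s2 == 2 else -abs(s2 - 2)

for _ in range(5000):
    s = int(rng.integers(n_states))                  # random start state
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    # Q-learning TD update -- the tabular ancestor of DQN's target
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])

policy = np.argmax(Q, axis=1)   # greedy action per state
```

DQN replaces the table with a network over image states plus a replay buffer and target network; DDPG extends the same idea to continuous steering commands with an actor-critic pair. The learned greedy policy here steers back toward the centre lane.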


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zeguo Yang ◽  
Mantian Li ◽  
Fusheng Zha ◽  
Xin Wang ◽  
Pengfei Wang ◽  
...  

Purpose
This paper aims to introduce an imitation learning framework for a wheeled mobile manipulator based on dynamical movement primitives (DMPs). A novel mobile manipulator with the capability to learn from demonstration is introduced. This study then explains the whole process by which a wheeled mobile manipulator learns a demonstrated task and generalizes to new situations. Two visual tracking controllers are designed for recording human demonstrations and monitoring robot operations. The study clarifies how human demonstrations can be learned and generalized to new situations by a wheeled mobile manipulator.

Design/methodology/approach
The kinematic model of the mobile manipulator is analyzed. An RGB-D camera is used to record the demonstration trajectories and observe robot operations. To prevent the demonstrated behavior from going out of the camera's sight, a visual tracking controller is designed based on the kinematic model of the mobile manipulator. The demonstration trajectories are then represented by DMPs and learned by the mobile manipulator with corresponding models. Another tracking controller, also based on the kinematic model, is designed to monitor and modify the robot's operations.

Findings
To verify the effectiveness of the imitation learning framework, several daily tasks were demonstrated and learned by the mobile manipulator. The results indicate that the presented approach performs well in enabling a wheeled mobile manipulator to learn tasks through human demonstrations. The only thing a robot user needs to do is provide demonstrations, which greatly facilitates the application of mobile manipulators.

Originality/value
The research fulfills the need for a wheeled mobile manipulator to learn tasks via demonstrations instead of manual planning. Similar approaches can be applied to mobile manipulators with different architectures.
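A DMP, as used in this framework, encodes a demonstrated trajectory as a spring-damper system plus a learned forcing term, so the same shape can be replayed toward a new goal. A minimal 1-D sketch (the gains, basis counts, and the demonstration itself are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 1.0, dt)
demo = 0.5 * (1.0 - np.cos(np.pi * t))        # demonstration: smooth move 0 -> 1

ay, by, ax = 25.0, 6.25, 4.6                  # transformation / canonical gains
y0, g = demo[0], demo[-1]

# Demonstrated velocity and acceleration by finite differences.
dy = np.gradient(demo, dt)
ddy = np.gradient(dy, dt)

# Target forcing term that makes the spring-damper reproduce the demo.
x = np.exp(-ax * t)                           # canonical phase, decays 1 -> ~0
f_target = ddy - ay * (by * (g - demo) - dy)

# Fit f(x) with Gaussian basis functions (locally weighted regression).
n_bfs = 20
c = np.exp(-ax * np.linspace(0, 1, n_bfs))    # centres spread over the phase
h = n_bfs / c                                 # widths (a common heuristic)
psi = np.exp(-h * (x[:, None] - c) ** 2)
w = (psi * (x * f_target)[:, None]).sum(0) / ((psi * (x ** 2)[:, None]).sum(0) + 1e-10)

def rollout(goal):
    """Integrate the DMP toward `goal`; the forcing term replays the shape,
    and the attractor guarantees convergence to the goal as the phase dies."""
    y, v, xs = y0, 0.0, 1.0
    traj = []
    for _ in t:
        psi_s = np.exp(-h * (xs - c) ** 2)
        f = xs * (psi_s @ w) / (psi_s.sum() + 1e-10)
        a = ay * (by * (goal - y) - v) + f
        v += a * dt
        y += v * dt
        xs += -ax * xs * dt
        traj.append(y)
    return np.array(traj)
```

Calling `rollout(g)` reproduces the demonstration's endpoint, while `rollout(1.5)` reuses the same learned forcing term for a new goal; the paper applies this per degree of freedom of the manipulator's demonstrated trajectories.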

