Path planning algorithm for space manipulator with a minimum energy demand

Author(s):  
Wing Kwong Chung ◽  
Yangsheng Xu
2013 ◽  
Vol 2013 ◽  
pp. 1-15 ◽  

The energy of a space station is a precious resource, and minimizing the energy consumption of a space manipulator is crucial to maintaining its normal functionality. This paper first presents novel gaits for space manipulators equipped with a new gripping mechanism. By using wheeled locomotion, gaits with lower energy demand can be achieved. Building on the proposed gaits, we further develop a global path planning algorithm for space manipulators that plans a moving path on a space station with minimum total energy demand. Unlike existing approaches, we emphasize both the use of the proposed low-energy-demand gaits and the composition of gaits during the path planning process. Numerous simulations are performed to evaluate the performance of the proposed gaits and path planning algorithm. Results show that the energy demand of both the proposed gaits and the resulting moving path is minimized.
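The abstract does not spell out the planner, but a global minimum-energy path over discrete handholds can be sketched as a shortest-path search on a graph whose edge weights are the energy costs of the cheapest gait between adjacent handholds. The graph, node names, and energy values below are illustrative assumptions, not the paper's actual data:

```python
import heapq

def min_energy_path(graph, start, goal):
    """Dijkstra search over a handhold graph whose edge weights are
    per-gait energy costs (joules); returns (total_energy, path)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        energy, node = heapq.heappop(pq)
        if node in done:
            continue
        done.add(node)
        if node == goal:
            # reconstruct the minimum-energy path by walking predecessors
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return energy, path[::-1]
        for nxt, gait_cost in graph.get(node, []):
            cand = energy + gait_cost
            if cand < dist.get(nxt, float("inf")):
                dist[nxt] = cand
                prev[nxt] = node
                heapq.heappush(pq, (cand, nxt))
    return float("inf"), []

# Toy handhold graph: each edge carries the energy (J) of the
# cheapest gait that moves the manipulator between two handholds.
station = {
    "A": [("B", 12.0), ("C", 30.0)],
    "B": [("C", 10.0), ("D", 25.0)],
    "C": [("D", 8.0)],
}
print(min_energy_path(station, "A", "D"))  # (30.0, ['A', 'B', 'C', 'D'])
```

Composing gaits then amounts to choosing, per edge, the gait (e.g. wheeled vs. gripping locomotion) with the lowest energy cost before running the search.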


2021 ◽  
Vol 9 (3) ◽  
pp. 252
Author(s):  
Yushan Sun ◽  
Xiaokun Luo ◽  
Xiangrui Ran ◽  
Guocheng Zhang

This research aims to solve the safe navigation problem of autonomous underwater vehicles (AUVs) in the deep ocean, a complex and changeable environment with various mountains. When navigating in the deep sea, an AUV encounters many underwater canyons, where hard valley walls seriously threaten its safety. To solve the problem of safe AUV operation in underwater canyons and to explore the potential of autonomous obstacle avoidance in uncertain environments, an improved AUV path planning algorithm based on the deep deterministic policy gradient (DDPG) algorithm is proposed in this work. This method is an end-to-end path planning algorithm that optimizes the policy directly: it takes sensor information as input and outputs driving speed and yaw angle. The path planning algorithm can reach the predetermined target point while avoiding large-scale static obstacles, such as the valley walls of the simulated underwater canyon environment, as well as sudden small-scale dynamic obstacles, such as marine life and other vehicles. In addition, this research addresses the multi-objective structure of obstacle-avoidance path planning through a modularized reward function design, combined with the artificial potential field method to provide continuous rewards. This research also proposes a new algorithm, the deep SumTree-deterministic policy gradient algorithm (SumTree-DDPG), which improves the random storage and extraction strategy of the DDPG algorithm's experience samples. Samples are classified and stored according to their importance using the SumTree structure, high-quality samples are extracted continuously, and the SumTree-DDPG algorithm thereby improves the convergence speed of the model.
Finally, this research uses the Python language to write an underwater canyon simulation environment and builds a deep reinforcement learning simulation platform on a high-performance computer to conduct simulation training for the AUV. Simulations verified that the proposed path planning method can guide the under-actuated underwater robot to the target without colliding with any obstacles. In comparison with the DDPG algorithm, the improved SumTree-DDPG planner achieves better stability, total training reward, and robustness.
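The SumTree used for prioritized sample storage and extraction can be sketched as a binary tree whose internal nodes hold sums of leaf priorities, so that a sample is drawn with probability proportional to its priority. This is a generic illustration of the data structure, not the paper's implementation; class and method names are assumptions:

```python
import random

class SumTree:
    """Binary sum-tree for priority-proportional sampling of experience
    samples, as used in prioritized experience replay (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # internal nodes hold priority sums
        self.data = [None] * capacity       # leaf payloads (transitions)
        self.write = 0                      # next slot to overwrite (ring buffer)

    def add(self, priority, sample):
        """Store a sample with the given priority, overwriting oldest first."""
        idx = self.write + self.capacity
        self.data[self.write] = sample
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity

    def update(self, idx, priority):
        """Set a leaf's priority and refresh the sums on the path to the root."""
        self.tree[idx] = priority
        idx //= 2
        while idx >= 1:
            self.tree[idx] = self.tree[2 * idx] + self.tree[2 * idx + 1]
            idx //= 2

    def sample(self):
        """Draw one stored sample with probability proportional to priority."""
        s = random.uniform(0.0, self.tree[1])  # tree[1] is the total priority
        idx = 1
        while idx < self.capacity:             # descend to a leaf
            left = 2 * idx
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
        return self.data[idx - self.capacity]
```

With this structure, high-priority (important) transitions are drawn far more often than low-priority ones, which is the mechanism the abstract credits for the faster convergence of SumTree-DDPG over vanilla DDPG's uniform replay sampling.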

