A hierarchical architecture to control autonomous robots evolving in an unknown environment

Author(s):  
G. Mourioux ◽  
C. Novales ◽  
G. Poisson
2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Peng Cui ◽  
Weisheng Yan ◽  
Yintao Wang

Autonomous robots need to be recharged and to exchange information with a host through docking during long-distance tasks. A feasible path is therefore required during the docking process to guide the robot and adjust its pose. However, when there are unknown obstacles in the work area, determining a feasible docking path becomes difficult. This paper presents a reactive path planning approach, named Dubins-APF (DAPF), to solve the docking path planning problem in unknown environments with obstacles. In the proposed approach, Dubins curves are combined with a designed obstacle-avoidance potential field to plan a feasible path. First, an initial path is planned and followed according to the configurations of the robot and the docking station. Then, whenever the followed path is evaluated to be infeasible, an intermediate configuration is calculated and the path is replanned based on the obstacle-avoidance potential field. The robot is eventually navigated to the docking station with the proper pose via the DAPF approach. The proposed DAPF approach is efficient and requires no prior knowledge of the environment. Simulation results are given to validate the effectiveness and feasibility of the proposed approach.
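To give a flavour of the potential-field half of the approach, here is a minimal sketch of a classic Khatib-style artificial potential field force: an attractive term toward the goal plus repulsive terms from nearby obstacles. The gains, influence radius, and 2D point representation are illustrative assumptions, not the paper's exact DAPF formulation (which additionally blends in Dubins curves to respect the robot's pose constraints).

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Attractive force toward the goal plus repulsive forces from any
    obstacle within the influence radius rho0 (assumed parameters)."""
    # Attractive component: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive components: grow sharply as the robot nears an obstacle.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = math.hypot(dx, dy)
        if 0 < rho < rho0:
            mag = k_rep * (1.0 / rho - 1.0 / rho0) / rho**2
            fx += mag * dx / rho
            fy += mag * dy / rho
    return fx, fy
```

Following the (negative gradient) force vector at each step steers the robot toward the goal while pushing it off nearby obstacles; in the abstract's scheme this field is only consulted when the currently followed path is judged infeasible.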


2021 ◽  
Author(s):  
Kai Yuan ◽  
Noor Sajid ◽  
Karl Friston ◽  
Zhibin Li

Abstract
Humans can produce complex movements when interacting with their surroundings. This relies on the planning of various movements and their subsequent execution. In this paper, we investigate this fundamental aspect of motor control in the setting of autonomous robotic operations. We consider hierarchical generative modelling, for autonomous task completion, that mimics the deep temporal architecture of human motor control. Here, temporal depth refers to the nested time scales at which successive levels of a forward or generative model unfold: for example, the apprehension and delivery of an object requires a global plan that contextualises the fast coordination of multiple local limb movements. This separation of temporal scales can also be motivated from a robotics and control perspective: to ensure versatile sensorimotor control, it is necessary to structure high-level planning and low-level motor control of individual limbs hierarchically. We use numerical experiments to establish the efficacy of this formulation and demonstrate how a humanoid robot can autonomously solve a complex task requiring locomotion, manipulation, and grasping using a hierarchical generative model. In particular, the humanoid robot can retrieve and deliver a box, and open and walk through a door to reach its final destination. Our approach and experiments illustrate the effectiveness of human-inspired motor control algorithms, which provide a scalable hierarchical architecture for the autonomous performance of complex goal-directed tasks.
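The separation of temporal scales described here can be sketched as two nested control loops: a slow high level that emits one subgoal at a time, and a fast low level that tracks the current subgoal. The waypoint list, proportional gain, and step counts below are illustrative assumptions, not the paper's actual generative model.

```python
def hierarchical_control(waypoints, x0=0.0, high_dt=10, gain=0.5):
    """Two-timescale sketch: the slow level selects the next waypoint;
    the fast level runs `high_dt` proportional-control steps toward it."""
    x = x0
    trajectory = []
    for wp in waypoints:              # slow loop: one subgoal at a time
        for _ in range(high_dt):      # fast loop: track the current subgoal
            x += gain * (wp - x)      # proportional step toward the subgoal
            trajectory.append(x)
    return trajectory
```

In the paper's setting the slow level would instead plan task phases (walk, grasp, deliver) and each fast level would coordinate an individual limb, but the nesting of time scales is the same structural idea.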


Author(s):  
Chunlin Song ◽  
Cheng Chen ◽  
Naigang Cui

Used widely in military and civil applications, autonomous robots have shown promise in planet exploration, seabed survey, and disaster rescue. Much robotic research concentrates on localization and mapping, which address two basic questions: "Where am I?" and "What is the environment like?". Together, these two problems constitute a coupled problem in unknown-environment exploration known as Simultaneous Localization and Mapping (SLAM), summarized by Hugh Durrant-Whyte in a paper published in 1991 [1]. Driven by the requirement for motion in unknown environments, many robotics researchers have worked to solve the SLAM problem in recent decades.


Author(s):  
PAUL A. BOXER

Autonomous robots are unsuccessful at operating in complex, unconstrained environments. They lack the ability to learn about the physical behavior of different objects through the use of vision. We combine Bayesian networks and qualitative spatial representation to learn general physical behavior by visual observation. We input training scenarios that allow the system to observe and learn normal physical behavior. The position and velocity of the visible objects are represented as qualitative states. Transitions between these states over time are entered as evidence into a Bayesian network. The network provides probabilities of future transitions to produce predictions of future physical behavior. We use test scenarios to determine how well the approach discriminates between normal and abnormal physical behavior and actively predicts future behavior. We examine the ability of the system to learn three naive physical concepts, "no action at a distance", "solidity" and "movement on continuous paths". We conclude that the combination of qualitative spatial representations and Bayesian network techniques is capable of learning these three rules of naive physics.
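A minimal stand-in for the learning step described above is to count observed transitions between qualitative states and normalise the counts into conditional probabilities, which is what a single node's table in the Bayesian network would encode. The state names and the frequency-count model below are illustrative assumptions, not the paper's actual network.

```python
from collections import defaultdict

def learn_transitions(observed_sequences):
    """Count qualitative state transitions across training scenarios and
    normalise them into conditional probabilities P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in observed_sequences:
        for s, s_next in zip(seq, seq[1:]):
            counts[s][s_next] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def predict(model, state):
    """Most probable next qualitative state; None for an unseen state,
    which the system could flag as abnormal physical behavior."""
    nxt = model.get(state)
    return max(nxt, key=nxt.get) if nxt else None
```

Low-probability or unseen transitions then serve to discriminate abnormal behavior, while the highest-probability transition yields the active prediction of what happens next.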

