Direct Reinforcement Learning for Autonomous Power Configuration and Control in Wireless Networks

Author(s):  
Adrian Udenze ◽  
Klaus McDonald-Maier
2009 ◽  
Vol 129 (4) ◽  
pp. 363-367
Author(s):  
Tomoyuki Maeda ◽  
Makishi Nakayama ◽  
Hiroshi Narazaki ◽  
Akira Kitamura

Author(s):  
Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised learning and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback versus anticipatory and adaptive control. Finally, it argues that this framework of translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function but also to enrich engineering solutions for robot learning and control with insights from biology.
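The distinction between reactive (feedback) and anticipatory (feed-forward) control that the chapter opens with can be sketched on a toy first-order plant; the plant model, gains, and target below are illustrative assumptions, not taken from the chapter.

```python
# Contrast feedback and feed-forward control of a first-order plant
# x' = -a*x + u, driven toward a constant target. All numbers here are
# illustrative assumptions for the sketch.

def simulate(controller, steps=200, dt=0.01, a=1.0, target=1.0):
    """Integrate the plant under the given control law; return final x."""
    x = 0.0
    for _ in range(steps):
        u = controller(x, target, a)
        x += dt * (-a * x + u)
    return x

def feedback(x, target, a):
    # Reactive: correct the currently observed error.
    return 5.0 * (target - x)

def feedforward(x, target, a):
    # Anticipatory: invert the known plant model (u = a*target holds x at target),
    # without ever measuring the error.
    return a * target
```

Feedback tolerates model error but leaves a steady-state offset for a pure proportional gain; feed-forward is exact at equilibrium but only as good as the plant model it inverts, which mirrors the chapter's feedback-versus-anticipatory framing.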


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 999
Author(s):  
Ahmad Taher Azar ◽  
Anis Koubaa ◽  
Nada Ali Mohamed ◽  
Habiba A. Ibrahim ◽  
Zahra Fathy Ibrahim ◽  
...  

Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diverse civilian and military applications, including infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, human and animal rescue, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations. However, the use of UAVs in these applications requires a substantial level of autonomy: UAVs should be able to accomplish planned missions in unexpected situations without human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed, targeting the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: deep reinforcement learning (DRL) techniques. We describe them in detail and identify the current limitations in this area. We note that most of these DRL methods were designed to ensure stable and smooth UAV navigation by training in computer-simulated environments, and that further research efforts are needed to address the challenges that restrain their deployment in real-life scenarios.
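As a point of reference for the surveyed methods, the value-update rule underlying many DRL techniques can be shown in its simplest tabular form on a toy 1-D navigation task; the environment, rewards, and hyperparameters are illustrative assumptions, and real UAV GNC work replaces the table with a deep network.

```python
# Tabular Q-learning on a toy 1-D corridor: start at cell 0, goal at cell 9,
# actions move left (-1) or right (+1). Illustrative sketch only.
import random

def train(n_states=10, goal=9, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    random.seed(0)
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = random.choice((-1, 1)) if random.random() < eps else \
                max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == goal else -0.01        # small per-step penalty
            best_next = 0.0 if s2 == goal else max(q[(s2, b)] for b in (-1, 1))
            # Q-learning update: move Q toward the bootstrapped target
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy moves right from every non-goal state.
policy = {s: max((-1, 1), key=lambda act: q[(s, act)]) for s in range(9)}
```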


2020 ◽  
Vol 26 (3) ◽  
pp. 169-183
Author(s):  
Phudit Ampririt ◽  
Yi Liu ◽  
Makoto Ikeda ◽  
Keita Matsuo ◽  
Leonard Barolli ◽  
...  

Fifth Generation (5G) networks are expected to be flexible enough to satisfy customer demands for high-quality services such as high speed, low latency, and enhanced reliability. The rapidly increasing number of user devices and user requests also poses a problem. Thus, Software-Defined Networking (SDN) will be a key enabler for efficient management and control. To deal with these problems, we propose a Fuzzy-based SDN approach. This paper presents and compares two Fuzzy-based Systems for Admission Control (FBSAC) in 5G wireless networks: FBSAC1 and FBSAC2. FBSAC1 bases its admission-control decision on three parameters: Grade of Service (GS), User Request Delay Time (URDT), and Network Slice Size (NSS). FBSAC2 adds Slice Priority (SP) as a fourth input parameter. The simulation results show that FBSAC2 is more complex than FBSAC1 but achieves better admission-control performance.
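A fuzzy admission-control system in the spirit of FBSAC1 (three inputs: GS, URDT, NSS) can be sketched as follows; the membership functions, rule base, and defuzzification below are illustrative assumptions, not the paper's actual design.

```python
# Minimal fuzzy-inference sketch for admission control with three
# normalized inputs in [0, 1]. Illustrative assumptions throughout.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fbsac1(gs, urdt, nss):
    """Return a crisp admission score in [0, 1] (higher favors admission)."""
    # Fuzzify each input into {low, high} membership degrees.
    low = lambda v: tri(v, -1.0, 0.0, 1.0)
    high = lambda v: tri(v, 0.0, 1.0, 2.0)
    # Two example rules, combined by weighted average (Sugeno-style):
    # R1: good service AND small delay AND small slice load -> admit (1.0)
    # R2: poor service OR large delay -> reject (0.0)
    w_admit = min(high(gs), low(urdt), low(nss))
    w_reject = max(low(gs), high(urdt))
    total = w_admit + w_reject
    return w_admit / total if total > 0 else 0.5
```

Adding SP as in FBSAC2 would mean one more fuzzified input and a larger rule base, which matches the paper's observation that the four-input system is more complex.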


Author(s):  
Ju Xie ◽  
Xing Xu ◽  
Feng Wang ◽  
Haobin Jiang

The driver model is the decision-making and control center of an intelligent vehicle. To improve the adaptability of intelligent vehicles under complex driving conditions, and to reproduce the manipulation characteristics of a skilled driver in the driver-vehicle-road closed-loop system, a human-like longitudinal driver model for intelligent vehicles based on reinforcement learning is proposed. This paper first builds a lateral driver model for intelligent vehicles based on optimal preview control theory. Then, the control correction link of the longitudinal driver model is established to compute the throttle opening or brake pedal travel for the desired longitudinal acceleration. Moreover, the reinforcement learning agents for the longitudinal driver model are trained in parallel using a comprehensive evaluation index and skilled-driver data. Lastly, training performance and scenario verification in both simulation experiments and real-car tests confirm the effectiveness of the reinforcement-learning-based longitudinal driver model. The results show that the proposed human-like longitudinal driver model effectively imitates the speed-control behavior of a skilled driver in various path-following scenarios.
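The "control correction link" described above, which maps a desired longitudinal acceleration to a throttle opening or brake pedal travel, can be sketched as a simple saturated linear map; the gains and limits below are illustrative assumptions, whereas the paper learns this behavior with reinforcement-learning agents trained on skilled-driver data.

```python
# Map a desired longitudinal acceleration (m/s^2) to normalized pedal
# commands. a_max and d_max are assumed vehicle limits, not paper values.

def correction_link(a_desired, a_max=3.0, d_max=8.0):
    """Return (throttle, brake), each normalized to [0, 1]."""
    if a_desired >= 0.0:
        throttle = min(a_desired / a_max, 1.0)   # accelerate: open throttle
        return throttle, 0.0
    brake = min(-a_desired / d_max, 1.0)         # decelerate: apply brake
    return 0.0, brake
```

In the paper's closed-loop setup, this static map would sit between the agent's desired-acceleration output and the actuators; the learned policy supplies the human-like part, while the correction link handles the pedal kinematics.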


1996 ◽  
Vol 2 (3) ◽  
pp. 249-261 ◽  
Author(s):  
Antonio Iera ◽  
Salvatore Marano ◽  
Antonella Molinaro
