Image Quality-Driven Octorotor Flight Control via Reinforcement Learning

Author(s):  
Qiang Li ◽  
Yunjun Xu

This article presents a reinforcement-learning-based flight controller designed to enhance the quality of images taken from an octorotor platform. Since low resolution and a high blur rate in target images degrade feature extraction and target detection, we first analyzed the relationship between these two image-quality measures and the altitude and velocity of the octorotor, which leads to the generation of corresponding control commands. We then applied a reinforcement learning technique to automatically design the altitude and velocity controllers of the octorotor. The image-analysis and control-command-generation algorithms were successfully tested on the octorotor platform, and the controllers demonstrated satisfactory performance in simulations.
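The abstract's core idea, that image quality bounds translate into altitude and velocity set-points, can be sketched with a simple pinhole-camera model: ground resolution (ground sample distance) degrades linearly with altitude, and motion blur grows with horizontal velocity. The model and all numeric parameters below are illustrative assumptions, not the paper's actual controller.

```python
# Hypothetical sketch: map desired image-quality bounds to altitude and
# velocity commands using a pinhole-camera model (an assumption; the paper's
# actual image-quality model is not specified in the abstract).

def max_altitude_for_gsd(target_gsd_m, focal_length_m, pixel_size_m):
    """Highest altitude that still achieves the target ground sample distance."""
    # GSD = altitude * pixel_size / focal_length  =>  solve for altitude.
    return target_gsd_m * focal_length_m / pixel_size_m

def max_velocity_for_blur(max_blur_px, gsd_m, exposure_s):
    """Fastest horizontal speed keeping motion blur under max_blur_px pixels."""
    # Blur in pixels = velocity * exposure / GSD  =>  solve for velocity.
    return max_blur_px * gsd_m / exposure_s

# Illustrative camera: 35 mm lens, 3.45 um pixels, 1/500 s exposure,
# 2 cm/px target ground resolution, at most 1 px of motion blur.
alt_cmd = max_altitude_for_gsd(0.02, 0.035, 3.45e-6)
vel_cmd = max_velocity_for_blur(1.0, 0.02, 1 / 500)
print(round(alt_cmd, 1), round(vel_cmd, 1))  # -> 202.9 10.0
```

Under these assumed optics, the quality requirements cap the flight envelope at roughly 203 m altitude and 10 m/s ground speed, which is the kind of constraint the learned controllers would have to respect.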

Author(s):  
Dani Gunawan

This study was directed to develop a learning technique, to analyze the obstacles teachers face in implementing the lesson, and to overcome those problems in order to enhance elementary students’ reading and writing comprehension. To fulfill these goals, the study used a scramble-based learning technique. It was conducted at SDN Gentra Masekdas 1, Kecamatan Tarogong Kaler, involving 32 first-grade students. A pilot study was conducted on 9 March 2017 for about 35 minutes. The first cycle started on 18 April 2017, and the second on 24 April 2017. An increasing trend was found after the implementation. The analysis produced the following data: during the pilot study, eight students reached the standard indicator, a percentage of 25%; Cycle I produced 15 students with a learning-completion percentage of 46.8%; and during the second cycle, 27 students reached the completion standard, a completion percentage of 84.3%.


2009 ◽  
Vol 129 (7) ◽  
pp. 1253-1263
Author(s):  
Toru Eguchi ◽  
Takaaki Sekiai ◽  
Akihiro Yamada ◽  
Satoru Shimizu ◽  
Masayuki Fukai

2020 ◽  
Author(s):  
Vricha Chavan ◽  
Jimit Shah ◽  
Mrugank Vora ◽  
Mrudula Vora ◽  
Shubhashini Verma

Author(s):  
Gokhan Demirkiran ◽  
Ozcan Erdener ◽  
Onay Akpinar ◽  
Pelin Demirtas ◽  
M. Yagiz Arik ◽  
...  

Author(s):  
Jun Long ◽  
Yueyi Luo ◽  
Xiaoyu Zhu ◽  
Entao Luo ◽  
Mingfeng Huang

Abstract
With the development of the Internet of Things (IoT) and mobile edge computing (MEC), more and more sensing devices are being widely deployed in smart cities. These sensing devices generate various kinds of tasks, which need to be sent to the cloud for processing. Usually, the sensing devices are not equipped with wireless modules, because that is neither economical nor energy efficient. Thus, finding a way to offload tasks for sensing devices is a challenging problem. However, many vehicles move around the city and can communicate with sensing devices in an effective and low-cost way. In this paper, we propose a computation offloading scheme through mobile vehicles in an IoT-edge-cloud network. The sensing devices generate tasks and transmit them to vehicles; each vehicle then decides whether to compute a task locally, on an MEC server, or in the cloud center. The offloading decision is made based on a utility function of energy consumption and transmission delay, and a deep reinforcement learning technique is adopted to make the decisions. Our proposed method makes full use of existing infrastructure to implement task offloading for sensing devices, and the experimental results show that our solution achieves the maximum reward and decreases delay.
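The utility-based decision described in the abstract can be sketched as follows: each task is executed at the location (vehicle, MEC server, or cloud) that minimizes a weighted sum of energy consumption and transmission delay. The linear cost model, the weights, and all parameter values are illustrative assumptions, not the paper's actual formulation; in the paper, the decision is learned by deep reinforcement learning rather than enumerated.

```python
# Hypothetical sketch of a utility-driven offloading choice: lower
# (weighted energy + weighted delay) wins. Numbers are illustrative only.

def utility(energy_j, delay_s, w_energy=0.5, w_delay=0.5):
    """Weighted cost of executing a task; lower is better."""
    return w_energy * energy_j + w_delay * delay_s

def choose_offload_target(task_bits, cpu_cycles, options):
    """Pick the option with minimal utility.

    options maps a name to (energy per bit in J, CPU rate in cycles/s,
    link rate in bits/s)."""
    best_name, best_cost = None, float("inf")
    for name, (energy_per_bit, cycles_per_s, bits_per_s) in options.items():
        energy = energy_per_bit * task_bits
        delay = task_bits / bits_per_s + cpu_cycles / cycles_per_s
        cost = utility(energy, delay)
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name, best_cost

# Illustrative trade-off: the cloud has the fastest CPU but the slowest,
# most energy-hungry link; the vehicle is the opposite.
options = {
    "vehicle": (1e-7, 1e9, 5e7),   # (J/bit, cycles/s, bits/s)
    "mec":     (2e-7, 5e9, 2e7),
    "cloud":   (4e-7, 2e10, 5e6),
}
target, cost = choose_offload_target(task_bits=8e6, cpu_cycles=1e10, options=options)
print(target, round(cost, 3))  # -> mec 2.0
```

With these numbers the MEC server wins: the vehicle's slow CPU dominates its delay, while the cloud's slow link dominates both its delay and its energy term.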

