Deep Reinforcement Learning-Based Irrigation Scheduling

2020 ◽  
Vol 63 (3) ◽  
pp. 549-556
Author(s):  
Yanxiang Yang ◽  
Jiang Hu ◽  
Dana Porter ◽  
Thomas Marek ◽  
Kevin Heflin ◽  
...  

Highlights: Deep reinforcement learning-based irrigation scheduling is proposed to determine the amount of irrigation required at each time step, considering soil moisture level, evapotranspiration, forecast precipitation, and crop growth stage. The proposed methodology was compared in simulation with traditional irrigation scheduling approaches and several machine learning-based scheduling approaches.

Abstract. Machine learning has been widely applied in many areas, with promising results and large potential. In this article, deep reinforcement learning-based irrigation scheduling is proposed. This approach can automate the irrigation process and achieve highly precise water application that results in higher simulated net return. Using this approach, the irrigation controller can automatically determine the optimal or near-optimal water application amount. Traditional reinforcement learning can be superior to periodic and threshold-based irrigation scheduling; however, it fails to accurately represent a real-world irrigation environment due to its limited state space. Compared with traditional reinforcement learning, the deep reinforcement learning method can better model a real-world environment based on multi-dimensional observations. Simulations for various weather conditions and crop types show that the proposed deep reinforcement learning irrigation scheduling can increase net return.

Keywords: Automated irrigation scheduling, Deep reinforcement learning, Machine learning.
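The scheduling idea above can be sketched as a value-based policy over a multi-dimensional field observation. The sketch below is illustrative only, not the authors' model: the feature names, network sizes, and discretised irrigation amounts are assumptions, and a tiny NumPy network stands in for the paper's deep RL agent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observation vector: soil moisture, evapotranspiration,
# forecast precipitation, crop growth stage
N_OBS, N_ACTIONS = 4, 5            # 5 discrete irrigation amounts, e.g. 0-20 mm

# Tiny two-layer Q-network (a stand-in for the paper's deep RL model)
W1 = rng.normal(0.0, 0.1, (N_OBS, 16))
W2 = rng.normal(0.0, 0.1, (16, N_ACTIONS))

def q_values(obs):
    h = np.maximum(obs @ W1, 0.0)  # ReLU hidden layer
    return h @ W2                  # one Q-value per irrigation amount

def select_action(obs, eps=0.1):
    # epsilon-greedy: explore occasionally, otherwise irrigate greedily
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(obs)))

obs = np.array([0.23, 5.1, 0.0, 2.0])   # made-up field state
action = select_action(obs, eps=0.0)    # index of the chosen irrigation amount
```

In a full agent the network weights would be trained against the simulated net return; here they are random, so only the decision machinery is shown.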

2021 ◽  
pp. 027836492098785
Author(s):  
Julian Ibarz ◽  
Jie Tan ◽  
Chelsea Finn ◽  
Mrinal Kalakrishnan ◽  
Peter Pastor ◽  
...  

Deep reinforcement learning (RL) has emerged as a promising approach for autonomously acquiring complex behaviors from low-level sensor observations. Although a large portion of deep RL research has focused on applications in video games and simulated control, which does not connect with the constraints of learning in real environments, deep RL has also demonstrated promise in enabling physical robots to learn complex skills in the real world. At the same time, real-world robotics provides an appealing domain for evaluating such algorithms, as it connects directly to how humans learn: as an embodied agent in the real world. Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains. In this review article, we present a number of case studies involving robotic deep RL. Building on these case studies, we discuss commonly perceived challenges in deep RL and how they have been addressed in these works. We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting and are not often the focus of mainstream RL research. Our goal is to provide a resource both for roboticists and machine learning researchers who are interested in furthering the progress of deep RL in the real world.


2020 ◽  
Vol 34 (07) ◽  
pp. 11773-11781 ◽  
Author(s):  
Karl Moritz Hermann ◽  
Mateusz Malinowski ◽  
Piotr Mirowski ◽  
Andras Banki-Horvath ◽  
Keith Anderson ◽  
...  

Navigating and understanding the real world remains a key challenge in machine learning and inspires a great variety of research in areas such as language grounding, planning, navigation and computer vision. We propose an instruction-following task that requires all of the above, and which combines the practicality of simulated environments with the challenges of ambiguous, noisy real world data. StreetNav is built on top of Google Street View and provides visually accurate environments representing real places. Agents are given driving instructions which they must learn to interpret in order to successfully navigate in this environment. Since humans equipped with driving instructions can readily navigate in previously unseen cities, we set a high bar and test our trained agents for similar cognitive capabilities. Although deep reinforcement learning (RL) methods are frequently evaluated only on data that closely follow the training distribution, our dataset extends to multiple cities and has a clean train/test separation. This allows for thorough testing of generalisation ability. This paper presents the StreetNav environment and tasks, models that establish strong baselines, and extensive analysis of the task and the trained agents.


Energies ◽  
2018 ◽  
Vol 11 (10) ◽  
pp. 2615 ◽  
Author(s):  
Yang Du ◽  
Ke Yan ◽  
Zixiao Ren ◽  
Weidong Xiao

A maximum power point tracker (MPPT) should be designed to deal with various weather conditions, which differ from region to region. Customization is an important step for achieving the highest solar energy harvest. The latest developments in modern machine learning make it possible to classify weather types automatically and, consequently, assist localized MPPT design. In this study, a localized MPPT algorithm is developed, supported by a supervised weather-type classification system. Two classical machine learning techniques are employed and compared, namely the support vector machine (SVM) and the extreme learning machine (ELM). Simulation results show that the proposed method outperforms the traditional MPPT design.
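To illustrate the extreme learning machine side of the comparison, the minimal NumPy sketch below trains an ELM (fixed random hidden layer plus a closed-form least-squares readout) on synthetic two-class data. The feature names and weather classes are invented stand-ins for the paper's measured weather data, not its actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in features: [irradiance (W/m^2), temperature (C), irradiance variance]
# Two invented weather classes: 0 = clear, 1 = cloudy/variable
X = np.vstack([rng.normal([800, 30, 10], [50, 3, 5], (40, 3)),
               rng.normal([300, 22, 120], [80, 3, 30], (40, 3))])
y = np.repeat([0, 1], 40)
mu, sd = X.mean(0), X.std(0)           # standardise features before the hidden layer

# ELM: random hidden weights are never trained; only the output weights are fitted,
# in closed form, by least squares against one-hot class targets
W = rng.normal(0.0, 1.0, (3, 50))
b = rng.normal(0.0, 1.0, 50)
H = np.tanh((X - mu) / sd @ W + b)
beta = np.linalg.lstsq(H, np.eye(2)[y], rcond=None)[0]

def classify(x):
    h = np.tanh((x - mu) / sd @ W + b)
    return int(np.argmax(h @ beta))

train_acc = np.mean([classify(x) == t for x, t in zip(X, y)])
```

The absence of iterative training is the ELM's selling point over the SVM in such comparisons: fitting reduces to a single linear solve.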


2021 ◽  
Author(s):  
Antonio Serrano Muñoz ◽  
Nestor Arana-Arexolaleiba ◽  
Dimitrios Chrysostomou ◽  
Simon Bøgh

Abstract Remanufacturing automation must be designed to be flexible and robust enough to overcome uncertainties in product condition and complexities in process planning and operation. Machine learning methods, particularly reinforcement learning, have been presented as techniques to learn, improve, and generalise the automation of many robotic manipulation tasks (most of them related to grasping, picking, or assembly). However, they remain little exploited in remanufacturing, in particular for disassembly tasks. This work presents the state of the art of contact-rich disassembly using reinforcement learning algorithms, together with a study of how well the object-extraction skill generalises when applied to contact-rich disassembly tasks. The generalisation capabilities of two state-of-the-art reinforcement learning agents (trained in simulation) are tested and evaluated in simulation and in the real world while performing a disassembly task. Results show that at least one of the agents can generalise the contact-rich extraction skill. This work also identifies key concepts and gaps for research on, and application of, reinforcement learning algorithms in disassembly tasks.


Author(s):  
Krishna Kumar Joshi ◽  
Neelam Joshi ◽  
Ravi Ray Chaudhari

Nowadays, artificial intelligence is an important part of everyone's life. It can be divided into two categories: machine learning and deep learning. Machine learning is an emerging field of the current era. With the help of machine learning, we can develop computers in such a way that they can learn on their own. Various types of learning algorithms are used in machine learning. With the help of these algorithms, machines can learn a range of tasks and can behave much like human beings. Nowadays, the role of the machine is not limited to a few defined fields; it plays an important role in almost every field, such as education, entertainment, and medical diagnosis. In this research paper, the basics of machine learning are discussed, and various learning techniques, namely supervised learning, unsupervised learning, and reinforcement learning, are covered in detail. A small portion also covers the basics of convolutional neural networks (CNNs). Information about the various languages and APIs designed for, and commonly used in, machine learning and its applications is also provided.
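As a minimal concrete instance of the supervised learning category discussed above (an illustrative sketch, not taken from the paper): labelled examples, a parametric model, and a fitting procedure, here ordinary least squares.

```python
import numpy as np

# Supervised learning in miniature: labelled pairs (x, y), a model y = w*x + b,
# and a fitting procedure (ordinary least squares)
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0                       # labels produced by a known rule

A = np.column_stack([X, np.ones_like(X)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
# The learner recovers the rule from the labels alone: w = 2, b = 1
```

Unsupervised learning would drop the labels `y` and look for structure in `X` itself, while reinforcement learning would replace the labels with a reward signal observed after acting.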


Author(s):  
Ritesh Noothigattu ◽  
Djallel Bouneffouf ◽  
Nicholas Mattei ◽  
Rachita Chandra ◽  
Piyush Madan ◽  
...  

Autonomous cyber-physical agents play an increasingly large role in our lives. To ensure that they behave in ways aligned with the values of society, we must develop techniques that allow these agents to not only maximize their reward in an environment, but also to learn and follow the implicit constraints of society. We detail a novel approach that uses inverse reinforcement learning to learn a set of unspecified constraints from demonstrations and reinforcement learning to learn to maximize environmental rewards. A contextual bandit-based orchestrator then picks between the two policies: constraint-based and environment reward-based. The contextual bandit orchestrator allows the agent to mix policies in novel ways, taking the best actions from either a reward-maximizing or constrained policy. In addition, the orchestrator is transparent about which policy is being employed at each time step. We test our algorithms using Pac-Man and show that the agent is able to learn to act optimally, act within the demonstrated constraints, and mix these two functions in complex ways.
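The orchestrator idea, a contextual bandit choosing per step between a constraint-following policy and a reward-maximizing one, can be sketched as below. This is a toy reconstruction under stated assumptions, not the authors' implementation: the two policies are stubs, the context features and reward rule are invented, and a simple per-arm online linear regressor stands in for the paper's bandit algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two stand-in policies (the paper's are learned via IRL and RL; these are stubs)
def constraint_policy(state):
    return 0                   # always the "safe" action
def reward_policy(state):
    return 1                   # always the "greedy" action

D = 3                          # hypothetical context-feature dimension
weights = np.zeros((2, D))     # one linear reward model per arm (= per policy)

def choose_arm(ctx, eps=0.1):
    if rng.random() < eps:             # occasional exploration
        return int(rng.integers(2))
    return int(np.argmax(weights @ ctx))

def update(arm, ctx, reward, lr=0.1):
    # online regression of observed reward for the chosen arm only
    weights[arm] += lr * (reward - weights[arm] @ ctx) * ctx

# Toy environment: following the reward policy pays off only when ctx[0] > 0
for _ in range(500):
    ctx = rng.normal(size=D)
    arm = choose_arm(ctx)
    action = (constraint_policy, reward_policy)[arm](ctx)
    reward = 1.0 if (arm == 1) == (ctx[0] > 0) else 0.0
    update(arm, ctx, reward)
```

After training, the orchestrator defers to the reward-maximizing policy in "low-risk" contexts and to the constrained policy otherwise, and its arm choice is directly inspectable, matching the transparency point in the abstract.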


2019 ◽  
Vol XVI (4) ◽  
pp. 95-113
Author(s):  
Muhammad Tariq ◽  
Tahir Mehmood

Accurate detection, classification, and mitigation of power quality (PQ) distortive events are of utmost importance for electrical utilities and corporations. An integrated mechanism is proposed in this paper for the identification of PQ distortive events. The features are extracted from the waveforms of the distortive events using a modified form of the Stockwell transform. The categories of the distortive events were determined from these feature values by applying an extreme learning machine as an intelligent classifier. The proposed methodology was tested, under both noisy and noiseless conditions, on a database of 7,500 simulated waveforms covering fifteen types of PQ events, such as impulses, interruptions, sags and swells, notches, oscillatory transients, harmonics, and flicker, as single-stage events and their possible combinations. The results indicated satisfactory classification accuracy along with reduced sensitivity under various noisy environments.
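The paper's features come from a modified Stockwell transform; as a much simpler illustration of waveform-based PQ event detection, the sketch below flags a voltage sag from per-cycle RMS on a simulated waveform. The sampling rate, sag depth, and 0.9 pu threshold are illustrative assumptions (the threshold follows common IEEE 1159-style practice), not the paper's method.

```python
import numpy as np

# Simulated 50 Hz waveform sampled at 3.2 kHz with a sag to 0.6 pu in the middle
fs, f0 = 3200, 50
t = np.arange(0, 0.2, 1 / fs)
amp = np.where((t >= 0.08) & (t < 0.14), 0.6, 1.0)
v = amp * np.sin(2 * np.pi * f0 * t)

# Per-cycle RMS, a basic PQ feature (the paper derives richer time-frequency
# features from a modified Stockwell transform)
n = fs // f0                                          # samples per cycle
rms = np.sqrt(np.mean(v[: len(v) // n * n].reshape(-1, n) ** 2, axis=1))

# Simple rule: cycles whose RMS drops below 0.9 pu of nominal are flagged as a sag
is_sag = rms < 0.9 / np.sqrt(2)                       # nominal RMS of a 1 pu sine is 1/sqrt(2)
```

A classifier such as the paper's ELM would consume a vector of such features per waveform rather than a single threshold rule.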


Author(s):  
Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and later introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the domain of the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback vs anticipatory and adaptive control. Finally, it argues that this framework of translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function but also to enrich engineering solutions, at the level of robot learning and control, with insights coming from biology.
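The feedback vs feed-forward distinction the chapter opens with can be made concrete with a toy first-order plant (an illustrative sketch, not from the chapter): a feed-forward controller computed from a perfect model reaches the target exactly, while a purely proportional feedback controller settles with a steady-state offset, which is one motivation for the anticipatory and adaptive strategies the chapter goes on to discuss.

```python
# Toy first-order plant: x' = -a*x + u, integrated with forward Euler
a, dt, target = 0.5, 0.1, 1.0

def simulate(controller, steps=200):
    x = 0.0
    for _ in range(steps):
        x += dt * (-a * x + controller(x))
    return x

# Feed-forward: input precomputed from the plant model, ignores the state
feedforward = lambda x: a * target
# Feedback: proportional correction of the observed error
feedback = lambda x: 2.0 * (target - x)

x_ff = simulate(feedforward)   # settles at the target, since the model is exact
x_fb = simulate(feedback)      # settles short: 2*(1 - x) = a*x  =>  x = 0.8
```

The comparison also shows the trade-off: the feed-forward law is only exact while the model is, whereas the feedback law keeps correcting if the plant drifts.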
