Locally Adaptive Online Trajectory Optimization in Unknown Environments With RRTs

Author(s):  
Ethan N. Evans ◽  
Patrick Meyer ◽  
Samuel Seifert ◽  
Dimitri N. Mavris ◽  
Evangelos A. Theodorou

Rapidly-exploring Random Trees (RRTs) have gained significant attention due to provable properties such as completeness and asymptotic optimality. However, offline methods are only useful when the entire problem landscape is known a priori. Furthermore, many real-world applications have problem scopes that are orders of magnitude larger than typical mazes and bug traps, and therefore require large numbers of samples to reach typical sample densities, resulting in high computational effort for reasonably low-cost trajectories. In this paper we propose an online trajectory optimization algorithm for uncertain large environments using RRTs, which we call the Locally Adaptive Rapidly-exploring Random Tree (LARRT). This is achieved through two main contributions. First, we use an adaptive local sampling region and an adaptive sampling scheme, both of which depend on the state of the dynamic system and on observations of obstacles. Second, we propose a localized approach to planning and re-planning that fixes the root node to the current vehicle state and adds tree update functions. LARRT is designed to leverage the local problem scope to reduce computational complexity and to obtain a lower total-cost solution than a classical RRT with a similar number of nodes. Using this technique, we can ensure that popular RRT variants remain online even for prohibitively large planning problems, by transforming a large trajectory optimization problem into one that resembles receding-horizon optimization. Finally, we demonstrate our approach in simulation and discuss various algorithmic trade-offs of the proposed approach.
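The receding-horizon idea above builds on the classical RRT loop: sample a point, find the nearest tree node, steer a bounded step toward the sample, and keep the new node if it is collision-free. Below is a minimal 2-D sketch of that baseline loop only; LARRT's adaptive local sampling region and root-shifting are not shown, and `is_free` and `sample` are caller-supplied stand-ins for collision checking and the sampling scheme:

```python
import math
import random

def rrt(start, goal, is_free, sample, step=0.5, max_iters=5000, goal_tol=0.5):
    """Grow a tree from start toward random samples; return a path to goal or None."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        q = sample()
        # nearest tree node (linear scan; KD-trees are common in practice)
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        near = nodes[i]
        d = math.dist(near, q)
        if d == 0.0:
            continue
        # steer a bounded step toward the sample
        s = min(step, d)
        new = (near[0] + s * (q[0] - near[0]) / d,
               near[1] + s * (q[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # walk back to the root to extract the path
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

In an obstacle-free square, calling `rrt` with `is_free=lambda p: True` and a uniform `sample` over the workspace returns a start-to-goal node sequence; the paper's contribution is, in effect, to keep `sample` concentrated in an adaptive local region around the vehicle.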

Mathematics ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 355 ◽  
Author(s):  
Jens Jauch ◽  
Felix Bleimund ◽  
Michael Frey ◽  
Frank Gauterin

The B-spline function representation is commonly used for data approximation and trajectory definition, but filter-based methods for nonlinear weighted least-squares (NWLS) approximation are restricted to a bounded definition range. We present an algorithm, termed NRBA, for an iterative NWLS approximation of an unbounded set of data points by a B-spline function. NRBA is based on a marginalized particle filter (MPF), in which a Kalman filter (KF) solves the linear subproblem optimally while a particle filter (PF) handles the nonlinear approximation goals. NRBA can adjust the bounded definition range of the approximating B-spline function at run-time so that, regardless of the initially chosen definition range, all data points can be processed. In numerical experiments, NRBA achieves approximation results close to those of the Levenberg–Marquardt algorithm. An NWLS approximation problem is a nonlinear optimization problem, and the direct trajectory optimization approach likewise leads to a nonlinear problem. The computational effort of most solution methods grows exponentially with the trajectory length. We demonstrate how NRBA can be applied to multiobjective trajectory optimization for a battery electric vehicle (BEV) in order to determine an energy-efficient velocity trajectory. With NRBA, the effort increases only linearly with the number of processed data points and the trajectory length.
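The linear subproblem the Kalman filter solves has a simple structure: for a fixed definition range, a B-spline's value is linear in its control points, so each incoming data point yields a standard recursive least-squares (Kalman) update. The sketch below uses degree-1 (hat) basis functions on a uniform knot vector as a stand-in for the paper's setup, and omits the particle-filter layer and the run-time range extension:

```python
def linear_bspline_basis(x, knots):
    """Degree-1 (hat) B-spline basis values at x on a uniform knot vector."""
    h = knots[1] - knots[0]
    return [max(0.0, 1.0 - abs(x - t) / h) for t in knots]

def rls_update(c, P, phi, y, r=1e-2):
    """One Kalman / recursive-least-squares step for y = phi . c + noise.

    c: control-point estimates, P: their covariance, phi: basis values at the
    new data point's abscissa, y: the new ordinate, r: measurement variance."""
    n = len(c)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    s = r + sum(phi[i] * Pphi[i] for i in range(n))   # innovation variance
    K = [v / s for v in Pphi]                          # Kalman gain
    err = y - sum(phi[i] * c[i] for i in range(n))     # innovation
    c = [c[i] + K[i] * err for i in range(n)]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(n)] for i in range(n)]
    return c, P
```

Feeding samples of y = x on knots [0, 1, 2, 3] drives the control points toward [0, 1, 2, 3], since a linear spline represents that target exactly.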


Water ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 934 ◽
Author(s):  
Mariacrocetta Sambito ◽  
Gabriele Freni

In the urban drainage sector, polluting discharges in sewers can compromise the proper functioning of the sewer system, the reliability of the wastewater treatment plant and the preservation of the receiving water body. The implementation of a chemical monitoring network is therefore necessary to promptly detect and contain contamination events. Sensor location is usually an optimization exercise based on probabilistic or black-box methods, whose efficiency usually depends on the initial assumption made about which nodes are eligible to become monitoring points. It is common practice to adopt an initial non-informative assumption in which all network nodes have an equal possibility of hosting a sensor. In the present study, this common approach is compared with different initial strategies that pre-screen eligible nodes as a function of topological and hydraulic information and of non-formal ‘grey’ information on the most probable locations of the contamination source. Such strategies were previously compared for conservative xenobiotic contaminations; here they are compared for a more difficult identification exercise: the detection of non-conservative immanent contaminants. The strategies are applied to a Bayesian optimization approach that has been demonstrated to be efficient in locating contamination sources. The case study is the literature network of the Storm Water Management Model (SWMM) manual, Example 8. The results show that pre-screening and ‘grey’ information are able to reduce the computational effort needed to obtain the optimal solution or, with equal computational effort, to improve location efficiency. The nature of the contamination is highly relevant, affecting monitoring efficiency, sensor location and the computational effort needed to reach optimality.
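As a rough illustration of the sensor-location exercise (not the Bayesian optimization approach used in the study), a greedy placement over simulated contamination scenarios can be sketched as follows; `detect_time`, the node list and the scenario set are hypothetical stand-ins for outputs of a hydraulic/quality model such as SWMM:

```python
def greedy_sensor_placement(detect_time, nodes, scenarios, budget, miss_penalty=1e6):
    """Greedily pick `budget` sensor nodes minimizing mean detection time.

    detect_time(node, scenario) returns the time at which a sensor at `node`
    would first see `scenario`, or None if it never does."""
    def cost(sensors):
        total = 0.0
        for sc in scenarios:
            # earliest detection of this scenario by any placed sensor
            times = [t for s in sensors if (t := detect_time(s, sc)) is not None]
            total += min(times) if times else miss_penalty
        return total / len(scenarios)

    chosen = []
    for _ in range(budget):
        best = min((n for n in nodes if n not in chosen),
                   key=lambda n: cost(chosen + [n]))
        chosen.append(best)
    return chosen
```

Pre-screening eligible nodes, as in the study, simply shrinks the `nodes` argument before the search begins, which is where the computational saving comes from.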


2007 ◽  
Vol 111 (1120) ◽  
pp. 389-396 ◽  
Author(s):  
G. Campa ◽  
M. R. Napolitano ◽  
M. Perhinschi ◽  
M. L. Fravolini ◽  
L. Pollini ◽  
...  

This paper describes the results of an effort to analyse the performance of specific ‘pose estimation’ algorithms within a machine-vision-based approach to the problem of aerial refuelling for unmanned aerial vehicles. The approach assumes the availability of a camera on the unmanned aircraft for acquiring images of the refuelling tanker; it also assumes that a number of active or passive light sources – the ‘markers’ – are installed at specific known locations on the tanker. A sequence of machine vision algorithms on the on-board computer of the unmanned aircraft is tasked with processing the images of the tanker. Specifically, detection and labeling algorithms are used to detect and identify the markers, and a ‘pose estimation’ algorithm is used to estimate the relative position and orientation between the two aircraft. Detailed closed-loop simulation studies have been performed to compare the performance of two ‘pose estimation’ algorithms within a simulation environment that was specifically developed for the study of aerial refuelling problems. Special emphasis is placed on the analysis of the required computational effort as well as on the accuracy and the error-propagation characteristics of the two methods. The general trade-offs involved in the selection of the pose estimation algorithm are discussed. Finally, simulation results are presented and analysed.
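A core building block of marker-based pose estimation is recovering the rigid transform that aligns the known marker locations with their observed positions. The sketch below applies the Kabsch algorithm to corresponding 3-D points, which is a simplification: the algorithms compared in the paper work from 2-D image measurements and therefore also involve a camera projection model:

```python
import numpy as np

def rigid_pose(markers_body, markers_obs):
    """Least-squares R, t with markers_obs[i] ~= R @ markers_body[i] + t (Kabsch)."""
    A = np.asarray(markers_body, dtype=float)
    B = np.asarray(markers_obs, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

With at least three non-collinear markers the rotation and translation are uniquely determined, which is one reason the marker layout on the tanker matters for accuracy.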


2017 ◽  
Vol 139 (11) ◽  
Author(s):  
Wei Chen ◽  
Mark Fuge

To solve a design problem, it is sometimes necessary to identify the feasible design space. For design spaces with implicit constraints, sampling methods are usually used. These methods typically bound the design space, that is, they limit the range of the design variables. But bounds that are too small will fail to cover all possible designs, while bounds that are too large will waste the sampling budget. This paper addresses the problem of efficiently discovering (possibly disconnected) feasible domains in an unbounded design space. We propose a data-driven adaptive sampling technique, ε-margin sampling, which learns the domain boundary of feasible designs and also expands our knowledge of the design space as the available budget increases. This technique is data-efficient, in that it makes principled probabilistic trade-offs between refining existing domain boundaries and expanding the design space. We demonstrate that this method identifies feasible domains on standard test functions better than both random sampling and active sampling (via uncertainty sampling). However, a fundamental problem when applying adaptive sampling to real-world designs is that designs often have high dimensionality and thus require (in the worst case) exponentially more samples per dimension. We show how coupling design manifolds with ε-margin sampling allows us to actively expand high-dimensional design spaces without incurring this exponential penalty. We demonstrate this on real-world examples of glassware and bottle design, where our method discovers designs whose appearance and functionality differ from those in the initial design set.
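For contrast with ε-margin sampling, the uncertainty-sampling baseline mentioned above can be sketched in a few lines: estimate the probability that a candidate design is feasible and query the candidate whose estimate is closest to 0.5. The 1-D setting and the k-NN probability estimate here are illustrative choices, not the paper's models:

```python
def feasibility_prob(x, labeled, k=2):
    """k-NN estimate of P(feasible) at x from (point, is_feasible) pairs."""
    nearest = sorted(labeled, key=lambda p: abs(p[0] - x))[:k]
    return sum(1 for _, feasible in nearest if feasible) / len(nearest)

def uncertainty_sample(candidates, labeled, k=2):
    """Query the candidate whose feasibility estimate is closest to 0.5,
    i.e. the point nearest the current guess of the domain boundary."""
    return min(candidates, key=lambda x: abs(feasibility_prob(x, labeled, k) - 0.5))
```

ε-margin sampling replaces the plain |p − 0.5| criterion with a trade-off that also rewards candidates extending the sampled region, which is what lets it operate in an unbounded space.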


Author(s):  
Shibnath Mukherjee ◽  
Aryya Gangopadhyay ◽  
Zhiyuan Chen

While data mining has been widely acclaimed as a technology that can bring potential benefits to organizations, such efforts may be negatively impacted by the possibility of discovering sensitive patterns, particularly in patient data. In this article the authors present an approach to identify the optimal set of transactions that, if sanitized, would hide sensitive patterns while reducing, as much as possible, the accidental hiding of legitimate patterns and the damage done to the database. Their methodology allows the user to adjust the weights assigned to the benefit, in terms of the number of restrictive patterns hidden; the cost, in terms of the number of legitimate patterns hidden; and the damage to the database, in terms of the difference between the marginal frequencies of items in the original and sanitized databases. Most approaches to this problem found in the literature are heuristic, without a formal treatment of optimality. While integer linear programming (ILP) has previously been used in a few works as a formal optimization approach, the novelty of this method is its extremely low cost-complexity model in contrast to the others. The authors implemented their methodology in C and C++ and ran several experiments with synthetic data generated with the IBM synthetic data generator. The experiments show excellent results compared to those in the literature.
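To make the sanitization trade-off concrete, here is a deliberately simple greedy sketch that lowers the support of sensitive itemsets by deleting single items from supporting transactions. The authors' method instead formulates this as a formal optimization (ILP) balancing hidden legitimate patterns against database damage; all names below are illustrative:

```python
def sanitize(transactions, sensitive, max_support):
    """Push the support of each sensitive itemset down to max_support by
    deleting one item from each surplus supporting transaction (greedy)."""
    db = [set(t) for t in transactions]
    for pattern in sensitive:
        pat = set(pattern)
        supporting = [t for t in db if pat <= t]
        while len(supporting) > max_support:
            victim = supporting.pop()
            # delete the pattern item that is rarest overall, limiting
            # the shift in marginal item frequencies (the "damage")
            item = min(pat, key=lambda i: sum(1 for t in db if i in t))
            victim.discard(item)
    return [sorted(t) for t in db]
```

Each deletion breaks one supporting transaction, so support drops by exactly one per step; the optimization view asks which deletions achieve this while hiding the fewest legitimate patterns.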


Actuators ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 17 ◽  
Author(s):  
Niklas König ◽  
Matthias Nienhaus ◽  
Emanuele Grasso

Techniques for estimating the plunger position have successfully proven to support the operation and monitoring of electromagnetic actuators without the need for additional sensors. Sophisticated techniques in this field use an oversampled measurement of the rippled driving current in order to reconstruct the position. However, oversampling algorithms place high demands on analog-to-digital (AD) converters and require significant computational effort, both of which are undesirable in low-cost actuation systems. Moreover, such low-cost actuators are affected by eddy currents and parasitic capacitances, which significantly influence the current ripple. Therefore, in this work, these current ripples are modeled and analyzed extensively, taking those effects into account. The Integrator-Based Direct Inductance Measurement (IDIM) technique, used for processing the current ripples, is presented and compared experimentally to an oversampling technique in terms of noise robustness and implementation effort. A practical use case, sensorless end-position detection for a switching solenoid, is discussed and evaluated. The obtained results show that, under certain conditions, the IDIM technique outperforms oversampling algorithms in terms of noise robustness while requiring less sampling and calculation effort. The IDIM technique is shown to provide robust position estimation in low-cost applications, as in the presented end-position detection example.
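The basic relation behind ripple-based inductance (and hence position) estimation is v = L · di/dt over one PWM phase. A minimal sketch, neglecting the resistive drop, back-EMF, eddy currents and parasitic capacitances that the paper explicitly models, and assuming a hypothetical linear inductance-to-position map:

```python
def inductance_from_ripple(v_on, t_on, i_start, i_end):
    """Estimate coil inductance from one PWM on-phase: L ~ (v_on * t_on) / delta_i.

    Assumes the applied voltage dominates over the short on-time; the IDIM
    technique integrates the measured voltage in hardware rather than
    oversampling the current."""
    delta_i = i_end - i_start
    if delta_i == 0.0:
        raise ValueError("no current ripple: inductance is unobservable")
    return v_on * t_on / delta_i

def position_from_inductance(L, L_out, L_in):
    """Map inductance to a normalized plunger position in [0, 1], assuming a
    (hypothetical) linear L(x) between fully-out (L_out) and fully-in (L_in)."""
    return (L - L_out) / (L_in - L_out)
```

For example, a 12 V on-phase of 50 µs producing a 0.1 A ripple implies roughly 6 mH; tracking how that estimate drifts with plunger travel is what enables sensorless end-position detection.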


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 177 ◽  
Author(s):  
Gianpiero Cabodi ◽  
Paolo Camurati ◽  
Alessandro Garbo ◽  
Michele Giorelli ◽  
Stefano Quer ◽  
...  

Research on autonomous cars, which intensified in the 1990s, is becoming one of the main research paths in the automotive industry. Recent works use Rapidly-exploring Random Trees to explore the state space along a given reference path and to compute the minimum-time collision-free path in real time. These methods do not require good approximations of the reference path, they are able to cope with discontinuous routes, they are capable of navigating realistic traffic scenarios, and they derive their power from an extensive computational effort directed at improving the quality of the trajectory from step to step. In this paper, we focus on re-engineering an existing state-of-the-art sequential algorithm to obtain a CUDA-based GPGPU (General-Purpose Graphics Processing Unit) implementation. To do that, we show how to partition the original algorithm among several working threads running on the GPU, how to propagate information among threads, and how to synchronize those threads. We also give detailed evidence on how to organize memory transfers between the CPU and the GPU (and among different CUDA kernels) such that planning times are optimized and the available memory is not exceeded while storing massive amounts of data. To sum up, in our application the GPU is used for all main operations, the entire application is developed in the CUDA language, and specific attention is paid to concurrency, synchronization, and data communication. We ran experiments on several real scenarios, comparing the GPU implementation with the CPU one in terms of the quality of the generated paths and of computation (wall-clock) times. The results of our experiments show that embedded GPUs can be used as an enabler for real-time applications of computationally expensive planning approaches.


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 373 ◽  
Author(s):  
Piotr Augustyniak

A non-uniform distribution of diagnostic information in the electrocardiogram (ECG) is commonly accepted and underlies several compression, denoising and watermarking methods. Gaze tracking is a widely recognized method for identifying an observer’s preferences and areas of interest. The statistics of experts’ scanpaths were found to be a convenient quantitative estimate of the medical information density of each particular component (i.e., wave) of the ECG record. In this paper we propose the application of generalized perceptual features to control the adaptive sampling of a digital ECG. First, based on the temporal distribution of the information density, the local ECG bandwidth is estimated and projected onto the actual positions of the components in the heartbeat representation. Next, the local sampling frequency is calculated pointwise and the ECG is adaptively low-pass filtered in all simultaneous channels. Finally, sample values are interpolated at new time positions, forming a non-uniform time series. In an evaluation of perceptual sampling, an inverse transform was used to reconstruct a regularly sampled ECG with a percent root-mean-square difference (PRD) error of 3–5% (for compression ratios of 3.0–4.7, respectively). Nevertheless, tests performed with the CSE Database show good reproducibility of the ECG diagnostic features, within the IEC 60601-2-25:2015 requirements, thanks to the occurrence of distortions in the less relevant parts of the cardiac cycle.
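The pointwise-sampling step can be illustrated with a small sketch: given a uniformly sampled signal and a caller-supplied local sampling frequency (in the paper, derived from the perceptual information density of each ECG wave), generate non-uniform sample times and linearly interpolate. The anti-aliasing low-pass filtering stage described above is omitted:

```python
def adaptive_resample(signal, fs, local_fs):
    """Resample a uniformly sampled signal (rate fs) at a time-varying rate.

    local_fs(t) gives the desired instantaneous sampling frequency at time t;
    samples are linearly interpolated at the resulting non-uniform times."""
    duration = (len(signal) - 1) / fs
    times, t = [], 0.0
    while t <= duration:
        times.append(t)
        t += 1.0 / local_fs(t)          # denser where local_fs is high
    values = []
    for t in times:
        k = min(int(t * fs), len(signal) - 2)   # left neighbour index
        frac = t * fs - k
        values.append((1.0 - frac) * signal[k] + frac * signal[k + 1])
    return times, values
```

Raising `local_fs` around the QRS complex and lowering it elsewhere reproduces the paper's idea of spending samples where the diagnostic information density is highest.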

