state trajectory
Recently Published Documents

TOTAL DOCUMENTS: 73 (five years: 7)
H-INDEX: 10 (five years: 0)

Author(s): Emmanuel E. Skamangas, John A. Lawton, Jonathan T. Black
Keyword(s):

2021, Vol 2021, pp. 1-19
Author(s): Ziquan Jiao, Zhiqiang Feng, Na Lv, Wenjing Liu, Haijian Qin

A clustering similarity particle filter based on state trajectory consistency is presented for the mathematical modeling, performance estimation, and smart sensing of nonlinear systems. Starting from an information fusion model based on the consistency principle of the spatial state trajectory, the predicted observation information of the current particle filter (the original trajectory) and of a future multistage Gaussian particle filter (the modified trajectory) is selected to form the state trajectories of the sampling particles. Clustering similarity methods are used to measure the state trajectories of the sampling particles against that of the actual system (the reference trajectory), and the importance weights of a first-order Markov model are updated with the measurement results. By integrating a targeted compensation scheme for the latest measurement information into the sequential importance sampling process, the adverse effects of particle degeneracy are effectively reduced. Convergence theorems for the improved particle filter are stated and proved. The filter is applied to practical cases of nonlinear process estimation, economic statistical prediction, and battery health assessment, and the simulation results show that it outperforms traditional filters in estimation accuracy, efficiency, and robustness.
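The weighting scheme described in this abstract can be sketched as follows. This is a minimal illustrative particle filter, not the authors' algorithm: the nonlinear system model, the Gaussian-kernel similarity measure, and the use of the current measurement as a stand-in reference trajectory are all assumptions made for the sake of a runnable example.

```python
import math
import random

random.seed(0)

def transition(x):
    # hypothetical nonlinear state model (not from the paper)
    return 0.5 * x + 25 * x / (1 + x * x) + random.gauss(0, 1.0)

def observe(x):
    # hypothetical observation model
    return x * x / 20.0

def likelihood(y, x, sigma=1.0):
    d = y - observe(x)
    return math.exp(-0.5 * (d / sigma) ** 2)

def trajectory_similarity(pred_obs, ref_obs, sigma=2.0):
    # Gaussian kernel on the distance between a particle's predicted
    # observation trajectory and a reference trajectory
    d2 = sum((a - b) ** 2 for a, b in zip(pred_obs, ref_obs))
    return math.exp(-0.5 * d2 / sigma ** 2)

def trajectory_pf(ys, n=500, horizon=2):
    particles = [random.gauss(0, 2.0) for _ in range(n)]
    estimates = []
    for y in ys:
        particles = [transition(x) for x in particles]
        weights = []
        for x in particles:
            # noise-free multistage prediction of future observations
            pred, xi = [], x
            for _ in range(horizon):
                pred.append(observe(xi))
                xi = 0.5 * xi + 25 * xi / (1 + xi * xi)
            # crude stand-in reference trajectory: hold the current measurement
            ref = [y] * horizon
            weights.append(likelihood(y, x) * trajectory_similarity(pred, ref))
        z = sum(weights)
        if z == 0.0:
            weights = [1.0 / n] * n   # degenerate case: fall back to uniform
        else:
            weights = [w / z for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n)  # resample
    return estimates
```

The trajectory-similarity factor down-weights particles whose predicted observations diverge from the reference even when their current likelihood is high, which is the intuition behind weighting by state trajectory consistency.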


2021, Vol 55 (1 (254)), pp. 56-63
Author(s): Arman S. Shahinyan

The linearized dynamics of a UAV with a pendulum suspended from it is considered. The state trajectory of the UAV's center of mass and the state trajectory of its yaw angle are given; the task is to find the control actions, and the conditions, under which the UAV follows the prescribed path while keeping the pendulum stable around its lower equilibrium point. The problem is solved by the method of inverse problems of dynamics. All state trajectories of the system and all control actions are calculated, and the condition under which a solution to the path-following problem exists is obtained. A simple specified trajectory is chosen as an example to visualize the results.
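An inverse-dynamics computation of this kind can be sketched on a linearized cart-pendulum surrogate (a planar analogue of a linearized carrier-with-pendulum model; the parameter values and the sinusoidal reference trajectory are assumptions chosen for a self-contained example, not taken from the paper). Given the desired trajectory, the pendulum's particular response and the required control force follow in closed form.

```python
import math

# hypothetical parameters: carrier mass, pendulum mass, rod length, gravity
M, m, l, g = 1.5, 0.2, 0.5, 9.81
A, w = 0.3, 1.2          # desired trajectory x_d(t) = A sin(w t)

# Linearized model about the lower equilibrium (phi small):
#   (M + m) x'' + m l phi'' = u
#   l phi'' + x'' + g phi   = 0
# The particular pendulum response to x_d is phi(t) = P sin(w t):
P = A * w * w / (g - l * w * w)

def u(t):
    # inverse dynamics: control force required for exact tracking of x_d
    xdd = -A * w * w * math.sin(w * t)
    phidd = -P * w * w * math.sin(w * t)
    return (M + m) * xdd + m * l * phidd

def simulate(T=10.0, dt=1e-4):
    # forward-integrate the linearized model under u(t), starting on the
    # desired trajectory (semi-implicit Euler)
    x, xd = 0.0, A * w
    phi, phid = 0.0, P * w
    t = 0.0
    while t < T:
        xdd = (u(t) + m * g * phi) / M      # phi'' eliminated from eq. 1
        phidd = -(xdd + g * phi) / l
        xd += xdd * dt; x += xd * dt
        phid += phidd * dt; phi += phid * dt
        t += dt
    return x, A * math.sin(w * t)
```

Note how an existence condition falls out of the construction: the denominator of P requires g != l*w**2, i.e., the reference trajectory must not drive the pendulum at its natural frequency sqrt(g/l).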


Entropy, 2020, Vol 22 (10), pp. 1120
Author(s): Tom Lefebvre, Guillaume Crevecoeur

In this article, we present a generalized view on Path Integral Control (PIC) methods. PIC refers to a particular class of policy search methods that are closely tied to the setting of Linearly Solvable Optimal Control (LSOC), a restricted subclass of nonlinear Stochastic Optimal Control (SOC) problems. This class is unique in that it can be solved explicitly, yielding a formal optimal state trajectory distribution. We first review PIC theory and discuss related algorithms tailored to policy search in general. We identify a generic design strategy that relies on the existence of an optimal state trajectory distribution and finds a parametric policy by minimizing the cross-entropy between that optimal distribution and the state trajectory distribution induced by a parametric stochastic policy. Inspired by this observation, we then formulate a SOC problem that shares traits with the LSOC setting yet covers a less restrictive class of problem formulations. We refer to this problem as Entropy Regularized Trajectory Optimization; it is closely related to the Entropy Regularized Stochastic Optimal Control setting that has lately been addressed by the Reinforcement Learning (RL) community. We analyze the convergence behavior of the resulting state trajectory distribution sequence and draw connections with stochastic search methods tailored to classic optimization problems. Finally, we derive explicit updates and compare the implied Entropy Regularized PIC with earlier work in the context of both PIC and RL for derivative-free trajectory optimization.
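The cross-entropy design strategy described in this abstract can be illustrated with a minimal path-integral-style update on a toy problem. The scalar linear system, quadratic cost, and fixed sampling variance below are assumptions chosen for a self-contained sketch, not the authors' formulation: control trajectories are sampled from a Gaussian policy, weighted by exp(-cost/lambda), and the policy mean is moved to the weighted average, i.e., toward the cross-entropy projection of the exponentiated-cost trajectory distribution.

```python
import math
import random

random.seed(1)

def cost(u_seq, x0=5.0):
    # hypothetical scalar system x_{k+1} = x_k + u_k with quadratic cost
    x, c = x0, 0.0
    for u in u_seq:
        x = x + u
        c += x * x + 0.1 * u * u
    return c

def pic_update(mean, sigma, lam=1.0, n_samples=200):
    # sample control trajectories around the current policy mean,
    # weight them by exp(-cost / lambda), and return the weighted average
    samples = [[mu + random.gauss(0, sigma) for mu in mean]
               for _ in range(n_samples)]
    costs = [cost(s) for s in samples]
    cmin = min(costs)                      # shift for numerical stability
    ws = [math.exp(-(c - cmin) / lam) for c in costs]
    z = sum(ws)
    return [sum(w * s[i] for w, s in zip(ws, samples)) / z
            for i in range(len(mean))]

mean = [0.0] * 5
for _ in range(30):
    mean = pic_update(mean, sigma=0.5)
```

Iterating this update drives the cost of the mean control sequence well below that of the initial zero sequence, mirroring the derivative-free trajectory optimization setting the article compares against.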


Author(s): Sai Phani Kumar Malladi, Jayanta Mukhopadhyay, Mohamed-Chaker Larabi, Santanu Chaudhury

2020, Vol 69 (9), pp. 6016-6029
Author(s): Branislav Rudic, Markus Pichler-Scheder, Dmitry Efrosinin, Veronika Putz, Erwin Schimback, ...
