Optimal, Worst Case Filter Design via Convex Optimization

Author(s):  
Kunpeng Sun ◽  
Andy Packard
Author(s):  
Yanhui Li ◽  
Yan Liang ◽  
Xionglin Luo

The paper investigates the problem of delay-dependent L1 filtering for linear parameter-varying (LPV) systems with parameter-varying delays, in which the state-space data and the time delays depend on parameters that are measurable in real time and vary in a compact set with a bounded variation rate. Attention is focused on the design of an L1 filter that guarantees the filtering error system is asymptotically stable and satisfies a prescribed worst-case peak-to-peak gain. In particular, we concentrate on the delay-dependent case. Using a parameter-dependent Lyapunov function, a decoupled peak-to-peak performance criterion is first established for a class of LPV systems. Under this condition, an admissible filter can be found via linear matrix inequality (LMI) techniques. Using approximate basis functions and a gridding technique, the filter design problem is transformed into a feasibility problem over a finite set of parameterized LMIs. Finally, a numerical example is provided to illustrate the feasibility of the developed approach.
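The peak-to-peak (induced L-infinity) gain that the filter must bound has a concrete interpretation: the largest output peak produced by any peak-bounded disturbance. The sketch below illustrates this for a hypothetical first-order LTI system, not the LPV setting or the LMI-based design of the paper: for an LTI system the peak-to-peak gain equals the l1 norm of the impulse response, and a sign-aligned worst-case input approaches it in simulation. All constants are illustrative.

```python
# Illustrative sketch only: peak-to-peak gain of a simple first-order
# discrete-time system x[k+1] = a*x[k] + b*w[k], e[k] = c*x[k].
# Not the LPV filter design from the abstract.

def impulse_response(a, b, c, n):
    """Response e[k] to a unit impulse w[0] = 1 (no direct feedthrough)."""
    h, x = [], b            # state one step after the impulse
    for _ in range(n):
        h.append(c * x)
        x = a * x
    return h

def peak_to_peak_gain(a, b, c, n=200):
    # For an LTI system the induced L-infinity (peak-to-peak) gain
    # equals the l1 norm of the impulse response: |c*b| / (1 - |a|) here.
    return sum(abs(v) for v in impulse_response(a, b, c, n))

def simulate(a, b, c, w):
    x, e = 0.0, []
    for wk in w:
        e.append(c * x)
        x = a * x + b * wk
    return e

a, b, c = 0.5, 1.0, 1.0
gain = peak_to_peak_gain(a, b, c)        # 1 / (1 - 0.5) = 2 for these values
# A worst-case peak-bounded input aligns its signs with the impulse response.
h = impulse_response(a, b, c, 50)
w = [1.0 if v >= 0 else -1.0 for v in reversed(h)]
e = simulate(a, b, c, w + [0.0])
peak = max(abs(v) for v in e)            # approaches, never exceeds, the gain
```

The LMI machinery in the paper certifies such a gain bound for the whole parameter set at once, rather than estimating it by simulation as done here.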


2020 ◽  
Vol 34 (04) ◽  
pp. 6162-6169
Author(s):  
Guanghui Wang ◽  
Shiyin Lu ◽  
Yao Hu ◽  
Lijun Zhang

We aim to design universal algorithms for online convex optimization, which can handle multiple common types of loss functions simultaneously. The previous state-of-the-art universal method achieves minimax optimality for general convex, exponentially concave and strongly convex loss functions. However, it remains an open problem whether smoothness can be exploited to further improve the theoretical guarantees. In this paper, we provide an affirmative answer by developing a novel algorithm, namely UFO, which achieves O(√L*), O(d log L*) and O(log L*) regret bounds for the three types of loss functions respectively under the assumption of smoothness, where L* is the cumulative loss of the best comparator in hindsight, and d is the dimensionality. Thus, our regret bounds are much tighter when the comparator has a small loss, and ensure minimax optimality in the worst case. In addition, it is worth pointing out that UFO is the first to achieve the O(log L*) regret bound for strongly convex and smooth functions, which is tighter than the existing small-loss bound by a factor of O(d).
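The "small-loss" flavor of these regret bounds can be illustrated with a much simpler baseline than UFO. The sketch below is plain online gradient descent on one-dimensional smooth quadratic losses, a stand-in chosen for illustration rather than the paper's algorithm: when the best comparator in hindsight fits the stream well (L* is small), the learner's regret stays small too.

```python
# Illustrative sketch (not the UFO algorithm): online gradient descent
# on the 2-smooth losses f_t(x) = (x - z_t)^2, showing small-loss behavior.

def ogd(zs, beta=2.0):
    """Run OGD with the classic step 1/(2*beta) for beta-smooth losses."""
    eta = 1.0 / (2.0 * beta)
    x, losses = 0.0, []
    for z in zs:
        losses.append((x - z) ** 2)   # suffer the loss before updating
        x -= eta * 2.0 * (x - z)      # gradient step
    return losses

zs = [1.0] * 100                      # easy stream: comparator x* = 1 has L* = 0
losses = ogd(zs)
x_star = sum(zs) / len(zs)            # best fixed point in hindsight
l_star = sum((x_star - z) ** 2 for z in zs)
regret = sum(losses) - l_star
# With L* = 0 the cumulative regret stays O(1) here, rather than O(sqrt(T)).
```

The paper's contribution is achieving this kind of L*-dependent bound simultaneously for several loss classes with a single universal algorithm, without knowing the class in advance.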


2019 ◽  
Vol 68 (1) ◽  
pp. 393-404 ◽  
Author(s):  
Ricardo Tadashi Kobayashi ◽  
Taufik Abrao

Author(s):  
Yurii Nesterov

Abstract: In this paper we develop new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial. We analyze the simplest scheme, based on minimization of a regularized local model of the objective function, and its accelerated version obtained in the framework of estimating sequences. Their rates of convergence are compared with the worst-case lower complexity bounds for the corresponding problem classes. Finally, for the third-order methods, we suggest an efficient technique for solving the auxiliary problem, which is based on the recently developed relative smoothness condition (Bauschke et al. in Math Oper Res 42:330–348, 2017; Lu et al. in SIOPT 28(1):333–354, 2018). With this elaboration, the third-order methods become implementable and very fast. The rate of convergence in terms of the function value for the accelerated third-order scheme reaches the level $O(1/k^4)$, where k is the number of iterations. This is very close to the lower bound of the order $O(1/k^5)$, which is also justified in this paper. At the same time, in many important cases the computational cost of one iteration of this method remains on the level typical for second-order methods.
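The core iteration, minimizing a regularized local model of the objective, can be sketched one derivative lower than the paper's tensor methods: cubic regularization of Newton's method in one dimension, with the auxiliary model minimized by a coarse grid search. The test function and all constants below are illustrative assumptions, not taken from the paper; the third-order schemes follow the same pattern with a fourth-order regularizer.

```python
# Illustrative 1-D sketch of "minimize a regularized local model":
# the second-order (cubic regularization) analogue of the paper's scheme.
# Objective: f(x) = x**4 / 4 - x, with unique minimizer x* = 1.

f1 = lambda x: x ** 3 - 1             # gradient of f
f2 = lambda x: 3 * x ** 2             # Hessian of f

def cubic_reg_step(x, M, radius=2.0, grid=2001):
    """Minimize the model f'(x)h + f''(x)h^2/2 + M|h|^3/6 over h by grid search."""
    best_h, best_m = 0.0, 0.0         # h = 0 gives model value 0
    for i in range(grid):
        h = -radius + 2 * radius * i / (grid - 1)
        m = f1(x) * h + 0.5 * f2(x) * h * h + M * abs(h) ** 3 / 6.0
        if m < best_m:
            best_h, best_m = h, m
    return x + best_h

x = 3.0
for _ in range(30):
    x = cubic_reg_step(x, M=20.0)     # M dominates the Lipschitz constant of f''
# x converges toward the minimizer x* = 1
```

In the paper the auxiliary model is a multivariate polynomial one degree higher, and the relative-smoothness technique replaces this naive grid search with an efficient convex solve, which is what makes the third-order methods implementable.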

