On a Successive Approximation Technique in Solving Some Control System Optimization Problems

1963 ◽  
Vol 85 (2) ◽  
pp. 177-180 ◽  
Author(s):  
Masanao Aoki

It has been realized for some time that most realistic optimization problems defy analytical solutions in closed form and that in most cases it is necessary to resort to judicious combinations of analytical and computational procedures to solve problems. For example, in many optimization problems, one is interested in obtaining structural information on optimal and “good” suboptimal policies. Very often, various analytical as well as computational approximation techniques need to be employed to obtain a clear understanding of the structure of policy spaces. The paper discusses a successive approximation technique for constructing minimizing sequences for functionals in extremal problems, and the technique is applied to a class of control optimization problems given by: Min_v J(v) = Min_v ∫_0^1 g(u, v) dt, where du/dt = h(u, v), h(u, v) linear in u and v, and where u and v are, in general, elements of Banach spaces. In Section 2, the minimizing sequences are constructed by approximating g(u, v) by appropriate quadratic expressions with linear constraining differential equations. It is shown that under the stated conditions the functional values converge to the minimal value monotonically. In Section 3, an example is included to illustrate some of the techniques discussed in the paper.
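The monotone convergence of functional values described in the abstract can be illustrated with a toy minimizing sequence. This is a minimal sketch, assuming a simple convex scalar objective rather than the paper's Banach-space functional setting: at each step the objective is replaced by its local quadratic model, whose minimizer gives the next iterate.

```python
import math

def g(v):
    """Convex scalar objective standing in for the functional J(v)."""
    return math.exp(v) - 2.0 * v

def g_grad(v):
    return math.exp(v) - 2.0

def g_hess(v):
    return math.exp(v)

def successive_quadratic_min(v0, iters=20):
    """Build a minimizing sequence v_k by minimizing, at each step, the
    local quadratic model of g; returns the final iterate and the
    (monotonically decreasing) sequence of objective values."""
    v, values = v0, [g(v0)]
    for _ in range(iters):
        v -= g_grad(v) / g_hess(v)   # minimizer of the quadratic model
        values.append(g(v))
    return v, values

v_star, vals = successive_quadratic_min(2.0)  # minimum of g is at v = ln 2
```

For this strictly convex example the quadratic-model minimizer is a Newton step, and the objective values decrease monotonically toward the minimum, mirroring the convergence result stated for Section 2.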

Author(s):  
Hosam K. Fathy ◽  
Panos Y. Papalambros ◽  
A. Galip Ulsoy

The plant and control optimization problems are coupled in the sense that solving them sequentially does not guarantee system optimality. This paper extends previous studies of this coupling by relaxing their assumption of full state measurement availability. An original derivation of first-order necessary conditions for plant, observer, controller, and combined optimality furnishes coupling terms quantifying the underlying trilateral coupling. Special scenarios where the problems decouple are pinpointed, and a nested optimization strategy that guarantees system optimality is adopted otherwise. Applying these results to combined passive/active car suspension optimization produces a suspension design outperforming its passive, active, and sequentially optimized passive/active counterparts.
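The nested (bi-level) strategy can be sketched in miniature: for each candidate plant design, an inner loop finds the optimal controller, and an outer loop searches over plant designs using that inner-optimal cost. The quadratic cost below, with its cross term coupling plant and controller, is an illustrative assumption and not the suspension model from the paper.

```python
def system_cost(p, k):
    """Coupled plant/controller cost; the 0.5*p*k term is the coupling."""
    return (p - 1.0) ** 2 + (k - p) ** 2 + 0.5 * p * k

def inner_optimal_gain(p, k_grid):
    """Inner loop: best controller gain for a fixed plant design p."""
    return min(k_grid, key=lambda k: system_cost(p, k))

def nested_optimize(p_grid, k_grid):
    """Outer loop: search plant designs using the inner-optimal cost."""
    p_best = min(p_grid,
                 key=lambda p: system_cost(p, inner_optimal_gain(p, k_grid)))
    return p_best, inner_optimal_gain(p_best, k_grid)

grid = [i / 100.0 for i in range(-200, 201)]
p_opt, k_opt = nested_optimize(grid, grid)
```

Designing the plant alone (minimizing (p - 1)^2, giving p = 1) and then tuning the gain yields a strictly worse combined cost than the nested optimum near p ≈ 0.70, illustrating why sequential solution of coupled problems forfeits system optimality.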


Author(s):  
Bong Seong Jung ◽  
Bryan W. Karney

Genetic algorithms have been used to solve many water distribution system optimization problems, but have generally been limited to steady-state or quasi-steady-state optimization. However, transient events within pipe systems are inevitable, and the effect of water hammer should not be overlooked. The purpose of this paper is to optimize the selection, sizing, and placement of hydraulic devices in a pipeline system considering its transient response. A global optimal solution using a genetic algorithm suggests the optimal size, location, and number of hydraulic devices to cope with water hammer. This study shows that the integration of a genetic algorithm code with a transient simulator can improve both the design and the response of a pipe network. It also shows that the selection of the optimum protection strategy is an integrated problem, involving consideration of loading conditions, device and system characteristics, and protection strategy. Simpler transient control systems are often found to outperform more complex ones.
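As a rough illustration of coupling a genetic algorithm with a transient evaluation, the sketch below encodes each candidate as a (location, size) pair for a single protection device and scores it with a stand-in surge-penalty function. The network, the penalty model, and all GA parameters here are illustrative assumptions, not the authors' formulation or simulator.

```python
import random

random.seed(1)
N_NODES = 10

def surge_penalty(location, size):
    """Stand-in for a water-hammer simulation: peak surge pressure
    (lowest when the device sits at node 6 and is adequately sized)
    plus a cost term penalizing oversized devices."""
    peak = abs(location - 6) * 2.0 + 8.0 / (1.0 + size)
    return peak + 0.5 * size

def evolve(pop_size=30, generations=40):
    """Simple elitist GA over device placement and sizing."""
    pop = [(random.randrange(N_NODES), random.uniform(0.0, 5.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: surge_penalty(*c))
        parents = pop[: pop_size // 2]          # elitism: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            loc, size = a[0], b[1]              # crossover: mix traits
            if random.random() < 0.2:           # mutation
                loc = random.randrange(N_NODES)
                size = min(5.0, max(0.0, size + random.gauss(0.0, 0.5)))
            children.append((loc, size))
        pop = parents + children
    return min(pop, key=lambda c: surge_penalty(*c))

best = evolve()
```

The same selection/crossover/mutation loop applies when the fitness call is replaced by a real transient simulator; the GA only needs a scalar penalty per candidate, which is what makes the integration the paper describes practical.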


2021 ◽  
Vol 15 (3) ◽  
pp. 1-31
Author(s):  
Haida Zhang ◽  
Zengfeng Huang ◽  
Xuemin Lin ◽  
Zhe Lin ◽  
Wenjie Zhang ◽  
...  

Driven by many real applications, we study the problem of seeded graph matching. Given two graphs G1 and G2, and a small set S of pre-matched node pairs (u, v) where u is in G1 and v is in G2, the problem is to identify a matching between G1 and G2 growing from S, such that each pair in the matching corresponds to the same underlying entity. Recent studies on efficient and effective seeded graph matching have drawn a great deal of attention, and many popular methods are largely based on exploring the similarity between local structures to identify matching pairs. While these recent techniques work provably well on random graphs, their accuracy is low over many real networks. In this work, we propose to utilize higher-order neighboring information to improve the matching accuracy and efficiency. As a result, a new framework of seeded graph matching is proposed, which employs Personalized PageRank (PPR) to quantify the matching score of each node pair. To further boost the matching accuracy, we propose a novel postponing strategy, which postpones the selection of pairs that have competitors with similar matching scores. We show that the postponing strategy indeed significantly improves the matching accuracy. To improve the scalability of matching large graphs, we also propose efficient approximation techniques based on algorithms for computing PPR heavy hitters. Our comprehensive experimental studies on large-scale real datasets demonstrate that, compared with state-of-the-art approaches, our framework not only increases both precision and recall by a significant margin but also achieves speed-ups of more than one order of magnitude.
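The core scoring idea can be sketched in a few lines: compute a Personalized PageRank vector from the seed set in each graph, then score a candidate pair by how similar its PPR mass is in the two graphs. The toy graphs and parameters below are assumptions for illustration; the paper's framework adds heavy-hitter approximations and the postponing strategy on top of this basic idea.

```python
def ppr(adj, seeds, alpha=0.15, iters=50):
    """Personalized PageRank by power iteration: restart probability
    alpha, restart mass spread uniformly over the seed nodes."""
    n = len(adj)
    restart = [1.0 / len(seeds) if v in seeds else 0.0 for v in range(n)]
    p = restart[:]
    for _ in range(iters):
        nxt = [alpha * restart[v] for v in range(n)]
        for u in range(n):
            for v in adj[u]:
                nxt[v] += (1.0 - alpha) * p[u] / len(adj[u])
        p = nxt
    return p

# two isomorphic toy graphs: node i in g1 corresponds to node i in g2
g1 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
g2 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
seeds = [0]                      # one pre-matched pair: (0, 0)

p1, p2 = ppr(g1, seeds), ppr(g2, seeds)

def match_score(u, v):
    """Closer PPR mass relative to the seeds -> higher matching score."""
    return -abs(p1[u] - p2[v])
```

Because the two toy graphs are identical, each node's highest-scoring candidate is its true counterpart; on real graphs the scores are noisier, which is precisely where the postponing of near-tied pairs helps.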


Author(s):  
Ashok V. Kumar ◽  
David C. Gossard

Abstract A sequential approximation technique for non-linear programming is presented here that is particularly suited for problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is iteratively generated using a linear approximation for the objective function and setting move limits on the variables using a barrier method. These sub-problems are strictly convex. Computation per iteration is significantly reduced by not solving the sub-problems exactly; instead, at each iteration a few Newton steps are taken on the sub-problem. A criterion for updating the move limits is described that reduces or eliminates step-size reduction during line search. The method was found to perform well for unconstrained and linearly constrained optimization problems. It requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once per iteration.
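A scalar sketch of this scheme, under illustrative assumptions (a simple smooth test function and fixed move limit and barrier constants, not the authors' benchmark): the objective is linearized at the current point, a log-barrier enforces the move limits, and each convex sub-problem receives only a few Newton steps rather than an exact solve.

```python
import math

def f(x):
    """Smooth objective standing in for an expensive design cost."""
    return (x - 3.0) ** 2 + math.cos(x)

def f_grad(x):
    return 2.0 * (x - 3.0) - math.sin(x)

def subproblem_step(g, delta=0.5, mu=0.5, newton_steps=3):
    """Approximately minimize the sub-problem
        g*s - mu*(ln(delta - s) + ln(delta + s))
    with a few Newton steps; the log-barrier keeps |s| < delta,
    so delta acts as the move limit."""
    s = 0.0
    for _ in range(newton_steps):
        grad = g + mu * (1.0 / (delta - s) - 1.0 / (delta + s))
        hess = mu * (1.0 / (delta - s) ** 2 + 1.0 / (delta + s) ** 2)
        s -= grad / hess
        s = max(-0.999 * delta, min(0.999 * delta, s))  # stay inside barrier
    return s

def optimize(x0, iters=60):
    x = x0
    for _ in range(iters):
        x += subproblem_step(f_grad(x))  # one gradient evaluation per iteration
    return x

x_star = optimize(0.0)  # approaches the stationary point 2(x - 3) = sin x
```

Far from the optimum the barrier caps the step near the move limit; near the optimum the step shrinks in proportion to the gradient, so the iterates settle onto a stationary point with exactly one gradient evaluation per iteration and no Hessian of f, as the abstract claims for the full method.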


2020 ◽  
Vol 189 ◽  
pp. 106984 ◽  
Author(s):  
Behzad Pouladi ◽  
Abdorreza Karkevandi-Talkhooncheh ◽  
Mohammad Sharifi ◽  
Shahab Gerami ◽  
Alireza Nourmohammad ◽  
...  
