Optimal Parametric Control of Nonlinear Random Vibrating Systems

2020 ◽  
Vol 143 (4) ◽  
Author(s):  
Wenwen Chang ◽  
Xiaoling Jin ◽  
Zhilong Huang

Abstract: Due to the great progress in the fields of smart structures, especially smart soft materials and structures, the parametric control of nonlinear systems has attracted extensive attention in the scientific and industrial communities. This paper is devoted to deriving the optimal parametric control strategy for nonlinear random vibrating systems in which the excitations are confined to Gaussian white noises. For a prescribed performance index balancing control performance against control cost, the stochastic dynamic programming equation for the value function is first derived from the principle of dynamic programming. The optimal feedback control law is then established from the extremum condition. An explicit expression for the value function is determined by approximating it as a quadratic function of the state variables and solving the final dynamic programming equation. The application and efficacy of the optimal parametric control are illustrated by a randomly excited Duffing oscillator and a dielectric elastomer balloon under random pressure. The numerical results show that the optimal parametric control possesses good effectiveness, high efficiency, and high robustness to excitation intensity, and is superior to the associated optimal bounded parametric control.
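The control structure described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's derivation: it simulates a randomly excited Duffing oscillator with Euler–Maruyama and applies a parametric (stiffness-modulating) feedback law of the form that the extremum condition yields when the value function is approximated as a quadratic in the states. The coefficients `p12`, `p22` and the control-cost weight `r` are assumed placeholder values, not the paper's solved coefficients.

```python
import numpy as np

def simulate_duffing(steps=20000, dt=0.01, sigma=0.5, controlled=True, seed=0):
    """Euler-Maruyama simulation of a randomly excited Duffing oscillator,
    optionally with parametric feedback control of the stiffness term.
    p12, p22, r are illustrative placeholders, not the paper's values."""
    rng = np.random.default_rng(seed)
    delta, alpha, beta = 0.2, 1.0, 1.0   # damping, linear and cubic stiffness
    p12, p22, r = 0.5, 1.0, 0.5          # assumed quadratic value-function coefficients, control cost
    x1, x2 = 0.0, 0.0
    xs = np.empty(steps)
    for k in range(steps):
        # parametric control: u multiplies the displacement (stiffness modulation);
        # with quadratic V, the extremum condition gives u proportional to -x1 * dV/dx2
        u = -(x1 * (p12 * x1 + p22 * x2)) / (2.0 * r) if controlled else 0.0
        dw = rng.normal(0.0, np.sqrt(dt))
        x1_new = x1 + x2 * dt
        x2 += (-delta * x2 - alpha * x1 - beta * x1**3 + u * x1) * dt + sigma * dw
        x1 = x1_new
        xs[k] = x1
    return xs

var_unc = np.var(simulate_duffing(controlled=False))
var_con = np.var(simulate_duffing(controlled=True))
```

With the same noise realization (same seed) the controlled response variance comes out smaller than the uncontrolled one, since the feedback term adds state-dependent damping and hardening.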

Author(s):  
Zhili Tian ◽  
Weidong Han ◽  
Warren B. Powell

Problem definition: Clinical trials are crucial to new drug development. This study investigates optimal patient enrollment in clinical trials with interim analyses, which are analyses of treatment responses from patients at intermediate points. Our model considers uncertainties in patient enrollment and drug treatment effectiveness. We consider the benefits of completing a trial early and the cost of accelerating a trial by maximizing the net present value of the drug's cumulative profit. Academic/practical relevance: Clinical trials frequently account for the largest cost in drug development, and patient enrollment is an important problem in trial management. Our study develops a dynamic program, accurately capturing the dynamics of the problem, to optimize patient enrollment while learning the treatment effectiveness of an investigated drug. Methodology: The model explicitly captures both the physical state (enrolled patients) and belief states about the effectiveness of the investigated drug and a standard treatment drug. Using Bayesian updates and dynamic programming, we establish monotonicity of the value function in the state variables and characterize an optimal enrollment policy. We also introduce, for the first time, the use of backward approximate dynamic programming (ADP) for this problem class. We illustrate the findings using a clinical trial program from a leading firm. Our study performs sensitivity analyses of the optimal enrollment policy with respect to the input parameters. Results: The value function is monotonic in cumulative patient enrollment and in the average responses of treatment for the investigated drug and the standard treatment drug. The optimal enrollment policy is nondecreasing in the average response from patients using the investigated drug and is nonincreasing in cumulative patient enrollment in periods between two successive interim analyses.
The forward ADP algorithm (and the backward ADP algorithm) exploiting the monotonicity of the value function reduced the run time from 1.5 months with the exact method to one day (and 20 minutes, respectively), while staying within 4% of the exact solution. Through an application to a leading firm's clinical trial program, the study demonstrates that the firm can realize a sizable gain in drug profit by following the optimal policy our model provides. Managerial implications: We developed a new model for improving the management of clinical trials. Our study provides insights into the optimal policy and into the sensitivity of the value function to the dropout rate and the prior probability distribution. A firm can realize a sizable gain in a drug's profit by managing its trials using the optimal policies and the properties of the value function. We illustrated that firms can use the ADP algorithms to develop their patient enrollment strategies.
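The backward-recursion structure and the monotonicity result can be illustrated with a toy version of the problem. The sketch below is a deliberately simplified stand-in: a backward dynamic program over (period, cumulative enrollment) with a fixed success threshold, omitting the belief states, Bayesian updates, and calibrated costs of the paper; all numbers are illustrative placeholders.

```python
import numpy as np

# Toy backward dynamic program over (period, cumulative enrollment).
# Costs, rewards, and the success threshold are illustrative placeholders.
T, N = 6, 20                  # periods, max cumulative patients
c_enroll, target, reward = 1.0, 15, 100.0
V = np.zeros((T + 1, N + 1))
V[T, target:] = reward        # trial succeeds if enough patients were enrolled
policy = np.zeros((T, N + 1), dtype=int)
for t in range(T - 1, -1, -1):
    for n in range(N + 1):
        best, best_a = -np.inf, 0
        for a in range(0, N - n + 1):          # patients to enroll this period
            val = -c_enroll * a + V[t + 1, n + a]
            if val > best:
                best, best_a = val, a
        V[t, n], policy[t, n] = best, best_a
# as in the paper, the value function is nondecreasing in cumulative enrollment
assert all(np.all(np.diff(V[t]) >= 0) for t in range(T + 1))
```

Monotone structure of this kind is exactly what the forward and backward ADP algorithms exploit to prune the state space.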


2020 ◽  
Vol 26 ◽  
pp. 109
Author(s):  
Manil T. Mohan

In this work, we consider the controlled two-dimensional tidal dynamics equations in bounded domains. A distributed optimal control problem is formulated as the minimization of a suitable cost functional subject to the controlled 2D tidal dynamics equations. The existence of an optimal control is shown, and the dynamic programming method for the optimal control of the 2D tidal dynamics system is also described. We show that the feedback control can be obtained from the solution of an infinite-dimensional Hamilton-Jacobi equation. The non-differentiability and lack of smoothness of the value function forced us to use the method of viscosity solutions to obtain a solution of the infinite-dimensional Hamilton-Jacobi equation. The Bellman principle of optimality for the value function is also obtained. We show that a viscosity solution to the Hamilton-Jacobi equation can be used to derive the Pontryagin maximum principle, which gives the first-order necessary conditions of optimality. Finally, we characterize the optimal control using the adjoint variable.
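For orientation, the Hamilton-Jacobi equation referred to above has the generic finite-horizon form below; the concrete state space, operators, and running cost of the controlled tidal system are as in the paper, and the symbols here are only schematic placeholders.

```latex
% Generic (finite-horizon) Hamilton-Jacobi-Bellman equation for a value
% function V; \mathcal{A}, \mathcal{B}, \ell, g are schematic stand-ins for
% the tidal dynamics, control operator, running cost, and terminal cost.
\frac{\partial V}{\partial t}(t, y)
  + \inf_{u \in U} \Big\{ \big\langle \mathcal{A}(y) + \mathcal{B}(y, u),\, D V(t, y) \big\rangle
  + \ell(y, u) \Big\} = 0, \qquad V(T, y) = g(y).
```

When $V$ fails to be differentiable, as in the abstract above, this equation is interpreted in the viscosity sense.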


1997 ◽  
Vol 1 (1) ◽  
pp. 255-277 ◽  
Author(s):  
MICHAEL A. TRICK ◽  
STANLEY E. ZIN

We review the properties of algorithms that characterize the solution of the Bellman equation of a stochastic dynamic program as the solution to a linear program. The variables in this problem are the ordinates of the value function; hence, the number of variables grows with the state space. For situations in which this size becomes computationally burdensome, we suggest the use of low-dimensional cubic-spline approximations to the value function. We show that fitting this approximation through linear programming provides upper and lower bounds on the solution to the original large problem. The information contained in these bounds leads to inexpensive improvements in the accuracy of approximate solutions.
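The classical LP characterization the abstract starts from can be shown on a toy discounted MDP: minimize the sum of the value-function ordinates subject to one Bellman inequality per state-action pair. The 2-state, 2-action MDP below is an illustrative example of that formulation, not taken from the paper, and the spline-approximation step is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# LP formulation of the Bellman equation for a discounted MDP:
#   minimize  sum_s V(s)
#   s.t.      V(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) V(s')  for all (s,a).
# The variables are the ordinates of the value function, one per state.
gamma = 0.9
r = np.array([[1.0, 0.0], [0.0, 2.0]])            # r[s, a], toy rewards
P = np.array([[[0.8, 0.2], [0.2, 0.8]],           # P[s, a, s'], toy transitions
              [[0.5, 0.5], [0.1, 0.9]]])
S, A = r.shape
# rewrite each constraint as (gamma * P[s,a] - e_s) @ V <= -r[s,a]
A_ub, b_ub = [], []
for s in range(S):
    for a in range(A):
        row = gamma * P[s, a].copy()
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-r[s, a])
res = linprog(c=np.ones(S), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * S)
V = res.x                                         # value-function ordinates

# cross-check against value iteration
Vi = np.zeros(S)
for _ in range(2000):
    Vi = (r + gamma * P @ Vi).max(axis=1)
assert np.allclose(V, Vi, atol=1e-4)
```

With the exact one-variable-per-state formulation the LP recovers the Bellman fixed point; the paper's contribution is to replace these ordinates with a low-dimensional cubic-spline parameterization when the state space makes this LP too large.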


Author(s):  
Min Sun

Abstract: We consider in this article an evolutionary monotone follower problem in [0, 1]. The state processes under consideration are controlled diffusion processes, solutions of dy_x(t) = g(y_x(t), t) dt + σ(y_x(t), t) dw_t + dυ_t with y_x(0) = x ∈ [0, 1], where the control processes υ_t are increasing, positive, and adapted. The cost functional is of integral type, with certain explicit costs of control action, including the cost of jumps. We present some analytic results on the value function, mainly its characterisation, by standard dynamic programming arguments.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yanan Li

This paper examines the optimal annuitization, investment, and consumption strategies of an individual facing a time-dependent mortality rate in a tax-deferred annuity model, considering both the case in which the rate of buying annuities is unrestricted and the case in which it is restricted. Using the dynamic programming principle, we obtain the corresponding HJB equation. Since the existence of the tax and the time-dependence of the value function make the HJB equation hard to solve, we first analyze the problem in a simpler case and use numerical methods to obtain the solution and some of its useful properties. Then, using these properties and the Kuhn–Tucker conditions, we treat the general cases and obtain the value functions and the optimal annuitization strategies, respectively.
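The backward numerical treatment described above can be sketched on a stripped-down cousin of the problem. The code below is a hedged illustration only: a discrete-time consumption problem with CRRA utility and a time-dependent mortality hazard, solved by backward induction on a wealth grid. It omits the annuity purchases, investment choice, and tax deferral of the paper, and every parameter is an assumed placeholder.

```python
import numpy as np

# Toy backward induction for consumption under a time-dependent mortality
# rate (Gompertz-like hazard). Utility, grids, and rates are illustrative.
T, W = 10, 50                       # horizon (years), wealth grid size
wealth = np.linspace(1e-3, 10.0, W)
rf, gamma_u = 1.03, 2.0             # gross risk-free return, CRRA coefficient
mort = 0.01 * 1.1 ** np.arange(T)   # hazard of death in each period
u = lambda c: c ** (1 - gamma_u) / (1 - gamma_u)
V = u(wealth)                       # terminal period: consume everything
for t in range(T - 1, -1, -1):
    p_live = 1.0 - mort[t]
    V_new = np.empty(W)
    for i, w in enumerate(wealth):
        c = np.linspace(1e-3, w, 40)             # candidate consumption levels
        w_next = np.clip((w - c) * rf, wealth[0], wealth[-1])
        cont = np.interp(w_next, wealth, V)      # interpolated continuation value
        V_new[i] = np.max(u(c) + p_live * cont)  # Bellman maximization
    V = V_new
```

The time-dependence of the hazard `mort[t]` is what makes the value function time-dependent, mirroring the difficulty the abstract points to for the full HJB equation.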

