Statistical Machine Learning in Model Predictive Control of Nonlinear Processes

Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1912
Author(s):  
Zhe Wu ◽  
David Rincon ◽  
Quanquan Gu ◽  
Panagiotis D. Christofides

Recurrent neural networks (RNNs) have been widely used to model nonlinear dynamic systems from time-series data. While the training error of neural networks can often be made sufficiently small, there is no general framework to guide model construction or to determine the generalization accuracy of RNN models intended for use in model predictive control systems. In this work, we employ statistical machine learning theory to develop a methodological framework of generalization error bounds for RNNs. The RNN models are then used to predict state evolution in model predictive control (MPC), for which closed-loop stability is established in a probabilistic manner. A nonlinear chemical process example is used to investigate the impact of training sample size, RNN depth, width, and input time length on the generalization error, along with analyses of probabilistic closed-loop stability through closed-loop simulations under Lyapunov-based MPC.
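As a rough illustration of the idea in this abstract (not the authors' formulation), the sketch below uses a small single-layer RNN as the state predictor inside a toy MPC routine. All weight matrices, dimensions, cost weights, and the coarse candidate-input grid are illustrative assumptions; a real RNN model would be trained on time-series data from the process.

```python
import numpy as np

# Illustrative (untrained) RNN weights; in practice these come from
# training on time-series data of the nonlinear process.
rng = np.random.default_rng(0)
W_h = 0.1 * rng.standard_normal((4, 4))   # hidden-to-hidden weights
W_x = 0.1 * rng.standard_normal((4, 4))   # (state, control)-to-hidden weights
W_o = 0.1 * rng.standard_normal((2, 4))   # hidden-to-state readout

def rnn_step(h, x, u):
    """One RNN cell update: takes current state and control as input,
    returns the new hidden state and the predicted next process state."""
    z = np.concatenate([x, u])
    h_new = np.tanh(W_h @ h + W_x @ z)
    return h_new, W_o @ h_new

def mpc_action(x, horizon=3, candidates=(-1.0, 0.0, 1.0)):
    """Pick the constant input sequence that minimizes a quadratic cost
    over the RNN-predicted state trajectory (exhaustive coarse search)."""
    best_u, best_cost = None, np.inf
    for u_val in candidates:
        u = np.array([u_val, u_val])
        h, x_pred, cost = np.zeros(4), x, 0.0
        for _ in range(horizon):
            h, x_pred = rnn_step(h, x_pred, u)
            cost += float(x_pred @ x_pred) + 0.01 * float(u @ u)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

u0 = mpc_action(np.array([0.5, -0.2]))
```

A Lyapunov-based MPC would additionally impose a contractive constraint on a Lyapunov function of the predicted states; the exhaustive search here stands in for a nonlinear program solver purely for readability.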

Author(s):  
Dominic Liao-McPherson ◽  
Terrence Skibik ◽  
Jordan Leung ◽  
Ilya V. Kolmanovsky ◽  
Marco M. Nicotra

2012 ◽  
Vol 2012 ◽  
pp. 1-8
Author(s):  
Wei Shanbi ◽  
Chai Yi ◽  
Li Penghua

This paper addresses a distributed model predictive control (DMPC) scheme for multi-agent systems that improves control performance. To penalize the deviation of the computed state trajectory from the assumed state trajectory, a deviation penalty is included in the local cost function of each agent. Closed-loop stability is guaranteed when the weight on the deviation penalty is large; however, a large weight causes a substantial loss of control performance. Hence, time-varying compatibility constraints are designed for each agent to balance closed-loop stability against control performance, so that closed-loop stability is achieved with a small weight on the deviation penalty. A numerical example illustrates the effectiveness of the proposed scheme.
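A minimal sketch of the two ingredients this abstract describes: a local cost with a weighted deviation penalty, and a time-varying compatibility check. The function names (`local_cost`, `compatible`), the weights `q`, `r`, `w_dev`, and the bound sequence `kappa` are hypothetical placeholders, not the paper's notation.

```python
import numpy as np

def local_cost(x_traj, u_traj, x_assumed, q=1.0, r=0.1, w_dev=10.0):
    """Local DMPC objective: quadratic stage cost plus a penalty (weight
    w_dev) on the deviation of the computed state trajectory from the
    trajectory assumed by the neighboring agents."""
    stage = q * np.sum(x_traj ** 2) + r * np.sum(u_traj ** 2)
    deviation = w_dev * np.sum((x_traj - x_assumed) ** 2)
    return stage + deviation

def compatible(x_traj, x_assumed, kappa):
    """Time-varying compatibility constraint: at each step k the computed
    state must stay within the bound kappa[k] of the assumed trajectory."""
    dev = np.linalg.norm(x_traj - x_assumed, axis=1)
    return bool(np.all(dev <= kappa))

# Toy 5-step trajectories for a 2-state agent.
x_traj = np.zeros((5, 2))
u_traj = np.zeros((5, 2))
x_assumed = np.full((5, 2), 0.1)
kappa = np.full(5, 0.2)

cost = local_cost(x_traj, u_traj, x_assumed)
ok = compatible(x_traj, x_assumed, kappa)
```

The trade-off in the abstract corresponds to shrinking `kappa` over time so that stability is enforced by the constraint rather than by a large `w_dev`.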


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Abdulkadir Canatar ◽  
Blake Bordelon ◽  
Cengiz Pehlevan

A theoretical understanding of generalization remains an open problem for many machine learning models, including deep networks, where overparameterization leads to better performance, contradicting the conventional wisdom from classical statistics. Here, we investigate generalization error for kernel regression, which, besides being a popular machine learning method, also describes certain infinitely overparameterized neural networks. We use techniques from statistical mechanics to derive an analytical expression for generalization error applicable to any kernel and data distribution. We present applications of our theory to real and synthetic datasets, and for many kernels including those that arise from training deep networks in the infinite-width limit. We elucidate an inductive bias of kernel regression to explain data with simple functions, characterize whether a kernel is compatible with a learning task, and show that more data may impair generalization when noisy or not expressible by the kernel, leading to non-monotonic learning curves with possibly many peaks.
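The quantity this abstract studies can be estimated empirically. The sketch below measures the test error of kernel ridge regression with an RBF kernel on a noisy 1-D target for several training-set sizes; it is a numerical illustration only, not the paper's analytical statistical-mechanics expression, and the target function, bandwidth, and noise level are arbitrary choices.

```python
import numpy as np

def krr_test_error(n_train, noise=0.5, lam=1e-3, n_test=200, seed=0):
    """Empirical generalization error of kernel ridge regression with an
    RBF kernel on a noisy sin(3x) target, measured against the clean target."""
    rng = np.random.default_rng(seed)
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / 0.5)
    # Noisy training data.
    x = rng.uniform(-1, 1, n_train)
    y = np.sin(3 * x) + noise * rng.standard_normal(n_train)
    # Ridge-regularized kernel regression coefficients.
    alpha = np.linalg.solve(k(x, x) + lam * np.eye(n_train), y)
    # Mean squared error on fresh test points, against the noiseless target.
    xt = rng.uniform(-1, 1, n_test)
    pred = k(xt, x) @ alpha
    return float(np.mean((pred - np.sin(3 * xt)) ** 2))

errs = [krr_test_error(n) for n in (10, 40, 160)]
```

Averaging such estimates over many data draws, and sweeping `n_train` finely, is how one would check the non-monotonic learning curves the paper predicts for targets not expressible by the kernel.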

