Asymptotic convergence properties of the annealing evolution algorithm

2008 ◽  
Vol 25 (06) ◽  
pp. 735-751
Author(s):  
Y.J. Cao


Author(s):  
Kazuki Iwamoto ◽  
Tadashi Dohi ◽  
Naoto Kaio

This paper addresses statistical estimation of the optimal repair-cost limits minimizing the long-run average cost per unit time in a discrete setting. Two discrete repair-cost limit replacement models, with and without imperfect repair, are considered. We derive the optimal repair-cost limits analytically and develop non-parametric statistical procedures to estimate them from a complete sample of repair-cost data. The discrete total time on test (DTTT) concept is then introduced and applied to construct the resulting estimators. Numerical experiments via Monte Carlo simulation demonstrate their asymptotic convergence properties as the number of repair-cost observations increases. A comprehensive bibliography on this research topic is also provided.
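The TTT-based estimation step lends itself to a short illustration. The Python sketch below computes the empirical scaled total time on test transform of a repair-cost sample and selects a cost limit from the TTT plot. The selection ratio and the constant `beta` standing in for the cost parameters are hypothetical placeholders, not the model-specific rule derived in the paper.

```python
import numpy as np

def scaled_ttt(costs):
    """Empirical scaled total time on test (TTT) transform.

    Returns the plot points (i/n, T_i / T_n), where
    T_i = sum_{j<=i} (n - j + 1) * (x_(j) - x_(j-1)) and x_(0) = 0.
    """
    x = np.sort(np.asarray(costs, dtype=float))
    n = len(x)
    gaps = np.diff(np.concatenate(([0.0], x)))      # x_(j) - x_(j-1)
    t = np.cumsum((n - np.arange(n)) * gaps)        # T_1, ..., T_n
    return np.arange(1, n + 1) / n, t / t[-1]

def estimate_limit(costs, beta):
    """TTT-plot-based repair-cost-limit estimate (illustrative form).

    Picks the order statistic x_(j*) whose TTT-plot point maximizes
    phi(j/n) / (j/n + beta); `beta` stands in for the cost-parameter
    constant that a model-specific rule would prescribe.
    """
    u, phi = scaled_ttt(costs)
    j = int(np.argmax(phi / (u + beta)))
    return np.sort(np.asarray(costs, dtype=float))[j]

# Example with 200 simulated discrete repair costs and a hypothetical beta.
rng = np.random.default_rng(1)
sample = np.round(rng.gamma(shape=2.0, scale=50.0, size=200))
print(estimate_limit(sample, beta=0.3))
```

As the sample grows, the empirical TTT transform converges uniformly to its population counterpart, which is what drives the consistency of estimators of this type.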


2011 ◽  
Vol 23 (8) ◽  
pp. 2140-2168 ◽  
Author(s):  
Yan Yang ◽  
Jinwen Ma

Mixture of experts (ME) is a modular neural network architecture for supervised classification. The double-loop expectation-maximization (EM) algorithm has been developed for learning the parameters of the ME architecture, and the iteratively reweighted least squares (IRLS) algorithm and the Newton-Raphson algorithm are two popular schemes for learning the parameters in the inner loop, or gating network. In this letter, we investigate the asymptotic convergence properties of the EM algorithm for ME using either the IRLS or the Newton-Raphson approach. With the help of an overlap measure for the ME model, we obtain an upper bound on the asymptotic convergence rate of the EM algorithm in each case. Moreover, we find that for the Newton approach, a specific Newton-Raphson approach to learning the parameters in the inner loop, the upper bound on the asymptotic convergence rate of the EM algorithm locally around the true solution Θ* is o(e^{0.5−ϵ}(Θ*)), where ϵ>0 is an arbitrarily small number, o(x) denotes a higher-order infinitesimal as x → 0, and e(Θ*) is a measure of the average overlap of the ME model. That is, as the average overlap of the true ME model tends to zero in the large-sample limit, the EM algorithm with the Newton approach to learning the parameters in the inner loop tends to be asymptotically superlinear. Finally, we substantiate our theoretical results by simulation experiments.
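To make the double-loop structure concrete, here is a minimal Python sketch of EM for a two-expert ME model with a logistic gate, using a few IRLS (Newton) steps as the inner loop. The model dimensions, initialization, and iteration counts are simplifications and are not taken from the letter.

```python
import numpy as np

def em_mixture_of_experts(x, y, n_em=50, n_irls=5):
    """Double-loop EM for a two-expert mixture of experts (sketch).

    Experts: y | x, k ~ N(w_k . [x, 1], sigma_k^2).
    Gate:    P(k = 0 | x) = sigmoid(v . [x, 1]).
    Inner loop: IRLS/Newton steps on the gating log-likelihood,
    with the E-step responsibilities as soft targets.
    """
    X = np.column_stack([x, np.ones_like(x)])
    w = np.array([[1.0, 0.0], [-1.0, 0.0]])      # crude initialization
    sig2 = np.array([np.var(y), np.var(y)])
    v = np.zeros(2)

    for _ in range(n_em):
        # E-step: responsibility h_i of expert 0 for each point.
        g0 = 1.0 / (1.0 + np.exp(-X @ v))
        dens = [np.exp(-(y - X @ w[k]) ** 2 / (2 * sig2[k]))
                / np.sqrt(2 * np.pi * sig2[k]) for k in (0, 1)]
        num = g0 * dens[0]
        h = num / (num + (1.0 - g0) * dens[1] + 1e-300)

        # M-step, experts: weighted least squares in closed form.
        for k, r in ((0, h), (1, 1.0 - h)):
            w[k] = np.linalg.solve(X.T @ (X * r[:, None]), X.T @ (r * y))
            sig2[k] = (r * (y - X @ w[k]) ** 2).sum() / r.sum()

        # M-step, gate: inner IRLS loop (Newton on logistic likelihood).
        for _ in range(n_irls):
            p = 1.0 / (1.0 + np.exp(-X @ v))
            s = np.maximum(p * (1.0 - p), 1e-8)  # IRLS weights
            v = v + np.linalg.solve(X.T @ (X * s[:, None]), X.T @ (h - p))
    return w, sig2, v

# Synthetic data from two overlapping linear regimes.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 500)
k = (x + rng.normal(0, 1, 500) > 0).astype(int)  # noisy regime assignment
y = np.where(k == 0, 2 * x + 1, -x - 1) + rng.normal(0, 0.3, 500)
print(em_mixture_of_experts(x, y)[0])            # fitted expert weights
```

When the regimes are well separated, the responsibilities h quickly saturate near 0 or 1; that is the low-overlap regime in which the letter's bound predicts near-superlinear behavior of the outer EM loop.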


2021 ◽  
Vol 31 (4) ◽  
pp. 1-26
Author(s):  
Jungmin Han ◽  
Seong-Hee Kim ◽  
Chuljin Park

The penalty function with memory (PFM), proposed by Park and Kim [2015], addresses discrete optimization via simulation (DOvS) problems with multiple stochastic constraints, where the performance measures of both the objective and the constraints can be estimated only by stochastic simulation. The original PFM is shown to perform well, finding a true best feasible solution with higher probability than competing methods even when constraints are tight or near-tight. However, PFM applies simple budget-allocation rules (e.g., assigning an equal number of additional observations) to the solutions sampled at each search iteration and uses a rather complicated penalty sequence with several user-specified parameters. In this article, we propose an improved version of PFM, namely IPFM, which can combine PFM with any simulation budget allocation procedure that satisfies certain conditions within a general DOvS framework. We present a version of a simulation budget allocation procedure useful for IPFM and introduce a new penalty sequence, namely PS2+, which is simpler than the original penalty sequence yet retains convergence properties within IPFM with better finite-sample performance. Asymptotic convergence properties of IPFM with PS2+ are proved. Our numerical results show that the proposed method greatly improves both efficiency and accuracy compared to the original PFM.
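The interplay between a growing penalty and a budget-allocation rule can be sketched in a few lines. The Python below implements a generic penalty-with-memory search loop on a toy constrained DOvS problem; the log-shaped penalty sequence, the equal per-iteration budget, and the `simulate` interface are illustrative stand-ins, not the PS2+ sequence or the allocation procedure proved convergent in the article.

```python
import numpy as np

def penalized_search(solutions, simulate, threshold, n0=10, iters=200, seed=0):
    """Generic penalty-with-memory DOvS loop (illustrative only).

    simulate(s, rng) returns one noisy (objective, constraint) pair;
    a solution is feasible when E[constraint] <= threshold.  Solutions
    are ranked by  mean objective + penalty * constraint violation,
    where a solution's penalty grows with how often it has looked
    infeasible so far (the "memory").  Each iteration then spends the
    extra budget on the current sample-best solution.
    """
    rng = np.random.default_rng(seed)
    obs = {s: [simulate(s, rng) for _ in range(n0)] for s in solutions}
    infeasible_seen = {s: 0 for s in solutions}      # memory of violations
    best = None
    for _ in range(iters):
        scores = {}
        for s in solutions:
            a = np.asarray(obs[s])
            violation = max(0.0, a[:, 1].mean() - threshold)
            if violation > 0:
                infeasible_seen[s] += 1
            penalty = np.log1p(infeasible_seen[s])   # hypothetical sequence
            scores[s] = a[:, 0].mean() + penalty * violation
        best = min(scores, key=scores.get)
        obs[best] += [simulate(best, rng) for _ in range(n0)]
    return best

# Toy problem: minimize E[s + noise] subject to E[(3 - s) + noise] <= 0,
# so the best feasible solution among {0, ..., 4} is s = 3.
cands = list(range(5))
sim = lambda s, rng: (s + rng.normal(), (3 - s) + rng.normal())
print(penalized_search(cands, sim, threshold=0.0))
```

Because the penalty multiplies the estimated violation, infeasible solutions can still win early on when noise is high; the growing memory term is what eventually pushes the search back into the feasible region, which is the property the article's convergence proof makes precise for PS2+.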

