noisy optimization
Recently Published Documents


TOTAL DOCUMENTS: 55 (FIVE YEARS: 6)

H-INDEX: 13 (FIVE YEARS: 0)

2021 ◽  
pp. 000370282110598
Author(s):  
Jie Ke ◽  
Chuang Gao ◽  
Ana A. Folgueiras-Amador ◽  
Katherine E Jolley ◽  
Oscar de Frutos ◽  
...  

A continuous-flow electrochemical synthesis platform has been developed to enable self-optimization of reaction conditions for organic electrochemical reactions, using attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and gas chromatography (GC) as online real-time monitoring techniques. We have overcome the challenge of using ATR FT-IR as the downstream analytical method when a large amount of hydrogen gas is produced at the counter electrode by designing two types of gas–liquid separators (GLS) for analysis of the product mixture flowing from the electrochemical reactor. In particular, we report an integrated GLS with an ATR FT-IR probe at the reactor outlet, giving a facile and low-cost solution for determining the concentrations of products in gas–liquid two-phase flow. This approach provides a reliable method for quantifying low-volatility analytes, which can be problematic to monitor by GC. Two electrochemical reactions, the methoxylation of 1-formylpyrrolidine and the oxidation of 3-bromobenzyl alcohol, were investigated to demonstrate that the optimal conditions can be located within the pre-defined multi-dimensional reaction parameter spaces without operator intervention by using the stable noisy optimization by branch and FIT (SNOBFIT) algorithm.
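The closed-loop self-optimization idea described above can be illustrated with a toy sketch: propose reaction conditions, take a noisy "online" measurement, and keep the best observed point. The objective, parameter bounds, and random-search sampler below are hypothetical stand-ins; SNOBFIT's branch-and-fit strategy is not reimplemented here.

```python
import random

def measured_yield(flow_rate, charge, noise=0.05):
    """Hypothetical noisy online readout: a made-up 'true yield' surface
    plus Gaussian measurement noise. A stand-in for the ATR FT-IR / GC
    analysis, not the authors' model."""
    true_yield = 1.0 - (flow_rate - 0.5) ** 2 - (charge - 2.0) ** 2 / 4.0
    return true_yield + random.gauss(0.0, noise)

def self_optimize(n_experiments=50, seed=0):
    """Toy self-optimization loop: random search over a pre-defined
    parameter space, tracking the best observed conditions. SNOBFIT
    would replace this sampler with its branch-and-fit strategy."""
    random.seed(seed)
    best = None
    for _ in range(n_experiments):
        flow_rate = random.uniform(0.1, 1.0)  # mL/min (illustrative bounds)
        charge = random.uniform(1.0, 3.0)     # F/mol (illustrative bounds)
        y = measured_yield(flow_rate, charge)
        if best is None or y > best[0]:
            best = (y, flow_rate, charge)
    return best

best_yield, best_flow, best_charge = self_optimize()
print(f"best observed yield {best_yield:.3f} "
      f"at flow {best_flow:.2f}, charge {best_charge:.2f}")
```

Because every measurement is noisy, the "best observed" point may be an optimistic outlier, which is precisely the difficulty that a noise-aware optimizer such as SNOBFIT is designed to handle.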


2021 ◽  
Author(s):  
◽  
Juan Rada-Vilela

<p>Particle Swarm Optimization (PSO) is a metaheuristic in which a swarm of particles explores the search space of an optimization problem to find good solutions. However, if the problem is subject to noise, the quality of the resulting solutions deteriorates significantly. The literature has attributed this deterioration to particles suffering from inaccurate memories and from the incorrect selection of their neighborhood best solutions. In both cases, the incorporation of noise mitigation mechanisms has improved the quality of the results, but the analyses beyond such improvements often lack empirical evidence supporting their claims in terms other than the quality of the results. Furthermore, there is no evidence showing the extent to which inaccurate memories and incorrect selection affect the particles in the swarm. Therefore, the performance of PSO on noisy optimization problems remains largely unexplored. The overall goal of this thesis is to study the effect of noise on PSO beyond the known deterioration of its results, in order to develop more efficient noise mitigation mechanisms. Based on how the noise mitigation mechanisms allocate function evaluations, we distinguish three groups of PSO algorithms: single-evaluation algorithms, which sacrifice the accuracy of the objective values in favour of performing more iterations; resampling-based algorithms, which sacrifice performing more iterations in favour of better estimates of the objective values; and hybrids, which merge methods from the previous two. With an empirical approach, we study and analyze the performance of existing and new PSO algorithms from each group on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. Throughout the search process, we compute a set of 16 population statistics that measure different characteristics of the swarms and provide useful information that we utilize to design better PSO algorithms.
Our study identifies and defines deception, blindness and disorientation as three conditions from which particles suffer in noisy optimization problems. The population statistics for different PSO algorithms reveal that particles often suffer from large proportions of deception, blindness and disorientation, and show that reducing these three conditions leads to better results. The sensitivity of PSO to noisy optimization problems is confirmed and highlights the importance of noise mitigation mechanisms. The population statistics for single-evaluation PSO algorithms show that the commonly used evaporation mechanism produces too much disorientation, leading to divergent behaviour and to the worst results within the group. Two better algorithms are designed: the first utilizes probabilistic updates to reduce disorientation, and the second computes a centroid solution as the neighborhood best solution to reduce deception. The population statistics for resampling-based PSO algorithms show that basic resampling still leads to large proportions of deception and blindness, and its results are the worst within the group. Two better algorithms are designed to reduce deception and blindness: the first provides better estimates of the personal best solutions, and the second provides even better estimates of a few solutions from which the neighborhood best solutions are selected. However, an existing PSO algorithm is the best within the group, as it strives to asymptotically minimize deception by sequentially reducing both blindness and disorientation. The population statistics for hybrid PSO algorithms show that they provide the best results thanks to a combined reduction of deception, blindness and disorientation. Amongst the hybrids, we find a promising algorithm whose simplicity, flexibility and quality of results question the importance of overly complex methods designed to minimize deception. 
Overall, our research presents a thorough study to design, evaluate and tune PSO algorithms to address optimization problems subject to noise.</p>
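The single-evaluation versus resampling trade-off described in the abstract can be sketched with a minimal global-best PSO on a noisy benchmark. The benchmark function, coefficients, and resampling scheme below are illustrative assumptions, not the thesis's algorithms.

```python
import random

def noisy_sphere(x, noise_sd=0.1):
    """Sphere function under multiplicative Gaussian noise (illustrative)."""
    f = sum(v * v for v in x)
    return f * (1.0 + random.gauss(0.0, noise_sd))

def pso(dim=5, swarm=20, iters=100, resamples=5, seed=1):
    """Minimal global-best PSO with basic resampling: each evaluation
    averages `resamples` noisy samples, spending evaluations on accuracy
    rather than iterations. A sketch, not the thesis's exact algorithms."""
    random.seed(seed)
    w, c1, c2 = 0.72, 1.49, 1.49  # common inertia/acceleration settings

    def estimate(x):
        return sum(noisy_sphere(x) for _ in range(resamples)) / resamples

    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [estimate(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = estimate(pos[i])
            if f < pbest_f[i]:  # memory update: noise can still deceive it
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = pso()
print(f"estimated best objective {best_f:.4f}")
```

Setting `resamples=1` turns this into a single-evaluation variant: more iterations for the same budget, but noisier memories, which is exactly the trade-off the thesis analyzes.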




2021 ◽  
Vol 11 (15) ◽  
pp. 6922
Author(s):  
Jeongmin Kim ◽  
Ellen J. Hong ◽  
Youngjee Yang ◽  
Kwang Ryel Ryu

In this paper, we claim that the operation schedule of automated stacking cranes (ASC) in the storage yard of automated container terminals can be built effectively and efficiently by using a crane dispatching policy, and we propose a noisy optimization algorithm named N-RTS that can derive such a policy efficiently. To select a job for an ASC, our dispatching policy uses a multi-criteria scoring function that calculates the score of each candidate job as a weighted sum of its evaluations on those criteria. As the calculated score depends on the respective weights of the criteria, and a different weight vector thus gives rise to a different best candidate, a weight vector can be deemed a policy. A good weight vector, or policy, can be found by a simulation-based search in which a candidate policy is evaluated through a computationally expensive simulation of applying the policy to some operation scenarios. We may simplify the simulation to save time, but at the cost of evaluation accuracy. N-RTS copes with this dilemma by maintaining a good balance between exploration and exploitation. Experimental results show that the policy derived by N-RTS outperforms other ASC scheduling methods. We also conducted additional experiments using some benchmark functions to validate the performance of N-RTS.
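The weight-vector-as-policy idea can be sketched directly: score each candidate job by a weighted sum of per-criterion evaluations and dispatch the highest-scoring one. The criteria, jobs, and weights below are hypothetical illustrations, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Job:
    """Candidate ASC job with per-criterion evaluations (illustrative criteria)."""
    name: str
    urgency: float    # e.g. closeness to vessel deadline, normalized to [0, 1]
    travel: float     # crane travel effort, normalized (lower is better)
    rehandles: float  # expected container rehandles, normalized (lower is better)

def score(job, weights):
    """Weighted-sum scoring: the weight vector *is* the dispatching policy.
    Benefit criteria add to the score; cost criteria subtract."""
    return (weights[0] * job.urgency
            - weights[1] * job.travel
            - weights[2] * job.rehandles)

def dispatch(jobs, weights):
    """Pick the highest-scoring candidate job for the idle crane."""
    return max(jobs, key=lambda j: score(j, weights))

jobs = [Job("load-A", 0.9, 0.4, 0.1),
        Job("shuffle-B", 0.2, 0.1, 0.6),
        Job("retrieve-C", 0.7, 0.8, 0.0)]
policy = (1.0, 0.5, 0.5)  # hypothetical weights, as found by a simulation-based search
print(dispatch(jobs, policy).name)  # → load-A
```

A different weight vector can reverse the ranking, which is why evaluating a candidate vector requires simulating whole operation scenarios, the expensive step that N-RTS's noisy optimization is designed to manage.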


Author(s):  
Florian Hase ◽  
Matteo Aldeghi ◽  
Riley Hickman ◽  
Loic Roch ◽  
Elena Liles ◽  
...  

Author(s):  
Zhenhua Li ◽  
Shuo Zhang ◽  
Xinye Cai ◽  
Qingfu Zhang ◽  
Xiaomin Zhu ◽  
...  
