Application of the global discrete-continuous optimization method with selective variables averaging to design of a fast NIR lens

2021 ◽  
Author(s):  
Alexander Terentyev ◽  
Eduard R. Muslimov ◽  
Nadezhda K. Pavlycheva

Medical image registration is of great value in clinical practice. From the traditional, time-consuming iterative optimization of a similarity measure, to faster supervised deep learning, to today's unsupervised learning, the continuous refinement of registration strategies has made them increasingly feasible for clinical use. This survey focuses mainly on unsupervised learning methods and introduces the latest solutions for different registration relationships. Inter-modality registration is the more challenging topic, and the application of unsupervised learning to inter-modality registration is the focus of this article. In addition, the survey proposes ideas and directions for future research.


Author(s):  
Johannes Palmer ◽  
Aaron Schartner ◽  
Andrey Danilov ◽  
Vincent Tse

Abstract Magnetic Flux Leakage (MFL) is a robust technology with high data coverage. Decades of continuous sizing improvement have led to industry-accepted sizing reliability, and the continuous optimization of sizing processes ensures accurate categorization of metal loss features. However, the selection of critical anomalies is not always optimal: anomalies are sometimes dug up too early or unnecessarily because the feature type in the field (the true metal loss shape) is incorrectly identified, which affects sizing and tolerance. Incorrect feature-type identification can also cause false under-calls. Today, complex empirical formulas together with multifaceted lookup tables fed by pull tests, synthetic data, dig verifications, machine learning, artificial intelligence and, not least, human expertise translate MFL signals into metal loss assessments with a high level of success. Nevertheless, two principal elements limit further MFL sizing optimization. The first is the empirical character of the signal interpretation. The second is the implicitly induced simplification of data and results. The reason this route has been followed for many years is simple: it is methodologically impossible to calculate the source geometry of the metal loss directly from the signals, and the sheer number of possibly relevant geometries makes simplification necessary and inevitable. A further methodological obstacle is the ambiguity of the signal, which reduces the target of metal loss sizing to the most probable solution; even under the best conditions, the most probable solution is not necessarily the correct one. This paper describes a novel, fundamentally different approach as an alternative to the common MFL-analysis approach described above.
A calculation process is presented that overcomes the empirical nature of traditional approaches by using a compute-intensive result-optimization method that avoids any simplification. The strategy for overcoming MFL ambiguity is also shown. Detailed blind-test examples, prepared together with the operator, demonstrate the enormous level of detail, repeatability and accuracy of this groundbreaking method, with the potential to reduce tool tolerance, increase sizing accuracy, increase growth-rate accuracy, and help optimize the dig program to target critical features with greater confidence.


2019 ◽  
Vol 2019 ◽  
pp. 1-23 ◽  
Author(s):  
Amir Shabani ◽  
Behrouz Asgarian ◽  
Saeed Asil Gharebaghi ◽  
Miguel A. Salido ◽  
Adriana Giret

In this paper, a new optimization algorithm called the search and rescue optimization algorithm (SAR) is proposed for solving single-objective continuous optimization problems. SAR is inspired by the explorations carried out by humans during search and rescue operations. The performance of SAR was evaluated on fifty-five optimization functions, including a set of classic benchmark functions and a set of modern CEC 2013 benchmark functions from the literature. The results were compared with those of twelve optimization algorithms, including well-known algorithms, recent variants of GA, DE, CMA-ES, and PSO, and recent metaheuristics. The Wilcoxon signed-rank test was used for some of the comparisons, and the convergence behavior of SAR was investigated. The statistical results indicate that SAR is highly competitive with the compared algorithms. To evaluate SAR on real-world optimization problems, it was also applied to three engineering design problems; the results reveal that SAR finds more accurate solutions with fewer function evaluations than the other existing algorithms. The proposed algorithm can therefore be considered an efficient optimization method for real-world problems.
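The abstract does not give SAR's update equations, so as a hedged illustration of the kind of single-objective continuous problem SAR targets, here is a minimal population-based search loop on the classic sphere benchmark. The move-toward-best update is a generic placeholder, not SAR's actual search-and-rescue operators:

```python
import random

def sphere(x):
    # classic benchmark function: global minimum 0 at the origin
    return sum(xi * xi for xi in x)

def population_search(f, dim=5, pop_size=20, iters=500, bound=5.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(iters):
        for i, x in enumerate(pop):
            # placeholder update: move toward the best-so-far with small noise
            cand = [xi + rng.random() * (bi - xi) + rng.gauss(0, 0.01)
                    for xi, bi in zip(x, best)]
            if f(cand) < f(x):  # greedy acceptance of improvements
                pop[i] = cand
        best = min(pop + [best], key=f)
    return best, f(best)

best_x, best_val = population_search(sphere)
```

Benchmarking a metaheuristic like SAR typically runs such a loop many times per function and reports statistics, which is where the Wilcoxon signed-rank test mentioned above comes in.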


2020 ◽  
Vol 39 (3) ◽  
pp. 3183-3193
Author(s):  
Jieya Li ◽  
Liming Yang

Classical principal component analysis (PCA) is not sparse, and its reliance on the L2-norm makes it prone to be adversely affected by outliers and noise. To address these problems, a sparse robust PCA framework is proposed that combines minimization of a zero-norm regularizer with maximization of an Lp-norm (0 < p ≤ 2) PCA criterion. Furthermore, we develop a continuous optimization method, a DC (difference of convex functions) programming algorithm (DCA), to solve the proposed problem. The resulting algorithm (called DC-LpZSPCA) converges linearly. In addition, by choosing different values of p, the model remains robust and is applicable to different data types. Numerical experiments are carried out on artificial data sets and the Yale face data set. The results show that the proposed method maintains good sparsity and resistance to outliers.
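The DC-LpZSPCA updates themselves are involved, but the DCA iteration the abstract relies on can be sketched on a one-variable toy problem (an illustrative assumption, not the paper's model): minimize f(x) = x⁴ − x² by writing it as the difference of the convex functions g(x) = x⁴ and h(x) = x², linearizing h at the current iterate, and minimizing the resulting convex model in closed form:

```python
def dca_step(x):
    # f = g - h with g(x) = x**4 and h(x) = x**2, both convex.
    # DCA minimizes g(x) - h'(x_k) * x, i.e. solves 4 x**3 = 2 x_k.
    return (x / 2.0) ** (1.0 / 3.0)

x = 1.0
for _ in range(50):
    x = dca_step(x)
# the iterates converge to the stationary point x = 1/sqrt(2) ~ 0.7071
```

Each step solves a convex subproblem exactly, which is what gives DCA schemes like DC-LpZSPCA their linear convergence guarantee.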


2016 ◽  
Vol 7 (4) ◽  
pp. 23-51 ◽  
Author(s):  
Mahamed G.H. Omran ◽  
Maurice Clerc

This paper proposes a new population-based simplex method for continuous function optimization. The proposed method, called Adaptive Population-based Simplex (APS), is inspired by the Low-Dimensional Simplex Evolution (LDSE) method. LDSE is a recent optimization method that uses the reflection and contraction steps of the Nelder-Mead simplex method. Like LDSE, APS uses a population from which different simplexes are selected. In addition, a local search is performed using a hyper-sphere generated around the best individual in a simplex. APS is a tuning-free approach that is easy to code and easy to understand. APS is compared with five state-of-the-art approaches on 23 functions, five of which are quasi-real-world problems. The experimental results show that APS generally performs better than the other methods on the test functions. In addition, a scalability study shows that APS can handle relatively high-dimensional problems well.
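Since APS and LDSE borrow the reflection and contraction moves of Nelder-Mead, a stripped-down sketch of just those two moves may help fix ideas. This is not APS itself; the starting simplex and iteration count are illustrative assumptions:

```python
def sphere(x):
    return sum(v * v for v in x)

def simplex_rc(f, simplex, iters=200):
    # simplified Nelder-Mead using only the reflection and contraction
    # moves that the LDSE/APS literature borrows from the full method
    n = len(simplex[0])
    for _ in range(iters):
        simplex.sort(key=f)
        worst = simplex[-1]
        # centroid of all vertices except the worst
        centroid = [sum(p[i] for p in simplex[:-1]) / (len(simplex) - 1)
                    for i in range(n)]
        reflected = [2 * c - w for c, w in zip(centroid, worst)]
        if f(reflected) < f(worst):
            simplex[-1] = reflected
        else:
            # contraction: pull the worst vertex halfway toward the centroid
            simplex[-1] = [(c + w) / 2 for c, w in zip(centroid, worst)]
    simplex.sort(key=f)
    return simplex[0]

start = [[1.0, 2.0], [2.0, 0.5], [-1.5, 1.0]]
best = simplex_rc(sphere, [p[:] for p in start])
```

APS layers a population of such simplexes plus a hyper-sphere local search on top of these two primitive moves.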


Author(s):  
Sung K. Koh ◽  
G. K. Ananthasuresh

The sequence of the 20 types of amino acid residues in the heteropolymer chain of a protein is believed to be the basis for the 3-D conformation (folded structure) that the protein assumes to serve its functions. We present a deterministic optimization method to design the sequence of a simplified model protein for a desired conformation. A design methodology developed for the topology optimization of compliant mechanisms is adapted here by converting the discrete combinatorial problem of protein sequence design into a continuous optimization problem. It builds upon our recent work, which applied a minimum-energy criterion in a deterministic approach to protein design using continuous models. This paper focuses on the energy gap criterion, which is argued to be one of the most important characteristics determining the stable folding of a protein chain. The concepts, methodology, and illustrative examples are presented using HP models of proteins, in which only two types of monomers (H: hydrophobic and P: polar) are considered instead of 20. The highlight of the method presented in this paper is a drastic reduction in computational cost.
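As a hedged sketch of the objective such a design problem works with, the HP-model contact energy of a given lattice conformation can be computed directly, and the binary H/P choice can be relaxed to a continuous weight w_i ∈ [0, 1] in the spirit of the paper's discrete-to-continuous conversion. The lattice, chain, and weights below are illustrative assumptions, not the paper's examples:

```python
def hp_energy(coords, sequence):
    # HP-model energy: -1 for each pair of H monomers that are lattice
    # neighbours (Manhattan distance 1) but not adjacent along the chain
    energy = 0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):  # j = i + 1 is a chain neighbour
            if sequence[i] == 'H' and sequence[j] == 'H':
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:
                    energy -= 1
    return energy

def relaxed_energy(coords, w):
    # continuous relaxation: replace the binary H/P choice by weights w_i,
    # making the design objective differentiable in w
    energy = 0.0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):
            dx = abs(coords[i][0] - coords[j][0])
            dy = abs(coords[i][1] - coords[j][1])
            if dx + dy == 1:
                energy -= w[i] * w[j]
    return energy

# a 4-residue chain folded around a unit square: residues 0 and 3 touch
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

At binary weights the relaxation reproduces the discrete energy, e.g. `relaxed_energy(square, [1, 0, 0, 1])` equals `hp_energy(square, "HPPH")`.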


2017 ◽  
Vol 29 (11) ◽  
pp. 3014-3039 ◽  
Author(s):  
Liming Yang ◽  
Zhuo Ren ◽  
Yidan Wang ◽  
Hongwei Dong

This work proposes a robust regression framework with a nonconvex loss function. Two regression formulations are presented based on the Laplace kernel-induced loss (LK-loss), and we show that the LK-loss is a close approximation to the zero-norm. However, the nonconvexity of the LK-loss makes it difficult to optimize. A continuous optimization method is therefore developed: the problems are formulated as DC (difference of convex functions) programs, and the corresponding DC algorithms (DCAs) converge linearly. The proposed algorithms are applied directly to determining the hardness of licorice seeds from near-infrared spectral data with noisy input. Experiments in eight spectral regions show that the proposed methods generalize better than traditional support vector regression (SVR), especially in high-frequency regions. Experiments on several benchmark data sets demonstrate that the proposed methods achieve better results than traditional regression methods on most of the data sets considered.
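Assuming the common form L(u) = 1 − exp(−|u|/σ) for the Laplace kernel-induced loss (an assumption; the paper's exact parameterization may differ), a few lines show how summing it over residuals approximates the zero-norm:

```python
import math

def lk_loss(u, sigma=0.1):
    # Laplace kernel-induced loss: 0 at u = 0, saturating at 1 for |u| >> sigma,
    # so the summed loss approximately counts the nonzero residuals
    return 1.0 - math.exp(-abs(u) / sigma)

residuals = [0.0, 3.0, 0.0, -2.5]
zero_norm = sum(1 for r in residuals if r != 0)   # exact count: 2
approx = sum(lk_loss(r) for r in residuals)       # close to 2
```

Because the loss is bounded, large residuals contribute at most 1 each, which is the source of the robustness to noisy inputs claimed above; its nonconvexity is what the DCA formulation handles.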


Energies ◽  
2020 ◽  
Vol 13 (13) ◽  
pp. 3337
Author(s):  
Ruiye Li ◽  
Peng Cheng ◽  
Yingyi Hong ◽  
Hai Lan ◽  
He Yin

Finite element models are widely used to accurately simulate the temperature distribution of electrical machines, and a simulation model can be quickly modified to reflect design changes. However, the long runtime of each simulation prevents the direct application of optimization algorithms. This research focuses on improving the efficiency with which the expensive finite element analysis is used in studying generator temperature distribution, and a novel surrogate-model-based optimization method is presented. First, a Taguchi orthogonal array samples the design decision space, relating a series of stator geometric parameters (inputs) to generator temperatures (outputs). A number of stator designs were generated and analyzed using a 3-D multi-physical-field collaborative finite element model. A suitable shallow neural network was then selected and fitted to the available data to obtain a continuous optimization objective function, whose accuracy was verified using randomly generated feasible geometric parameters. Finally, a multi-objective genetic algorithm was applied to this function to reduce the average and maximum temperatures of the machine simultaneously. When the resulting Pareto front was compared with the initial data, both temperatures showed a significant decrease.
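The pipeline in this abstract — expensive simulations sampled at a few design points, a cheap fitted model, then optimization of that model — can be sketched in miniature. Here a quadratic surrogate (fitted by Lagrange interpolation through three samples) stands in for the shallow neural network, and a closed-form vertex computation stands in for the genetic algorithm; `expensive_sim` is a hypothetical placeholder for the finite element run:

```python
def expensive_sim(x):
    # stand-in for a long-running finite element simulation
    return (x - 2.0) ** 2 + 1.0

def fit_quadratic(xs, ys):
    # exact quadratic a*x**2 + b*x + c through three samples
    # (Lagrange interpolation), serving as the cheap surrogate model
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    a = (y0 / ((x0 - x1) * (x0 - x2))
         + y1 / ((x1 - x0) * (x1 - x2))
         + y2 / ((x2 - x0) * (x2 - x1)))
    b = (-y0 * (x1 + x2) / ((x0 - x1) * (x0 - x2))
         - y1 * (x0 + x2) / ((x1 - x0) * (x1 - x2))
         - y2 * (x0 + x1) / ((x2 - x0) * (x2 - x1)))
    return a, b

xs = [0.0, 1.0, 4.0]
ys = [expensive_sim(x) for x in xs]       # only three expensive calls
a, b = fit_quadratic(xs, ys)
x_opt = -b / (2 * a)                      # optimize the surrogate, not the simulator
```

The design choice is the same as in the paper: all optimization effort is spent on the cheap surrogate, and the expensive simulator is queried only to build (and later verify) it.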

