Antenna Optimization Design Based on Deep Gaussian Process Model

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Xin-Yu Zhang ◽  
Yu-Bo Tian ◽  
Xie Zheng

When Gaussian process (GP) machine learning is used as a surrogate model in combination with a global optimization method for the rapid optimization design of electromagnetic problems, a large number of covariance calculations are required, so the computational cost scales with the cube of the number of samples and efficiency is low. To address this problem, this study constructs a deep GP (DGP) model by combining GP with the structural form of a convolutional neural network (CNN). In this network, a GP replaces the fully connected layer of the CNN: the convolutional and pooling layers of the CNN reduce the dimension of the input parameters, the GP predicts the output, and the particle swarm optimization (PSO) algorithm is used to optimize the network structure parameters. The proposed modeling method compresses the dimensionality of the problem to reduce the number of training samples required and effectively improves modeling efficiency while maintaining modeling accuracy. In our study, we used the proposed method to optimize the design of a multiband microstrip antenna (MSA) for mobile terminals and obtained good optimization results. The optimized antenna operates in the frequency ranges of 0.69–0.96 GHz and 1.7–2.76 GHz, covering the LTE 700, GSM 850, GSM 900, DCS 1800, PCS 1900, UMTS 2100, LTE 2300, and LTE 2500 bands. The results show that the proposed DGP network model can replace electromagnetic simulation software in the optimization process, reducing the time required for optimization while ensuring design accuracy.

2021 ◽  
Vol 5 (4 (113)) ◽  
pp. 45-54
Author(s):  
Alexander Nechaev ◽  
Vasily Meltsov ◽  
Dmitry Strabykin

Many advanced recommender models are implemented using matrix factorization algorithms. Experiments show that the quality of their performance depends significantly on the selected hyperparameters. An analysis of the effectiveness of various methods for this hyperparameter optimization problem was carried out. It showed that classical Bayesian optimization, which treats the model as a "black box", remains the standard solution. However, models based on matrix factorization have a number of characteristic features, and exploiting them makes it possible to modify the optimization process so as to reduce the time required to find the sought points without losing quality. A modification of the kernel of the Gaussian process used as a surrogate model for the loss function in Bayesian optimization is proposed. In the first iterations, the described modification increases the variance of the values predicted by the Gaussian process over a given region of the hyperparameter space. In some cases, this makes it possible to obtain more information about the true shape of the investigated loss function in less time. Experiments were carried out on well-known recommender-system datasets. The total optimization time with the modification was reduced by 16 % (263 seconds) at best and remained essentially unchanged at worst (less than a 1-second difference). The expected error of the recommender model did not change (the absolute difference in values is two orders of magnitude smaller than the error reduction achieved during optimization). Thus, the proposed modification helps to find a better set of hyperparameters in less time without loss of quality.
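The variance-boosting idea can be illustrated with a toy 1-D Gaussian process. The kernel, region bounds, and boost factor below are invented for the sketch; the point is only that scaling the prior variance over a chosen region raises the predicted variance there, which steers early acquisition toward that region.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def boosted_rbf(a, b, lo=0.6, hi=1.0, boost=4.0, ls=0.3):
    """RBF kernel with inflated prior variance over a chosen hyperparameter region."""
    s = lambda x: np.where((x >= lo) & (x <= hi), np.sqrt(boost), 1.0)
    return s(a)[:, None] * rbf(a, b, ls) * s(b)[None, :]

def posterior_var(Xtr, Xte, kern, noise=1e-6):
    K = kern(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = kern(Xte, Xtr)
    return np.diag(kern(Xte, Xte) - Ks @ np.linalg.solve(K, Ks.T))

Xtr = np.array([0.1, 0.2, 0.3])   # early observations, outside the boosted region
Xte = np.array([0.8])             # candidate point inside the boosted region
v_plain = posterior_var(Xtr, Xte, rbf)
v_boost = posterior_var(Xtr, Xte, boosted_rbf)
```

Because the scaling matrix is diagonal, the boosted kernel remains positive semidefinite, so it is still a valid GP covariance.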


Author(s):  
Lijian Shi ◽  
Fangping Tang ◽  
Rongsheng Xie ◽  
Lilong Qi ◽  
Zhengdong Yang

This paper studies the influence of cascade solidity and blade setting angle on the hydraulic performance of axial-flow pump blades. It combines numerical optimization technology with advanced CFD simulation, replacing the designer's experience with mathematical models that control the direction of the blade design. A platform for the optimization design of axial-flow pump blades is built, based on the multidisciplinary optimization software iSIGHT, in which parametric modeling, meshing, flow computation, and numerical optimization are combined into an automatic design loop. Using CFD simulation software for the disciplinary analysis improved the reliability and accuracy of the prediction model. Approximate geometric design parameters for the design conditions were first found through numerical simulation, and constrained numerical optimization was then carried out starting from these parameters. The optimized impeller efficiency improved by about 0.7% while satisfying the constraint conditions, showing that the optimization method for axial-flow blades based on the iSIGHT platform is effective and feasible. Moreover, the method can greatly shorten the design cycle and reduce design cost.
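A minimal stand-in for the constrained search at the end of such a workflow might look like the following, with invented analytic surrogates replacing the CFD responses (iSIGHT and real blade models are not involved): efficiency is maximized over cascade solidity and blade setting angle subject to a required design head.

```python
import numpy as np

def efficiency(solidity, angle):
    """Hypothetical smooth surrogate for impeller efficiency (stand-in for CFD)."""
    return 0.85 - 0.4 * (solidity - 0.9) ** 2 - 0.002 * (angle - 22.0) ** 2

def head(solidity, angle):
    """Hypothetical head surrogate used as the design constraint."""
    return 5.0 + 2.0 * solidity + 0.05 * angle

best = None
for s in np.linspace(0.6, 1.2, 61):          # cascade solidity
    for a in np.linspace(15.0, 30.0, 151):   # blade setting angle, degrees
        if head(s, a) >= 7.0:                # constraint: required design head
            e = efficiency(s, a)
            if best is None or e > best[0]:
                best = (e, s, a)
```

In the real workflow, each `efficiency` evaluation would be a CFD run launched by the platform, which is why the abstract emphasizes automating the parameterization-mesh-solve loop.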


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Jing Gao ◽  
Yubo Tian ◽  
Xie Zheng ◽  
Xuezhi Chen

For the optimal design of electromagnetic devices, obtaining training samples from full-wave electromagnetic simulation software such as HFSS, CST, and IE3D is the most time-consuming step. Traditional machine learning methods usually use only labeled samples or only unlabeled samples, but in practical problems labeled and unlabeled samples coexist, and the acquisition cost of labeled samples is relatively high. This paper proposes a semisupervised learning Gaussian process (GP), which exploits unlabeled samples to improve the accuracy of the GP model and reduce the number of labeled training samples required. The proposed GP model consists of two parts: initial training and self-training. In the initial training stage, a small number of labeled samples obtained by full-wave electromagnetic simulation are used to train the initial GP model. The trained GP model is then copied to a second GP model for the self-training stage, and the two GP models are updated by cross-training on different unlabeled samples. Using the same test samples for testing and updating, the model with the smaller error replaces the other. The self-training process is repeated until a predefined stopping criterion is met. Four benchmark functions and the resonant-frequency modeling problems of three different microstrip antennas are used to evaluate the effectiveness of the GP model. The results show that the proposed GP model fits the benchmark functions well. For the microstrip antenna resonant-frequency modeling problems, with the same labeled samples, its predictive ability is better than that of the traditional supervised GP model.
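A compact sketch of one cross-training round, with a toy 1-D function standing in for the electromagnetic simulator; the kernel, sample sizes, and single-round stopping rule are simplified assumptions rather than the paper's exact procedure.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_mean(Xtr, ytr, Xte, noise=1e-4):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(1)
f = np.sin                                    # toy stand-in for the EM simulator
Xl = rng.uniform(0, 6, 15); yl = f(Xl)        # few "expensive" labeled samples
Xu = rng.uniform(0, 6, 40)                    # cheap unlabeled samples
Xt = np.linspace(0, 6, 50); yt = f(Xt)        # held-out test samples

# One cross-training round: the initial model is copied, and each copy is
# grown with pseudo-labels for a different half of the unlabeled pool
XA = np.concatenate([Xl, Xu[:20]]); yA = np.concatenate([yl, gp_mean(Xl, yl, Xu[:20])])
XB = np.concatenate([Xl, Xu[20:]]); yB = np.concatenate([yl, gp_mean(Xl, yl, Xu[20:])])
errA = np.abs(gp_mean(XA, yA, Xt) - yt).mean()
errB = np.abs(gp_mean(XB, yB, Xt) - yt).mean()
XA, yA = (XA, yA) if errA <= errB else (XB, yB)  # smaller error replaces the other
err = min(errA, errB)
```

In the paper, this round would repeat until the stopping criterion is met; here a single round is enough to show the mechanics.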


2011 ◽  
Vol 55-57 ◽  
pp. 1619-1624
Author(s):  
Ming Yi Zhu ◽  
Zhi Hua Cui ◽  
Qing Liu

Based on the parameters and design requirements of the original LJ276M gasoline engine, Helmholtz theory is used to calculate the resonance frequency, intake manifold length and diameter, and other structural parameters, allowing rapid design of a new resonance intake system. The simulation software GT-Power is then applied to build a working-process model of the LJ276M electronically controlled gasoline engine, and the final structural parameters of the intake system are determined through rapid prototyping and test verification. Compared with the original engine, the dynamic performance is improved significantly by the optimized design of the intake system.
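The Helmholtz step is just the classical resonator formula, f = (c / 2π) √(A / (V · L_eff)), where A and L_eff are the runner cross-section and effective length and V is the resonator volume. The dimensions below are illustrative, not the LJ276M's actual geometry.

```python
import math

def helmholtz_frequency(c, area, length, volume, end_corr=0.0):
    """Resonance frequency f = (c / 2*pi) * sqrt(A / (V * L_eff)) of a tuned intake."""
    l_eff = length + end_corr                # end correction lengthens the neck
    return (c / (2.0 * math.pi)) * math.sqrt(area / (volume * l_eff))

# Illustrative runner geometry: 340 m/s sound speed, 7 cm^2 pipe, 30 cm runner,
# 0.3 L resonator volume (invented numbers, not the engine's real dimensions)
f = helmholtz_frequency(c=340.0, area=7.0e-4, length=0.30, volume=3.0e-4)
```

Sweeping `length` and `area` against the target engine-speed range is how the manifold dimensions would be chosen before the GT-Power model refines them.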


2012 ◽  
Vol 479-481 ◽  
pp. 1745-1749
Author(s):  
Xing Li ◽  
Ya Zhou Chen

The design of a loader working device involves multiple variables and multiple objectives and is essentially a nonlinear constrained optimization problem, so designing it with traditional optimization methods is inefficient. A new method is proposed that combines sensitivity analysis with genetic algorithms to reduce the number of design variables and improve optimization efficiency. An optimization mathematical model is established. The key design variables that have the greatest impact on the loader digging force are identified by sensitivity analysis and then passed to the genetic algorithm for optimization. With this method, the loader digging force was increased by 5.9 percent while still meeting the overall performance requirements of the loader.
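The two-stage idea — rank the design variables by sensitivity, then optimize only the influential ones — can be sketched with a hypothetical response surface; a simple grid search stands in for the genetic algorithm here, and all variables and coefficients are invented.

```python
import numpy as np

def digging_force(x):
    """Hypothetical response surface over three linkage design variables."""
    return 100.0 + 30.0 * x[0] - 8.0 * (x[0] - 1.0) ** 2 + 2.0 * x[1] - 0.1 * x[2]

def sensitivities(f, x0, h=1e-4):
    """One-at-a-time central-difference sensitivities of f at x0."""
    g = np.zeros_like(x0)
    for i in range(len(x0)):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2.0 * h)
    return np.abs(g)

x0 = np.zeros(3)
s = sensitivities(digging_force, x0)
key = int(np.argmax(s))                     # most influential design variable
# Search only over the key variable (a GA would take this reduced role here)
grid = np.linspace(-3.0, 3.0, 601)
best = max(grid, key=lambda v: digging_force(np.array([v, 0.0, 0.0])))
```

Cutting the search to the high-sensitivity variables is what shrinks the GA's search space and improves its efficiency.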


2018 ◽  
Author(s):  
Caitlin C. Bannan ◽  
David Mobley ◽  
A. Geoff Skillman

<div>A variety of fields would benefit from accurate pK<sub>a</sub> predictions, especially drug design, due to the effect a change in ionization state can have on a molecule's physicochemical properties.</div><div>Participants in the recent SAMPL6 blind challenge were asked to submit predictions for microscopic and macroscopic pK<sub>a</sub>s of 24 drug-like small molecules.</div><div>We recently built a general model for predicting pK<sub>a</sub>s using Gaussian process regression trained on physical and chemical features of each ionizable group.</div><div>Our pipeline takes a molecular graph and uses the OpenEye Toolkits to calculate features describing the removal of a proton.</div><div>These features are fed into a Scikit-learn Gaussian process to predict microscopic pK<sub>a</sub>s, which are then used to analytically determine macroscopic pK<sub>a</sub>s.</div><div>Our Gaussian process is trained on a set of 2,700 macroscopic pK<sub>a</sub>s from monoprotic and select diprotic molecules.</div><div>Here, we share our results for microscopic and macroscopic predictions in the SAMPL6 challenge.</div><div>Overall, we ranked in the middle of the pack compared to other participants, but our fairly good agreement with experiment is still promising, considering that the challenge molecules are chemically diverse and often polyprotic while our training set is predominantly monoprotic.</div><div>Of particular importance to us when building this model was to include an uncertainty estimate, based on the chemistry of the molecule, that would reflect the likely accuracy of our prediction.
</div><div>Our model reports large uncertainties for the molecules that appear to have chemistry outside our domain of applicability, along with good agreement in quantile-quantile plots, indicating it can predict its own accuracy.</div><div>The challenge highlighted a variety of means to improve our model, including adding more polyprotic molecules to our training set and more carefully considering what functional groups we do or do not identify as ionizable. </div>
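The behaviour described above — reported uncertainty growing for chemistry outside the training domain — falls out of GP regression directly, as a toy 1-D version shows. The single feature and targets below are invented, not the paper's OpenEye descriptors.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_mean_std(Xtr, ytr, Xte, noise=1e-6):
    """GP posterior mean and standard deviation with a unit-variance RBF prior."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy 1-D "chemical feature" vs pKa-like target (invented calibration data)
Xtr = np.array([2.0, 3.0, 4.0, 5.0])
ytr = np.array([4.5, 5.0, 7.1, 9.0])
# Query one point inside the training domain and one far outside it
mean, std = gp_mean_std(Xtr, ytr, np.array([3.5, 12.0]))
```

The query at 12.0 lies far from any training feature, so its posterior standard deviation reverts to the prior — the model flags its own unreliability, which is the diagnostic the authors checked with quantile-quantile plots.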


Open Physics ◽  
2019 ◽  
Vol 17 (1) ◽  
pp. 927-934
Author(s):  
Tao Song ◽  
Chao Liu ◽  
Hengxuan Zhu ◽  
Min Zeng ◽  
Jin Wang

Abstract Normal operation of gas turbines is affected by deposition on turbine blades of particles mixed in the fuel, and it is difficult to monitor the mass of particle deposition on the wall surface in real time. With the development of electronic technology, antennas made of printed circuit board (PCB) have been widely used in many industrial fields. A microstrip antenna is proposed here for the first time for monitoring particle deposition, in order to analyse the deposition law of particles accumulated on the wall. The simulation software Computer Simulation Technology Microwave Studio (CST MWS) 2015 is used for the optimization design of the PCB-substrate antenna. It is found that the S11 of a Vivaldi antenna with an arc-gradient groove increases monotonically with the dielectric-layer thickness, and the antenna is highly sensitive to this thickness. Moreover, a cold-state test is carried out using atomized wax to simulate the deposition of pollutants. A fourth-order polynomial relationship is found between the capacitance and the deposited mass. These results provide an important reference for detecting the mass of particle deposition on the wall, and the method is suitable for other related engineering fields.
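The reported capacitance-mass relationship is a fourth-order polynomial, which would be fitted from calibration data along these lines. The masses, capacitances, and coefficients below are synthetic, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration: capacitance (pF) measured at known deposited masses (g)
mass = np.linspace(0.0, 5.0, 25)
cap = 12.0 + 1.5 * mass - 0.30 * mass**2 + 0.040 * mass**3 - 0.0020 * mass**4

# Fit a fourth-order polynomial, as in the reported capacitance-mass relationship
coeffs = np.polyfit(mass, cap, deg=4)
cap_hat = np.polyval(coeffs, mass)
```

Once calibrated, the fitted polynomial can be inverted numerically to estimate the deposited mass from a live capacitance reading.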


Author(s):  
Achim Dörre

Abstract We study a selective sampling scheme in which survival data are observed during a data collection period if and only if a specific failure event is experienced. Individual units belong to one of a finite number of subpopulations, which may exhibit different survival behaviour, and thus cause heterogeneity. Based on a Poisson process model for individual emergence of population units, we derive a semiparametric likelihood model, in which the birth distribution is modeled nonparametrically and the lifetime distributions parametrically, and define maximum likelihood estimators. We propose a Newton–Raphson-type optimization method to address numerical challenges caused by the high-dimensional parameter space. The finite-sample properties and computational performance of the proposed algorithms are assessed in a simulation study. Personal insolvencies are studied as a special case of double truncation and we fit the semiparametric model to a medium-sized dataset to estimate the mean age at insolvency and the birth distribution of the underlying population.
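As a much-simplified version of the estimation problem, a Newton-Raphson iteration on the score of an exponential lifetime observed only when failure occurs within the collection window [0, T] looks like this (one parameter instead of the paper's high-dimensional semiparametric model, and the numbers are simulated):

```python
import math
import random

def score(lam, times, T):
    """Derivative of the log-likelihood for failures observed only within [0, T]."""
    n = len(times)
    return (n / lam - sum(times)
            - n * T * math.exp(-lam * T) / (1.0 - math.exp(-lam * T)))

def newton_mle(times, T, lam=1.0, tol=1e-10):
    """Newton-Raphson on the score, with a numerical second derivative."""
    for _ in range(100):
        h = 1e-6
        d = (score(lam + h, times, T) - score(lam - h, times, T)) / (2.0 * h)
        step = score(lam, times, T) / d
        lam -= step
        if abs(step) < tol:
            break
    return lam

random.seed(42)
true_lam, T = 1.5, 2.0
times = []
while len(times) < 2000:                     # keep only failures inside the window
    t = random.expovariate(true_lam)
    if t <= T:
        times.append(t)
lam_hat = newton_mle(times, T)
```

The conditioning term in the score is what distinguishes this from ordinary exponential MLE; the paper's method plays the same role for the far larger parameter vector of the semiparametric model.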


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4392
Author(s):  
Jia Zhou ◽  
Hany Abdel-Khalik ◽  
Paul Talbot ◽  
Cristian Rabiti

This manuscript develops a workflow, driven by data analytics algorithms, to support the optimization of the economic performance of an Integrated Energy System. The goal is to determine the optimum mix of capacities from a set of different energy producers (e.g., nuclear, gas, wind and solar). A stochastic-based optimizer is employed, based on Gaussian Process Modeling, which requires numerous samples for its training. Each sample represents a time series describing the demand, load, or other operational and economic profiles for various types of energy producers. These samples are synthetically generated using a reduced order modeling algorithm that reads a limited set of historical data, such as demand and load data from past years. Numerous data analysis methods are employed to construct the reduced order models, including, for example, the Auto Regressive Moving Average, Fourier series decomposition, and the peak detection algorithm. All these algorithms are designed to detrend the data and extract features that can be employed to generate synthetic time histories that preserve the statistical properties of the original limited historical data. The optimization cost function is based on an economic model that assesses the effective cost of energy based on two figures of merit: the specific cash flow stream for each energy producer and the total Net Present Value. An initial guess for the optimal capacities is obtained using the screening curve method. The results of the Gaussian Process model-based optimization are assessed using an exhaustive Monte Carlo search, and the comparison indicates that the optimization results are reasonable. The workflow has been implemented inside the Idaho National Laboratory's Risk Analysis and Virtual Environment (RAVEN) framework.
The main contributions of this study address several challenges in current methods for optimizing energy portfolios in an IES: first, the feasibility of generating synthetic time series for periodic peak data; second, the computational burden of conventional stochastic optimization of the energy portfolio, associated with the need for repeated executions of system models; and third, the inadequacy of previous studies in comparing the impact of economic parameters. The proposed workflow can provide a scientifically defensible strategy to support decision-making in the electricity market and to help energy distributors develop a better understanding of the performance of integrated energy systems.
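One of the detrending ideas mentioned — Fourier decomposition of historical data followed by residual resampling to generate statistically similar synthetic series — can be sketched as follows, with a synthetic "historical" demand series in place of real market data.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(24 * 365)                          # one year of hourly demand
history = (50.0 + 10.0 * np.sin(2 * np.pi * t / 24)        # daily cycle
           + 5.0 * np.sin(2 * np.pi * t / (24 * 365))      # seasonal trend
           + rng.normal(0.0, 2.0, t.size))                 # stochastic load

# Keep only the dominant Fourier modes as the deterministic trend,
# then bootstrap the residuals to synthesize a new, statistically similar series
F = np.fft.rfft(history)
keep = np.argsort(np.abs(F))[-3:]                # DC, seasonal, and daily peaks
mask = np.zeros(F.size)
mask[keep] = 1.0
trend = np.fft.irfft(F * mask, n=t.size)
resid = history - trend
synthetic = trend + rng.choice(resid, size=t.size, replace=True)
```

Each such synthetic history can serve as one training sample for the Gaussian Process optimizer, which is how a limited set of historical years is stretched into the many samples the surrogate needs.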

