Data centric nanocomposites design via mixed-variable Bayesian optimization

2020 ◽  
Vol 5 (8) ◽  
pp. 1376-1390
Author(s):  
Akshay Iyer ◽  
Yichi Zhang ◽  
Aditya Prasad ◽  
Praveen Gupta ◽  
Siyu Tao ◽  
...  

Integrating experimental data with computational methods enables multicriteria design of nanocomposites using quantitative and qualitative design variables.

Symmetry ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 60
Author(s):  
Md Arifuzzaman ◽  
Muhammad Aniq Gul ◽  
Kaffayatullah Khan ◽  
S. M. Zakir Hossain

Several environmental factors, such as temperature differentials, moisture, and oxidation, affect the service life of modified asphalt by influencing its desired adhesive properties. Knowledge of the properties of asphalt adhesives can help provide a more resilient and durable asphalt surface. In this study, a hybrid of a Bayesian optimization algorithm and a support vector regression approach is recommended to predict the adhesion force of asphalt. The effects of three important variables, viz. condition (fresh, wet, and aged), binder type (base, 4% SB, 5% SB, 4% SBS, and 5% SBS), and carbon nanotube dose (0.5%, 1.0%, and 1.5%), on adhesive force are taken into consideration. Real-life experimental data (405 specimens) are used for model development. Using atomic force microscopy, the nanoscale adhesive strength of the test specimens is determined according to the functional groups on the asphalt. The model predictions overlap with the experimental data with a high R2 of 90.5%, and the relative deviations are scattered around the zero line. Moreover, the mean, median, and standard deviation of the experimental and predicted values are very close. In addition, the mean absolute error, root mean square error, and fractional bias values were found to be low, indicating the high performance of the developed model.
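The study's hybrid BO–SVR pipeline is not published as code, but the Bayesian-optimization half can be sketched in plain numpy. Everything below is a stand-in: `objective` mimics a cross-validation error curve over one normalized hyperparameter, and a Gaussian-process surrogate with an expected-improvement acquisition decides where to evaluate next.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.3):
    # squared-exponential kernel between two 1-D point sets
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and standard deviation at query points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for minimization: improvement relative to the best value seen
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * Phi + sigma * phi

def objective(c):
    # hypothetical validation-error curve over one scaled hyperparameter
    return (c - 0.6) ** 2 + 0.05 * np.sin(8 * c)

grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.9])          # two initial evaluations
y = objective(X)
for _ in range(10):               # sequential BO iterations
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best = X[np.argmin(y)]            # recommended hyperparameter value
```

With only twelve evaluations of the stand-in error curve, the loop homes in on the region of its minimum (near 0.58), which is the efficiency argument for BO over grid search when each evaluation is an expensive model fit.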


2020 ◽  
Vol 21 (20) ◽  
pp. 7702 ◽  
Author(s):  
Sofya I. Scherbinina ◽  
Philip V. Toukach

Analysis and systematization of accumulated data on carbohydrate structural diversity is a subject of great interest for structural glycobiology. Although it is a challenging task, the development of computational methods for efficient treatment and management of the spatial (3D) structural features of carbohydrates is breaking new ground in modern glycoscience. This review is dedicated to the approaches of chemo- and glyco-informatics towards 3D structural data generation, deposition and processing for carbohydrates and their derivatives. Databases, molecular modeling and experimental data validation services, and structure visualization facilities developed over the last five years are reviewed.


Author(s):  
David Forbes ◽  
Gary Page ◽  
Martin Passmore ◽  
Adrian Gaylard

This study is an evaluation of computational methods in reproducing experimental data for a generic sports utility vehicle (SUV) geometry and an assessment of the influence of fixed and rotating wheels for this geometry. Initially, comparisons are made in the wake structure and base pressures between several CFD codes and experimental data. It was shown that steady-state RANS methods are unsuitable for this geometry due to a large-scale unsteadiness in the wake caused by separation at the sharp trailing edge and rear wheel wake interactions. Unsteady RANS (URANS) offered no improvement in wake prediction despite a significant increase in computational cost. The detached-eddy simulation (DES) and Lattice–Boltzmann methods showed the best agreement with the experimental results in both the wake structure and base pressure, with LBM running in approximately a fifth of the time required for DES. The study then continues by analysing the influence of rotating wheels and a moving ground plane over a fixed wheel and ground plane arrangement. The introduction of wheel rotation and a moving ground was shown to increase the base pressure and reduce the drag acting on the vehicle when compared to the fixed case. However, when compared to the experimental standoff case, variations in drag and lift coefficients were minimal but misleading, as significant variations in the surface pressures were present.


Author(s):  
Xiaolin Li ◽  
Zijiang Yang ◽  
L. Catherine Brinson ◽  
Alok Choudhary ◽  
Ankit Agrawal ◽  
...  

In Computational Materials Design (CMD), it is well recognized that identifying key microstructure characteristics is crucial for determining material design variables. However, existing microstructure characterization and reconstruction (MCR) techniques have limitations when applied to materials design. Some MCR approaches are not applicable to material microstructural design because no parameters are available to serve as design variables, while others introduce significant information loss in microstructure representation and/or dimensionality reduction. In this work, we present a deep adversarial learning methodology that overcomes the limitations of existing MCR techniques. In the proposed methodology, generative adversarial networks (GAN) are trained to learn the mapping between latent variables and microstructures. Thereafter, the low-dimensional latent variables serve as design variables, and a Bayesian optimization framework is applied to obtain microstructures with the desired material property. Due to the special design of the network architecture, the proposed methodology is able to identify the latent (design) variables with the desired dimensionality, as well as capture complex material microstructural characteristics. The validity of the proposed methodology is tested numerically on a synthetic microstructure dataset, and its effectiveness for materials design is evaluated through a case study of optimizing optical performance for energy absorption. Additional features, such as scalability and transferability, are also demonstrated in this work. In essence, the proposed methodology provides an end-to-end solution for microstructural design, in which the GAN reduces information loss and preserves more microstructural characteristics, and the GP-Hedge optimization improves the efficiency of design exploration.
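The central idea of the methodology is that once a generator maps low-dimensional latent variables to microstructures, design reduces to a search over the latent space. The toy sketch below illustrates only that reduction: the "generator" is a hypothetical thresholded smooth field rather than a trained GAN, the property is a simple volume fraction, and the search is plain random sampling rather than the paper's GP-Hedge Bayesian optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # hypothetical stand-in for a trained GAN generator: maps a 2-D
    # latent vector to a 16x16 binary microstructure via thresholding
    xs = np.linspace(0.0, 1.0, 16)
    field = np.sin(4 * z[0] * xs)[:, None] + np.cos(4 * z[1] * xs)[None, :]
    return (field > 0).astype(float)

def property_loss(m, target=0.5):
    # stand-in property objective: squared error of the volume fraction
    return (m.mean() - target) ** 2

# design loop over the 2-D latent space instead of the 256-D pixel space
best_z, best_loss = None, np.inf
for z in rng.uniform(-1.0, 1.0, size=(200, 2)):
    loss = property_loss(generator(z))
    if loss < best_loss:
        best_z, best_loss = z, loss
```

Swapping the random sampler for a surrogate-based optimizer, as the paper does, keeps the same two-variable interface while spending far fewer generator evaluations.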


2020 ◽  
Author(s):  
Kobi Felton ◽  
Daniel Wigh ◽  
Alexei Lapkin

Recent work has shown how Bayesian optimization (BO) is an efficient method for optimizing expensive experiments such as chemical reactions. However, in previous studies, each optimization has been started from scratch with no information about previous or similar chemical optimization studies. Therefore, BO can still require more iterations than many experimental budgets provide. Here, we overcome this challenge using multi-task BO. Through in silico benchmarking studies, we show how past experimental data can be leveraged to improve the quality and speed of reaction optimization.
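The transfer intuition behind multi-task BO can be shown with a deliberately simple stand-in: starting the new campaign from the optimum of a related past task rather than from scratch. The paper itself uses multi-task Gaussian-process surrogates; the hypothetical yield curves and the plain hill-climb below only illustrate why sharing information across related reactions saves evaluations.

```python
def yield_a(t):
    # past reaction (hypothetical): yield peaks at temperature t = 0.60
    return -(t - 0.60) ** 2

def yield_b(t):
    # new, related reaction (hypothetical): yield peaks at t = 0.65
    return -(t - 0.65) ** 2

def hill_climb(f, start, step=0.05, tol=1e-3):
    # simple derivative-free maximizer; returns optimum and call count
    x, calls = start, 0
    while step > tol:
        candidates = [x - step, x, x + step]
        vals = [f(c) for c in candidates]
        calls += len(candidates)
        best = candidates[vals.index(max(vals))]
        if best == x:
            step /= 2          # refine once no neighbour improves
        else:
            x = best
    return x, calls

x_cold, calls_cold = hill_climb(yield_b, start=0.0)    # from scratch
x_warm, calls_warm = hill_climb(yield_b, start=0.60)   # past optimum
```

Both runs find the new optimum, but warm-starting from the related task's best temperature costs a fraction of the function evaluations, which is the budget argument the abstract makes.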


Author(s):  
Paolo Marcatili ◽  
Anna Tramontano

This chapter provides an overview of current computational methods for PPI network cleansing. The authors first present the issue of identifying reliable PPIs from noisy and incomplete experimental data. Next, they address what results are to be expected from the different experimental studies, what can be defined as a true interaction, which kinds of data should be integrated when assigning reliability levels to PPIs, and which gold standard should be used in training and testing PPI filtering methods. Finally, Marcatili and Tramontano describe the state of the art in the field, presenting the different classes of algorithms and comparing their results. The aim of the chapter is to guide the reader in the choice of the most convenient methods, experiments and integrative data, and to point out the most common biases and errors, in order to obtain a portrait of PINs that is not only reliable but also able to correctly retrieve the biological information contained in such data.


Molecules ◽  
2020 ◽  
Vol 25 (20) ◽  
pp. 4783
Author(s):  
Reinier Cárdenas ◽  
Javier Martínez-Seoane ◽  
Carlos Amero

Experimental methods are indispensable for the study of the function of biological macromolecules, not just as static structures, but as dynamic systems that change conformation, bind partners, perform reactions, and respond to different stimuli. However, providing a detailed structural interpretation of the results is often a very challenging task. While experimental and computational methods are often considered as two different and separate approaches, the power and utility of combining both is undeniable. The integration of experimental data with computational techniques can assist and enrich the interpretation, providing new detailed molecular understanding of the systems. Here, we briefly describe the basic principles of how experimental data can be combined with computational methods to obtain insights into the molecular mechanism and expand the interpretation through the generation of detailed models.


Author(s):  
Sean M. McGuffie ◽  
Mike A. Porter ◽  
Dennis H. Martens

During the scale-up design of a slurry bubble column reactor from a pilot demonstration facility to a production reactor, the design team used computational fluid dynamics (CFD) as a tool to quantify design variables, such as gas holdup and liquid velocities/structural pressures within the reactor. At the time of the analysis, all available physics models for simulating the multi-phase flow had significant limitations that would require "tuning" of the CFD input parameters to ensure confidence in the results. The authors initially conducted a literature search to find data that could be used to calibrate the model. While a wide variety of literature is available, none provided the exact data required for model calibration. For this reason, the authors constructed a test column and performed experiments to derive data for tuning the CFD models. Statistical analysis of the experimental data provided distributions on the input parameters of interest. CFD studies were then used to tune the CFD input parameters to match the experimental data. A correlation was developed, tested and verified. This correlation was then used to provide confidence in the results of the design analysis performed on the scaled-up reactor.
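The calibration step described above can be reduced to a root-finding problem: adjust a model input until the predicted quantity matches its measured value. The sketch below uses a hypothetical monotone closure and invented numbers in place of the authors' CFD model and test-column data; only the tuning pattern is illustrated.

```python
def predicted_holdup(drag_coeff, superficial_velocity=0.2):
    # hypothetical monotone closure: higher drag coefficient -> more
    # gas holdup (a stand-in for an actual CFD response, not a real model)
    return superficial_velocity * drag_coeff / (1.0 + drag_coeff)

measured_holdup = 0.12   # assumed experimental target from the test column

# bisection on the monotone model response to find the calibrated input
lo, hi = 0.1, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if predicted_holdup(mid) < measured_holdup:
        lo = mid
    else:
        hi = mid
calibrated = 0.5 * (lo + hi)
```

In practice each "model evaluation" is a full CFD run, so the experimental distributions the authors measured are what make a targeted search like this affordable.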


Author(s):  
Conner Sharpe ◽  
Carolyn Conner Seepersad ◽  
Seth Watts ◽  
Dan Tortorelli

Advances in additive manufacturing processes have made it possible to build mechanical metamaterials with bulk properties that exceed those of naturally occurring materials. One class of these metamaterials is structural lattices that can achieve high stiffness-to-weight ratios. Recent work on geometric projection approaches has introduced the possibility of optimizing these architected lattice designs in a drastically reduced parameter space. The reduced number of design variables enables the application of a new class of methods for exploring the design space. This work investigates the use of Bayesian optimization, a technique for global optimization of expensive non-convex objective functions through surrogate modeling. We utilize formulations for implementing probabilistic constraints in Bayesian optimization to aid convergence in this highly constrained engineering problem, and demonstrate results with a variety of stiff lightweight lattice designs.
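A common formulation for probabilistic constraints in Bayesian optimization, and plausibly the kind used here, weights the expected improvement by the probability of feasibility under the constraint surrogate. The sketch below uses hand-written closed-form curves as stand-ins for the GP posteriors (they are hypothetical, not fitted), purely to show how the weighting redirects the next sample away from an infeasible optimum.

```python
import numpy as np
from math import erf

def Phi(z):
    # standard normal CDF
    return 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))

x = np.linspace(0.0, 1.0, 101)

# hypothetical surrogate posteriors (stand-ins for GP predictions):
mu_f, sd_f = (x - 0.8) ** 2, 0.05 + 0.1 * x      # objective f, to minimize
mu_g, sd_g = x - 0.6, np.full_like(x, 0.05)      # constraint g(x) <= 0

best = 0.1                                       # assumed best feasible f seen
z = (best - mu_f) / sd_f
ei = (best - mu_f) * Phi(z) + sd_f * np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
pof = Phi(-mu_g / sd_g)       # probability that g(x) <= 0 is satisfied
acq = ei * pof                # constrained acquisition function

x_next = x[np.argmax(acq)]    # next design to evaluate
```

Unweighted EI would sample near the objective minimum at x = 0.8, where the constraint is almost certainly violated; multiplying by the probability of feasibility moves the pick back into the likely-feasible region below x = 0.6.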


2004 ◽  
Vol 108 (1090) ◽  
pp. 611-620 ◽  
Author(s):  
R. P. Clayton ◽  
R. F. Martinez-Botas

Abstract
Direct optimisation techniques using different methods are presented and compared for the solution of two common flows: a two-dimensional diffuser and a drag minimisation problem for a fixed-area body. The methods studied are a truncated Newton algorithm (gradient method), a simplex approach (direct search method) and a genetic algorithm (stochastic method). The diffuser problem has a known solution supported by experimental data; it has one design performance measure (the pressure coefficient) and two design variables. The fixed-area body also has one performance measure (the drag coefficient), but this time there are four design variables; no experimental data are available, and this computation is performed to assess the speed/progression of the solution.
In all cases the direct search approach (simplex method) required a significantly smaller number of evaluations than the genetic algorithm method. The simplest approach, the gradient (Newton) method, performed on par with the simplex approach for the diffuser problem, but it was unable to provide a solution to the four-variable fixed-area body drag minimisation problem. The level of robustness obtained by the use of a genetic algorithm is in principle superior to that of the other methods, but a large price in terms of evaluations has to be paid.
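The trade-off the paper measures, gradient methods converging in very few steps when derivatives are usable versus derivative-free direct searches paying in function evaluations, can be seen on a toy 1-D convex function. This is purely illustrative (the paper's objectives are CFD-based), and the golden-section search stands in for a generic direct search rather than the paper's simplex method.

```python
def f(x):
    # toy convex objective; its minimum is at x = 1.75
    return (x - 2.0) ** 2 + 0.5 * x

def newton(x0, iters=20):
    # gradient method with analytic derivatives: f' = 2(x-2) + 0.5, f'' = 2
    x = x0
    for _ in range(iters):
        g = 2.0 * (x - 2.0) + 0.5
        x -= g / 2.0
        if abs(g) < 1e-10:
            break
    return x

def golden_section(a, b, tol=1e-6):
    # derivative-free direct search by interval reduction; counts f-calls
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(c), f(d)
    evals = 2
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
        evals += 1
    return (a + b) / 2, evals

x_newton = newton(0.0)                      # exact in one step (quadratic f)
x_direct, n_evals = golden_section(0.0, 4.0)  # dozens of f evaluations
```

On this quadratic the Newton step lands on the optimum immediately, while the direct search needs roughly thirty evaluations, which mirrors the evaluation-count gap the paper reports, along with the caveat that the gradient method can fail outright on harder problems.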

