Machine learning and computational design

Ubiquity ◽  
2020 ◽  
Vol 2020 (May) ◽  
pp. 1-10
Author(s):  
Silvio Carta
2021 ◽  
Author(s):  
Simone Gallarati ◽  
Raimon Fabregat ◽  
Ruben Laplaza ◽  
Sinjini Bhattacharjee ◽  
Matthew D. Wodrich ◽  
...  

Hundreds of catalytic methods are developed each year to meet the demand for high-purity chiral compounds. The computational design of enantioselective organocatalysts remains a significant challenge, as catalysts are typically...


RSC Advances ◽  
2015 ◽  
Vol 5 (55) ◽  
pp. 44361-44370 ◽  
Author(s):  
A. W. Thornton ◽  
D. A. Winkler ◽  
M. S. Liu ◽  
M. Haranczyk ◽  
D. F. Kennedy

Computational search of structure database for CO2 reduction catalysts using molecular simulation and machine learning.


Author(s):  
Yuliya Sinke ◽  
Sebastian Gatz ◽  
Martin Tamke ◽  
Mette Ramsgaard Thomsen

Abstract This paper examines the use of machine learning in creating digitally integrated design-to-fabrication workflows. As computational design allows for new methods of material specification and fabrication, it enables direct functional grading of material at high detail, thereby tuning the design performance in response to performance criteria. However, the generation of fabrication data is often cumbersome and relies on in-depth knowledge of the fabrication processes. Parametric models set up for automatic detailing of incremental changes unfortunately do not accommodate larger topological changes to the material setup. The paper presents the speculative case study KnitVault. Building on the earlier research projects Isoropia and Ombre, the study examines the use of machine learning to train models for fabrication data generation in response to desired performance criteria. KnitVault demonstrates and validates methods for shortcutting parametric interfacing and explores how the trained model can be employed in design cases that exceed the topology of the training examples.


Author(s):  
Wang Yi ◽  
Guangchen Liu ◽  
Jianbao Gao ◽  
Lijun Zhang

Casting aluminum alloys are widely used in industry due to their excellent comprehensive performance. Alloying/microalloying and post-solidification heat treatments are the most common measures to tune the microstructure and enhance their mechanical properties. However, it is very challenging to achieve accurate and efficient development of novel casting aluminum alloys using the traditional trial-and-error method. With the rapid development of computer technology, computational thermodynamics (CT) in the framework of the CALculation of PHAse Diagram (CALPHAD) approach, the data-driven machine learning (ML) technique, and their combinations have proved to be effective approaches for the design of casting aluminum alloys. In this review, the state-of-the-art computational alloy design approaches driven by CT and ML techniques, as well as their combinations, are comprehensively summarized. The current status of the thermodynamic database for aluminum alloys, the core of CT, is also briefly introduced. After that, a variety of successful case studies on the design of different casting aluminum alloys driven by CT, ML, and their combinations are demonstrated, including common applications, CT-driven design of Sc-added Al-Si-Mg series casting alloys, and the design of Sr-modified A356 alloys driven by combining CT and ML. Finally, the conclusions of this review are drawn, and perspectives for boosting the computational design approach driven by combining CT and ML techniques are pointed out.
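The CT-plus-ML combination described above typically uses expensive thermodynamic calculations to generate training data for a cheap surrogate, which then screens many candidate compositions. The following is a minimal sketch of that workflow only; the function `calphad_surrogate_target`, the composition ranges, and all data are hypothetical stand-ins, not an actual CALPHAD calculation or any alloy system from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive CALPHAD calculation: maps an
# alloy composition (wt% Si, Mg, Sc) to a scalar property. Purely
# synthetic for illustration.
def calphad_surrogate_target(x):
    si, mg, sc = x
    return 0.4 * si - 0.1 * si**2 + 0.8 * mg + 1.5 * sc

# 1. Sample compositions and "compute" their property with the
#    expensive model (in practice, a CALPHAD run per composition).
lo, hi = [5.0, 0.2, 0.0], [9.0, 0.6, 0.4]
X = rng.uniform(lo, hi, size=(200, 3))
y = np.array([calphad_surrogate_target(x) for x in X])

# 2. Fit a cheap ML surrogate (ridge regression on quadratic features).
def features(X):
    return np.hstack([X, X**2, np.ones((len(X), 1))])

F = features(X)
lam = 1e-6
w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

# 3. Screen a dense candidate set with the surrogate instead of the
#    expensive calculation, and pick the best predicted composition.
cand = rng.uniform(lo, hi, size=(10000, 3))
pred = features(cand) @ w
best = cand[np.argmax(pred)]
print("best candidate (Si, Mg, Sc wt%):", best.round(3))
```

The design choice illustrated here is the division of labour: the thermodynamic model is queried only a few hundred times, while the surrogate evaluates tens of thousands of candidates essentially for free.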


2021 ◽  
Author(s):  
Julie Pongetti ◽  
Timoleon Kipouros ◽  
Marc Emmanuelli ◽  
Richard Ahlfeld ◽  
Shahrokh Shahpar

Abstract Machine learning models are becoming an increasingly popular way to exploit data from fluid dynamics simulations. This project investigates how autoencoders and output consolidation can be used to increase the accuracy of machine learning models by injecting knowledge of the full flow field into the predictions of the Quantities of Interest (QoI) used in the optimisation of highly loaded transonic compressor blades. Results show that the accuracy of the predicted QoI can indeed be increased by using both an appropriate autoencoder and output consolidation. Most significantly, the prediction accuracy is increased in the range of QoI values involved in optimisation problems. As a result, a more accurate and faster computational design approach driven by machine learning methods has been demonstrated.
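The core idea of injecting full flow-field knowledge into QoI predictions can be sketched in miniature: compress each field to a low-dimensional latent code, then predict the QoI from that code. The sketch below uses PCA as a stand-in linear autoencoder and entirely synthetic "flow fields"; it illustrates the idea only, not the architecture or data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: each row is a flattened "flow field"
# (e.g. a pressure snapshot) generated from a few hidden modes,
# and each QoI depends on those same modes.
n_samples, n_field = 300, 400
latent_true = rng.normal(size=(n_samples, 3))           # hidden modes
modes = rng.normal(size=(3, n_field))
fields = latent_true @ modes + 0.01 * rng.normal(size=(n_samples, n_field))
qoi = latent_true @ np.array([2.0, -1.0, 0.5])          # scalar QoI

# Linear "autoencoder" via PCA: the encoder projects each full field
# onto the top principal components.
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
encoder = Vt[:3]                        # 3 latent dimensions
codes = (fields - mean) @ encoder.T     # encode field -> latent code

# Predict the QoI from the latent code (least squares with intercept),
# so the prediction is informed by the whole field, not hand-picked
# scalar features.
A = np.hstack([codes, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(A, qoi, rcond=None)
pred = A @ w
print("max abs QoI error:", float(np.max(np.abs(pred - qoi))))
```

Because the QoI here truly depends on the field's dominant modes, the latent code recovers it almost exactly; in a real compressor-blade setting a nonlinear autoencoder would replace the PCA step.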


2021 ◽  
Author(s):  
Rahmad Akbar ◽  
Philippe A. Robert ◽  
Cédric R. Weber ◽  
Michael Widrich ◽  
Robert Frank ◽  
...  

Generative machine learning (ML) has been postulated to be a major driver in the computational design of antigen-specific monoclonal antibodies (mAb). However, efforts to confirm this hypothesis have been hindered by the infeasibility of testing arbitrarily large numbers of antibody sequences for their most critical design parameters: paratope, epitope, affinity, and developability. To address this challenge, we leveraged a lattice-based antibody-antigen binding simulation framework, which incorporates a wide range of physiological antibody binding parameters. The simulation framework both enables the computation of antibody-antigen 3D structures and functions as an oracle for unrestricted prospective evaluation of the antigen specificity of ML-generated antibody sequences. We found that a deep generative model, trained exclusively on antibody sequence (1D) data, can be used to design native-like conformational (3D) epitope-specific antibodies, matching or exceeding the training dataset in affinity and developability variety. Furthermore, we show that transfer learning enables the generation of high-affinity antibody sequences from low-N training data. Finally, we validated that the antibody design insight gained from simulated antibody-antigen binding data is applicable to experimental real-world data. Our work establishes a priori feasibility and the theoretical foundation of high-throughput ML-based mAb design.
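The low-N transfer-learning result mentioned above rests on a general pattern: statistics learned from a large generic sequence corpus serve as a prior that a small target dataset then reweights. A deliberately tiny sketch of that pattern, using a bigram sequence model in place of a deep generative model; the sequences, weights, and the `sample_sequence` helper are all hypothetical illustrations, not the paper's method.

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
IDX = {a: i for i, a in enumerate(ALPHABET)}

def bigram_counts(seqs):
    """Count residue-pair transitions across a set of sequences."""
    counts = np.zeros((20, 20))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts

rng = np.random.default_rng(2)

# "Large" generic pretraining set (synthetic stand-in for a big
# antibody sequence repertoire).
pretrain = ["".join(rng.choice(list(ALPHABET), size=30)) for _ in range(500)]

# Low-N target set (stand-in for a handful of high-affinity binders).
target = ["GYYCAR" * 5 for _ in range(3)]

# Transfer: pretrained counts act as a weak prior; "fine-tuning" adds
# the low-N counts on top with a much higher weight.
prior = bigram_counts(pretrain)
finetune = bigram_counts(target)
counts = 0.1 * prior + 10.0 * finetune + 1e-3
probs = counts / counts.sum(axis=1, keepdims=True)

def sample_sequence(length=12):
    """Generate a sequence from the fine-tuned transition model."""
    s = [int(rng.integers(20))]
    for _ in range(length - 1):
        s.append(int(rng.choice(20, p=probs[s[-1]])))
    return "".join(ALPHABET[i] for i in s)

print(sample_sequence())
```

Even with only three target sequences, the reweighted model strongly favours the target motifs while the prior keeps every transition possible, which is the essence of generating plausible novel sequences from low-N data.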


2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Abstract Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr
