Improving model-based optical proximity correction accuracy using improved process data generation

2006 ◽  
Author(s):  
Mark Lu ◽  
Dion King ◽  
Curtis Liang ◽  
Lawrence S. Melvin III

2021 ◽
Vol 1 ◽  
pp. 2127-2136
Author(s):  
Olivia Borgue ◽  
John Stavridis ◽  
Tomas Vannucci ◽  
Panagiotis Stavropoulos ◽  
Harry Bikas ◽  
...  

Abstract: Additive manufacturing (AM) is a versatile technology that could add flexibility to manufacturing processes, whether implemented alone or alongside other technologies. It enables on-demand production and decentralized production networks, as production facilities can be located around the world to manufacture products closer to the final consumer (decentralized manufacturing). However, the wide adoption of additive manufacturing technologies is hindered by a lack of experience with its implementation, a lack of repeatability among different manufacturers, and a lack of integrated production systems. The latter hinders the traceability and quality assurance of printed components and limits the understanding and data generation of AM processes and parameters. In this article, a design strategy is proposed that integrates the different phases of the development process into a model-based design platform for decentralized manufacturing. The platform is aimed at facilitating data traceability and product repeatability among different AM machines. The strategy is illustrated with a case study in which a car steering knuckle is manufactured in three different facilities in Sweden and Italy.
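The traceability the abstract emphasizes hinges on attaching consistent process metadata to every part, regardless of which facility printed it. A minimal sketch of such a build record follows; all field names and example values (machine IDs, material, parameter keys) are illustrative assumptions, not the schema proposed in the article.

```python
from dataclasses import dataclass, field, asdict

# Illustrative traceability record for a part printed in a decentralized
# AM network. Field names and values are assumptions for the sketch,
# not the article's actual data model.
@dataclass
class BuildRecord:
    part_id: str
    facility: str                # e.g. one of the case study's three facilities
    machine_id: str
    material: str
    process_parameters: dict = field(default_factory=dict)

record = BuildRecord(
    part_id="steering-knuckle-001",
    facility="Sweden-1",
    machine_id="printer-A",
    material="AlSi10Mg",
    process_parameters={"laser_power_W": 370, "layer_height_mm": 0.03},
)

# asdict() gives a plain dictionary suitable for logging or export.
exported = asdict(record)
```

Carrying such a record through every stage of the development process is one concrete way a model-based platform could support repeatability checks across machines.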


Author(s):  
XinMei Shi ◽  
Daan M. Maijer ◽  
Guy Dumont

Controlling and eliminating defects, such as macro-porosity, in die casting processes is an ongoing challenge for manufacturers. Current strategies for eliminating defects focus on the execution of a pre-set casting cycle, on die structure design, or on a combination of both. To respond to process variability and mitigate its negative effects, advanced process control methodologies may be employed to dynamically adjust the operational parameters of the process. In this work, a finite element heat transfer model, validated by comparison with experimental data, has been developed to predict the evolution of temperatures and the volume of liquid encapsulation in an experimental casting process. A virtual process, made up of the heat transfer model and a wrapper script for communication, has been employed to simulate the continuous operation of the real process. A stochastic state-space model, based on data from measurements and from the virtual process, has been developed to provide a reliable representation of this virtual process. The parameters of the deterministic portion result from system identification of the virtual process, whereas the parameters of the stochastic portion arise from the analysis and comparison of measurement data with virtual process data. The resulting state-space model, which can be extended to a multi-input multi-output model, will facilitate the design of a model-based controller for this process.
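The model structure described above, with deterministic dynamics obtained from system identification and a stochastic portion fitted from measurement discrepancies, can be sketched as a standard stochastic state-space simulation. The matrices, dimensions, and noise covariances below are illustrative assumptions, not the identified values from this work.

```python
import numpy as np

# Illustrative single-input, single-output stochastic state-space model:
#   x[k+1] = A x[k] + B u[k] + w[k],   w ~ N(0, Q)  (process noise)
#   y[k]   = C x[k] + v[k],            v ~ N(0, R)  (measurement noise)
# A, B, C would come from system identification of the virtual process;
# Q, R from comparing measurement data with virtual-process predictions.
# All numerical values here are made up for the sketch.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-3]])

rng = np.random.default_rng(0)

def simulate(u_seq, x0=np.zeros(2)):
    """Propagate the model over an input sequence, returning outputs y[k]."""
    x, ys = x0, []
    for u in u_seq:
        w = rng.multivariate_normal(np.zeros(2), Q)
        x = A @ x + (B @ np.array([u])).ravel() + w
        v = rng.multivariate_normal(np.zeros(1), R)
        ys.append((C @ x + v).item())
    return ys

outputs = simulate([1.0] * 50)  # step response of the toy model
```

Extending this to the multi-input multi-output case mentioned in the abstract only changes the shapes of B, C, and the input sequence; the propagation loop is unchanged.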


2019 ◽  
Vol 58 (38) ◽  
pp. 17871-17884
Author(s):  
Natércia C. P. Fernandes ◽  
Andrey Romanenko ◽  
Marco S. Reis

2009 ◽  
Vol 48 (6) ◽  
pp. 06FA05 ◽  
Author(s):  
Jianliang Li ◽  
Xiaohai Li ◽  
Robert Lugg ◽  
Lawrence S. Melvin

Author(s):  
Yinlam Chow ◽  
Brandon Cui ◽  
Moonkyung Ryu ◽  
Mohammad Ghavamzadeh

Model-based reinforcement learning (RL) algorithms allow us to combine model-generated data with those collected from interaction with the real system in order to alleviate the data efficiency problem in RL. However, designing such algorithms is often challenging because the bias in simulated data may overshadow the ease of data generation. A potential solution to this challenge is to jointly learn and improve model and policy using a universal objective function. In this paper, we leverage the connection between RL and probabilistic inference, and formulate such an objective function as a variational lower-bound of a log-likelihood. This allows us to use expectation maximization (EM) and iteratively fix a baseline policy and learn a variational distribution, consisting of a model and a policy (E-step), followed by improving the baseline policy given the learned variational distribution (M-step). We propose model-based and model-free policy iteration (actor-critic) style algorithms for the E-step and show how the variational distribution learned by them can be used to optimize the M-step in a fully model-based fashion. Our experiments on a number of continuous control tasks show that our model-based (E-step) algorithm, called variational model-based policy optimization (VMBPO), is more sample-efficient and robust to hyper-parameter tuning than its model-free (E-step) counterpart. Using the same control tasks, we also compare VMBPO with several state-of-the-art model-based and model-free RL algorithms and show its sample efficiency and performance.
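The variational lower bound the abstract refers to follows the standard RL-as-inference construction, which introduces binary optimality variables $\mathcal{O}$ over trajectories $\tau$; the specific parameterization used by VMBPO is not given in the abstract, so the bound below is the generic form:

```latex
\log p(\mathcal{O})
  = \log \int p(\mathcal{O} \mid \tau)\, p(\tau)\, d\tau
  \;\ge\; \mathbb{E}_{q(\tau)}\!\left[ \log p(\mathcal{O} \mid \tau) \right]
          - \mathrm{KL}\!\left( q(\tau) \,\|\, p(\tau) \right),
```

with equality when $q(\tau) \propto p(\tau)\, p(\mathcal{O} \mid \tau)$. In the EM scheme described above, the E-step tightens this bound by fitting the variational distribution $q$ (factored into a model and a policy) with the baseline policy held fixed, and the M-step improves the baseline policy under the learned $q$.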

