Research on Two-Stage Joint Optimization Problem of Green Manufacturing and Maintenance for Semiconductor Wafer

2020, Vol 2020, pp. 1-22
Author(s):  
Jun Dong ◽  
Chunming Ye

This paper proposes a two-stage joint optimization problem of green manufacturing and maintenance for semiconductor wafers (TSGMM-SW) that considers the manufacturing, inspection, and repair stages simultaneously; it is a typical NP-hard problem of practical research significance and value. To address this problem, a green scheduling model is constructed with the objective of minimizing makespan, total carbon emissions, and total preventive maintenance (PM) costs, and an improved hybrid multiobjective multiverse optimization (IHMMVO) algorithm is proposed. The joint optimization of green manufacturing and maintenance is realized by designing a synchronous scheduling and maintenance strategy for wafer manufacturing and equipment PM. The diversity of the population is expanded and the optimization performance of IHMMVO is improved through an initial-population fusion strategy and a subpopulation evolution strategy. In the experimental phase, we perform simulation experiments on 900 test cases randomly generated from 90 parameter combinations. The IHMMVO algorithm is compared with existing algorithms to verify its effectiveness and feasibility for TSGMM-SW.
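The abstract does not detail IHMMVO's internals, but any multiobjective optimizer over (makespan, carbon emissions, PM cost) needs Pareto-dominance bookkeeping. A minimal generic sketch (an illustration of the concept, not the paper's implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    """Filter a list of objective vectors down to its non-dominated set."""
    return [a for a in front
            if not any(dominates(b, a) for b in front if b is not a)]
```

For example, with vectors (makespan, emissions), `(10, 5)` dominates `(11, 6)` but neither `(10, 5)` nor `(12, 4)` dominates the other, so both survive the filter.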

2019, Vol 9 (14), pp. 2879
Author(s):  
Jun Dong ◽  
Chunming Ye

Production scheduling of semiconductor wafer manufacturing is a challenging research topic in industrial engineering. On this basis, the green manufacturing collaborative optimization problem for semiconductor wafer distributed heterogeneous factories is proposed for the first time; it is a typical NP-hard problem of practical application value and significance. Solving it requires an effective algorithm for the rational allocation of jobs among the factories and for the production scheduling of the allocated jobs within each factory, so as to realize collaborative optimization of the manufacturing process. In this paper, a scheduling model for green manufacturing collaborative optimization of the semiconductor wafer distributed heterogeneous factory is constructed. The grey wolf optimizer, a swarm intelligence algorithm proposed in recent years, is improved with a new learning strategy for the initial population and the leadership hierarchy and a new search strategy for predatory behavior; these changes expand the diversity of the population and help the algorithm avoid local optima. In the experimental stage, two-factory and three-factory test cases are generated. The effectiveness and feasibility of the proposed algorithm are verified through a comparative study with the improved grey wolf algorithms MODGWO and MOGWO and with the fast and elitist multi-objective genetic algorithm NSGA-II.
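For readers unfamiliar with the grey wolf optimizer mentioned above, its core position update guides each wolf toward the three best solutions (alpha, beta, delta). A bare-bones single-iteration sketch of the standard GWO update (not the paper's improved variant):

```python
import random

def gwo_step(wolves, fitness, a):
    """One iteration of the standard grey wolf position update.
    `a` decays from 2 to 0 over the run, shifting exploration to exploitation."""
    ranked = sorted(wolves, key=fitness)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]  # three leaders
    new_wolves = []
    for x in wolves:
        pos = []
        for d in range(len(x)):
            guided = []
            for leader in (alpha, beta, delta):
                r1, r2 = random.random(), random.random()
                A = 2 * a * r1 - a        # step-size coefficient
                C = 2 * r2                # leader-weighting coefficient
                D = abs(C * leader[d] - x[d])
                guided.append(leader[d] - A * D)
            pos.append(sum(guided) / 3.0)  # average of the leaders' guidance
        new_wolves.append(pos)
    return new_wolves
```

Repeating this step while shrinking `a` drives the pack toward good regions; the paper's contribution lies in how the initial population, leadership levels, and predatory search are redesigned around this skeleton.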


Author(s):  
Liping Zhou ◽  
Na Geng ◽  
Zhibin Jiang ◽  
Shan Jiang

The joint optimization problem of multiresource capacity planning and multitype patient scheduling under uncertain demands and random capacity consumption poses a significant computational challenge. The common practice in solving this problem is to first identify capacity levels and then determine patient scheduling decisions separately, which typically leads to suboptimal decisions that often result in ineffective outcomes of care. In order to overcome these inefficiencies, in this paper, we propose a novel two-stage stochastic optimization model that integrates these two decisions, which can lower costs by exploring the coupling relationship between patient scheduling and capacity configuration. The patient scheduling problem is modeled as a Markov decision process. We first analyze the properties for the multitype patient case under specific assumptions and then establish structural properties of the optimal scheduling policy for the one-type patient case. Based on these findings, we propose optimal solution algorithms to solve the joint optimization problem for this special case. Because it is intractable to solve the original two-stage problem for a general multitype system with large state space, we propose a heuristic policy and a two-stage stochastic mixed-integer programming model solved by the Benders decomposition algorithm, which is further improved by combining an approximate linear program and the look-ahead strategy. To illustrate the efficiency of our approaches and draw managerial insights, we apply our solutions to a data set from the day surgery center of a large public hospital in Shanghai, China. The results show that the joint optimization of capacity planning and patient scheduling could significantly improve the performance. Furthermore, our model can be applied to a rolling-horizon framework to optimize dynamic patient scheduling decisions. 
Through extensive numerical analyses, we demonstrate that our approaches yield good performance, as measured by the gap against an upper bound, and that they outperform several benchmark policies. Summary of Contribution: First, this paper investigates the joint optimization problem of multiresource capacity planning and multitype patient scheduling under uncertain demands and random capacity consumption, which poses a significant computational challenge; it belongs to the scope of computing and operations research. Second, this paper formulates a mathematical model, establishes optimality properties, proposes solution algorithms, and performs extensive numerical experiments using real-world data. This work includes aspects of dynamic stochastic control, computing algorithms, and experiments. Moreover, this paper is motivated by a practical problem (joint management of capacity planning and patient scheduling in the day surgery center) in our cooperative hospital, which is also key to numerous other applications, for example, make-to-order manufacturing systems and computing facility systems. By using the optimality properties, solution algorithms, and managerial insights derived in this paper, practitioners can be equipped with a decision support tool for efficient and effective operational decisions.
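The paper's machinery (MDP scheduling, Benders decomposition) is far richer than can be shown here, but the two-stage structure itself - pick capacity first, pay a recourse cost for unmet demand second - can be illustrated by a toy sample-average sketch with hypothetical cost names:

```python
def best_capacity(demand_scenarios, cap_cost, overtime_cost, capacity_levels):
    """Choose the first-stage capacity minimizing capacity cost plus the
    expected second-stage (recourse) overtime cost across demand scenarios."""
    def expected_cost(c):
        recourse = sum(max(0, d - c) * overtime_cost
                       for d in demand_scenarios) / len(demand_scenarios)
        return cap_cost * c + recourse
    return min(capacity_levels, key=expected_cost)
```

Evaluating both stages jointly is what lets the model trade cheap capacity against expensive recourse, the coupling the abstract says separate optimization misses.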


2021, Vol 17 (4), pp. 1-20
Author(s):  
Serena Wang ◽  
Maya Gupta ◽  
Seungil You

Given a classifier ensemble and a dataset, many examples can be confidently and accurately classified after only a subset of the base models in the ensemble is evaluated. Dynamically deciding to classify early can reduce both mean latency and CPU usage without harming the accuracy of the original ensemble. To achieve such gains, we propose jointly optimizing the evaluation order of the base models and the early-stopping thresholds. The proposed objective is a combinatorial optimization problem, but we provide a greedy algorithm that achieves a 4-approximation of the optimal solution under certain assumptions, which is also the best achievable polynomial-time approximation bound. Experiments on benchmark and real-world problems show that the proposed Quit When You Can (QWYC) algorithm can speed up average evaluation time by 1.8–2.7 times even on jointly trained ensembles, which are harder to speed up than independently or sequentially trained ensembles. QWYC's joint optimization of ordering and thresholds also outperformed previous fixed orderings in experiments, including the ordering used by gradient boosted trees.
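The early-exit evaluation loop described above can be sketched as follows. The ordering and the per-prefix thresholds here are hypothetical inputs; in the paper they are what the greedy 4-approximation jointly optimizes:

```python
def qwyc_predict(models, order, lo, hi, x):
    """Evaluate base models in `order`; quit as soon as the running score
    crosses a per-prefix exit threshold (lo[k], hi[k])."""
    s = 0.0
    for k, i in enumerate(order):
        s += models[i](x)
        if s <= lo[k] or s >= hi[k]:   # confident enough to stop early
            break
    return (1 if s >= 0 else 0), k + 1  # predicted label, models evaluated
```

An example with score-emitting stub models: if the first model already pushes the running score past the first threshold, only one of three models is evaluated.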


Author(s):  
Tianqi Jing ◽  
Shiwen He ◽  
Fei Yu ◽  
Yongming Huang ◽  
Luxi Yang ◽  
...  

Abstract: Cooperation between mobile edge computing (MEC) and mobile cloud computing (MCC) in computation offloading can improve the quality of service (QoS) of user equipments (UEs) with computation-intensive tasks. In this paper, in order to minimize the expected charge, we focus on how to offload a computation-intensive task from a resource-scarce UE to the access points (APs) and the cloud, and on the density allocation of APs at the mobile edge. We consider three offloading computing modes and focus on the coverage probability of each mode and the corresponding ergodic rates. The resulting optimization problem is mixed-integer and non-convex in both the objective function and the constraints. We propose a low-complexity suboptimal algorithm called Iteration of Convex Optimization and Nonlinear Programming (ICONP) to solve it. Numerical results verify the superior performance of the proposed algorithm: optimal computing ratios and AP density allocation contribute to the charge saving.


10.6036/10099
2021, Vol DYNA-ACELERADO (0), 8 pp.
Author(s):  
SALAH KAMAL ◽  
ATTIA EL-FERGANY ◽  
EHAB EHAB ELSAYED ELATTAR ◽  
AHMED AGWA

The accuracy of fuel cell (FC) models is important for further numerical simulation and analysis under various conditions. The electrical (I-V) characteristic of polymer exchange membrane fuel cells (PEMFCs) is highly nonlinear and involves seven uncertain parameters that are not given in the fabricator's datasheet. These seven parameters must be obtained to put the PEMFC model in order. This research addresses an up-to-date application of the gradient-based optimizer (GBO) to generate the best estimates of these uncertain parameters. The estimation is cast as an optimization problem with a cost function (CF) subject to a set of self-constrained limits. Three test cases of widely used PEMFC units, namely the SR-12, a 250-W module, and the NedStack PS6, are demonstrated and analyzed to appraise the performance of the GBO. The best values of the CF are 0.000142, 0.33598, and 2.10025 V² for the SR-12, the 250-W module, and the NedStack PS6, respectively. Furthermore, the GBO-based model is assessed by comparing its results with the experimental results of these typical PEMFCs and with other methods. Finally, several scenarios of operating variations in the inlet regulation pressures and unit temperatures are performed. The reported results of the studied scenarios indicate the effectiveness of the GBO in establishing an accurate PEMFC model.
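The parameter-extraction setup above - minimize a sum-of-squared-voltage-errors cost over bounded parameters - can be sketched generically. The stand-in optimizer below is plain random search, not the paper's GBO, and the linear toy model is purely illustrative:

```python
import random

def cost(params, data, model):
    """Sum of squared errors between measured and modeled voltages (V^2),
    the general shape of the CF described in the abstract."""
    return sum((v - model(i, params)) ** 2 for i, v in data)

def random_search(cost_fn, bounds, iters=2000, seed=0):
    """Stand-in optimizer: sample candidates uniformly within the
    self-constrained limits and keep the best (GBO would search smarter)."""
    rng = random.Random(seed)
    best, best_c = None, float("inf")
    for _ in range(iters):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        c = cost_fn(cand)
        if c < best_c:
            best, best_c = cand, c
    return best, best_c

# Toy usage: fit a hypothetical linear I-V curve v = p0 + p1 * i
data = [(0.0, 1.2), (1.0, 0.7), (2.0, 0.2), (3.0, -0.3)]
model = lambda i, p: p[0] + p[1] * i
best, best_c = random_search(lambda p: cost(p, data, model),
                             bounds=[(0.0, 2.0), (-1.0, 0.0)])
```

Real PEMFC models replace the linear stub with the seven-parameter nonlinear stack-voltage equation, but the CF and bounded search structure are the same.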


Author(s):  
R. Hänsch ◽  
I. Drude ◽  
O. Hellwich

The task of computing 3D reconstructions from large amounts of data has become an active field of research in recent years. Based on an initial estimate provided by structure from motion, bundle adjustment seeks a solution that is optimal for all cameras and 3D points. The corresponding nonlinear optimization problem is usually solved by the Levenberg-Marquardt algorithm combined with conjugate gradient descent. While many adaptations and extensions of the classical bundle adjustment approach have been proposed, only a few works consider the acceleration potential of GPU systems. This paper elaborates the possible time and space savings when the implementation strategy is fitted to the requirements of realizing a bundler on heterogeneous CPU-GPU systems. Instead of focusing on the standard Levenberg-Marquardt optimization alone, nonlinear conjugate gradient descent and alternating resection-intersection are studied as two alternatives. The experiments show that alternating resection-intersection in particular reaches low error rates very fast, but converges to larger error rates than Levenberg-Marquardt. PBA, one of the current state-of-the-art bundlers, converges more slowly in 50% of the test cases and needs 1.5-2 times more memory than the Levenberg-Marquardt implementation.
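A full bundler is far beyond a sketch, but the Levenberg-Marquardt update at its core - a Gauss-Newton step damped by a term that grows when a step fails - can be shown in miniature on a single parameter:

```python
def lm_1d(residual, deriv, p, iters=30, lam=1e-2):
    """Levenberg-Marquardt for one parameter: damped Gauss-Newton steps,
    increasing the damping whenever a step fails to reduce the cost.
    `residual(p)` returns the residual vector, `deriv(p)` its derivatives."""
    def cost(q):
        return sum(r * r for r in residual(q))
    c = cost(p)
    for _ in range(iters):
        r = residual(p)
        J = deriv(p)
        g = sum(Ji * ri for Ji, ri in zip(J, r))  # gradient J^T r
        H = sum(Ji * Ji for Ji in J)              # Gauss-Newton Hessian J^T J
        step = -g / (H + lam)                     # damped normal equation
        trial = p + step
        if cost(trial) < c:
            p, c = trial, cost(trial)
            lam *= 0.5                            # success: trust the model more
        else:
            lam *= 10.0                           # failure: damp harder
    return p
```

Bundle adjustment applies exactly this update, but over millions of camera and point parameters with a sparse block-structured `H`, which is where the paper's CPU-GPU partitioning comes in.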


Author(s):  
ZOHEIR EZZIANE

Probabilistic and stochastic algorithms have been used to solve many hard optimization problems, since they can provide solutions where standard algorithms often fail. These algorithms search through a space of potential solutions using randomness as a major factor in making decisions. In this research, the knapsack problem (an optimization problem) is solved using a genetic algorithm approach, and comparisons are then made with a greedy method and a heuristic algorithm. The knapsack problem is known to be NP-hard. Genetic algorithms are search procedures based on natural selection and natural genetics: they randomly create an initial population of individuals and then use genetic operators to yield new offspring. In this research, a genetic algorithm is used to solve the 0/1 knapsack problem, with special consideration given to the penalty function, where constant and self-adaptive penalty functions are adopted.
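The constant-penalty variant described above can be sketched end to end. This is a minimal generic GA (bitstring individuals, one-point crossover, bit-flip mutation, elitist survival), not the paper's exact operators or parameter choices:

```python
import random

def ga_knapsack(values, weights, capacity, pop=40, gens=100, pen=10.0, seed=1):
    """Simple GA for 0/1 knapsack with a constant penalty on overweight solutions."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(ind):
        w = sum(wi for wi, b in zip(weights, ind) if b)
        v = sum(vi for vi, b in zip(values, ind) if b)
        return v - pen * max(0, w - capacity)   # constant-penalty function

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 2]                # elitist survival
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[rng.randrange(n)] ^= 1        # bit-flip mutation
            children.append(child)
        popn = elite + children
    best = max(popn, key=fitness)
    return best, fitness(best)
```

A self-adaptive penalty would replace the constant `pen` with a value that tightens as the run progresses; the rest of the loop is unchanged.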


Author(s):  
Ning Quan ◽  
Harrison Kim

The power-maximizing grid-based wind farm layout optimization problem seeks to determine the layout of a given number of turbines over a grid of possible locations such that wind farm power output is maximized. In general the problem is a nonlinear discrete optimization problem that cannot be solved to optimality, so heuristics must be used. This article proposes a new two-stage heuristic that first finds a layout minimizing the maximum pairwise power loss between any pair of turbines. The initial layout is then changed one turbine at a time to decrease the sum of pairwise power losses. The proposed heuristic is compared to the greedy algorithm using real-world data collected from a site in Iowa. The results suggest that the proposed heuristic produces layouts with slightly higher power output, but that are less robust to changes in the dominant wind direction.
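The two-stage idea can be sketched with an abstract pairwise-loss function. Note the first stage here is a greedy proxy (add the site whose worst loss to the placed turbines is smallest) rather than an exact min-max solve, and the loss model is a placeholder:

```python
from itertools import combinations

def two_stage_layout(grid, k, loss):
    """Stage 1: greedily add the free site whose worst pairwise loss to the
    turbines already placed is smallest.  Stage 2: move one turbine at a time
    to any free site that lowers total pairwise loss, until no move helps."""
    chosen = [grid[0]]
    while len(chosen) < k:
        best = min((s for s in grid if s not in chosen),
                   key=lambda s: max(loss(s, t) for t in chosen))
        chosen.append(best)

    def total(layout):
        return sum(loss(a, b) for a, b in combinations(layout, 2))

    improved = True
    while improved:                 # stage 2: one-turbine local moves
        improved = False
        for i in range(k):
            for s in grid:
                if s in chosen:
                    continue
                trial = chosen[:i] + [s] + chosen[i + 1:]
                if total(trial) < total(chosen):
                    chosen, improved = trial, True
    return chosen
```

With sites on a line and a loss that decays with distance, the heuristic spreads the turbines apart, mirroring the wake-avoidance intent of the real objective.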


2020, Vol 34 (02), pp. 1378-1386
Author(s):  
Andrew Perrault ◽  
Bryan Wilder ◽  
Eric Ewing ◽  
Aditya Mate ◽  
Bistra Dilkina ◽  
...  

Stackelberg security games are a critical tool for maximizing the utility of limited defense resources to protect important targets from an intelligent adversary. Motivated by green security, where the defender may only observe an adversary's response to defense on a limited set of targets, we study the problem of learning a defense that generalizes well to a new set of targets with novel feature values and combinations. Traditionally, this problem has been addressed via a two-stage approach where an adversary model is trained to maximize predictive accuracy without considering the defender's optimization problem. We develop an end-to-end game-focused approach, where the adversary model is trained to maximize a surrogate for the defender's expected utility. We show both in theory and experimental results that our game-focused approach achieves higher defender expected utility than the two-stage alternative when there is limited data.

