A Decomposed Gradient-Based Approach for Generalized Platform Selection and Variant Design in Product Family Optimization

2008 ◽  
Vol 130 (7) ◽  
Author(s):  
Aida Khajavirad ◽  
Jeremy J. Michalek

A core challenge in product family optimization is to jointly determine (1) the optimal selection of components to be shared across product variants and (2) the optimal values for design variables that define those components. Each of these subtasks depends on the other; however, due to the combinatorial nature and high computational cost of the joint problem, prior methods have forgone optimality of the full problem by fixing the platform a priori, restricting the platform configuration to all-or-none component sharing, or optimizing the joint problem in multiple stages. In this paper, we address these restrictions by (1) introducing an extended metric to account for generalized commonality, (2) relaxing the metric to the continuous space to enable gradient-based optimization, and (3) proposing a decomposed single-stage method for optimizing the joint problem. The approach is demonstrated on a family of ten bathroom scales. Results indicate that generalized commonality dramatically improves the quality of optimal solutions, and the decomposed single-stage approach offers substantial improvement in scalability and tractability of the joint problem, providing a practical tool for optimizing families consisting of many variants.

Author(s):  
Aida Khajavirad ◽  
Jeremy J. Michalek

A core challenge in product family optimization is to develop a single-stage approach that optimally selects the set of variables to be shared in the platform(s) while simultaneously designing the platform(s) and variants, within an algorithm that is efficient and scalable. However, solving the joint product family platform selection and design problem involves significant complexity and computational cost, so most prior methods have narrowed the scope by treating the platform as fixed or have relied on stochastic algorithms or heuristic two-stage approaches that may sacrifice optimality. In this paper, we propose a single-stage approach for optimizing the joint problem using gradient-based methods. The combinatorial platform-selection variables are relaxed to the continuous space by applying the commonality index and consistency relaxation function introduced in a companion paper. To improve scalability, we exploit the structure of the product family problem and decompose the joint optimization into a two-level problem using analytical target cascading, so that the system-level problem determines the optimal platform configuration while each subsystem optimizes a single product in the family. Finally, we demonstrate the approach through optimization of a family of ten bathroom scales; results indicate encouraging success with scalability and computational expense.
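The two-level coordination idea can be illustrated with a minimal sketch. This is not the paper's formulation: the products' objectives, the penalty weight, and the closed-form product-level solve are all illustrative assumptions. A system level proposes a shared target for one platform variable; each product minimizes its own loss plus a quadratic consistency penalty pulling its local copy toward that target, and the two levels alternate until they agree.

```python
import numpy as np

# Illustrative analytical-target-cascading-style coordination sketch.
# Each product i would ideally set the shared variable to ideal[i];
# the consistency penalty forces the copies toward a common target t.
ideal = np.array([1.0, 2.0, 4.0])  # hypothetical per-product optima
w = 1.0                            # consistency penalty weight (illustrative)

def product_level(t):
    # Closed form of argmin_x (x - ideal_i)**2 + w * (x - t)**2
    return (ideal + w * t) / (1.0 + w)

def system_level(x):
    # Target minimizing the total quadratic consistency penalty.
    return x.mean()

t = 0.0
for _ in range(50):        # alternate the two levels until consistent
    x = product_level(t)
    t = system_level(x)
print(t)                   # converges to the compromise value ideal.mean()
```

With a quadratic penalty the iteration contracts toward the mean of the individual optima; the real formulation replaces these toy objectives with each variant's engineering model.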


Author(s):  
Aida Khajavirad ◽  
Jeremy J. Michalek

One critical aim of product family design is to offer distinct variants that attract a variety of market segments while maximizing the number of common parts to reduce manufacturing cost. Several indices have been developed for measuring the degree of commonality in existing product lines, either to compare product families or to assess the improvement of a redesign. In the product family optimization literature, commonality metrics are used to define the multi-objective tradeoff between commonality and individual variant performance. These optimization metrics differ from the indices in the first group: while they provide desirable computational properties, they generally lack the properties that make indices appropriate proxies for the benefits of commonality, such as reduced tooling and supply chain costs. In this paper, we propose a method for computing the commonality index introduced by Martin and Ishii using the available input data for any product family without a predefined configuration. The index, originally defined for binary (common/not common) formulations, is then relaxed to the continuous space so that the discrete problem can be solved with a series of continuous relaxations, and the effect of relaxation on the metric's behavior is investigated. Several relaxation formulations are examined, and a new function with desirable properties is introduced and compared with prior formulations. The properties of the proposed metric enable development of an efficient and robust single-stage gradient-based optimization of the joint product family platform selection and design problem, which is examined in a companion paper.
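A small sketch conveys what relaxing binary sharing decisions to the continuous space means in practice. The penalty function below is illustrative, not the paper's relaxation: sharing indicators in {0, 1} (common / not common) become continuous values in [0, 1], and a smooth penalty that vanishes only at the binary extremes drives the gradient-based optimizer back toward a valid discrete configuration.

```python
import numpy as np

def consistency_penalty(x):
    """Hypothetical smooth penalty on relaxed sharing variables.

    x in [0, 1]^n: 1 means the component is common across variants,
    0 means it is unique. The quadratic term x * (1 - x) is zero
    exactly at the binary extremes, so minimizing it pushes the
    continuous solution toward a feasible 0/1 platform configuration.
    """
    return np.sum(x * (1.0 - x))

x = np.array([0.0, 1.0, 0.5])      # third component is "half shared"
print(consistency_penalty(x))      # 0.25 -- only the fractional entry is penalized
```

Adding such a penalty to the objective lets standard gradient-based solvers navigate what is originally a combinatorial selection problem.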


Author(s):  
Sriram Shankaran ◽  
Brian Barr

The objective of this study is to develop and assess a gradient-based algorithm that efficiently traverses the Pareto front for multi-objective problems. We use high-fidelity, computationally intensive simulation tools (e.g., computational fluid dynamics (CFD) and finite element (FE) structural analysis) for function and gradient evaluations. The use of evolutionary algorithms with these high-fidelity simulation tools results in prohibitive computational costs. Hence, in this study we use an alternative gradient-based approach. We first outline an algorithm that can be proven to recover Pareto fronts. The performance of this algorithm is then tested on three academic problems: a convex front with uniform spacing of Pareto points, a convex front with non-uniform spacing, and a concave front. The algorithm is shown to retrieve the Pareto front in all three cases, overcoming a common deficiency of gradient-based methods that rely on scalarization. The algorithm is then applied to a practical problem in concurrent design for aerodynamic and structural performance of an axial turbine blade. For this problem, with 5 design variables and 10 points to approximate the front, the computational cost of the gradient-based method was roughly the same as that of a method that builds the front from a sampling approach. However, because the sampling approach builds a surrogate model to identify the Pareto front, validating the predicted front with CFD and FE analysis may shift the location of the "Pareto" points; this is avoided with the gradient-based method. Additionally, as the number of design variables increases and/or the number of required points on the Pareto front is reduced, the computational cost favors the gradient-based approach.
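The scalarization deficiency mentioned above is easy to demonstrate on a toy problem (this is an illustration of the deficiency, not the authors' algorithm). For a concave Pareto front, minimizing any weighted sum of the objectives only ever returns the front's endpoints, so interior Pareto points are unreachable no matter how the weights are chosen.

```python
import numpy as np

# Bi-objective toy problem with a concave Pareto front:
#   f1(x) = x,  f2(x) = 1 - x**2,  x in [0, 1].
# Every x in [0, 1] is Pareto-optimal, but weighted-sum scalarization
# only ever finds the two endpoints of the front.
def f1(x):
    return x

def f2(x):
    return 1.0 - x**2

xs = np.linspace(0.0, 1.0, 1001)
for w in (0.2, 0.5, 0.8):
    scalarized = w * f1(xs) + (1.0 - w) * f2(xs)
    x_star = xs[np.argmin(scalarized)]
    print(w, x_star)   # x_star is always 0.0 or 1.0
```

Because the weighted sum is concave in x here, its minimum over [0, 1] lies at a boundary for every weight, which is why methods that traverse the front directly are needed to recover interior points.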


2021 ◽  
Author(s):  
Atul Kumar Sharma ◽  
Gal Shmuel ◽  
Oded Amir

Dielectric elastomers are active materials that undergo large deformations and change their instantaneous moduli when they are actuated by electric fields. By virtue of these features, composites made of soft dielectrics can filter waves across frequency bands that are electrostatically tunable. To date, to improve the performance of these adaptive phononic crystals, such as the width of these bands at the actuated state, metaheuristics-based topology optimization was used. However, the design freedom offered by this approach is limited because the number of function evaluations increases exponentially with the number of design variables. Here, we go beyond the limitations of this approach, by developing an efficient gradient-based topology optimization method. The numerical results of the method developed here demonstrate prohibited frequency bands that are indeed wider than those obtained from the previous metaheuristics-based method, while the computational cost to identify them is reduced by orders of magnitude.


Author(s):  
Yaniv Aspis ◽  
Krysia Broda ◽  
Alessandra Russo ◽  
Jorge Lobo

We introduce a novel approach for the computation of stable and supported models of normal logic programs in continuous vector spaces by a gradient-based search method. Specifically, the application of the immediate consequence operator of a program reduct can be computed in a vector space. To do this, Herbrand interpretations of a propositional program are embedded as 0-1 vectors in $\mathbb{R}^N$ and program reducts are represented as matrices in $\mathbb{R}^{N \times N}$. Using these representations we prove that the underlying semantics of a normal logic program is captured through matrix multiplication and a differentiable operation. As supported and stable models of a normal logic program can now be seen as fixed points in a continuous space, non-monotonic deduction can be performed using an optimisation process such as Newton's method. We report the results of several experiments using synthetically generated programs that demonstrate the feasibility of the approach and highlight how different parameter values can affect the behaviour of the system.
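The matrix view of the immediate consequence operator can be made concrete with a two-atom definite program (a simplified sketch: the encoding below assumes one rule per atom and omits negation, reducts, and the differentiable thresholding the paper uses).

```python
import numpy as np

# Definite program:  q.    p :- q.     Atoms ordered [p, q].
# M[i, j] = 1 if atom j appears in the body of the rule whose head
# is atom i; body_size[i] counts that rule's body literals (0 for a
# fact). The immediate consequence operator T_P is then a matrix
# product followed by a threshold.
M = np.array([[0, 1],    # p :- q
              [0, 0]])   # q.  (fact, empty body)
body_size = np.array([1, 0])

def T_P(I):
    # Atom i is derived when all body atoms of its rule are true.
    return (M @ I >= body_size).astype(int)

I = np.zeros(2, dtype=int)
I = T_P(I)   # derives the fact q  -> [0, 1]
I = T_P(I)   # p now follows from q -> [1, 1]
print(I)     # fixed point [1, 1]: the least model {p, q}
```

Iterating T_P from the empty interpretation reaches a fixed point of the operator, which is exactly the model-as-fixed-point view that lets the paper replace symbolic deduction with numerical root finding.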


10.29007/2k64 ◽  
2018 ◽  
Author(s):  
Pat Prodanovic ◽  
Cedric Goeury ◽  
Fabrice Zaoui ◽  
Riadh Ata ◽  
Jacques Fontaine ◽  
...  

This paper presents a practical methodology developed for shape optimization studies of hydraulic structures using environmental numerical modelling codes. The methodology starts by defining the optimization problem and identifying relevant problem constraints. The design variables in shape optimization studies describe the configuration of structures (such as the length or spacing of groins, or the orientation and layout of breakwaters) whose optimal form is not known a priori. The optimization problem is solved numerically by coupling an optimization algorithm to a numerical model. The coupled system is able to define, test and evaluate a multitude of new shapes, which are internally generated and then simulated using a numerical model. The developed methodology is tested using an example of an optimum design of a fish passage, where the design variables are the length and the position of slots. In this paper an objective function is defined where a target is specified and the numerical optimizer is asked to retrieve the target solution. Such a definition of the objective function is used to validate the developed tool chain. This work uses the numerical model TELEMAC-2D from the TELEMAC-MASCARET suite of numerical solvers for the solution of shallow water equations, coupled with various numerical optimization algorithms available in the literature.
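The target-retrieval validation described above can be sketched in a few lines. The toy model below stands in for the numerical solver (the real workflow would call TELEMAC-2D for each candidate shape), and the design variables, model outputs, and optimizer choice are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the numerical model: maps design variables
# (e.g. slot length and position) to simulated flow quantities.
def model(x):
    length, position = x
    return np.array([length * position, length + 2.0 * position])

# Build the target from a known design, then ask the optimizer to
# retrieve it -- a non-zero residual would indicate a broken tool chain.
target = model(np.array([2.0, 3.0]))

def objective(x):
    return np.sum((model(x) - target) ** 2)

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)   # a design whose model output matches the target
```

A derivative-free optimizer is used here because, as with a real coupled solver, only function values of the black-box model are assumed to be available.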


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has experienced great development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. With this aim, it is crucial to solve the mapping and localization problems with accuracy and acceptable computational cost. The use of omnidirectional vision systems has emerged as a robust choice thanks to the large quantity of information they can extract from the environment. The images must be processed to obtain relevant information that permits robustly solving the mapping and localization problems. The classical frameworks to address this problem are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics. It consists of describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a deep study to uncover their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Some sets of images captured in a real environment are used with this aim, including some typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
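The global-appearance idea of describing each image as a whole can be illustrated with a deliberately simple descriptor (a coarse grid of grayscale block means; the six techniques evaluated in the paper are more sophisticated, and the images below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

def global_descriptor(img, grid=4):
    """Describe the whole image by a normalized grid of block means."""
    h, w = img.shape
    small = img[:h - h % grid, :w - w % grid]
    blocks = small.reshape(grid, small.shape[0] // grid,
                           grid, small.shape[1] // grid).mean(axis=(1, 3))
    d = blocks.ravel()
    return d / np.linalg.norm(d)

# Synthetic "map": one descriptor per stored position.
map_images = [rng.random((64, 64)) for _ in range(5)]
map_desc = np.stack([global_descriptor(im) for im in map_images])

# Localization: a noisy revisit of position 3 is matched by
# nearest-neighbour search over the map descriptors.
query = map_images[3] + 0.05 * rng.random((64, 64))
dists = np.linalg.norm(map_desc - global_descriptor(query), axis=1)
print(int(np.argmin(dists)))   # 3 -- the correct map position
```

The appeal of this family of methods is visible even here: no feature detection, description or matching stage is needed, only a fixed-length vector per image and a distance in descriptor space.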


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Sansit Patnaik ◽  
Fabio Semperlotti

This study presents the formulation, the numerical solution, and the validation of a theoretical framework based on the concept of variable-order mechanics and capable of modeling dynamic fracture in brittle and quasi-brittle solids. More specifically, the reformulation of the elastodynamic problem via variable and fractional-order operators enables a unique and extremely powerful approach to model nucleation and propagation of cracks in solids under dynamic loading. The resulting dynamic fracture formulation is fully evolutionary, hence enabling the analysis of complex crack patterns without requiring any a priori assumption on the damage location and the growth path, and without using any algorithm to numerically track the evolving crack surface. The evolutionary nature of the variable-order formalism also eliminates the need for additional partial differential equations to predict the evolution of the damage field, hence suggesting a conspicuous reduction in complexity and computational cost. Remarkably, the variable-order formulation is naturally capable of capturing extremely detailed features characteristic of dynamic crack propagation such as crack surface roughening as well as single and multiple branching. The accuracy and robustness of the proposed variable-order formulation are validated by comparing the results of direct numerical simulations with experimental data of typical benchmark problems available in the literature.


Author(s):  
Benjamin D. Youngman ◽  
David B. Stephenson

We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student's t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements.
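The marginal side of this framework can be sketched cheaply: threshold exceedances follow a generalized Pareto distribution, and many simulated years are obtained by direct sampling. The parameter values below are illustrative, and the spatial Student's t-process coupling is omitted entirely.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Illustrative GPD marginal for extremes above a threshold
# (e.g. wind gusts in m/s); parameters are assumptions, not fitted.
shape, scale, threshold = 0.1, 5.0, 20.0

# Simulate 10 000 event maxima at negligible computational cost.
exceedances = genpareto.rvs(shape, scale=scale, size=10_000,
                            random_state=rng)
gusts = threshold + exceedances
print(gusts.mean())   # close to threshold + scale / (1 - shape)
```

In the full model the shape and scale would vary smoothly in space via the generalized additive model forms, and the t-process would correlate exceedances across sites within a single storm.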
