Analysis of Decomposability and Complexity for Design Problems in the Context of Decomposition

Author(s):  
Li Chen ◽  
Simon Li

Current practice in problem decomposition assumes that (1) design problems can be rationally decomposed a priori and (2) decomposition usefully reduces complexity. These assumptions do not always hold in reality. In response to this concern, this paper introduces the notions of decomposability and complexity into problem decomposition. In particular, full-scale decomposability and complexity analyses in the context of decomposition are presented, along with supporting approaches and algorithms. These new analyses not only address the viability and validity of decomposition but also help determine an optimal number of sub-problems during decomposition, a number usually set by trial and error or fixed a priori. Further, a procedure for incorporating these analyses into our two-phase decomposition framework is described. The result is an enhanced decomposition method that can find the most appropriate decomposition solution for a complex design problem.
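
To make the kind of question these analyses ask more concrete, here is a minimal, hypothetical sketch: given an attribute-component incidence matrix and a candidate partition into sub-problems, count the dependencies left outside every block as a crude proxy for residual coupling. The metric and all names are illustrative assumptions, not the decomposability or complexity measures defined in the paper.

```python
# Illustrative sketch only: a toy coupling-count "complexity" proxy, not the
# paper's actual decomposability/complexity metrics.
import numpy as np

def off_block_couplings(incidence, blocks):
    """Count attribute-component dependencies that fall outside every block.

    incidence : (n_attributes x n_components) 0/1 matrix
    blocks    : list of (attribute_indices, component_indices) pairs
    """
    covered = np.zeros_like(incidence, dtype=bool)
    for rows, cols in blocks:
        covered[np.ix_(rows, cols)] = True
    return int((incidence.astype(bool) & ~covered).sum())

# Toy example: 4 attributes x 4 components, split into two 2x2 sub-problems.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
blocks = [([0, 1], [0, 1]), ([2, 3], [2, 3])]
print(off_block_couplings(A, blocks))  # -> 1 residual coupling left uncovered
```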

2005 ◽  
Vol 127 (2) ◽  
pp. 184-195 ◽  
Author(s):  
Li Chen ◽  
Zhendong Ding ◽  
Simon Li

This paper presents a formal two-phase decomposition method for complex design problems represented by an attribute-component incidence matrix. Unlike conventional approaches, this method decouples the overall decomposition process into two separate, autonomous function components: dependency analysis and matrix partitioning, achieved algorithmically by an extended Hierarchical Cluster Analysis (HCA) and a Partition Point Analysis (PPA), respectively. The extended HCA (Phase 1) converts the initially unorganized (input) incidence matrix into a banded diagonal matrix. The PPA (Phase 2) then transforms this matrix into a block-angular matrix according to a given set of decomposition criteria. The method provides flexibility in the choice of decomposition-criteria settings and diversity in the generation of decomposition solutions, both within Phase 2 and without recourse to Phase 1. These features make the method effective, especially for re-decomposition. A powertrain design example is employed for illustration and discussion.
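
As a rough illustration of the Phase 1 idea (reordering an unorganized incidence matrix so that strongly coupled rows and columns sit together), the sketch below uses off-the-shelf hierarchical clustering from SciPy in place of the paper's extended HCA; the partition-point analysis of Phase 2 is not shown.

```python
# Illustrative sketch: reorder an attribute-component incidence matrix with
# standard hierarchical clustering (a stand-in for the paper's extended HCA).
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist

def band_incidence_matrix(M):
    """Cluster rows and columns of a 0/1 incidence matrix and return it
    with both axes reordered so similar dependency patterns become adjacent."""
    B = M.astype(bool)
    row_order = leaves_list(linkage(pdist(B, metric="jaccard"), method="average"))
    col_order = leaves_list(linkage(pdist(B.T, metric="jaccard"), method="average"))
    return M[np.ix_(row_order, col_order)], row_order, col_order

# Unorganized toy incidence matrix (attributes x components).
M = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
banded, rows, cols = band_incidence_matrix(M)
print(banded)   # identical dependency patterns are now adjacent, exposing two 2x2 blocks
```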


2010 ◽  
Vol 132 (6) ◽  
Author(s):  
Simon Li

The two-phase method is a matrix-based approach for system decomposition, in which a system is represented by a rectangular matrix capturing the dependency relationships between two sets of system elements. While the two-phase method has its own advantages in problem decomposition, this paper focuses on two methodical extensions that improve its capability. The first extension, nonbinary dependency analysis, handles nonbinary as well as binary dependency information in the model. It is based on the formal analysis of a resemblance coefficient that quantifies the couplings among the model's elements. The second extension, heuristic partitioning analysis, lets the method search for a reasonably good decomposition solution with less computing effort; it can be viewed as an alternative to the original partitioning approach, which enumerates candidates to find an optimal solution. Finally, a relief valve redesign example is used to illustrate and justify the newly developed method components.
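
The resemblance coefficient itself is derived formally in the paper; the sketch below assumes a simple weighted-overlap coefficient between two nonbinary dependency columns as a stand-in, just to show how nonbinary coupling strengths can be reduced to a single similarity value.

```python
# Illustrative sketch: quantify coupling between two components from their
# nonbinary dependency columns with a simple weighted-overlap coefficient
# (a stand-in for the paper's formally derived resemblance coefficient).
import numpy as np

def resemblance(u, v):
    """Ratio of shared to total dependency weight across two columns.
    Returns a value in [0, 1]; 1 means identical dependency patterns."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    denom = np.maximum(u, v).sum()
    return float(np.minimum(u, v).sum() / denom) if denom > 0 else 1.0

# Dependency strengths of two components on four shared attributes (0-3 scale).
print(resemblance([3, 2, 0, 1], [2, 2, 0, 0]))  # -> 0.666...
```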


Author(s):  
Azrah Azhar ◽  
Erica L. Gralla ◽  
Connor Tobias ◽  
Jeffrey W. Herrmann

Many design problems are too difficult to solve all at once; therefore, design teams often decompose these problems into more manageable subproblems. While there has been much interest in engineering design teams, no standard method has been developed to understand how teams solve design problems. This paper describes a method for analyzing a team’s design activities and identifying the subproblems that they considered. This method uses both qualitative and quantitative techniques; in particular, it uses association rule learning to group variables into subproblems. We used the method on data from ten teams who redesigned a manufacturing facility. This approach provides researchers with a clear structure for using observational data to identify the problem decomposition patterns of human designers.
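
A minimal sketch of the association-rule idea, assuming the observational data have already been coded into episodes listing which design variables a team discussed together; the variable names, thresholds, and rule form are illustrative, not those of the study.

```python
# Illustrative sketch: mine simple pairwise association rules ("when the team
# worked on X, it also worked on Y") from coded observation episodes, then use
# high-support, high-confidence pairs as evidence that two variables belong to
# the same subproblem.
from itertools import combinations
from collections import Counter

episodes = [                      # variables discussed in each coded episode
    {"layout", "machine_count"},
    {"layout", "machine_count", "buffer_size"},
    {"staffing", "shift_plan"},
    {"layout", "machine_count"},
    {"staffing", "shift_plan", "buffer_size"},
]

pair_counts = Counter()
item_counts = Counter()
for ep in episodes:
    item_counts.update(ep)
    pair_counts.update(combinations(sorted(ep), 2))

min_support, min_confidence = 0.4, 0.8
n = len(episodes)
for (a, b), c in pair_counts.items():
    support = c / n
    conf = max(c / item_counts[a], c / item_counts[b])
    if support >= min_support and conf >= min_confidence:
        print(f"{{{a}, {b}}}: support={support:.2f}, confidence={conf:.2f}")
# Pairs passing both thresholds (e.g. layout & machine_count) are grouped
# into the same candidate subproblem.
```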


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 99 ◽  
Author(s):  
Yueqi Gu ◽  
Orhun Aydin ◽  
Jacqueline Sosa

Post-earthquake relief zone planning is a multidisciplinary optimization problem that requires delineating zones that minimize the loss of life and property. In this study, we offer an end-to-end workflow to define relief zone suitability and equitable relief service zones for Los Angeles (LA) County. In particular, we address the impact of a tsunami, given LA's high spatial complexity: population clustered along the coastline and a complicated inland fault system. We design data-driven earthquake relief zones from a wide variety of inputs, including geological features, population, and public safety. Data-driven zones were generated by solving the p-median problem with the Teitz–Bart algorithm, without any a priori knowledge of optimal relief zones. We define metrics for determining the optimal number of relief zones as part of the proposed workflow. Finally, we measure the impact of a tsunami in LA County by comparing data-driven relief zone maps for cases with and without a tsunami. Our results show that the impact of the tsunami on the relief zones can extend up to 160 km inland from the study area.
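
The Teitz–Bart algorithm is a vertex-substitution heuristic for the p-median problem: start from p candidate centers and keep swapping a center for a non-center whenever the swap lowers the total weighted distance. Below is a minimal sketch on a toy distance matrix; the GIS-derived suitability layers and population weights used in the study are not reproduced.

```python
# Illustrative sketch: the classic Teitz-Bart vertex-substitution heuristic for
# the p-median problem, on a toy distance matrix (the paper applies it to
# GIS-derived candidate sites and population data for LA County).
import numpy as np

def teitz_bart(dist, weights, p, seed=0):
    """Pick p facility indices (relief-zone centers) minimizing total
    weighted distance from every demand point to its nearest facility."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    facilities = set(rng.choice(n, size=p, replace=False).tolist())

    def cost(fac):
        return float((weights * dist[:, sorted(fac)].min(axis=1)).sum())

    improved = True
    while improved:
        improved = False
        current_cost = cost(facilities)
        for out in sorted(facilities):
            for cand in range(n):
                if cand in facilities:
                    continue
                trial = (facilities - {out}) | {cand}
                if cost(trial) < current_cost:    # accept the improving swap
                    facilities, improved = trial, True
                    break
            if improved:
                break
    return sorted(facilities), cost(facilities)

# Toy example: 6 demand points on a line, equal weights, p = 2 centers.
x = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])
dist = np.abs(x[:, None] - x[None, :])
print(teitz_bart(dist, np.ones(6), p=2))  # -> ([1, 4], 4.0)
```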


2009 ◽  
Vol 43 (2) ◽  
pp. 48-60 ◽  
Author(s):  
M. Martz ◽  
W. L. Neu

Abstract The design of complex systems involves a number of choices, the implications of which are interrelated. If these choices are made sequentially, each choice may limit the options available in subsequent choices, and early choices may unknowingly limit the effectiveness of the final design. Only a formal process that considers all possible choices (and combinations of choices) can ensure that the best option has been selected. Complex design problems may easily present a prohibitive number of choices to evaluate. Modern optimization algorithms attempt to navigate a multidimensional design space in search of an optimal combination of design variables. A design optimization process for an autonomous underwater vehicle is developed using a multiple-objective genetic optimization algorithm that searches the design space, evaluating designs on three measures of performance: cost, effectiveness, and risk. A synthesis model evaluates the characteristics of a design having any chosen combination of design variable values. The effectiveness determined by the synthesis model is based on nine attributes identified in the U.S. Navy's Unmanned Undersea Vehicle Master Plan and four performance-based attributes calculated by the synthesis model. The analytical hierarchy process is used to synthesize these attributes into a single measure of effectiveness. The genetic algorithm generates a set of Pareto-optimal, feasible designs from which decision makers can choose designs for further analysis.
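
One concrete step in this approach is the analytic hierarchy process, which converts pairwise comparisons of attributes into priority weights via the principal eigenvector of the comparison matrix. The sketch below assumes a made-up three-attribute comparison; the paper's thirteen attributes and their judgments are not reproduced.

```python
# Illustrative sketch: the AHP step that turns pairwise attribute comparisons
# into weights, then collapses per-attribute scores into one effectiveness
# value (the comparison matrix and scores below are made up, not the paper's).
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of the
    (reciprocal) pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Toy 3-attribute comparison: endurance vs payload vs stealth (Saaty 1-9 scale).
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(C)
scores = np.array([0.8, 0.6, 0.9])           # normalized per-attribute scores
print(w, float(w @ scores))                   # single measure of effectiveness
```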


Author(s):  
Stephen S. Altus ◽  
Ilan M. Kroo ◽  
Peter J. Gage

Abstract Complex engineering studies typically involve hundreds of analysis routines and thousands of variables. The sequence of operations used to evaluate a design strongly affects the speed of each analysis cycle. This influence is particularly important when numerical optimization is used, because convergence generally requires many iterations. Moreover, it is common for disciplinary teams to work simultaneously on different aspects of a complex design. This practice requires decomposition of the analysis into subtasks, and the efficiency of the design process critically depends on the quality of the decomposition achieved. This paper describes the development of software to plan multidisciplinary design studies. A genetic algorithm is used, both to arrange analysis subroutines for efficient execution, and to decompose the task into subproblems. The new planning tool is compared with an existing heuristic method. It produces superior results when the same merit function is used, and it can readily address a wider range of planning objectives.
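
A sequencing GA of this kind needs a merit function for each candidate ordering of the analysis routines. A common matrix-based choice, shown below as an assumption rather than the paper's exact formulation, is to count the feedback couplings that remain above the diagonal when the routines' dependency structure matrix is reordered by the candidate sequence.

```python
# Illustrative sketch: a merit function a sequencing GA might minimize --
# the number of feedback couplings (above-diagonal entries) when the analysis
# routines' dependency structure matrix is reordered by a candidate sequence.
# The toy matrix and this particular merit function are assumptions.
import numpy as np

def feedback_count(dsm, order):
    """dsm[i, j] = 1 means routine i needs output from routine j.
    Feedbacks are dependencies on routines scheduled *later* in `order`."""
    reordered = dsm[np.ix_(order, order)]
    return int(np.triu(reordered, k=1).sum())

dsm = np.array([[0, 1, 0],     # routine 0 needs output of routine 1
                [0, 0, 0],
                [1, 1, 0]])    # routine 2 needs outputs of routines 0 and 1
print(feedback_count(dsm, [0, 1, 2]))  # 1 feedback (0 depends on later routine 1)
print(feedback_count(dsm, [1, 0, 2]))  # 0 feedbacks: a good execution order
```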


2003 ◽  
Vol 125 (5) ◽  
pp. 845-851 ◽  
Author(s):  
K. J. Daun ◽  
D. P. Morton ◽  
J. R. Howell

This paper presents an optimization methodology for designing radiant enclosures containing specularly reflecting surfaces. The optimization process works by making intelligent perturbations to the enclosure geometry at each design iteration using specialized numerical algorithms. This procedure requires far less time than the forward "trial-and-error" design methodology, and the final solution is near optimal. The radiant enclosure is analyzed using a Monte Carlo technique based on exchange factors, and the design is optimized using the Kiefer-Wolfowitz method. The design methodology is demonstrated by solving two industrially relevant design problems involving two-dimensional enclosures that contain specular surfaces.
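
The Kiefer-Wolfowitz method is a stochastic approximation scheme that needs only noisy objective evaluations, which is why it pairs naturally with a Monte Carlo radiation model. The one-dimensional sketch below uses a toy noisy objective and textbook gain sequences; it illustrates the iteration, not the paper's enclosure optimization.

```python
# Illustrative sketch: a one-dimensional Kiefer-Wolfowitz iteration, which
# needs only noisy objective evaluations (as from a Monte Carlo exchange-factor
# model) and no exact gradients. The toy objective and gain sequences are
# assumptions, not the paper's enclosure model.
import random

def noisy_objective(x):
    # Stand-in for a Monte Carlo estimate of the design error; minimum at x = 2.
    return (x - 2.0) ** 2 + random.gauss(0.0, 0.05)

def kiefer_wolfowitz(x0, iterations=500):
    x = x0
    for k in range(1, iterations + 1):
        a_k = 1.0 / k             # step-size gain
        c_k = 1.0 / k ** (1 / 3)  # finite-difference half-width
        grad_est = (noisy_objective(x + c_k) - noisy_objective(x - c_k)) / (2 * c_k)
        x -= a_k * grad_est       # move against the estimated gradient
    return x

random.seed(1)
print(kiefer_wolfowitz(x0=0.0))  # converges toward the optimum near x = 2
```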


2018 ◽  
Vol 7 (3) ◽  
pp. 581-604 ◽  
Author(s):  
Armin Eftekhari ◽  
Michael B Wakin ◽  
Rachel A Ward

Abstract Leverage scores, loosely speaking, reflect the importance of the rows and columns of a matrix. Ideally, given the leverage scores of a rank-$r$ matrix $M\in \mathbb{R}^{n\times n}$, that matrix can be reliably completed from just $O(rn\log^{2} n)$ samples if the samples are chosen randomly from a non-uniform distribution induced by the leverage scores. In practice, however, the leverage scores are often unknown a priori. As such, the sample complexity in uniform matrix completion (using uniform random sampling) increases to $O(\eta(M)\cdot rn\log^{2} n)$, where $\eta(M)$ is the largest leverage score of $M$. In this paper, we propose a two-phase algorithm called MC2 for matrix completion: in the first phase, the leverage scores are estimated from uniform random samples; in the second phase, the matrix is resampled non-uniformly according to the estimated leverage scores and then completed. For well-conditioned matrices, the total sample complexity of MC2 is no worse than that of uniform matrix completion, and for certain classes of well-conditioned matrices, namely reasonably coherent matrices whose leverage scores exhibit mild decay, MC2 requires substantially fewer samples. Numerical simulations suggest that the algorithm outperforms uniform matrix completion on a broad class of matrices and, in particular, is much less sensitive to the condition number than our theory currently requires.
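
For reference, the row leverage scores of a rank-$r$ matrix are the scaled squared row norms of its top-$r$ left singular vectors, and similarly for columns. The sketch below computes them exactly via an SVD and forms one common non-uniform entry-sampling distribution from them; the exact estimation and resampling recipe of MC2 is not reproduced, so treat the distribution as an assumption.

```python
# Illustrative sketch: compute leverage scores of a rank-r matrix from its SVD
# and turn them into a non-uniform entry-sampling distribution. The specific
# distribution (proportional to the sum of row and column scores) is one common
# choice in the leveraged-sampling literature, not necessarily the MC2 recipe.
import numpy as np

def leverage_scores(M, r):
    """Row and column leverage scores of the best rank-r approximation of M."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    n1, n2 = M.shape
    mu = (n1 / r) * (U[:, :r] ** 2).sum(axis=1)   # row scores
    nu = (n2 / r) * (Vt[:r, :] ** 2).sum(axis=0)  # column scores
    return mu, nu

rng = np.random.default_rng(0)
r, n = 2, 50
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # exactly rank 2
mu, nu = leverage_scores(M, r)

# Non-uniform sampling probabilities over entries, biased toward rows/columns
# with large leverage (the "important" parts of the matrix).
P = mu[:, None] + nu[None, :]
P /= P.sum()
print(mu.max(), nu.max(), P.shape)
```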


2021 ◽  
Author(s):  
Ehsan Abdolahnejad ◽  
Mahdi Moghimi ◽  
Shahram Derakhshan

Abstract Optimal transfer of two-phase solid-liquid (slurry) flow has long been a major industrial challenge. Slurry pumps are among the most common centrifugal pumps used for this transfer task. Improving slurry pumps, and thus the efficiency of the flow transmission system, requires overcoming slurry-flow effects such as reduced head and efficiency and increased wear. This study investigates changes in the pump head obtained by modifying the slip factor distribution in the impeller channel. For this purpose, the effect of splitter blades on the slip factor distribution, and hence on the pump head, was investigated using numerical simulation and validated against experimental test data. An optimization process was then used to determine the splitter characteristics (length, number, and circumferential position) based on a combination of design of experiments, response surface methodology, and a genetic algorithm. The optimization results place the splitters at a relative circumferential position of 67.2% from the suction surface of the main blade, with an optimal splitter count of 6 and an optimal length of 62.8% of the main blade length. Because the added splitter blades reduce the flow passage area, the best efficiency point (BEP) of the slurry pump moved toward lower flow rates. Splitter optimization increased the pump head from 29.7 m to 31.7 m while maintaining efficiency at its initial value.
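
As a sketch of the response-surface step only, the code below fits a quadratic surrogate of pump head to a handful of DOE samples in splitter length ratio and circumferential position, then scans the surrogate for the best setting. The sample values are fabricated for illustration, and the paper additionally optimizes the number of splitters and couples the surrogate with a genetic algorithm.

```python
# Illustrative sketch of the response-surface step: fit a quadratic surrogate
# of pump head H(l, t) to a few DOE samples over splitter length ratio l and
# circumferential position t, then scan the surrogate for the best setting.
# All sample values are fabricated for illustration.
import numpy as np

# DOE samples: (length ratio, circumferential position) -> measured head [m].
X = np.array([[0.4, 0.5], [0.4, 0.7], [0.6, 0.5],
              [0.6, 0.7], [0.5, 0.6], [0.7, 0.6]])
H = np.array([29.9, 30.4, 30.8, 31.3, 31.0, 30.9])

def quad_features(x):
    """Full quadratic basis [1, l, t, l*t, l^2, t^2] for each sample row."""
    l, t = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(l), l, t, l * t, l**2, t**2])

beta, *_ = np.linalg.lstsq(quad_features(X), H, rcond=None)

# Scan the surrogate on a fine grid inside the sampled region.
ls, ts = np.meshgrid(np.linspace(0.4, 0.7, 61), np.linspace(0.5, 0.7, 41))
grid = np.column_stack([ls.ravel(), ts.ravel()])
pred = quad_features(grid) @ beta
best = grid[np.argmax(pred)]
print(best, pred.max())   # surrogate-optimal (length ratio, position) and head
```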

