Developing Biomimetic Guidelines for the Highly Optimized and Robust Design of Complex Products or Their Components

Author(s):  
Anosh P. Wadia ◽  
Daniel A. McAdams

Improved complex system design methods can lead to innovative, efficient, and robust product designs. This research aims to improve the design of products that compose a portion of, or exist within, a complex system. Before attempting to improve product designs, one requires a better understanding and characterization of complex systems. One method to characterize optimized and robust complex systems is the Theory of Highly Optimized Tolerance (HOT). The theory states that highly optimized and tolerant complex systems are robust in the conditions for which they were designed, but fragile in the face of unanticipated events. Highly robust and optimized complex systems are abundant in the biological domain. In fact, nature represents a vast resource of innovative solutions to varied design problems. Leveraging these solutions to solve engineering problems is often referred to as biomimetic design. This research analyzes twenty bio-inspired engineering products together with the biological systems from which they were derived. HOT theory is used to analyze the biomimetic systems and identify the inherent characteristics that make the designs robust to their environment. These characteristics were reviewed to identify common features and trends in the information transfer between the biological and engineering domains. Finally, the inferred features and trends were abstracted into nine usable biomimetic design guidelines. Similar to the forty principles of the Theory of Inventive Problem Solving (TRIZ), these bio-inspired guidelines could aid engineers in developing innovative and robust solutions to design problems. In fact, a similarity between some of the biomimetic design guidelines and TRIZ principles is observed. This correlation suggests that solutions perceived as innovative in the engineering domain match those found in nature.

Author(s):  
Caitlin Stack ◽  
Douglas L. Van Bossuyt

Current methods of functional failure risk analysis do not facilitate explicit modeling of systems equipped with Prognostics and Health Management (PHM) hardware. As PHM systems continue to grow in application and popularity within major complex systems industries (e.g. aerospace, automotive, civilian nuclear power plants), implementation of PHM modeling within the functional failure modeling methodologies will become useful for the early phases of complex system design and for analysis of existing complex systems. Functional failure modeling methods have been developed in recent years to assess risk in the early phases of complex system design. However, the methods of functional modeling have yet to include an explicit method for analyzing the effects of PHM systems on system failure probabilities. It is common practice within the systems health monitoring industry to design the PHM subsystems during the later stages of system design — typically after most major system architecture decisions have been made. This practice lends itself to the omission of considering PHM effects on the system during the early stages of design. This paper proposes a new method for analyzing PHM subsystems’ contribution to risk reduction in the early stages of complex system design. The Prognostic Systems Variable Configuration Comparison (PSVCC) eight-step method developed here expands upon existing methods of functional failure modeling by explicitly representing PHM subsystems. A generic pressurized water nuclear reactor primary coolant loop system is presented as a case study to illustrate the proposed method. The success of the proposed method promises more accurate modeling of complex systems equipped with PHM subsystems in the early phases of design.
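The core intuition above, that explicitly modeling a PHM subsystem changes a function's effective failure probability, can be illustrated with a minimal sketch. This is not the PSVCC eight-step method itself; the function name, numbers, and the simple detect-then-mitigate model are all hypothetical, chosen only to show why PHM belongs in early-phase failure models.

```python
# Illustrative sketch (not the PSVCC method): a failure still reaches the
# system level only if the PHM subsystem fails to detect it, or detects it
# but fails to mitigate. All probabilities here are made up.

def effective_failure_prob(p_fail: float, p_detect: float, p_mitigate: float) -> float:
    """Effective failure probability of a function monitored by PHM hardware."""
    return p_fail * (1.0 - p_detect * p_mitigate)

# Compare two candidate architectures for one function, e.g. a coolant pump:
baseline = effective_failure_prob(0.02, 0.0, 0.0)   # no PHM hardware
with_phm = effective_failure_prob(0.02, 0.9, 0.8)   # sensor + response plan

print(f"baseline: {baseline:.4f}, with PHM: {with_phm:.4f}")
```

Comparing such numbers across candidate configurations, before the architecture is frozen, is the kind of trade-off the proposed method aims to make explicit.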


2016 ◽  
Vol 13 (2) ◽  
pp. 119-140
Author(s):  
Nicholas McGuigan ◽  
Thomas Kern

The future employment markets our graduates are likely to face are increasingly complex and unpredictable. Demands are being placed on higher-education providers to become more holistic and integrated in their approach. For business schools across Australia, this requires a significant (re)conceptualisation of how student learning is facilitated, with respect to content, processes, and infrastructure. Future business professionals will be required to think in diverse and integrated ways, adopting transdisciplinary approaches to solve complex system-design problems. This calls for educators to focus on creativity and innovation; in response, we need to reinterpret our teaching philosophies, content, and processes. In this paper we argue that, by exploring the Bauhaus pedagogical process of “unlearning” in accounting curricula, a dynamic, engaging, and creative space can be opened up for learners and educators alike. “Unlearning” can support a critical and reflective culture for both students and teachers that nurtures a deeper understanding of the “ways of thinking” of business professionals.


Author(s):  
Michael R. S. Slater ◽  
Douglas L. Van Bossuyt

Risk analysis in engineering design is of paramount importance when developing complex systems or upgrading existing systems. In many complex systems, new generations of systems are expected to have decreased risk and increased reliability when compared with previous designs. For instance, within the American civilian nuclear power industry, the Nuclear Regulatory Commission (NRC) has progressively increased requirements for reliability and driven down the chance of radiological release beyond the plant site boundary. However, many ongoing complex system design efforts analyze risk only after early major architecture decisions have been made. One promising method of bringing risk considerations earlier into the conceptual stages of the complex system design process is functional failure modeling. Function Failure Identification and Propagation (FFIP) and related methods began the push toward assessing risk using the functional modeling taxonomy. This paper advances the Dedicated Failure Flow Arrestor Function (DFFAF) method, which incorporates dedicated Arrestor Functions (AFs) whose purpose is to stop failure flows from propagating along uncoupled failure flow pathways, as defined by the Uncoupled Failure Flow State Reasoner (UFFSR). By doing this, DFFAF provides a new tool in the functional failure modeling toolbox for complex system engineers. This paper introduces DFFAF and provides an illustrative simplified civilian Pressurized Water Reactor (PWR) nuclear power plant case study.
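The arrestor idea can be made concrete with a toy graph sketch: failure flows propagate along edges between functions, and a dedicated arrestor function on a path absorbs the flow so it travels no further. The function names and topology below are invented for illustration; this is not the DFFAF method itself, only the blocking behavior it builds on.

```python
# Toy failure-flow propagation with an arrestor function. A depth-first
# walk spreads a failure along edges; reaching an arrestor node stops
# propagation along that path. Names and topology are hypothetical.

edges = {
    "heat_core":     ["transfer_heat"],
    "transfer_heat": ["arrest_flow", "drive_turbine"],
    "arrest_flow":   ["vent_containment"],
    "drive_turbine": ["generate_power"],
}
arrestors = {"arrest_flow"}  # stops failure flows that reach it

def reachable_failures(start):
    """Return the set of functions a failure flow starting at `start` can reach."""
    seen, stack = set(), [start]
    while stack:
        f = stack.pop()
        if f in seen:
            continue
        seen.add(f)
        if f in arrestors:
            continue  # the arrestor absorbs the flow here
        stack.extend(edges.get(f, []))
    return seen

print(sorted(reachable_failures("heat_core")))
```

In this sketch the failure reaches the turbine path but never `vent_containment`, because the arrestor absorbs the flow heading that way; comparing reachable sets with and without arrestors is one way to see their risk-reduction role.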


2016 ◽  
Vol 3 (1) ◽  
Author(s):  
Kirsten Sinclair ◽  
Daniel Livingstone

Difficulty understanding the large number of interactions involved in complex systems makes their successful engineering a problem. Petri Nets are one graphical modelling technique used to describe and thoroughly check proposed designs of complex systems. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a communication gap between people with different areas of expertise, negatively impacting the accuracy of the resulting designs. In contrast, serious games can represent a variety of real and imaginary objects effectively, but their behaviour can only be analysed manually through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been thoroughly checked. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.
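For readers unfamiliar with the formalism, the Petri-net "token game" is easy to sketch in plain Python: places hold tokens, and a transition fires when every one of its input places has a token, consuming them and producing tokens on its output places. The small net below (a simple request/serve handshake) is hypothetical, not taken from the paper's case study.

```python
# A minimal Petri-net token game. Places map to token counts; each
# transition lists its input and output places. Firing is only legal
# when all input places are marked.

marking = {"idle": 1, "request": 0, "busy": 0, "done": 0}

# transition name -> (input places, output places)
transitions = {
    "submit": (["idle"], ["request"]),
    "start":  (["request"], ["busy"]),
    "finish": (["busy"], ["done"]),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(marking[p] > 0 for p in ins)

def fire(t):
    ins, outs = transitions[t]
    assert enabled(t), f"{t} is not enabled"
    for p in ins:
        marking[p] -= 1   # consume input tokens
    for p in outs:
        marking[p] += 1   # produce output tokens

for t in ["submit", "start", "finish"]:
    fire(t)

print(marking)  # the single token has moved from 'idle' to 'done'
```

Automatic analysis (reachability, deadlock checks) works over exactly this state space, which is why a checked net can underpin a serious-game front end while staying hidden from the player.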


Author(s):  
L. Siddharth ◽  
Amaresh Chakrabarti ◽  
Srinivasan Venkataraman

Analogical design has been a long-standing approach to solving engineering design problems. However, it is still unclear how analogues should be presented to engineering designers in order to maximize their utility. The utility is minimal when analogues are complex and belong to another domain (e.g., biology). Prior work includes the use of a function model called SAPPhIRE to represent over 800 biological and engineered systems. SAPPhIRE stands for the entities States, Actions, Parts, Phenomena, Inputs, oRgans, and Effects, which together represent the functionality of a system at various levels of abstraction. In this paper, we combine instances of the SAPPhIRE model to represent complex systems, including those from the biological domain. We use an electric buzzer to illustrate the model and to compare its efficacy in explaining complex systems with that of a well-known model from the literature. The multiple-instance SAPPhIRE model seems to provide a more comprehensive explanation of a complex system, including elements of description that are not present in other models and providing an indication of which elements might be missing from a given description. The proposed model is implemented in a web-based tool called Idea-Inspire 4.0, a brief introduction of which is also provided.


Author(s):  
Farzaneh Farhangmehr ◽  
Irem Y. Tumer

The design and development cycle for complex systems is full of uncertainty, commonly recognized as the main source of risk in organizations engaged in design and development. One of the challenges for such organizations is assessing how much risk (cost, schedule, scope) they can take on and still remain competitive. The risk associated with the design of complex systems is fundamentally tied to uncertainty, which may lead to suboptimal performance or failure if unmanaged. By understanding the sources of uncertainty in all stages of complex system design, decision-makers can make more informed choices and identify “hotspots” for reducing risks due to uncertainty by reallocating resources, adding safeguards, etc. There are two major categories in the classification of certain uncertainty in the design of complex systems: knowledge (epistemic) uncertainty and variability (aleatory) uncertainty. The intersection of these two sets is ambiguity uncertainty, and outside them lies what we don’t know we don’t know (uncertain uncertainty). By setting detailed definitions, we can reduce the ambiguity uncertainty. Furthermore, we can subdivide knowledge uncertainty into model, ambiguity, and behavioral uncertainty, and subdivide variability uncertainty into natural randomness, ambiguity, and behavioral uncertainty. We can go further and find subcategories for model and behavioral uncertainty. Using this classification of uncertainty, this paper proposes the “Capture, Assessment and Communication Tool for Uncertainty Simulation” (CACTUS) for assessing, capturing, and communicating risks due to uncertainty during complex system design. CACTUS has columns to identify the sources, location, severity, and importance of uncertainty in the stages of design. By applying CACTUS, decision-makers can answer the following questions for each type of uncertainty in the design process: 1) Where does the uncertainty come from? (i.e., its sources); 2) In which stages of design does it appear? (i.e., its location); 3) What is its severity?; and 4) What is its importance? The hypothesis of this research is that, by using CACTUS, design organizations can capture, assess, and efficiently and effectively communicate uncertainty throughout their design processes and, as a result, improve their capacity for delivering complex systems that meet cost, schedule, and performance objectives. The fundamental steps of the methodology are illustrated using a concurrent design case study from NASA’s Project Design Center.
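A CACTUS-style register, one row per uncertainty with columns for source, design-stage location, severity, and importance, might be held in code roughly as follows. The row contents, scales, and hotspot rule are hypothetical; the paper defines CACTUS as a tabular tool, not this implementation.

```python
# Sketch of a CACTUS-style uncertainty register. Each row answers the
# four questions: source, location (design stage), severity, importance.
# All field values below are invented for illustration.

from dataclasses import dataclass

@dataclass
class UncertaintyRow:
    description: str
    source: str      # e.g. "epistemic/model", "aleatory/natural randomness"
    location: str    # design stage where the uncertainty appears
    severity: int    # 1 (negligible) .. 5 (critical)
    importance: int  # 1 (low) .. 5 (high)

register = [
    UncertaintyRow("thruster performance model fidelity",
                   "epistemic/model", "conceptual design", 4, 5),
    UncertaintyRow("launch-day wind loads",
                   "aleatory/natural randomness", "detailed design", 3, 2),
]

# "Hotspots" are the rows decision-makers would target first, here ranked
# by a simple severity-times-importance score (an assumed rule):
hotspots = [r for r in register if r.severity * r.importance >= 15]
print([r.description for r in hotspots])
```

Ranking rows this way is one plausible route from a filled-in register to the resource-reallocation decisions the abstract describes.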


2021 ◽  
pp. 1-19
Author(s):  
Stephen Sapol ◽  
Zoe Szajnfarber

Abstract Complex systems must sustain value over extended lifetimes, often in the face of significant uncertainty. Flexibility “in” systems has been shown to be highly valuable for Large Monolithic Systems (LMS). However, other research has highlighted that the value of flexibility “in” is highly contingent on delays in implementation. These limitations become more important when applied to other classes of complex systems, including Fleet-Based Systems (FBS). To overcome these challenges, this paper introduces a complementary approach to flexible design, termed “Flexibility ‘of’”, and applies it to a case study of a fleet of military vehicles (an FBS). Unlike LMS, FBS are composed of multiple identical units that collectively deliver value. While each unit is itself a complex system (e.g., a tank or aircraft), the collective nature of the operations provides additional paths to flexibility: in addition to implementing flexibility at the vehicle level, flexibility can be applied to the management of the fleet. Flexibility “of” involves procuring a mixed-capability fleet upfront and then actively managing which subsets of that fleet are deployed to meet emerging needs. Our results demonstrate the potential value of an “of” strategy and provide guidance for when different flexibility strategies should be adopted alone or in combination.


Author(s):  
Brett G. Amidan ◽  
Thomas A. Ferryman

The power grid is a complex system. Multiple quantities are measured from hundreds of locations, at rates up to 30 Hz. There are both correlated and uncorrelated variables. Powerful methods are needed to examine this large amount of data and better understand the complex system, and in the case of the power grid, identify imminent adverse events, such as blackouts. These methods need to sift through any multicollinearity among the variables, account for the random uncertainty that is present within each variable, and focus on practical differences as defined by domain experts in addition to statistical differences. These methods will then help the user to better understand the complex system by uncovering the hidden gems within the data. These gems include identification of the uncertainty, characterization of the typical patterns, and the discovery of atypical events. This paper will discuss the intricate methods used to explore the data, and the novel displays used to communicate the findings. This paper will also delve into the exploration of other complex systems, like aviation safety, using similar methods.
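One standard way to flag "atypical events" in correlated measurements, in the spirit described above though far simpler than the paper's methods, is the Mahalanobis distance, which scores each observation against the typical pattern while accounting for covariance between variables. The two-variable data and the threshold below are made up for illustration.

```python
# Toy atypical-event detection on two correlated grid measurements.
# The squared Mahalanobis distance accounts for the variables'
# covariance, so a point is flagged for breaking the joint pattern,
# not just for a large value in one variable. Data are hypothetical.

from statistics import mean

voltage = [1.00, 1.01, 0.99, 1.02, 0.98, 1.00, 1.01, 0.70]
freq    = [60.0, 60.1, 59.9, 60.1, 59.9, 60.0, 60.1, 59.0]

def mahalanobis2(xs, ys):
    """Squared Mahalanobis distance of each (x, y) point, 2-variable case."""
    mx, my = mean(xs), mean(ys)
    n = len(xs)
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    det = sxx * syy - sxy ** 2
    # apply the inverse of the 2x2 sample covariance to each centred point
    return [((x - mx) ** 2 * syy
             - 2 * (x - mx) * (y - my) * sxy
             + (y - my) ** 2 * sxx) / det
            for x, y in zip(xs, ys)]

scores = mahalanobis2(voltage, freq)
atypical = [i for i, d2 in enumerate(scores) if d2 > 4.0]
print(atypical)  # the simultaneous voltage sag and frequency dip
```

At grid scale the same idea has to contend with hundreds of locations, 30 Hz sampling, and near-singular covariance from multicollinearity, which is where the more powerful methods the paper discusses come in.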


2002 ◽  
Vol 69 ◽  
pp. 117-134 ◽  
Author(s):  
Stuart M. Haslam ◽  
David Gems ◽  
Howard R. Morris ◽  
Anne Dell

There is no doubt that the immense amount of information that is being generated by the initial sequencing and secondary interrogation of various genomes will change the face of glycobiological research. However, a major area of concern is that detailed structural knowledge of the ultimate products of genes that are identified as being involved in glycoconjugate biosynthesis is still limited. This is illustrated clearly by the nematode worm Caenorhabditis elegans, which was the first multicellular organism to have its entire genome sequenced. To date, only limited structural data on the glycosylated molecules of this organism have been reported. Our laboratory is addressing this problem by performing detailed MS structural characterization of the N-linked glycans of C. elegans; high-mannose structures dominate, with only minor amounts of complex-type structures. Novel, highly fucosylated truncated structures are also present which are difucosylated on the proximal N-acetylglucosamine of the chitobiose core as well as containing unusual Fucα1–2Gal1–2Man as peripheral structures. The implications of these results in terms of the identification of ligands for genomically predicted lectins and potential glycosyltransferases are discussed in this chapter. Current knowledge on the glycomes of other model organisms such as Dictyostelium discoideum, Saccharomyces cerevisiae and Drosophila melanogaster is also discussed briefly.


1989 ◽  
Author(s):  
Daniel Bensen ◽  
Michael Welge ◽  
Alfred Huebler ◽  
Norman Packard
