Trade-Off Analysis for the Design of a High Performance Hydrostatic Actuation System

2000 ◽  
Author(s):  
S. R. Habibi

Abstract This paper considers the design of a high performance hydrostatic actuation system referred to as the ElectroHydraulic Actuator (EHA). The expected performance of EHA and its dominant design parameters are identified by using mathematical modeling. The design parameters are classified into Direct and Indirect categories based on the measure of their accessibility to the designer. The Direct parameters are directly quantifiable and can be linked to the performance of EHA through a set of mathematical functions. A prototype of EHA has been produced and is described. The mathematical functions linking performance to design parameters are used to investigate design trade-offs. Design improvements to the prototype are suggested by using constrained quadratic programming.

2012 ◽  
Vol 602-604 ◽  
pp. 2259-2262
Author(s):  
Hong Xi Zhou ◽  
Chao Chen ◽  
Tao Wang

In this paper, a high-performance single-level uncooled microbolometer detector with a unit cell size of 25 µm × 25 µm is introduced. An efficient detector requires a low Noise Equivalent Temperature Difference (NETD) (<80 mK, f/1, 60 Hz) and a low thermal time constant (<8.3 ms). The trade-offs between physical parameters, including the thermal conductance, the thermal time constant, and the active area, are studied to attain the optimum design. Consequently, optimum design parameters, such as the width and length of the support arms, that satisfy the demands of an efficient detector are achieved.
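The central trade-off in the abstract can be sketched numerically: the thermal time constant is tau = C / G, while the detector's NETD roughly worsens as thermal conductance G grows, so lowering G improves sensitivity but slows the response. A minimal sketch, with purely illustrative values for the pixel heat capacity and conductance (not taken from the paper):

```python
# Sketch of the microbolometer design trade-off: tau = C / G, while NETD
# roughly scales with thermal conductance G. All numbers are illustrative.

def time_constant(heat_capacity, conductance):
    """Thermal time constant tau = C / G, in seconds."""
    return heat_capacity / conductance

C = 1.0e-9  # J/K, illustrative pixel heat capacity
for G in (1.0e-7, 2.0e-7, 4.0e-7):  # W/K, illustrative conductances
    tau_ms = time_constant(C, G) * 1e3
    print(f"G={G:.1e} W/K -> tau={tau_ms:.1f} ms "
          "(lower G improves NETD but raises tau)")
```

Under these assumed values, only the larger conductances meet the 8.3 ms response requirement, which is exactly the tension the support-arm geometry must resolve.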


2012 ◽  
Vol 11 (3) ◽  
pp. 118-126 ◽  
Author(s):  
Olive Emil Wetter ◽  
Jürgen Wegge ◽  
Klaus Jonas ◽  
Klaus-Helmut Schmidt

In most work contexts, several performance goals coexist, and conflicts and trade-offs between them can occur. Our paper is the first to contrast a dual goal for speed and accuracy with a single goal for speed on the same task. The Sternberg paradigm (Experiment 1, n = 57) and the d2 test (Experiment 2, n = 19) were used as performance tasks. Speed measures and errors revealed in both experiments that dual as well as single goals increase performance by enhancing memory scanning. However, the single speed goal triggered a speed-accuracy trade-off, favoring speed over accuracy, whereas this was not the case with the dual goal. In difficult trials, dual goals slowed down scanning processes again so that errors could be prevented. This new finding is particularly relevant for security domains, where both aspects have to be managed simultaneously.


2019 ◽  
Author(s):  
Anna Katharina Spälti ◽  
Mark John Brandt ◽  
Marcel Zeelenberg

People often have to make trade-offs. We study three types of trade-offs: 1) "secular trade-offs" where no moral or sacred values are at stake, 2) "taboo trade-offs" where sacred values are pitted against financial gain, and 3) "tragic trade-offs" where sacred values are pitted against other sacred values. Previous research (Critcher et al., 2011; Tetlock et al., 2000) demonstrated that tragic and taboo trade-offs are not only evaluated by their outcomes, but are also evaluated based on the time it took to make the choice. We investigate two outstanding questions: 1) whether the effect of decision time differs for evaluations of decisions compared to decision makers and 2) whether moral contexts are unique in their ability to influence character evaluations through decision process information. In two experiments (total N = 1434) we find that decision time affects character evaluations, but not evaluations of the decision itself. There were no significant differences between tragic trade-offs and secular trade-offs, suggesting that the decision structure may be more important in evaluations than moral context. Additionally, the magnitude of the effect of decision time suggests that decision time may be of less practical use than expected. We thus urge a closer examination of the processes underlying decision time and its perception.


2019 ◽  
Author(s):  
Kasper Van Mens ◽  
Joran Lokkerbol ◽  
Richard Janssen ◽  
Robert de Lange ◽  
Bea Tiemens

BACKGROUND It remains a challenge to predict which treatment will work for which patient in mental healthcare. OBJECTIVE In this study we compare machine learning algorithms to predict during treatment which patients will not benefit from brief mental health treatment and present trade-offs that must be considered before an algorithm can be used in clinical practice. METHODS Using an anonymized dataset containing routine outcome monitoring data from a mental healthcare organization in the Netherlands (n = 2,655), we applied three machine learning algorithms to predict treatment outcome. The algorithms were internally validated with cross-validation on a training sample (n = 1,860) and externally validated on an unseen test sample (n = 795). RESULTS The performance of the three algorithms did not significantly differ on the test set. With a default classification cut-off at 0.5 predicted probability, the extreme gradient boosting algorithm showed the highest positive predictive value (PPV) of 0.71 (0.61–0.77) with a sensitivity of 0.35 (0.29–0.41) and an area under the curve of 0.78. A trade-off can be made between PPV and sensitivity by choosing different cut-off probabilities. With a cut-off at 0.63, the PPV increased to 0.87 and the sensitivity dropped to 0.17. With a cut-off at 0.38, the PPV decreased to 0.61 and the sensitivity increased to 0.57. CONCLUSIONS Machine learning can be used to predict treatment outcomes based on routine monitoring data. This allows practitioners to choose their own trade-off between being selective and more certain versus inclusive and less certain.
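The cut-off trade-off described in the results can be made concrete with a short sketch: sweeping the classification threshold over predicted probabilities and recomputing PPV and sensitivity at each value. The labels and probabilities below are hypothetical, not the study's data:

```python
# Illustrative sketch: how a probability cut-off trades positive predictive
# value (PPV) against sensitivity. Data are hypothetical, not the study's.

def ppv_sensitivity(y_true, y_prob, cutoff):
    """Classify as positive when predicted probability >= cutoff."""
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= cutoff and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= cutoff and y == 0)
    fn = sum(1 for y, p in zip(y_true, y_prob) if p < cutoff and y == 1)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    return ppv, sensitivity

# Hypothetical predictions: 1 = "will not benefit from treatment".
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_prob = [0.9, 0.7, 0.6, 0.4, 0.8, 0.3, 0.2, 0.5, 0.35, 0.1]

for cutoff in (0.38, 0.50, 0.63):
    ppv, sens = ppv_sensitivity(y_true, y_prob, cutoff)
    print(f"cutoff={cutoff:.2f}  ppv={ppv:.2f}  sensitivity={sens:.2f}")
```

Raising the cut-off shrinks the set of flagged patients, so sensitivity falls while the flagged set becomes more certain, mirroring the 0.38 / 0.50 / 0.63 comparison in the abstract.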


Author(s):  
Steven Bernstein

This commentary discusses three challenges for the promising and ambitious research agenda outlined in the volume. First, it interrogates the volume’s attempts to differentiate political communities of legitimation, which may vary widely in composition, power, and relevance across institutions and geographies, with important implications not only for who matters, but also for what gets legitimated, and with what consequences. Second, it examines avenues to overcome possible trade-offs from gains in empirical tractability achieved through the volume’s focus on actor beliefs and strategies. One such trade-off is less attention to evolving norms and cultural factors that may underpin actors’ expectations about what legitimacy requires. Third, it addresses the challenge of theory building that can link legitimacy sources, (de)legitimation practices, audiences, and consequences of legitimacy across different types of institutions.


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload and their effect on performance and energy efficiency are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that only require a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), Large-scale Atomic Molecular Massively Parallel Simulator, and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
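The Pareto-optimal trade-off options mentioned above can be illustrated with a minimal dominance check over candidate configurations, where both energy use and runtime are to be minimized. The configuration data here are hypothetical, not the paper's measurements:

```python
# Minimal sketch of identifying Pareto-optimal configurations when both
# energy (J) and runtime (s) should be minimized. Data are illustrative.

def pareto_front(points):
    """Return points not dominated by any other point (lower is better
    in both coordinates)."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (energy in joules, runtime in seconds) for candidate settings
configs = [(100, 10), (80, 12), (120, 9), (90, 11), (85, 13)]
print(pareto_front(configs))  # (85, 13) is dominated by (80, 12)
```

A predictive model such as the one the authors train would supply these (energy, runtime) points from a handful of sample runs instead of exhaustive measurement.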


Author(s):  
Lisa Best ◽  
Kimberley Fung-Loy ◽  
Nafiesa Ilahibaks ◽  
Sara O. I. Ramirez-Gomez ◽  
Erika N. Speelman

Abstract Nowadays, tropical forest landscapes are commonly characterized by a multitude of interacting institutions and actors with competing land-use interests. In these settings, indigenous and tribal communities are often marginalized in landscape-level decision making. Inclusive landscape governance inherently integrates diverse knowledge systems, including those of indigenous and tribal communities. Increasingly, geo-information tools are recognized as appropriate tools to integrate diverse interests and legitimize the voices, values, and knowledge of indigenous and tribal communities in landscape governance. In this paper, we present the contribution of the integrated application of three participatory geo-information tools to inclusive landscape governance in the Upper Suriname River Basin in Suriname: (i) Participatory 3-Dimensional Modelling, (ii) the Trade-off! game, and (iii) participatory scenario planning. The participatory 3-dimensional modelling enabled easy participation of community members, documentation of traditional, tacit knowledge and social learning. The Trade-off! game stimulated capacity building and understanding of land-use trade-offs. The participatory scenario planning exercise helped landscape actors to reflect on their own and others’ desired futures while building consensus. Our results emphasize the importance of systematically considering tool attributes and key factors, such as facilitation, for participatory geo-information tools to be optimally used and fit with local contexts. The results also show how combining the tools helped to build momentum and led to diverse yet complementary insights, thereby demonstrating the benefits of integrating multiple tools to address inclusive landscape governance issues.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Michal Sitina ◽  
Heiko Stark ◽  
Stefan Schuster

Abstract In humans and higher animals, a trade-off between sufficiently high erythrocyte concentrations to bind oxygen and sufficiently low blood viscosity to allow rapid blood flow has been achieved during evolution. Optimal hematocrit theory has been successful in predicting hematocrit (HCT) values of about 0.3–0.5, in very good agreement with the normal values observed for humans and many animal species. However, according to those calculations, the optimal value should be independent of the mechanical load of the body. This is in contradiction to the exertional increase in HCT observed in some animals called natural blood dopers and to the illegal practice of blood boosting in high-performance sports. Here, we present a novel calculation to predict the optimal HCT value under the constraint of constant cardiac power and compare it to the optimal value obtained for constant driving pressure. We show that the optimal HCT under constant power ranges from 0.5 to 0.7, in agreement with observed values in natural blood dopers at exertion. We use this result to explain the tendency to better exertional performance at an increased HCT.
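The constant-driving-pressure case the authors compare against can be sketched with a common toy model from optimal hematocrit theory: oxygen transport scales as HCT times flow, flow scales inversely with viscosity, and viscosity is taken to grow exponentially with HCT. The exponential law and its steepness constant below are illustrative assumptions, not the paper's viscosity model:

```python
# Toy sketch of optimal hematocrit under constant driving pressure.
# Assumes oxygen transport ~ H * flow, flow ~ 1/viscosity, and an assumed
# exponential viscosity law eta(H) ~ exp(a * H); 'a' is illustrative only.
import math

A = 2.5  # assumed viscosity steepness; analytic optimum is H = 1/a

def oxygen_transport(h, a=A):
    """Relative oxygen transport: hematocrit times flow (~ e^{-a h})."""
    return h * math.exp(-a * h)

# Grid search over hematocrit values in (0, 1).
best = max((h / 1000 for h in range(1, 1000)), key=oxygen_transport)
print(f"optimal HCT ~ {best:.2f}")
```

With this assumed steepness the optimum lands near 0.4, inside the 0.3–0.5 band the abstract cites for the classical theory; the paper's contribution is that replacing the constant-pressure constraint with constant cardiac power shifts this optimum upward.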

