Current design procedures for steel-lined pressure tunnels

1983 ◽  
Vol 10 (1) ◽  
pp. 150-161
Author(s):  
A. Nicolopoulos

A systematic procedure for the design of steel liners for penstock tunnels is presented in this article. Useful hints and other relevant information are also provided to help in decision-making. Special attention has been paid to factors of safety and allowable stress, areas where a certain amount of controversy exists. The application of the procedure is facilitated by the development of computerized design graphs. These graphs are presented using a common notation and an appropriate scale, which makes it possible to evaluate the different theories by direct comparison. Finally, a detailed design example is given for a better understanding of the proposed procedure.
Keywords: hydroelectric structures, structural steel, linings, tunnels, state-of-the-art review.
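The allowable-stress checks the abstract mentions typically start from the thin-wall hoop stress of a pressurized liner. A minimal illustrative sketch, not the paper's actual procedure, with all numbers hypothetical:

```python
# Thin-wall hoop stress in a steel liner: sigma = p * r / t,
# compared against an allowable stress = yield / factor of safety.
# All values below are hypothetical examples, not from the paper.

def hoop_stress(pressure_mpa: float, radius_mm: float, thickness_mm: float) -> float:
    """Hoop (circumferential) stress in MPa for a thin-walled cylinder."""
    return pressure_mpa * radius_mm / thickness_mm

def is_acceptable(stress_mpa: float, yield_mpa: float, safety_factor: float) -> bool:
    """Check the stress against an allowable stress with a factor of safety."""
    return stress_mpa <= yield_mpa / safety_factor

# e.g. 2 MPa internal pressure, 1.5 m radius, 30 mm liner, 250 MPa yield, FoS = 2
sigma = hoop_stress(2.0, 1500.0, 30.0)
ok = is_acceptable(sigma, 250.0, 2.0)
```

The controversy the abstract notes lies precisely in choosing the safety factor and allowable stress, not in this elementary stress formula.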

2020 ◽  
Vol 10 (24) ◽  
pp. 9082
Author(s):  
João Boné ◽  
João C. Ferreira ◽  
Ricardo Ribeiro ◽  
Gonçalo Cadete

This paper presents DisBot, the first Portuguese-speaking chatbot that uses knowledge retrieved from social media to support citizens and first responders in disaster scenarios, in order to improve community resilience and decision-making. It was developed and tested using Design Science Research Methodology (DSRM), being progressively matured with field specialists through several design and development iterations. DisBot uses a state-of-the-art Dual Intent Entity Transformer (DIET) architecture to classify user intents, and makes use of several dialogue policies for managing user conversations, as well as storing relevant information to be used in further dialogue turns. To generate responses, it uses real-world safety knowledge and draws on a knowledge graph that is updated in real time by a disaster-related knowledge extraction tool presented in previous work. Through its development iterations, DisBot has been validated by field specialists, who have considered it to be a valuable asset in disaster management.
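The intent-classification plus dialogue-policy split described above can be sketched with a toy dialogue manager. This is an illustrative simplification, not DisBot's actual code (which uses a trained DIET classifier); the intent names and keywords here are hypothetical:

```python
# Toy dialogue manager: classify user intent, then store slots across turns
# so later turns can reuse them. Intents and keywords are hypothetical.

INTENT_KEYWORDS = {
    "report_incident": {"fire", "flood", "earthquake", "collapsed"},
    "request_help": {"help", "rescue", "injured"},
    "ask_info": {"where", "status", "open", "safe"},
}

def classify_intent(utterance: str) -> str:
    """Score each intent by keyword overlap; fall back when nothing matches."""
    tokens = set(utterance.lower().split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

class DialogueState:
    """Keeps information from earlier turns, as the dialogue policies do."""
    def __init__(self):
        self.slots = {}

    def update(self, utterance: str) -> str:
        intent = classify_intent(utterance)
        if intent == "report_incident":
            self.slots["last_incident"] = utterance
        return intent
```

A real system would replace the keyword scorer with the DIET model's learned classifier; the state-tracking pattern is the same.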


1996 ◽  
Vol 34 (3-4) ◽  
pp. 405-412 ◽  
Author(s):  
Andrea Deininger ◽  
Frank W. Günthert ◽  
Peter A. Wilderer

Density currents in the deeper zones of clarifiers and currents in the clear water zone have a significant influence on clarifier performance. Measurements of flow velocity profiles were conducted in full-scale horizontal-flow circular secondary clarifiers. Relations between the hydraulic load and the development of density currents could be detected. These patterns are not taken into account in current design procedures. Stationary design approaches are based mainly on the overflow rate. Novel design methods based on the dynamic behavior of flow and density distribution in clarifiers are needed in order to improve the efficacy of wastewater treatment systems.
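The stationary design basis mentioned above, the surface overflow rate, is simply the flow divided by the clarifier's surface area. A worked example with illustrative numbers (not from the paper):

```python
# Surface overflow rate q_A = Q / A for a circular clarifier.
# Numbers below are illustrative examples only.
import math

def surface_overflow_rate(flow_m3_per_h: float, diameter_m: float) -> float:
    """Overflow rate in m/h: flow divided by the clarifier surface area."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return flow_m3_per_h / area

# e.g. 400 m3/h through a 30 m diameter clarifier
rate = surface_overflow_rate(400.0, 30.0)  # roughly 0.57 m/h
```

The paper's point is that this single static number ignores the density-current patterns that actually govern performance.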


2020 ◽  
Author(s):  
Emma Chavez ◽  
Vanessa Perez ◽  
Angélica Urrutia

BACKGROUND: Currently, hypertension is one of the diseases with the greatest risk of mortality in the world. In Chile in particular, 90% of the population with this disease has idiopathic or essential hypertension. Essential hypertension is characterized by high blood pressure whose cause is unknown, which means that each patient may require a different treatment depending on their history and symptoms. Different data, such as history, symptoms, and exams, are generated for each patient suffering from the disease. These data appear in the patient's medical record in no particular order, making it difficult to search for relevant information. Therefore, there is a need for a common, unified vocabulary of terms that adequately represents the disease, making searches within the domain more effective.
OBJECTIVE: The objective of this study is to develop a domain ontology for essential hypertension, thereby organizing the most significant data within the domain as a tool for medical training and for supporting physicians' decision making.
METHODS: The terms used for the ontology were extracted from the medical history of de-identified medical records of patients with essential hypertension. The SNOMED CT collection of medical terms and clinical guidelines for controlling the disease were also used. Methontology was used for the design, the definition of classes and their hierarchy, and the relationships between concepts and instances. Three criteria were used to validate the ontology, which also helped to measure its quality. Tests were run with a dataset to verify that the tool was created according to the requirements.
RESULTS: An ontology of 310 instances classified into 37 classes was developed. From these, 4 superclasses and 30 relationships were obtained. In the dataset tests, 100% correct and coherent answers were obtained for the three quality tests.
CONCLUSIONS: The development of this ontology provides a tool for physicians, specialists, and students, among others, that can be incorporated into clinical systems to support decision making regarding essential hypertension. Nevertheless, more instances should be incorporated into the ontology through further searches of the medical history or free-text sections of the medical records of patients with this disease.
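The class/instance/relationship structure such an ontology encodes can be sketched with plain triples, with no OWL tooling. The class and relation names below are hypothetical illustrations, not the paper's actual ontology:

```python
# Minimal triple-based sketch of an ontology's class hierarchy and instances.
# All names here are hypothetical examples.

triples = [
    ("EssentialHypertension", "is_a", "Hypertension"),
    ("Hypertension", "is_a", "Disease"),
    ("patient_042", "instance_of", "EssentialHypertension"),  # hypothetical instance
    ("EssentialHypertension", "has_symptom", "Headache"),
]

def subclasses_transitive(root: str) -> set:
    """All classes reachable below `root` by following is_a edges."""
    reached = {root}
    changed = True
    while changed:
        changed = False
        for subj, pred, obj in triples:
            if pred == "is_a" and obj in reached and subj not in reached:
                reached.add(subj)
                changed = True
    return reached
```

Queries like "all diseases below Disease" are exactly the kind of domain search the unified vocabulary is meant to make effective.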


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Batel Yifrah ◽  
Ayelet Ramaty ◽  
Genela Morris ◽  
Avi Mendelsohn

Decision making can be shaped both by trial-and-error experiences and by memory of unique contextual information. Moreover, these types of information can be acquired either by means of active experience or by observing others behave in similar situations. The interactions between reinforcement learning parameters that inform decision updating and memory formation of declarative information in experienced and observational learning settings are, however, unknown. In the current study, participants took part in a probabilistic decision-making task involving situations that either yielded similar outcomes to those of an observed player or opposed them. By fitting alternative reinforcement learning models to each subject, we discerned participants who learned similarly from experience and observation from those who assigned different weights to learning signals from these two sources. Participants who assigned different weights to their own experience versus those of others displayed enhanced memory performance as well as subjective memory strength for episodes involving significant reward prospects. Conversely, memory performance of participants who did not prioritize their own experience over others did not seem to be influenced by reinforcement learning parameters. These findings demonstrate that interactions between implicit and explicit learning systems depend on the means by which individuals weigh relevant information conveyed via experience and observation.
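The modeling idea of weighting experienced versus observed learning signals differently can be illustrated with a delta-rule learner that has separate learning rates for the two sources. This is a toy sketch under assumed parameter values, not the paper's fitted models:

```python
# Delta-rule value learning with separate learning rates for one's own
# outcomes vs observed outcomes. Parameter values are hypothetical.

def update_value(value: float, reward: float, alpha: float) -> float:
    """Standard delta rule: V <- V + alpha * (r - V)."""
    return value + alpha * (reward - value)

def learn(outcomes, alpha_self=0.3, alpha_obs=0.1, v0=0.0) -> float:
    """outcomes: list of (reward, source) pairs, source 'self' or 'observed'."""
    v = v0
    for reward, source in outcomes:
        alpha = alpha_self if source == "self" else alpha_obs
        v = update_value(v, reward, alpha)
    return v
```

A participant for whom `alpha_self == alpha_obs` corresponds to the group that learned similarly from experience and observation; unequal rates correspond to the group that prioritized one source.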


2021 ◽  
Vol 11 (6) ◽  
pp. 721
Author(s):  
Russell J. Boag ◽  
Niek Stevenson ◽  
Roel van Dooren ◽  
Anne C. Trutti ◽  
Zsuzsika Sjoerds ◽  
...  

Working memory (WM)-based decision making depends on a number of cognitive control processes that control the flow of information into and out of WM and ensure that only relevant information is held active in WM’s limited-capacity store. Although necessary for successful decision making, recent work has shown that these control processes impose performance costs on both the speed and accuracy of WM-based decisions. Using the reference-back task as a benchmark measure of WM control, we conducted evidence accumulation modeling to test several competing explanations for six benchmark empirical performance costs. Costs were driven by a combination of processes running outside the decision stage (longer non-decision time) and inhibition of the prepotent response (lower drift rates) in trials requiring WM control. Individuals also set more cautious response thresholds when expecting to update WM with new information versus maintain existing information. We discuss the promise of this approach for understanding cognitive control in WM-based decision making.
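The parameters named above (drift rate, response threshold, non-decision time) come from evidence-accumulation models of the drift-diffusion family. A minimal simulation sketch with illustrative parameter values, not the authors' fitted model:

```python
# One trial of a simple drift-diffusion process: noisy evidence accumulates
# toward a threshold; non-decision time is added to the response time.
# Parameter values are illustrative only.
import random

def simulate_trial(drift, threshold, non_decision, dt=0.001, noise=1.0, rng=None):
    """Return (choice, response_time) for one accumulation-to-bound trial."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = "upper" if evidence > 0 else "lower"
    return choice, t + non_decision
```

In this framing, the paper's findings map directly onto parameters: WM-control trials show a larger `non_decision` component and a lower `drift`, and anticipated WM updating raises `threshold`.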


2021 ◽  
Vol 65 (4) ◽  
pp. 643-651
Author(s):  
Th. Nitschke-Pagel ◽  
J. Hensel

The consideration of residual stresses in fatigue-loaded welds is currently done only qualitatively, without reliable knowledge of their real distribution, magnitude, and sign. The tools that enable such consideration in design concepts are therefore mainly based on unsafe experience and doubtful assumptions. Since the use of explicitly determined residual stresses is state of the art outside the welding community, the target of the presented paper is to show a practicable way toward an enhanced consideration of residual stresses in current design tools. This is not limited to residual stresses induced by welding, but also covers post-weld treatment processes such as HFMI or shot peening. Results of extended experiments with longitudinal fillet welds and butt welds of low- and high-strength steels clearly show that an improved use of residual stresses in fatigue strength approximation enables a better evaluation of peening processes, as well as of material-adjusted welding procedures and post-weld stress-relief treatments. The concept shows that it is generally possible to overcome the existing rules and regulations, which are extremely conservative yet still unsafe, and may also enable the improved use of high-strength steels.


2001 ◽  
Vol 17 (1) ◽  
pp. 114-122 ◽  
Author(s):  
Steven H. Sheingold

Decision making in health care has become increasingly reliant on information technology, evidence-based processes, and performance measurement. It is therefore a time at which it is of critical importance to make data and analyses more relevant to decision makers. Those who support Bayesian approaches contend that their analyses provide more relevant information for decision making than do classical or “frequentist” methods, and that a paradigm shift to the former is long overdue. While formal Bayesian analyses may eventually play an important role in decision making, there are several obstacles to overcome if these methods are to gain acceptance in an environment dominated by frequentist approaches. Supporters of Bayesian statistics must find more accommodating approaches to making their case, especially in finding ways to make these methods more transparent and accessible. Moreover, they must better understand the decision-making environment they hope to influence. This paper discusses these issues and provides some suggestions for overcoming some of these barriers to greater acceptance.
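A worked example of the Bayesian updating the paper discusses: a conjugate beta-binomial update of a treatment success rate, which yields a posterior decision makers can read directly as a probability. The numbers are illustrative:

```python
# Beta-binomial conjugate update: Beta(a, b) prior + binomial data
# -> Beta(a + successes, b + failures) posterior. Numbers are illustrative.

def beta_binomial_update(alpha: float, beta: float, successes: int, failures: int):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + successes, beta + failures

def posterior_mean(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1); observe 18 successes in 25 patients
a, b = beta_binomial_update(1, 1, 18, 7)
mean = posterior_mean(a, b)  # posterior mean success rate, about 0.70
```

The transparency the paper calls for is visible here: the prior, data, and posterior are all explicit, which is exactly what frequentist-trained decision makers need to see to accept such analyses.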


2016 ◽  
Vol 23 (9) ◽  
pp. 1240-1249 ◽  
Author(s):  
Sean R Locke ◽  
Lawrence R Brawley

Exercise-related cognitive errors reflect biased processing of exercise-relevant information. The purpose of this study was to examine whether differences existed between individuals reporting low and high exercise-related cognitive errors in the information they processed about a relevant exercise decision-making situation. In all, 138 adults completed an online questionnaire. The high exercise-related cognitive error group primarily focused on negative content about the situation, compared to the low exercise-related cognitive error group, who focused on both positive and negative content. The high exercise-related cognitive error group displayed biased processing of exercise information, as suggested by the cognitive errors model. Future research should examine whether the biased information processing caused by exercise-related cognitive errors can be modified and attenuated.


2012 ◽  
Vol 2012 ◽  
pp. 1-24 ◽  
Author(s):  
Mona Riabacke ◽  
Mats Danielson ◽  
Love Ekenberg

Comparatively few of the many decision-analytic methods that have been suggested are widely used in actual practice. Some approaches have nevertheless been more successful in this respect than others. Quantitative decision making has moved from the study of decision theory founded on a single criterion towards decision support for more realistic decision-making situations with multiple, often conflicting, criteria. Furthermore, the identified gap between normative and descriptive theories seems to suggest a shift to more prescriptive approaches. However, when decision analysis applications are used to aid prescriptive decision-making processes, additional demands are put on these applications to adapt to the users and the context. In particular, the issue of weight elicitation is crucial. There are several techniques for deriving criteria weights from preference statements. This is a cognitively demanding task, subject to different biases, and the elicited values can be heavily dependent on the method of assessment. A number of methods have been suggested for assessing criteria weights, but these methods have properties which impact their applicability in practice. This paper provides a survey of state-of-the-art weight elicitation methods in a prescriptive setting.
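One concrete elicitation method of the kind such surveys cover is rank-order centroid (ROC) weighting, which converts a simple ranking of n criteria into numeric weights via w_i = (1/n) · Σ_{k=i..n} 1/k. A minimal sketch (the survey covers many other methods):

```python
# Rank-order centroid (ROC) weights: turn a ranking of n criteria
# into weights w_i = (1/n) * sum_{k=i..n} 1/k, which sum to 1 and
# decrease with rank.

def roc_weights(n: int) -> list:
    """ROC weights for criteria ranked 1 (most important) to n."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

weights = roc_weights(3)  # [11/18, 5/18, 2/18] for three ranked criteria
```

Methods like this sidestep the biases of asking users for numeric weights directly, at the cost of imposing a fixed weight profile on the ranking; that trade-off is exactly the kind of applicability property the survey examines.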


2021 ◽  
Author(s):  
Russell Golman ◽  
George Loewenstein ◽  
Andras Molnar ◽  
Silvia Saccardo

Management scientists recognize that decision making depends on the information people have but lack a unified behavioral theory of the demand for (and avoidance of) information. Drawing on an existing theoretical framework in which utility depends on beliefs and the attention paid to them, we develop and test a theory of the demand for information encompassing instrumental considerations, curiosity, and desire to direct attention to beliefs one feels good about. We decompose an individual’s demand for information into the desire to refine beliefs, holding attention constant, and the desire to focus attention on anticipated beliefs, holding these beliefs constant. Because the utility of resolving uncertainty (i.e., refining beliefs) depends on the attention paid to it and more important or salient questions capture more attention, demand for information depends on the importance and salience of the question(s) it addresses. In addition, because getting new information focuses attention on one’s beliefs and people want to savor good news and ignore bad news, the desire to obtain or avoid information depends on the valence (i.e., goodness or badness) of anticipated beliefs. Five experiments (n = 2,361) test and find support for these hypotheses, looking at neutrally valenced as well as ego-relevant information. People are indeed more inclined to acquire information (a) when it feels more important, even if it cannot aid decision making (Experiments 1A and 2A); (b) when a question is more salient, manipulated through time lag (Experiments 1B and 2B); and (c) when anticipated beliefs have higher valence (Experiment 2C). This paper was accepted by Yan Chen, behavioral economics and decision analysis.
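The belief-attention utility idea above can be rendered as a toy computation: utility is a sum of attention weights times the felt value of each anticipated belief, so news affects utility through both the belief it refines and the attention it attracts. All functional forms here are hypothetical simplifications of the framework, not the authors' model:

```python
# Toy belief-based utility: sum over questions of attention weight times
# the valence of the anticipated belief. All forms are hypothetical.

def belief_utility(belief_valences, attention_weights) -> float:
    """belief_valences: felt goodness of each belief; attention_weights sum to 1."""
    return sum(a * v for v, a in zip(belief_valences, attention_weights))

# The same good news (valence +1 on question 1) yields more utility when the
# question is salient and captures attention than when it does not:
high_salience = belief_utility([1.0, 0.0], [0.8, 0.2])
low_salience = belief_utility([1.0, 0.0], [0.2, 0.8])
```

This mirrors the paper's decomposition: refining a belief matters more when the question it answers is important or salient, and bad-valence beliefs make information avoidance attractive.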

