Evaluation of the Performance of Statistical Tests Used in Making Cleanup Decisions at Superfund Sites. Part 2: Real World Implications of Using Various Decision Rules

Author(s):  
DW Berman ◽  
BC Allen ◽  
CB Van Landingham
2020 ◽  
Vol 117 (26) ◽  
pp. 14812-14818 ◽  
Author(s):  
Bin Zhou ◽  
Xiangyi Meng ◽  
H. Eugene Stanley

Whether real-world complex networks are scale free or not has long been controversial. Recently, in Broido and Clauset [A. D. Broido, A. Clauset, Nat. Commun. 10, 1017 (2019)], it was claimed that the degree distributions of real-world networks are rarely power law under statistical tests. Here, we attempt to address this issue by defining a fundamental property possessed by each link, the degree–degree distance, the distribution of which also shows signs of being power law by our empirical study. Surprisingly, although full-range statistical tests show that degree distributions are not often power law in real-world networks, we find that in more than half of the cases the degree–degree distance distributions can still be described by power laws. To explain these findings, we introduce a bidirectional preferential selection model where the link configuration is a randomly weighted, two-way selection process. The model does not always produce solid power-law distributions but predicts that the degree–degree distance distribution exhibits stronger power-law behavior than the degree distribution of a finite-size network, especially when the network is dense. We test the strength of our model and its predictive power by examining how real-world networks evolve into an overly dense stage and how the corresponding distributions change. We propose that being scale free is a property of a complex network that should be determined by its underlying mechanism (e.g., preferential attachment) rather than by apparent distribution statistics of finite size. We thus conclude that the degree–degree distance distribution better represents the scale-free property of a complex network.
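The statistical tests discussed in this line of work build on maximum-likelihood estimation of a power-law exponent. As a minimal sketch (the standard continuous estimator of Clauset, Shalizi, and Newman; not the paper's full test pipeline, and the sample generation here is purely synthetic):

```python
import math
import random

def powerlaw_mle_alpha(values, x_min):
    """Continuous maximum-likelihood estimate of a power-law exponent:
    alpha = 1 + n / sum(ln(x_i / x_min)), over the tail x_i >= x_min."""
    tail = [x for x in values if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)

# Synthetic samples from p(x) ~ x^(-2.5), x >= 1, via inverse-transform sampling.
random.seed(0)
alpha_true = 2.5
samples = [(1 - random.random()) ** (-1 / (alpha_true - 1)) for _ in range(50000)]
print(round(powerlaw_mle_alpha(samples, 1.0), 2))  # close to 2.5
```

A full goodness-of-fit test would additionally bootstrap the Kolmogorov–Smirnov distance between the data and the fitted model, which is where the "rarely power law" verdicts come from.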


2019 ◽  
Vol 11 (1) ◽  
pp. 833-858 ◽  
Author(s):  
John Rust

Dynamic programming (DP) is a powerful tool for solving a wide class of sequential decision-making problems under uncertainty. In principle, it enables us to compute optimal decision rules that specify the best possible decision in any situation. This article reviews developments in DP and contrasts its revolutionary impact on economics, operations research, engineering, and artificial intelligence with the comparative paucity of its real-world applications to improve the decision making of individuals and firms. The fuzziness of many real-world decision problems and the difficulty in mathematically modeling them are key obstacles to a wider application of DP in real-world settings. Nevertheless, I discuss several success stories, and I conclude that DP offers substantial promise for improving decision making if we let go of the empirically untenable assumption of unbounded rationality and confront the challenging decision problems faced every day by individuals and firms.
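The backward-induction idea at the core of DP can be sketched on a toy finite-horizon inventory problem (all numbers and the problem itself are illustrative assumptions, not taken from the article):

```python
# Toy stochastic inventory problem solved by finite-horizon backward induction.
# State: stock s in 0..S_MAX; action: order q units; random demand d.
S_MAX = 5
HORIZON = 3
PRICE, COST, HOLD = 4.0, 2.0, 0.5          # sale price, unit cost, holding cost
DEMAND = {0: 0.3, 1: 0.4, 2: 0.3}          # assumed demand distribution

def solve():
    V = [0.0] * (S_MAX + 1)                # terminal value: nothing after horizon
    policy = []
    for _ in range(HORIZON):               # step backward from the horizon
        V_new, pi = [], []
        for s in range(S_MAX + 1):
            best_q, best_val = 0, float("-inf")
            for q in range(S_MAX - s + 1): # feasible order sizes
                val = -COST * q
                for d, p in DEMAND.items():
                    sold = min(s + q, d)
                    s_next = s + q - sold
                    val += p * (PRICE * sold - HOLD * s_next + V[s_next])
                if val > best_val:
                    best_q, best_val = q, val
            V_new.append(best_val)
            pi.append(best_q)
        V = V_new
        policy = [pi] + policy             # policy[t][s] = optimal order at time t
    return V, policy

V, policy = solve()
```

The recursion makes concrete what the review means by "optimal decision rules": `policy` specifies the best order quantity for every stock level at every stage.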


2020 ◽  
Vol 10 (15) ◽  
pp. 5388 ◽  
Author(s):  
Uday K. Chakraborty

The Jaya algorithm is arguably one of the fastest-emerging metaheuristics amongst the newest members of the evolutionary computation family. The present paper proposes a new, improved Jaya algorithm by modifying the update strategies of the best and the worst members in the population. Simulation results on a twelve-function benchmark test-suite and a real-world problem show that the proposed strategy produces results that are better and faster in the majority of cases. Statistical tests of significance are used to validate the performance improvement.
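For orientation, the baseline Jaya update rule (Rao, 2016) moves each solution toward the population's best member and away from its worst; the paper's contribution modifies how the best and worst members themselves are updated, which is not reproduced in this sketch:

```python
import random

def jaya_minimize(f, dim, bounds, pop_size=20, iters=300, seed=1):
    """Baseline Jaya: x' = x + r1*(best - |x|) - r2*(worst - |x|),
    with greedy acceptance. Illustrative only; not the improved variant."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        best = pop[min(range(pop_size), key=fit.__getitem__)]
        worst = pop[max(range(pop_size), key=fit.__getitem__)]
        for i in range(pop_size):
            cand = []
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = (pop[i][j]
                     + r1 * (best[j] - abs(pop[i][j]))
                     - r2 * (worst[j] - abs(pop[i][j])))
                cand.append(min(max(v, lo), hi))   # clip to the search box
            fc = f(cand)
            if fc < fit[i]:                        # keep only improvements
                pop[i], fit[i] = cand, fc
    return min(fit)

# Sphere function in 5 dimensions; the optimum is 0 at the origin.
print(jaya_minimize(lambda x: sum(v * v for v in x), 5, (-10, 10)))
```

Note that Jaya has no algorithm-specific tuning parameters beyond population size and iteration count, which is a large part of its appeal.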


Author(s):  
MASAHIRO INUIGUCHI ◽  
RYUTA ENOMOTO

In order to analyze the distribution of individual opinions (decision rules) in a group, clustering of decision tables is proposed. An agglomerative hierarchical clustering (AHC) of decision tables has previously been examined, but the result of AHC does not always optimize a given criterion. We therefore develop non-hierarchical clustering techniques for decision tables. In order to treat positive and negative evaluations of a common profile, we use a vector of rough membership values to represent an individual opinion about a profile. Using rough membership values, we develop a K-means method as well as fuzzy c-means methods for clustering decision tables. We examine the proposed methods by clustering real-world decision tables obtained from a questionnaire investigation.
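Once each respondent's decision table is encoded as a vector of membership values, the clustering step is ordinary K-means over those vectors. A minimal sketch (the two-dimensional "membership vectors" below are synthetic stand-ins, not the paper's rough-membership construction; deterministic first-k initialization is used only to keep the example reproducible):

```python
def kmeans(points, k, iters=20):
    """Plain K-means on fixed-length feature vectors, squared Euclidean
    distance. Real use would randomize initialization with restarts."""
    centers = [list(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for c, members in enumerate(clusters):  # update step
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers, clusters

# Two synthetic "opinion groups": membership vectors near (1, 0) and (0, 1).
pts = [[0.90, 0.10], [0.85, 0.15], [0.95, 0.05],
       [0.10, 0.90], [0.20, 0.80], [0.05, 0.95]]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

The fuzzy c-means variant mentioned in the abstract replaces the hard assignment step with graded cluster memberships.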


Author(s):  
Masahiro Inuiguchi

Rough sets can be interpreted in two ways: as a classification of objects and as an approximation of a set. From this point of view, classification-oriented and approximation-oriented rough sets have been proposed. In this paper, the author reconsiders these two kinds of rough sets, reviewing their definitions, properties and relations. The author shows that rough sets based on positive and negative extensive relations are mathematically equivalent, but that it is important to consider both, because the obtained positive and negative extensive relations are not always inverses of each other in the real world. The difference in granule size between union-based and intersection-based approximations is emphasized. Moreover, the types of decision rules associated with those rough sets are shown.
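As background for the approximation-oriented view, the classical (Pawlak) rough-set approximations can be sketched directly; the classification- and approximation-oriented generalizations the paper studies refine these definitions:

```python
def lower_upper(universe, equiv_class, target):
    """Classical rough-set approximations of a target set X under an
    indiscernibility relation: lower(X) = {x : [x] subset of X},
    upper(X) = {x : [x] intersects X}."""
    lower = {x for x in universe if equiv_class[x] <= target}
    upper = {x for x in universe if equiv_class[x] & target}
    return lower, upper

# Toy universe partitioned into equivalence classes {1,2}, {3,4}, {5}.
U = {1, 2, 3, 4, 5}
classes = {1: {1, 2}, 2: {1, 2}, 3: {3, 4}, 4: {3, 4}, 5: {5}}
X = {1, 2, 3}
lo, up = lower_upper(U, classes, X)
print(lo, up)  # lower = {1, 2}; upper = {1, 2, 3, 4}
```

Objects in the upper but not the lower approximation (here 3 and 4) form the boundary region, which is where the classification-oriented and approximation-oriented readings diverge.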


1996 ◽  
Vol 25 (1) ◽  
pp. 68-75 ◽  
Author(s):  
B. Wade Brorsen ◽  
Scott H. Irwin

Agricultural economists' research on price forecasting and marketing strategies has been used little by those in the real world. We argue that fresh approaches to research are needed. First, we argue that we need to adopt a new theoretical paradigm, noisy rational expectations. This paradigm suggests that gains from using price forecasting models with public data or from using a marketing strategy are not impossible, but any gains are likely to be small. We need to conduct falsification tests; to perform confirmation and replication; to adjust research to reflect structural changes, such as increased contracting; and always to conduct statistical tests. We also provide a modest agenda for changing our research and extension programs.


2017 ◽  
Author(s):  
Richard Ernst

The core of health care cost-effectiveness analysis is a set of decision rules for enabling public health care agencies to choose the most socially beneficial treatments to provide to or insure for their patient communities. Inappropriate versions of these rules are used by the national and provincial health services of the UK and several of the commonwealth countries, and they are also commonly used in published cost-effectiveness analyses. Here the correct decision rules are derived from standard utilitarian welfare premises and two different models of the optimizing behavior of a rational health care agency. The methods of probabilistic cost-effectiveness are discussed, and statistical tests are proposed for applying the decision rules under conditions of uncertainty.
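For context, the conventional decision rule the article critiques is the incremental cost-effectiveness ratio (ICER) threshold rule, equivalently stated as a net-monetary-benefit criterion. A minimal sketch of that textbook rule (the figures are hypothetical; the article's corrected rules differ from this):

```python
def net_monetary_benefit(delta_cost, delta_effect, wtp):
    """Textbook rule: adopt the new treatment when
    NMB = wtp * delta_effect - delta_cost > 0, i.e. when the ICER
    delta_cost / delta_effect falls below the willingness-to-pay
    threshold wtp (assuming delta_effect > 0)."""
    return wtp * delta_effect - delta_cost

# Hypothetical therapy: +0.4 QALYs for +$15,000 at a $50,000/QALY threshold.
nmb = net_monetary_benefit(15_000, 0.4, 50_000)
print(round(nmb))  # 5000 -> positive, so the conventional rule says adopt
```

Under uncertainty, this deterministic comparison is replaced by probabilistic analysis (e.g., cost-effectiveness acceptability curves), which is where the proposed statistical tests enter.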


2017 ◽  
Vol 35 (15_suppl) ◽  
pp. e21003-e21003
Author(s):  
Ying Qiu ◽  
Zhiyi Li ◽  
Jackson Tang ◽  
Pavel Atanasov ◽  
Syed Mahmood ◽  
...  

e21003 Background: Guidelines for metastatic melanoma (MM) recommend targeted therapy combination (combo) for patients (pts) with BRAF mutation (BRAF+) and immunotherapy for pts irrespective of BRAF status. The study objective was to describe real-world characteristics and treatment patterns among MM pts treated with either dabrafenib+trametinib (D+T) or ipilimumab+nivolumab (I+N) combo therapies. Methods: This retrospective observational study utilized Flatiron Health’s electronic health record (EHR) data. Included pts were ≥18 years and treated with either D+T or I+N (post MM diagnosis) from Jan 2013 - Jul 2016. Baseline characteristics, treatment patterns and data on therapy discontinuation were collected from structured data in patient records and unstructured data in physician notes. Non-parametric statistical tests were used in light of sample size limitations. Results: 135 D+T pts and 75 I+N pts were included. Median age was 60 for D+T pts and 64 for I+N pts. 69% of D+T pts were male vs 68% of I+N pts. Most pts were Caucasian (82% for both). Before receiving combo therapy, 100% of D+T pts vs 31% of I+N pts were BRAF+, 38% of D+T pts vs 31% of I+N pts had brain metastasis (mts) (p > .05), 25% of D+T pts vs 16% of I+N pts had high LDH levels (p = .05), and 18% of D+T pts vs 35% of I+N pts reported ECOG = 0 status (p < .01). Median follow-up time was 234 days for D+T pts vs 173 days for I+N pts (p < .01). 64% of D+T pts vs 81% of I+N pts received combo therapy as first line (1L) therapy (p = .02). The 3 and 6 month discontinuation rates were 9% and 24% for D+T 1L pts; 12% and 26% for I+N 1L pts; and 21% and 47% for the subset of I+N 1L BRAF+ pts. 13% of D+T 1L pts vs 28% of I+N 1L pts discontinued treatment over the follow-up period citing toxicity in physician notes (p = .05). Among pts with brain mts, the 3 and 6 month discontinuation rates were 3% and 20% for D+T 1L pts; 24% and 35% for I+N 1L pts; and 33% and 50% for I+N 1L BRAF+ pts. 
Conclusions: Prior to initiation of combo therapy, D+T pts had more advanced disease: pts were more likely to have brain mts, high LDH level, receive D+T as 2L+ therapy, and less likely to have ECOG = 0 status than I+N pts. I+N 1L pts were more likely to discontinue therapy due to toxicity than D+T 1L pts even with a shorter median follow-up time.

