Probabilistic consistency of conditional probability bounds

Author(s):  
Veronica Biazzo ◽  
Angelo Gilio ◽  
Giuseppe Sanfilippo

We illustrate an approach to uncertain knowledge based on lower conditional probability bounds. We exploit the coherence principle of de Finetti and a related notion of generalized coherence (g-coherence), which is equivalent to the "avoiding uniform loss" property introduced by Walley for lower and upper probabilities. Based on the additive structure of random gains, we define suitable notions of non-relevant gains and of basic sets of variables. Exploiting them, the linear systems in our algorithms can work with reduced sets of variables and/or constraints. In this paper, we illustrate the notions of non-relevant gain and of basic set by examining several cases of imprecise assessments defined on families of three conditional events. We adopt a geometrical approach, obtaining some necessary and sufficient conditions for g-coherence. We also propose two algorithms which provide new strategies for reducing the number of constraints and for deciding g-coherence. In this way, we try to overcome the computational difficulties which arise when linear systems become intractable. Finally, we illustrate our methods with some examples.
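
The abstract's reduced linear systems are not reproduced here; as a minimal sketch of the kind of feasibility problem involved, the code below searches, by exact grid enumeration over the probability simplex, for a distribution on the atoms that satisfies a lower-bound assessment on conditional events. The events, bounds, and grid step are all hypothetical illustrations, not taken from the paper's algorithms.

```python
from fractions import Fraction
from itertools import product

# Toy setting: 4 atoms of the partition generated by the events.
# Each conditional event E_i|H_i is encoded as (atoms of E_i∧H_i,
# atoms of H_i) with a lower probability bound l_i.  A distribution p
# over the atoms is compatible with the assessment when
# P(E_i∧H_i) >= l_i * P(H_i) for every i.  All numbers are illustrative.
ATOMS = range(4)
ASSESSMENT = [
    ({0}, {0, 1}, Fraction(1, 2)),            # E1|H1, lower bound 1/2
    ({2}, {2, 3}, Fraction(1, 3)),            # E2|H2, lower bound 1/3
    ({0, 2}, {0, 1, 2, 3}, Fraction(1, 4)),   # E3|H3, lower bound 1/4
]

def compatible(p):
    """Check P(E∧H) >= l * P(H) for every conditional bound."""
    for eh, h, low in ASSESSMENT:
        p_eh = sum(p[a] for a in eh)
        p_h = sum(p[a] for a in h)
        if p_eh < low * p_h:
            return False
    return True

def find_witness(steps=12):
    """Exact rational grid search over the simplex (step 1/steps).

    A hit certifies that the linear system is solvable; a miss is
    inconclusive, since the grid may simply be too coarse."""
    for c in product(range(steps + 1), repeat=len(ATOMS)):
        if sum(c) != steps:
            continue
        p = [Fraction(k, steps) for k in c]
        if compatible(p):
            return p
    return None

witness = find_witness()
```

In the paper's setting such feasibility questions are posed as linear programs over reduced variable sets; the brute-force search above only illustrates what a certificate of solvability looks like for a tiny instance.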


Author(s):  
Fulvio Tonon ◽  
Xiaomin You ◽  
Alberto Bernardini

The primary difference between precise and imprecise probability theories lies in the allowance for imprecision, i.e., a gap between the upper and lower expectations (also called previsions) of bounded real functions. This gap generates a set of probability distributions or measures. As a result, in imprecise probabilities, the notion of independence on joint spaces is not unique; for example, notions of unknown interaction, epistemic irrelevance/independence, and strong independence have been proposed in the literature. After introducing the three concepts of independence, various algorithms are proposed to calculate, through the different definitions of independence, both prevision and conditional probability bounds generated by marginal distributions over finite joint spaces. All algorithms are designed to accommodate two different types of constraints that define the sets of marginal distributions: prevision bounds or extreme distributions. The algorithms are applied to simple examples that show the role of the different quantities introduced and the equivalence of the two types of constraints. It is shown that, in epistemic irrelevance/independence, re-writing the algorithms in terms of joint distributions turns quadratic optimization problems into linear ones.
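
To illustrate how the choice of independence notion changes the resulting bounds, the sketch below compares bounds on P(A∧B) from interval-valued marginals under unknown interaction (Fréchet-Hoeffding bounds) and under strong independence (products of marginal endpoints). This is a standard illustration under assumed interval marginals, not the paper's algorithms, which also cover previsions and conditional bounds.

```python
def unknown_interaction_bounds(a, b):
    """Bounds on P(A and B) when nothing is assumed about the joint:
    Frechet-Hoeffding bounds, extended to interval marginals (lo, hi)."""
    (la, ua), (lb, ub) = a, b
    return max(0.0, la + lb - 1.0), min(ua, ub)

def strong_independence_bounds(a, b):
    """Bounds on P(A and B) under strong independence: every joint in
    the credal set is a product of a pair of marginals, so for marginal
    intervals in [0, 1] the extremes sit at products of the endpoints."""
    (la, ua), (lb, ub) = a, b
    products = [la * lb, la * ub, ua * lb, ua * ub]
    return min(products), max(products)

A = (0.3, 0.5)  # marginal interval for P(A); hypothetical numbers
B = (0.6, 0.8)  # marginal interval for P(B)
```

Strong independence always yields an interval nested inside the unknown-interaction one, which is the kind of tightening the different definitions trade off against computational cost.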


Author(s):  
Laura Mieth ◽  
Raoul Bell ◽  
Axel Buchner

Abstract. The present study serves to test how positive and negative appearance-based expectations affect cooperation and punishment. Participants played a prisoner’s dilemma game with partners who either cooperated or defected. Then they were given a costly punishment option: They could spend money to decrease the payoffs of their partners. Aggregated over trials, participants spent more money for punishing the defection of likable-looking and smiling partners compared to punishing the defection of unlikable-looking and nonsmiling partners, but only because participants were more likely to cooperate with likable-looking and smiling partners, which provided the participants with more opportunities for moralistic punishment. When expressed as a conditional probability, moralistic punishment did not differ as a function of the partners’ facial likability. Smiling had no effect on the probability of moralistic punishment, but punishment was milder for smiling in comparison to nonsmiling partners.


2002 ◽  
Vol 3 (1) ◽  
pp. 30-40
Author(s):  
Joseph D. Cautilli ◽  
Donald A. Hantula

Author(s):  
E. D. Avedyan ◽  
Le Thi Trang Linh

The article presents analytical results on decision-making by the majority voting algorithm (MVA). Particular attention is paid to the case of an even number of experts. The conditional probabilities of the MVA for two hypotheses are given for an even number of experts, and their properties are investigated as functions of the conditional probability of a correct decision by independent experts of equal qualification and of their number. An approach is proposed for calculating the probability that the MVA reaches the correct decision when the conditional probabilities of accepting each hypothesis differ across the statistically mutually independent experts. The findings are illustrated by numerical and graphical calculations.
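
The article's formulas are not reproduced in the abstract; as a sketch of the basic quantity involved, the code below computes the conditional probability that a majority of n equally qualified, independent experts decides correctly when each is correct with probability p. The tie rule for even panels is an assumption here (a fair random tie-break); the paper's treatment of even panels may differ.

```python
from math import comb

def majority_correct(n, p, tie_break=0.5):
    """P(majority vote correct | each of n independent experts is
    correct with probability p).  For even n, a tie is resolved in
    favour of the correct hypothesis with probability `tie_break`
    (an assumed rule, not necessarily the article's)."""
    total = 0.0
    for k in range(n + 1):
        pk = comb(n, k) * p**k * (1 - p)**(n - k)
        if 2 * k > n:        # strict majority of correct experts
            total += pk
        elif 2 * k == n:     # tie; only possible for even n
            total += tie_break * pk
    return total
```

With a fair tie-break, an even panel of n experts performs exactly like an odd panel of n − 1 (e.g. `majority_correct(4, 0.8)` equals `majority_correct(3, 0.8)`), which is one reason the even case deserves the separate attention the article gives it.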


1986 ◽  
Author(s):  
Florin Avram ◽  
Murad S. Taqqu

Author(s):  
Andrew Gelman ◽  
Deborah Nolan

This chapter contains many classroom activities and demonstrations to help students understand basic probability calculations, including conditional probability and Bayes' rule. Many of the activities alert students to misconceptions about randomness. They create dramatic settings: the instructor discerns real coin flips from fake ones, students modify dice and coins in order to load them, and students "accused" of lying by an inaccurate simulated lie detector face their classmates. Additionally, probability models of real outcomes offer good value: first we do the probability calculations, and then we go back and discuss the potential flaws of the model.
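
The lie-detector demonstration comes down to a Bayes' rule computation: an inaccurate test can flag a truthful student more often than a lying one when lying is rare. A minimal sketch, with illustrative accuracy numbers that are not taken from the chapter:

```python
def posterior_lying(prior, sensitivity, false_positive):
    """P(lying | detector says 'lying') by Bayes' rule.

    prior          = P(lying) before the test
    sensitivity    = P(detector flags | lying)
    false_positive = P(detector flags | truthful)
    The numbers used below are hypothetical classroom values."""
    flagged = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / flagged
```

For instance, with a 10% prior rate of lying, 90% sensitivity, and a 20% false-positive rate, the posterior probability of lying given a flag is only 1/3; seeing that number surprises students and motivates the conditional-probability discussion.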

