Bayesian Conditionalization and Probability Kinematics

1994 · Vol. 45 (2) · pp. 451-466 · Colin Howson, Allan Franklin

2015 · Vol. 8 (4) · pp. 611-648 · Simon M. Huttegger

Abstract: We explore the question of whether sustained rational disagreement is possible from a broadly Bayesian perspective. The setting is one where agents update on the same information, with special consideration given to the case of uncertain information. The classical merging-of-opinions theorem of Blackwell and Dubins shows when updated beliefs come and stay close together under Bayesian conditioning. We extend this result to a type of Jeffrey conditioning where agents update on evidence that is uncertain but solid (hard Jeffrey shifts). However, merging of beliefs does not generally hold for Jeffrey conditioning on evidence that is fluid (soft Jeffrey shifts, Field shifts). Several theorems on the asymptotic behavior of subjective probabilities are proven. Taken together, they show that while a consensus nearly always emerges in important special cases, sustained rational disagreement can be expected in many other situations.
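For readers who want the formula behind the terminology, here is the standard statement of Jeffrey conditioning (a textbook sketch; the notation is not taken from the paper). Suppose experience redistributes belief over a partition \{E_i\} of the evidence space, assigning each cell a new probability q_i. The updated probability of any proposition A is then

\[ P_{\mathrm{new}}(A) \;=\; \sum_i q_i \, P(A \mid E_i). \]

Strict Bayesian conditionalization is the special case where q_j = 1 for a single cell E_j. A shift is "hard" (solid) when the new values q_i are held fixed by all later evidence; in "soft" shifts (including Field shifts) the q_i themselves remain revisable, which is the regime where, according to the abstract, merging can fail.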


2010 · Vol. 35 · pp. 89-105 · Lydia McGrew

1967 · Vol. 18 (3) · pp. 197-209 · Isaac Levi

Synthese · 1980 · Vol. 44 (3) · pp. 421-442 · Zoltan Domotor, Mario Zanotti, Henson Graves

Jan Sprenger, Stephan Hartmann

This chapter sets the stage for what follows, introducing the reader to the philosophical principles and the mathematical formalism behind Bayesian inference and its scientific applications. We explain and motivate the representation of graded epistemic attitudes ("degrees of belief") by means of specific mathematical structures: probabilities. Then we show how these attitudes are supposed to change upon learning new evidence ("Bayesian Conditionalization"), and how all of this relates to theory evaluation, action, and decision-making. After sketching the different varieties of Bayesian inference, we present Causal Bayesian Networks as an intuitive graphical tool for conducting Bayesian inference, and we give an overview of the contents of the book.
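As a quick reference for the update rule mentioned here (the standard textbook form, not wording from the chapter): upon learning evidence E with certainty, the agent's new degree of belief in a hypothesis H is her old conditional degree of belief,

\[ P_{\mathrm{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}, \]

where the second equality is Bayes' theorem and P(E) > 0 is presupposed.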


Jan Sprenger, Stephan Hartmann

Learning indicative conditionals and learning relative frequencies have one thing in common: they are examples of conditional evidence, that is, evidence that includes a suppositional element. Standard Bayesian theory does not describe how such evidence affects rational degrees of belief, and natural solutions run into major problems. We propose that conditional evidence is best modeled by a combination of two strategies: first, by generalizing Bayesian Conditionalization to minimizing an appropriate divergence between the prior and posterior probability distributions; second, by representing the relevant causal relations, and the conditional independence relations they imply, in a Bayesian network that constrains both prior and posterior. We show that this approach solves several well-known puzzles about learning conditional evidence (e.g., the notorious Judy Benjamin problem), and that learning an indicative conditional can often be described adequately by conditionalizing on the associated material conditional.
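To make the first strategy concrete, here is a minimal computational sketch of divergence minimization on a toy outcome space. It is an illustration only: the four-point space, the prior, the learned constraint P(B | A) = 0.9, and all variable names are assumptions made for this example, not taken from the paper; the divergence used is the Kullback-Leibler divergence.

    # Updating on conditional evidence by minimizing the Kullback-Leibler
    # divergence from the prior, subject to the learned constraint.
    # Toy example; all numbers and names are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    # Outcome space: joint truth values (A&B, A&~B, ~A&B, ~A&~B).
    prior = np.array([0.2, 0.2, 0.3, 0.3])

    def kl(q, p):
        """D_KL(q || p) over a finite outcome space."""
        q = np.clip(q, 1e-12, 1.0)
        return float(np.sum(q * np.log(q / p)))

    # Learned conditional evidence: P(B | A) = 0.9, i.e.
    # q[0] / (q[0] + q[1]) = 0.9, rewritten as a linear equation.
    constraints = [
        {"type": "eq", "fun": lambda q: q.sum() - 1.0},               # normalization
        {"type": "eq", "fun": lambda q: q[0] - 0.9 * (q[0] + q[1])},  # P(B|A) = 0.9
    ]
    bounds = [(1e-12, 1.0)] * len(prior)

    result = minimize(kl, prior, args=(prior,),
                      bounds=bounds, constraints=constraints)
    posterior = result.x  # distribution closest to the prior that satisfies P(B|A) = 0.9
    print(posterior.round(4))

The resulting posterior is the distribution closest to the prior, in the KL sense, among all those satisfying the learned conditional constraint. Part of what makes the Judy Benjamin problem notorious is that different divergences, or different ways of encoding the constraint, can yield different posteriors.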

