Trust as a Precursor to Belief Revision

2018 ◽  
Vol 61 ◽  
pp. 699-722 ◽  
Author(s):  
Richard Booth ◽  
Aaron Hunter

Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we prove a representation result that characterizes the class of trust-sensitive revision operators in terms of a set of postulates. We also show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information.
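The partition-based pre-processing step described above can be sketched as a toy in Python. All names here are hypothetical, and the revision step is a crude minimal-change placeholder rather than the authors' operator; the point is only the relativization of a report to the reporter's partition.

```python
from itertools import chain

def relativize(report, partition):
    # Weaken the report to what the trusted agent can actually
    # distinguish: the union of every partition cell it overlaps.
    return set(chain.from_iterable(
        cell for cell in partition if cell & report))

def trust_sensitive_revise(beliefs, report, partition):
    # Pre-process the report by the trust partition, then apply a
    # crude minimal-change revision: intersect when consistent,
    # otherwise adopt the weakened report outright.
    weakened = relativize(report, partition)
    return (beliefs & weakened) or weakened

# Toy state space {0,1,2,3}: the reporter is trusted only to tell
# {0,1} apart from {2,3} (its domain of expertise).
partition = [frozenset({0, 1}), frozenset({2, 3})]
print(trust_sensitive_revise({1, 2}, {0}, partition))  # -> {1}
```

Because the report "state 0" is relativized to the cell {0, 1}, the agent keeps state 1 rather than jumping to state 0, which the reporter cannot reliably single out.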

2019 ◽  
Author(s):  
Elizabeth Bonawitz ◽  
Patrick Shafto ◽  
Yue Yu ◽  
Sophie Elizabeth Colby Bridgers ◽  
Aaron Gonzalez

Burgeoning evidence suggests that when children observe data, they use knowledge of the demonstrator’s intent to augment learning. We propose that the effects of social learning may go beyond cases where children observe data, to cases where they receive no new information at all. We present a model of how simply asking a question a second time may lead to belief revision, when the questioner is expected to know the correct answer. We provide an analysis of the CHILDES corpus to show that these neutral follow-up questions are used in parent-child conversations. We then present three experiments investigating 4- and 5-year-old children’s reactions to neutral follow-up questions posed by ignorant or knowledgeable questioners. Children were more likely to change their answers in response to a neutral follow-up question from a knowledgeable questioner than an ignorant one. We discuss the implications of these results in the context of common practices in legal, educational, and experimental psychological settings.


Author(s):  
Adrian Haret ◽  
Stefan Woltran

Classical axiomatizations of belief revision include a postulate stating that if new information is consistent with initial beliefs, then revision amounts to simply adding the new information to the original knowledge base. This postulate assumes a conservative attitude towards initial beliefs, in the sense that an agent faced with the need to revise them will seek to preserve initial beliefs as much as possible. In this work we look at operators that can assume different attitudes towards original beliefs. We provide axiomatizations of these operators by varying the aforementioned postulate and obtain representation results that characterize the new types of operators using preorders on possible worlds. We also present concrete examples for each new type of operator, adapting notions from decision theory.
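The postulate being varied is the AGM vacuity principle; in one standard formulation (writing Cn for logical closure):

```latex
% AGM vacuity: if the new information \varphi is consistent with K,
% revision reduces to plain expansion of K by \varphi.
\neg\varphi \notin K
\;\Longrightarrow\;
K * \varphi = \mathrm{Cn}\!\left(K \cup \{\varphi\}\right)
```

Relaxing this equation is what allows an operator to be less than maximally conservative about the prior beliefs even when no conflict forces a retraction.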


Author(s):  
David Buckingham ◽  
Daniel Kasenberg ◽  
Matthias Scheutz

We propose a novel approach to the problem of false belief revision in epistemic planning. Our state representations are pointed Kripke models with two binary relations over possible worlds: one representing agents' necessarily true knowledge, and one representing agents' possibly false beliefs. State transition functions maintain S5n properties in the knowledge relation and KD45n properties in the belief relation. When new information contradicts an agent's beliefs, belief revision draws new possible worlds from the agent's knowledge relation. Our method also improves upon prior work by accommodating false announcements. We develop our system as an extension to the mA* action language, presenting transition functions for ontic, sensing, and announcement actions.


Author(s):  
Theofanis Aravanis ◽  
Pavlos Peppas ◽  
Mary-Anne Williams

In this article, we provide the epistemic-entrenchment characterization of the weak version of Parikh's relevance-sensitive axiom for belief revision, known as axiom (P), for the general case of incomplete theories. Loosely speaking, axiom (P) states that, if a belief set K can be divided into two disjoint compartments, and the new information φ relates only to the first compartment, then the second compartment should not be affected by the revision of K by φ. The above-mentioned characterization essentially constitutes additional constraints on the epistemic-entrenchment preorders that induce AGM revision functions satisfying the weak version of Parikh's axiom (P).
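In symbols, one common rendering of this compartmental reading (with L1 and L2 the disjoint sublanguages of the two compartments) is:

```latex
% Parikh's axiom (P), weak reading (one common formulation):
% if K splits into compartments over disjoint sublanguages and the
% input \varphi is expressible in the first, the second is untouched.
K = \mathrm{Cn}(A \cup B),\;
A \subseteq \mathcal{L}_1,\; B \subseteq \mathcal{L}_2,\;
\varphi \in \mathcal{L}_1
\;\Longrightarrow\;
(K * \varphi) \cap \mathcal{L}_2 = K \cap \mathcal{L}_2
```

Exact formulations vary across the literature; this version captures only the weak claim that the unrelated compartment survives the revision.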


2021 ◽  
Author(s):  
Joe Roussos

The problem of awareness growth, also known as the problem of new hypotheses, is a persistent challenge to Bayesian theories of rational belief and decision making. Cases of awareness growth include coming to consider a completely new possibility (called expansion), or coming to consider finer distinctions through the introduction of a new partition (called refinement). Recent work has centred on Reverse Bayesianism, a proposal for rational awareness growth due to Karni and Vierø. This essay develops a "Reserve Bayesian" position and defends it against two challenges. The first, due to Anna Mahtani, says that Reverse Bayesian approaches yield the wrong result in cases where the growth of awareness constitutes an expansion relative to one partition, but a refinement relative to a different partition. The second, due to Steele and Stefánsson, says that Reverse Bayesian approaches cannot deal with new propositions that are evidentially relevant to old propositions. I argue that these challenges confuse questions of belief revision with questions of awareness change. Mahtani’s cases reveal that the change of awareness itself requires a model which specifies how propositions in the agent’s old algebra are identified with propositions in the new algebra. I introduce a lattice-theoretic model for this purpose, which resolves Mahtani’s problem cases and some of Steele and Stefánsson’s cases. Applying my model of awareness change, then Reverse Bayesianism, and then a generalised belief revision procedure, resolves Steele and Stefánsson’s remaining cases. In demonstrating this, I introduce a simple and general model of belief revision in the face of new information about previously unknown propositions.
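The core Reverse Bayesian constraint of Karni and Vierø, stated informally, is that growing awareness should preserve the relative odds of propositions the agent was already aware of:

```latex
% Reverse Bayesianism (Karni & Vier\o), core constraint, informally:
% awareness growth preserves relative likelihoods on the old algebra.
\frac{P_{\mathrm{new}}(A)}{P_{\mathrm{new}}(B)}
  = \frac{P_{\mathrm{old}}(A)}{P_{\mathrm{old}}(B)}
\qquad
\text{for all old propositions } A, B \text{ with } P_{\mathrm{old}}(B) > 0
```

The challenges discussed above turn on how the "old propositions" on the right-hand side are re-identified inside the new, larger algebra.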


Author(s):  
Safia Laaziz ◽  
Younes Zeboudj ◽  
Salem Benferhat ◽  
Faiza Haned Khellaf

The problem of belief change is considered a major issue in managing the dynamics of an information system. It consists in modifying an uncertainty distribution, representing agents' beliefs, in the light of new information. In this paper, we focus on the so-called multiple iterated belief revision or C-revision, proposed for conditioning or revising uncertain distributions under uncertain inputs. Uncertainty distributions are represented in terms of ordinal conditional functions. We will use prioritized or weighted knowledge bases as a compact representation of uncertainty distributions. The input information leading to a revision of an uncertainty distribution is also represented by a set of consistent weighted formulas. This paper shows that C-revision, defined at a semantic level using ordinal conditional functions, has a very natural representation using weighted knowledge bases. We propose simple syntactic methods for revising weighted knowledge bases that are semantically meaningful in the frameworks of possibility theory and ordinal conditional functions. In particular, we show that the space complexity of the proposed syntactic C-revision is linear with respect to the size of the initial weighted knowledge bases.
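An ordinal conditional function (OCF) assigns each possible world an implausibility rank, with rank 0 meaning maximally plausible. A single-input, Spohn-style conditioning step conveys the flavour of the semantic side; this is a simplified sketch, not the authors' multiple-input C-revision operator, and all names are illustrative.

```python
def rank(kappa, worlds):
    # Rank of a set of worlds: the minimum rank of its members
    # (infinity for the empty set).
    return min((kappa[w] for w in worlds), default=float("inf"))

def c_revise(kappa, phi_worlds, strength):
    # Spohn-style conditioning of an OCF by an uncertain input:
    # after revision the phi-worlds are normalised to start at rank 0,
    # and the non-phi-worlds end up exactly `strength` ranks less
    # plausible than the most plausible phi-world.
    not_phi = set(kappa) - set(phi_worlds)
    r_phi, r_not = rank(kappa, phi_worlds), rank(kappa, not_phi)
    new = {}
    for w in kappa:
        if w in phi_worlds:
            new[w] = kappa[w] - r_phi
        else:
            new[w] = kappa[w] - r_not + strength
    return new

# Worlds over atoms p, q; kappa maps world -> implausibility rank.
kappa = {"pq": 1, "p~q": 2, "~pq": 0, "~p~q": 1}
# Revise by p (the worlds where p holds) with strength 2.
print(c_revise(kappa, {"pq", "p~q"}, 2))
# -> {'pq': 0, 'p~q': 1, '~pq': 2, '~p~q': 3}
```

Before revision the most plausible world falsifies p; afterwards the p-worlds occupy ranks 0 and 1, and every world falsifying p is pushed at least `strength` ranks behind them.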


Author(s):  
Adrian Haret ◽  
Johannes P. Wallner ◽  
Stefan Woltran

We study a type of change on knowledge bases inspired by the dynamics of formal argumentation systems, where the goal is to enforce acceptance of certain arguments. We put forward that enforcing acceptance of arguments can be viewed as a member of the wider family of belief change operations, and that an axiomatic treatment of it is therefore desirable. In our case, laying down axioms enables a precise account of the close connection between enforcing arguments and belief revision. Our analysis of enforcing arguments proceeds by (i) axiomatizing it as an operation in propositional logic and providing a representation result in terms of rankings on sets of interpretations, (ii) showing that it stands in close relationship to belief revision, and (iii) using it as a gateway towards a principled treatment of enforcement in abstract argumentation.


2018 ◽  
Author(s):  
Javier Rasero ◽  
Jesus M Cortes ◽  
Daniele Marinazzo ◽  
Sebastiano Stramaglia

One of the biggest challenges in preprocessing pipelines for neuroimaging data is to increase the signal-to-noise ratio of the data which will be used for subsequent analyses. In the same vein, we suggest in the present work that the application of consensus clustering to brain connectivity matrices, to find subgroups of subjects, can be a valid additional "connectome processing" step that helps reduce intra-group variability and therefore increase the separability of distinct classes. In addition, by partitioning the data before any group comparison, we demonstrate that unique regions within each cluster arise and bring new information that could be relevant from a clinical point of view.
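A standard way to realise consensus clustering is evidence accumulation: run a base clusterer many times, record how often each pair of subjects co-occurs in a cluster, and cut a hierarchical clustering of the resulting co-association matrix. The sketch below is a generic illustration of that idea (not the paper's pipeline), with toy data standing in for vectorised connectivity matrices.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_clusters(X, n_clusters=2, n_runs=50, seed=0):
    # Evidence-accumulation consensus clustering: many k-means runs,
    # a co-association matrix (fraction of runs in which two subjects
    # share a cluster), then a cut of the dendrogram built on
    # 1 - co-association as a distance.
    rng = np.random.RandomState(seed)
    n = X.shape[0]
    coassoc = np.zeros((n, n))
    for _ in range(n_runs):
        labels = KMeans(n_clusters=n_clusters, n_init=1,
                        random_state=rng.randint(1 << 30)).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= n_runs
    dist = squareform(1.0 - coassoc, checks=False)
    return fcluster(linkage(dist, method="average"),
                    t=n_clusters, criterion="maxclust")

# Toy "connectomes": each row is one subject's vectorised
# connectivity matrix; two well-separated subgroups of 10 subjects.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(10, 6), rng.randn(10, 6) + 5.0])
print(consensus_clusters(X))
```

Because the subgroups are well separated, every run agrees on the split, the co-association matrix is block-diagonal, and the final cut recovers the two subgroups exactly.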


Author(s):  
Stipe Pandžić

This paper develops a logical theory that unifies all three standard types of argumentative attack in AI, namely rebutting, undercutting and undermining attacks. We build on default justification logic, which already represents undercutting and rebutting attacks, and we add undermining attacks. Intuitively, undermining does not target a default inference, as undercutting does, or a default conclusion, as rebutting does, but rather attacks an argument's premise as a starting point for default reasoning. In default justification logic, reasoning starts from a set of premises, which is then extended by conclusions that hold by default. We argue that modeling undermining defeaters from the perspective of default theories requires changing the set of premises upon receiving new information. To model changes to premises, we give a dynamic aspect to default justification logic by using techniques from the logic of belief revision. More specifically, undermining is modeled with belief revision operations that include contracting a set of premises, that is, removing some information from it. The novel combination of default reasoning and belief revision in justification logic enriches both approaches to reasoning under uncertainty. By the end of the paper, we show some important aspects of defeasible argumentation in which our logic compares favorably to structured argumentation frameworks.
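For context, contraction (removing information) and revision are standardly linked in the AGM tradition by the Levi identity, which decomposes a revision into a contraction followed by an expansion:

```latex
% Levi identity: revise by \varphi = first contract by \neg\varphi,
% then expand by \varphi.
K * \varphi = (K \div \neg\varphi) + \varphi
```

This is background on the belief-revision toolkit the paper draws on, not a claim about its specific premise-contraction operators.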


2019 ◽  
Vol 66 ◽  
pp. 765-792 ◽  
Author(s):  
Theofanis Aravanis ◽  
Pavlos Peppas ◽  
Mary-Anne Williams

In this article, the epistemic-entrenchment and partial-meet characterizations of Parikh's relevance-sensitive axiom for belief revision, known as axiom (P), are provided. In short, axiom (P) states that, if a belief set K can be divided into two disjoint compartments, and the new information φ relates only to the first compartment, then the revision of K by φ should not affect the second compartment. Accordingly, we identify the subclass of epistemic-entrenchment and that of selection-function preorders, inducing AGM revision functions that satisfy axiom (P). Hence, together with the faithful-preorders characterization of (P) that has already been provided, Parikh's axiom is fully characterized in terms of all popular constructive models of Belief Revision. Since the notions of relevance and local change are inherent in almost all intellectual activity, the completion of the constructive view of (P) has a significant impact on many theoretical, as well as applied, domains of Artificial Intelligence.

