Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we examine its properties. In particular, we show how trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information. When multiple reporting agents are involved, we use a distance function over states to represent differing degrees of trust...
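The partition mechanism described above can be pictured in a few lines. This is an illustrative toy encoding, not the paper's formal definitions: the states, the report, and the partition are all hypothetical. A report from an agent trusted only within a given domain is coarsened to the union of all partition cells it intersects, so distinctions the agent cannot reliably make are discarded before revision.

```python
def relativize(report, partition):
    """Coarsen a report to the trusting agent's partition for this
    reporter: keep every cell the report intersects. States the
    reporter cannot reliably distinguish stay or go as a block."""
    result = set()
    for cell in partition:
        if cell & report:
            result |= cell
    return result

# Toy example: the reporter is only trusted to tell {0,1} apart from {2,3}.
partition = [{0, 1}, {2, 3}]
print(relativize({0, 2}, partition))  # both cells are hit: {0, 1, 2, 3}
print(relativize({0}, partition))     # only the first cell: {0, 1}
```

A report that cuts across a cell is thus weakened to the whole cell: the revision step only ever receives information at the granularity the reporter is trusted on.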
Dung-style abstract argumentation theory centers on argumentation frameworks and acceptance functions. The latter take as input a framework and return sets of labelings. This methodology assumes full awareness of the arguments relevant to the evaluation. There are two reasons why this is not satisfactory. First, full awareness is, in general, not a realistic assumption. Second, frameworks have explanatory power, which allows us to reason abductively or counterfactually, but this is lost under the usual semantics. To recover this aspect, we generalize conventional acceptance and present the concept of a conditional acceptance function.
In this paper we present a modal logic framework to reason about the expertise of information sources. A source is considered an expert on a proposition φ if they are able to correctly refute φ in any possible world where φ is false. Closely connected with expertise is a notion of soundness of information: φ is said to be "sound" if it is true up to lack of expertise of the source. That is, any statement logically weaker than φ on which the source has expertise must in fact be true. This is relevant for modelling situations in which sources make claims beyond their domain of expertise. Particular attention is paid to the connection between expertise and knowledge: we show that expertise and soundness admit precise interpretations in terms of S4 and S5 epistemic logic, under certain conditions. We go on to extend the framework to multiple sources, defining two notions of collective expertise. These also have epistemic i...
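Under the S5-style reading mentioned in the abstract, a source's information can be modelled by an equivalence relation, here encoded as a partition of worlds; the source then has expertise on a proposition exactly when it is a union of cells. The sketch below is an illustrative toy rather than the paper's semantics, with all names invented.

```python
def has_expertise(prop, partition):
    """An S5-style source whose indistinguishability classes are the
    cells of `partition` has expertise on `prop` iff `prop` is a union
    of cells: then in any world outside `prop`, no world the source
    confuses with it lies inside `prop`, so the source can refute it."""
    return all(cell <= prop or cell.isdisjoint(prop) for cell in partition)

partition = [{0, 1}, {2, 3}]
print(has_expertise({0, 1}, partition))  # True: exactly one cell
print(has_expertise({0, 2}, partition))  # False: cuts across both cells
```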
The problem of truth discovery, i.e., of trying to find the true facts concerning a number of objects based on reports from various information sources of unknown trustworthiness, has received increased attention recently. The problem is made interesting by the fact that the relative believability of facts depends on the trustworthiness of their sources, which in turn depends on the believability of the facts the sources report. Several algorithms for truth discovery have been proposed, but their evaluation has mainly been performed experimentally by computing accuracy against large datasets. Furthermore, it is often unclear how these algorithms behave on an intuitive level. In this paper we take steps towards a framework for truth discovery which allows comparison and evaluation of algorithms based instead on their theoretical properties. To do so we pose truth discovery as a social choice problem, and formulate various axioms that any reasonable algorithm should satisfy. Along the...
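The circular dependence between source trustworthiness and fact believability is typically broken by fixed-point iteration. Below is a minimal Sums-style sketch of that idea; it is illustrative only, and none of the algorithms analysed in the paper is implied to be exactly this.

```python
def truth_discovery(reports, iterations=20):
    """Alternate between scoring facts by the trust of the sources
    that report them, and scoring sources by the belief in the facts
    they report, normalising each round so scores stay bounded.
    `reports` maps a source to the set of facts it claims."""
    facts = set().union(*reports.values())
    trust = {s: 1.0 for s in reports}
    belief = {f: 1.0 for f in facts}
    for _ in range(iterations):
        belief = {f: sum(t for s, t in trust.items() if f in reports[s])
                  for f in facts}
        trust = {s: sum(belief[f] for f in reports[s]) for s in reports}
        top = max(belief.values())
        belief = {f: b / top for f, b in belief.items()}
        top = max(trust.values())
        trust = {s: t / top for s, t in trust.items()}
    return trust, belief

# Two sources corroborate fact "x"; a lone source claims "y".
trust, belief = truth_discovery({"a": {"x"}, "b": {"x"}, "c": {"y"}})
print(belief["x"] > belief["y"])  # True: the corroborated fact wins
```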
In recent work, we proposed a method of reconstructing an agent's epistemic state from observations of its revision history. These observations contained information about what the agent believed after receiving which input. In this presentation we illustrate an extension of that work, allowing the observations to contain additional information about what the agent did *not* believe after a revision step. We will show that the BR-framework we assumed is only partially satisfactory for handling the extended observations.
Traditional work in belief revision deals with the question of what an agent should believe upon receiving new information. We will give an overview of what can be concluded about an agent based on an observation of its belief revision behaviour. The observation contains partial information about the revision inputs received by the agent and its beliefs upon receiving them. We will sketch a method for reasoning about the past and future beliefs of the agent and predicting which inputs it accepts and rejects. The focus of this talk will be on different degrees of incompleteness of the observation and the variants of the general question we are able to deal with.
We develop a model of abduction in abstract argumentation, where changes to an argumentation framework act as hypotheses to explain the support of an observation. We present dialogical proof theories for the main decision problems (i.e., finding hypotheses that explain skeptical/credulous support) and we show that our model can be instantiated on the basis of abductive logic programs.
In this paper we introduce and study credibility-limited improvement operators. The idea is to accept the new piece of information if it is judged credible by the agent, in which case a revision is performed. When the new piece of information is not credible, it is not accepted (no revision is performed), but its plausibility is still improved in the epistemic state of the agent, similarly to what is done by improvement operators. We use a generalized definition of Darwiche and Pearl epistemic states, in which each epistemic state is associated not only with a set of accepted formulas (beliefs) but also with a set of credible formulas. We provide a syntactic and a semantic characterization of these operators.
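The accept-or-improve behaviour can be pictured over a ranking function on worlds (0 = most plausible). The sketch below is a crude illustrative toy, not the paper's operators or their characterization: here an input counts as credible when some model of it lies within a fixed plausibility threshold, and a non-credible input merely drifts one rank toward plausibility.

```python
def cl_improve(rank, phi, threshold=1):
    """rank: world -> implausibility (0 = currently believed).
    phi: the input formula, given as its set of models.
    Credible input (some model within the threshold): revise, i.e.
    shift phi-worlds so their best models become believed.
    Non-credible input: only improve, i.e. phi-worlds move one
    step up the plausibility order."""
    best = min(rank[w] for w in phi)
    if best <= threshold:                      # credible: revise
        return {w: rank[w] - best if w in phi else rank[w] + 1
                for w in rank}
    return {w: max(rank[w] - 1, 0) if w in phi else rank[w]
            for w in rank}                     # not credible: improve

rank = {"w0": 0, "w1": 2, "w2": 3}
print(cl_improve(rank, {"w1", "w2"}))  # improvement: phi-worlds rise one rank
```

Repeating a non-credible input eventually makes it credible, which is the improvement-operator intuition the abstract builds on.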
Ranking the participants of a tournament has applications in voting, paired comparisons analysis, sports and other domains. In this paper we introduce bipartite tournaments, which model situations in which two different kinds of entity compete indirectly via matches against players of the opposite kind; examples include education (students/exam questions) and solo sports (golfers/courses). In particular, we look to find rankings via chain graphs, which correspond to bipartite tournaments in which the sets of adversaries defeated by the players on one side are nested with respect to set inclusion. Tournaments of this form have a natural and appealing ranking associated with them. We apply chain editing – finding the minimum number of edge changes required to form a chain graph – as a new mechanism for tournament ranking. The properties of these rankings are investigated in a probabilistic setting, where they arise as maximum likelihood estimators, and through the axiomatic method of ...
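The nestedness property that characterises chain graphs is easy to state in code. A toy sketch, with invented player and adversary names; chain editing itself (the minimisation step) is not implemented here:

```python
def is_chain(defeated):
    """`defeated` maps each player to the set of adversaries it beat.
    The bipartite tournament is a chain graph iff these sets are
    nested: sorted by size, each is contained in the next."""
    by_size = sorted(defeated.values(), key=len)
    return all(a <= b for a, b in zip(by_size, by_size[1:]))

def chain_ranking(defeated):
    """On a chain graph, ranking players by how many adversaries they
    defeated agrees with the nesting order."""
    return sorted(defeated, key=lambda p: len(defeated[p]), reverse=True)

results = {"ann": {"q1", "q2", "q3"}, "bob": {"q1", "q2"}, "cara": {"q1"}}
print(is_chain(results))       # True
print(chain_ranking(results))  # ['ann', 'bob', 'cara']
```

On tournaments that are not already chain graphs, the paper's mechanism first edits the fewest possible edges to reach one, then reads off this ranking.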
In this paper we present a brief overview of belief change, a research area concerned with the question of how a rational agent ought to change its mind in the face of new, possibly conflicting, information. We limit ourselves to logic-based belief change, with a particular emphasis on classical propositional logic as the underlying logic in which beliefs are to be represented. Our intention is to provide the reader with a basic introduction to the work done in this area over the past 30 years. In doing so we hope to sketch the main historical results, provide appropriate pointers to further references, and discuss some current developments. We trust that this will spur on the interested reader to learn more about the topic, and perhaps to join us in the further development of this exciting field of research.
In Belief Revision the new information is generally accepted, following the principle of primacy of update. In some cases this behavior can be criticized, and one could require that some new pieces of information be rejected by the agent because, for instance, of insufficient plausibility. This has given rise to several approaches to non-prioritized Belief Revision. In particular, Hansson et al. (2001) defined credibility-limited revision operators, where a revision is accepted only if the new information is a formula that belongs to a set ...
We consider the problem of learning a user's ordinal preferences on multiattribute domains, assuming that the user's preferences may be modelled as a kind of lexicographic ordering. We introduce a general graphical representation called LP-structures, which captures various natural classes of such orderings in which both the order of importance between attributes and the local preferences over each attribute may or may not be conditional on the values of other attributes. For each class we determine the Vapnik- ...
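An unconditional instance of the lexicographic orderings described above can be sketched as follows. The attribute names and values are illustrative, and the paper's LP-structures further allow the importance order and the local preferences to be conditional, which this toy does not capture.

```python
def lex_prefers(x, y, importance, local_pref):
    """Scan attributes from most to least important; the first
    attribute on which the two outcomes differ decides, using that
    attribute's local best-to-worst value order."""
    for attr in importance:
        if x[attr] != y[attr]:
            order = local_pref[attr]
            return order.index(x[attr]) < order.index(y[attr])
    return False  # identical outcomes: neither is strictly preferred

importance = ["price", "colour"]
local_pref = {"price": ["low", "high"], "colour": ["red", "blue"]}
x = {"price": "low", "colour": "blue"}
y = {"price": "high", "colour": "red"}
print(lex_prefers(x, y, importance, local_pref))  # True: price dominates
```

Learning such a model from ranking examples amounts to recovering `importance` and `local_pref`, which is where the sample-complexity (VC dimension) analysis in the paper comes in.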
In this paper we consider the merging of rules or conditionals. In contrast to other approaches, we do not invent a new approach from scratch, for one particular kind of rule, but we are interested in ways to generalize existing revision and merging operators from belief merging to rule merging. First, we study ways to merge rules based on only a notion of consistency of a set of rules, and illustrate this approach using a consolidation operator of Booth and Richter. Second, we consider ways to merge rules based on a notion of ...
2006 International Workshop on Description Logics DL’06, May 30, 2006
Existing description logic reasoners provide the means to detect logical errors in ontologies, but lack the capability to resolve them. We present a tableau-based algorithm for computing maximally satisfiable terminologies in ALC. Our main contribution is the ability of the algorithm to handle GCIs, using a refined blocking condition that ensures termination is achieved at the right point during the expansion process. Our work is closely related to that of [1], which considered the same problem for assertional (ABox) statements only, and [2], ...