A popular approach to solving a decision process with non-Markovian rewards (NMRDP) is to exploit a compact representation of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to our favorite MDP solution method. The contribution of this paper is a representation of non-Markovian reward functions and a translation into MDP aimed at making the best possible use of state-based anytime algorithms as the solution method. By explicitly constructing and exploring only parts of the state space, these algorithms are able to trade computation time for policy quality, and have proven quite effective in dealing with large MDPs. Our representation extends future linear temporal logic to express rewards. Our translation has the effect of embedding model-checking in the solution method and results in an MDP of the minimal size achievable without stepping outside the anytime framework.
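The embedded model-checking works by progressing the temporal reward formula through each state of a run: after every state, the formula is rewritten into the obligation that the remainder of the run must satisfy. A minimal sketch of such progression, assuming a tuple-based formula encoding with atoms, next and until; the representation and function names are illustrative, not the paper's actual reward language:

```python
def progress(f, state):
    """Rewrite formula f after observing `state` (a set of true atoms).
    Returns the obligation the rest of the run must satisfy."""
    op = f[0]
    if op == 'atom':
        return ('true',) if f[1] in state else ('false',)
    if op in ('true', 'false'):
        return f
    if op == 'and':
        return simplify(('and', progress(f[1], state), progress(f[2], state)))
    if op == 'or':
        return simplify(('or', progress(f[1], state), progress(f[2], state)))
    if op == 'next':
        return f[1]
    if op == 'until':   # f1 U f2  ==  f2 or (f1 and X(f1 U f2))
        return simplify(('or', progress(f[2], state),
                         simplify(('and', progress(f[1], state), f))))
    raise ValueError(op)

def simplify(f):
    """One level of boolean simplification: absorb true/false units."""
    op = f[0]
    if op in ('and', 'or'):
        a, b = f[1], f[2]
        unit, zero = (('true',), ('false',)) if op == 'and' else (('false',), ('true',))
        if a == unit: return b
        if b == unit: return a
        if a == zero or b == zero: return zero
    return f
```

A reward is then issued whenever the progressed formula reduces to `('true',)`, and the translated MDP carries the current formula as an extra state component.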
... on diagonals only apply where the rank exists, hence the antecedent condition E! (a+x). The second example shows a highly nontrivial problem which FINDER solved in about a day on a Sparc-2 and which had previously been open. The problem is to find an idempotent ...
An Ackermann constant is a formula of sentential logic built up from the sentential constant t by closing under connectives. It is known that there are only finitely many non-equivalent Ackermann constants in the relevant logic R. In this paper it is shown that the most natural systems close to R but weaker than it, in particular the non-distributive system LR and the modalised system NR, allow infinitely many Ackermann constants to be distinguished. The argument in each case proceeds by construction of an algebraic model, infinite in the case of LR and of arbitrary finite size in the case of NR. The search for these models was aided by the computer program MaGIC (Matrix Generator for Implication Connectives) developed by the author at the Australian National University.
$LTL is a version of linear temporal logic in which eventualities are not expressible, but in which there is a sentential constant $ intended to be true just at the end of some behaviour of interest, that is, to mark the end of the accepted (finite) words of some language. There is an effectively recognisable class of $LTL formulae which express behaviours, but in a sense different from the standard one of temporal logics like LTL or CTL. This representation is useful for solving a class of decision processes with temporally extended goals, which in turn are useful for representing an important class of AI planning problems.
This paper is about planning in one of the simplest domains: the Blocks World (BW). In the first part, we examine some known polynomial time algorithms for BW planning which approximate optimality. We improve the known complexity bounds of two such algorithms, and give the first proper formulation of a third. In the second part, we give an algorithm for generating plans free of redundancy in a sense appropriate to the problem. Though this does not necessarily produce higher quality solutions in the worst case, we present empirical evidence that its average behaviour is better than that of the other algorithms. Irredundant plans are not in general optimal, but they are minimal in a reasonable sense, and such minimal BW planning, unlike optimal BW planning, is tractable.
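As a concrete point of comparison, the simplest of the near-optimal approaches can be sketched as the classic two-phase method: put every misplaced block on the table, then rebuild the goal towers bottom-up, giving a plan at most twice the optimal length. The code below is an illustrative sketch under that reading, not any of the paper's specific algorithms:

```python
def bw_plan(on, goal):
    """Two-phase Blocks World approximation. `on` and `goal` map each
    block to the block it rests on, or 'table'. Returns a list of
    (block, destination) moves."""
    on = dict(on)
    plan = []

    def clear(b):
        return b not in on.values()

    def in_position(b):
        # True iff b and every block beneath it already match the goal.
        while True:
            if on[b] != goal[b]:
                return False
            if on[b] == 'table':
                return True
            b = on[b]

    # Phase 1: move every misplaced block to the table, topmost first.
    moved = True
    while moved:
        moved = False
        for b in on:
            if on[b] != 'table' and not in_position(b) and clear(b):
                plan.append((b, 'table'))
                on[b] = 'table'
                moved = True

    # Phase 2: stack each block onto its goal position once that
    # position is itself correctly built and clear.
    moved = True
    while moved:
        moved = False
        for b in on:
            if in_position(b) or not clear(b):
                continue
            dest = goal[b]
            if dest == 'table' or (in_position(dest) and clear(dest)):
                plan.append((b, dest))
                on[b] = dest
                moved = True
    return plan
```

Every misplaced block moves at most twice and must move at least once in any plan, which is where the factor-two bound comes from.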
... New South Wales) Masoud Mohammadian (UC) Rohan Baxter (CSIRO, Commonwealth Scientific & Industrial Research Organization) Daryl Essam (University of New South Wales) Program Committee Hussein Abbass: Australian Defence Force Academy Leila Alem: CSIRO ...
I note that the logics of the “relevant” group most closely tied to the research programme in paraconsistency are those without the contraction postulate (A→.A→B)→.A→B and its close relatives. As a move towards gaining control of the contraction-free systems I show that they are prime (that whenever A ∨ B is a theorem so is either A or B). The proof is an extension of the metavaluational techniques standardly used for analogous results about intuitionist logic or the relevant positive logics.
The duality between conflicts and diagnoses in the field of diagnosis, or between plans and landmarks in the field of planning, or between unsatisfiable cores and minimal co-satisfiable sets in SAT or CSP solving, has been known for many years. Recent work in these communities (Davies and Bacchus, CP 2011, Bonet and Helmert, ECAI 2010, Haslum et al., ICAPS 2012, Stern et al., AAAI 2012) has brought it to the fore as a topic of current interest. The present paper lays out the set-theoretic basis of the concept, and introduces a generic implementation of an algorithm based on it. This algorithm provides a method for converting decision procedures into optimisation ones across a wide range of applications without the need to rewrite the decision procedure implementations. Initial experimental validation shows good performance on a number of benchmark problems from AI planning.
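A minimal sketch of the generic scheme under discussion, in the implicit-hitting-set style: the unmodified decision procedure is called as a black box, each failure yields a conflict, and a minimum-cost hitting set of the conflicts collected so far becomes the next candidate. Function names and the brute-force hitting-set search are illustrative only:

```python
from itertools import combinations

def optimise(universe, cost, check):
    """Decision-to-optimisation loop based on the conflict/diagnosis
    duality. `check(candidate)` is the existing decision procedure:
    it returns None if the candidate (a subset of `universe`) is
    acceptable, or a conflict, i.e. a set of elements at least one of
    which must be added."""
    conflicts = []
    while True:
        hs = min_cost_hitting_set(universe, cost, conflicts)
        conflict = check(hs)
        if conflict is None:
            return hs
        conflicts.append(conflict)

def min_cost_hitting_set(universe, cost, conflicts):
    """Exact minimum-cost hitting set by exhaustive search (fine for
    illustration; real implementations use branch-and-bound or MIP)."""
    best, best_cost = None, float('inf')
    elems = sorted(universe)
    for r in range(len(elems) + 1):
        for subset in combinations(elems, r):
            s = set(subset)
            if all(s & set(c) for c in conflicts):
                sc = sum(cost[e] for e in s)
                if sc < best_cost:
                    best, best_cost = s, sc
    return best
```

When `check` finally accepts, the candidate is optimal: any cheaper set fails to hit some already-discovered conflict and so cannot be acceptable.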
... Computers are better than we are at handling a certain amount of certain kinds of complexity, so they have found a niche in that kind of work from which they cannot be moved. There was, however, a cloud on the horizon. For ...
• Elegant axioms for groups [13, 19], lattices [25], loops [14, 15, 12], and other algebraic structures [24] have been discovered by the extended Argonne team. Wolfram [53, 801–818] suggests that he discovered these axioms (no citations to our work). He also reports ...
In reporting on the theorem prover SCOTT (Slaney, SCOTT: A Semantically Guided Theorem Prover, Proc. IJCAI, 1993) we suggested semantic constraint as an appropriate mechanism for guiding proof searches in propositional systems where the rule of inference is condensed detachment, a generalisation of Modus Ponens. Such constrained condensed detachment is closely analogous to semantic resolution. This paper exhibits an example which shows that semantically constrained condensed detachment is incomplete. That is, there are formulae deducible by means of condensed detachment which are not deducible when the semantic constraint is imposed. This answers an open question from our 1993 paper. (Semantically Constrained Condensed Detachment is Incomplete, John Slaney and Timothy J. Surendonk, August 8, 1995.)
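For readers unfamiliar with the rule itself, condensed detachment is Modus Ponens mediated by most general unification: from A → B and C, unify C with A and detach the corresponding instance of B. A minimal sketch; the term encoding is illustrative, and a real prover would also rename both premises apart and apply an occurs check:

```python
def walk(t, sub):
    # Follow substitution bindings for variables (plain strings).
    while isinstance(t, str) and t in sub:
        t = sub[t]
    return t

def unify(s, t, sub):
    """Syntactic unification; compounds are tuples like ('->', a, b).
    (No occurs check, for brevity.) Returns extended bindings or None."""
    s, t = walk(s, sub), walk(t, sub)
    if s == t:
        return sub
    if isinstance(s, str):
        return {**sub, s: t}
    if isinstance(t, str):
        return {**sub, t: s}
    if len(s) == len(t) and s[0] == t[0]:
        for a, b in zip(s[1:], t[1:]):
            sub = unify(a, b, sub)
            if sub is None:
                return None
        return sub
    return None

def substitute(t, sub):
    # Apply the bindings throughout a term.
    t = walk(t, sub)
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(x, sub) for x in t[1:])
    return t

def rename(t, suffix="'"):
    # Rename the minor premise's variables apart from the major's.
    if isinstance(t, str):
        return t + suffix
    return (t[0],) + tuple(rename(x, suffix) for x in t[1:])

def condensed_detach(major, minor):
    """From major premise A -> B and minor premise C, unify C with A
    and return the unified instance of B."""
    if not (isinstance(major, tuple) and major[0] == '->'):
        return None
    _, a, b = major
    sub = unify(a, rename(minor), {})
    return substitute(b, sub) if sub is not None else None
```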
We present a simple and efficient algorithm to solve delete-free planning problems optimally and calculate the h+ heuristic. The algorithm efficiently computes a minimum-cost hitting set for a complete set of disjunctive action landmarks generated on the fly. Unlike other recent approaches, the landmarks it generates are guaranteed to be set-inclusion minimal. In almost all delete-relaxed IPC domains, this leads to a significant coverage and runtime improvement.
This paper presents F, a substructural logic designed to treat vagueness. Weaker than Łukasiewicz's infinitely valued logic, it is presented first in a natural deduction system, then given a Kripke semantics in the manner of Routley and Meyer's ternary relational semantics for R and related systems, but in this case the points are motivated as degrees to which the truth could be stretched. Soundness and completeness are proved, not only for the propositional system, but also for its extension with first-order quantifiers. The first-order models allow not only objects with vague properties, but also objects whose very existence is a matter of degree.
CNF simplifiers play a very important role in minimising structured problem hardness. Although they can be used in-search, most of them serve in a pre-search phase and rely on one form or another of resolution. Based on our understanding of problem structure, in this paper we extend the single pre-search process to a multiple one in order
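The resolution step underlying many such pre-search simplifiers is Davis-Putnam variable elimination: all clauses mentioning a variable are replaced by their pairwise resolvents on it. A minimal sketch, with an illustrative clause representation:

```python
def eliminate_variable(clauses, v):
    """Eliminate variable v by resolution. Clauses are frozensets of
    nonzero ints (negative = negated literal). Replaces every clause
    containing v or -v with the non-tautological resolvents on v."""
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = []
    for p in pos:
        for n in neg:
            r = (p - {v}) | (n - {-v})
            if not any(-lit in r for lit in r):   # drop tautologies
                resolvents.append(frozenset(r))
    return rest + resolvents
```

Practical preprocessors apply this only when it does not grow the clause set, and interleave it with subsumption and self-subsuming resolution.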
Anbulagan, Pham, D.N., Slaney, J. and Sattar, A. (2010) Boosting SLS Using Resolution, in Trends in Constraint Programming (eds F. Benhamou, N. Jussien and B. O'Sullivan), ISTE, London, UK. doi: 10.1002/9780470612309.ch17
The modal logic KD45 is frequently presented as the standard account of the logic of belief for a single agent, where perhaps that agent is viewed as having the doxastic properties of a deductive database. However, KD45 is absurdly strong for such a reading. Specifically:
• K, D and 5 together are logically incoherent, even as an account of the theory of a trivial database.
• K and 5 together make nonsense of the motivating concepts of belief and introspection.
• K and D together contradict certain simple empirically observable facts.
In the light of these observations, it is unacceptable that KD45 should continue to be paraded as any kind of doxastic logic. This paper recommends that the objections to KD45 be taken seriously and that the “standard” account be revised accordingly. It is suggested that such a revision will force a major shift in the theory of epistemic and doxastic logic. KD45 is not a doxastic logic.
ABSTRACT: In this paper we report one line of attack on the focus problem for saturation methods of first-order theorem proving, by injecting semantic information into heuristics for ordering the possible inferences. Preliminary work on this idea, in collaboration with Lusk, McCune and others, resulted in the system Scott [5, 1, 7], which showed some modest efficiency gains relative to its parent Otter. However, the main technique used in that prover was model resolution; the work on false preference (see below) remained unsystematic and lacked a theoretical basis. The new generation of Scott rests on a new understanding of semantic guidance and shows remarkably stable behaviour over a wide range of problems. We present results on problems from the TPTP library and performance under fair conditions in CASC as compelling evidence that the effects exploited by our technique are real and useful.
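The idea of false preference can be sketched as follows: candidate clauses are evaluated in a guiding interpretation, and those that come out false are preferred when selecting the next inference. This is an illustrative reconstruction, not SCOTT's actual data structures or selection heuristic:

```python
def semantic_order(clauses, interpretation):
    """Order clauses for selection in a saturation loop, preferring
    clauses that are false in the guiding interpretation and breaking
    ties by length. A clause is a list of (atom, polarity) literals;
    `interpretation` maps atoms to booleans."""
    def satisfied(clause):
        return any(interpretation.get(atom, False) == polarity
                   for atom, polarity in clause)
    # False (unsatisfied) clauses sort first; shorter first within a tier.
    return sorted(clauses, key=lambda c: (satisfied(c), len(c)))
```

The intuition is that clauses false in a model of (part of) the axioms are the ones carrying the contradiction being sought, so expanding them first focuses the search.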
