CS 5/7320
Artificial Intelligence
Knowledge-Based Agents
AIMA Chapters 7-9
Slides by Michael Hahsler
based on slides by Svetlana Lazebnik
with figures from the AIMA textbook
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Image: "Exercise Plays Vital Role Maintaining Brain Health" by A Health Blog
What is Logic?
Logic is a formal system for representing and manipulating facts
(i.e., knowledge) so that true conclusions may be drawn
Syntax: rules for constructing valid sentences
E.g., x + 2 ≥ y is a valid arithmetic sentence, x2y+ is not
Semantics: "meaning" of sentences, or the relationship between
logical sentences and the real world. Specifically, semantics defines the truth
of sentences.
E.g., x + 2 ≥ y is true in a world where x = 5 and y = 7
Reality vs. Knowledge Representation
[Figure: facts about the real world are learned and represented as knowledge]
• Facts: Sentences we know to be true.
• Possible worlds: all worlds/models which are consistent with the
facts we know (compare with belief state).
• Learning new facts reduces the number of possible worlds.
• Entailment: A sentence logically follows from what we already know.
Knowledge-Based Agents
Knowledge base: domain-specific content
Inference engine: domain-independent algorithms
• Knowledge base (KB) = set of sentences in a formal language (knowledge
representation) that are known to be true = set of facts
• Declarative approach to building an agent: Define what it needs to know
• Distinction between data (knowledge) and program (inference)
• Fullest realization of this philosophy was in the field of expert systems or
knowledge-based systems in the 1970s and 1980s
Generic Knowledge-based Agent
1. Memorize (TELL) the percept at time t
2. Ask (ASK) the KB for a logical action
3. Record (TELL) the action taken at time t
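A minimal Python sketch of this TELL/ASK loop, modeled on the KB-AGENT pseudocode in AIMA; the KnowledgeBase class and its trivial tell/ask methods are assumptions standing in for a real knowledge representation and inference engine.

```python
# Sketch of a generic knowledge-based agent (after AIMA's KB-AGENT).
# KnowledgeBase, tell(), and ask() are hypothetical placeholders.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def tell(self, sentence):
        """Add a sentence (fact) to the KB."""
        self.facts.add(sentence)

    def ask(self, query):
        """Return an answer entailed by the KB (stub: trivial lookup)."""
        return query if query in self.facts else None


class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0  # time step

    def act(self, percept):
        # 1. Memorize the percept at time t.
        self.kb.tell(("percept", percept, self.t))
        # 2. Ask the KB which action is logically best at time t.
        action = self.kb.ask(("action", self.t))
        # 3. Record the action taken at time t.
        self.kb.tell(("action", action, self.t))
        self.t += 1
        return action
```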
Some Types of Logic
Propositional logic
First-order logic
Propositional Logic
Propositional logic: Syntax in BNF (Backus-Naur Form)
Symbols: P, Q, R, …
¬ Negation
∧ Conjunction
∨ Disjunction
⇒ Implication
⇔ Biconditional
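For reference, the grammar as given in the AIMA textbook, with operator precedence from highest to lowest: ¬, ∧, ∨, ⇒, ⇔.

```
Sentence        →  AtomicSentence | ComplexSentence
AtomicSentence  →  True | False | P | Q | R | ...
ComplexSentence →  ( Sentence )
                 | ¬ Sentence
                 | Sentence ∧ Sentence
                 | Sentence ∨ Sentence
                 | Sentence ⇒ Sentence
                 | Sentence ⇔ Sentence
```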
Validity and Satisfiability
A sentence is valid if it is true in all models
(it is called a tautology)
e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
Valid sentences are useful to deduce new sentences.
A sentence is satisfiable if it is true in some model
e.g., A ∧ B, C
Useful for finding possible worlds that satisfy all known facts.
A sentence is unsatisfiable if it is true in no models
e.g., A ∧ ¬A
Possible Worlds, Models and Truth Tables
A model specifies a “possible world” with the true/false
status of each proposition symbol in the knowledge base
• E.g., P is true and Q is true
• With two symbols, there are 2² = 4 possible worlds/models, and they can
be enumerated exhaustively using a truth table.
A truth table specifies the truth value of a composite sentence for each
possible assignment of truth values to its atoms. Each row is a model:

P      Q      P ⇒ Q
false  false  true
false  true   true
true   false  false
true   true   true

There are 3 possible worlds (models) in which P ⇒ Q is true.
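A small Python sketch (an illustration, not part of the slides) that enumerates all models over P and Q and reproduces this truth table:

```python
from itertools import product

# Enumerate all 2^2 = 4 models over the symbols P and Q and evaluate
# the composite sentence P => Q in each; each row is one model.
true_models = 0
print("P      Q      P => Q")
for P, Q in product([False, True], repeat=2):
    implies = (not P) or Q  # P => Q is true iff P is false or Q is true
    true_models += implies
    print(f"{str(P):6} {str(Q):6} {implies}")
print("Models where P => Q is true:", true_models)  # 3
```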
Propositional logic: Semantics
Rules for evaluating truth with respect to a model:
• ¬P is true iff P is false
• P ∧ Q is true iff P is true and Q is true
• P ∨ Q is true iff P is true or Q is true
• P ⇒ Q is true iff P is false or Q is true
Logical equivalence
Two sentences are logically equivalent iff (read: if and only if)
they are true in the same models
Entailment
• Entailment means that a sentence follows from the
premises contained in the knowledge base:
KB ╞ α
• The knowledge base KB entails sentence α iff α is
true in all models where KB is true
• E.g., KB with x = 0 entails sentence x * y = 0
• Tests for entailment
• KB ╞ α iff (KB ⇒ α) is valid
• KB ╞ α iff (KB ∧ ¬α) is unsatisfiable
Inference
• Logical inference: a procedure for generating
sentences that follow from a knowledge base KB
• An inference procedure is sound if it derives a
sentence α only if KB ╞ α, i.e., it derives only
entailed sentences.
• An inference procedure is complete if it can derive
all α for which KB╞ α.
Inference
• How can we check whether a sentence α is entailed by KB?
• How about we enumerate all possible models of the KB (truth
assignments of all its symbols), and check that α is true in every
model in which KB is true?
• Is this sound (if an answer is produced, it is correct)?
• Is this complete (guaranteed to produce the correct answer)?
• Problem: if KB contains n symbols, the truth table will have 2ⁿ rows
• Better idea: use inference rules, or sound procedures to generate
new sentences or conclusions given the premises in the KB
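A minimal model-checking sketch of the enumeration approach described above (a simplified version of AIMA's TT-ENTAILS?; the function-based sentence encoding is my own assumption):

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Check KB |= alpha by enumerating all models over the symbols.

    kb and alpha are Python functions mapping a model (dict of
    symbol -> bool) to a truth value. Sound and complete, but
    takes O(2^n) time in the number of symbols n.
    """
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # found a model of KB in which alpha is false
    return True  # alpha holds in every model of KB

# Example: KB = (P => Q) and P; query alpha = Q (modus ponens).
kb = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]
print(tt_entails(kb, alpha, ["P", "Q"]))  # True: KB entails Q
```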
Complexity of inference
• Propositional inference is co-NP-complete
• Complement of the SAT problem: α ╞ β if and only if the sentence α ∧ ¬β is
unsatisfiable
• Every known inference algorithm has worst-case exponential running time
• Efficient inference possible for restricted cases
• e.g., Horn clauses are disjunctions of literals with at most one positive literal.
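As an illustration of this restricted case, a forward-chaining sketch for definite clauses; the encoding of rules as (premises, conclusion) pairs is my own assumption, and AIMA's count-based version achieves the linear-time bound that this naive sketch trades for clarity.

```python
def forward_chain(facts, rules, query):
    """Forward chaining over definite clauses.

    facts: set of symbols known to be true.
    rules: list of (premises, conclusion) pairs, e.g. ({"P", "Q"}, "R")
           encodes the definite clause P ∧ Q ⇒ R.
    """
    agenda = list(facts)
    inferred = set(facts)
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        for premises, conclusion in rules:
            # Fire a rule once all of its premises are known to hold.
            if p in premises and premises <= inferred and conclusion not in inferred:
                inferred.add(conclusion)
                agenda.append(conclusion)
    return query in inferred

# Example: from P and the rules P ⇒ Q and Q ⇒ R, infer R.
print(forward_chain({"P"}, [({"P"}, "Q"), ({"Q"}, "R")], "R"))  # True
```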
Example: Wumpus World
Initial KB needs to contain rules like these for
each square:
Breeze(1,1) ⇔ Pit(1,2) ∨ Pit(2,1)
Breeze(1,2) ⇔ Pit(1,1) ∨ Pit(1,3) ∨ Pit(2,2)
Stench(1,1) ⇔ W(1,2) ∨ W(2,1)
…
The percepts at (1,1) are no breeze and no stench, so we add
the following facts to the KB:
¬Breeze(1,1)
¬Stench(1,1)
Inference then tells us that the following facts are
entailed:
¬Pit(1,2), ¬Pit(2,1), ¬W(1,2), ¬W(2,1)
This means that squares (1,2) and (2,1) are safe.
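Checking one of these entailments with the tt_entails sketch from above; the propositional encoding of the breeze rule for square (1,1) is my own assumption.

```python
# Symbols: B11 = Breeze(1,1), P12 = Pit(1,2), P21 = Pit(2,1).
# KB: (B11 <=> P12 v P21) and ¬B11 (the percept: no breeze at (1,1)).
def wumpus_kb(m):
    biconditional = m["B11"] == (m["P12"] or m["P21"])
    return biconditional and not m["B11"]

no_pit_12 = lambda m: not m["P12"]
print(tt_entails(wumpus_kb, no_pit_12, ["B11", "P12", "P21"]))
# True: the KB entails that (1,2) has no pit
```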
Summary
• Logical agents apply inference to a knowledge base to derive
new information and make decisions
• Basic concepts of logic:
• syntax: formal structure of sentences
• semantics: truth of sentences in models
• entailment: necessary truth of one sentence given another
• inference: deriving sentences from other sentences
• soundness: derivations produce only entailed sentences
• completeness: derivations can produce all entailed sentences
• Resolution is complete for propositional logic
• Forward- and backward-chaining algorithms are linear in
time and complete for special clauses (definite clauses).
First-Order Logic
Limitations of propositional logic
• Suppose you want to say “All humans are mortal”
• In propositional logic, you would need ~6.7 billion statements of the form:
MichaelIsHuman ⇒ MichaelIsMortal,
SarahIsHuman ⇒ SarahIsMortal, …
• Suppose you want to say “Some people can run a
marathon”
• You would need a disjunction of ~6.7 billion statements:
MichaelCanRunAMarathon or … or SarahCanRunAMarathon
Other Languages to represent Knowledge
• First-order Logic adds objects and relations.
Syntax of FOL
• Objects
• Relations: a predicate is/returns true or false
• Functions: a function returns an object
Universal quantification
• ∀x P(x)
• Example: "Everyone at SMU is smart"
∀x At(x,SMU) ⇒ Smart(x)
Why not ∀x At(x,SMU) ∧ Smart(x)? (The conjunction would claim
that everyone is at SMU and everyone is smart.)
• Roughly speaking, equivalent to the conjunction of all
possible instantiations of the variable:
[At(John, SMU) ⇒ Smart(John)] ∧
[At(Richard, SMU) ⇒ Smart(Richard)] ∧ ...
• ∀x P(x) is true in a model m iff P(x) is true with x being each
possible object in the model
Existential quantification
• ∃x P(x)
• Example: "Someone at SMU is smart"
∃x At(x,SMU) ∧ Smart(x)
Why not ∃x At(x,SMU) ⇒ Smart(x)? (The implication is trivially
true for anyone who is not at SMU.)
• Roughly speaking, equivalent to the disjunction of all
possible instantiations:
[At(John,SMU) ∧ Smart(John)] ∨
[At(Richard,SMU) ∧ Smart(Richard)] ∨ …
• ∃x P(x) is true in a model m iff P(x) is true with x being some
possible object in the model
Inference in FOL
• Inference in FOL is more complicated. Two approaches:
1. Reduce to propositional logic (propositionalization) and then use
propositional logic inference.
2. Directly do inference on FOL (or a subset like definite clauses):
• Unification: find a substitution that makes two sentences identical
(see the sketch after this list).
• Forward Chaining for FOL
• Backward Chaining for FOL
• Logical programming (e.g., Prolog)
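A minimal unification sketch (a simplified version of the standard algorithm in AIMA, without the occurs-check; the term representation as strings and tuples is my own assumption):

```python
def is_var(t):
    """Variables are lowercase strings; constants are capitalized."""
    return isinstance(t, str) and t[0].islower()

def unify(x, y, s):
    """Return a substitution (dict) that makes x and y identical, or None."""
    if s is None:
        return None
    if x == y:
        return s
    if is_var(x):
        return unify_var(x, y, s)
    if is_var(y):
        return unify_var(y, x, s)
    # Compound terms: unify argument lists element by element.
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def unify_var(v, t, s):
    if v in s:
        return unify(s[v], t, s)  # follow an existing binding
    s = dict(s)
    s[v] = t
    return s

# Example: unify At(x, SMU) with At(John, SMU)  ->  {'x': 'John'}
print(unify(("At", "x", "SMU"), ("At", "John", "SMU"), {}))
```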
Limitations of Logic
• What if we cannot collect enough knowledge to make a decision,
e.g., if the environment is only partially observable?
• What about randomness?
• What about facts about the future?
grade(Michael, AI, this_semester)
• What about knowledge expressed in natural language, with its facts, objects, relations, …?
Large Language Models as Knowledge-based Agents
Reality vs. Knowledge Representation
[Figure: the LLM stores knowledge as learned words, grammar, and facts; the user sends prompts and receives generated text]
• The user formulates a question about the real world as a prompt
(a token sequence)
• The LLM generates text using its knowledge.
• The text (hopefully) is useful in the real world
Large Language Models as Knowledge-based Agents
Knowledge-Based Agents
Knowledge base: learned words, grammar, and facts
= domain-independent content (pre-training) + domain-specific content (fine-tuning)
Inference engine: domain-independent algorithms (a decoder-only transformer)
Large Language Models as Knowledge-based Agents
Generic Knowledge-based Agent
1. Memorize the percept at time t
2. Ask for generated text (generate new
tokens using autoregression)
3. Record the action taken at time t
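A minimal sketch of the autoregressive ASK step; the model interface here is entirely hypothetical, standing in for a real decoder-only transformer.

```python
def ask_llm(context_tokens, model, max_new_tokens=50, stop_token=0):
    """Generate new tokens autoregressively from the current context.

    `model` is a hypothetical callable mapping a token sequence to the
    next token (greedy decoding); a real decoder-only transformer would
    return a probability distribution over the vocabulary to sample from.
    """
    tokens = list(context_tokens)
    for _ in range(max_new_tokens):
        next_token = model(tokens)  # predict the next token from the context
        if next_token == stop_token:
            break
        tokens.append(next_token)  # the prediction becomes part of the context
    return tokens[len(context_tokens):]  # the newly generated tokens
```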