UNIT IV
KNOWLEDGE REPRESENTATION AND REASONING
Overview
Knowledge representation and reasoning are essential for intelligent agents to make decisions
and take actions based on the information they acquire from the environment. Representing
knowledge accurately and effectively is critical for problem-solving and decision-making
processes. The reasoning process leverages this knowledge to derive conclusions and make
informed judgments. Logic serves as the foundation for making the knowledge usable and
enabling reasoning.
Key Concepts
What is Knowledge?
Definition
Properties of Effective KR
Approaches to KR
Relational Knowledge Structure: Stores facts in relational databases, offering weak inference
capabilities.
Inheritable Knowledge Structure: Uses hierarchies for mapping inheritance relationships (e.g.,
ISA relationships).
Inferential Knowledge Structure: Employs first-order predicate logic for combining and
utilizing knowledge.
Semantic Networks and Frames: Represent relationships between concepts and their
attributes.
Refining Facts
Mapping Knowledge
Information Mapping
Granularity
Object Representation
Structure Selection
Summary
Knowledge representation and reasoning are interconnected processes critical for intelligent
agents. Effective KR relies on logical, structured approaches to represent and manipulate
knowledge, enabling agents to adapt and make decisions. While challenges exist in selecting and
designing KR systems, addressing issues like granularity, relationships, and structure selection
ensures robust solutions for problem-solving.
KNOWLEDGE-BASED AGENTS
Agent: Acts based on its environment and requires knowledge to perform actions effectively.
Knowledge Representation Language: Uses sentences to represent facts about the world.
Operating Principle: An agent's actions depend on its perception of the environment and the
knowledge stored in the KB.
Percept Acquisition:
The agent perceives the environment (input: percept p at time t). This percept is recorded in the
KB as knowledge using a sentence representation.
Decision Making:
The agent queries the KB to decide an appropriate action (a). The action is based on logical
reasoning derived from the KB.
Knowledge Update:
After executing the action, the KB is updated to include information about the action taken.
Algorithm for Knowledge-Based Agents
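The perceive–decide–update loop described above can be sketched in Python. This is a minimal illustration, not the full algorithm: the KB is assumed to be a plain set of sentence strings, and ask() is a simple lookup rather than real logical inference.

```python
# Minimal sketch of a knowledge-based agent loop.
# Assumption: the KB is a plain set of sentence strings, and ask() is a
# simple lookup rather than full logical inference.

class KBAgent:
    def __init__(self):
        self.kb = set()  # the knowledge base
        self.t = 0       # time counter

    def tell(self, sentence):
        """Record a sentence in the KB."""
        self.kb.add(sentence)

    def ask(self, query):
        """Query the KB (a real agent would run logical inference here)."""
        return query in self.kb

    def agent_program(self, percept):
        # 1. Percept acquisition: record percept p at time t
        self.tell(f"percept({percept}, {self.t})")
        # 2. Decision making: choose an action based on the KB
        action = "forward" if self.ask(f"percept({percept}, {self.t})") else "noop"
        # 3. Knowledge update: record the action taken, advance time
        self.tell(f"action({action}, {self.t})")
        self.t += 1
        return action

agent = KBAgent()
print(agent.agent_program("breeze"))  # forward
```

The three numbered comments correspond directly to the percept-acquisition, decision-making, and knowledge-update steps above.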
Architecture
Implementation Level: The level at which logical sentences are executed; it does not affect details at the knowledge level.
Task Acceptance:
Knowledge Initialization
Declarative Knowledge:
Predefined rules embedded in the system. Explicitly outlines what actions to take in specific
scenarios.
Procedural Knowledge:
Summary
Knowledge-based agents leverage their structured knowledge base to make informed, logical
decisions. They follow a systematic process of perceiving, acting, and updating their knowledge.
By employing declarative and procedural knowledge, these agents adapt flexibly to dynamic
environments and meet specific goals effectively.
The agent explores a cave with rooms connected by passageways. The agent's task is to
find the gold, return to the starting position ([1,1]), and climb out of the cave without being eaten
by the Wumpus or falling into a pit.
Wumpus: A dangerous creature that lurks in a random room and eats any agent that enters its
room.
Pits: Some rooms contain bottomless pits that will trap any agent that enters.
Gold: Occasionally, there is a heap of gold in a room, which the agent must collect.
Objective: Collect the gold and exit the cave without encountering dangers (Wumpus or pits).
Stench: Perceived in the current room and adjacent rooms if the Wumpus is in the room or in an
adjacent room.
Breeze: Perceived in adjacent rooms to the agent’s current room if there is a pit.
The percept is represented as a five-symbol list. For example, if there is a stench and a breeze but
no glitter, bump, or scream, the percept would be:
[Stench, Breeze, None, None, None]
Shoot: Fire an arrow in a straight line in the direction the agent is facing. The arrow kills the
Wumpus if it hits it. The agent has only one arrow, so only the first shoot attempt will have an
effect.
Climb: Used to exit the cave. Effective only at the start square [1,1].
Die: Happens automatically if the agent enters a square with a pit or the Wumpus.
Performance Measure:
Death: -1000 points for falling into a pit or being eaten by the Wumpus.
Arrow Usage: -10 points for using the arrow (whether successful or not).
Environment Features:
The Wumpus World cave consists of discrete rooms connected by doors. Rooms adjacent to the
Wumpus emit a stench, while those near pits have a breeze. A glitter indicates the presence of
gold in a square. The agent can kill the Wumpus by shooting an arrow if facing it, pick up gold
using the grab action when in the same square, or drop it with the release action. The placement
of the Wumpus and pits is randomized at the start of the game. The agent perceives its
environment through five sensors: stench, breeze, glitter, bump, and scream. It can perform
various actions using actuators, including turning left or right, moving forward, grabbing,
releasing, and shooting.
Fully Observable: No, the agent only perceives local information in the current room and its
adjacent rooms.
Deterministic: Yes, the outcomes are exactly specified for each action.
Static: Yes, the Wumpus and pits do not move throughout the game.
Single-Agent: Yes, the agent is the only active participant in the world. The Wumpus is a "natural
feature" and does not actively interact with the agent.
Summary:
The Wumpus World is a simple, discrete, and deterministic environment in which an agent
explores a cave, faces challenges like the Wumpus and pits, and aims to collect gold and exit the
cave without dying. The agent's behavior is guided by sensory inputs and actions, with a
performance measure based on its success in completing tasks and avoiding danger.
The knowledge base (KB) of the agent in the Wumpus World consists of the predefined
rules governing the environment and the initial percept of "nothing" in the starting square [1,1].
This initial knowledge is represented by the Boolean percept feature values <0, 0, 0, 0, 0>,
indicating no stench, breeze, glitter, bump, or scream at T=0. From this, the agent infers that the
adjacent squares [2,1] and [1,2] are safe, adding these as propositions to the KB.
At T=1, logical reasoning reveals more insights: the percepts suggest the presence of a pit
in either [2,2] or [3,1]. The Wumpus cannot be in [1,1] (the starting square) or [2,2] because of
the absence of stench in [1,2]. Thus, the Wumpus must be in [1,3]. Further analysis shows no
breeze in [1,2], meaning there is no pit in [2,2]. Consequently, the pit must be in [3,1].
By T=3, the KB allows the agent to definitively conclude that there is a pit in [3,1]. This
logical reasoning process exemplifies how the agent uses its percepts and knowledge to deduce
facts about the world. The KB ensures that the conclusions drawn are valid in all possible
configurations of the Wumpus World consistent with the known information at time T=3. This
reflects the essence of logical reasoning: deriving truths based on what is known and eliminating
alternative possibilities.
Summary:
The Wumpus World is a simple, discrete, and deterministic environment in which an agent
explores a cave, faces challenges like the Wumpus and pits, and aims to collect gold and exit the
cave without dying. The agent's behavior is guided by sensory inputs and actions, with a
performance measure based on its success in completing tasks and avoiding danger. The
knowledge base is updated every time the agent explores a new room in the cave.
LOGIC
Understanding Logic
Logic is the study of reasoning principles and how they assist in decision-making. It provides a
structured framework for reasoning, encompassing syntax, semantics, and inference procedures.
The syntax defines how logical statements are represented, while semantics deals with the
meaning of those statements, which can be evaluated as true or false. Logical reasoning connects
these components to infer new information or decide the truth of statements.
Syntax refers to the formal structure of logical statements, such as symbols, formulas, or
sentences. Semantics, on the other hand, concerns the interpretation and truth values of these
sentences. It can be put forth in the following way: Assume that x and y are the two sentences
whose relationship is to be determined. Let us map this relationship as follows:
x ⊨ y
This states that sentence x entails sentence y: in every model in which x is true, y is also
true. For example, suppose
KB = {p = 1}
Then KB ⊨ {p + 1 = 2}
In the Wumpus World, the agent combines percepts (e.g., breezes, stenches) with known rules to
build a knowledge base (KB). For instance, if there is a breeze at [2,1] and nothing at [1,1], the
agent uses logical reasoning to infer possible pit locations in [1,2], [2,2], or [3,1]. Model checking
is employed to evaluate all possible configurations of pits, ensuring the reasoning aligns with the
KB. For instance, if no breeze is detected at [1,1], the KB can infer that a pit does not
exist in [1,2]. Logical reasoning ensures truth and completeness: if the KB is true, any sentence
derived from it must also be true.
Propositional Logic
Propositional logic is the simplest form of logic used in reasoning, where statements are atomic
propositions that can either be true or false. Propositional logic is fundamental in AI for tasks
such as decision-making and modeling, providing a basis for more complex reasoning systems
like first-order logic.
In propositional logic, sentences can be categorized into two types: simple (atomic) and
compound sentences. A simple sentence consists of a single propositional symbol, such as P, Q, or
R, which represents a proposition that can either be true or false. There are two primary truth
values: true (1) and false (0). The syntax defines how these symbols and operators can be
arranged to form valid sentences.
The five main logical operators in propositional logic are:
1. Negation (~ or ¬): It negates the truth value of a proposition.
2. Conjunction (∧): It represents "and", meaning both propositions must be true for the
result to be true.
3. Disjunction (∨): It represents "or", meaning at least one proposition must be true for the
result to be true.
4. Implication (→): Represents "if... then", where the truth of the first proposition ensures
the truth of the second.
5. Biconditional (↔): Represents "if and only if", meaning both propositions must have the
same truth value.
Syntax also involves precedence rules, which specify the order of operations for logical
connectives, with negation having the highest precedence. Parentheses are used to clarify and set
the precedence of operations.
Semantics in Propositional Logic
Semantics in propositional logic determines the truth value of a sentence based on the truth
values of the propositions involved. For simple sentences, determining the truth value is
straightforward. However, for compound sentences, the truth value depends on the individual
propositions and how they are connected using logical operators.
When evaluating compound sentences, the truth value is determined by the truth values of the
components (propositions). The number of possible models or combinations of truth values
grows exponentially with the number of propositions. For example, with two propositional
symbols, there are 4 possible combinations (2²); with three symbols, there are 8 combinations
(2³).
Truth Tables
Truth tables are used to evaluate the truth value of compound sentences. For example, if P and Q
are two propositions, the truth table for logical operators can be constructed to determine the
resulting truth value.
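A truth table for the five connectives can be generated mechanically. The following sketch uses Python's built-in boolean operators, defining implication and the biconditional from them:

```python
# Sketch: print the truth table for the five propositional connectives.
from itertools import product

def implies(p, q):   # P -> Q is false only when P is true and Q is false
    return (not p) or q

def iff(p, q):       # P <-> Q holds when both have the same truth value
    return p == q

print("P      Q      ~P     P&Q    P|Q    P->Q   P<->Q")
for p, q in product([True, False], repeat=2):
    row = [p, q, not p, p and q, p or q, implies(p, q), iff(p, q)]
    print("  ".join(f"{str(v):<5}" for v in row))
```

With two symbols the loop enumerates the 4 (2²) combinations mentioned earlier; adding symbols to the `product` call enumerates the rest.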
For example, if we have a breeze at [1,1], we can represent this as B. The existence of a pit at [2,1]
or [1,2] can be represented as P and Q, respectively. In propositional logic, we express this
relationship as:
B↔(P∨Q)
However, certain operators like implication (→) may lead to incomplete knowledge, as they
cannot fully represent all possible scenarios in the Wumpus World.
Building a Knowledge Base (KB)
The first step in building a Knowledge Base (KB) for a specific environment (e.g., the Wumpus
World) is to decide on the propositions that represent facts about the environment. Once the
propositions are defined, logical rules are created using these propositions and logical operators
to represent the relationships and behaviors in the environment.
For example, consider the Wumpus World where P[i,j] represents the proposition that there is
a pit in room [i,j], and B[i,j] represents the proposition that there is a breeze in room [i,j].
Some rules in the KB might look like:
Rule 1: ¬P[1,1] — This rule states that there is no pit in room [1,1].
Rule 2: B[1,1]→(P[1,2]∨P[2,1]) This rule indicates that if there is a breeze in room [1,1],
then a pit must exist in either room [1,2] or room [2,1].
The KB is constructed by combining all the rules using conjunctions, i.e., the final KB is the
conjunction of all the individual rules:
KB=Rule 1∧Rule 2∧…∧Rule n
This ensures that all rules in the KB are considered true and valid, and the agent can use this KB
to make logical inferences about the world.
Inference
Once the Knowledge Base (KB) is represented, the next step is to infer conclusions based on the
information stored in the KB. Inference involves determining whether a given sentence (x)
follows logically from the KB. This is expressed as:
KB⊨x
Inference often involves enumerating all possible models, where a "model" represents a specific
assignment of truth values to the propositions in the KB. The number of possible models
increases exponentially with the number of propositions, which can significantly affect
efficiency.
For example:
A truth table is used to enumerate all possible models for the propositions, rules,
and the KB.
If ∼P[1,2] is true, we can infer that there is no pit in room [1,2].
However, for other cases (e.g., P[3,1]), if there exists even one model in which the KB
is true but the sentence is false, the inference cannot be made conclusively.
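Inference by model enumeration can be sketched directly: enumerate every assignment of truth values and check that the query holds in every model where the KB holds. Encoding the KB and query as Python functions of a model (a dict from symbol to truth value) is an assumption made for illustration.

```python
# Sketch of inference by model enumeration (model checking).
from itertools import product

def tt_entails(kb, query, symbols):
    """Return True iff the query is true in every model where the KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False   # a model satisfies the KB but not the query
    return True

# Wumpus fragment: KB = (B11 <-> (P12 ∨ P21)) ∧ ∼B11
kb = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
print(tt_entails(kb, lambda m: not m["P12"], ["B11", "P12", "P21"]))  # True
```

The exponential growth noted above appears directly in the `product` call: the loop body runs 2^n times for n symbols.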
Key Concepts
Tautology:
A sentence is true in all possible models, irrespective of the truth values of its
propositions.
Example: (P∨∼P).
Contradiction:
A sentence is always false, irrespective of the truth values of its propositions.
Example: (P∧∼P).
Satisfiability:
A sentence is satisfiable if it is true in at least one model.
For example:
Reasoning patterns involve applying inference rules to derive conclusions. These rules, also
known as patterns of inference, eliminate the need for explicitly enumerating models and
ensure sound reasoning.
1. Modus Ponens:
If A→B and A are true, then B can be inferred.
(A→B), A ⊨ B
Example:
o Rule: (wumpus_ahead∧wumpus_alive)→shoot
o Fact: wumpus_ahead∧wumpus_alive
o Inference: shoot.
2. And-Elimination:
From a conjunction, any individual conjunct can be inferred.
(A∧B) ⊨ A and (A∧B) ⊨ B
Example:
o Fact: wumpus_ahead∧wumpus_alive.
o Inference: wumpus_alive.
These rules, such as Modus Ponens and And-Elimination, play a critical role in eliminating the
need to generate all possible models. Instead, they provide a structured way to derive sound and
logical conclusions directly from the KB.
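Both patterns can be illustrated with a toy rule engine; the set-based encoding of facts and rules is an assumption made purely for illustration.

```python
# Toy sketch of Modus Ponens and And-Elimination over string facts.

# And-Elimination: the conjunction wumpus_ahead ∧ wumpus_alive is stored
# as its individual conjuncts, so each conjunct is available as a fact.
facts = {"wumpus_ahead", "wumpus_alive"}

# Modus Ponens: if every premise of a rule is a known fact, add its conclusion.
rules = [({"wumpus_ahead", "wumpus_alive"}, "shoot")]
for premises, conclusion in rules:
    if premises <= facts:
        facts.add(conclusion)

print("shoot" in facts)  # True
```

Note that no models were enumerated: the conclusion follows directly from the rule and the known facts, which is exactly the efficiency gain these inference rules provide.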
Summary
RESOLUTION
What is Resolution?
Resolution is a single inference rule that provides a complete inference algorithm when
combined with a complete search algorithm.
1. Scenario:
o The agent is in [1,2] after visiting [2,1] and [1,1].
o Percepts:
Stench: Yes
Breeze: No
2. Rules:
o Rule a: ∼B[1,2]
o Rule b: B[1,2]↔(P[1,1]∨P[2,2]∨P[1,3])
o Rule c: ∼P[2,2] (from Rules a and b)
o Rule d: ∼P[1,3] (from Rules a and b)
o Rule e: P[1,1]∨P[2,2]∨P[3,1] (from the breeze perceived earlier in [2,1])
3. Applying Resolution:
o Resolve Rule c (∼P[2,2]) with Rule e (P[1,1]∨P[2,2]∨P[3,1]): Rule f: P[1,1]∨P[3,1]
o Using the KB (∼P[1,1]) to resolve with Rule f:
o Rule g: P[3,1]
o Conclusion: There is definitely a pit in [3,1].
Key Concepts
1. Literals:
o A literal is a proposition or its negation (e.g., P or ∼P).
2. Unit Resolution:
o Simplifies inference by resolving a single literal with a disjunction.
o Example: Resolving ∼P with P∨Q gives Q.
3. Refutation Completeness:
o Resolution can prove any entailed conclusion by deriving a contradiction from the
negation of the conclusion.
o Example: To prove X, show that ∼X leads to a contradiction.
What is CNF?
o A conjunction of disjunctions of literals.
o Example: (P∨Q)∧(∼P∨R).
Why CNF?
o Resolution is applied only to disjunctions, so all rules must be converted to CNF.
Given A→(B∧C):
1. Replace implication:
∼A∨(B∧C).
2. Distribute OR over AND:
(∼A∨B)∧(∼A∨C).
3. Result: The sentence is in CNF.
Resolution Rule: From A∨B and ∼A∨C, infer B∨C (the complementary literals A and ∼A cancel).
Example:
o P∨Q
o ∼P∨R
o Resolution: Q∨R.
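The resolution step can be sketched in a few lines. Clauses are represented as frozensets of literal strings with '~' marking negation, a representation chosen here for illustration.

```python
# Sketch of binary resolution on clauses as frozensets of literals.

def negate(lit):
    """Return the complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# The example above: resolving P∨Q with ∼P∨R yields Q∨R.
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))
```

Each resolvent drops the complementary pair and keeps the disjunction of everything else, which is precisely the resolution rule stated above.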
Summary
Resolution is a powerful and complete inference rule used for logical reasoning. By converting
knowledge into CNF and applying resolution systematically, we can derive any valid conclusion
from the KB. It eliminates the need for exhaustive model enumeration and provides a basis for
automated reasoning in propositional logic.
RESOLUTION ALGORITHM
The resolution algorithm builds upon this concept by systematically applying the resolution rule
in conjunction with a complete search process. The algorithm works as follows:
1. Convert to Conjunctive Normal Form (CNF): First, the KB and ∼y are transformed into
CNF, a standard logical form consisting of a conjunction of disjunctions of literals.
2. Derive Resulting Clauses: The CNF is broken down into individual clauses that
represent simpler logical expressions.
3. Apply the Resolution Rule: The resolution rule is applied to pairs of clauses containing
complementary literals, creating new clauses.
4. Resolve Complementing Pairs: The complementary pairs are resolved to generate new
rules or clauses that logically follow from the KB.
5. Update the Knowledge Base: Newly derived clauses are added to the KB if they are not
already present.
6. Repeat the Process: Steps 3–5 are repeated until one of the following conditions occurs:
o (i): No new clauses can be added, indicating the KB does not entail y.
o (ii): An empty clause is generated, which signifies a contradiction, proving that
KB⊨y.
The resolution algorithm provides a complete, sound, and systematic approach for reasoning in
propositional and first-order logic. Its ability to derive conclusions by detecting contradictions
makes it a fundamental tool in logic-based systems, automated theorem proving, and artificial
intelligence.
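The six steps above can be sketched end to end. As before, clauses are frozensets of literal strings with '~' marking negation (a representation assumption); the caller is assumed to have already converted KB ∧ ∼y to CNF.

```python
# Sketch of the resolution algorithm: search for the empty clause.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def pl_resolution(clauses):
    """Return True iff the clause set (KB ∧ ∼y in CNF) is unsatisfiable, i.e. KB ⊨ y."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:                # step 3-4: resolve complementary pairs
            for b in clauses:
                if a is b:
                    continue
                for r in resolve(a, b):
                    if not r:            # condition (ii): empty clause, KB ⊨ y
                        return True
                    new.add(r)
        if new <= clauses:               # condition (i): no new clauses
            return False
        clauses |= new                   # step 5: update the clause set

# KB = {~P, P∨Q}; query y = Q, so add ∼y = ~Q and test for contradiction:
print(pl_resolution({frozenset({"~P"}), frozenset({"P", "Q"}), frozenset({"~Q"})}))  # True
```

The loop structure mirrors steps 3–6 exactly: resolve all pairs, stop on the empty clause, or terminate when a pass adds nothing new.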
Overview
Resolution provides a complete inference mechanism in logic, but practical applications often
rely on simpler and more efficient methods, such as Forward Chaining and Backward Chaining,
especially when the knowledge base (KB) is restricted to Horn Clauses.
Horn Clauses
A Horn Clause is a disjunction of literals with at most one positive literal. These clauses are
commonly used in knowledge representation and inference due to their simplicity and efficiency.
Definite Clause: Contains exactly one positive literal (called the head) and zero or more negative
literals (called the body).
Example: P[3,1] (a fact: a definite clause with an empty body)
Integrity Constraint: A clause expressing conditions that must hold, often used to detect errors.
Inference Mechanisms
Forward Chaining and Backward Chaining are two algorithms used for inference, especially
when dealing with Horn Clauses. These methods aim to resolve queries by traversing the KB.
Query Example:
Forward Chaining
Definition:
Forward chaining starts with known facts and derives conclusions by iteratively applying rules in
the KB until the query is resolved or no further inference can be made.
1. Start with known facts: Identify the facts provided in the KB.
2. Infer conclusions: If the premise of a rule in the KB is true, add the conclusion of the rule
to the set of known facts.
3. Repeat: Continue this process until:
Characteristics:
AND-OR Graph: The approach can be visualized as an AND-OR graph, where inferences are made
only when all conjuncts (AND conditions) are satisfied.
Time Complexity: The method operates in linear time relative to the size of the KB.
Example:
Given:
1. If A and B, then C.
2. If C, then D.
Inference:
1. A and B imply C.
2. C implies D.
Thus, D is true.
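The example above can be sketched as a forward-chaining loop. Rules are encoded as (premises, conclusion) pairs, an encoding assumed here for illustration.

```python
# Sketch of forward chaining over Horn rules.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Rule 1: If A and B, then C.  Rule 2: If C, then D.
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(sorted(forward_chain({"A", "B"}, rules)))  # ['A', 'B', 'C', 'D']
```

Each rule fires at most once per fact it adds, which is why the method runs in time linear in the size of the KB.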
BACKWARD CHAINING
Definition:
Backward chaining begins with the query and works backward to infer the facts that would
support the query. It focuses only on facts and rules relevant to the query.
1. Start with the query: Determine if the query can be resolved directly as true or false.
2. Identify implications: Find rules in the KB that can infer the query.
3. Prove premises: Check if all the premises of the identified rules are true.
Characteristics:
Time Complexity: More efficient than forward chaining as it focuses only on relevant facts
and rules.
Approach: Goal-driven (i.e., driven by the goal or query).
Reasoning: Forms part of goal-directed reasoning.
Example:
Query: Is D true?
Given:
1. If A and B, then C.
2. If C, then D.
Inference:
1. To prove D, find a rule that concludes D: "If C, then D." The new goal is C.
2. To prove C, find a rule that concludes C: "If A and B, then C." The new goals are A and B.
3. A and B are known facts, so C holds, and therefore D is true.
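This goal-driven search can be sketched recursively, with rules again encoded as (premises, conclusion) pairs (an encoding assumed for illustration):

```python
# Recursive sketch of backward chaining for the query "Is D true?".

def backward_chain(goal, facts, rules):
    """Prove the goal by finding a rule that concludes it and proving its premises."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, facts, rules) for p in premises):
            return True
    return False

# Rule 1: If A and B, then C.  Rule 2: If C, then D.
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(backward_chain("D", {"A", "B"}, rules))  # True
```

Only rules relevant to the goal chain (D ← C ← A, B) are ever examined, which is the efficiency advantage noted below. A cyclic rule set would need a visited-goal check, omitted here for brevity.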
Summary
In practice, agents often combine forward and backward chaining to achieve efficient reasoning.
The choice of method depends on the nature of the problem:
Forward Chaining: Best when there are many known facts and few potential queries.
Backward Chaining: Ideal when the query is specific, and the KB is large.
PREDICATE LOGIC
Introduction
Predicate logic, also known as First-Order Logic (FOL), builds upon propositional logic to provide
a more expressive and powerful framework for knowledge representation and reasoning. While
propositional logic is useful, it becomes complex and inadequate for representing relationships
and objects in more intricate environments. Predicate logic overcomes these limitations,
enabling us to describe objects, their properties, and the relationships between them effectively.
Example:
Expressing these in propositional logic is not straightforward. Predicate logic, however, provides
the necessary structure to represent such statements accurately and draw valid conclusions.
Predicate logic, as the name suggests, represents information in terms of predicates. A predicate
describes the properties of objects or the relationships between objects. Each sentence in
predicate logic has a subject and a predicate.
Example:
Subject: "car"
Predicate: "is red."
Representation: R(x) where R represents "red," and x is any object.
Predicate logic sentences are constructed from constants, predicates, functions, and variables.
These sentences can be atomic or connected with logical connectives.
Sentence → Atomic sentence |Sentence connective Sentence |Quantifier Variable, ... Sentence
|~Sentence
Examples of Representation
Simple Sentence:
Complex Sentence:
Function Example:
Father(Sita) = Ramesh
Logical Connectives
Examples:
Father(Rohan) ∧ Father(Shyam)
¬Dancer(Riya)
Quantifiers
Syntax: ∀x P(x)
Example: "All kids are naughty."
o Representation: ∀x (Kid(x) → Naughty(x))
o Meaning: For every x, if x is a kid, then x is naughty.
Keywords: "every," "each," "everyone."
Syntax: ∃x P(x)
Example: "Some people are kind."
o Representation: ∃x (Kind(x))
o Meaning: There exists at least one x such that x is kind.
Keywords: "some," "at least one."
Universe of Discourse
The universe of discourse is the set of all objects being considered. For instance, in
mathematics, it could be the set of integers.
Key Concepts
Examples:
Summary
Unification and lifting are concepts related to inference in First Order Logic (FOL). They
introduce the notion of logical inference, which involves finding results by removing quantifiers
and converting rules of the knowledge base (KB) into propositional logic for inference.
The rule states that by substituting a ground term (a term without variables) for
the variable, any sentence can be inferred.
Using substitution:
Here, abc is a Skolem constant (a new name not appearing elsewhere in the KB).
To infer non-quantified sentences from quantified ones, substitution methods are applied:
Example KB:
After substitution:
Student(Shyam) ∧ Kind(Shyam)
Student(Rohan) ∧ Kind(Rohan)
Intelligent(Shyam) ∧ Intelligent(Rohan)
Unification
Unification is the process of finding substitutions that make different logical sentences identical.
This involves substituting predicate parameters to allow machines to understand relationships.
Example:
After unification:
Generalised Modus Ponens: This rule is a lifted version of modus ponens, allowing inference in
FOL:
1. Premises:
o ∀x (Student(x) ∧ Kind(x) → Intelligent(x))
o Student(Shyam)
o Kind(y)
2. Substitution:
o θ = {x/Shyam, y/Shyam}
3. Inference:
o Intelligent(Shyam)
Generalised modus ponens lifts propositional logic modus ponens to FOL, enabling inference
with quantifiers and predicates.
UNIFICATION ALGORITHM
Unification is the process of determining a substitution θ that makes two logical sentences
identical. The rules of inference that are "lifted" need this substitution to unify different logical
expressions. This is critical in AI for tasks like natural language processing, theorem proving, and
reasoning.
The Unification Algorithm takes two sentences and returns a unifier if one exists. The result is
the substitution θ that, when applied, makes the two sentences identical.
Notation:
Unify(a, b): This function takes two sentences, a and b, and returns the unifier θ, where:
o Subs(θ, a) = Subs(θ, b).
Example of Unification
Suppose the query is Friends(Rita, x), and the knowledge base (KB) contains:
1. Friends(Rita, Seema)
2. Friends(x, Maya)
3. Friends(y, Neha)
Applying unification:
If variable names are identical in conflicting ways, unification fails. For example:
Unify(Friends(Rita, x), Friends(x, Maya)) fails because x cannot be both "Rita" and
"Maya".
Change variable names in the second expression to avoid conflict: Friends(z, Maya).
Then, Unify(Friends(Rita, x), Friends(z, Maya)):
o θ = {x/Maya, z/Rita}.
For each argument position ctr of x and y:
1. Recursively call the algorithm with the ctr-th argument of both x and y.
2. If the result is Fail, return Fail.
3. If the result is non-NULL, add it to the substitution set (Subs) and apply the
substitution to the remaining arguments of x and y.
Finally, return the final substitution set Subs.
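The argument-by-argument recursion above can be sketched as follows. Terms are tuples like ("Friends", "Rita", "x"), and the convention that lowercase strings are variables is an assumption made for this example; the occurs-check is omitted for brevity.

```python
# Simplified unification sketch over tuple-encoded terms.

def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def unify(a, b, subs=None):
    """Return a substitution making a and b identical, or None on failure."""
    if subs is None:
        subs = {}
    if a == b:
        return subs
    if is_var(a):
        return unify_var(a, b, subs)
    if is_var(b):
        return unify_var(b, a, subs)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):       # recurse argument by argument
            subs = unify(x, y, subs)
            if subs is None:
                return None          # Fail
        return subs
    return None

def unify_var(var, val, subs):
    if var in subs:                  # variable already bound: check consistency
        return unify(subs[var], val, subs)
    return {**subs, var: val}

# Query Friends(Rita, x) against Friends(z, Maya), variables renamed apart:
print(unify(("Friends", "Rita", "x"), ("Friends", "z", "Maya")))
# Without renaming apart, unification fails as described in the text:
print(unify(("Friends", "Rita", "x"), ("Friends", "x", "Maya")))  # None
```

The second call fails for exactly the reason given earlier: x would have to be bound to both "Rita" and "Maya" at once.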
Importance of Unification
Knowledge representation plays a crucial role in artificial intelligence (AI) systems. This section
explains how rules can be used to represent knowledge effectively and covers key concepts like
declarative and procedural knowledge, logic programming, reasoning techniques, matching
algorithms, and conflict resolution.
1. Declarative Knowledge:
∀x Bird(x) → Feathers(x)
2. Procedural Knowledge:
Comparison:
Logic Programming
Logic programming is a paradigm where logical assertions are treated as programs. Prolog is an
example of logic programming that uses Horn clauses and has a fixed control strategy.
Key Features:
1. Backward Reasoning: Starts with the goal and works backward to find supporting
evidence.
2. Steps in Prolog's Control Strategy:
o Start with the problem statement (goal).
o Check facts and rules to prove the goal.
o Use unification to match variables.
o Apply depth-first search and backtracking when necessary.
Example:
1. Forward Reasoning:
o Starts from initial facts and applies rules to derive new facts until the goal is
reached.
o Application: Useful when new facts evolve, e.g., monitoring systems.
2. Backward Reasoning:
o Starts from the goal and works backward to find facts/rules that support the goal.
o Application: Problem-solving scenarios where the goal is well-defined.
When preconditions do not match the current state directly, matching algorithms like
unification are employed to find consistency.
Rete Algorithm
Example: Rule:
P1(X, 5)
The algorithm evaluates which facts match the rules efficiently by propagating data across a
structured network.
1. Approximate Matching:
o Handles cases where exact matches are not possible (e.g., speech recognition with
noise).
o Example: AI systems like ELIZA use partial pattern matching to simulate
meaningful interactions.
2. Complex Matching:
o Involves rules that infer properties not explicitly mentioned in the current state.
o Example: Matching predicates like has-strips and has-spots based on structural
similarity.
Conflict Resolution
In rule-based systems, conflict resolution determines the order of rule execution when
multiple rules are applicable.
Applications
Natural Language Processing: ELIZA uses pattern matching for interactive dialogues.
Problem Solving: Forward and backward reasoning in AI systems.
Monitoring Systems: Forward reasoning to handle evolving facts.
Rule-Based Systems: Efficient knowledge management using Rete Algorithm.
Summary:
Rules play a vital role in knowledge representation, enabling systems to infer and reason
effectively.
Declarative and procedural knowledge offer distinct approaches, each suited to specific
tasks.
SEMANTIC NETWORKS
Semantic Network
Representation Example
ISA(Batsman, Cricketer)
Instance(Tendulkar, Cricketer)
Team(Tendulkar, India)
Frame Systems
Frame systems extend semantic networks for structured knowledge representation. Frames are
collections of attributes or slots representing real-world concepts.
For "Tendulkar":
Instance: Batsman
Runs: 15500
Team: India
Types of Inference
Categories of Inference
1. Deduction:
o Derives specific conclusions from general facts or axioms.
o Example:
Applications
Types of Reasoning
Hypothetical Reasoning
Analogical Reasoning
Summary
Uncertainty is a fundamental aspect of the real world. To address it intelligently, one must apply
logic and take appropriate actions. Uncertainty arises due to various factors, such as omitted
data, unavailability of complete data, or partial availability of events. Probability theory provides
a structured approach to handle such uncertainties.
Bayesian Probability
Bayesian probability predicts the likelihood of events based on prior knowledge and updates
beliefs with new evidence. It is particularly useful in uncertain scenarios, forming an extension
of logic to reason with uncertain statements.
Example
Predicting whether a person will be selected for a cricket team depends on factors such as
matches played, skills, and role (e.g., all-rounder or batsman). Bayesian probability models these
relationships for informed decision-making.
Proposition: A sentence that is either true or false. For example, "Is the batsman out?"
can take values “<true, false>”.
Hypothesis: An educated guess or explanation, often expressed as "If A occurs, then B."
o Antecedent: The condition (A).
o Consequent: The outcome (B).
o Example: "If a prisoner learns skilled work in prison, then they are less likely to
reoffend."
Probability
Probability represents the likelihood of an event occurring, with values between 0 and 1.
Types of Probability
1. Classical Probability:
o Probability of an event A:
2. Joint Probability:
o Probability of multiple events occurring together:
3. Marginal Probability:
o Unconditional probability of an event A, irrespective of event B:
4. Prior Probability:
o Probability based on prior belief without observed data.
5. Conditional Probability:
o Probability of event A, given event B has occurred:
o If A and B are independent:
o If not independent:
6. Posterior Probability:
o Probability after considering relevant evidence.
o Example: A player's prior probability of selection is revised once evidence of an injury is observed; the revised value is the posterior probability.
Bayes' theorem allows inference of probabilities for hypotheses given observed data or evidence,
forming the basis for Bayesian inference.
Bayesian Inference
Bayesian inference is a statistical method for reasoning about an unknown situation. It involves
collecting evidence that is consistent or inconsistent with a hypothesis. As more evidence is
gathered, the degree of belief in the hypothesis changes, much as in ordinary reasoning. With
enough evidence, the hypothesis can be accepted or rejected. The equation is:
P(H|E) = P(E|H)P(H)/P(E)
Here, P(H) is the prior probability, P(E|H) is the likelihood, P(E) is the marginal probability, and
P(H|E) is the posterior (conditional) probability. The theorem can also be mapped to a
cause-and-effect problem or a diagnostic one.
To make it clearer, let us take an example. Let e be the proposition 'person X has elbow pain' and
t the proposition 'person X has tennis elbow'. To derive a diagnosis, the mapping is:
P(t|e) = P(e|t)P(t)/P(e)
Bayesian inference starts from the initial belief in the hypothesis and revises the degree of
belief as evidence becomes available.
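Bayes' theorem for this diagnosis can be computed numerically. The probability values below are invented purely for illustration, not taken from the text.

```python
# Hypothetical numbers for the tennis-elbow diagnosis (invented for illustration).
p_t = 0.01          # prior P(t): person has tennis elbow
p_e_given_t = 0.9   # likelihood P(e|t): elbow pain given tennis elbow
p_e = 0.05          # marginal P(e): elbow pain overall

# Posterior P(t|e) = P(e|t) P(t) / P(e)
p_t_given_e = p_e_given_t * p_t / p_e
print(round(p_t_given_e, 3))  # 0.18
```

Even with a strong likelihood, the low prior keeps the posterior modest, which is the characteristic behaviour of Bayesian updating.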
Belief Network
A belief network or a Bayesian network is a probabilistic graphical model that represents a set of
random variables and their conditional dependencies through a directed acyclic graph (DAG). In
the graph:
The dependent node of a node is called the child node, whereas the one on which it is dependent
is the parent node. The directed edge points to the child node to represent the dependency. The
parent node is called the cause, and the child node is called the effect. The nodes that are not
connected are independent of each other. Every node is associated with a probability function,
which takes values of the parent function and gives the probability of the variable it represents.
A Bayesian network essentially provides the entire description of a domain. The network is seen
as a joint probability distribution. Each entry in this probability distribution is represented by
the product of the appropriate elements of the conditional probability table (CPT).
Example:
Consider a hypothetical example to understand the concept. We need to evaluate the probability
of selection of Ram or Shyam with regard to the coach. A Bayesian network is used to represent
this relationship.
In the network:
The node 'Coach is Milind' does not have any parent node. Its value represents the prior
probability, taken from past data.
Let C = Coach, S = Shyam selected, R = Ram selected.
These computations are marginal probabilities. Now, let us apply Bayes' theorem. Suppose we
know that Shyam is selected but do not know whether the coach is Milind. The computation will
be:
This gives the probability that the coach is Milind. This information can further be used to decide
whether Shyam is selected. This method is known as propagation, where evidence helps
propagate probabilities.
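The coach example can be sketched numerically as a two-node network C → S. The CPT values below are invented for illustration; the real example would take them from past data.

```python
# Sketch of the coach network: C = "coach is Milind", S = "Shyam selected".
p_c = 0.6                              # prior P(C), assumed from past data
p_s_given_c = {True: 0.8, False: 0.3}  # CPT: P(S | C)

def joint(c, s):
    """P(C=c, S=s) as the product of the appropriate CPT entries."""
    pc = p_c if c else 1 - p_c
    ps = p_s_given_c[c] if s else 1 - p_s_given_c[c]
    return pc * ps

# Marginal P(S): sum the joint distribution over both values of C.
p_s = joint(True, True) + joint(False, True)
# Propagation via Bayes: P(C | S) = P(C ∧ S) / P(S)
p_c_given_s = joint(True, True) / p_s
print(round(p_s, 3), round(p_c_given_s, 3))  # 0.6 0.8
```

This is propagation in miniature: observing the evidence S raises the belief that the coach is Milind from the prior 0.6 to the posterior 0.8.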
Conclusion