
21CSC206T – ARTIFICIAL INTELLIGENCE

UNIT IV
KNOWLEDGE REPRESENTATION AND REASONING
Overview

Knowledge representation and reasoning are essential for intelligent agents to make decisions
and take actions based on the information they acquire from the environment. Representing
knowledge accurately and effectively is critical for problem-solving and decision-making
processes. The reasoning process leverages this knowledge to derive conclusions and make
informed judgments. Logic serves as the foundation for making the knowledge usable and
enabling reasoning.

Key Concepts

What is Knowledge?

 Knowledge consists of patterns and associations derived from data or information.


 It aids decision-making and problem-solving in day-to-day and complex scenarios.
 Example: A teacher judging a student’s performance based on previous data or inputs
from other teachers.

Reasoning and Logic

 Reasoning connects available knowledge to derive outcomes or judgments.


 Logic is the framework that ensures knowledge is represented systematically and utilized
effectively.

Knowledge Representation (KR)

Definition

KR is about systematically organizing and representing facts, enabling problem-solving. It
supports reasoning processes to draw conclusions from available data.

Properties of Effective KR

 Adequacy in Representation: Covers all relevant knowledge for the domain.


 Adequacy in Inferring: Allows manipulation of data to derive new information.

Dr.Padmapriya G, Dr.Deeba K, Dr.Aruna M, Dr.Arthi B, Dr. Anto Arockia Rosaline R 1|Page



 Efficiency in Acquisition: Adapts and incorporates new information incrementally.

Approaches to KR

Relational Knowledge Structure: Stores facts in relational databases, offering weak inference
capabilities.

Inheritable Knowledge Structure: Uses hierarchies for mapping inheritance relationships (e.g.,
ISA relationships).

Inferential Knowledge Structure: Employs first-order predicate logic for combining and
utilizing knowledge.

Procedural Knowledge Structure: Represents knowledge as a sequence of steps, often
implemented using programming languages like LISP or production rules.

Semantic Networks and Frames: Represent relationships between concepts and their
attributes.

Reasoning and Representation Process

Refining Facts

 Knowledge evolves with additional data over time.


 Relationships among data points are established through reasoning.

Mapping Knowledge

 Effective KR maps relevant information into a system that supports decision-making.


 Examples include relational databases, hierarchies, or procedural steps.

Issues in Knowledge Representation

Information Mapping

 Determining what information to represent and how to represent it.


 Ensuring the representation aligns with reasoning processes for accuracy.

Attributes and Relationships

 Identifying key attributes and their relationships.


 Capturing inheritance and ISA relationships for logical conclusions.

Granularity

 Deciding the depth of representation (e.g., splitting facts for accessibility).


 Balancing simplicity with comprehensiveness.

Object Representation

 Using logic to handle sets, exceptions, and membership criteria.


 Employing extensional (listing members) or intensional (rule-based) definitions.

Structure Selection

 Choosing an appropriate structure for specific problems.


 Ensuring flexibility for revisions and accurate outcomes.
 Methods include using significant keywords, pointers, or problem-specific clues.


Summary

Knowledge representation and reasoning are interconnected processes critical for intelligent
agents. Effective KR relies on logical, structured approaches to represent and manipulate
knowledge, enabling agents to adapt and make decisions. While challenges exist in selecting and
designing KR systems, addressing issues like granularity, relationships, and structure selection
ensures robust solutions for problem-solving.

KNOWLEDGE-BASED AGENTS

Definition and Overview

Agent: Acts based on its environment and requires knowledge to perform actions effectively.

Knowledge Base (KB): A structured representation of information that supports an agent's
actions.

Knowledge Representation Language: Uses sentences to represent facts about the world.

Operating Principle: An agent's actions depend on its perception of the environment and the
knowledge stored in the KB.

Working of Knowledge-Based Agents

Percept Acquisition:

The agent perceives the environment (input: percept p at time t). This percept is recorded in the
KB as knowledge using a sentence representation.

Decision Making:

The agent queries the KB to decide an appropriate action (a). The action is based on logical
reasoning derived from the KB.

Knowledge Update:

After executing the action, the KB is updated to include information about the action taken.
Algorithm for Knowledge-Based Agents


The process can be summarized in three steps: record the percept in the KB, query the KB for an appropriate action, and record the action taken.
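These three steps can be sketched as a generic agent loop. A minimal Python sketch, where `SimpleKB` and its canned `ask()` response are hypothetical stand-ins for a real inference engine:

```python
class SimpleKB:
    """Toy knowledge base: stores sentences; ask() would normally do inference."""
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        return "forward"  # placeholder: a real KB derives the action logically


class KnowledgeBasedAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0  # time step counter

    def step(self, percept):
        self.kb.tell(("percept", tuple(percept), self.t))  # 1. record the percept
        action = self.kb.ask(("action", self.t))           # 2. query for an action
        self.kb.tell(("action", action, self.t))           # 3. record the action taken
        self.t += 1
        return action


agent = KnowledgeBasedAgent(SimpleKB())
print(agent.step(["Stench", None, None, None, None]))  # forward
```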

Architecture

Actions are based on knowledge rather than random decisions.

Three levels of operation:

Knowledge Level: Information the agent knows and its goals.

Logical Level: Representation of knowledge in logical language.

Implementation Level: The physical representation in which logical sentences are stored and executed; details at this level do not affect the knowledge level.

Task Acceptance:

 Accepts tasks through well-defined goals.


 Learns and adapts flexibly to changing situations.

Knowledge Initialization

Declarative Knowledge:

Predefined rules embedded in the system. Explicitly outlines what actions to take in specific
scenarios.

Procedural Knowledge:

Inference-based decision-making. Derives actions by analyzing the situation.


Summary

Knowledge-based agents leverage their structured knowledge base to make informed, logical
decisions. They follow a systematic process of perceiving, acting, and updating their knowledge.
By employing declarative and procedural knowledge, these agents adapt flexibly to dynamic
environments and meet specific goals effectively.

THE WUMPUS WORLD ENVIRONMENT

The Wumpus Computer Game

The agent explores a cave with rooms connected by passageways. The agent's task is to
find the gold, return to the starting position ([1,1]), and climb out of the cave without being eaten
by the Wumpus or falling into a pit.

Key Elements of the Game:

Wumpus: A dangerous creature that lurks in a random room and eats any agent that enters its
room.

Pits: Some rooms contain bottomless pits that will trap any agent that enters.

Gold: Occasionally, there is a heap of gold in a room, which the agent must collect.

Objective: Collect the gold and exit the cave without encountering dangers (Wumpus or pits).


Agent’s Starting Position:

The agent always starts at position [1,1].

The agent has the following sensory inputs:

Stench: Perceived in the current room and adjacent rooms if the Wumpus is in the room or in an
adjacent room.

Breeze: Perceived in adjacent rooms to the agent’s current room if there is a pit.

Glitter: Perceived if the agent is in the room containing the gold.

Bump: Perceived when the agent walks into a wall.

Scream: Heard everywhere in the cave if the Wumpus is killed.

The percept is represented as a five-symbol list. For example, if there is a stench and a breeze but
no glitter, bump, or scream, the percept would be:

[Stench, Breeze, None, None, None]

Possible Actions of the Agent:

Go Forward: Move one square forward.

Turn Right: Rotate 90 degrees clockwise.

Turn Left: Rotate 90 degrees counterclockwise.

Grab: Pick up an object (e.g., gold) in the current square.

Shoot: Fire an arrow in a straight line in the direction the agent is facing. The arrow kills the
Wumpus if it hits it. The agent has only one arrow, so only the first shoot attempt will have an
effect.

Climb: Used to exit the cave. Effective only at the start square [1,1].

Die: Happens automatically if the agent enters a square with a pit or the Wumpus.


Performance Measure:

Gold: +1000 points for collecting gold.

Death: -1000 points for falling into a pit or being eaten by the Wumpus.

Per Step: -1 point for each step taken.

Arrow Usage: -10 points for using the arrow (whether successful or not).
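The performance measure above can be computed with a small helper (a sketch; the function name and parameters are illustrative, not part of the standard formulation):

```python
def episode_score(got_gold, died, steps, arrows_used):
    """Score one Wumpus World episode under the stated performance measure."""
    score = -1 * steps          # -1 point per step taken
    score -= 10 * arrows_used   # -10 points per arrow fired
    if got_gold:
        score += 1000           # +1000 for collecting the gold
    if died:
        score -= 1000           # -1000 for a pit or the Wumpus
    return score

print(episode_score(got_gold=True, died=False, steps=20, arrows_used=1))  # 970
```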

Environment Features:

The Wumpus World cave consists of discrete rooms connected by doors. Rooms adjacent to the
Wumpus emit a stench, while those near pits have a breeze. A glitter indicates the presence of
gold in a square. The agent can kill the Wumpus by shooting an arrow if facing it, pick up gold
using the grab action when in the same square, or drop it with the release action. The placement
of the Wumpus and pits is randomized at the start of the game. The agent perceives its
environment through five sensors: stench, breeze, glitter, bump, and scream. It can perform
various actions using actuators, including turning left or right, moving forward, grabbing,
releasing, and shooting.

Wumpus World Characterization:

Fully Observable: No, the agent only perceives local information in the current room and its
adjacent rooms.

Deterministic: Yes, the outcomes are exactly specified for each action.

Static: Yes, the Wumpus and pits do not move throughout the game.

Discrete: Yes, the world is composed of a grid of rooms.

Single-Agent: Yes, the agent is the only active participant in the world. The Wumpus is a "natural
feature" and does not actively interact with the agent.

Summary:

The Wumpus World is a simple, discrete, and deterministic environment in which an agent
explores a cave, faces challenges like the Wumpus and pits, and aims to collect gold and exit the


cave without dying. The agent's behavior is guided by sensory inputs and actions, with a
performance measure based on its success in completing tasks and avoiding danger.

THE WUMPUS WORLD - SOLUTION

The Wumpus Computer Game

The agent explores a cave with rooms connected by passageways. The agent's task is to
find the gold, return to the starting position ([1,1]), and climb out of the cave without being eaten
by the Wumpus or falling into a pit.

The knowledge base (KB) of the agent in the Wumpus World consists of the predefined
rules governing the environment and the initial percept of "nothing" in the starting square [1,1].
This initial knowledge is represented by the Boolean percept feature values <0, 0, 0, 0, 0>,
indicating no stench, breeze, glitter, bump, or scream at T=0. From this, the agent infers that the
adjacent squares [2,1] and [1,2] are safe, adding these as propositions to the KB.


At T=1, logical reasoning reveals more insights: the breeze perceived in [2,1] suggests the
presence of a pit in either [2,2] or [3,1]. The stench perceived in [1,2] means the Wumpus is in
[1,1], [2,2], or [1,3]; it cannot be in [1,1] (the starting square) or in [2,2] (no stench was
perceived in [2,1]). Thus, the Wumpus must be in [1,3]. Further, the absence of a breeze in [1,2]
means there is no pit in [2,2]. Consequently, the pit must be in [3,1].

By T=3, the KB allows the agent to definitively conclude that there is a pit in [3,1]. This
logical reasoning process exemplifies how the agent uses its percepts and knowledge to deduce
facts about the world. The KB ensures that the conclusions drawn are valid in all possible
configurations of the Wumpus World consistent with the known information at time T=3. This
reflects the essence of logical reasoning: deriving truths based on what is known and eliminating
alternative possibilities.


Summary:

The Wumpus World is a simple, discrete, and deterministic environment in which an agent
explores a cave, faces challenges like the Wumpus and pits, and aims to collect gold and exit the
cave without dying. The agent's behavior is guided by sensory inputs and actions, with a
performance measure based on its success in completing tasks and avoiding danger. The
knowledge base will be updated each and every time when the agent is exploring the rooms in
the cave.

LOGIC

Understanding Logic

Logic is the study of reasoning principles and how they assist in decision-making. It provides a
structured framework for reasoning, encompassing syntax, semantics, and inference procedures.
The syntax defines how logical statements are represented, while semantics deals with the
meaning of those statements, which can be evaluated as true or false. Logical reasoning connects
these components to infer new information or decide the truth of statements.

Syntax and Semantics in Logic

Syntax refers to the formal structure of logical statements, such as symbols, formulas, or
sentences. Semantics, on the other hand, concerns the interpretation and truth values of these
sentences. Suppose x and y are two sentences whose relationship is to be determined, and write:

x ⊨ y

This states that x entails y: in every model in which x is true, y is also true. For example, suppose

KB = {p = 1}

Then, KB ⊨ {p + 1 = 2}


Example in the Wumpus World

In the Wumpus World, the agent combines percepts (e.g., breezes, stenches) with known rules to
build a knowledge base (KB). For instance, if there is a breeze at [2,1] and nothing at [1,1], the
agent uses logical reasoning to infer the possible pit locations: [2,2] or [3,1] (since [1,1] is
already known to be safe). Model checking is employed to evaluate all possible configurations of
pits, ensuring the reasoning aligns with the KB. Similarly, if no breeze is detected at [1,1], the KB
can infer that a pit does not exist in [1,2] or [2,1]. Logical reasoning ensures soundness: if the KB
is true, any sentence derived from it must also be true.


Propositional Logic

Propositional logic is the simplest form of logic used in reasoning, where statements are atomic
propositions that can either be true or false. Propositional logic is fundamental in AI for tasks
such as decision-making and modeling, providing a basis for more complex reasoning systems
like first-order logic.


SYNTAX, SEMANTICS AND KNOWLEDGE BASE BUILDING

Syntax in Propositional Logic

In propositional logic, sentences can be categorized into two types: simple (atomic) and
compound sentences. A simple sentence consists of a single propositional symbol, such as P, Q, or
R, which represents a proposition that can either be true or false. There are two primary truth
values: true (1) and false (0). The syntax defines how these symbols and operators can be
arranged to form valid sentences.
The five main logical operators in propositional logic are:
1. Negation (~ or ¬): It negates the truth value of a proposition.
2. Conjunction (∧): It represents "and", meaning both propositions must be true for the
result to be true.
3. Disjunction (∨): It represents "or", meaning at least one proposition must be true for the
result to be true.
4. Implication (→): Represents "if... then", where the truth of the first proposition ensures
the truth of the second.
5. Biconditional (↔): Represents "if and only if", meaning both propositions must have the
same truth value.
Syntax also involves precedence rules, which specify the order of operations for logical
connectives, with negation having the highest precedence. Parentheses are used to clarify and set
the precedence of operations.
Semantics in Propositional Logic

Semantics in propositional logic determines the truth value of a sentence based on the truth
values of the propositions involved. For simple sentences, determining the truth value is
straightforward. However, for compound sentences, the truth value depends on the individual
propositions and how they are connected using logical operators.
When evaluating compound sentences, the truth value is determined by the truth values of the
components (propositions). The number of possible models or combinations of truth values
grows exponentially with the number of propositions. For example, with two propositional


symbols, there are 4 possible combinations (2²); with three symbols, there are 8 combinations
(2³).
Truth Tables

Truth tables are used to evaluate the truth value of compound sentences. For example, if P and Q
are two propositions, the truth table for logical operators can be constructed to determine the
resulting truth value.
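A truth table can be enumerated mechanically. The sketch below evaluates the biconditional B ↔ (P ∨ Q) in all 2³ models:

```python
from itertools import product

symbols = ["B", "P", "Q"]
# Enumerate all 2^3 = 8 truth assignments (models) over the three symbols.
for values in product([False, True], repeat=len(symbols)):
    model = dict(zip(symbols, values))
    # biconditional: B <-> (P v Q) is true exactly when both sides agree
    result = model["B"] == (model["P"] or model["Q"])
    print(model, result)
```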

For example, if we have a breeze at [1,1], we can represent this as B. The existence of a pit at [2,1]
or [1,2] can be represented as P and Q, respectively. In propositional logic, we express this
relationship as:
B↔(P∨Q)
Note that implication (→) alone captures only one direction of this relationship; the biconditional
(↔) is needed to state that a breeze occurs if and only if an adjacent square contains a pit.
Building a Knowledge Base (KB)

The first step in building a Knowledge Base (KB) for a specific environment (e.g., the Wumpus
World) is to decide on the propositions that represent facts about the environment. Once the
propositions are defined, logical rules are created using these propositions and logical operators
to represent the relationships and behaviors in the environment.
For example, consider the Wumpus World where P[i,j] represents the proposition that there is
a pit in room [i,j], and B[i,j] represents the proposition that there is a breeze in room [i,j].
Some rules in the KB might look like:
 Rule 1: ¬P[1,1] — This rule states that there is no pit in room [1,1].
 Rule 2: B[1,1]→(P[1,2]∨P[2,1]) This rule indicates that if there is a breeze in room [1,1],
then a pit must exist in either room [1,2] or room [2,1].


The KB is constructed by combining all the rules using conjunctions, i.e., the final KB is the
conjunction of all the individual rules:
KB=Rule 1∧Rule 2∧…∧Rule n
This ensures that all rules in the KB are considered true and valid, and the agent can use this KB
to make logical inferences about the world.
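This construction can be mirrored directly in code. A minimal sketch, where each rule is a Boolean function of a model and the proposition names (`P11`, `B11`, etc.) are illustrative:

```python
# Each rule maps a model (dict: proposition -> bool) to True/False.
rule1 = lambda m: not m["P11"]                            # Rule 1: ¬P[1,1]
rule2 = lambda m: (not m["B11"]) or m["P12"] or m["P21"]  # Rule 2: B[1,1] -> (P[1,2] v P[2,1])
rules = [rule1, rule2]

# The KB is the conjunction of all its rules.
kb = lambda m: all(rule(m) for rule in rules)

model = {"P11": False, "B11": True, "P12": True, "P21": False}
print(kb(model))  # True: this model satisfies every rule in the KB
```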

INFERENCES – REASONING PATTERNS

Inference

Once the Knowledge Base (KB) is represented, the next step is to infer conclusions based on the
information stored in the KB. Inference involves determining whether a given sentence (x)
follows logically from the KB. This is expressed as:

KB⊨x

This indicates that x is true in all models where the KB is true.

Model Enumeration and Efficiency

Inference often involves enumerating all possible models, where a "model" represents a specific
assignment of truth values to the propositions in the KB. The number of possible models
increases exponentially with the number of propositions, which can significantly affect
efficiency.

For example:

 A truth table is used to enumerate all possible models for the propositions, rules,
and the KB.
 If ∼P[1,2] is true in every model of the KB, we can infer that there is no pit in room [1,2].
 However, for other cases (e.g., P[3,1]), if there exists even one model of the KB in which
the sentence is false, the inference cannot be made conclusively.


Steps for Deciding Entailment

Given KB, x, and a list of symbols:

1. If the list of symbols is empty:

o Check whether the current model satisfies the KB.
o If it does, check whether x is also true in that model.
o If it does not, the model places no constraint on the query and can be skipped.

2. If symbols remain:
o Pick the next symbol and recursively extend the partial model twice: once with the
symbol true, once with it false.
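The procedure above amounts to enumerating every complete model. A compact sketch (checking all models directly rather than recursively), where the KB and query are Boolean functions of a model:

```python
from itertools import product

def tt_entails(kb, query, symbols):
    """Return True iff `query` holds in every model that satisfies `kb`.
    Both are functions from a model (dict: symbol -> bool) to bool."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # counterexample: KB true but query false
    return True

# KB: ¬P[1,1], B[1,1] <-> (P[1,2] v P[2,1]), and no breeze perceived at [1,1]
kb = lambda m: (not m["P11"]) and (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
syms = ["P11", "B11", "P12", "P21"]
print(tt_entails(kb, lambda m: not m["P12"], syms))  # True: KB entails ¬P[1,2]
```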

Key Concepts

 Tautology:
A sentence is true in all possible models, irrespective of the truth values of its
propositions.
Example: (P∨∼P).
 Contradiction:
A sentence is always false, irrespective of the truth values of its propositions.
Example: (P∧∼P).
 Satisfiability:
A sentence is satisfiable if it is true in at least one model.

Example: If x is true in model m, then m satisfies x.
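These three notions can be checked with the same model enumeration (a sketch; sentences are Boolean functions of a model):

```python
from itertools import product

def classify(sentence, symbols):
    """Classify a sentence by evaluating it in every possible model."""
    results = [sentence(dict(zip(symbols, values)))
               for values in product([False, True], repeat=len(symbols))]
    if all(results):
        return "tautology"       # true in all models
    if not any(results):
        return "contradiction"   # false in all models
    return "satisfiable"         # true in at least one model (but not all)

print(classify(lambda m: m["P"] or not m["P"], ["P"]))    # tautology
print(classify(lambda m: m["P"] and not m["P"], ["P"]))   # contradiction
print(classify(lambda m: m["P"] and m["Q"], ["P", "Q"]))  # satisfiable
```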

Refutation (Proof by Contradiction)

To determine the validity or satisfiability of a sentence:

 A sentence x is valid if ∼x is not satisfiable.


 A sentence x is satisfiable if ∼x is not valid.


For example:

x ⊨ y ⟺ (x∧∼y) is unsatisfiable.

This is known as proof by refutation or contradiction, where an assumption is proven false by


showing it leads to a contradiction.

Reasoning Patterns in Propositional Logic

Reasoning patterns involve applying inference rules to derive conclusions. These rules, also
known as patterns of inference, eliminate the need for explicitly enumerating models and
ensure sound reasoning.

1. Modus Ponens:
If A→B and A are true, then B can be inferred.

A→B, A ⊨ B

Example:

o Rule: (wumpus_ahead∧wumpus_alive)→shoot
o Fact: wumpus_ahead∧wumpus_alive
o Inference: shoot.
2. And-Elimination:
From a conjunction, any individual conjunct can be inferred.

(A∧B) ⊨ A and (A∧B) ⊨ B

Example:

o Fact: wumpus_ahead∧wumpus_alive.
o Inference: wumpus_alive.


Significance of Reasoning Patterns

These rules, such as Modus Ponens and And-Elimination, play a critical role in eliminating the
need to generate all possible models. Instead, they provide a structured way to derive sound and
logical conclusions directly from the KB.

Summary

Inference is central to reasoning in propositional logic. By understanding key reasoning patterns


and concepts like tautology, contradiction, and satisfiability, we can make logical conclusions
efficiently. Using rules like Modus Ponens and And-Elimination ensures that our inferences are
both sound and valid without exhaustive model enumeration.

RESOLUTION

Resolution and Completeness

 Completeness in Search Algorithms:

A search algorithm is considered complete if it guarantees reaching any reachable goal.
Similarly, an inference procedure is complete only if its rules are adequate; if the inference rules
are insufficient, some entailed conclusions can never be derived, no matter how the search
proceeds.

 What is Resolution?

Resolution is a single inference rule that provides a complete inference algorithm when
combined with a complete search algorithm.

o It allows deriving conclusions entailed by the knowledge base (KB).


o Resolution works by resolving literals between two disjunctions.


Example: Resolution in the Wumpus World

1. Scenario:
o The agent is in [1,2] after visiting [2,1] and [1,1].
o Percepts:
 Stench: Yes
 Breeze: No
2. Rules:
o Rule a: ∼B[1,2]
o Rule b: B[1,2]↔(P[1,1]∨P[2,2]∨P[1,3])
o Rule c: ∼P[2,2] (from Rules a and b)
o Rule d: ∼P[1,3] (from Rules a and b)
o Rule e: P[1,1]∨P[2,2]∨P[3,1] (from the breeze perceived earlier in [2,1])
3. Applying Resolution:
o Resolve Rule c (∼P[2,2]) with Rule e (P[1,1]∨P[2,2]∨P[3,1]): Rule f: P[1,1]∨P[3,1]
o Resolve the KB fact ∼P[1,1] with Rule f:
o Rule g: P[3,1]
o Conclusion: There is definitely a pit in [3,1].

Key Concepts

1. Literals:
o A literal is a proposition or its negation (e.g., P or ∼P).
2. Unit Resolution:
o Simplifies inference by resolving a single literal with a disjunction.
o Example: Resolving ∼P with P∨Q gives Q.
3. Refutation Completeness:
o Resolution can prove any entailed conclusion by deriving a contradiction from the
negation of the conclusion.
o Example: To prove X, show that ∼X leads to a contradiction.


Conjunctive Normal Form (CNF)

 What is CNF?
o A conjunction of disjunctions of literals.
o Example: (P∨Q)∧(∼P∨R).
 Why CNF?
o Resolution is applied only to disjunctions, so all rules must be converted to CNF.

Steps to Convert to CNF

1. Eliminate Bidirectional Implications:


o Replace A↔B with (A→B)∧(B→A).
2. Eliminate Implications:
o Replace A→B with ∼A∨B.
3. Apply Equivalence Rules:
o Use double negation and De Morgan's laws:
 ∼(∼α) ≡ α
 ∼(α∨β) ≡ ∼α∧∼β
 ∼(α∧β) ≡ ∼α∨∼β
4. Distribute OR over AND:
o Example:
 (P∨(Q∧R)) becomes (P∨Q)∧(P∨R).

Example of CNF Conversion

Given A→(B∧C):

1. Replace implication:
∼A∨(B∧C).
2. Distribute OR over AND:
(∼A∨B)∧(∼A∨C).
3. Result: The sentence is in CNF.
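The conversion steps can be sketched as a small recursive transformer over tuple-encoded formulas (an illustrative encoding covering only the connectives used above, not a general-purpose converter):

```python
# Formulas: a symbol string, or ('imp', a, b), ('and', a, b), ('or', a, b), ('not', a).
def to_cnf(f):
    if isinstance(f, str):
        return f
    op = f[0]
    if op == "imp":                     # A -> B  becomes  ~A v B
        return to_cnf(("or", ("not", f[1]), f[2]))
    if op == "not":
        a = f[1]
        if isinstance(a, str):
            return f                    # literal: nothing to do
        if a[0] == "not":               # double negation: ~~A = A
            return to_cnf(a[1])
        if a[0] == "and":               # De Morgan: ~(A ^ B) = ~A v ~B
            return to_cnf(("or", ("not", a[1]), ("not", a[2])))
        if a[0] == "or":                # De Morgan: ~(A v B) = ~A ^ ~B
            return to_cnf(("and", ("not", a[1]), ("not", a[2])))
        if a[0] == "imp":               # rewrite the implication first
            return to_cnf(("not", to_cnf(a)))
    if op == "and":
        return ("and", to_cnf(f[1]), to_cnf(f[2]))
    if op == "or":                      # distribute OR over AND
        a, b = to_cnf(f[1]), to_cnf(f[2])
        if isinstance(a, tuple) and a[0] == "and":
            return ("and", to_cnf(("or", a[1], b)), to_cnf(("or", a[2], b)))
        if isinstance(b, tuple) and b[0] == "and":
            return ("and", to_cnf(("or", a, b[1])), to_cnf(("or", a, b[2])))
        return ("or", a, b)

# A -> (B and C)  becomes  (~A v B) and (~A v C)
print(to_cnf(("imp", "A", ("and", "B", "C"))))
```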


Resolution Inference Rule in CNF

 If A∨P and ∼P∨B are in CNF, resolve them to get:

A∨B

 Example:
o P∨Q
o ∼P∨R
o Resolution: Q∨R.
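The rule can be sketched with clauses represented as frozensets of literal strings, where `"~P"` denotes ∼P (an illustrative encoding):

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of literal strings)."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            # drop the complementary pair, keep the rest of both clauses
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

# (P v Q) resolved with (~P v R) yields one resolvent: (Q v R)
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))
```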

Summary

Resolution is a powerful and complete inference rule used for logical reasoning. By converting
knowledge into CNF and applying resolution systematically, we can derive any valid conclusion
from the KB. It eliminates the need for exhaustive model enumeration and provides a basis for
automated reasoning in propositional logic.

RESOLUTION ALGORITHM

Refutation involves proving a statement by contradiction. To demonstrate that a query y is
entailed by a knowledge base (KB), we add the negation of the query (∼y) to the KB and show
that this combined set of statements leads to an inconsistency or contradiction. This indicates
that KB ⊨ y.

The resolution algorithm builds upon this concept by systematically applying the resolution rule
in conjunction with a complete search process. The algorithm works as follows:

1. Convert to Conjunctive Normal Form (CNF): First, the KB and ∼y are transformed into
CNF, a standard logical form consisting of a conjunction of disjunctions of literals.
2. Derive Resulting Clauses: The CNF is broken down into individual clauses that
represent simpler logical expressions.


3. Apply the Resolution Rule: The resolution rule is applied to pairs of clauses containing
complementary literals, creating new clauses.
4. Resolve Complementing Pairs: The complementary pairs are resolved to generate new
rules or clauses that logically follow from the KB.
5. Update the Knowledge Base: Newly derived clauses are added to the KB if they are not
already present.
6. Repeat the Process: Steps 3–5 are repeated until one of the following conditions occurs:
o (i): No new clauses can be added, indicating the KB does not entail y.
o (ii): An empty clause is generated, which signifies a contradiction, proving that
KB⊨y.

The resolution algorithm provides a complete, sound, and systematic approach for reasoning in
propositional and first-order logic. Its ability to derive conclusions by detecting contradictions
makes it a fundamental tool in logic-based systems, automated theorem proving, and artificial
intelligence.
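The loop above can be sketched for propositional clauses (a sketch restricted to single-literal queries; clauses are frozensets of literal strings with `"~"` for negation):

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literal strings)."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def pl_resolution(kb_clauses, query):
    """Refutation: add the negated (single-literal) query, resolve to a fixpoint."""
    neg = frozenset({query[1:] if query.startswith("~") else "~" + query})
    clauses = {frozenset(c) for c in kb_clauses} | {neg}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is not b:
                    for r in resolve(a, b):
                        if not r:
                            return True   # empty clause: contradiction, so KB |= query
                        new.add(r)
        if new <= clauses:
            return False                  # no new clauses: KB does not entail query
        clauses |= new

# KB in CNF: (~P v Q) and (P), i.e. P -> Q together with P. Does it entail Q?
print(pl_resolution([{"~P", "Q"}, {"P"}], "Q"))  # True
```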

FORWARD AND BACKWARD CHAINING

Overview

Resolution provides a complete inference mechanism in logic, but practical applications often
rely on simpler and more efficient methods, such as Forward Chaining and Backward Chaining,
especially when the knowledge base (KB) is restricted to Horn Clauses.

Horn Clauses

A Horn Clause is a disjunction of literals with at most one positive literal. These clauses are
commonly used in knowledge representation and inference due to their simplicity and efficiency.

Example of a Horn Clause:

¬S[1,1] ∨ ¬Breeze ∨ B[1,1]

(Here, S[1,1] represents a stench perceived in square [1,1]. The clause has at most one positive
literal, B[1,1], so it can be read as a rule: if S[1,1] and Breeze, then B[1,1].)

Non-Horn Clause Example:


¬B[1,1] ∨ P[2,1] ∨ P[1,2] (two positive literals, so it is not a Horn clause)

Types of Horn Clauses:

Definite Clause: Contains exactly one positive literal (called the head) and one or more negative
literals (called the body).

Example: ¬P[2,2] ∨ Q[1,3] (can be written as: If P[2,2], then Q[1,3]).

Fact: A definite clause with no negative literals, i.e., a single positive literal.

Example: P[3,1]

Integrity Constraint: A clause with no positive literal, expressing a condition that must not be
violated; often used to detect errors.

Example: ¬P[3,1] ∨ ¬P[1,3] (represents: P[3,1] and P[1,3] cannot both be true).

Inference Mechanisms

Forward Chaining and Backward Chaining are two algorithms used for inference, especially
when dealing with Horn Clauses. These methods aim to resolve queries by traversing the KB.

Query Example:

Determine whether any position in a room has a pit.

Forward Chaining

Definition:

Forward chaining starts with known facts and derives conclusions by iteratively applying rules in
the KB until the query is resolved or no further inference can be made.

Steps of Forward Chaining:

1. Start with known facts: Identify the facts provided in the KB.
2. Infer conclusions: If the premise of a rule in the KB is true, add the conclusion of the rule
to the set of known facts.
3. Repeat: Continue this process until:


 The query is resolved as true or false, or


 No further inference is possible.

Characteristics:

AND-OR Graph: The approach can be visualized as an AND-OR graph, where inferences are made
only when all conjuncts (AND conditions) are satisfied.

Time Complexity: The method operates in linear time relative to the size of the KB.

Approach: Data-driven (i.e., driven by available facts).

Example:

Given:

1. If A and B, then C.
2. If C, then D.

Facts: A and B are true.

Inference:

1. A and B imply C.
2. C implies D.

Thus, D is true.
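The two-rule example above can be run with a short data-driven loop (an illustrative sketch; rules are assumed to be (premises, conclusion) pairs over propositional symbols):

```python
def forward_chain(rules, facts, query):
    """Data-driven inference: fire every rule whose premises are all
    known, add its conclusion, and repeat until nothing new appears."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)   # premises satisfied: add conclusion
                changed = True
    return query in known

rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(forward_chain(rules, {"A", "B"}, "D"))  # True
```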

BACKWARD CHAINING

Definition:

Backward chaining begins with the query and works backward to infer the facts that would
support the query. It focuses only on facts and rules relevant to the query.

Steps of Backward Chaining:

1. Start with the query: Determine if the query can be resolved directly as true or false.
2. Identify implications: Find rules in the KB that can infer the query.
3. Prove premises: Check if all the premises of the identified rules are true.


 If true, the query is inferred as true.


 If not, recursively apply backward chaining on the premises.
4. Repeat until:
 The query is resolved, or
 It is determined that the query cannot be inferred from the KB.

Characteristics:

 Time Complexity: More efficient than forward chaining as it focuses only on relevant facts
and rules.
 Approach: Goal-driven (i.e., driven by the goal or query).
 Reasoning: Forms part of goal-directed reasoning.

Example:

Query: Is D true?

Given:

1. If A and B, then C.
2. If C, then D.

Inference:

1. D depends on C (from rule 2).

2. C depends on A and B (from rule 1).

3. Check A and B. If both are true, infer C. Then, infer D.
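The goal-driven procedure above can be sketched similarly (an illustrative sketch with the same assumed (premises, conclusion) rule format; the `seen` set guards against circular rules):

```python
def backward_chain(rules, facts, query, seen=None):
    """Goal-driven inference: prove `query` by proving the premises of
    some rule that concludes it."""
    seen = set() if seen is None else seen
    if query in facts:
        return True
    if query in seen:                  # avoid looping on circular rules
        return False
    for premises, conclusion in rules:
        if conclusion == query and all(
                backward_chain(rules, facts, p, seen | {query})
                for p in premises):
            return True
    return False

rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(backward_chain(rules, {"A", "B"}, "D"))  # True
```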

Summary

In practice, agents often combine forward and backward chaining to achieve efficient reasoning.
The choice of method depends on the nature of the problem:

Forward Chaining: Best when there are many known facts and few potential queries.

Backward Chaining: Ideal when the query is specific, and the KB is large.


PREDICATE LOGIC

Introduction

Predicate logic, also known as First-Order Logic (FOL), builds upon propositional logic to provide
a more expressive and powerful framework for knowledge representation and reasoning. While
propositional logic is useful, it becomes complex and inadequate for representing relationships
and objects in more intricate environments. Predicate logic overcomes these limitations,
enabling us to describe objects, their properties, and the relationships between them effectively.

Why Predicate Logic?

Propositional logic becomes cumbersome when:

 Representing relationships between multiple objects.


 Describing generalizations or complex environments.

Example:

Consider the statements:

 "All kids are naughty."


 "Suzy is a kid."
 "Therefore, Suzy is naughty."

Expressing these in propositional logic is not straightforward. Predicate logic, however, provides
the necessary structure to represent such statements accurately and draw valid conclusions.

Predicate Logic Basics

Predicate logic extends propositional logic by introducing:

1. Predicates: Represent properties of objects or relationships between objects.


o Example: Naughty(Suzy), meaning "Suzy is naughty."
2. Quantifiers: Allow generalization over objects.
o Universal Quantifier (∀): Represents "for all."
 Example: ∀x (Kid(x) → Naughty(x)), meaning "All kids are naughty."


o Existential Quantifier (∃): Represents "there exists."


 Example: ∃x (Kid(x) ∧ Naughty(x)), meaning "There exists a kid who is naughty."
3. Variables: Represent objects in a domain.
o Example: x, y, Suzy.
4. Logical Connectives: Include ∧ (AND), ∨ (OR), ¬ (NOT), → (IMPLIES).
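Over a finite universe, the two quantifiers can be evaluated directly with Python's `all` and `any` (an illustrative sketch; the universe and the extensions of Kid and Naughty are assumed for demonstration):

```python
# Hypothetical finite universe with the extensions of two predicates.
universe = {"Suzy", "Tom", "Rita"}
kid = {"Suzy", "Tom"}
naughty = {"Suzy", "Tom"}

# ∀x (Kid(x) → Naughty(x)): "All kids are naughty."
all_kids_naughty = all(x not in kid or x in naughty for x in universe)

# ∃x (Kid(x) ∧ Naughty(x)): "There exists a naughty kid."
some_kid_naughty = any(x in kid and x in naughty for x in universe)

print(all_kids_naughty, some_kid_naughty)  # True True
```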

REPRESENTING FACTS IN LOGIC: SYNTAX AND SEMANTICS

Introduction to Predicate Logic

Predicate logic, as the name suggests, represents information in terms of predicates. A predicate
describes the properties of objects or the relationships between objects. Each sentence in
predicate logic has a subject and a predicate.

Example:

Sentence: "The car is red."

 Subject: "car"
 Predicate: "is red."
 Representation: R(x) where R represents "red," and x is any object.

Predicate logic sentences are constructed from constants, predicates, functions, and variables.
These sentences can be atomic or connected with logical connectives.

Syntax of Predicate Logic

A sentence in predicate logic can be represented as:

Sentence → Atomic Sentence | Sentence Connective Sentence | Quantifier Variable, ... Sentence
| ¬Sentence

Atomic Sentence → Predicate(Terms)

Components of Predicate Logic

 Predicates: Represent relationships or properties.


Examples: Blue(x), Academic(y)

 Terms: Components within predicates that can be:


 Constants: Represent specific objects (e.g., Mary, A).
 Variables: Represent placeholders (e.g., p, q, r).
 Functions: Refer indirectly to other objects (e.g., father_of(x)).
 Connectives: Logical operators such as ∧ (AND), ∨ (OR), → (IMPLIES), and ¬ (NOT).
 Quantifiers:
 ∀ (Universal Quantifier): Represents "for all."
 ∃ (Existential Quantifier): Represents "there exists."

Examples of Representation

Simple Sentence:

 "Sita is the mother of Rohan."


 Representation: Mother(Sita, Rohan)

Complex Sentence:

 "Reeta's uncle and Rohan's daddy booked a flat."

Representation: Booked(Uncle(Reeta), Daddy(Rohan))

Function Example:

 Father(Sita) = Ramesh

Logical Connectives

Examples:

Father(Rohan) ∧ Father(Shyam)

¬Dancer(Riya)


Quantifiers

1. Universal Quantification (∀):

 Syntax: ∀x P(x)
 Example: "All kids are naughty."
o Representation: ∀x (Kid(x) → Naughty(x))
o Meaning: For every x, if x is a kid, then x is naughty.
 Keywords: "every," "each," "everyone."

2. Existential Quantification (∃):

 Syntax: ∃x P(x)
 Example: "Some people are kind."
o Representation: ∃x (Kind(x))
o Meaning: There exists at least one x such that x is kind.
 Keywords: "some," "at least one."

Universe of Discourse

The universe of discourse is the set of all objects being considered. For instance, in
mathematics, it could be the set of integers.

Instance Representation and ISA Relationships

Key Concepts

1. Instance: Indicates membership of an object in a class.


o Example: Instance(Shyam, Engineer)
 Meaning: Shyam is a member of the class "Engineer."
2. ISA Relationship: Represents class inclusion or inheritance.
o Example: ISA(Engineer, Intelligent)
 Meaning: All engineers are intelligent.


Examples:

 "Shyam is an engineer. All engineers are intelligent."


o Instance(Shyam, Engineer)
o ISA(Engineer, Intelligent)
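The Instance and ISA relationships above can be sketched as a small lookup that walks the inheritance chain (an illustrative sketch; the dictionaries are assumed single-parent hierarchies):

```python
# Hypothetical hierarchy: ISA links between classes, Instance membership.
isa = {"Engineer": "Intelligent"}    # Engineer ISA Intelligent
instance = {"Shyam": "Engineer"}     # Shyam is an instance of Engineer

def has_property(obj, prop):
    """Follow the Instance link, then walk ISA links upward."""
    cls = instance.get(obj)
    while cls is not None:
        if cls == prop:
            return True
        cls = isa.get(cls)           # move one level up the hierarchy
    return False

print(has_property("Shyam", "Intelligent"))  # True
```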

Summary

 Predicate logic is built upon propositional logic, adding expressiveness through predicates,
quantifiers, and variables.
 It enables compact representation of complex relationships and generalizations.
 The ISA and instance relationships allow effective knowledge representation with
inheritance.
 Predicate logic is ideal for reasoning in complex domains where propositional logic falls
short.

UNIFICATION AND LIFTING: INFERENCE IN FOL

Unification and lifting are concepts related to inference in First Order Logic (FOL). They
underpin logical inference, in which quantifiers are removed and the rules of the knowledge
base (KB) are reduced to propositional form so that conclusions can be drawn.

Inference Rule for Quantifiers

Two common rules are used for inference from quantifiers:

1. Rule of Universal Instantiation (UI):


o Example: "All students who are kind are intelligent."

Representation: ∀x (Student(x) ∧ Kind(x) → Intelligent(x))

Using substitution, we can infer:


 Student(Shyam) ∧ Kind(Shyam) → Intelligent(Shyam)


 Student(Rohan) ∧ Kind(Rohan) → Intelligent(Rohan)

The rule states that any sentence obtained by substituting a ground term (a term
without variables) for the universally quantified variable can be inferred.

Syntax: Subs(θ, S), where θ = {v/t}, and t is a ground term.

2. Rule of Existential Instantiation (EI):


o Example: ∃x (Car(x) ∧ Drive(x, Shyam))

Using substitution:

 Car(abc) ∧ Drive(abc, Shyam)

Here, abc is a Skolem constant (a new name not appearing elsewhere in the KB).

o UI produces a logically equivalent KB, while EI produces a KB that is satisfiable
whenever the original KB is, but not logically equivalent to it.

Reducing to Propositional Inference

To infer non-quantified sentences from quantified ones, substitution methods are applied:

 Existential quantifiers: Substituted by one instantiation.


 Universal quantifiers: Replaced with all possible instantiations.

Example KB:

 ∀x (Student(x) ∧ Kind(x) → Intelligent(x))


 Student(Shyam)
 Kind(Shyam)
 Friends(Shyam, Rohan)


After replacing the universal quantifier with all possible instantiations:

 Student(Shyam) ∧ Kind(Shyam) → Intelligent(Shyam)
 Student(Rohan) ∧ Kind(Rohan) → Intelligent(Rohan)

From the given facts, Intelligent(Shyam) can then be inferred.

Problems with propositionalisation:

 Generates many irrelevant sentences.


 For p predicates of arity k and n constants, propositionalisation creates p × n^k
sentences.

Unification

Unification is the process of finding substitutions that make different logical sentences identical.
This involves substituting predicate parameters to allow machines to understand relationships.

Example:

 ∀x (Student(x) ∧ Kind(x) → Intelligent(x))


 Student(Shyam)
 Kind(Shyam)

After unification:

 Shyam is inferred to be intelligent.

First Order Inference Rule:

 Unification uses substitution to infer:


o Find x such that Student(x).
o Check if x satisfies Kind(x).
o Conclude Intelligent(x).


Generalised Modus Ponens: This rule is a lifted version of modus ponens, allowing inference in
FOL:

1. Premises:
o ∀x (Student(x) ∧ Kind(x) → Intelligent(x))
o Student(Shyam)
o Kind(y)
2. Substitution:
o θ = {x/Shyam, y/Shyam}
3. Inference:
o Intelligent(Shyam)

Generalised modus ponens lifts propositional logic modus ponens to FOL, enabling inference
with quantifiers and predicates.

UNIFICATION ALGORITHM

Unification is the process of determining a substitution θ that makes two logical sentences
identical. The rules of inference that are "lifted" need this substitution to unify different logical
expressions. This is critical in AI for tasks like natural language processing, theorem proving, and
reasoning.

The Unification Algorithm takes two sentences and returns a unifier if one exists. The result is
the substitution θ that, when applied, makes the two sentences identical.

Notation:

 Unify(a, b): This function takes two sentences, a and b, and returns the unifier θ, where:
o Subs(θ, a) = Subs(θ, b).

Example of Unification


Suppose the query is Friends(Rita, x), and the knowledge base (KB) contains:

1. Friends(Rita, Seema)
2. Friends(x, Maya)
3. Friends(y, Neha)

Applying unification:

1. Unify(Friends(Rita, x), Friends(Rita, Seema)):


o θ = {x/Seema}
2. Unify(Friends(Rita, x), Friends(x, Maya)):
o Fails (θ = Fail), as x cannot simultaneously be "Rita" and "Maya".
3. Unify(Friends(Rita, x), Friends(y, Neha)):
o θ = {x/Neha, y/Rita}.

If variable names are identical in conflicting ways, unification fails. For example:

 Unify(Friends(Rita, x), Friends(x, Maya)) fails because x cannot be both "Rita" and
"Maya".

To resolve this, standardization of variable names is performed. For instance:

 Change variable names in the second expression to avoid conflict: Friends(z, Maya).
 Then, Unify(Friends(Rita, x), Friends(z, Maya)):
o θ = {x/Maya, z/Rita}.

Generalized Unification Algorithm

1. Base Case: x and y are variables or constants:


o If x and y are identical, return NULL.
o If x is a variable:
 If x occurs in y, return Fail.
 Otherwise, return {y/x}.
o If y is a variable:


 If y occurs in x, return Fail.


 Otherwise, return {x/y}.
o Otherwise, return Fail.
2. If the number of arguments of x and y differs:
o Return Fail.
3. If the predicates do not match:
o Return Fail.
4. Initialize an empty substitution set:
o Subs = {}.
5. Iterate over all arguments:
o For each argument ctr (from 1 to the number of arguments):

1. Recursively call the algorithm with the ctr-th argument of both x and y.
2. If the result is Fail, return Fail.
3. Otherwise, add the result to the substitution set (Subs).
4. If the result is non-NULL, apply the substitution to the remaining
arguments of x and y.
6. Return the final substitution set:
o Subs.
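The algorithm above can be sketched in Python (an illustrative version, not the textbook's exact pseudocode: variables are lowercase strings, constants are capitalised strings, and atomic sentences are tuples of a predicate name followed by its arguments):

```python
def is_variable(t):
    """A variable is a lowercase string; constants start uppercase."""
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, subs):
    """Return a substitution dict unifying x and y, or None on failure."""
    if subs is None:
        return None                      # an earlier step already failed
    if x == y:
        return subs
    if is_variable(x):
        return unify_var(x, y, subs)
    if is_variable(y):
        return unify_var(y, x, subs)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):         # unify argument lists pairwise
            subs = unify(xi, yi, subs)
        return subs
    return None                          # predicate or arity mismatch

def unify_var(var, term, subs):
    if var in subs:
        return unify(subs[var], term, subs)
    if occurs(var, term, subs):
        return None                      # occurs check: x inside f(x) fails
    return {**subs, var: term}

def occurs(var, term, subs):
    if var == term:
        return True
    if is_variable(term) and term in subs:
        return occurs(var, subs[term], subs)
    if isinstance(term, tuple):
        return any(occurs(var, t, subs) for t in term)
    return False

print(unify(("Friends", "Rita", "x"), ("Friends", "z", "Maya"), {}))
# {'z': 'Rita', 'x': 'Maya'}
```

Note that `unify(("Friends", "Rita", "x"), ("Friends", "x", "Maya"), {})` returns None, matching the failure case discussed earlier.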

Importance of Unification

 Natural Language Parsers: Unification is essential for matching linguistic patterns.


 AI Applications: It plays a significant role in theorem proving, logic programming, and
automated reasoning.
 Mathematical Foundations: Strongly grounded in mathematical logic, making it crucial
for understanding and implementing AI algorithms.

Unification, while seemingly straightforward, requires attention to variable handling and
substitution rules. It serves as a cornerstone for many AI systems that require reasoning and
pattern matching.


KNOWLEDGE REPRESENTATION USING RULES

Representing Knowledge Using Rules

Knowledge representation plays a crucial role in artificial intelligence (AI) systems. This section
explains how rules can be used to represent knowledge effectively and covers key concepts like
declarative and procedural knowledge, logic programming, reasoning techniques, matching
algorithms, and conflict resolution.

1.Declarative Knowledge:

 Represents facts or assertions without specifying how they are used.


 Requires a separate program to define the procedures for using these facts.

Bird(Parrot), Bird(Sparrow), Feathers(Pigeon)

∀x Bird(x) → Feathers(x)

Query: "Find all entities with feathers" (Feathers(y))


Answer: Parrot, Sparrow, and Pigeon.

2. Procedural Knowledge:

o Embeds control information directly into the knowledge, specifying what to do


and how.
o Example: Depth-first search or heuristic methods to prioritize certain outcomes.
For the above query, procedural control could restrict the answer to only Pigeon.

Comparison:

 Declarative: Focuses on what knowledge is.


 Procedural: Focuses on how to use the knowledge.


Logic Programming

Logic programming is a paradigm where logical assertions are treated as programs. Prolog is an
example of logic programming that uses Horn clauses and has a fixed control strategy.

Key Features:

1. Backward Reasoning: Starts with the goal and works backward to find supporting
evidence.
2. Steps in Prolog's Control Strategy:
o Start with the problem statement (goal).
o Check facts and rules to prove the goal.
o Use unification to match variables.
o Apply depth-first search and backtracking when necessary.

Example:

Facts: Father(John, Mary), Mother(Anna, Mary)

Query: Parent(x, Mary)

Result: x = John or x = Anna

Forward and Backward Reasoning

1. Forward Reasoning:
o Starts from initial facts and applies rules to derive new facts until the goal is
reached.
o Application: Useful when new facts evolve, e.g., monitoring systems.
2. Backward Reasoning:
o Starts from the goal and works backward to find facts/rules that support the goal.
o Application: Problem-solving scenarios where the goal is well-defined.


Key Factors in Choosing Reasoning:

 Number of goals vs. start states.


 Likelihood of evolving new facts.
 Branching factor at decision nodes.

Matching with Variables

When preconditions do not match the current state directly, matching algorithms like
unification are employed to find consistency.

 Unification Algorithm: Matches rules with facts using variable substitution.


 Many-to-Many Matching: When multiple rules and facts are involved, Rete Algorithm is
used.

Rete Algorithm

 A pattern-matching algorithm for rule-based systems.


 Features:
o Maintains a network of rules and state descriptions.
o Avoids repetitive testing by storing intermediate results.
o Supports efficient updates when facts change.

Example Rule:

if P1(A, B) and P2(A, C) then Result(A, C)

Given a fact such as P1(X, 5), the algorithm propagates it through a structured network and
efficiently evaluates which rules the accumulated facts match.
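A greatly simplified sketch of the matching step for the rule above is a join of the two fact sets on their shared variable (a full Rete network would also cache these partial matches so that only changed facts are re-tested):

```python
# Working-memory facts for P1(A, B) and P2(A, C).
p1_facts = [("X", 5), ("Y", 7)]
p2_facts = [("X", 9), ("Z", 3)]

# Join the fact sets on the shared variable A to produce Result(A, C).
results = [("Result", a1, c)
           for (a1, b) in p1_facts
           for (a2, c) in p2_facts
           if a1 == a2]

print(results)  # [('Result', 'X', 9)]
```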

Approximate and Complex Matching

1. Approximate Matching:


o Handles cases where exact matches are not possible (e.g., speech recognition with
noise).
o Example: AI systems like ELIZA use partial pattern matching to simulate
meaningful interactions.
2. Complex Matching:
o Involves rules that infer properties not explicitly mentioned in the current state.
o Example: Matching predicates like has-strips and has-spots based on structural
similarity.

Conflict Resolution

In rule-based systems, conflict resolution determines the order of rule execution when
multiple rules are applicable.

Criteria for Conflict Resolution:

1. Rule-Based: Prioritize based on specific rules.


2. Object-Based: Select based on matched objects.
3. Action-Based: Focus on the actions performed by the matched rule.

Applications

 Natural Language Processing: ELIZA uses pattern matching for interactive dialogues.
 Problem Solving: Forward and backward reasoning in AI systems.
 Monitoring Systems: Forward reasoning to handle evolving facts.
 Rule-Based Systems: Efficient knowledge management using Rete Algorithm.

Summary:

 Rules play a vital role in knowledge representation, enabling systems to infer and reason
effectively.
 Declarative and procedural knowledge offer distinct approaches, each suited to specific
tasks.


 Logic programming (Prolog) and reasoning techniques (forward/backward) are


foundational in AI.
 Efficient matching and conflict resolution mechanisms (e.g., Rete Algorithm) optimize
rule-based systems.

SEMANTIC NETWORKS

Semantic Network

Semantic networks represent knowledge through a network of nodes (representing objects or
concepts) connected by arcs (representing relationships). This approach emphasizes
understanding knowledge as interconnected relationships between concepts.

Key Features of Semantic Networks

 Nodes: Represent objects or categories (e.g., "Tendulkar" or "Batsman").


 Arcs: Represent relationships between nodes (e.g., "ISA," "Instance," "Haspart").

Common Semantic Relationships

1. Instance: Represents a specific example of a general concept.


o Example: Tendulkar is an instance of Batsman.
2. ISA (is-a): Represents a subset or a type of a more general concept.
o Example: Batsman ISA Cricketer.
3. Haspart: Indicates that one concept is part of another.
o Example: Stumps Haspart Bails.

Representation Example

Using the cricket domain:

 ISA(Batsman, Cricketer)


 Instance(Tendulkar, Cricketer)
 Team(Tendulkar, India)
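This small network can be sketched as a list of labelled edges, with a query that follows Instance and ISA links to support inheritance (an illustrative sketch):

```python
# Semantic network as labelled edges: (source, relation, target).
edges = [
    ("Tendulkar", "Instance", "Batsman"),
    ("Batsman", "ISA", "Cricketer"),
    ("Tendulkar", "Team", "India"),
]

def related(node, relation):
    """Targets reachable from `node` via `relation`, with inheritance."""
    # Collect the node itself plus every class it inherits from.
    chain, frontier = set(), [node]
    while frontier:
        n = frontier.pop()
        chain.add(n)
        frontier += [t for (s, r, t) in edges
                     if s == n and r in ("Instance", "ISA")]
    return [t for (s, r, t) in edges if s in chain and r == relation]

print(related("Tendulkar", "Team"))  # ['India']
```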

Inference in Semantic Networks

Two main inference mechanisms are used in semantic networks:

1. Intersection Search: Finds relationships by identifying intersections between nodes. The
process involves activating nodes and spreading the activation to related nodes.
2. Inheritance: Utilizes ISA and Instance relationships to derive conclusions and enables
default reasoning.

Example of Differentiated Relationships

To differentiate values (e.g., comparing runs of players):

 Use explicit links like "greater than."


Partitioned Semantic Networks

Partitioned semantic networks address some limitations of traditional semantic networks:

 Purpose: Overcome vagueness in token meanings and inefficiencies in inference mechanisms.
 Structure: Divide the network into spaces, where each space becomes a node or object.
 Example: The statement "case is strong" is partitioned into its own space and referred to
as an object.


Frame Systems

Frame systems extend semantic networks for structured knowledge representation. Frames are
collections of attributes or slots representing real-world concepts.

Key Features of Frames

 Frame Name: Identifies the concept.


 Slots: Attributes or properties associated with the concept.
 Values: Specific data for each attribute.

Example of Frame Representation

For "Tendulkar":

 Instance: Batsman
 Runs: 15500
 Team: India
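A frame maps naturally onto a dictionary of slots (an illustrative sketch):

```python
# A frame for "Tendulkar": the frame name plus a dictionary of slots.
tendulkar = {
    "frame_name": "Tendulkar",
    "slots": {"Instance": "Batsman", "Runs": 15500, "Team": "India"},
}

print(tendulkar["slots"]["Team"])  # India
```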

Minsky's Frame Theory

 Frames group related information into clusters or "chunks."


 Frames allow procedural knowledge representation.

Frames can be organized hierarchically (e.g., meta-classes).

Types of Inference

Inference is the reasoning process that derives conclusions from knowledge.

Categories of Inference

1. Deduction:
o Derives specific conclusions from general facts or axioms.
o Example:


 Axiom: All kids are naughty.


 Fact: Riya is a kid.
 Conclusion: Riya is naughty.
o Properties: Truth-preserving, logical inference.
2. Induction:
o Generalizes rules or axioms from observations.
o Example:
 Observation: Riya is a kid and is naughty.
 Conclusion: Kids are naughty.
3. Abduction:
o Derives premises from observations and general axioms.
o Example:
 Axiom: All kids are naughty.
 Observation: Riya is naughty.
 Inference: Riya is plausibly a kid (abduction suggests, but does not guarantee, this explanation).

Applications

 Deduction: Used in formal logic.


 Abduction: Used in diagnostic and expert systems.

Types of Reasoning

Different reasoning approaches suit various domains and applications.

Common Sense Reasoning

 Simulates human-like judgment in machines.


 Requires formalization through logic.
 Tools: ConceptNet (an extension of WordNet) for common sense knowledge
representation.


Hypothetical Reasoning

 Involves reasoning with assumptions to evaluate their outcomes.


 Applications: Diagnostic systems.
 Process:
1. Generate hypotheses.
2. Test hypotheses by generating predictions.
3. Validate predictions through experimentation.

Analogical Reasoning

 Derives conclusions by finding similarities between objects or situations.


 Process:
1. Identify the target for understanding.
2. Find a matching domain and terms.
3. Transfer attributes from the matching domain to the target.

Summary

 Semantic Networks: Represent conceptual relationships through nodes and arcs.


 Partitioned Networks: Refine semantic networks for logical and heuristic adequacy.
 Frame Systems: Provide structured representations of complex knowledge.
 Inference Mechanisms: Enable logical reasoning, diagnosis, and decision-making.
 Reasoning Types: Adapt to domain-specific needs, ranging from common sense to
analogical reasoning.


UNCERTAIN KNOWLEDGE AND REASONING

Uncertainty and Methods

Uncertainty is a fundamental aspect of the real world. To address it intelligently, one must apply
logic and take appropriate actions. Uncertainty arises due to various factors, such as omitted
data, unavailability of complete data, or partial availability of events. Probability theory provides
a structured approach to handle such uncertainties.

Methods to Handle Uncertainty

1. Default or Non-Monotonic Logic:


o Assumptions are made about conditions (e.g., traffic, road conditions) to make
decisions.
o Contradictions may arise, requiring a measure to calculate belief and disbelief.
o Certainty factors help quantify belief and disbelief, often represented using
probability.
2. Dempster-Shafer Theory:
o Differentiates between reasoning and decision theory.
o Uses concepts of belief and plausibility:
 Belief: Based on all available evidence.
 Plausibility: Based on all evidence that does not contradict the
hypothesis.
3. Fuzzy Logic:
o Truth values are assigned partially.
o Uses values between 0 and 1 to represent degrees of applicability.
o Example: A document primarily about human anatomy belongs to science, but one
containing anatomy and geography belongs partially to science.
4. Probability Theory:
o Assigns probable values to uncertain information.
o Captures uncertainties through conditional probabilities.
o Uses Bayesian inference to infer relationships among variables.
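The partial truth values used in fuzzy logic (method 3 above) can be illustrated with the standard min/max/complement connectives; the membership degrees below are assumed for illustration:

```python
# Hypothetical membership degrees of documents in the "science" category.
science_degree = {
    "anatomy_doc": 0.9,            # mostly about human anatomy
    "anatomy_geography_doc": 0.5,  # anatomy mixed with geography
}

# Standard fuzzy connectives: AND = min, OR = max, NOT = complement.
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b): return max(a, b)
def fuzzy_not(a): return 1 - a

a = science_degree["anatomy_doc"]
b = science_degree["anatomy_geography_doc"]
print(fuzzy_and(a, b), fuzzy_or(a, b), fuzzy_not(b))  # 0.5 0.9 0.5
```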


Bayesian probability provides a powerful framework for decision-making under uncertainty by
modeling relationships and reasoning with uncertain statements.

Bayesian Probability And Belief Network

Probability can be interpreted from two perspectives:

 Objective: Represents the physical likelihood of events.


 Subjective: Estimates probability based on past experiences and calculates the degree of
belief.

Bayesian Probability

Bayesian probability predicts the likelihood of events based on prior knowledge and updates
beliefs with new evidence. It is particularly useful in uncertain scenarios, forming an extension
of logic to reason with uncertain statements.

Example

Predicting whether a person will be selected for a cricket team depends on factors such as
matches played, skills, and role (e.g., all-rounder or batsman). Bayesian probability models these
relationships for informed decision-making.

Basic Concepts and Terminologies

Propositions and Hypothesis

 Proposition: A sentence that is either true or false. For example, "The batsman is out"
can take the values <true, false>.
 Hypothesis: An educated guess or explanation, often expressed as "If A occurs, then B."
o Antecedent: The condition (A).
o Consequent: The outcome (B).
o Example: "If a prisoner learns skilled work in prison, then they are less likely to
reoffend."


Probability

Probability represents the likelihood of an event occurring, with values between 0 and 1.

Types of Probability

1. Classical Probability:
o Probability of an event A: P(A) = (number of favourable outcomes) / (total number of outcomes)
2. Joint Probability:
o Probability of multiple events occurring together: P(A, B) = P(A ∩ B)
3. Marginal Probability:
o Unconditional probability of an event A, irrespective of event B: P(A) = Σb P(A, B = b)
4. Prior Probability:
o Probability based on prior belief without observed data.
5. Conditional Probability:
o Probability of event A, given event B has occurred: P(A|B) = P(A ∩ B) / P(B)
o If A and B are independent: P(A|B) = P(A)
o If not independent: P(A ∩ B) = P(A|B) P(B)
6. Posterior Probability:
o Probability after considering relevant evidence.
o Example: a prior probability of a player's selection becomes a posterior probability once
the evidence of an injury is taken into account.

Bayes' theorem allows inference of probabilities for hypotheses given observed data or evidence,
forming the basis for Bayesian inference.


Bayesian Inference

Bayesian inference is a statistical method for reasoning about an unknown situation. It involves
collecting evidence that is consistent or inconsistent with a hypothesis. As more evidence is
gathered, the degree of belief in the hypothesis changes, much like ordinary reasoning. With
enough evidence, the hypothesis can be accepted as true or rejected as false. The equation is given as:

P(H|E) = P(E|H)P(H)/P(E)

where E is evidence and H is hypothesis.

Here, P(H) is the prior probability, P(E) is the marginal probability of the evidence, P(E|H) is the
likelihood, and P(H|E) is the posterior probability of the hypothesis given the evidence. The same
formulation can be mapped to a cause-and-effect problem or a diagnostic one.

To make it clearer, let us take an example. Let e be the proposition that 'X person has elbow pain'.
Let t be the proposition that 'X has tennis elbow'. So, if we want to derive a diagnosis, we will
have the mapping as follows:

 P(person having elbow pain) = P(e)


 P(person having tennis elbow) = P(t)
 P(person having tennis elbow who has elbow pain) = P(t|e) (to be found out)
 P(person having pain in elbow given that he has tennis elbow) = P(e|t)

So, it would be: P(t|e) = P(e|t)P(t)/P(e)

In Bayesian inference, it uses the initial belief on the hypothesis and estimates the degree of
belief as the evidences become available.
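With illustrative numbers (assumed here, not given in the text), the tennis-elbow diagnosis can be computed directly:

```python
# Assumed (illustrative) probabilities -- not values from the text:
p_t = 0.01         # P(t): prior probability of tennis elbow
p_e_given_t = 0.9  # P(e|t): elbow pain given tennis elbow
p_e = 0.05         # P(e): marginal probability of elbow pain

# Bayes' theorem: P(t|e) = P(e|t) P(t) / P(e)
p_t_given_e = p_e_given_t * p_t / p_e
print(round(p_t_given_e, 2))  # 0.18
```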

Belief Network


A belief network or a Bayesian network is a probabilistic graphical model that represents a set of
random variables and their conditional dependencies through a directed acyclic graph (DAG). In
the graph:

 Nodes represent random variables.


 Directed edges represent the conditional dependencies among the variables.

The dependent node of a node is called the child node, whereas the one on which it is dependent
is the parent node. The directed edge points to the child node to represent the dependency. The
parent node is called the cause, and the child node is called the effect. The nodes that are not
connected are independent of each other. Every node is associated with a probability function,
which takes values of the parent function and gives the probability of the variable it represents.

A Bayesian network essentially provides the entire description of a domain. The network encodes
the full joint probability distribution: each entry in the joint distribution is the product of the
appropriate entries of the conditional probability tables (CPTs), i.e.,
P(x1, ..., xn) = Πi P(xi | parents(Xi)).

Example:
Consider a hypothetical example to understand the concept. We need to evaluate the probability
of selection of Ram or Shyam with regard to the coach. A Bayesian network is used to represent
this relationship.

In the network:


 The node 'Coach is Milind' does not have any parent node. Its value represents the prior
probability, taken from past data.
 Let C = Coach, S = Shyam selected, R = Ram selected.

Using this example, computations are as follows:

 P(C) = Probability that the coach is Milind


 P(~C) = Probability that the coach is not Milind

P(R) = P(R|C) * P(C) + P(R|~C) * P(~C)


= (0.8 * 0.9) + (0.1 * 0.1)
= 0.73

Similarly, compute the probability that Shyam is selected:

P(S) = P(S|C) * P(C) + P(S|~C) * P(~C)


= (0.6 * 0.9) + (0.5 * 0.1)
= 0.59

These computations are marginal probabilities. Now, let us apply Bayes' theorem. Suppose we
know that Ram is selected but do not know whether the coach is Milind. The computation will
be:

P(C|R) = [P(R|C) * P(C)] / P(R)


= (0.8 * 0.9) / 0.73
≈ 0.99

This gives the probability that the coach is Milind, given that Ram is selected. That information
can in turn be used to update the probability that Shyam is selected. This method is known as
propagation, where evidence about one node propagates probability updates through the network.
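The marginalisation and Bayes'-theorem steps above can be reproduced in a few lines (a minimal sketch using the example's CPT values; note that 0.72/0.73 ≈ 0.99 when rounded to two places):

```python
# Values from the example's conditional probability tables (CPTs).
p_c = 0.9                              # prior: P(coach is Milind)
p_r_given_c = {True: 0.8, False: 0.1}  # P(Ram selected | coach)
p_s_given_c = {True: 0.6, False: 0.5}  # P(Shyam selected | coach)

# Marginalise over the coach node: P(R) = P(R|C)P(C) + P(R|~C)P(~C)
p_r = p_r_given_c[True] * p_c + p_r_given_c[False] * (1 - p_c)
p_s = p_s_given_c[True] * p_c + p_s_given_c[False] * (1 - p_c)

# Bayes' theorem: P(C|R) = P(R|C)P(C) / P(R)
p_c_given_r = p_r_given_c[True] * p_c / p_r

print(round(p_r, 2), round(p_s, 2), round(p_c_given_r, 2))  # 0.73 0.59 0.99
```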

Conclusion


This chapter provides a comprehensive foundation in knowledge representation and reasoning,
essential for designing intelligent agents. It begins with knowledge-based agents and the
Wumpus world as practical examples, progressing to propositional and predicate logic for
representing facts and reasoning. Techniques like resolution, conjunctive normal form, forward
and backward chaining, and the unification algorithm enable logical inference. Structured
knowledge representation models, including semantic networks, frames, and rule-based
systems, demonstrate methods for encoding complex relationships. To address uncertainty,
probabilistic frameworks like Bayesian belief networks, Markov assumptions, and the Dempster-
Shafer theory were introduced. Together, these methods equip intelligent systems to reason,
adapt, and act effectively in both deterministic and uncertain environments.

