FAIML Notes Unit 3

Unit 3: Knowledge-Based Reasoning

Knowledge-Based Agent in Artificial Intelligence

o An intelligent agent needs knowledge about the real world in order
to take decisions and reason, so that it can act efficiently.
o Knowledge-based agents are agents that can maintain an internal
state of knowledge, reason over that knowledge, update the
knowledge after observations, and take actions. These agents
represent the world with some formal representation and act
intelligently.
o Knowledge-based agents are composed of two main parts:
o Knowledge base and
o Inference system.

A knowledge-based agent must be able to do the following:

o The agent should be able to represent states, actions, etc.
o The agent should be able to incorporate new percepts.
o The agent should be able to update its internal representation of the world.
o The agent should be able to deduce hidden properties of the world.
o The agent should be able to deduce appropriate actions.
The architecture of a knowledge-based agent:

The diagram above represents a generalized architecture for a
knowledge-based agent. The knowledge-based agent (KBA) takes
input by perceiving the environment. The input is passed to the
inference engine, which communicates with the KB to decide on an
action as per the knowledge stored in the KB. The learning element
of the KBA regularly updates the KB by learning new knowledge.

Knowledge base: The knowledge base is a central component of a
knowledge-based agent; it is also known as the KB. It is a collection
of sentences (here 'sentence' is a technical term and is not identical
to a sentence in English). These sentences are expressed in a
language called a knowledge representation language. The
knowledge base of a KBA stores facts about the world.

Why use a knowledge base?
A knowledge base is required so that an agent can update its
knowledge, learn from experience, and take actions according to
that knowledge.

Inference system
Inference means deriving new sentences from old ones. The
inference system allows us to add new sentences to the knowledge
base. A sentence is a proposition about the world. The inference
system applies logical rules to the KB to deduce new information.

The inference system generates new facts so that the agent can
update the KB. An inference system mainly works with two rules,
which are given as:

o Forward chaining
o Backward chaining

Operations Performed by a KBA

Following are the three operations performed by a KBA in order to
show intelligent behavior:

1. TELL: This operation tells the knowledge base what the agent
perceives from the environment.
2. ASK: This operation asks the knowledge base what action the
agent should perform.
3. PERFORM: It performs the selected action.

A generic knowledge-based agent:

Following is the outline of a generic knowledge-based agent
program:

function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action = ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t = t + 1
    return action

The knowledge-based agent takes a percept as input and returns an
action as output. The agent maintains the knowledge base, KB,
which initially contains some background knowledge about the real
world. It also has a counter indicating the time for the whole
process; this counter is initialized to zero.

Each time the function is called, it performs three operations:

o First, it TELLs the KB what it perceives.
o Second, it ASKs the KB what action it should take.
o Third, the agent program TELLs the KB which action was chosen.

MAKE-PERCEPT-SENTENCE generates a sentence asserting that the
agent perceived the given percept at the given time.

MAKE-ACTION-QUERY generates a sentence asking which action
should be done at the current time.

MAKE-ACTION-SENTENCE generates a sentence asserting that the
chosen action was executed.
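The agent loop above can be sketched in Python as follows. This is a minimal sketch, not a real inference engine: the KBAgent class name, the string-based sentence format, and the fixed "move-forward" answer returned by ASK are illustrative assumptions.

```python
# Minimal sketch of the generic KB-AGENT loop described above.
# The knowledge base is a plain list of sentences (strings); the
# TELL/ASK helpers are illustrative stand-ins for a real inference system.

class KBAgent:
    def __init__(self):
        self.kb = []   # the knowledge base: a list of sentences
        self.t = 0     # time counter, initially 0

    def tell(self, sentence):
        """TELL: add a sentence to the knowledge base."""
        self.kb.append(sentence)

    def ask(self, query):
        """ASK: a toy stand-in that always answers with a fixed action."""
        return "move-forward"

    def __call__(self, percept):
        self.tell(f"percept({percept!r}, t={self.t})")   # MAKE-PERCEPT-SENTENCE
        action = self.ask(f"best-action(t={self.t})")    # MAKE-ACTION-QUERY
        self.tell(f"did({action!r}, t={self.t})")        # MAKE-ACTION-SENTENCE
        self.t += 1
        return action

agent = KBAgent()
print(agent("wall-ahead"))   # -> move-forward
print(len(agent.kb))         # -> 2 (one percept sentence, one action sentence)
```

Note how each call TELLs twice (percept and chosen action) and ASKs once, exactly mirroring the three operations listed above.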

Various levels of a knowledge-based agent:

A knowledge-based agent can be viewed at different levels, which
are given below:

1. Knowledge level
The knowledge level is the first level of a knowledge-based agent;
at this level, we specify what the agent knows and what the agent's
goals are. With these specifications, we can fix its behavior. For
example, suppose an automated taxi agent needs to go from
station A to station B, and it knows the way from A to B; this
belongs to the knowledge level.

2. Logical level:
At this level, we understand how the knowledge is represented and
stored. At this level, sentences are encoded into different logics;
that is, an encoding of knowledge into logical sentences occurs. At
the logical level, we can expect the automated taxi agent to reach
destination B.

3. Implementation level:
This is the physical representation of logic and knowledge. At the
implementation level, the agent performs actions as per the logical
and knowledge levels. At this level, the automated taxi agent
actually implements its knowledge and logic so that it can reach the
destination.

Approaches to designing a knowledge-based agent:

There are mainly two approaches to building a knowledge-based agent:

1. Declarative approach: We can create a knowledge-based
agent by initializing it with an empty knowledge base and
telling the agent all the sentences with which we want to start.
This approach is called the declarative approach.
2. Procedural approach: In the procedural approach, we
directly encode the desired behavior as program code; that is,
we just write a program that already encodes the desired
behavior of the agent.

However, in the real world, a successful agent can be built by
combining both approaches, and declarative knowledge can often
be compiled into more efficient procedural code.

What is knowledge representation?

Humans are best at understanding, reasoning about, and
interpreting knowledge. Humans know things, and according to
their knowledge they perform various actions in the real world. How
machines do all of these things comes under knowledge
representation and reasoning. Hence we can describe knowledge
representation as follows:

o Knowledge representation and reasoning (KR, KRR) is the part
of artificial intelligence concerned with how AI agents think and
how thinking contributes to their intelligent behavior.
o It is responsible for representing information about the real
world so that a computer can understand it and utilize this
knowledge to solve complex real-world problems, such as
diagnosing a medical condition or communicating with humans
in natural language.
o It also describes how we can represent knowledge in artificial
intelligence. Knowledge representation is not just storing data
in some database; it also enables an intelligent machine to
learn from that knowledge and experience so that it can
behave intelligently like a human.

What to Represent:
Following are the kinds of knowledge which need to be represented
in AI systems:

o Objects: All the facts about objects in our world domain. E.g.,
guitars have strings, trumpets are brass instruments.
o Events: Events are the actions which occur in our world.
o Performance: It describes behavior which involves knowledge
about how to do things.
o Meta-knowledge: It is knowledge about what we know.
o Facts: Facts are the truths about the real world and what we
represent.
o Knowledge base: The central component of a knowledge-based
agent is the knowledge base, represented as KB. The
knowledge base is a group of sentences (here, 'sentence' is
used as a technical term and is not identical to a sentence in
the English language).

Knowledge: Knowledge is awareness or familiarity gained by
experience of facts, data, and situations. Following are the types of
knowledge in artificial intelligence:

Types of knowledge
Following are the various types of knowledge:
1. Declarative Knowledge:

o Declarative knowledge is knowing about something.
o It includes concepts, facts, and objects.
o It is also called descriptive knowledge and is expressed in
declarative sentences.
o It is simpler than procedural knowledge.

2. Procedural Knowledge

o It is also known as imperative knowledge.
o Procedural knowledge is the type of knowledge responsible for
knowing how to do something.
o It can be directly applied to any task.
o It includes rules, strategies, procedures, agendas, etc.
o Procedural knowledge depends on the task to which it can be
applied.

3. Meta-knowledge:

o Knowledge about the other types of knowledge is called meta-
knowledge.

4. Heuristic knowledge:

o Heuristic knowledge represents the knowledge of experts in a
field or subject.
o Heuristic knowledge consists of rules of thumb based on
previous experience and awareness of approaches which tend
to work well but are not guaranteed.

5. Structural knowledge:

o Structural knowledge is basic knowledge for problem-solving.
o It describes relationships between various concepts, such as
kind-of, part-of, and grouping of something.
o It describes the relationships that exist between concepts or
objects.

The relation between knowledge and intelligence:

Knowledge of the real world plays a vital role in intelligence, and
the same holds for creating artificial intelligence. Knowledge plays
an important role in demonstrating intelligent behavior in AI agents.
An agent is only able to act accurately on some input when it has
some knowledge or experience about that input.

Suppose you meet a person who is speaking a language you don't
know; how will you be able to act on that? The same thing applies
to the intelligent behavior of agents.
As we can see in the diagram below, there is a decision maker
which acts by sensing the environment and using knowledge. But if
the knowledge part is not present, it cannot display intelligent
behavior.

AI knowledge cycle:
An artificial intelligence system has the following components for
displaying intelligent behavior:

o Perception
o Learning
o Knowledge Representation and Reasoning
o Planning
o Execution

The diagram above shows how an AI system can interact with the
real world and which components help it to show intelligence. The
AI system has a perception component by which it retrieves
information from its environment; the input can be visual, audio, or
another form of sensory input. The learning component is
responsible for learning from the data captured by the perception
component. In the complete cycle, the main components are
knowledge representation and reasoning. These two components
are involved in showing human-like intelligence in machines. They
are independent of each other but also coupled together. Planning
and execution depend on the analysis done by knowledge
representation and reasoning.

Approaches to knowledge representation:

There are mainly four approaches to knowledge representation,
which are given below:

1. Simple relational knowledge:

o It is the simplest way of storing facts: it uses the relational
method, and each fact about a set of objects is set out
systematically in columns.
o This approach to knowledge representation is popular in
database systems, where the relationships between different
entities are represented.
o This approach offers little opportunity for inference.

Example: The following is a simple relational knowledge
representation.

Player    Weight   Age
Player1   65       23
Player2   58       18
Player3   75       24
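The player table above can be held as simple relational facts and queried directly. This is a sketch in plain Python; the column names and the "older than 20" query are illustrative assumptions.

```python
# The player relation above, stored as a list of rows (one dict per row).
players = [
    {"player": "Player1", "weight": 65, "age": 23},
    {"player": "Player2", "weight": 58, "age": 18},
    {"player": "Player3", "weight": 75, "age": 24},
]

# A relational lookup: select the names of players older than 20.
adults = [p["player"] for p in players if p["age"] > 20]
print(adults)  # -> ['Player1', 'Player3']
```

Note that the representation supports lookup but, as stated above, offers little opportunity for inference: nothing new can be derived beyond what the rows already say.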

2. Inheritable knowledge:

o In the inheritable knowledge approach, all data is stored in a
hierarchy of classes.
o All classes are arranged in a generalized or hierarchical
manner.
o In this approach, we apply the inheritance property.
o Elements inherit values from other members of their class.
o This approach shows the relation between an instance and a
class, which is called the instance relation.
o Every individual frame can represent a collection of attributes
and their values.
o In this approach, objects and values are represented as boxed
nodes.
o Arrows point from objects to their values.
o Example:
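The inheritance property described above maps naturally onto class hierarchies. The following sketch uses hypothetical classes (Animal, Dog, ThreeLeggedDog) purely for illustration: a subclass inherits attribute values unless it overrides them.

```python
# A toy class hierarchy illustrating the inheritable-knowledge approach:
# members inherit attribute values from their class unless overridden.
class Animal:
    legs = 4          # default value, inherited by subclasses

class Dog(Animal):
    sound = "bark"    # adds an attribute; still inherits legs = 4

class ThreeLeggedDog(Dog):
    legs = 3          # overrides the inherited default

print(Dog.legs)              # -> 4 (inherited from Animal)
print(ThreeLeggedDog.legs)   # -> 3 (local value shadows the inherited one)
print(ThreeLeggedDog.sound)  # -> bark (inherited from Dog)
```

The lookup walks up the hierarchy exactly like following the arrows in an inheritable-knowledge diagram: the most specific value found wins.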

3. Inferential knowledge:

o The inferential knowledge approach represents knowledge in
the form of formal logic.
o This approach can be used to derive further facts.
o It guarantees correctness.
o Example: Let's suppose there are two statements:

a. Marcus is a man.
b. All men are mortal.
Then it can be represented as:

man(Marcus)
∀x man(x) → mortal(x)
4. Procedural knowledge:

o The procedural knowledge approach uses small programs and
code describing how to do specific things and how to proceed.
o In this approach, one important rule is used: the If-Then rule.
o For this kind of knowledge, we can use various programming
languages such as LISP and Prolog.
o We can easily represent heuristic or domain-specific knowledge
using this approach.
o However, it is not necessarily possible to represent all cases
with this approach.

Requirements for a knowledge representation
system:
A good knowledge representation system must possess the
following properties.

1. Representational Adequacy:
The KR system should have the ability to represent all kinds of
required knowledge.
2. Inferential Adequacy:
The KR system should have the ability to manipulate the
representational structures to produce new knowledge
corresponding to existing structures.
3. Inferential Efficiency:
The ability to direct the inference mechanism in the most
productive directions by storing appropriate guides.
4. Acquisitional Efficiency:
The ability to acquire new knowledge easily using automatic
methods.

Techniques of knowledge representation

There are mainly four ways of representing knowledge, which are
given as follows:

1. Logical Representation
2. Semantic Network Representation
3. Frame Representation
4. Production Rules

1. Logical Representation
Logical representation is a language with concrete rules which deals
with propositions and has no ambiguity in its representation.
Logical representation means drawing conclusions based on various
conditions. This representation lays down some important
communication rules. It consists of precisely defined syntax and
semantics which support sound inference. Each sentence can be
translated into logic using the syntax and semantics.

Syntax:

o Syntax comprises the rules which decide how we can construct
legal sentences in the logic.
o It determines which symbols we can use in the knowledge
representation and how to write those symbols.

Semantics:

o Semantics comprises the rules by which we can interpret
sentences in the logic.
o Semantics also involves assigning a meaning to each sentence.

Logical representation can be categorised into mainly two logics:

a. Propositional logic
b. Predicate logic

Note: We will discuss propositional logic and predicate logic in later chapters.

Advantages of logical representation:

1. Logical representation enables us to do logical reasoning.
2. Logical representation is the basis for programming languages.

Disadvantages of logical representation:

1. Logical representations have some restrictions and are
challenging to work with.
2. The logical representation technique may not be very natural,
and inference may not be very efficient.

Note: Do not confuse logical representation with logical reasoning: logical
representation is a representation language, while reasoning is a process of thinking
logically.

2. Semantic Network Representation

Semantic networks are an alternative to predicate logic for
knowledge representation. In semantic networks, we represent our
knowledge in the form of graphical networks. Such a network
consists of nodes representing objects and arcs describing the
relationships between those objects. Semantic networks can
categorize objects in different forms and can also link those
objects. Semantic networks are easy to understand and can be
easily extended.

This representation consists of mainly two types of relations:

a. IS-A relation (inheritance)
b. KIND-OF relation

Example: Following are some statements which we need to
represent in the form of nodes and arcs.

Statements:

a. Jerry is a cat.
b. Jerry is a mammal.
c. Jerry is owned by Priya.
d. Jerry is brown colored.
e. All mammals are animals.

In the resulting diagram, we represent the different types of
knowledge in the form of nodes and arcs. Each object is connected
to other objects by some relation.
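The five statements above can be encoded as (subject, relation, object) arcs and the IS-A chain followed programmatically. This is a sketch; the edge-list encoding and the relation names (is-a, owned-by, has-color) are illustrative assumptions, and "Jerry is a mammal" is captured here via the chain Jerry → Cat → Mammal rather than as a direct arc.

```python
# The Jerry statements as (subject, relation, object) arcs of a semantic net.
edges = [
    ("Jerry",  "is-a",      "Cat"),
    ("Cat",    "is-a",      "Mammal"),
    ("Mammal", "is-a",      "Animal"),
    ("Jerry",  "owned-by",  "Priya"),
    ("Jerry",  "has-color", "Brown"),
]

def isa_chain(node):
    """Follow IS-A arcs upward, collecting everything `node` is a kind of."""
    kinds = []
    while True:
        parent = next((o for s, r, o in edges if s == node and r == "is-a"), None)
        if parent is None:
            return kinds
        kinds.append(parent)
        node = parent

print(isa_chain("Jerry"))  # -> ['Cat', 'Mammal', 'Animal']
```

Answering "is Jerry an animal?" requires traversing the chain, which previews the first drawback below: queries cost traversal time proportional to the network's depth.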

Drawbacks of semantic representation:

1. Semantic networks take more computational time at runtime,
as we may need to traverse the complete network to answer a
question. In the worst case, after traversing the entire network
we may find that the solution does not exist in it at all.
2. Semantic networks try to model human-like memory (which
has on the order of 10^15 neurons and links) to store
information, but in practice it is not possible to build such a
vast semantic network.
3. These representations are inadequate as they have no
equivalent of quantifiers, e.g., for all, for some, none, etc.
4. Semantic networks do not have any standard definition for the
link names.
5. These networks are not intelligent in themselves and depend
on the creator of the system.

Advantages of semantic networks:

1. Semantic networks are a natural representation of knowledge.
2. Semantic networks convey meaning in a transparent manner.
3. These networks are simple and easy to understand.

3. Frame Representation
A frame is a record-like structure consisting of a collection of
attributes and their values which describe an entity in the world.
Frames are the AI data structure which divides knowledge into
substructures by representing stereotyped situations. A frame
consists of a collection of slots and slot values. These slots may be
of any type and size. Slots have names and values, which are
called facets.

Facets: The various aspects of a slot are known as facets. Facets
are features of frames which enable us to put constraints on the
frames. Example: an IF-NEEDED facet is invoked when the data of
a particular slot is needed. A frame may consist of any number of
slots, a slot may include any number of facets, and a facet may
have any number of values. Frames are also known as slot-filler
knowledge representation in artificial intelligence.

Frames are derived from semantic networks and later evolved into
our modern-day classes and objects. A single frame is not of much
use; a frame system consists of a collection of connected frames.
In a frame, knowledge about an object or event can be stored
together in the knowledge base. Frames are a technology widely
used in various applications, including natural language processing
and machine vision.

Example 1:
Let's take an example of a frame for a book:

Slot      Filler
Title     Artificial Intelligence
Genre     Computer Science
Author    Peter Norvig
Edition   Third Edition
Year      1996
Page      1152

Example 2:
Let's suppose we are representing an entity, Peter. Peter is an
engineer by profession, his age is 25, he lives in the city of London,
and the country is England. The following is the frame
representation for this:

Slot            Filler
Name            Peter
Profession      Engineer
Age             25
Marital status  Single
Weight          78
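The Peter frame above can be sketched as a dictionary of slots, each slot holding facets. This is an illustrative encoding only: the VALUE and IF-NEEDED facet names follow the convention described earlier, and the Country slot's on-demand procedure is an assumption added to show how an IF-NEEDED facet fires.

```python
# The "Peter" frame as a dict of slots; each slot maps facet names to values.
peter = {
    "Name":       {"VALUE": "Peter"},
    "Profession": {"VALUE": "Engineer"},
    "Age":        {"VALUE": 25},
    "City":       {"VALUE": "London"},
    # IF-NEEDED facet: a procedure run only when the slot value is requested.
    "Country":    {"IF-NEEDED": lambda: "England"},
}

def get_slot(frame, slot):
    """Return a slot's value, invoking its IF-NEEDED procedure if required."""
    facets = frame[slot]
    if "VALUE" in facets:
        return facets["VALUE"]
    return facets["IF-NEEDED"]()   # fire the attached procedure on demand

print(get_slot(peter, "Age"))      # -> 25
print(get_slot(peter, "Country"))  # -> England
```

Grouping all of Peter's attributes in one structure is exactly the "related data grouped together" advantage listed below.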

Advantages of frame representation:

1. The frame knowledge representation makes programming
easier by grouping related data.
2. The frame representation is comparatively flexible and is used
by many applications in AI.
3. It is very easy to add slots for new attributes and relations.
4. It is easy to include default data and to search for missing
values.
5. Frame representation is easy to understand and visualize.

Disadvantages of frame representation:

1. In a frame system, the inference mechanism cannot be easily
or smoothly carried out.
2. Frame representation is a very generalized approach.

4. Production Rules
A production rule system consists of (condition, action) pairs which
mean "If condition then action". It has mainly three parts:

o The set of production rules
o Working memory
o The recognize-act cycle

In a production rule system, the agent checks for the condition,
and if the condition holds, the production rule fires and the
corresponding action is carried out. The condition part of a rule
determines which rule may be applied to a problem, and the action
part carries out the associated problem-solving steps. This
complete process is called the recognize-act cycle.

The working memory contains the description of the current state
of problem solving, and rules can write knowledge to the working
memory. This knowledge is then matched and may fire other rules.

If a new situation (state) is generated, multiple production rules
may be triggered together; this set of rules is called the conflict
set. In this situation, the agent needs to select one rule from the
set, which is called conflict resolution.

Example:

o IF (at bus stop AND bus arrives) THEN action (get into
the bus)
o IF (on the bus AND paid AND empty seat) THEN action
(sit down).
o IF (on bus AND unpaid) THEN action (pay charges).
o IF (bus arrives at destination) THEN action (get down
from the bus).
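The bus rules above can drive a minimal recognize-act cycle. This is a sketch under simplifying assumptions: working memory is a set of fact strings, each rule is a (conditions, derived-fact) pair rather than an external action, and conflict resolution is simply rule order.

```python
# A minimal recognize-act cycle over rules like the bus example above.
# Each rule is (set-of-condition-facts, fact-added-when-the-rule-fires).
rules = [
    ({"at bus stop", "bus arrives"},           "on the bus"),
    ({"on the bus", "unpaid"},                 "paid"),
    ({"on the bus", "paid", "empty seat"},     "seated"),
]

# Working memory: the current state of problem solving.
memory = {"at bus stop", "bus arrives", "unpaid", "empty seat"}

fired = True
while fired:                          # the recognize-act cycle
    fired = False
    for conditions, new_fact in rules:
        # recognize: all conditions present, and the rule hasn't fired yet
        if conditions <= memory and new_fact not in memory:
            memory.add(new_fact)      # act: the rule writes to working memory
            fired = True

print("seated" in memory)  # -> True: boarding, paying, then sitting all fired
```

Note how firing one rule ("on the bus") enables the next ("paid", then "seated"): rules writing to working memory and thereby triggering other rules is exactly the matching behavior described above.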

Advantages of production rules:

1. Production rules are expressed in natural language.
2. Production rules are highly modular, so we can easily remove,
add, or modify an individual rule.

Disadvantages of production rules:

1. A production rule system does not exhibit any learning
capabilities, as it does not store the results of solved problems
for future use.
2. During the execution of the program, many rules may be
active; hence rule-based production systems can be
inefficient.

Propositional logic in Artificial intelligence

Propositional logic (PL) is the simplest form of logic, where all
statements are made of propositions. A proposition is a declarative
statement which is either true or false. It is a technique for
representing knowledge in logical and mathematical form.

Example:

a) It is Sunday.
b) The Sun rises in the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.

Following are some basic facts about propositional logic:

o Propositional logic is also called Boolean logic as it works on 0
and 1.
o In propositional logic, we use symbolic variables to represent
the logic, and we can use any symbol to represent a
proposition, such as A, B, C, P, Q, R, etc.
o A proposition can be either true or false, but it cannot be both.
o Propositional logic consists of objects, relations or functions,
and logical connectives.
o These connectives are also called logical operators.
o Propositions and connectives are the basic elements of
propositional logic.
o A connective is a logical operator which connects two
sentences.
o A propositional formula which is always true is called a
tautology; it is also called a valid sentence.
o A propositional formula which is always false is called a
contradiction.
o A propositional formula which can take both true and false
values is called a contingency.
o Statements which are questions, commands, or opinions are
not propositions: "Where is Rohini?", "How are you?", and
"What is your name?" are not propositions.

Syntax of propositional logic:

The syntax of propositional logic defines the allowable sentences
for knowledge representation. There are two types of propositions:

a. Atomic propositions
b. Compound propositions

o Atomic propositions: Atomic propositions are simple
propositions. An atomic proposition consists of a single
proposition symbol. These are sentences which must be either
true or false.

Example:

a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also a proposition, as it is a false fact.

o Compound propositions: Compound propositions are
constructed by combining simpler or atomic propositions using
parentheses and logical connectives.

Example:

a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:
Logical connectives are used to connect two simpler propositions or
to represent a sentence logically. We can create compound
propositions with the help of logical connectives. There are mainly
five connectives, which are given as follows:

1. Negation: A sentence such as ¬P is called the negation of P.
A literal can be either a positive literal or a negative literal.
2. Conjunction: A sentence which has the ∧ connective, such
as P ∧ Q, is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be
written as
P = Rohan is intelligent,
Q = Rohan is hardworking → P ∧ Q.
3. Disjunction: A sentence which has the ∨ connective, such as
P ∨ Q, is called a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we
can write it as P ∨ Q.
4. Implication: A sentence such as P → Q is called an
implication. Implications are also known as if-then rules.
Example: If it is raining, then the street is wet.
Let P = It is raining and Q = The street is wet; then it is
represented as P → Q.
5. Biconditional: A sentence such as P ⇔ Q is a biconditional
sentence.
Example: If and only if I am breathing, then I am alive.
P = I am breathing, Q = I am alive; it can be represented
as P ⇔ Q.
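The five connectives can be written as small Boolean functions; this sketch uses hypothetical uppercase names to avoid shadowing Python keywords.

```python
# The five propositional connectives as functions over Python booleans.
def NOT(p):        return not p          # negation  ¬P
def AND(p, q):     return p and q        # conjunction  P ∧ Q
def OR(p, q):      return p or q         # disjunction  P ∨ Q
def IMPLIES(p, q): return (not p) or q   # implication  P → Q
def IFF(p, q):     return p == q         # biconditional  P ⇔ Q

# "If it is raining, then the street is wet": an implication is false
# only when the premise is true and the conclusion is false.
print(IMPLIES(True, False))  # -> False
print(IMPLIES(False, True))  # -> True (a false premise makes it true)
```

These helpers are reused conceptually in the truth-table discussion that follows: every connective is just a function from truth values to a truth value.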

Following is the summarized table for Propositional Logic


Connectives:

Truth Table:
In propositional logic, we need to know the truth values of
propositions in all possible scenarios. We can combine all the
possible combinations with logical connectives, and the
representation of these combinations in tabular format is called a
truth table. Following are the truth tables for all logical
connectives:

Truth table with three propositions:
We can build a compound proposition from three propositions P, Q,
and R. This truth table is made up of 2^3 = 8 rows, as we have
taken three proposition symbols.
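The 8 rows for three symbols can be generated mechanically; the compound proposition (P ∧ Q) ∨ R evaluated below is an illustrative choice, not one fixed by the text.

```python
from itertools import product

# All 2**3 = 8 truth assignments for three proposition symbols P, Q, R.
rows = list(product([True, False], repeat=3))
print(len(rows))  # -> 8

# Evaluate a sample compound proposition, (P ∧ Q) ∨ R, in every row.
for p, q, r in rows:
    print(p, q, r, (p and q) or r)
```

In general n proposition symbols produce 2^n rows, which is why truth tables grow quickly as formulas gain variables.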

Precedence of connectives:
Just like arithmetic operators, there is a precedence order for
propositional connectives (logical operators). This order should be
followed while evaluating a propositional expression. Following is
the precedence order for the operators:

Precedence          Operators
First precedence    Parentheses
Second precedence   Negation
Third precedence    Conjunction (AND)
Fourth precedence   Disjunction (OR)
Fifth precedence    Implication
Sixth precedence    Biconditional

Note: For better understanding, use parentheses to make the intended
interpretation explicit. For example, ¬R ∨ Q is interpreted as (¬R) ∨ Q.

Logical equivalence:
Logical equivalence is one of the features of propositional logic.
Two propositions are said to be logically equivalent if and only if
their columns in the truth table are identical.

Let's take two propositions A and B; for logical equivalence we
write A ⇔ B. In the truth table below we can see that the columns
for ¬A ∨ B and A → B are identical; hence ¬A ∨ B is equivalent to
A → B.
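The column-comparison test for equivalence can be checked mechanically. A sketch: build both truth-table columns for A → B and ¬A ∨ B over all four assignments and compare them.

```python
from itertools import product

def implies(a, b):
    """A → B: false only when A is true and B is false."""
    return not (a and not b)

rows = list(product([True, False], repeat=2))   # the 4 truth assignments
col_implication = [implies(a, b) for a, b in rows]   # column for A → B
col_disjunction = [(not a) or b for a, b in rows]    # column for ¬A ∨ B

# Identical columns mean the two formulas are logically equivalent.
print(col_implication == col_disjunction)  # -> True
```

Comparing entire columns row by row is exactly the definition above: two propositions are equivalent if and only if they agree in every possible scenario.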

Properties of Operators:

o Commutativity:
  P ∧ Q = Q ∧ P
  P ∨ Q = Q ∨ P
o Associativity:
  (P ∧ Q) ∧ R = P ∧ (Q ∧ R)
  (P ∨ Q) ∨ R = P ∨ (Q ∨ R)
o Identity element:
  P ∧ True = P
  P ∨ True = True
o Distributivity:
  P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
  P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R)
o De Morgan's laws:
  ¬(P ∧ Q) = (¬P) ∨ (¬Q)
  ¬(P ∨ Q) = (¬P) ∧ (¬Q)
o Double-negation elimination:
  ¬(¬P) = P

Limitations of propositional logic:

o We cannot represent relations like "all", "some", or "none" with
propositional logic. Example:

a. All the girls are intelligent.
b. Some apples are sweet.

o Propositional logic has limited expressive power.
o In propositional logic, we cannot describe statements in terms
of their properties or logical relationships.

Rules of Inference in Artificial intelligence

Inference:
In artificial intelligence, we need intelligent computers which can
create new logic from old logic or from evidence; generating
conclusions from evidence and facts is termed inference.

Inference rules:
Inference rules are templates for generating valid arguments.
Inference rules are applied to derive proofs in artificial intelligence,
and a proof is a sequence of conclusions that leads to the desired
goal.
In inference rules, the implication connective plays an important
role. Following are some terminologies related to inference rules:

o Implication: It is one of the logical connectives and can be
represented as P → Q. It is a Boolean expression.
o Converse: The converse of an implication swaps its right-hand
side and left-hand side propositions. It can be written as
Q → P.
o Contrapositive: The negation of the converse is termed the
contrapositive, and it can be represented as ¬Q → ¬P.
o Inverse: The negation of the implication is called the inverse.
It can be represented as ¬P → ¬Q.

From the above terms, some of the compound statements are
equivalent to each other, which we can prove using a truth table:

From the truth table we can prove that P → Q is equivalent to
¬Q → ¬P, and Q → P is equivalent to ¬P → ¬Q.

Types of Inference rules:

1. Modus Ponens:
The Modus Ponens rule is one of the most important rules of
inference; it states that if P and P → Q are true, then we can infer
that Q is true. It can be represented as:

Example:

Statement 1: "If I am sleepy then I go to bed." ==> P → Q
Statement 2: "I am sleepy." ==> P
Conclusion: "I go to bed." ==> Q
Hence we can say that if P → Q is true and P is true, then Q is
true.

Proof by truth table:
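The truth-table proof can also be run mechanically: a sketch that checks every row of the table and confirms that wherever both P and P → Q hold, Q holds as well.

```python
from itertools import product

# Modus Ponens over the full truth table: in every row where both
# P and P → Q are true, Q must also be true.
valid = all(q
            for p, q in product([True, False], repeat=2)
            if p and ((not p) or q))   # keep only rows satisfying the premises

print(valid)  # -> True: the rule never leads from true premises to a false Q
```

The same filter-then-check pattern verifies any propositional inference rule: keep the rows where all premises hold and confirm the conclusion holds in each.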

2. Modus Tollens:
The Modus Tollens rule states that if P → Q is true and ¬Q is true,
then ¬P is also true. It can be represented as:

Statement 1: "If I am sleepy then I go to bed." ==> P → Q
Statement 2: "I do not go to bed." ==> ¬Q
Conclusion: "I am not sleepy." ==> ¬P

Proof by truth table:


3. Hypothetical Syllogism:
The Hypothetical Syllogism rule states that if P → Q is true and
Q → R is true, then P → R is true. It can be represented with the
following notation:

Example:

Statement 1: If you have my home key, then you can unlock my
home. P → Q
Statement 2: If you can unlock my home, then you can take my
money. Q → R
Conclusion: If you have my home key, then you can take my
money. P → R

Proof by truth table:

4. Disjunctive Syllogism:
The Disjunctive Syllogism rule states that if P ∨ Q is true and ¬P is
true, then Q is true. It can be represented as:

Example:

Statement 1: Today is Sunday or Monday. ==> P ∨ Q
Statement 2: Today is not Sunday. ==> ¬P
Conclusion: Today is Monday. ==> Q

Proof by truth table:
5. Addition:
The Addition rule is one of the common inference rules; it states
that if P is true, then P ∨ Q is true.

Example:

Statement: I have vanilla ice cream. ==> P
Conclusion: I have vanilla or chocolate ice cream. ==> (P ∨ Q)

Proof by truth table:

6. Simplification:
The Simplification rule states that if P ∧ Q is true, then P (and
likewise Q) will also be true. It can be represented as:

P ∧ Q ∴ P

Proof by Truth-Table:
7. Resolution:
The Resolution rule states that if P∨Q and ¬P∨R are true, then Q∨R
will also be true. It can be represented as:

P ∨ Q, ¬P ∨ R ∴ Q ∨ R

Proof by Truth-Table:
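The remaining rules can be verified the same way as tautologies: each must hold in every one of the eight truth assignments of P, Q, R. A small illustrative Python check:

```python
# Verify the listed inference rules as tautologies: each rule,
# written as "premises -> conclusion", must be True in every row.
from itertools import product

def implies(a, b):
    return (not a) or b

rules = {
    "Modus Tollens":          lambda p, q, r: implies(implies(p, q) and not q, not p),
    "Hypothetical Syllogism": lambda p, q, r: implies(implies(p, q) and implies(q, r),
                                                      implies(p, r)),
    "Disjunctive Syllogism":  lambda p, q, r: implies((p or q) and not p, q),
    "Addition":               lambda p, q, r: implies(p, p or q),
    "Simplification":         lambda p, q, r: implies(p and q, p),
    "Resolution":             lambda p, q, r: implies((p or q) and (not p or r), q or r),
}

for name, rule in rules.items():
    assert all(rule(*row) for row in product([False, True], repeat=3))
    print(name, "is valid")
```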

First-Order Logic in Artificial intelligence


In the topic of Propositional logic, we have seen how to
represent statements using propositional logic. But unfortunately, in
propositional logic, we can only represent facts, which are either
true or false. PL is not sufficient to represent complex sentences
or natural language statements; it has very limited expressive
power. Consider the following sentences, which we cannot represent
using PL logic.

o "Some humans are intelligent", or


o "Sachin likes cricket."
To represent the above statements, PL logic is not sufficient, so we
require a more powerful logic, such as first-order logic.

First-Order logic:
o First-order logic is another way of knowledge representation in
artificial intelligence. It is an extension to propositional logic.
o FOL is sufficiently expressive to represent the natural language
statements in a concise way.
o First-order logic is also known as Predicate logic or First-
order predicate logic. First-order logic is a powerful
language that expresses information about the objects in a
more natural way and can also express the relationships
between those objects.
o First-order logic (like natural language) does not only assume
that the world contains facts like propositional logic but also
assumes the following things in the world:
o Objects: A, B, people, numbers, colors, wars, theories,
squares, pits, wumpus, ......
o Relations: It can be a unary relation such as: red,
round, is adjacent, or an n-ary relation such as: the sister
of, brother of, has color, comes between
o Function: Father of, best friend, third inning of, end
of, ......
o As a natural language, first-order logic also has two main parts:

a. Syntax
b. Semantics

Syntax of First-Order logic:


The syntax of FOL determines which collection of symbols is a
logical expression in first-order logic. The basic syntactic elements
of first-order logic are symbols. We write statements in short-hand
notation in FOL.

Basic Elements of First-order logic:


Following are the basic elements of FOL syntax:

Constant: 1, 2, A, John, Mumbai, cat, ...

Variables: x, y, z, a, b, ...

Predicates: Brother, Father, >, ...

Function: sqrt, LeftLegOf, ...

Connectives: ∧, ∨, ¬, ⇒, ⇔

Equality: ==

Quantifier: ∀, ∃

Atomic sentences:

o Atomic sentences are the most basic sentences of first-order


logic. These sentences are formed from a predicate symbol
followed by a parenthesis with a sequence of terms.
o We can represent atomic sentences as Predicate (term1,
term2, ......, term n).

Example: Ravi and Ajay are brothers: => Brothers(Ravi,


Ajay).
Chinky is a cat: => cat (Chinky).

Complex Sentences:

o Complex sentences are made by combining atomic sentences


using connectives.

First-order logic statements can be divided into two parts:

o Subject: Subject is the main part of the statement.


o Predicate: A predicate can be defined as a relation, which
binds two atoms together in a statement.

Consider the statement "x is an integer." It consists of two
parts: the first part, x, is the subject of the statement, and the
second part, "is an integer," is known as the predicate.

Quantifiers in First-order logic:


o A quantifier is a language element which generates
quantification, and quantification specifies the quantity of
specimens in the universe of discourse.
o These are the symbols that permit us to determine or identify
the range and scope of a variable in a logical expression. There
are two types of quantifier:
a. Universal Quantifier, (for all, everyone, everything)
b. Existential quantifier, (for some, at least one).

Universal Quantifier:
Universal quantifier is a symbol of logical representation, which
specifies that the statement within its range is true for everything or
every instance of a particular thing.

The Universal quantifier is represented by a symbol ∀, which


resembles an inverted A.

Note: In universal quantifier we use implication "→".

If x is a variable, then ∀x is read as:

o For all x
o For each x
o For every x.

Example:
All men drink coffee.

Let x be a variable that refers to a man, so all x can be represented
in UOD as below:

∀x man(x) → drink (x, coffee).

It will be read as: For all x, if x is a man, then x drinks coffee.

Existential Quantifier:
Existential quantifiers are the type of quantifiers, which express that
the statement within its scope is true for at least one instance of
something.

It is denoted by the logical operator ∃, which resembles an inverted
E. When it is used with a predicate variable, it is called an
existential quantifier.

Note: In Existential quantifier we always use AND or Conjunction symbol (∧).

If x is a variable, then existential quantifier will be ∃x or ∃(x). And it


will be read as:

o There exists a 'x.'


o For some 'x.'
o For at least one 'x.'

Example:
Some boys are intelligent.

∃x: boys(x) ∧ intelligent(x)

It will be read as: There are some x where x is a boy who is


intelligent.

Points to remember:
o The main connective for universal quantifier ∀ is implication →.
o The main connective for existential quantifier ∃ is conjunction ∧.

Properties of Quantifiers:
o In universal quantifier, ∀x∀y is similar to ∀y∀x.
o In Existential quantifier, ∃x∃y is similar to ∃y∃x.
o ∃x∀y is not similar to ∀y∃x.

Some Examples of FOL using quantifier:

1. All birds fly.


In this question the predicate is "fly(bird)."
And since all birds fly, it will be represented as follows:

∀x bird(x) → fly(x).

2. Every man respects his parent.


In this question, the predicate is "respect(x, y)," where x = man,
and y = parent.

Since it applies to every man, we will use ∀, and it will be
represented as follows:

∀x man(x) → respects (x, parent).

3. Some boys play cricket.

In this question, the predicate is "play(x, y)," where x = boys, and
y = game. Since there are some boys, we will use ∃, and it will
be represented as:

∃x boys(x) ∧ play(x, cricket).

4. Not all students like both Mathematics and Science.


In this question, the predicate is "like(x, y)," where x = student,
and y = subject.

Since not all students are included, we will use ∀ with negation,
so the representation is:

¬∀ (x) [ student(x) → like(x, Mathematics) ∧ like(x, Science)].

5. Only one student failed in Mathematics.


In this question, the predicate is "failed(x, y)," where x =
student, and y = subject.
Since there is only one student who failed in Mathematics, we will
use the following representation:

∃(x) [ student(x) ∧ failed (x, Mathematics) ∧ ∀ (y)
[¬(x==y) ∧ student(y) → ¬failed (y, Mathematics)]].
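Over a finite universe of discourse, the universal quantifier corresponds to Python's all() and the existential quantifier to any(). The sketch below encodes example 3 with entirely made-up facts, purely to illustrate the translation:

```python
# Quantifiers over a finite universe of discourse:
# forall -> all(), exists -> any(). People and facts are illustrative.
people = ["Ravi", "Ajay", "Chinky"]
is_boy      = {"Ravi": True, "Ajay": True, "Chinky": False}
intelligent = {"Ravi": True, "Ajay": False, "Chinky": False}

# "Some boys are intelligent":  Ex boys(x) AND intelligent(x)
some_boys_intelligent = any(is_boy[x] and intelligent[x] for x in people)

# "All boys are intelligent":   Ax boys(x) -> intelligent(x)
# (the implication is rewritten as "not boys(x) or intelligent(x)")
all_boys_intelligent = all(not is_boy[x] or intelligent[x] for x in people)

print(some_boys_intelligent, all_boys_intelligent)
```

Note how ∃ pairs naturally with ∧ inside any(), while ∀ pairs with →, exactly as the "Points to remember" above state.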

Free and Bound Variables:


The quantifiers interact with variables which appear in a suitable
way. There are two types of variables in First-order logic which are
given below:

Free Variable: A variable is said to be a free variable in a formula if


it occurs outside the scope of the quantifier.

Example: ∀x ∃(y)[P (x, y, z)], where z is a free variable.

Bound Variable: A variable is said to be a bound variable in a


formula if it occurs within the scope of the quantifier.

Example: ∀x ∃y [A(x) ∧ B(y)], here x and y are the bound
variables.

Forward Chaining and backward chaining in


AI
In artificial intelligence, forward and backward chaining is one of the
important topics, but before understanding forward and backward
chaining lets first understand that from where these two terms
came.

Inference engine:
The inference engine is the component of the intelligent system in
artificial intelligence, which applies logical rules to the knowledge
base to infer new information from known facts. The first inference
engine was part of the expert system. Inference engine commonly
proceeds in two modes, which are:

a. Forward chaining
b. Backward chaining

Horn Clause and Definite clause:

Horn clause and definite clause are the forms of sentences, which
enables knowledge base to use a more restricted and efficient
inference algorithm. Logical inference algorithms use forward and
backward chaining approaches, which require KB in the form of
the first-order definite clause.

Definite clause: A clause which is a disjunction of literals


with exactly one positive literal is known as a definite clause or
strict horn clause.

Horn clause: A clause which is a disjunction of literals with at


most one positive literal is known as horn clause. Hence all the
definite clauses are horn clauses.

Example: (¬ p V ¬ q V k). It has only one positive literal k.

It is equivalent to p ∧ q → k.
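This equivalence between the clause form and the implication form can be checked over all eight truth assignments; a short illustrative script:

```python
# Check that the definite clause (~p v ~q v k) is logically
# equivalent to the implication (p ^ q) -> k in every row.
from itertools import product

equivalent = all(
    ((not p) or (not q) or k) == ((not (p and q)) or k)
    for p, q, k in product([False, True], repeat=3)
)
print(equivalent)
```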

A. Forward Chaining
Forward chaining is also known as a forward deduction or forward
reasoning method when using an inference engine. Forward
chaining is a form of reasoning which start with atomic sentences in
the knowledge base and applies inference rules (Modus Ponens) in
the forward direction to extract more data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all
rules whose premises are satisfied, and adds their conclusions to the
known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining:

o It is a bottom-up approach, as it moves from bottom to top.


o It is a process of making a conclusion based on known facts or
data, by starting from the initial state and reaches the goal
state.
o Forward-chaining approach is also called as data-driven as we
reach to the goal using available data.
o Forward -chaining approach is commonly used in the expert
system, such as CLIPS, business, and production rule systems.

Consider the following famous example which we will use in both


approaches:

Example:
"As per the law, it is a crime for an American to sell weapons
to hostile nations. Country A, an enemy of America, has
some missiles, and all the missiles were sold to it by Robert,
who is an American citizen."

Prove that "Robert is criminal."

To solve the above problem, first, we will convert all the above facts
into first-order definite clauses, and then we will use a forward-
chaining algorithm to reach the goal.

Facts Conversion into FOL:

o It is a crime for an American to sell weapons to hostile nations.


(Let's say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) →
Criminal(p) ...(1)
o Country A has some missiles. ∃p Owns(A, p) ∧ Missile(p). It
can be written in two definite clauses by using Existential
Instantiation, introducing the new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
o All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
o Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
o Country A is an enemy of America.
Enemy (A, America) .........(7)
o Robert is American
American(Robert). ..........(8)

Forward chaining proof:


Step-1:

In the first step we will start with the known facts and will choose
the sentences which do not have implications, such
as: American(Robert), Enemy(A, America), Owns(A, T1), and
Missile(T1). All these facts will be represented as below.

Step-2:

At the second step, we will see those facts which infer from
available facts and with satisfied premises.

Rule-(1) does not satisfy premises, so it will not be added in the first
iteration.

Rule-(2) and (3) are already added.

Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert,
T1, A) is added, which infers from the conjunction of Rule-(2) and
Rule-(3).

Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is
added, which infers from Rule-(7).
Step-3:

At step-3, as we can check, Rule-(1) is satisfied with the
substitution {p/Robert, q/T1, r/A}, so we can add
Criminal(Robert), which is inferred from all the available facts. And
hence we reached our goal statement.

Hence it is proved that Robert is Criminal using forward


chaining approach.
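The three steps above can be sketched as a tiny propositional forward-chaining loop. In this sketch the ground rule instances (with the substitutions {p/T1}, {p/A}, and {p/Robert, q/T1, r/A} written out by hand) are supplied directly; a real first-order engine would compute those substitutions itself:

```python
# Minimal propositional sketch of the forward-chaining proof above.
# Fire every rule whose premises are satisfied until nothing new is added.
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

rules = [  # (set of premises, conclusion) -- ground instances, hand-written
    ({"Missile(T1)"}, "Weapon(T1)"),
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),
    ({"Enemy(A,America)"}, "Hostile(A)"),
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Criminal(Robert)" in facts)
```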

B. Backward Chaining:
Backward-chaining is also known as a backward deduction or
backward reasoning method when using an inference engine. A
backward chaining algorithm is a form of reasoning, which starts
with the goal and works backward, chaining through rules to find
known facts that support the goal.

Properties of backward chaining:


o It is known as a top-down approach.
o Backward-chaining is based on modus ponens inference rule.
o In backward chaining, the goal is broken into sub-goal or sub-
goals to prove the facts true.
o It is called a goal-driven approach, as a list of goals decides
which rules are selected and used.
o Backward -chaining algorithm is used in game theory,
automated theorem proving tools, inference engines, proof
assistants, and various AI applications.
o The backward-chaining method mostly used a depth-first
search strategy for proof.

Example:
In backward-chaining, we will use the same above example, and will
rewrite all the rules.

o American(p) ∧ weapon(q) ∧ sells(p, q, r) ∧ hostile(r) →
Criminal(p) ...(1)
o Owns(A, T1) ........(2)
o Missile(T1) ........(3)
o ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missile(p) → Weapon(p) .......(5)
o Enemy(p, America) → Hostile(p) ........(6)
o Enemy(A, America) .........(7)
o American(Robert). ..........(8)

Backward-Chaining proof:
In Backward chaining, we will start with our goal predicate, which
is Criminal(Robert), and then infer further rules.
Step-1:

At the first step, we will take the goal fact. And from the goal fact,
we will infer other facts, and at last, we will prove those facts true.
So our goal fact is "Robert is Criminal," so following is the predicate
of it.

Step-2:

At the second step, we will infer other facts from the goal fact
which satisfy the rules. As we can see in Rule-(1), the goal
predicate Criminal(Robert) is present with the substitution
{Robert/p}. So we will add all the conjunctive facts below the first
level and will replace p with Robert.

Here we can see American (Robert) is a fact, so it is proved


here.

Step-3: At step-3, we will extract the further fact Missile(q), which
infers from Weapon(q), as it satisfies Rule-(5). Weapon(q) is also
true with the substitution of the constant T1 at q.
Step-4:

At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from
Sells(Robert, T1, r), which satisfies Rule-(4) with the substitution
of A in place of r. So these two statements are proved here.
Step-5:

At step-5, we can infer the fact Enemy(A,


America) from Hostile(A) which satisfies Rule- 6. And hence all the
statements are proved true using backward chaining.
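The goal-driven counterpart of the same proof can be sketched as a recursive depth-first procedure: a goal holds if it is a known fact, or if some rule concludes it and all of that rule's premises can themselves be proved. As before, the ground rule instances are hand-written for illustration:

```python
# Minimal propositional sketch of the backward-chaining proof above.
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

rules = [  # (set of premises, conclusion) -- ground instances, hand-written
    ({"Missile(T1)"}, "Weapon(T1)"),
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),
    ({"Enemy(A,America)"}, "Hostile(A)"),
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),
]

def prove(goal):
    # A goal is proved if it is a known fact, or if all premises of
    # some rule concluding it can be proved (depth-first).
    if goal in facts:
        return True
    return any(all(prove(p) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(prove("Criminal(Robert)"))
```

Note that only the rules relevant to the goal are ever examined, which is exactly the "fewer required rules" property discussed below.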
Difference between backward chaining and
forward chaining
Following is the difference between the forward chaining
and backward chaining:

o Forward chaining as the name suggests, start from the known


facts and move forward by applying inference rules to extract
more data, and it continues until it reaches to the goal,
whereas backward chaining starts from the goal, move
backward by using inference rules to determine the facts that
satisfy the goal.
o Forward chaining is called a data-driven inference technique,
whereas backward chaining is called a goal-driven inference
technique.
o Forward chaining is known as the down-up approach, whereas
backward chaining is known as a top-down approach.
o Forward chaining uses breadth-first search strategy,
whereas backward chaining uses depth-first search strategy.
o Forward and backward chaining both applies Modus
ponens inference rule.
o Forward chaining can be used for tasks such as planning,
design process monitoring, diagnosis, and classification,
whereas backward chaining can be used for classification
and diagnosis tasks.
o Forward chaining can be like an exhaustive search, whereas
backward chaining tries to avoid the unnecessary path of
reasoning.
o In forward-chaining there can be various ASK questions from
the knowledge base, whereas in backward chaining there can
be fewer ASK questions.
o Forward chaining is slow as it checks for all the rules, whereas
backward chaining is fast as it checks few required rules only.

S.No. | Forward Chaining | Backward Chaining

1. | Starts from known facts and applies inference rules to extract more data until it reaches the goal. | Starts from the goal and works backward through inference rules to find the required facts that support the goal.

2. | It is a bottom-up approach. | It is a top-down approach.

3. | Known as a data-driven inference technique, as we reach the goal using the available data. | Known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.

4. | Applies a breadth-first search strategy. | Applies a depth-first search strategy.

5. | Tests all the available rules. | Tests only the few required rules.

6. | Suitable for planning, monitoring, control, and interpretation applications. | Suitable for diagnostic, prescription, and debugging applications.

7. | Can generate an infinite number of possible conclusions. | Generates a finite number of possible conclusions.

8. | Operates in the forward direction. | Operates in the backward direction.

9. | Aimed at any conclusion. | Aimed only at the required data.

Reasoning in Artificial intelligence


In previous topics, we have learned various ways of knowledge
representation in artificial intelligence. Now we will learn the various
ways to reason on this knowledge using different logical schemes.

Reasoning:
The reasoning is the mental process of deriving logical conclusion
and making predictions from available knowledge, facts, and beliefs.
Or we can say, "Reasoning is a way to infer facts from existing
data." It is a general process of thinking rationally, to find valid
conclusions.

In artificial intelligence, the reasoning is essential so that the


machine can also think rationally as a human brain, and can perform
like a human.

Types of Reasoning
In artificial intelligence, reasoning can be divided into the following
categories:


o Deductive reasoning
o Inductive reasoning
o Abductive reasoning
o Common Sense Reasoning
o Monotonic Reasoning
o Non-monotonic Reasoning

Note: Inductive and deductive reasoning are the forms of propositional logic.

1. Deductive reasoning:
Deductive reasoning is deducing new information from logically
related known information. It is the form of valid reasoning, which
means the argument's conclusion must be true when the premises
are true.

Deductive reasoning is a type of propositional logic in AI, and it
requires various rules and facts. It is sometimes referred to as top-
down reasoning, and is the opposite of inductive reasoning.
In deductive reasoning, the truth of the premises guarantees the
truth of the conclusion.

Deductive reasoning mostly starts from the general premises to the


specific conclusion, which can be explained as below example.

Example:

Premise-1: All humans eat veggies.

Premise-2: Suresh is human.

Conclusion: Suresh eats veggies.

The general process of deductive reasoning is given below:

2. Inductive Reasoning:
Inductive reasoning is a form of reasoning that arrives at a
conclusion using limited sets of facts by the process of
generalization. It starts with a series of specific facts or data and
reaches a general statement or conclusion.

Inductive reasoning is a type of propositional logic, which is also


known as cause-effect reasoning or bottom-up reasoning.

In inductive reasoning, we use historical data or various premises to


generate a generic rule, for which premises support the conclusion.

In inductive reasoning, premises provide probable supports to the


conclusion, so the truth of premises does not guarantee the truth of
the conclusion.

Example:

Premise: All of the pigeons we have seen in the zoo are white.
Conclusion: Therefore, we can expect all the pigeons to be white.

3. Abductive reasoning:
Abductive reasoning is a form of logical reasoning which starts with
single or multiple observations then seeks to find the most likely
explanation or conclusion for the observation.

Abductive reasoning is an extension of deductive reasoning, but in


abductive reasoning, the premises do not guarantee the conclusion.

Example:

Implication: Cricket ground is wet if it is raining

Axiom: Cricket ground is wet.

Conclusion: It is raining.

4. Common Sense Reasoning


Common sense reasoning is an informal form of reasoning, which
can be gained through experiences.

Common sense reasoning simulates the human ability to make
presumptions about events which occur every day.

It relies on good judgment rather than exact logic and operates


on heuristic knowledge and heuristic rules.

Example:

1. One person can be at one place at a time.


2. If I put my hand in a fire, then it will burn.
The above two statements are the examples of common sense
reasoning which a human mind can easily understand and assume.

5. Monotonic Reasoning:
In monotonic reasoning, once a conclusion is drawn, it will remain
the same even if we add other information to the existing
information in our knowledge base. In monotonic reasoning, adding
knowledge does not decrease the set of propositions that can be
derived.

To solve monotonic problems, we can derive the valid conclusion


from the available facts only, and it will not be affected by new
facts.

Monotonic reasoning is not useful for the real-time systems, as in


real time, facts get changed, so we cannot use monotonic
reasoning.

Monotonic reasoning is used in conventional reasoning systems, and


a logic-based system is monotonic.

Any theorem proving is an example of monotonic reasoning.

Example:

o Earth revolves around the Sun.

It is a true fact, and it cannot be changed even if we add another


sentence in knowledge base like, "The moon revolves around the
earth" Or "Earth is not round," etc.

Advantages of Monotonic Reasoning:

o In monotonic reasoning, each old proof will always remain


valid.
o If we deduce some facts from available facts, then it will
remain valid for always.

Disadvantages of Monotonic Reasoning:


o We cannot represent the real world scenarios using Monotonic
reasoning.
o Hypothesis knowledge cannot be expressed with monotonic
reasoning, which means facts should be true.
o Since we can only derive conclusions from the old proofs, so
new knowledge from the real world cannot be added.

6. Non-monotonic Reasoning
In Non-monotonic reasoning, some conclusions may be invalidated if
we add some more information to our knowledge base.

Logic will be said as non-monotonic if some conclusions can be


invalidated by adding more knowledge into our knowledge base.

Non-monotonic reasoning deals with incomplete and uncertain


models.

"Human perceptions for various things in daily life, "is a general


example of non-monotonic reasoning.

Example: Let's suppose the knowledge base contains the following
knowledge:

o Birds can fly

o Penguins cannot fly

o Pitty is a bird

So from the above sentences, we can conclude that Pitty can fly.

However, if we add another sentence to the knowledge base,
"Pitty is a penguin", which concludes "Pitty cannot fly", it
invalidates the above conclusion.
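The Pitty example can be sketched as a default rule with an exception. The encoding below (the knowledge-base representation and the `can_fly` function are illustrative choices, not a standard formalism) shows the conclusion being withdrawn when new knowledge arrives:

```python
# Sketch of non-monotonic (default) reasoning: "birds can fly" holds
# by default, but the penguin exception overrides it.
def can_fly(animal, kb):
    if ("penguin", animal) in kb:   # exception beats the default
        return False
    return ("bird", animal) in kb   # default: birds fly

kb = {("bird", "Pitty")}
print(can_fly("Pitty", kb))   # conclusion drawn: Pitty can fly

kb.add(("penguin", "Pitty"))  # new knowledge invalidates it
print(can_fly("Pitty", kb))   # conclusion withdrawn
```

Contrast this with monotonic reasoning, where adding a fact could never retract a previously derived conclusion.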

Advantages of Non-monotonic reasoning:


o For real-world systems such as Robot navigation, we can use
non-monotonic reasoning.
o In Non-monotonic reasoning, we can choose probabilistic facts
or can make assumptions.

Disadvantages of Non-monotonic Reasoning:

o In non-monotonic reasoning, the old facts may be invalidated


by adding new sentences.
o It cannot be used for theorem proving.

Probabilistic reasoning in Artificial


intelligence
Uncertainty:
Till now, we have learned knowledge representation using first-order
logic and propositional logic with certainty, which means we were
sure about the predicates. With this knowledge representation, we
might write A→B, which means if A is true then B is true. But
consider a situation where we are not sure whether A is true or not;
then we cannot express this statement. This situation is called
uncertainty.

So to represent uncertain knowledge, where we are not sure about


the predicates, we need uncertain reasoning or probabilistic
reasoning.

Causes of uncertainty:
Following are some leading causes of uncertainty to occur in the real
world.

1. Information occurred from unreliable sources.


2. Experimental Errors
3. Equipment fault
4. Temperature variation
5. Climate change.

Probabilistic reasoning:
Probabilistic reasoning is a way of knowledge representation where
we apply the concept of probability to indicate the uncertainty in
knowledge. In probabilistic reasoning, we combine probability theory
with logic to handle the uncertainty.

We use probability in probabilistic reasoning because it provides a


way to handle the uncertainty that is the result of someone's
laziness and ignorance.

In the real world, there are lots of scenarios, where the certainty of
something is not confirmed, such as "It will rain today," "behavior of
someone for some situations," "A match between two teams or two
players." These are probable sentences for which we can assume
that it will happen but not sure about it, so here we use probabilistic
reasoning.

Need of probabilistic reasoning in AI:

o When there are unpredictable outcomes.


o When specifications or possibilities of predicates become too
large to handle.
o When an unknown error occurs during an experiment.

In probabilistic reasoning, there are two ways to solve problems with


uncertain knowledge:

o Bayes' rule

o Bayesian Statistics
Note: We will learn the above two rules in later chapters.

As probabilistic reasoning uses probability and related terms, so


before understanding probabilistic reasoning, let's understand some
common terms:

Probability: Probability can be defined as a chance that an uncertain


event will occur. It is the numerical measure of the likelihood that an
event will occur. The value of probability always remains between 0
and 1 that represent ideal uncertainties.

o 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
o P(A) = 0 indicates total uncertainty in an event A.
o P(A) = 1 indicates total certainty in an event A.

We can find the probability of an uncertain event by using the below


formula.

o P(¬A) = probability of a not happening event.


o P(¬A) + P(A) = 1.

Event: Each possible outcome of a variable is called an event.

Sample space: The collection of all possible events is called sample


space.

Random variables: Random variables are used to represent the


events and objects in the real world.

Prior probability: The prior probability of an event is probability


computed before observing new information.

Posterior Probability: The probability that is calculated after all


evidence or information has taken into account. It is a combination
of prior probability and new information.
Conditional probability:
Conditional probability is a probability of occurring an event when
another event has already happened.

Let's suppose we want to calculate the probability of event A when
event B has already occurred, "the probability of A under the
conditions of B". It can be written as:

P(A|B) = P(A⋀B) / P(B)

Where P(A⋀B) = Joint probability of A and B
P(B) = Marginal probability of B.

If the probability of A is given and we need to find the probability
of B, then it will be given as:

P(B|A) = P(A⋀B) / P(A)

It can be explained by using a Venn diagram: when B has occurred,
the sample space is reduced to the set B, and we can now calculate
event A, given that event B has already occurred, by dividing the
probability P(A⋀B) by P(B).

Example:

In a class, there are 70% of the students who like English and 40%
of the students who likes English and mathematics, and then what is
the percent of students those who like English also like
mathematics?
Solution:

Let A be the event that a student likes Mathematics, and B be the
event that a student likes English.

P(A|B) = P(A⋀B) / P(B) = 0.40 / 0.70 = 57.14%

Hence, 57.14% of the students who like English also like
Mathematics.
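The calculation follows directly from the conditional-probability formula; a short script to check it (variable names are illustrative):

```python
# Conditional probability for the class example:
# P(A|B) = P(A and B) / P(B), with B = likes English, A = likes Maths.
p_b       = 0.70   # P(student likes English)
p_a_and_b = 0.40   # P(student likes both English and Mathematics)

p_a_given_b = p_a_and_b / p_b
print(f"P(A|B) = {p_a_given_b:.2%}")   # about 57.14%
```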
