
Chapter 4: Knowledge Representation and Reasoning

4.1 Logic and Inference


Logic is the study of correct reasoning. It includes both formal and informal logic; formal logic investigates how conclusions follow from premises. Logic forms the formal foundation of knowledge representation and reasoning. Logic defines:
 Syntax of sentences, which specifies the structure of sentences, and
 Semantics of sentences, which defines the truth of each sentence in each possible world or model.
 Entailment – the relation between a sentence and another sentence that follows from it. In mathematical notation, we write α |= β to mean that the sentence α entails the sentence β. The formal definition of entailment is this: α |= β if and only if, in every model in which α is true, β is also true. Equivalently, α |= β if and only if M(α) ⊆ M(β).
In artificial intelligence, generating conclusions from evidence and facts is termed inference.
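The model-membership definition of entailment (α |= β iff M(α) ⊆ M(β)) can be checked directly by enumerating models. The sketch below, with sentences written as illustrative Python lambdas over a truth assignment, tests entailment by brute force:

```python
from itertools import product

def entails(alpha, beta, symbols):
    """Check alpha |= beta by enumerating every model over the given symbols:
    alpha entails beta iff beta is true in every model where alpha is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if alpha(model) and not beta(model):
            return False  # found a model of alpha that is not a model of beta
    return True

# (P and Q) entails P, but P does not entail (P and Q).
p_and_q = lambda m: m["P"] and m["Q"]
just_p = lambda m: m["P"]
print(entails(p_and_q, just_p, ["P", "Q"]))  # True
print(entails(just_p, p_and_q, ["P", "Q"]))  # False
```

This brute-force check is exponential in the number of symbols; it is only meant to make the definition concrete.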
4.2 Logical Agents
Humans know things, and what they know helps them do things. This claim suggests that human intelligence is achieved by processes of reasoning that operate on internal representations of knowledge. In AI, this approach to intelligence is embodied in knowledge-based agents.
Knowledge-Based Agent (KBA)
An intelligent agent needs knowledge about the real world in order to make decisions and reason effectively.
Knowledge-based agents are agents that can maintain an internal state of knowledge, reason over that knowledge, update it after new observations, and take actions. These agents represent the world in some formal representation and act intelligently.
Knowledge-based agents are composed of two main parts:
 Knowledge-base and Inference system.
Knowledge Base – The knowledge base (KB) is the central component of a knowledge-based agent. It is a collection of sentences (here 'sentence' is a technical term; it is not identical to a sentence in English). These sentences are expressed in a language called a knowledge representation language. A representation language is defined by its syntax, which specifies the structure of sentences, and its semantics, which defines the truth of each sentence in each possible world or model.
The knowledge base of a KBA stores facts about the world. The KB is updated as the agent learns from experience, and the agent acts according to the knowledge it contains.
Inference system – The inference system generates new facts (from old ones) so that the agent can update its KB. An inference system mainly works in two modes, which are:
 Forward chaining
 Backward chaining
The Architecture of Knowledge-Based Agent
The diagram below represents a generalized architecture for a knowledge-based agent. The knowledge-based agent (KBA) takes input by perceiving the environment. The input is passed to the inference engine of the agent, which communicates with the KB to decide on an action according to the knowledge stored there. The learning element of the KBA regularly updates the KB with newly learned knowledge.

A Generic Knowledge-Based Agent
Following is the structural outline of a generic knowledge-based agent program:

function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action ← ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t ← t + 1
    return action
Like all agents, it takes a percept as input and returns an action. The agent maintains a knowledge base, which may initially contain some background knowledge. It also has a counter t, initialized to 0, indicating the time for the whole process.
Each time the agent program is called, it performs three operations.
 First, it TELLs the knowledge base what it perceives.
 Second, it ASKs the knowledge base what action it should perform.
 Third, it TELLs the knowledge base which action was chosen, and the agent
executes the action.
MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent
perceived the given percept at the given time.
MAKE-ACTION-QUERY constructs a sentence that asks what action should be done at
the current time.
MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action
was executed.
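The three TELL/ASK steps above can be sketched in Python. Everything here is illustrative: the rule table stands in for a real inference engine, and the MAKE-* constructors simply build tagged tuples, not logical sentences.

```python
class KnowledgeBasedAgent:
    def __init__(self, rules):
        self.kb = []        # knowledge base: list of (tagged) sentences
        self.t = 0          # time counter, initialized to 0
        self.rules = rules  # toy percept -> action table standing in for inference

    def make_percept_sentence(self, percept):
        return ("percept", percept, self.t)

    def make_action_query(self):
        return ("action?", self.t)

    def make_action_sentence(self, action):
        return ("action", action, self.t)

    def tell(self, sentence):
        self.kb.append(sentence)

    def ask(self, query):
        # Stand-in for logical inference: consult the rule table using
        # the most recent percept recorded in the KB.
        last_percept = next(s[1] for s in reversed(self.kb) if s[0] == "percept")
        return self.rules.get(last_percept, "noop")

    def agent_program(self, percept):
        self.tell(self.make_percept_sentence(percept))  # 1. TELL the percept
        action = self.ask(self.make_action_query())     # 2. ASK for an action
        self.tell(self.make_action_sentence(action))    # 3. TELL the chosen action
        self.t += 1
        return action

agent = KnowledgeBasedAgent({"dirty": "clean", "clean": "move"})
print(agent.agent_program("dirty"))  # clean
```

The percepts and actions ("dirty", "clean", "move") are made-up names for a vacuum-world-style example.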
Approaches to Designing a Knowledge-Based Agent
There are mainly two approaches to building a knowledge-based agent:
1. Declarative Approach: Starting with an empty knowledge base, the agent
designer can TELL sentences one by one until the agent knows how to operate
in its environment.
2. Procedural Approach: encodes desired behaviors directly as program code.

4.3 Propositional Logic
Propositional logic (PL) is the simplest form of logic, in which all statements are made of propositions. A proposition is a declarative statement that is either true or false.
Examples: 5 is a prime number.
The sun rises in the west.
Propositional logic is a technique for representing knowledge in logical and mathematical form.
Syntax of Propositional Logic
The syntax of propositional logic defines the allowable sentences. The atomic
sentences consist of a single proposition symbol. Each such symbol stands for a
proposition that can be true or false. Propositional symbols start with an uppercase
letter and may contain other letters or subscripts, for example: P, Q, R, W1,3 and North.
Complex sentences are constructed from simpler sentences, using parentheses and
logical connectives. There are five connectives in common use:
1. Negation (¬): A sentence such as ¬P is called the negation of P. A literal is either a positive literal (an atomic sentence) or a negative literal (a negated atomic sentence).
2. Conjunction (∧): A sentence with the ∧ connective, such as P ∧ Q, is called a conjunction.
Example:
Rohan is intelligent and hardworking. With P = Rohan is intelligent and Q = Rohan is hardworking, it can be written as P ∧ Q.
3. Disjunction (∨): A sentence with the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
Example:
"Ritika is a doctor or an engineer". Here P = Ritika is a doctor and Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication (⇒): A sentence such as P ⇒ Q is called an implication. Implications are also known as if-then rules.
Example:
If it is raining, then the street is wet. Let P = It is raining and Q = The street is wet; it is represented as P ⇒ Q.
5. Biconditional (⇔): A sentence such as P ⇔ Q is a biconditional sentence.
Example:
If I am breathing, then I am alive, and vice versa. With P = I am breathing and Q = I am alive, it can be represented as P ⇔ Q.
Operator precedence in PL, from highest to lowest: ¬, ∧, ∨, ⇒, ⇔.

Semantics of Propositional Logic


The semantics defines the rules for determining the truth of a sentence with respect to a
particular model. In propositional logic, a model simply fixes the truth value—true or
false—for every proposition symbol. For example, if the sentences in the knowledge
base make use of the proposition symbols P1, P2, and P3, then one possible model is
m1 = {P1 = false, P2 = false, P3 = true}.
The semantics for propositional logic must specify how to compute the truth value of
any sentence, given a model.
Atomic sentences are easy:
 True is true in every model and false is false in every model.
 The truth value of every other proposition symbol must be specified directly in the
model. For example, in the model m1 given earlier, P2 is false.
For complex sentences, we have five rules, which hold for any subsentences P and Q in any model m (here "iff" means "if and only if"):
 ¬P is true iff P is false in m.
 P ∧ Q is true iff both P and Q are true in m.

 P ∨ Q is true iff either P or Q is true in m.
 P ⇒ Q is true unless P is true and Q is false in m.
 P ⇔ Q is true iff P and Q are both true or both false in m.

Truth table for the five logical connectives.
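The five semantic rules above can be written down directly as boolean functions, and the loop below recomputes the truth-table rows. This is only a sketch; the connective names are labels chosen here for display.

```python
from itertools import product

# Each connective from the semantic rules above, as a boolean function of P and Q.
connectives = {
    "¬P":    lambda p, q: not p,
    "P ∧ Q": lambda p, q: p and q,
    "P ∨ Q": lambda p, q: p or q,
    "P ⇒ Q": lambda p, q: (not p) or q,  # true unless P is true and Q is false
    "P ⇔ Q": lambda p, q: p == q,        # true iff P and Q have the same value
}

print("P     Q     " + "  ".join(connectives))
for p, q in product([True, False], repeat=2):
    row = "  ".join(str(f(p, q)) for f in connectives.values())
    print(f"{p!s:5} {q!s:5} {row}")
```

The only row that usually surprises newcomers is the implication: it is true whenever P is false, matching the rule "true unless P is true and Q is false".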


Inference Rules
Inference rules are templates for generating valid arguments. An argument is valid if the conclusion is true whenever the premises are all true. Inference rules are applied to derive proofs in artificial intelligence. A proof is a chain of conclusions that leads to the desired goal.
Types of Inference rules
1. Modus Ponens: The Modus Ponens rule is one of the most important rules of inference. It states that if P and P → Q are true, then we can infer that Q is true. It can be represented as:
P, P → Q
∴ Q

Example: Statement-1: "If I am sleepy then I go to bed" ==> P→ Q


Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.
Hence, if P → Q is true and P is true, then Q is true.
2. Modus Tollens
The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P is also true. It can be represented as:
P → Q, ¬Q
∴ ¬P

Example: Statement-1: "If I am sleepy then I go to bed" ==> P→ Q


Statement-2: "I do not go to bed." ==> ~Q
Statement-3 (conclusion): "I am not sleepy" ==> ~P
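Both rules can be illustrated with a toy Python sketch in which facts are plain strings. This string encoding is purely illustrative, not a general inference engine.

```python
# A rule is a pair (P, Q) standing for "P -> Q"; facts are a set of strings.

def modus_ponens(rule, facts):
    """If P -> Q and P is a known fact, conclude Q."""
    p, q = rule
    return q if p in facts else None

def modus_tollens(rule, facts):
    """If P -> Q and not-Q is a known fact, conclude not-P."""
    p, q = rule
    return f"not {p}" if f"not {q}" in facts else None

rule = ("I am sleepy", "I go to bed")
print(modus_ponens(rule, {"I am sleepy"}))       # I go to bed
print(modus_tollens(rule, {"not I go to bed"}))  # not I am sleepy
```

Returning `None` when a rule does not fire keeps the sketch simple; a real system would instead try other rules.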
4.4 Predicate (First-Order) Logic
In the topic of propositional logic, we have seen how to represent statements using propositional logic. Unfortunately, in propositional logic we can only represent facts that are either true or false. Propositional logic is not sufficient to represent complex sentences or natural language statements; it has very limited expressive power. Consider the following sentence, which we cannot represent using PL:
"Some humans are intelligent."
To represent such statements, propositional logic is not sufficient, so we require a more powerful logic, such as first-order logic.
First-Order logic
First-order logic is another way of representing knowledge in artificial intelligence. It is an extension of propositional logic. First-order logic is sufficiently expressive to represent natural language statements in a concise way. It is also known as predicate logic or first-order predicate logic. First-order logic is a powerful language that expresses information about objects easily and can also express the relationships between those objects.
First-order logic (like natural language) assumes that the world contains not only facts, as propositional logic does, but also the following:
 Objects: people, houses, numbers, theories, Ronald McDonald, colors, baseball games, wars, centuries ...
 Relations: unary relations or properties such as red, round, bogus, prime, multistoried ..., or more general n-ary relations such as brother of, bigger than, inside, part of, has color, occurred after, owns, comes between, ...
 Functions: father of, best friend, third inning of, one more than, beginning of ...
First-order logic is built around objects and relations. First-order logic can be
characterized by its ontological and epistemological commitments. Ontological
commitment – What exists in the world (facts, objects, relations). Epistemological
Commitment – What an agent believes about facts (true/false/unknown)
As a natural language, first-order logic has two main parts: Syntax and Semantics

4.4.1 Syntax
The basic syntactic elements of first-order logic are the symbols that stand for objects,
relations, and functions. The symbols, therefore, come in three kinds: constant symbols,
which stand for objects; predicate symbols, which stand for relations; and function
symbols, which stand for functions.
Atomic Sentences
Atomic sentences are the most basic sentences of first-order logic. They are formed from a predicate symbol followed by a parenthesized list of terms. A term is a logical expression that refers to an object. We can represent atomic sentences as Predicate(term1, term2, ..., termn).
Example: Richard and John are brothers: => Brothers(Richard, John).
Complex Sentences
Complex sentences are made by combining atomic sentences using
connectives.
Example: King(Richard) ∨ King(John)
4.4.2 Quantifiers
A quantifier is a language element that generates quantification; quantification specifies the quantity of specimens in the universe of discourse. Quantifiers are the symbols that determine the range and scope of a variable in a logical expression. There are two types of quantifier:
 Universal Quantifier
 Existential quantifier
Universal Quantifier (∀)
The universal quantifier is a symbol of logical representation which specifies that the statement within its scope is true for every instance of a particular thing. With the universal quantifier we typically use implication, "⇒".
If x is a variable, then ∀x is read as:
 For all x
 For each x
 For every x
For example, "All kings are persons" is written in first-order logic as
∀x King(x) ⇒ Person(x).
This sentence says, "For all x, if x is a king, then x is a person."
Existential Quantifier (∃)
Existential quantifiers express that the statement within their scope is true for at least one instance of something. With the existential quantifier we always use the AND (conjunction) symbol, ∧.
If x is a variable, then existential quantifier will be ∃x or ∃(x). And it will be read as:
 There exists a 'x.'
 For some 'x.'
 For at least one 'x.'
To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John).
Meaning: there exists an x such that x is a crown and x is on John's head.
Exercise: write the following statements in first-order logic:
a) All men drink coffee
b) Some boys are intelligent
Nested Quantifiers
More complex sentences can be expressed using multiple quantifiers. The simplest case is where the quantifiers are of the same type. For example, "Brothers are siblings" can be written as
∀x ∀y Brother(x, y) ⇒ Sibling(x, y).
In other cases we will have mixtures. "Everybody loves somebody" means that for every person, there is someone that person loves:
∀x ∃y Loves(x, y).
On the other hand, to say "There is someone who is loved by everyone," we write
∃y ∀x Loves(x, y)
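The difference between ∀x ∃y and ∃y ∀x can be checked mechanically over a small finite domain. The people and the loves relation below are made-up data for illustration only.

```python
# A tiny finite domain and an illustrative binary relation.
people = ["alice", "bob", "carol"]
loves = {("alice", "bob"), ("bob", "carol"), ("carol", "bob")}

# ∀x ∃y Loves(x, y): everybody loves somebody.
everybody_loves_somebody = all(
    any((x, y) in loves for y in people) for x in people
)

# ∃y ∀x Loves(x, y): there is someone who is loved by everyone.
someone_loved_by_all = any(
    all((x, y) in loves for x in people) for y in people
)

print(everybody_loves_somebody)  # True: each person loves at least one person
print(someone_loved_by_all)      # False: nobody is loved by all three
```

Swapping the quantifiers changes which sentence is true in this model, which is exactly the point of the two English readings above.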
4.5 Inferences in First Order Logic
Inference in First-Order Logic is used to deduce new facts or sentences from existing
sentences. Before understanding the FOL inference rule, let's understand some basic
terminologies used in FOL.

Substitution
Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference systems in first-order logic. Substitution is more complex in the presence of quantifiers. Writing F[a/x] means substituting the constant a for the variable x in F.
Note: First-order logic is capable of expressing facts about some or all objects in the
universe.
Equality
First-order logic uses not only predicates and terms for making atomic sentences, but also equality. The equality symbol specifies that two terms refer to the same object.
Example: Brother(John) = Smith.
In the above example, the object referred to by Brother(John) is the same as the object referred to by Smith. The equality symbol can also be used with negation to state that two terms are not the same object.
Example: ¬(x = y), which is equivalent to x ≠ y.
4.5.1 FOL inference rules
As propositional logic we also have inference rules in first-order logic, so following are
some basic inference rules in FOL:
 Universal Instantiation
 Existential Instantiation
1. Universal Instantiation
Universal instantiation (UI), also called universal elimination, is a valid inference rule. It can be applied multiple times to add new sentences, and the new KB is logically equivalent to the previous KB. Per UI, we can infer any sentence obtained by substituting a ground term (a term without variables) for the universally quantified variable.
The UI rule states that from ∀x P(x) we can infer P(c) for any ground term c in the universe of discourse.

It can be represented as:
∀x P(x)
∴ P(c)
Example 1:
If "Every person likes ice-cream" => ∀x P(x), we can infer "John likes ice-cream" => P(John).
Example 2:
Let's take a famous example, "All kings who are greedy are evil." Our knowledge base contains this detail in FOL form:
∀x King(x) ∧ Greedy(x) → Evil(x)
From this, we can infer any of the following statements using Universal Instantiation:
King(John) ∧ Greedy(John) → Evil(John)
King(Richard) ∧ Greedy(Richard) → Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) → Evil(Father(John))
2. Existential Instantiation:
Existential instantiation, also called existential elimination, is a valid inference rule in first-order logic. It can be applied only once to replace an existential sentence. The new KB is not logically equivalent to the old KB, but it is satisfiable if the old KB was satisfiable.
This rule states that one can infer P(c) from a formula of the form ∃x P(x), for a new constant symbol c. The restriction is that c must be a new term that does not appear elsewhere in the knowledge base.

It can be represented as:
∃x P(x)
∴ P(c), where c is a fresh constant symbol
Example:
From the sentence ∃x Crown(x) ∧ OnHead(x, John),
we can infer Crown(K) ∧ OnHead(K, John), as long as K does not appear elsewhere in the knowledge base.
 The constant symbol K used above is called a Skolem constant.
 Existential instantiation is a special case of the Skolemization process.
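Both instantiation rules can be sketched with a naive string substitution. This encoding is purely illustrative; real systems operate on term structures, not strings.

```python
def subst(sentence, var, term):
    """Naive SUBST: replace every occurrence of the variable name."""
    return sentence.replace(var, term)

# Universal Instantiation: substitute any ground term, as many times as we like.
universal = "King(x) & Greedy(x) => Evil(x)"   # ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
print(subst(universal, "x", "John"))           # King(John) & Greedy(John) => Evil(John)
print(subst(universal, "x", "Father(John)"))

# Existential Instantiation: replace exactly once, with a fresh Skolem constant.
existential = "Crown(x) & OnHead(x, John)"     # ∃x Crown(x) ∧ OnHead(x, John)
kb_symbols = {"John", "Richard"}               # symbols already in the KB
skolem = "K1"
assert skolem not in kb_symbols                # c must be new to the KB
print(subst(existential, "x", skolem))         # Crown(K1) & OnHead(K1, John)
```

Note the asymmetry the text describes: UI may be applied with many different ground terms, while EI is applied once and must invent a name (here `K1`) that the KB has never seen.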
Generalized Modus Ponens
For atomic sentences pi, pi′, and q, where there is a substitution θ such that SUBST(θ, pi′) = SUBST(θ, pi) for all i:

p1′, p2′, . . . , pn′, (p1 ∧ p2 ∧ . . . ∧ pn ⇒ q)
∴ SUBST(θ, q)

For our example:
p1′ is King(John), p1 is King(x)
p2′ is Greedy(y), p2 is Greedy(x)
θ is {x/John, y/John}, q is Evil(x)
SUBST(θ, q) is Evil(John).
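The substitution θ in this example can be computed with a minimal pattern matcher. This toy treats lowercase argument strings as variables, which is an assumption of the encoding rather than standard FOL syntax; full unification, with variables on both sides, is more involved.

```python
# Atoms are tuples like ("King", "x"); lowercase argument strings are variables.

def match(pattern, fact, theta):
    """Extend substitution theta so pattern matches the ground fact, or return None."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    for p_arg, f_arg in zip(pattern[1:], fact[1:]):
        if p_arg.islower():                      # a variable
            if theta.get(p_arg, f_arg) != f_arg:
                return None                      # clashes with an earlier binding
            theta = {**theta, p_arg: f_arg}
        elif p_arg != f_arg:                     # constants must agree exactly
            return None
    return theta

# Build θ = {x/John, y/John} from the two premises, then apply it to q = Evil(x).
theta = match(("King", "x"), ("King", "John"), {})
theta = match(("Greedy", "y"), ("Greedy", "John"), theta)
print(theta)                     # {'x': 'John', 'y': 'John'}
print(("Evil", theta["x"]))      # ('Evil', 'John'), i.e. SUBST(θ, q)
```

Threading `theta` through successive `match` calls mirrors the requirement that one substitution must work for all premises at once.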
4.5.2 Forward Chaining and Backward Chaining
The inference engine is the component of an intelligent system that applies logical rules to the knowledge base to infer new information from known facts. An inference engine commonly proceeds in two modes, which are:
 Forward chaining
 Backward chaining
Horn Clause and Definite clause
Horn clause and definite clause are the forms of sentences, which enables knowledge
base to use a more restricted and efficient inference algorithm.
Definite clause: A clause which is a disjunction of literals with exactly one positive
literal is known as a definite clause or strict horn clause.
Horn clause: A clause which is a disjunction of literals with at most one positive literal is
known as horn clause. Hence all the definite clauses are horn clauses.
Example: (¬ p V ¬ q V k). It has only one positive literal k.
Logical inference algorithms use forward and backward chaining approaches, which
require KB in the form of the first-order definite clause.
A. Forward Chaining
Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. It is a form of reasoning that starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to derive more facts until a goal is reached.
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts.

This process repeats until the problem is solved.
Properties of forward chaining:
 It is a bottom-up approach, as it moves from the bottom (facts) to the top (goal).
 It reaches a conclusion based on known facts or data, starting from the initial state and working toward the goal state.
 The forward-chaining approach is also called data-driven, as we reach the goal using available data.
 Forward chaining is commonly used in expert systems such as CLIPS, and in business and production rule systems.
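The loop described above can be sketched for propositional definite clauses, where each rule is a (premises, conclusion) pair. The rules and facts below (the fire example from section 4.7) are illustrative.

```python
def forward_chain(rules, facts):
    """Fire every rule whose premises are satisfied, adding its conclusion,
    and repeat until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["wood", "oxygen", "spark"], "fire"),
         (["fire"], "smoke")]
print(sorted(forward_chain(rules, ["wood", "oxygen", "spark"])))
# ['fire', 'oxygen', 'smoke', 'spark', 'wood']
```

Note the data-driven behavior: both "fire" and "smoke" get derived even if we only cared about one of them, which is exactly why forward chaining is called data-driven.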
B. Backward Chaining:
Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward-chaining algorithm is a form of reasoning that starts with the goal and works backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:
 It is known as a top-down approach.
 Backward chaining is based on the Modus Ponens inference rule.
 In backward chaining, the goal is broken into sub-goals to prove the facts true.
 It is called a goal-driven approach, as the list of goals decides which rules are selected and used.
 Backward-chaining algorithms are used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
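Under the same illustrative (premises, conclusion) encoding, backward chaining recurses from the goal toward known facts. For brevity this sketch has no cycle check, so it assumes the rule set is acyclic.

```python
def backward_chain(rules, facts, goal):
    """Prove the goal by finding a rule that concludes it and recursively
    proving each of that rule's premises (goal-driven search)."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(rules, facts, p) for p in premises
        ):
            return True
    return False

rules = [(["wood", "oxygen", "spark"], "fire"),
         (["fire"], "smoke")]
facts = {"wood", "oxygen", "spark"}
print(backward_chain(rules, facts, "smoke"))  # True
print(backward_chain(rules, facts, "rain"))   # False
```

Compare with the forward-chaining sketch: here only the rules relevant to the queried goal are ever examined, which is why the approach is called goal-driven.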
4.6 Knowledge Representation
Humans are best at understanding, reasoning, and interpreting knowledge. Humans know things, and according to their knowledge they perform various actions in the real world. How machines do these things is the subject of knowledge representation and reasoning. Hence we can describe knowledge representation as follows:
 It describes how we can represent knowledge in artificial intelligence.

 Knowledge representation is not just storing data in a database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.
 Knowledge representation and reasoning (KR, KRR) is the part of artificial intelligence concerned with how AI agents think and how thinking contributes to their intelligent behavior.
 It is responsible for representing information about the real world so that a computer can understand it and can utilize it to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
4.6.1 Types of knowledge
Knowledge: Knowledge is awareness or familiarity gained by experience of facts, data, and situations. Following are the types of knowledge in artificial intelligence:
 Procedural knowledge: Describes how to do things, provides a set of directions of
how to perform certain tasks, e.g., how to drive a car.
 Declarative knowledge: Describes objects rather than processes; what is known about a situation, e.g., it is sunny today, and cherries are red.
 Meta knowledge: Knowledge about knowledge, e.g., the knowledge that blood
pressure is more important for diagnosing a medical condition than eye color.
 Heuristic knowledge: Rule-of-thumb, e.g. if I start seeing shops, I am close to the
market.
 Structural knowledge: Describes structures and their relationships. e.g. how the
various parts of the car fit together to make a car, or knowledge structures in terms
of concepts, sub concepts, and objects.

4.6.2 AI knowledge Cycle
An Artificial intelligence system has the following components for displaying intelligent
behavior:
 Perception
 Learning
 Knowledge Representation and Reasoning
 Planning
 Execution
The diagram below shows how an AI system can interact with the real world and which components help it to show intelligence. An AI system has a Perception component, by which it retrieves information from its environment; the input can be visual, audio, or another form of sensory input.

The Learning component is responsible for learning from the data captured by the Perception component. In the complete cycle, the main components are Knowledge Representation and Reasoning. These two components are involved in showing human-like intelligence in machines. They are independent of each other but also coupled together. Planning and Execution depend on the analysis of Knowledge Representation and Reasoning.
4.6.3 Properties of Knowledge Representation system:
A good knowledge representation system must possess the following properties.
 Representational Accuracy:
The KR system should be able to represent all kinds of required knowledge.
 Inferential Adequacy:
The KR system should be able to manipulate the representational structures to produce new knowledge corresponding to the existing structures.
 Inferential Efficiency:
The ability to direct the inference mechanism in the most productive directions by storing appropriate guides.
 Acquisitional Efficiency:
The ability to acquire new knowledge easily using automatic methods.
4.7 Knowledge Reasoning
Reasoning is the process of deriving logical conclusions from given facts. Reasoning is referred to as "the process of working with knowledge, facts and problem-solving strategies to draw conclusions".
Types of Reasoning
a) Deductive Reasoning
Deductive reasoning, as the name implies, is based on deducing new information from
logically related known information. A deductive argument offers assertions that lead
automatically to a conclusion.
Example: If there is dry wood, oxygen and a spark, there will be a fire.
Given: There is dry wood, oxygen and a spark
We can deduce: There will be a fire.
b) Inductive Reasoning
Inductive reasoning is based on forming, or inducing, a 'generalization' from a limited set of observations. Example:
Observation: All the crows that I have seen in my life are black.
Conclusion: All crows are black.
Thus the essential difference between deductive and inductive reasoning is that inductive reasoning is based on experience while deductive reasoning is based on rules; hence a deductive conclusion is guaranteed to be correct whenever its premises are true, while an inductive conclusion may turn out to be false.

c) Abductive Reasoning
Abductive reasoning is a form of logical reasoning which starts with one or more observations and then seeks the most likely explanation or conclusion for them. Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning the premises do not guarantee the conclusion.
Example:
Implication: She carries an umbrella if it is raining
Axiom: she is carrying an umbrella
Conclusion: It is raining
This conclusion might be false, because there could be other reasons she is carrying an umbrella; for instance, she might be carrying it to protect herself from the sun.
d) Common-Sense Reasoning
Common-sense reasoning is an informal form of reasoning that uses rules gained
through experience or what we call rules-of-thumb. It operates on heuristic knowledge
and heuristic rules.
Example:
 One person can be at one place at a time.
 If I put my hand in a fire, then it will burn.
e) Non-Monotonic Reasoning
Non-Monotonic reasoning is used when the facts of the case are likely to change after
some time.
Example:
Rule: IF the wind blows, THEN the curtains sway
When the wind stops blowing, the curtains should no longer sway. With monotonic reasoning, however, this would not happen: the fact that the curtains are swaying would be retained even after the wind stopped blowing. In non-monotonic reasoning, we have a "truth maintenance system". It keeps track of what caused a fact to become true; if the cause is removed, the fact is also removed (retracted).
4.8 Probabilistic Reasoning
So far, we have learned knowledge representation using first-order logic and propositional logic with certainty, meaning we were sure about the predicates. With this knowledge representation we might write A → B, meaning if A is true then B is true. But consider a situation where we are not sure whether A is true or not; then we cannot express this statement. This situation is called uncertainty.
To represent uncertain knowledge, where we are not sure about the predicates, we need uncertain reasoning or probabilistic reasoning.
Probabilistic reasoning is a way of knowledge representation where we apply the
concept of probability to indicate the uncertainty in knowledge. In probabilistic
reasoning, we combine probability theory with logic to handle the uncertainty.
Need of probabilistic reasoning in AI
 When there are unpredictable outcomes.
 When specifications or possibilities of predicates become too large to handle.
 When an unknown error occurs during an experiment.
As probabilistic reasoning uses probability and related terms, let's first understand some common terms:
Probability: Probability can be defined as the chance that an uncertain event will occur. The value of a probability always lies between 0 and 1, where 0 represents impossibility and 1 represents certainty.
We can find the probability of an uncertain event by using the formula:
P(A) = Number of outcomes favourable to A / Total number of possible outcomes
 P(¬A) = probability of event A not occurring.
 P(¬A) + P(A) = 1.
Event: Each possible outcome of a variable is called an event.
Sample space: The collection of all possible events is called sample space.
Random variables: Random variables are used to represent the events and objects in
the real world.
Prior probability: The prior probability of an event is probability computed before
observing new information.
Posterior probability: The probability calculated after all evidence or information has been taken into account. It is a combination of the prior probability and new information.

Conditional Probability
Conditional probability is the probability of an event occurring given that another event has already happened.
Suppose we want to calculate the probability of event A when event B has already occurred, "the probability of A under the condition B". It can be written as:
P(A|B) = P(A⋀B) / P(B)
where P(A⋀B) = joint probability of A and B, and
P(B) = marginal probability of B.
If instead we need the probability of B given A, it is:
P(B|A) = P(A⋀B) / P(A)
Example:
In a class, 70% of the students like English and 40% of the students like both English and Mathematics. What percentage of the students who like English also like Mathematics?
Solution:
Let A be the event that a student likes Mathematics and B the event that a student likes English.
P(A|B) = P(A⋀B) / P(B) = 0.4 / 0.7 ≈ 0.57
Hence, about 57% of the students who like English also like Mathematics.
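Checking the arithmetic of the example above is a one-line computation:

```python
# P(B) = 0.7 (likes English), P(A ∧ B) = 0.4 (likes both),
# so P(A|B) = P(A ∧ B) / P(B).
p_b = 0.70          # P(likes English)
p_a_and_b = 0.40    # P(likes English and Mathematics)

p_a_given_b = p_a_and_b / p_b
print(round(p_a_given_b, 2))  # 0.57, i.e. about 57%
```

Note that conditioning on B rescales the joint probability by P(B), which is why the answer (57%) is larger than the raw 40% joint figure.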
4.9 Bayesian Reasoning
In probability theory, Bayes' theorem (Bayes' rule, Bayes' law, or Bayesian reasoning)
relates the conditional probability and marginal probabilities of two random events.
It is a way to calculate the value of P(B|A) with the knowledge of P(A|B).
Bayes' rule or Bayes' theorem:
P(B|A) = P(A|B) × P(B) / P(A)
Example:
A doctor is aware that the disease meningitis causes a patient to have a stiff neck 80% of the time. He is also aware of some more facts, which are given as follows:
The known probability that a patient has meningitis is 1/30,000.
The known probability that a patient has a stiff neck is 2%.
What is the probability that a patient with a stiff neck has meningitis?
Let a be the proposition that the patient has a stiff neck and b the proposition that the patient has meningitis. Then we can calculate as follows:
P(a|b) = 0.8
P(b) = 1/30000
P(a)= 0.02
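Plugging these numbers into Bayes' rule, P(b|a) = P(a|b) × P(b) / P(a), gives a small posterior probability; the computation below checks the arithmetic:

```python
# Bayes' rule with the numbers from the example above.
p_a_given_b = 0.8      # P(stiff neck | meningitis)
p_b = 1 / 30000        # P(meningitis), the prior
p_a = 0.02             # P(stiff neck)

p_b_given_a = p_a_given_b * p_b / p_a
print(p_b_given_a)     # ≈ 0.00133, i.e. about 1 patient in 750
```

Even though meningitis almost always causes a stiff neck (80%), the posterior is tiny because the prior P(b) = 1/30,000 is so small; this is the classic point of the example.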

4.10 Knowledge Based System


A knowledge-based system (KBS) is a computer system which generates and utilizes knowledge from different sources, data and information. These systems aid in solving problems, especially complex ones, by utilizing artificial intelligence concepts. They are mostly used in problem-solving procedures and to support human learning, decision making and actions. Knowledge-based systems are considered a major branch of artificial intelligence. They are capable of making decisions based on the knowledge residing in them, and can understand the context of the data being processed. Knowledge-based systems broadly consist of an inference engine and a knowledge base: the inference engine acts as the search mechanism, and the knowledge base acts as the knowledge repository. Learning is an essential component of knowledge-based systems, and simulating learning helps improve these systems.
Knowledge-based systems can be broadly classified as case-based systems, intelligent tutoring systems, expert systems, hypertext manipulation systems and databases with intelligent user interfaces.

Compared to traditional computer-based information systems, knowledge-based systems have many advantages. They can provide efficient documentation and handle large amounts of unstructured data in an intelligent fashion. Knowledge-based systems can aid expert decision making, allow users to work at a higher level of expertise, and promote productivity and consistency. These systems are considered very useful when expertise is unavailable, or when data needs to be stored for future use or grouped with different expertise on a common platform, thus providing large-scale integration of knowledge. Finally, knowledge-based systems are capable of creating new knowledge by referring to stored content.

