Principles of Artificial Intelligence (21AI54)
Ms. Shruthi U
Assistant Professor
Department of AI&ML
R N S Institute of Technology
1
Text Book
Title: Artificial Intelligence: A Modern Approach
Author: Stuart Russell and Peter Norvig
Publication: 3rd Edition, Pearson, 2015
2
Module 1
Introduction to Artificial Intelligence
• What is AI ?
• Foundations of Artificial Intelligence
• History of Artificial Intelligence
• Applications of AI
Intelligent Agents:
• Agents and environment
• Concept of Rationality
• The nature of environment
• The structure of agents.
3
Introduction to Artificial Intelligence: Definitions,
Foundations, History and Applications
4
Introduction to Artificial Intelligence
• Homo sapiens: the name is Latin for "wise man".
• Philosophy of AI - “Can a machine think and behave like humans do?”
• In simple words, Artificial Intelligence is a way of making a computer, a computer-
controlled robot, or a piece of software think intelligently, in a manner similar to the way
intelligent humans think.
• Artificial intelligence (AI) is an area of computer science that emphasizes the creation of
intelligent machines that work and react like humans.
• AI is accomplished by studying how the human brain thinks, and how humans learn, decide,
and work while trying to solve a problem, and then using the outcomes of this study as a
basis for developing intelligent software and systems.
5
What is AI?
Views of AI fall into four categories:
1. Thinking humanly
2. Thinking rationally
3. Acting humanly
4. Acting rationally
The textbook advocates "acting rationally"
6
Definitions of AI
Systems that think like humans   |   Systems that think rationally
Systems that act like humans     |   Systems that act rationally

– Why "rationally"? If our system can be more rational than humans in some cases, why not?
– Why "act"? The focus on action avoids philosophical issues such as "is the system conscious?", etc.
• We will follow the "act rationally" approach
  – The distinction may not be that important:
    • acting rationally/like a human presumably requires (some sort of) thinking rationally/like a human
    • humans are much more rational anyway in complex domains
7
Thinking Humanly
  "The exciting new effort to make computers think … machines with minds, in the full and literal sense."
  "Activities that we associate with human thinking, activities such as decision-making, problem solving, learning …"

Thinking Rationally
  "The study of mental faculties through the use of computational models."
  "The study of the computations that make it possible to perceive, reason and act."

Acting Humanly
  "The art of creating machines that perform functions that require intelligence when performed by people."
  "The study of how to make computers do things at which, at the moment, people are better."

Acting Rationally
  "Computational Intelligence is the study of the design of intelligent agents."
  "AI … is concerned with intelligent behavior in artifacts."
8
Acting humanly: Turing Test
• Turing (1950) published "Computing Machinery and Intelligence":
• "Can machines think?" or "Can machines behave
intelligently?"
• Operational test for intelligent behavior: the Imitation Game
A computer passes the test if a human interrogator, after posing some
written questions, cannot tell whether the written responses come from a
person or from a machine.
• Suggested major components of AI: knowledge,
reasoning, language understanding, learning
9
The conversation between interrogator and computer would be like this:
C(Interrogator): Are you a computer?
A(Computer): No
C: Multiply one large number by another: 158745887 * 56755647
A: (After a long pause, an incorrect answer!)
C: Add 5478012 and 4563145
A: (Pauses for about 20 seconds and then gives the answer) 10041157
10
Yet AI researchers have devoted little effort to passing the Turing test, believing that it
is more important to study the underlying principles of intelligence than to duplicate
an exemplar.
The quest for “artificial flight” succeeded when the Wright brothers and others stopped
imitating birds and learned about aerodynamics.
Aeronautical engineering texts do not define the goal of their field as making "machines
that fly so exactly like pigeons that they can fool even other pigeons."
11
Acting humanly: Turing Test
The computer would need to possess the following capabilities:
• Natural Language Processing: To enable it to communicate successfully
in English.
• Knowledge representation: To store what it knows or hears.
• Automated reasoning: To use the stored information to answer
questions and to draw new conclusions.
• Machine Learning: To adapt to new circumstances and to detect and
extrapolate patterns.
To pass the Total Turing Test
• Computer vision: To perceive objects.
• Robotics: To manipulate objects and move about.
12
Thinking humanly: Cognitive Modeling
• If we are going to say that a given program thinks like a human,
we must have some way of determining how humans think.
• We need to get inside the actual working of human minds.
• There are 3 ways to do it:
1. Through introspection
Trying to catch our own thoughts as they go
2. Through psychological experiments
Observing a person in action
3. Through brain imaging
Observing the brain in action
13
Thinking humanly: cognitive modeling
⚫Comparison of the trace of computer program
reasoning steps to traces of human subjects solving the
same problem.
⚫Cognitive Science brings together computer models
from AI and experimental techniques from psychology
to try to construct precise and testable theories of the
working of the human mind.
⚫Now distinct from AI
⚫AI and Cognitive Science fertilize each other in the areas of
vision and natural language.
14
Thinking humanly: Cognitive Modeling
• Once we have a sufficiently precise theory of the mind, it
becomes possible to express the theory as a computer
program.
• If the program’s input-output behavior matches
corresponding human behavior, that is evidence that the
program’s mechanisms could also be working in humans.
• An early example is GPS, the "General Problem Solver", developed by
Allen Newell and Herbert Simon.
15
Thinking Rationally:
“Laws of Thought"
• Aristotle was one of the first to attempt to codify "right
thinking", that is, irrefutable reasoning processes.
• His syllogisms provided patterns for argument structures that
always yielded correct conclusions when the given premises are
correct.
Example – Socrates is a man
All men are mortal
Therefore
Socrates is mortal
This line of study initiated the field called logic.
16
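To see how such a syllogism can be applied mechanically, here is a minimal Python sketch (illustrative only: the fact/rule representation is an assumption made for this example, not something from the textbook):

```python
# A minimal sketch of mechanical syllogistic inference (illustrative only).
# Facts and rules are simple tuples; applying the rule "all X are Y" to a
# known membership fact derives a new fact.

facts = {("man", "Socrates")}        # Socrates is a man
rules = [("man", "mortal")]          # all men are mortal

def forward_chain(facts, rules):
    """Repeatedly apply 'all X are Y' rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for category, implied in rules:
            for cat, individual in list(derived):
                if cat == category and (implied, individual) not in derived:
                    derived.add((implied, individual))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# contains ('mortal', 'Socrates')  ->  Socrates is mortal
```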
Two main obstacles:
1. It is not easy to take informal knowledge and state it in the formal terms
required by logical notation, particularly when the knowledge is less than 100%
certain.
2. There is a big difference between being able to solve a problem “in
principle” and doing so in practice.
17
Acting Rationally:
Rational Agent
• An agent is an entity that perceives and acts
• A system is rational if it does the “right thing,” given what it
knows.
• This course is about designing rational agents
• Rational agent is one that acts so as to achieve the best
outcome or, when there is uncertainty, the best expected
outcome.
• Rational behavior: doing the right thing
• The right thing: that which is expected to maximize goal
achievement, given the available information
• We will concentrate on general principles of rational agents and on components for constructing them.
18
Behave Rationally……….
• What does "behave rationally" mean for a person/system:
  – Take the right/best action to achieve the goals, based on his/its knowledge and belief
Example: Assume I don't like to get wet in the rain (my goal), so I bring an umbrella (my action). Do I
behave rationally?
  – The answer depends on my knowledge and belief
  – If I've heard the forecast for rain and I believe it, then bringing the umbrella is rational.
  – If I've not heard the forecast for rain and I do not believe that it is going to rain, then
    bringing the umbrella is not rational
• “Behave rationally” does not always achieve the goals successfully
Example:
• My goals – (i) do not get wet if it rains; (ii) do not look stupid (such as bringing an umbrella
when it is not raining)
• My knowledge/belief – weather forecast for rain and I believe it
• My rational behaviour – bring an umbrella
• The outcome of my behaviour: if it rains, then my rational behaviour achieves both goals; if it does
not rain, then my rational behaviour fails to achieve the 2nd goal
• The successfulness of “behave rationally” is limited by my knowledge and belief
19
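The umbrella example can be written as a tiny expected-outcome calculation. The sketch below is illustrative; the probabilities and utility numbers are made up purely to show how the choice depends on the agent's belief about rain:

```python
# Illustrative sketch of "behave rationally" under belief: choose the action
# with the best expected outcome given the agent's belief about rain.
# All numbers below are invented for illustration.

def expected_utility(action, p_rain):
    # Goal 1: do not get wet if it rains.
    # Goal 2: do not look silly carrying an umbrella when it does not rain.
    if action == "bring_umbrella":
        return p_rain * 1.0 + (1 - p_rain) * 0.7   # dry either way, small cost if no rain
    else:                                          # leave umbrella at home
        return p_rain * 0.0 + (1 - p_rain) * 1.0   # wet if it rains, fine otherwise

def choose(p_rain):
    actions = ["bring_umbrella", "leave_umbrella"]
    return max(actions, key=lambda a: expected_utility(a, p_rain))

print(choose(p_rain=0.8))   # believes the forecast  -> 'bring_umbrella'
print(choose(p_rain=0.1))   # does not expect rain   -> 'leave_umbrella'
```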
Acting rationally: rational agent
•Achieving perfect rationality – always doing the right
thing – is not feasible in complicated environments.
•Limited rationality – acting appropriately when there is
not enough time to do all the computations one might
like.
20
Rational agents
⚫An agent is an entity that perceives and acts
⚫Abstractly, an agent is a function from percept histories to actions:
      f : P* → A
⚫For any given class of environments and tasks, we seek the agent (or
class of agents) with the best performance.
⚫Caveat: computational limitations make perfect rationality
unachievable
→ design the best program for the given machine resources
21
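As a concrete illustration of f : P* → A, here is a minimal Python sketch of an agent function over a two-location vacuum world; the percept and action names are assumptions made for this example:

```python
# A minimal sketch of the abstract agent function f: P* -> A.
# The percepts and actions below are invented for illustration
# (a trivial two-location vacuum world with locations A and B).

def agent_function(percept_history):
    """Maps the full percept history to an action; only the latest percept
    is actually used here, but the signature matches f: P* -> A."""
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

history = [("A", "Dirty")]
print(agent_function(history))     # Suck
history.append(("A", "Clean"))
print(agent_function(history))     # Right
```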
Definition of AI
• Existing definitions advocate everything from replicating human intelligence
to simply solving knowledge-intensive tasks.
Examples:
“Artificial Intelligence is the design, study and construction of computer
programs that behave intelligently.” -- Tom Dean.
“Artificial Intelligence is the enterprise of constructing a physical symbol
system that can reliably pass the Turing test.” -- Matt Ginsberg.
22
AI
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think and learn like humans.
The goal of AI is to develop systems that can perform tasks that normally require human
intelligence, such as visual perception, speech recognition, decision-making, and language
translation.
AI can be categorized into
two types:
1. Narrow AI (Weak AI): This type of AI is designed and trained for
a particular task. It excels in performing specific tasks but lacks the
broad cognitive abilities of human intelligence.
Examples include virtual personal assistants like Siri and Alexa.
2. General AI (Strong AI): This type of AI possesses the ability to
understand, learn, and apply knowledge across different domains,
similar to human intelligence. General AI is still a theoretical concept
and has not been fully realized.
Foundations of Artificial Intelligence
Below are the disciplines that contributed ideas, viewpoints and techniques to
AI:
1. Philosophy
2. Mathematics
3. Economics
4. Neuroscience
5. Psychology
6. Computer Engineering
7. Control theory
8. Linguistics
25
Foundations of Artificial Intelligence
• Philosophy
Can formal rules be used to draw valid conclusions?
How does the mind arise from a physical brain?
Where does knowledge come from?
How does knowledge lead to action?
Aristotle was the first to formulate a precise set of laws governing the
rational part of the mind. He developed an informal system of syllogisms for
proper reasoning, which in principle allowed one to generate conclusions
mechanically, given initial premises.
All dogs are animals; all animals have four legs; therefore all dogs have
four legs
Descartes was a strong advocate of the power of reasoning in
understanding the world, a philosophy now called rationalism.
26
Foundations of Artificial Intelligence
• Mathematics
What are the formal rules to draw valid conclusions?
What can be computed?
How do we reason with uncertain information?
Formal representation and proof algorithms: Propositional logic
Computation: Turing tried to characterize exactly which functions are
computable - capable of being computed.
(un)decidability: Gödel's incompleteness theorem showed that in any sufficiently strong formal theory, there
are true statements that are undecidable, i.e. they have no proof within the theory.
“ a line can be extended infinitely in both directions”
(in)tractability: A problem is called intractable if the time required to solve
instances of the problem grows exponentially with the size of the instance.
probability: Predicting the future.
27
Foundations of Artificial Intelligence
• Economics
How should we make decisions so as to maximize payoff?
Economics is the study of how people make choices that lead to
preferred outcomes (utility).
Decision theory: It combines probability theory with utility theory, and
provides a formal and complete framework for decisions made under
uncertainty.
• Neuroscience
How do brains process information?
Neuroscience is the study of the nervous system, particularly the
brain.
The brain consists of nerve cells, or neurons – about 10^11 of them.
Neurons can be viewed as computational units.
28
Foundations of Artificial Intelligence
• Psychology
How do humans and animals think and act?
• Computer engineering
How can we build an efficient computer?
Building fast computers
• Control theory
How can artifacts operate under their own control?
Design systems that maximize an objective function over time
• Linguistics
How does language relate to thought?
knowledge representation, grammar
29
History of AI
• 1943 McCulloch & Pitts developed Boolean circuit model of brain
• 1950 Turing's "Computing Machinery and Intelligence"
• 1956 Dartmouth meeting: "Artificial Intelligence" adopted
• 1952—69 The "Look, Ma, no hands!" era (McCarthy's phrase): before this, people
thought computers could do only arithmetic and no more, so it was
astonishing whenever a program did something even remotely
clever.
• 1950s Early AI programs, including
» Samuel's checkers program,
» Newell & Simon's Logic Theorist : It was the first program
deliberately engineered to mimic the problem solving skills of a
human being and is called "the first artificial
intelligence program". It would eventually prove 38 of the first 52
theorems in Whitehead and Russell's Principia Mathematica
» Gelernter's Geometry Engine
• 1965 Robinson's complete algorithm for logical reasoning
30
History of AI
• 1966—73 AI discovers computational complexity; neural network research almost disappears
• 1969—79 Early development of knowledge-based systems
• 1980-- AI becomes an industry
• 1986-- Neural networks return to popularity
• 1987-- AI becomes a science
• 1995-- The emergence of intelligent agents
31
History of AI
32
Ref : https://www.javatpoint.com/history-of-artificial-intelligence
Applications of AI
Applications:
• Deep Blue (a chess-playing computer) defeated the world
chess champion Garry Kasparov in 1997
• During the 1991 Gulf War, US forces deployed an AI logistics
planning and scheduling program that involved up to 50,000
vehicles, cargo, and people
Planning – How to use resources? Scheduling –
When to use the resources?
• NASA's on-board autonomous planning program
controlled the scheduling of operations for a spacecraft
• Google duplex
• The GPS, developed in 1957 by Allen Newell and Herbert
Simon, embodied a grandiose vision
33
AGENTS
34
Agent Terminology
Performance Measure of Agent − It is the criterion that
determines how successful an agent is.
Behavior of Agent − It is the action that an agent performs after
any given sequence of percepts.
Percept − It is the agent's perceptual input at a given instant.
Percept Sequence − It is the history of all that an agent has
perceived till date.
Agent Function − It is a map from the percept sequence to an
action.
35
Rationality
Rationality is nothing but status of being reasonable, sensible, and having
good sense of judgment.
Rationality is concerned with expected actions and results depending
upon what the agent has perceived. Performing actions with the aim of
obtaining useful information is an important part of rationality.
What is Ideal Rational Agent?
An ideal rational agent is one that is capable of taking the expected
actions to maximize its performance measure, on the basis of −
• Its percept sequence
• Its built-in knowledge base
Rationality of an agent depends on the following −
• The performance measures, which determine the degree of success.
• Agent’s Percept Sequence till now.
• The agent’s prior knowledge about the environment.
• The actions that the agent can carry out.
A rational agent always performs the right action, where the right action
means the action that causes the agent to be most successful for the given
percept sequence. The task the agent solves is characterized by its
Performance Measure, Environment, Actuators, and Sensors (PEAS).
36
Examples of Agent:-
An agent is anything that can perceive its environment through sensors and acts upon that
environment through effectors.
• A human agent has sensory organs such as eyes, ears, nose, tongue and skin, which act as
the sensors, and other organs such as hands, legs and mouth, which act as effectors.
• A robotic agent has cameras and infrared range finders for sensors, and various
motors and actuators for effectors.
• A software agent has encoded bit strings as its programs and actions.
37
Intelligent Agents: Agents and
environment, Concept of
Rationality, The nature of
environment, The structure of
agents
Intelligent Agents:
In the context of artificial intelligence, an intelligent agent is a system that perceives its
environment, processes information and takes actions to achieve specific goals.
The concept of intelligent agents is fundamental to understanding how AI systems interact
with their surroundings.
Let's delve into key
components related to
intelligent agents:
Agents and Environment:
•Agent: An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment through
effectors. In the context of AI, agents are typically computer programs
or systems.
•Environment: The environment is the external context in which the
agent operates. It includes everything outside the agent that can
potentially affect or be affected by the agent's actions.
Intelligent agent
40
Rationality
What is rational at any given time depends on four
things:
•The performance measure that defines the criterion of
success.
•The agent's prior knowledge of the environment.
•The actions that the agent can perform.
•The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent
should select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the
agent has.
42
Nature of Environment
Specifying the task environment
PEAS – Performance, Environment, Actuators, Sensors.
Properties of Task Environment
Agents in Artificial
Intelligence
Artificial intelligence is defined as the study of rational agents. A rational
agent could be anything that makes decisions, such as a person, firm,
machine, or software. It carries out the action with the best outcome after
considering past and current percepts (the agent's perceptual inputs at a given
instance).
An AI system is composed of an agent and its environment. The agents
act in their environment. The environment may contain other agents. An
agent is anything that can be viewed as :
• perceiving its environment through sensors and
• acting upon that environment through actuators
47
The Structure of Intelligent Agents
Agent’s structure can be viewed as −
• Agent = Architecture + Agent Program
• Architecture = the machinery that an agent
executes on.
• Agent Program = an implementation of an agent
function.
48
Agent Program
The Structure of Intelligent Agents
To understand the structure of Intelligent Agents, we should
be familiar with Architecture and Agent Program.
Architecture is the machinery that the agent executes on. It is
a device with sensors and actuators, for example : a robotic
car, a camera, a PC.
Agent program is an implementation of an agent function.
An agent function is a map from the percept sequence(history
of all that an agent has perceived till date) to an action.
50
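One simple (if impractical) way to turn an agent function into an agent program is to look the entire percept sequence up in a table. The Python sketch below is illustrative; the table entries and percept format are assumptions for a two-location vacuum world:

```python
# A sketch of the distinction between agent function and agent program:
# the table below implements the agent function by lookup on the whole
# percept history. Table entries are illustrative.

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []          # the program's internal record of the percept sequence

def table_driven_agent(percept):
    """Agent program: appends the new percept and looks up the whole history."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))      # Suck
print(table_driven_agent(("A", "Clean")))      # Suck (history now has two percepts)
```

Note that the table grows with every possible percept sequence, which is why this design is only a conceptual device, not a practical agent program.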
Types of Agents
Agents can be grouped into five classes based on
their degree of perceived intelligence and capability.
All these agents can improve their performance and
generate better action over time. These are
given below:
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
51
Simple Reflex Agent
• The Simple reflex agents are the simplest agents. These agents take
decisions on the basis of the current percepts and ignore the rest of
the percept history.
• These agents only succeed in the fully observable environment.
• The Simple reflex agent does not consider any part of percepts history
during their decision and action process.
• The Simple reflex agent works on the condition–action rule, which means it
maps the current state to an action. For example, a room-cleaner agent
works only if there is dirt in the room.
• Problems for the simple reflex agent design approach:
• They have very limited intelligence
• They do not have knowledge of non-perceptual parts of the
current state
• Mostly too big to generate and to store.
• Not adaptive to changes in the environment.
52
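A minimal Python sketch of a simple reflex agent for the room-cleaner example; the condition–action rules and percept format are illustrative assumptions:

```python
# A simple reflex agent: looks only at the current percept and applies
# condition-action rules. The room-cleaner rules below are illustrative.

rules = [
    (lambda percept: percept["status"] == "Dirty", "Suck"),
    (lambda percept: percept["location"] == "A",  "MoveRight"),
    (lambda percept: percept["location"] == "B",  "MoveLeft"),
]

def simple_reflex_agent(percept):
    """Returns the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "NoOp"

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # Suck
print(simple_reflex_agent({"location": "B", "status": "Clean"}))  # MoveLeft
```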
Simple Reflex Agent
53
Example
Model-based reflex agent
The Model-based agent can work in a partially observable
environment, and track the situation.
A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in
the world," so it is called a Model-based agent.
• Internal State: It is a representation of the current
state based on percept history.
These agents have the model, "which is knowledge of the
world" and based on the model they perform actions.
Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
55
Model-based reflex agent
56
For the braking problem, the internal state is
not too extensive— just the previous frame
from the camera, allowing the agent to detect
when two red lights at the edge of the vehicle
go on or off simultaneously.
For other driving tasks such as changing lanes,
the agent needs to keep track of where the
other cars are if it can’t see them all at once.
And for any driving to be possible at all, the
agent needs to keep track of where its keys are.
Updating this internal state information as time goes by
requires two kinds of knowledge to be encoded in the agent
program.
First, we need some information about how the world evolves
independently of the agent—for example, that an overtaking
car generally will be closer behind than it was a moment ago.
Second, we need some information about how the agent’s own
actions affect the world—for example, that when the agent
turns the steering wheel clockwise, the car turns to the right, or
that after driving for five minutes northbound on the freeway,
one is usually about five miles north of where one was five
minutes ago.
This knowledge about “how the world works”—
whether implemented in simple Boolean circuits or in
complete scientific theories—is called a model of the
world. An agent that uses such a model is called a
model-based agent
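A minimal Python sketch of a model-based reflex agent; the internal-state attributes and rules are illustrative assumptions, and a full model would also use the previous state and the last action:

```python
# A sketch of a model-based reflex agent: it maintains an internal state
# and chooses actions from that state rather than from the raw percept.
# All attribute names and rules are invented for illustration.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}          # internal description of the current world state
        self.last_action = None

    def update_state(self, percept):
        # A full model would combine: (1) how the world evolves on its own,
        # (2) how self.last_action changed it, and (3) the new percept.
        # Here we simply record what the percept tells us.
        self.state["location"] = percept["location"]
        self.state["dirt_here"] = (percept["status"] == "Dirty")

    def choose_action(self):
        if self.state.get("dirt_here"):
            return "Suck"
        return "MoveRight" if self.state.get("location") == "A" else "MoveLeft"

    def __call__(self, percept):
        self.update_state(percept)
        self.last_action = self.choose_action()
        return self.last_action

agent = ModelBasedReflexAgent()
print(agent({"location": "A", "status": "Dirty"}))   # Suck
print(agent({"location": "A", "status": "Clean"}))   # MoveRight
```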
Goal-based agents
• Knowledge of the current state of the environment is not
always sufficient for an agent to decide what to do.
• The agent needs to know its goal which describes desirable
situations.
• Goal-based agents expand the capabilities of the model-
based agent by having the "goal" information.
• They choose an action, so that they can achieve the goal.
• These agents may have to consider a long sequence of
possible actions before deciding whether the goal is
achieved or not. Such consideration of different scenarios
is called searching and planning, which makes an agent
proactive.
61
Goal-based agents
62
Sometimes goal-based action selection is straightforward –
for example, when goal satisfaction results immediately
from a single action.
Sometimes it will be trickier –
for example, when the agent has to consider long
sequences of twists and turns to find a way to achieve
the goal.
Search and planning are the subfields of AI devoted to
finding action sequences that achieve the agent’s goals.
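A minimal sketch of goal-based action selection as search: breadth-first search for a sequence of moves that reaches the goal. The tiny road map below is an invented example:

```python
# Goal-based action selection as search: find a sequence of actions
# (here, drives between locations) that reaches the goal state.
# The road map is invented for illustration.

from collections import deque

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def plan(start, goal):
    """Breadth-first search for a path of locations from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan("A", "D"))    # ['A', 'B', 'D'] -- the agent acts so as to achieve the goal
```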
Reflex agent vs Goal-based agent

• Flexibility:
  – Reflex agent: if conditions change (e.g., it starts to rain), we would have to rewrite many
    condition–action rules.
  – Goal-based agent: although it appears less efficient, it is more flexible because the knowledge
    that supports its decisions is represented explicitly and can be modified. If it starts to rain, the
    agent can update its knowledge of how effectively its brakes will operate; this will automatically
    cause all of the relevant behaviors to be altered to suit the new conditions.
• Changing the destination:
  – Reflex agent: its rules for when to turn and when to go straight will work only for a single
    destination; they must all be replaced to go somewhere new.
  – Goal-based agent: its behavior can easily be changed to go to a different destination, simply by
    specifying that destination as the goal.
• Example:
  – Reflex agent: brakes when it sees brake lights.
  – Goal-based agent: in principle, could reason that if the car in front has its brake lights on, it will
    slow down.
Utility-based agents
• These agents are similar to the goal-based agent but
provide an extra component of utility measurement which
makes them different by providing a measure of success at
a given state.
• Utility-based agents act based not only on goals but also on the
best way to achieve the goal.
• The Utility-based agent is useful when there are multiple
possible alternatives, and an agent has to choose in order
to perform the best action.
• The utility function maps each state to a real number to
check how efficiently each action achieves the goals.
65
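A minimal sketch of utility-based choice; the states, routes and utility numbers are invented purely for illustration:

```python
# Utility-based action selection: when several actions reach the goal,
# use a utility function (a mapping from states to real numbers) to pick
# the best one. Routes and numbers below are invented.

def utility(state):
    # Higher is better: prefer faster and cheaper trips.
    return -state["time_minutes"] - 10 * state["toll_cost"]

def result(state, action):
    """A made-up transition model: the state each route leads to
    (the current state is ignored in this toy example)."""
    outcomes = {
        "highway":  {"time_minutes": 20, "toll_cost": 3},
        "backroad": {"time_minutes": 35, "toll_cost": 0},
    }
    return outcomes[action]

def utility_based_choice(state, actions):
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_choice({}, ["highway", "backroad"]))   # 'backroad' with these numbers
```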
Utility-based agents
66
Learning Agents
• A learning agent in AI is the type of agent that can learn from its past
experiences; that is, it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt
automatically through learning.
• A learning agent has mainly four conceptual components, which are:
• Learning element: It is responsible for making improvements by learning
from environment
• Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
• Performance element: It is responsible for selecting external action
• Problem generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze performance, and look for
new ways to improve the performance.
67
Learning Agents
68
• A learning agent can be divided into four conceptual
components, as shown in Fig.
• The most important distinction is between the learning
element, which is responsible for making improvements, and
the performance element, which is responsible for
selecting external actions.
• The performance element is what we have previously
considered to be the entire agent: it takes in percepts and
decides on actions.
• The learning element uses feedback from the critic
on how the agent is doing and determines how the
performance element should be modified to do better in the
future.
• The last component of the learning agent is the problem
generator. It is responsible for suggesting actions that will
lead to new and informative experiences.
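The four components can be wired together in a small sketch. Everything below (the braking-distance task, the numbers, the update rule) is invented purely to show how critic feedback drives the learning element:

```python
# Sketch of a learning agent: performance element acts, critic scores the
# action against a fixed standard, learning element adjusts the knowledge,
# and the problem generator proposes new experiences. All values invented.

import random

class LearningAgent:
    def __init__(self):
        self.threshold = 10.0      # knowledge used by the performance element

    def performance_element(self, distance):
        """Selects the external action using current knowledge."""
        return "brake" if distance < self.threshold else "cruise"

    def critic(self, distance, action):
        """Scores the action against a fixed performance standard."""
        correct = "brake" if distance < 12.0 else "cruise"
        return 1.0 if action == correct else -1.0

    def learning_element(self, distance, action, feedback):
        """Adjusts the performance element's knowledge when feedback is bad."""
        if feedback < 0:
            self.threshold += 0.5 if action == "cruise" else -0.5

    def problem_generator(self):
        """Suggests a new, informative experience to try."""
        return random.uniform(5.0, 20.0)

agent = LearningAgent()
for _ in range(200):
    d = agent.problem_generator()
    a = agent.performance_element(d)
    agent.learning_element(d, a, agent.critic(d, a))
print(round(agent.threshold, 1))   # drifts toward the performance standard of 12.0
```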
How the components of
agent programs work
Factored Representation
• Splits up each state into a fixed set of variables or
attributes, each of which can have a value
• Two different factored states can share some attributes
(such as being at some particular GPS location) and not
others (such as having lots of gas or having no gas);
• This makes it much easier to work out how to turn one
state into another.
• With factored representations, we can also represent
uncertainty—for example, ignorance about the amount of
gas in the tank can be represented by leaving that
attribute blank
• Many important areas of AI are based on
factored representations, including
• Constraint satisfaction algorithms
• Propositional logic
• planning
• Bayesian networks
• machine learning algorithms
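A minimal sketch of a factored state as a fixed set of attribute–value pairs, with None marking an unknown value; the attribute names are illustrative assumptions:

```python
# Factored representation: each state is a fixed set of attributes with values.
# None marks an attribute whose value is unknown (a simple way to represent
# uncertainty). Attribute names are invented for illustration.

state_a = {"gps": (12.97, 77.59), "fuel_level": "full", "doors_locked": True}
state_b = {"gps": (12.97, 77.59), "fuel_level": None,   "doors_locked": False}

def differing_attributes(s1, s2):
    """Attributes that must change to turn one state into the other
    (shared attributes, like the GPS location here, need no work)."""
    return [k for k in s1 if s1[k] != s2[k]]

print(differing_attributes(state_a, state_b))   # ['fuel_level', 'doors_locked']
```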
Thank you!
73