
MODULE 1

[INTRODUCTION TO ARTIFICIAL INTELLIGENCE AND PROBLEM-SOLVING]

1.1 Artificial Intelligence:

AI can be defined and understood along two dimensions:

 Thinking vs. Acting

 Human-like vs. Rational (ideal)

Dimension    Human-like         Rational (ideal)
Thinking     Thinking Humanly   Thinking Rationally
Acting       Acting Humanly     Acting Rationally

Four Approaches to AI:

1. Thinking Humanly (Cognitive Modeling)

 Goal: Model how humans think using computational methods.

 Techniques:

o Introspection (self-observation)

o Psychological experiments

o Brain imaging

 Example: Newell and Simon’s General Problem Solver (GPS)

 Related Field: Cognitive Science

Definition:

"The study of mental faculties through the use of computational models." — Charniak &
McDermott (1985)

2. Thinking Rationally (Laws of Thought)

 Goal: Use logic to emulate correct reasoning.

 Basis: Aristotle’s syllogisms, formal logic.


 Challenge:

o Translating informal knowledge to formal logic.

o Computational limits (combinatorial explosion).

 Known as: Logicist approach

Definition:

"The study of the computations that make it possible to perceive, reason, and act." —
Winston (1992)

3. Acting Humanly (Turing Test)

 Goal: Make machines behave like humans.

 Test: Alan Turing’s Turing Test (1950)

o Pass if a human judge cannot distinguish between machine and human.

 Requirements for passing:

o Natural Language Processing (NLP)

o Knowledge Representation

o Automated Reasoning

o Machine Learning

 Extended Test (Total Turing Test):

o Adds Computer Vision and Robotics

Definition:

"The art of creating machines that perform functions that require intelligence when
performed by people." — Kurzweil (1990)

4. Acting Rationally (Rational Agent)

 Goal: Design agents that do the right thing to maximize goal achievement.

 Agent: Any entity that perceives and acts.

 Rational Agent:
o Acts to maximize expected outcomes.

o Uses reasoning, perception, learning, and planning.

 Advantage:

o Mathematically defined.

o Applicable beyond human-like behavior.

o More general and scientifically robust.

Definition:

"Computational Intelligence is the study of the design of intelligent agents." — Poole et al.
(1998)

Key Definitions

 Artificial Intelligence (AI): The science and engineering of making intelligent machines that can simulate human thinking and/or act rationally.

 Turing Test: A test to evaluate a machine's ability to exhibit intelligent behavior indistinguishable from a human.

 Total Turing Test: An extended version of the Turing Test that includes sensory and physical interaction capabilities.

 Agent: An entity that perceives its environment and acts upon it.

 Rational Agent: An agent that acts to achieve the best expected outcome, given its knowledge and perception.

 Syllogism: A form of logical reasoning where a conclusion is inferred from two premises (e.g., All men are mortal; Socrates is a man; therefore, Socrates is mortal).

 Logicist Tradition: Building AI systems using formal logic and inference rules.

 Limited Rationality: In real-world settings, perfect rationality is impractical due to computational limits. Instead, agents act reasonably well with limited resources.

1.2 Problems of Artificial Intelligence:

Foundational Challenges in Understanding Intelligence


 Intelligence ≠ perfect understanding; limited by perception, memory, computation.

 AI aims to replicate intelligent behavior across a spectrum (from insects to humans).

 Key aspects of intelligence studied in AI:

o Perception

o Communication (language use)

o Reasoning

o Planning

o Learning

o Memory

Preliminary Questions Before Advancing in AI

1. What are the underlying assumptions about intelligence?

2. What kinds of techniques are useful for solving AI problems?

3. At what level can human intelligence be modeled?

4. When can we say that an intelligent program has been built?

Key Problems in AI with Challenges & Solutions

1. Lack of Transparency (Black Box Problem)

 Challenge: AI decisions are hard to explain; this is especially problematic in critical fields like healthcare, finance, and law.

 Solution:

o Invest in Explainable AI (XAI).

o Improve interpretability and model transparency.

o Ensure regulatory compliance and build trust.

“Lack of transparency in AI models can erode trust and make it harder for organizations to
comply with regulations.”

2. Data Privacy Concerns


 Challenge: AI systems require vast personal data, risking privacy breaches and
regulatory violations (e.g., GDPR).

 Solution:

o Adopt data governance frameworks.

o Use encryption, anonymization, and privacy-preserving AI (e.g., federated learning).

3. Bias in AI Models

 Challenge: AI trained on biased data may perpetuate inequality (e.g., in hiring, lending, law enforcement).

 Solution:

o Use diverse datasets.

o Implement bias detection and fairness-aware algorithms.

o Conduct regular audits of models and data sources.

“Bias in AI models is a critical issue that can have far-reaching implications for fairness and
equality.”

4. High Demand for AI Skills

 Challenge: Shortage of AI talent (data scientists, ML engineers) hampers implementation and innovation.

 Solution:

o Invest in upskilling/reskilling programs.

o Partner with universities and use AI-as-a-Service (AIaaS) platforms.

5. Integration with Legacy Systems

 Challenge: Older IT systems are incompatible with modern AI tools, delaying AI adoption.
 Solution:

o Use hybrid models.

o Employ APIs, cloud platforms, and middleware to bridge old and new
systems.

6. Ethics of AI Usage

 Challenge: Unethical AI use can lead to human rights violations, bias, loss of trust.

 Solution:

o Develop and implement AI ethics guidelines.

o Set up ethical review committees and ensure transparent AI decision-making.

7. High Costs of AI Implementation

 Challenge: Infrastructure, talent, and data management costs are high—especially for
SMEs.

 Solution:

o Start with small-scale AI pilot projects.

o Use cloud-based AI services to reduce upfront costs.

8. Physical Intelligence Challenges

 Challenge: Robotics and physical AI must handle unpredictable environments (e.g., autonomous vehicles).

 Solution:

o Improve sensor technology and adaptive learning algorithms.

o Integrate machine learning with robotics for real-time adaptation.

9. Resistance to AI Adoption

 Challenge: Fear of job loss and change leads to organizational resistance to AI.
 Solution:

o Use change management strategies.

o Communicate how AI augments rather than replaces human roles.

1.3 Foundations of Artificial Intelligence:

1. Philosophy

Key Questions:

 Can rules lead to correct thinking?

 How does the mind arise from the brain?

 Where does knowledge come from?

 How does knowledge lead to action?

Easy Notes:

 Philosophy is the earliest root of AI.

 It explores reasoning, free will, and how knowledge leads to decisions.

 Rationalism says we understand by reasoning; Empiricism says we learn through the senses.

 Thinkers like Descartes and Aristotle tried to explain how humans think and act
logically.

Key Contributions:

 Aristotle: Created syllogisms (basic logical reasoning).

 Descartes: Proposed the mind is separate from the body (dualism).

 John Locke: Emphasized sensory experience (empiricism).

2. Mathematics

Key Questions:

 What are the rules for correct reasoning?


 What can computers compute?

 How can we reason with uncertainty?

Easy Notes:

 Mathematics helps formalize logic and problem-solving.

 It defines what problems are solvable and how hard they are.

 Introduces algorithms, logic, probability, and computational limits.

 AI uses math to model certainty, probability, and decision-making.

Key Contributions:

 George Boole: Invented Boolean logic (true/false).

 Alan Turing: Defined what computers can compute (Turing machine).

 Thomas Bayes: Introduced Bayesian probability for learning from evidence.

3. Economics

Key Questions:

 How do we make the best decisions?

 What if others affect our outcomes?

 How do we plan for the future?

Easy Notes:

 Economics helps AI in decision-making under uncertainty.

 Introduces concepts like utility (value), choices, and payoffs.

 Used in AI for planning, agent behavior, and optimization.

 Also inspired game theory and Markov decision processes.

Key Contributions:

 John von Neumann: Developed game theory.


 Richard Bellman: Introduced Markov Decision Processes (MDPs).

 Herbert Simon: Proposed satisficing – making good-enough decisions.

4. Neuroscience

Key Question:

 How does the brain process information?

Easy Notes:

 Neuroscience studies how neurons work and how the brain learns.

 Neurons send signals through synapses; their connections change over time to learn.

 AI takes inspiration from brain structure for neural networks.

 Tools like EEG and fMRI help us observe brain activity.

Key Contributions:

 Camillo Golgi & Santiago Cajal: Discovered neuron structure.

 Paul Broca: Found speech is controlled by a specific brain area (Broca’s Area).

 Hans Berger: Invented EEG to study brain activity.

5. Psychology

Key Question:

 How do humans and animals think and behave?

Easy Notes:

 Psychology studies how people perceive, think, and act.

 Early behaviorism focused only on input/output behavior (like a machine).

 Cognitive psychology sees the brain as an information processor.

 Helps AI understand how humans learn and solve problems.

Key Contributions:
 John Watson: Founded behaviorism (focus on external behavior).

 Kenneth Craik: Proposed humans have mental models to plan actions.

 Newell & Simon: Created cognitive models with computers.

6. Computer Engineering

Key Question:

 How can we build fast and powerful machines?

Easy Notes:

 AI needs a physical machine (the computer) to run.

 Early computers (ENIAC, Z3) helped automate calculations.

 Improvements in hardware make it possible to run complex AI models.

 AI also improved software tools like programming languages and data structures.

Key Contributions:

 Charles Babbage: Designed the Analytical Engine, the first design for a programmable computer.

 Ada Lovelace: First computer programmer.

 Alan Turing: Laid the theoretical groundwork for computing machinery and wrote one of the earliest chess-playing programs.

7. Control Theory & Cybernetics

Key Question:

 How can machines control themselves?

Easy Notes:

 Focuses on feedback systems that adjust behavior (like thermostats).

 Machines aim to reduce error between goal and current state.

 Inspired AI ideas like autonomous agents and adaptive behavior.

Key Contributions:
 Norbert Wiener: Founded cybernetics – feedback-based control systems.

 W. Ross Ashby: Described how systems stay stable (homeostasis).

 James Watt: Built governor to regulate steam engine speed.

8. Linguistics

Key Question:

 How does language connect with thinking?

Easy Notes:

 AI needs to understand human language (NLP).

 Early theories struggled with language creativity.

 Chomsky showed how grammar rules explain sentence structure.

 Understanding language also requires knowledge and context.

Key Contributions:

 Noam Chomsky: Developed formal grammar theories.

 Panini (ancient India): Early grammar rules.

 Computational Linguistics: Field combining AI + language.

Summary Table

Field                  What it studies                        Key Contribution Example
Philosophy             Logic, knowledge, action               Aristotle, Descartes
Mathematics            Logic, computation, uncertainty        Turing, Bayes, Boole
Economics              Decision-making and rational choices   Von Neumann, Bellman
Neuroscience           Brain structure and learning           Broca, Golgi, Cajal
Psychology             Thinking, behavior, mental modeling    Watson, Craik, Newell & Simon
Computer Engineering   Computers for intelligent programs     Babbage, Lovelace, Turing
Control Theory         Self-regulating systems                Wiener, Ashby
Linguistics            Language structure and meaning         Chomsky, Panini

1.4 History of Artificial Intelligence:

1. Gestation Period (1943–1955)


 Start of AI concepts based on neuroscience, logic, and computation.
 McCulloch & Pitts (1943): Modeled neurons with logical functions.
 Donald Hebb (1949): Introduced Hebbian learning—“neurons that fire together
wire together.”
 Marvin Minsky & Dean Edmonds (1951): Built SNARC, first neural network
computer.
 Alan Turing (1950):
o Proposed the Turing Test.
o Introduced concepts of machine learning and reinforcement learning.
o Suggested the idea of the Child Machine – learning from experience.

2. Birth of AI (1956)
 Dartmouth Conference: Organized by McCarthy, Minsky, Shannon, and Rochester.
 Coined the term "Artificial Intelligence".
 Aim: Simulate all aspects of human intelligence in machines.
 Logic Theorist by Newell & Simon amazed participants—proved mathematical
theorems.
 Marked the official launch of AI as a separate research field.

3. Early Enthusiasm (1952–1969)


 AI systems showed early success in solving puzzles and logic problems.
 General Problem Solver (GPS) mimicked human problem-solving steps.
 Physical Symbol System Hypothesis: Intelligence involves symbol manipulation.
 Checkers-playing AI by Arthur Samuel learned through self-play.
 LISP language (1958) by McCarthy: First AI-specific programming language.
 Microworlds used to simulate small, manageable AI environments (e.g., SHRDLU,
SAINT, STUDENT).

4. Reality Check (1966–1973)


 Predictions of rapid AI progress proved overoptimistic.
 Machine Translation failed due to lack of semantic understanding.
 Lighthill Report (1973) criticized AI’s lack of practical results → UK cut funding.
 AI struggled with combinatorial explosion (too many possible paths).
 Perceptrons shown to be mathematically limited (Minsky & Papert).
 AI Winter began—drop in funding and interest.

5. Knowledge-Based Systems (1969–1979)


 Shift to domain-specific knowledge systems instead of general-purpose solvers.
 DENDRAL: Identified molecular structures using expert rules.
 MYCIN: Diagnosed blood infections better than junior doctors.
 Emphasized rules, certainty factors, and expert input.
 Gave rise to the term Expert Systems.
 Sparked work on natural language understanding using real-world context.

6. AI Becomes an Industry (1980–Present)


 R1 system at DEC saved $40 million by automating computer configuration.
 Thousands of expert systems used in companies by the late 1980s.
 AI Boom fueled by demand across industries (finance, manufacturing, etc.).
 Fifth Generation Project (Japan) aimed to create intelligent machines using Prolog.
 AI Winter struck again due to unrealistic promises and lack of delivery.

7. Return of Neural Networks (1986–Present)


 Backpropagation rediscovered and used to train multilayer networks.
 Rumelhart & McClelland (1986) reignited interest with Parallel Distributed
Processing.
 Rise of connectionist models: modeled brain-like processing.
 Neural nets became useful in speech, vision, and language.
 Sparked debate between symbolic vs. connectionist models—now seen as
complementary.

8. AI Embraces Scientific Method (1987–Present)


 Shift toward rigorous methods, mathematical foundations, and experiments.
 Dominant methods:
o Hidden Markov Models in speech and handwriting recognition.
o Bayesian Networks for uncertainty reasoning (Judea Pearl).
 Machine learning matured as a field.
 Use of shared datasets, reproducible research, and benchmarking became
standard.
 AI reintegrated with statistics, control theory, and optimization.

9. Rise of Intelligent Agents (1995–Present)


 Focus on autonomous, goal-directed agents.
 Architecture: SOAR, BDI (Belief–Desire–Intention).
 Agents embedded in web, robotics, video games, and e-commerce.
 AI applied in:
o Search engines
o Recommendation systems
o Chatbots
 Realization: Systems must handle uncertainty and partial observability.
 Stronger links formed with control theory, robotics, and economics.

10. Big Data Era (2001–Present)


 Emphasis shifted from “best algorithm” to “biggest dataset.”
 Yarowsky (1995): Achieved >96% word-sense accuracy with unlabelled data.
 Banko & Brill (2001): More data often > Better algorithm.
 Image processing improved massively with huge image datasets.
 Knowledge engineering replaced by automated learning from web-scale data.
 Fueled deep learning revolution (vision, NLP, etc.).

11. Human-Level AI & AGI (2004–Present)


 Researchers refocused on Artificial General Intelligence (AGI).
 Goal: Machines that can learn, adapt, and reason like humans.
 Friendly AI explored to ensure safe AI development.
 Kurzweil, Minsky, McCarthy, and others call for focus on creativity, learning,
self-improvement.
 Conferences and journals now dedicated to AGI & HLAI.
1.5 Agent and Environments:

A. Definition of an Agent
An agent is anything that can perceive its environment through sensors and act upon that
environment through actuators.
Key Points:
 Agents interact with an environment.
 They sense and respond based on what they perceive.
 The system includes:
o Input: Percepts (via Sensors)
o Output: Actions (via Actuators)

Main Components of the Agent System Diagram


Component          Description
Environment        The external world where the agent operates.
Sensors            Devices through which the agent perceives its environment.
Percepts           The input data that the agent receives through its sensors at a moment.
Percept Sequence   The entire history of percepts received by the agent up to now.
Actuators          The mechanisms by which the agent affects the environment (outputs).
Actions            The operations or moves performed by the agent in response to percepts.
Agent Function     A mathematical mapping from percept sequence → action.

🔹 Percept
A percept is the agent’s immediate sensory input at a specific point in time.
Example:
 A camera on a robot detecting a red object → "Red object" is a percept.

🔹 Percept Sequence
The percept sequence is the complete history of all percepts the agent has received.
Example:
 [Red object, Wall, Green object] ← sequence of visual inputs.

🤖 Agent Function
Definition:
The agent function maps every possible percept sequence to an action.
Formula:
f(percept_sequence) → action
Example:
 A vacuum cleaner agent receives inputs like "dirty" or "clean".
 The agent function might say:
o If percept = "dirty", then action = "suck".
o If percept = "clean", then action = "move right".

💻Agent Program
Definition:
The agent program is the implementation of the agent function in code (software).
Example:
For the same vacuum cleaner:
def agent_program(percept):
    # Decide the action from the current percept.
    if percept == "dirty":
        return "suck"
    else:
        return "move right"
B. Nature of Environments OR [CHARACTERISTICS OF PROBLEMS]:
Task environments are essentially the “problems” to which rational agents are the “solutions”.
An environment is the world with which the agent interacts. The nature of an environment
has a significant influence on the design and performance of an agent.
Environments can be classified based on several properties:

1. Fully Observable vs. Partially Observable


Fully Observable: the agent’s sensors give access to the complete state of the environment at any time. Example: chess, where all pieces are visible.
Partially Observable: the agent receives incomplete or noisy data about the environment. Example: poker or real-world driving (the agent can’t see everything).

2. Deterministic vs. Stochastic


Deterministic: the next state of the environment is completely determined by the current state and the agent’s action. Example: solving a maze, chess (assuming no hidden elements).
Stochastic: the environment includes randomness or uncertainty. Example: stock market prediction, weather forecasting.

3. Episodic vs. Sequential


Episodic: the agent’s experience is divided into independent episodes; decisions in one do not affect others. Example: image classification, spam email detection.
Sequential: current decisions affect future decisions and states. Example: autonomous driving, video games.
4. Static vs. Dynamic
Static: the environment does not change while the agent is deciding. Example: crossword puzzles, image processing.
Dynamic: the environment can change while the agent thinks. Example: real-time strategy games, self-driving cars.

5. Discrete vs. Continuous


Discrete: the number of possible states or actions is finite. Example: chess (finite moves and positions).
Continuous: the number of possible states or actions is infinite or very large. Example: robot navigation, driving (infinite positions and velocities).

6. Single Agent vs. Multi-Agent


Single Agent: only one agent operates in the environment. Example: solitaire, crossword solving.
Multi-Agent: multiple agents exist and may be cooperative or competitive. Example: a football game, online auction systems, chess with two players.
1.6 Structure of Agents:

• The job of AI is to design an agent program that implements the agent function— the
mapping from percepts to actions.

• This program will run on some sort of computing device with physical sensors and
actuators—we call this the architecture

Agent Program: An agent program is the software implementation that runs on an agent
and determines which action to take based on the percepts received from the environment.
How It Works (Step-by-Step)

1. Inputs a single percept at a time.

2. Stores the full history of percepts (called the percept sequence).

3. Uses a predefined lookup table to decide what action to take based on the complete
percept sequence.

4. Returns the corresponding action.
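The lookup-table idea in these steps can be sketched in a few lines of Python. This is only an illustration: the percept values and table entries below are assumptions, not part of the notes.

# Minimal sketch of a table-driven agent (illustrative percepts and table).
percepts = []  # the percept sequence stored so far

# Hypothetical lookup table: complete percept sequence -> action
table = {
    ("clean",): "move right",
    ("dirty",): "suck",
    ("clean", "dirty"): "suck",
}

def table_driven_agent(percept):
    # Steps 1-2: take one percept and append it to the stored history.
    percepts.append(percept)
    # Steps 3-4: look up the whole sequence and return the matching action.
    return table.get(tuple(percepts), "no-op")

print(table_driven_agent("clean"))  # move right
print(table_driven_agent("dirty"))  # suck

The drawback, visible even in this sketch, is that the table must list every possible percept sequence, which quickly becomes impractical.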

There are five types of agent structures:

• Simple reflex agents;

• Model-based reflex agents;

• Goal-based agents;

• Utility-based agents;

• Learning-based agents.


1.7 Problem solving Agents and Problem Formulation:

A problem-solving agent is a goal-based agent that decides what actions to take by formulating a problem, searching for a solution, and then executing a plan to reach its goal.

Problem-Solving Agent Architecture

Steps:
1. Goal Formulation – Define what the agent wants to achieve.

2. Problem Formulation – Convert the goal into a problem (search problem).

3. Search – Use a search algorithm to find a sequence of actions (plan).

4. Execute – Carry out the actions in the environment.

Structure of a Search Problem

A search problem is defined by:

Initial State: the starting point of the agent. Example (maze): the start cell.
Actions: the set of legal moves/actions. Example (maze): Up, Down, Left, Right.
Transition Model: a description of what happens when an action is taken. Example (maze): moving to a new cell.
Goal Test: a check of whether the current state satisfies the goal. Example (maze): the destination cell has been reached.
Path Cost: the cost associated with a path, used to evaluate solutions. Example (maze): the number of steps taken.
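The same five components can be written down as a small class. The sketch below is a hypothetical maze problem; the class name, grid encoding, and method names are illustrative choices, not a standard API.

class MazeProblem:
    # Toy search problem following the components above (illustrative).
    def __init__(self, grid, start, goal):
        self.grid = grid            # 0 = free cell, 1 = wall
        self.initial_state = start  # Initial State
        self.goal = goal            # used by the Goal Test

    def actions(self, state):
        # Actions: legal moves from the current cell.
        r, c = state
        moves = {"Up": (r - 1, c), "Down": (r + 1, c),
                 "Left": (r, c - 1), "Right": (r, c + 1)}
        return [a for a, (nr, nc) in moves.items()
                if 0 <= nr < len(self.grid) and 0 <= nc < len(self.grid[0])
                and self.grid[nr][nc] == 0]

    def result(self, state, action):
        # Transition Model: the cell reached after taking an action.
        r, c = state
        dr, dc = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}[action]
        return (r + dr, c + dc)

    def goal_test(self, state):
        return state == self.goal   # Goal Test: destination reached?

    def step_cost(self, state, action, next_state):
        return 1                    # Path Cost: one unit per step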

Examples:

A] Vacuum World:
B] 8 Puzzle Problem:
C] 8 Queen Problem:
D] Route Finding Problem:

1.8 AI technique:

1. Search Techniques

Definition:
Used to explore possible actions and find a solution path from the initial state to the goal
state.

Types:

 Uninformed Search (no extra knowledge):

o Examples: Breadth-First Search (BFS), Depth-First Search (DFS)

 Informed Search (uses heuristics):

o Examples: A*, Greedy Best-First Search

Example:
Finding the shortest path from one city to another in a map using GPS.
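As a rough sketch, the GPS example can be reduced to Breadth-First Search over a road map stored as an adjacency list. The map below is made up for illustration; BFS finds the route with the fewest road segments, not necessarily the shortest distance.

from collections import deque

# Hypothetical road map (adjacency list); city names are illustrative.
roads = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def bfs_route(start, goal):
    # Breadth-First Search: explore all cities one hop away, then two hops, and so on.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        city = path[-1]
        if city == goal:
            return path
        for nxt in roads[city]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route exists

print(bfs_route("A", "E"))  # ['A', 'B', 'D', 'E']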

2. Knowledge Representation

Definition:
Storing facts, rules, and relationships in a way that machines can understand and reason with.

Techniques:
 Semantic networks

 Frames

 Ontologies

 Logic (Propositional, First-order)

Example:
An AI doctor system stores symptoms and diseases as facts to diagnose illnesses.

3. Reasoning and Inference

Definition:
Drawing conclusions or making decisions based on known facts or rules.

Types:

 Deductive Reasoning: From general to specific (e.g., All humans are mortal →
Socrates is mortal)

 Inductive Reasoning: From specific to general (e.g., Observing birds fly → All birds
fly)

Example:
If a student scored above 90 in all subjects, infer they are a top performer.

4. Machine Learning

Definition:
Technique that enables systems to learn from data and improve over time without being
explicitly programmed.

Types:

 Supervised Learning: Learns from labeled data

o Example: Email spam detection

 Unsupervised Learning: Finds patterns in unlabeled data


o Example: Customer segmentation

 Reinforcement Learning: Learns by trial-and-error using rewards

o Example: Teaching a robot to walk

5. Natural Language Processing (NLP)

Definition:
Enabling machines to understand, interpret, and generate human language.

Applications:

 Chatbots

 Translation

 Sentiment analysis

Example:
Google Translate converts English to Tamil automatically using NLP.

6. Computer Vision

Definition:
AI technique to make machines see and interpret visual data like images or videos.

Applications:

 Face recognition

 Medical image analysis

 Object detection

Example:
A smartphone camera detecting faces for auto-focus.

7. Robotics
Definition:
Combines AI with physical hardware to create intelligent machines that act in the physical
world.

Functions:

 Navigation

 Object manipulation

 Human interaction

Example:
A robotic vacuum cleaner navigating and cleaning rooms.

8. Expert Systems

Definition:
AI programs that mimic the decision-making ability of a human expert in a specific domain.

Components:

 Knowledge base (facts and rules)

 Inference engine (logic to apply rules)

Example:
MYCIN, an expert system that helps doctors diagnose bacterial infections.

🧠 Summary Table

AI Technique               What it Does                        Example
Search                     Finds solutions using paths         Shortest route in Google Maps
Knowledge Representation   Stores and organizes facts          Disease-symptom database
Reasoning                  Derives new info from known facts   If it's raining, take an umbrella
Machine Learning           Learns from data                    Netflix movie recommendations
NLP                        Understands human language          Siri answering questions
Computer Vision            Interprets visual information       Face ID on phones
Robotics                   Acts physically in the real world   Self-driving cars
Expert Systems             Makes expert-level decisions        AI medical diagnosis

1.9 Problem Types and Characteristics:

What is a Problem in AI?

A problem in AI is a formal representation of a task where an agent needs to find a sequence of actions to achieve a goal from an initial state.

To solve it, we define:

 Initial State

 Goal State

 Actions

 Transition Model

 Path Cost

🔍 I. Types of AI Problems

1. Single-State Problem (Deterministic)

 The outcome of actions is completely predictable.

 The agent knows exactly what state it is in.

Example:
Solving a maze where there are no surprises and you know the full layout.
2. Multiple-State Problem (Nondeterministic)

 The agent may end up in one of many states after performing an action.

 Requires contingency planning.

Example:
Robot navigation with slippery surfaces – one move may result in multiple outcomes.

3. Contingency Problem

 The agent must plan for uncertainty using sensors and if–then rules.

Example:
A delivery drone rerouting due to sudden rain or wind.

4. Exploration Problem

 The agent must explore and discover the environment.

 Initially has no map or knowledge.

Example:
A robot exploring an unknown planet.

📚 II. Characteristics of AI Problems

Characteristic          Types                        Examples
Observability           Fully / Partially            Chess / Poker
Determinism             Deterministic / Stochastic   Puzzle / Stock market
Dynamics                Static / Dynamic             Sudoku / Self-driving
Discreteness            Discrete / Continuous        Chess / Robotic arm
Episode Dependency      Episodic / Sequential        Face detection / Chess
Environment Knowledge   Known / Unknown              GPS with map / Mars rover

1.10 State Space Search


Definition
State space search is a process in which an AI agent searches through a set of possible
states (the state space) to find a path from an initial state to a goal state.

🧠 Key Concepts
State: a specific situation or configuration. Example: a position of a chess board.
Initial State: the starting point of the search. Example: the start point in a maze.
Goal State: a desired state that satisfies the problem objective. Example: finding the exit of the maze.
Actions: possible steps or moves that can be taken. Example: move left, right, up, or down.
Transition Model: rules that define the result of applying actions to states. Example: moving from one square to another.
Path: a sequence of states resulting from a sequence of actions. Example: the path traced from start to end.
Path Cost: a numerical cost associated with a path, used to evaluate solutions. Example: the number of moves taken.

🧭 Search Tree vs State Space Graph


 Search Tree:
A tree where nodes represent states and edges represent actions.
 State Space Graph:
A more compact representation that avoids redundant states. It shows all unique
states and transitions.

🔁 Types of State Space Search


1. Uninformed (Blind) Search
No additional information is used except the problem definition.
Breadth-First Search: explores all nodes at a level before going deeper. Example use: finding the shortest path in a maze.
Depth-First Search: explores one branch deeply before backtracking. Example use: navigating file directories.
Uniform Cost Search: expands the least-cost path node first. Example use: pathfinding with travel costs.

2. Informed (Heuristic) Search


Uses additional knowledge (heuristics) to guide the search.
Greedy Best-First: uses a heuristic to estimate closeness to the goal. Example use: route finding using straight-line distance.
A*: combines path cost and heuristic, f(n) = g(n) + h(n). Example use: efficient optimal path search.

📌 Properties to Compare Search Algorithms


 Completeness – Does it always find a solution?
 Optimality – Is the solution the best one (lowest cost)?
 Time Complexity – How long does it take?
 Space Complexity – How much memory is needed?

🧩 Example Problem: 8-Puzzle


 Initial state: 3x3 tile configuration
 Actions: Move empty tile (left, right, up, down)
 Goal state: Tiles arranged in numerical order
 Search method: A* with Manhattan distance as heuristic
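A minimal sketch of this setup is shown below, assuming the board is encoded as a tuple of nine numbers with 0 for the blank (an encoding chosen here for illustration). It computes the Manhattan-distance heuristic h(n) and expands states in order of f(n) = g(n) + h(n).

import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # assumed goal layout; 0 is the blank

def manhattan(state):
    # h(n): total horizontal + vertical distance of each tile from its goal cell.
    dist = 0
    for i, tile in enumerate(state):
        if tile != 0:
            g = GOAL.index(tile)
            dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist

def neighbors(state):
    # States reachable by sliding one tile into the blank.
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start):
    # A*: always expand the state with the lowest f(n) = g(n) + h(n).
    frontier = [(manhattan(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt, path + [nxt]))
    return None

start = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # two moves away from the goal
print(len(astar(start)) - 1)          # 2 moves in the optimal solution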

🔄 General Search Algorithm Steps


1. Initialize: Start with initial state in the queue.
2. Loop until solution found or no more nodes:
o Remove a node from the queue
o Test for goal state
o Expand node (apply actions)
o Add resulting states to the queue
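These four steps can be written as one generic loop whose behavior depends only on how the frontier is ordered. The sketch below reuses the illustrative MazeProblem interface from section 1.7 (initial_state, actions, result, goal_test) and is an assumption-based outline, not a fixed algorithmic API.

from collections import deque

def general_search(problem, use_stack=False):
    # Generic loop: a FIFO frontier gives BFS, a LIFO frontier gives DFS.
    frontier = deque([(problem.initial_state, [])])   # (state, actions taken so far)
    explored = set()
    while frontier:
        state, plan = frontier.pop() if use_stack else frontier.popleft()
        if problem.goal_test(state):                   # test for goal state
            return plan
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):          # expand the node
            nxt = problem.result(state, action)
            frontier.append((nxt, plan + [action]))    # add resulting states to the frontier
    return None  # no solution found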
1.11 Production Systems and its characteristics
What is a Production System?
A production system is a model of computation used in Artificial Intelligence to simulate
intelligent behavior. It consists of a set of rules (productions) and a database (working
memory).
A production system applies rules to data to produce new data, mimicking intelligent
decision-making.

🧱 Components of a Production System


Working Memory: a database that stores the current state of the system (facts or conditions). Example: At(Home), Have(Key).
Production Rules: a set of condition–action rules ("If condition, then action"). Example: IF At(Home) AND Have(Key) THEN GoTo(Car).
Control System: determines which rule to apply when multiple are applicable (conflict resolution). Example: a rule-ordering strategy.

🔁 How it Works (Cycle)


1. Match: Identify all rules whose conditions match the current state in working
memory.
2. Conflict Resolution: If multiple rules match, choose one using a strategy.
3. Execute: Apply the selected rule to update the working memory.
4. Repeat until the goal is achieved.

🔍 Types of Production Systems


Monotonic: rules do not delete or contradict previously derived facts. Example: logic systems, theorem proving.
Non-Monotonic: rules may delete or modify facts; knowledge can change. Example: expert systems that adapt to changes.
Deterministic: given an initial state, produces the same output every time. Example: a calculator program.
Non-Deterministic: may produce different outputs for the same input (uses search). Example: puzzle solvers, AI planning systems.

⭐ Characteristics of Production Systems


Modularity: knowledge is encoded in small, independent rules that are easy to update or add.
Declarative Nature: rules state what to do, not how.
Flexibility: can be adapted to many domains by changing rules.
Transparency: the decision-making process is easy to trace and explain (rule by rule).
Conflict Resolution: a built-in strategy decides among multiple applicable rules.
Forward or Backward: can work in forward chaining (data-driven) or backward chaining (goal-driven).

🔗 Example: Production System for Diagnosing a Fever


Working Memory:
Symptom(Fever)
Symptom(Cough)
Rules:
IF Symptom(Fever) AND Symptom(Cough) THEN Diagnose(Flu)
IF Symptom(Fever) AND NOT Symptom(Cough) THEN Diagnose(Dengue)
Execution:
 Rule 1 is matched.
 System executes: Diagnose(Flu)
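A minimal forward-chaining sketch of this fever example is shown below; encoding each rule as (positive conditions, negated conditions, conclusion) is an assumption made for illustration.

# Working memory: the facts currently known.
working_memory = {"Symptom(Fever)", "Symptom(Cough)"}

# Production rules as (positive conditions, negated conditions, conclusion).
rules = [
    ({"Symptom(Fever)", "Symptom(Cough)"}, set(), "Diagnose(Flu)"),
    ({"Symptom(Fever)"}, {"Symptom(Cough)"}, "Diagnose(Dengue)"),
]

def forward_chain(memory, rules):
    # Match -> conflict resolution (first match wins) -> execute, until no rule fires.
    fired = True
    while fired:
        fired = False
        for positive, negated, conclusion in rules:              # Match phase
            if positive <= memory and not (negated & memory) and conclusion not in memory:
                memory.add(conclusion)                            # Execute: assert the new fact
                fired = True
                break                                             # simple conflict resolution
    return memory

print(forward_chain(working_memory, rules))
# {'Symptom(Fever)', 'Symptom(Cough)', 'Diagnose(Flu)'}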

🔄 Forward vs. Backward Chaining


Forward Chaining: starts from facts, applies rules, and reaches a conclusion. Use case: data-driven reasoning.
Backward Chaining: starts from the goal and looks for supporting facts/rules. Use case: goal-driven reasoning.
1.12 Applications of AI:
Healthcare: disease diagnosis, treatment planning, drug discovery, robotic surgeries. Example: IBM Watson for cancer diagnosis.
Finance: fraud detection, credit scoring, algorithmic trading. Example: AI detecting suspicious credit card use.
Education: personalized learning, grading automation, virtual tutors. Example: Duolingo's adaptive learning.
Retail & E-commerce: product recommendation, customer service, inventory management. Example: Amazon product recommendations.
Transportation: self-driving cars, traffic forecasting, route optimization. Example: Tesla Autopilot.
Manufacturing: predictive maintenance, quality control, robotic automation. Example: AI robots inspecting products.
Agriculture: crop health monitoring, yield prediction, pest control. Example: AI drones for field analysis.
Cybersecurity: threat detection, anomaly detection, real-time monitoring. Example: Darktrace for cyber threat prevention.
Entertainment & Media: content curation, interactive gaming, media creation. Example: Netflix show suggestions.
Smart Assistants: voice control, automation, smart home interaction. Example: Siri, Alexa, Google Assistant.
