AI Manual Without Code

The document outlines various experiments related to artificial intelligence, focusing on agent architectures, state space formulation, Prolog programming, and search algorithms. It discusses different types of agents, their applications, and the implementation of breadth-first and depth-first search algorithms. Each experiment concludes with a summary of the findings and the effectiveness of the methods used.

Uploaded by

Nachiket Patil
Copyright: © All Rights Reserved

EXPERIMENT NO.

1
Date of Performance :
Date of Submission :

AIM: Discuss suitable Agent Architecture for the problem.


THEORY:
1.1 Specifying Task Environment
In artificial intelligence, an agent is a computer program or system that is designed to perceive
its environment, make decisions and take actions to achieve a specific goal or set of goals. The
agent operates autonomously, meaning it is not directly controlled by a human operator.
Agents can be classified into different types based on their characteristics, such as whether they
are reactive or proactive, whether they have a fixed or dynamic environment, and whether they
are single or multi-agent systems.
1.2 Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability:
● Simple Reflex Agents
● Goal-Based Agents
● Learning Agent
● Multi-agent systems
● Hierarchical agents
1.2.1 Simple Reflex Agents :
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current
percept. Percept history is the history of all that an agent has perceived to date. The agent
function is based on the condition-action rule. A condition-action rule is a rule that maps a state
i.e., a condition to an action. If the condition is true, then the action is taken, else not.
Problems with Simple reflex agents are :
● Very limited intelligence.
● No knowledge of non-perceptual parts of the state.
● The condition-action rule table is usually too big to generate and store.
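The condition-action mapping can be made concrete with a small sketch. The following Python fragment is an illustration of ours, not part of the manual; the two-square vacuum-world percepts and action names are assumptions:

```python
# A condition-action rule table for a two-square vacuum world (assumed
# example): the action depends only on the current percept, never on
# the percept history.
rules = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def reflex_agent(percept):
    # If the condition (the percept) matches a rule, take the mapped action.
    return rules[percept]

print(reflex_agent(("A", "dirty")))  # -> suck
```

Note how the table grows with the number of distinguishable percepts, which is exactly the storage problem listed above.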

1.2.2 Goal-Based Agents :


These kinds of agents take decisions based on how far they are currently from their
goal(description of desirable situations). Their every action is intended to reduce their distance
from the goal. This allows the agent a way to choose among multiple possibilities, selecting the
one which reaches a goal state.
1.2.3 Learning Agent :
A learning agent in AI is a type of agent that can learn from its past experiences, i.e. it has
learning capabilities. It starts to act with basic knowledge and is then able to act and adapt
automatically through learning. A learning agent has four main conceptual components, which
are:
● Learning element: It is responsible for making improvements by learning from the
environment.
● Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
● Performance element: It is responsible for selecting external action.
● Problem Generator: This component is responsible for suggesting actions that will lead to
new and informative experiences.

1.2.4 Multi-Agent Systems :


These agents interact with other agents to achieve a common goal. They may have to coordinate
their actions and communicate with each other to achieve their objective.
A multi-agent system (MAS) is a system composed of multiple interacting agents that are
designed to work together to achieve a common goal. These agents may be autonomous or semi-
autonomous and are capable of perceiving their environment, making decisions, and taking
action to achieve the common objective.
1.2.5 Hierarchical Agents :
These agents are organized into a hierarchy, with high-level agents overseeing the behavior of
lower-level agents. The high-level agents provide goals and constraints, while the low-level
agents carry out specific tasks. Hierarchical agents are useful in complex environments with
many tasks and sub-tasks.
Hierarchical agents are agents that are organized into a hierarchy, with high-level agents
overseeing the behavior of lower-level agents. The high-level agents provide goals and
constraints, while the low-level agents carry out specific tasks. This structure allows for more
efficient and organized decision-making in complex environments.
Hierarchical agents can be implemented in a variety of applications, including robotics,
manufacturing, and transportation systems. They are particularly useful in environments where
there are many tasks and sub-tasks that need to be coordinated and prioritized.
1.3 Uses Of Agents :
Agents are used in a wide range of applications in artificial intelligence, including:
● Robotics: Agents can be used to control robots and automate tasks in manufacturing,
transportation, and other industries.
● Smart homes and buildings: Agents can be used to control heating, lighting, and other
systems in smart homes and buildings, optimizing energy use and improving comfort.
● Transportation systems: Agents can be used to manage traffic flow, optimize routes for
autonomous vehicles, and improve logistics and supply chain management.
● Healthcare: Agents can be used to monitor patients, provide personalized treatment plans,
and optimize healthcare resource allocation.
● Finance: Agents can be used for automated trading, fraud detection, and risk management
in the financial industry.
CONCLUSION: We have discussed a suitable agent architecture for the problem.

SIGN AND REMARK:

DATE:

R1 (3 Marks)   R2 (4 Marks)   R3 (4 Marks)   R4 (4 Marks)   Total (15 Marks)   Signature
EXPERIMENT NO. 2
Date of Performance:

Date of Submission:

AIM: Assignments on State space formulation and PEAS representation for various AI

applications.

THEORY:

1.1 Specifying Task Environment


Task environments are essentially the "problems" to which rational agents are the
"solutions". In designing an agent, the first step must always be to specify the task
environment as fully as possible, with the help of a PEAS (Performance, Environment,
Actuators, Sensors) description.
1.2 Example of PEAS
Problem Definition:
•PEAS(Performance, Environment, Actuators, Sensors)
•First step to design an agent is to specify task environment as fully as possible.
•While designing a rational agent we have to specify the performance measure, the
environment, and agent’s actuators and sensors.
•We group them under the heading of task environment.

PEAS Descriptors:
A goal directed agent needs to achieve certain goals. Such an agent selects its actions based
on the goal it has. Many problems can be represented as a set of states and a set of rules of
how one state is transformed to another. Each state is an abstract representation of the
agent's environment. It is an abstraction that denotes a configuration of the agent.
1.3 Problem Formulation
• Formulate a problem as a state space search by showing the legal problem states, the legal
operators, and the initial and goal states.
• A state is defined by the specification of the values of all attributes of interest in the world
• An operator changes one state into the other; it has a precondition which is the value of
certain attributes prior to the application of the operator, and a set of effects, which are the
attributes altered by the operator
• The initial state is where you start
• The goal state is the partial description of the solution
An initial state is the description of the starting configuration of the agent. An action or an
operator takes the agent from one state to another state, which is called a successor state. A
state can have a number of successor states. A plan is a sequence of actions. The cost of a
plan is referred to as the path cost. The path cost is a positive number.
Problem formulation means choosing a relevant set of states to consider, and a feasible set of
operators for moving from one state to another. Search is the process of considering various
possible sequences of operators applied to the initial state, and finding out a sequence which
culminates in a goal state.
Search Problem is formally described as following:
• S: the full set of states
• s0 : the initial state
• A: S → S is a set of operators
• G is the set of final states. Note that G ⊆ S

The search problem is to find a sequence of actions which transforms the agent from the
initial state to a goal state g ∈ G. A sequence of states is called a path. The cost of a path is a
positive number; in many cases the path cost is computed by taking the sum of the costs of
each action.
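This formulation of operators with preconditions and effects can be sketched in code. The following Python fragment is an illustration of ours (the manual leaves the implementation language open); the two-jug domain and all names are assumptions:

```python
# State: (contents of 4-litre jug, contents of 3-litre jug).  Each
# operator checks a precondition and, if it holds, returns the effect
# as a new state (assumed toy example).
def fill4(s):
    return (4, s[1]) if s[0] < 4 else None           # precondition: jug not full

def pour4to3(s):
    if s[0] > 0 and s[1] < 3:                        # precondition
        t = min(s[0], 3 - s[1])
        return (s[0] - t, s[1] + t)                  # effect: transfer t litres
    return None

def empty3(s):
    return (s[0], 0) if s[1] > 0 else None

initial = (0, 0)                                     # the initial state
goal = lambda s: s[0] == 2                           # goal: 2 litres in the big jug

# A plan is a sequence of operators; its path cost here is its length (6).
plan = [fill4, pour4to3, empty3, pour4to3, fill4, pour4to3]
state = initial
for op in plan:
    state = op(state)                                # every precondition holds here
print(state, goal(state))
```

Search, in these terms, is the process of trying operator sequences from the initial state until one such plan ends in a goal state.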
1.4 Example of Problem formulation
Problem Formulation:
Agent Type: Part Picking Robot
Performance Measure: Error Recovery, Scalability, Safety, Throughput Rate, Item Sorting, Predictive Maintenance
Environment: Hazardous Material Handling, Low Light Conditions, Multi-Level Racks
Actuators: Gripper Customization, Conveyor Speed Control, Arm Path Planning
Sensors: 3D Object Detection, Environmental Sensors, Shelf Space Utilization, Object Fragility Detection
Partial Tabulation Chart for Part Picking Robot

Percept Sequence → Action
Camera Detection → Robot Activation
Identifying Location → Planning the optimal path
Detects a fragile item → Adjusting its grip
Identifies the item's destination → Plans the optimal path to reach the destination
Detects an obstacle ahead → Robot slows down
Item placed at destination → Robot releases the item

CONCLUSION: We have successfully implemented state space formulation and PEAS
representation for various AI applications.
SIGN AND REMARK:

DATE:

R1 (3 Marks)   R2 (4 Marks)   R3 (4 Marks)   R4 (4 Marks)   Total (15 Marks)   Signature

EXPERIMENT NO. 3
Date of Performance :

Date of Submission :

AIM: Exploring Prolog and understanding prolog programs as an AI programming language


THEORY:
3.1 Understanding Prolog

[Figure: a family tree over Pam, Tom, Bob, Liz, Ann, Pat and Jim connected by parent links,
together with rule diagrams defining the grandparent relation through two parent links
(parent(X, Z) and parent(Z, Y)) and the sister relation through a shared parent plus female(X).]
CODE 1:
Expected to take the parent facts as input and derive the various family relations.
OUTPUT:
CONCLUSION: Successfully implemented basic Prolog programs and dealt with various
other problems, as Prolog is useful for solving familiar problems like the monkey-and-banana
puzzle and flight routes. Using the right data structures made the solutions even better,
revealing how much smart planning and problem-solving methods matter for dealing with
tough situations.

SIGN AND REMARK:


DATE:

R1 (3 Marks)   R2 (4 Marks)   R3 (4 Marks)   R4 (4 Marks)   Total (15 Marks)   Signature

EXPERIMENT 4
Date of Performance :
Date of Submission :

AIM: To implement Breadth First Search as Uninformed Search and Depth First Search.
S/W USED : C/C++/Java/Prolog
THEORY:
Breadth First Search
Breadth First Search (BFS) searches breadth-wise in the problem space. Breadth-first search is
like traversing a tree where each node is a state which may be a potential candidate for a
solution. It expands nodes from the root of the tree and then generates one level of the tree at a
time until a solution is found. It is very easily implemented by maintaining a queue of nodes.
Initially the queue contains just the root. In each iteration, the node at the head of the queue is
removed and then expanded. The generated child nodes are then added to the tail of the queue.
Algorithm: Breadth-First Search
1. Create a variable called NODE-LIST and set it to the initial state.
2. Loop until the goal state is found or NODE-LIST is empty.
a. Remove the first element, say E, from the NODE-LIST. If NODE-
LIST was empty then quit.
b. For each way that each rule can match the state described in E do:
i) Apply the rule to generate a new state.
ii) If the new state is the goal state, quit and return this state.
iii) Otherwise add this state to the end of NODE-LIST
Since it never generates a node in the tree until all the nodes at shallower levels have been
generated, breadth-first search always finds a shortest path to a goal. Since each node can be
generated in constant time, the amount of time used by breadth-first search is proportional to the
number of nodes generated, which is a function of the branching factor b and the solution depth
d. Since the number of nodes at level d is b^d, the total number of nodes generated in the worst
case is b + b^2 + b^3 + … + b^d, i.e. O(b^d), the asymptotic time complexity of breadth-first search.

Breadth First Search


Look at the above tree, with nodes starting from the root node R at the first level, A and B at the
second level, and C, D, E and F at the third level. If we want to search for node E, then BFS will
search level by level. First it will check if E exists at the root. Then it will check the nodes at the
second level. Finally it will find E at the third level.
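The NODE-LIST algorithm above can be transcribed almost line by line. A minimal Python sketch of ours (the manual's own implementations are in Prolog; the split of the third-level children between A and B is an assumption):

```python
from collections import deque

def bfs(root, children, is_goal):
    # NODE-LIST from the algorithm above, kept as a FIFO queue.
    node_list = deque([root])
    while node_list:                  # loop until goal found or list empty
        e = node_list.popleft()       # remove the first element E
        for child in children(e):     # apply each rule to generate new states
            if is_goal(child):        # step 2(b)(ii): goal test on generation
                return child
            node_list.append(child)   # otherwise add to the end of NODE-LIST
    return None

# Tree from the figure: R at level 1; A, B at level 2; C, D, E, F at level 3
# (here we assume A's children are C, D and B's are E, F).
tree = {"R": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"]}
print(bfs("R", lambda n: tree.get(n, []), lambda n: n == "E"))
```

Running this finds E only after R, A and B have been expanded, i.e. level by level, as described above.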
Advantages Of Breadth-First Search
Breadth-first search will never get trapped exploring a useless path forever, unless some node
has infinitely many successors.
1. BFS is complete, i.e. if there is a solution, BFS will definitely find it.
2. BFS is optimal, i.e. if there is more than one solution, BFS will find the one that
requires the smallest number of steps.
Disadvantages Of Breadth-First Search
1. The main drawback of breadth-first search is its memory requirement. Since each level
of the tree must be saved in order to generate the next level, and the amount
of memory is proportional to the number of nodes stored, the space complexity of BFS
is O(b^d). As a result, BFS is severely space-bound in practice and will exhaust
the memory available on typical computers in a matter of minutes.
2. If the solution is farther away from the root, breadth-first search will consume a lot of
time.
Depth First Search:-
It is a recursive algorithm to search all the vertices of a tree data structure or a graph. The
depth-first search (DFS) algorithm starts with the initial node of graph G and goes deeper until
we find the goal node or a node with no children. Because of the recursive nature, a stack data
structure can be used to implement the DFS algorithm. The process of implementing DFS is
similar to the BFS algorithm.

Algorithm for DFS:-

Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until STACK is empty

Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)

Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1)
and set their STATUS = 2 (waiting state)

[END OF LOOP]

Step 6: EXIT

Now, let's understand the working of the DFS algorithm by using an example. In the example
given below, there is a directed graph having 7 vertices.
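Since the figure's exact 7-vertex graph is not reproduced here, the sketch below uses an assumed directed graph of 7 vertices; the STATUS values follow the algorithm's steps. A minimal Python illustration of ours:

```python
def dfs(graph, start):
    # STATUS: 1 = ready, 2 = waiting (on the stack), 3 = processed.
    status = {v: 1 for v in graph}          # Step 1
    stack = [start]                         # Step 2
    status[start] = 2
    order = []
    while stack:                            # Step 3
        n = stack.pop()                     # Step 4: pop the top node N
        status[n] = 3
        order.append(n)                     # "process" N by recording it
        for nbr in graph[n]:                # Step 5: push ready neighbours
            if status[nbr] == 1:
                stack.append(nbr)
                status[nbr] = 2
    return order                            # Step 6: EXIT

# An assumed directed graph with 7 vertices (not the manual's figure).
g = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": ["G"],
     "F": [], "G": []}
print(dfs(g, "A"))
```

Note that because the stack is LIFO, the traversal commits to the most recently discovered branch (here the C-side) before backtracking to B.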

Advantages of DFS:-
· Memory requirement is only linear with respect to the search graph. This is in contrast
with breadth-first search which requires more space. The reason is that the algorithm only
needs to store a stack of nodes on the path from the root to the current node.
· The time complexity of a depth-first search to depth d with branching factor b (the
number of children at each node, the outdegree) is O(b^d), since it generates the same set
of nodes as breadth-first search, only in a different order. If depth-first search finds
a solution without exploring much of a path, then the time and space it takes will be very
small.
· DFS requires less memory since only the nodes on the current path are stored. By chance
DFS may find a solution without examining much of the search space at all.
Disadvantages of Depth First Search:-
The disadvantage of depth-first search is that it may go down the left-most path forever. Even a
finite graph can generate an infinite tree. One solution to this problem is to impose a cutoff
depth on the search. The ideal cutoff is the solution depth d, but this value is rarely known in
advance of actually solving the problem. If the chosen cutoff depth is less than d, the algorithm
will fail to find a solution, whereas if the cutoff depth is greater than d, a large price is paid in
execution time, and the first solution found may not be an optimal one. Depth-first search is not
guaranteed to find the solution, and there is no guarantee that it finds a minimal solution if more
than one exists.

Code:- Define the various edges and, with the help of Prolog, obtain the BFS and DFS traversals.
INPUT and OUTPUT:
Breadth-First Search:

Depth-First Search:

CONCLUSION: Successfully Implemented Breadth First Search and Depth First Search as
Uninformed Search
SIGN AND REMARK:

DATE:

R1 (3 Marks)   R2 (4 Marks)   R3 (4 Marks)   R4 (4 Marks)   Total (15 Marks)   Signature
EXPERIMENT 5

Date of Performance :
Date of Submission :
AIM: Developing and Executing an Interactive Human-Computer Interaction System for
Solving the 8-Puzzle Problem using Prolog
S/W USED : C/C++/Java/Prolog
THEORY:
This experiment focuses on solving the 8-Puzzle Problem using the Prolog language. By
utilizing Prolog's logical reasoning capabilities, we implement Breadth-First Search (BFS)
and Depth-First Search (DFS) as uninformed search strategies to navigate the puzzle's state
space. Through a comparative analysis, we investigate the performance of these strategies in
terms of solution quality, computational efficiency, and memory usage. This study provides
insights into the efficacy of Prolog for solving complex problems and offers a perspective on
the effectiveness of different search approaches for the 8-Puzzle Problem.
Algorithms: Breadth-First Search (BFS), Depth-First Search (DFS)
Breadth-First Search (BFS) Algorithm for 8-Puzzle Problem in Prolog:
Define the initial state of the 8-Puzzle problem.
Create a queue and enqueue the initial state.
Create an empty set to track visited states.
While the queue is not empty:
a. Dequeue the front state from the queue.
b. If the current state is the goal state, terminate with success.
c. Generate all possible valid successor states from the current state.
d. For each successor state:
i. If it hasn't been visited:
- Mark it as visited.
- If the new state is a goal state, terminate with success.
- Enqueue the state onto the queue.
Depth-First Search (DFS) Algorithm for 8-Puzzle Problem in Prolog:
Define the initial state of the 8-Puzzle problem.
Create a stack and push the initial state.
Create an empty set to track visited states.
While the stack is not empty:
a. Pop the top state from the stack.
b. If the current state is the goal state, terminate with success.
c. Generate all possible valid successor states from the current state.
d. For each successor state:
i. If it hasn't been visited:
- Mark it as visited.
- If the new state is a goal state, terminate with success.
- Push the state onto the stack.
In both algorithms, the initial state represents the configuration of the 8-Puzzle, the goal state
represents the desired arrangement, and the successor states are generated by moving tiles to
empty positions. The BFS and DFS strategies differ in how they manage the queue and stack
for state exploration.
In a Prolog implementation, you would need to define predicates for generating valid
successor states, checking for the goal state, tracking visited states, and managing the queue
or stack accordingly. The specific syntax and structure would depend on the Prolog dialect
you are using and your implementation approach.
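As one concrete piece of such an implementation, successor-state generation for the 8-Puzzle can be sketched as follows. This is a Python illustration of ours, not the manual's Prolog code; the state encoding (a 9-tuple with 0 as the blank) is an assumption:

```python
def successors(state):
    # state: a tuple of 9 entries in row-major order, 0 marks the blank.
    # A move slides a tile orthogonally adjacent to the blank into it.
    i = state.index(0)
    r, c = divmod(i, 3)
    result = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:                 # stay on the board
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]                     # swap blank with tile
            result.append(tuple(s))
    return result

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the centre: 4 legal moves
print(len(successors(start)))
```

BFS and DFS then differ only in whether these successors are enqueued (FIFO) or pushed (LIFO), exactly as described above.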
Advantages of Breadth-First Search (BFS) for Solving the 8-Puzzle Problem:
1. Completeness: BFS is guaranteed to find a solution if one exists. It explores all possible
states level by level, ensuring that the shortest solution is found.
2. Optimal Solution: When applied to the 8-Puzzle Problem, BFS will always find the optimal
solution, i.e., the solution with the fewest number of moves. This is because BFS explores
states in order of their distance from the initial state.
3. Exploration of Multiple Solutions: If there are multiple solutions with the same length, BFS
will explore and find multiple solutions.
Disadvantages of Breadth-First Search (BFS) for Solving the 8-Puzzle Problem:
1. Memory Intensive: BFS stores all generated states in memory, which can quickly become
impractical for large state spaces like the 8-Puzzle. The memory requirement grows
exponentially with the depth of the search.
2. Slower for Deeper Solutions: BFS explores all nodes at a given depth before moving to the
next level. If the solution is deep in the search tree, BFS may take a long time to reach it.
Advantages of Depth-First Search (DFS) for Solving the 8-Puzzle Problem:
1. Memory Efficiency: DFS stores fewer states in memory compared to BFS, as it explores
deeper into the state space before backtracking. This makes DFS suitable for large state
spaces.
2. Faster for Deep Solutions: If the solution is located deep in the search tree, DFS might find
it faster compared to BFS since it prioritizes going deep before exploring wider.
Disadvantages of Depth-First Search (DFS) for Solving the 8-Puzzle Problem:
1. Completeness not Guaranteed: DFS might not find a solution even if one exists. It can get
stuck in infinite loops or explore non-promising branches early, missing the optimal solution.
2. Non-Optimal Solution: The solution found by DFS might not be optimal. It tends to find the
first solution it encounters, which might not be the shortest one.
3. Lack of Multiple Solutions: DFS generally finds a single solution and may not explore
multiple solutions of the same length.
CODE: To achieve the goal state of unorganized 8 puzzle problem.
INPUT and OUTPUT:

CONCLUSION: We learned to develop and execute an interactive human-computer
interaction system for solving the 8-Puzzle Problem using Prolog successfully.
SIGN AND REMARK:
DATE:

R1 (3 Marks)   R2 (4 Marks)   R3 (4 Marks)   R4 (4 Marks)   Total (15 Marks)   Signature
EXPERIMENT 6

Date of Performance:

Date of Submission :

AIM: Designing and Implementing a Tic-Tac-Toe Game using Prolog for Human-Computer
Interaction
S/W USED : C/C++/Java/Prolog
THEORY:
Efficient AI Decision-Making in Tic-Tac-Toe using Minimax with Alpha-Beta Pruning
This experiment investigates the utilization of Prolog for creating and engaging in a game of Tic-
Tac-Toe. Prolog's logical reasoning abilities offer an intriguing foundation for implementing the
game's rules and strategic decision-making. By encoding the game's logic and providing an
interactive interface, this experiment explores how Prolog can facilitate the development of AI-
driven games, showcasing its potential in simplifying complex decision structures for interactive
entertainment.
Algorithm: Minimax, Alpha-Beta Pruning
Breadth-First Search (BFS) for Tic-Tac-Toe in Prolog:
Create a variable called NODE_LIST and set it to the initial state of the Tic-Tac-Toe game.
Perform the BFS loop until the goal state (winning state or draw) is found or NODE_LIST is
empty.
a. Remove the first element from NODE_LIST. If NODE_LIST is empty, the search process
terminates.
b. For each possible move that can be made from the current state, do the following:
i) Apply the move to generate a new game state.
ii) If the new state is a goal state (either a winning state or a draw), the search process
terminates, and you've found a solution.
iii) Otherwise, add this new state to the end of NODE_LIST.
Depth-First Search (DFS) for Tic-Tac-Toe in Prolog:
Create a variable called NODE_LIST and set it to the initial state of the Tic-Tac-Toe game.
Perform the DFS loop until the goal state (winning state or draw) is found or NODE_LIST is
empty.
a. Remove the first element from NODE_LIST. If NODE_LIST is empty, the search process
terminates.
b. For each possible move that can be made from the current state, do the following:
i) Apply the move to generate a new game state.
ii) If the new state is a goal state (either a winning state or a draw), the search process
terminates, and you've found a solution.
iii) Otherwise, add this new state to the beginning of NODE_LIST.
It's important to note that in the context of a Tic-Tac-Toe game, the game state would include the
current positions of X and O on the board. The "possible moves" would refer to empty cells
where the current player can place their symbol. The "goal state" would be when a player wins or
when the game ends in a draw.
Remember, the implementation of BFS and DFS in Prolog would involve defining predicates
that represent the game state, possible moves, and checking for a goal state. The search loop
would involve using recursion and backtracking, which are fundamental concepts in Prolog
programming.
Advantages of Alpha-Beta Pruning in Prolog for Tic-Tac-Toe:
1. Significant Reduction in Search Space:
● Alpha-beta pruning helps in cutting off branches of the game tree that do not need
to be explored.
● This reduction in search space results in a significant speedup, allowing the
algorithm to explore deeper levels of the game tree within a reasonable time
frame.
2. Efficiently Identifies Best Moves:
● Alpha-beta pruning allows the algorithm to determine the best possible move by
minimizing the number of nodes evaluated.
● This efficiency is particularly important in games like Tic-Tac-Toe, where the
search space is manageable, but the algorithm still needs to consider multiple
moves ahead.
3. Improves Search Depth:
● By eliminating irrelevant branches early in the search process, alpha-beta pruning
enables the algorithm to explore deeper levels of the game tree.
● This improved search depth often results in better gameplay decisions by
considering more complex tactics and strategies.
Disadvantages of Alpha-Beta Pruning in Prolog for Tic-Tac-Toe:
1. Complex Implementation:
● Implementing alpha-beta pruning correctly requires careful programming and
understanding of the algorithm.
● The logic for maintaining alpha and beta values, along with recursive calls, can be
challenging to implement and debug.
2. Influence of Move Ordering:
● The effectiveness of alpha-beta pruning can be influenced by the order in which
moves are considered.
● A suboptimal move ordering might lead to fewer pruning opportunities, reducing
the overall efficiency of the algorithm.
3. Doesn't Address All Decision Factors:
● While alpha-beta pruning is excellent for minimizing unnecessary evaluations, it
doesn't account for all aspects of the game state.
● Some game positions might require considering additional evaluation criteria
beyond what alpha-beta pruning provides.
4. Doesn't Replace Heuristic Evaluation:
● Alpha-beta pruning optimizes the search process but still relies on an effective
heuristic evaluation function to assess the desirability of game states.
● The quality of the heuristic function can greatly impact the algorithm's decision-making
abilities.
In conclusion, alpha-beta pruning offers significant advantages by optimizing the search process
and identifying strong moves in games like Tic-Tac-Toe. However, its implementation
complexity and dependence on move ordering and heuristic evaluation should be taken into
account when designing a game-playing algorithm in Prolog.
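The minimax-with-alpha-beta decision procedure discussed above can be sketched compactly. The following Python fragment is our illustration, not the manual's Prolog program; it scores positions as +1 (X can force a win), -1 (O can), or 0 (forced draw), and prunes branches that cannot change the result:

```python
def winner(b):
    # Check the 8 winning lines of a 3x3 board stored as a flat list.
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def alphabeta(b, player, alpha=-2, beta=2):
    # X maximizes, O minimizes; values lie in [-1, 1], so (-2, 2) is a
    # window wider than any achievable score.
    w = winner(b)
    if w:
        return 1 if w == "X" else -1
    if " " not in b:
        return 0                      # board full: draw
    for i in range(9):
        if b[i] == " ":
            b[i] = player
            v = alphabeta(b, "O" if player == "X" else "X", alpha, beta)
            b[i] = " "                # undo the move (backtracking)
            if player == "X":
                alpha = max(alpha, v)
            else:
                beta = min(beta, v)
            if alpha >= beta:
                break                 # prune: branch cannot affect the result
    return alpha if player == "X" else beta

print(alphabeta([" "] * 9, "X"))      # perfect play from empty board: a draw
```

The undo-and-retry structure mirrors the recursion and backtracking that a Prolog version would get for free from its execution model.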
PROGRAM: To implement the Tic-Tac-Toe game using adversarial search.
INPUT and OUTPUT:

CONCLUSION: We designed and Implemented a Tic-Tac-Toe Game using Prolog for Human-
Computer Interaction.
SIGN AND REMARK:
DATE:

R1 (3 Marks)   R2 (4 Marks)   R3 (4 Marks)   R4 (4 Marks)   Total (15 Marks)   Signature
EXPERIMENT 7
Date of Performance:

Date of Submission:

AIM: Learning Hill Climbing Algorithm in AI through Practical Implementation in Prolog.


S/W used: C/C++/Java/Prolog.
THEORY:
The Hill Climbing algorithm engages in an ascent through the problem space, aiming to find an
optimal solution by iteratively improving the current state. Much like scaling a peak in a
landscape, each state symbolizes a potential solution configuration. Hill Climbing begins at an
initial state and advances toward better states by making locally optimal choices. It explores
neighboring states, selecting the one with the highest ascent, and continues this process until it
reaches a peak, a state where no immediate improvement is possible. Implementing Hill
Climbing often involves comparing the evaluation of neighboring states to identify the best
direction to progress.

Algorithm for Hill Climbing

1. Define the initial state of the problem.


2. Evaluate the current state using an objective function to determine its quality.
3. Repeat until no better state can be found or a stopping criterion is met:
   a. Generate neighbouring states by making small changes to the current state.
   b. Evaluate the quality of each neighbouring state using the objective function.
   c. Select the neighbouring state with the highest evaluation (if it is better than the
      current state).
   d. If the selected state is better than the current state, update the current state with the
      selected state.
4. Return the current state as the solution or the best-found solution.
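The four steps above can be sketched as a short loop. A minimal Python illustration of ours (the single-peak objective function and the neighbour generator are assumed for demonstration):

```python
def hill_climb(state, neighbours, score, max_steps=1000):
    # Greedy ascent: repeatedly move to the best neighbour while it
    # improves the objective; max_steps is the stopping criterion.
    for _ in range(max_steps):
        best = max(neighbours(state), key=score, default=state)
        if score(best) <= score(state):
            return state              # no better neighbour: local optimum
        state = best
    return state

# Assumed objective with a single peak at x = 7.
score = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbours, score))   # climbs to the peak at 7
```

With this single-peak objective the climb always reaches 7; a multi-peaked objective would expose the local-optimum weakness discussed below.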

Features of Hill Climbing algorithm:

1. Hill Climbing is a simple and intuitive algorithm that is easy to understand and
implement.
2. It can be used in a wide variety of optimization problems, including those with a large
search space and complex constraints.
3. Hill Climbing is often very efficient in finding local optima, making it a good choice for
problems where a good solution is needed quickly.
4. The algorithm can be easily modified and extended to include additional heuristics or
constraints.

Disadvantages of Hill Climbing algorithm:

1. Hill Climbing can get stuck in local optima, meaning that it may not find the global
optimum of the problem.
2. The algorithm is sensitive to the choice of initial solution, and a poor initial solution may
result in a poor final solution.
3. Hill Climbing does not explore the search space very thoroughly, which can limit its
ability to find better solutions.
4. It may be less effective than other optimization algorithms, such as genetic algorithms or
simulated annealing, for certain types of problems.

PROGRAM: To define the initial state and its evaluation, and provide a recursive hill
climbing algorithm with a stopping criterion.

INPUT and OUTPUT:

CONCLUSION: Learned the Hill Climbing Algorithm in AI through practical
implementation in Prolog.
DATE:
SIGN AND REMARK:

R1 (3 Marks)   R2 (4 Marks)   R3 (4 Marks)   R4 (4 Marks)   Total (15 Marks)   Signature
EXPERIMENT 8
Date of Performance:

Date of Submission:

AIM: Prove goal sentences from the following set of statements in FOPL by applying forward
and backward chaining.
S/W used: C/C++/Java/Prolog.
THEORY:
Forward Chaining in AI using FOL Proof

When utilizing an inference engine, forward chaining is also referred to as forward deduction
or forward reasoning. Forward chaining is a type of reasoning in which atomic clauses in a
knowledge base are used to derive more data in the forward direction using inference rules
(Modus Ponens).
Before deriving new information, the inference engine evaluates the current data, derivations, and
conditions in this sort of chaining. The manipulation of information in the knowledge base
results in the achievement of an objective (goal).

Forward chaining may be employed for application planning, monitoring, management, and
interpretation.

ii. Properties of forward chaining

● It follows a bottom-up strategy, going from the known facts at the bottom to the goal at the top.


● It is the process of reaching a decision based on known facts or information, beginning
with the initial state and progressing to the target state.
● The forward-chaining method is also known as data-driven because we achieve our
objective by employing available data.
● The forward-chaining method is widely used in expert systems such as CLIPS, business
rule systems, and manufacturing rule systems.

Before delving further into the chaining illustration, let us first understand the preprocessing that
must be performed on the provided sentences to implement forward chaining, and that
preprocessing is as follows:

1. Change the clauses to First Order Predicate Logic.


2. Distinguish truths from generated clauses.
3. Make note of what we need to establish at the end.

iii. Examples of forward chaining

Some Fact:

1. It is a crime for Indians to give weapons to enemies of India.


2. Country XYZ is an enemy of India
3. XYZ has some Nuclear warheads
4. All Nuclear warheads were sold to XYZ by Rajiv
5. Rajiv is Indian
6. A nuclear warhead is a weapon

Goal:

Rajiv is a criminal.
iv. Facts Conversion into FOL
Step 1: It is a crime for Indians to give weapons to enemies of India.
FOL: Indian(P) ^ Weapon(Q) ^ Sells(P, Q, R) ^ Enemy(R, India) -> Criminal(P)
Step 2: Country XYZ is an enemy of India.
FOL: Enemy(XYZ, India)
Step 3: XYZ has some nuclear warheads.
FOL: ∃x Owns(XYZ, x) ^ NuclearWarhead(x)
Step 4: All nuclear warheads were sold to XYZ by Rajiv.
FOL: NuclearWarhead(x) ^ Owns(XYZ, x) -> Sells(Rajiv, x, XYZ)
Step 5: Rajiv is Indian.
FOL: Indian(Rajiv)
Step 6: A nuclear warhead is a weapon.
FOL: NuclearWarhead(x) -> Weapon(x)
v. Forward chaining proof
Iteration 1:
1. Begin with the knowledge base and the known facts: the facts that cannot be inferred
from anything else (the atomic clauses without premises).

2. Add inferences one at a time to join the connections:

○ Rule (1) cannot fire yet, because not all of its premises are satisfied.
○ Rules (2) and (3) are already atomic facts in the knowledge base.
○ Rule (4) has its premises satisfied, so its conclusion Sells(Rajiv, x, XYZ) is added.
○ Rule (5) has no premises, so nothing new is derived from it.
○ Rule (6) fires on NuclearWarhead(x), which is already known, so Weapon(x) is added
to the next state.
Iteration 2:
After the first iteration, all of Rule (1)'s premises on the LHS are satisfied, so we can add
Criminal(Rajiv) from its RHS in the following stage.

We concluded that Rajiv is a criminal.


Hence proved!
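The two iterations above can also be sketched as a naive forward-chaining loop in Python (an illustrative encoding of this example, not the manual's own program; the constant W1 naming one specific warhead is an assumption):

```python
# Naive forward chaining for the "Rajiv is a criminal" example.
# Predicates are encoded as tuples; rules as hand-coded derivation steps.
facts = {
    ("Enemy", "XYZ", "India"),
    ("Owns", "XYZ", "W1"),             # W1: one specific warhead (assumption)
    ("NuclearWarhead", "W1"),
    ("Indian", "Rajiv"),
}

def step(kb):
    """Apply each rule once (Modus Ponens); return the facts it derives."""
    derived = set()
    for (p, *args) in list(kb):
        if p == "NuclearWarhead":
            x = args[0]
            derived.add(("Weapon", x))                      # Rule 6
            if ("Owns", "XYZ", x) in kb:
                derived.add(("Sells", "Rajiv", x, "XYZ"))   # Rule 4
        if p == "Sells":                                    # Rule 1
            seller, item, buyer = args
            if (("Indian", seller) in kb and ("Weapon", item) in kb
                    and ("Enemy", buyer, "India") in kb):
                derived.add(("Criminal", seller))
    return derived

# Iterate to a fixed point, mirroring the two iterations of the proof above.
while True:
    new = step(facts) - facts
    if not new:
        break
    facts |= new

print(("Criminal", "Rajiv") in facts)  # True
```

The first pass adds Weapon(W1) and Sells(Rajiv, W1, XYZ); the second pass fires Rule (1) and adds Criminal(Rajiv), exactly as in the hand proof.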
Simple Example:-
% Define some facts
bird(sparrow).
bird(eagle).
mammal(cat).
mammal(dog).
% Define rules
can_fly(X) :- bird(X).
has_fur(X) :- mammal(X).
% Query to perform forward chaining
?- can_fly(X).
In this example, we have defined facts about birds and mammals. We also defined two rules:
can_fly(X) :- bird(X) and has_fur(X) :- mammal(X). When you query can_fly(X)., Prolog will
automatically use forward chaining to infer that sparrow and eagle can fly.

So, in Prolog, you don't explicitly execute a forward chaining command; instead, you define your
knowledge base with facts and rules and let Prolog's inference engine perform forward chaining
when you make queries.
vi. Advantages
● It may be employed to reach several inferences.
● It serves as a solid foundation for drawing inferences.
● It is more adaptable than backward chaining since the data generated from it is not
limited.
vii. Disadvantages
● Forward chaining can be a time-consuming procedure; eliminating and synchronizing the
available information may take a long time.
● The interpretation of facts or findings is not as obvious as it is for backward chaining,
which employs a goal-driven approach that yields quick results.
Backward chaining
i. What is backward chaining?
Backward Chaining, also known as backward reasoning, is an inference engine reasoning
method that begins with an imagined objective. It employs backtracking to discover the most
optimum method for dispute resolution or to achieve the target state, in which the search begins
from the outcome and travels back to comprehend the circumstances from which it came.
Inference engines use this reasoning method to discover the circumstances and rules that led to a
logical outcome or conclusion.
ii. Properties of backward chaining
The following are the main traits that distinguish backward chaining from forward chaining:

● It is referred to as a top-down strategy.


● Backward chaining is founded on the law of modus ponens reasoning.
● Backward chaining divides the objective into sub-goals to demonstrate the truth of the
facts.
● A goal-driven strategy is used because a list of objectives determines which rules are
chosen and used.
● The backward-chaining method is used in game theory, automated theorem-proving
techniques, inference engines, proof assistants, and other artificial intelligence applications.
● For evidence, the backward-chaining technique mostly employed a depth-first search
strategy.

Before going any further with the chaining example, let's first comprehend the algorithm that
needs to be applied to the given phrases to apply backward chaining:

1. First, the system checks the information base to see if the objective has been stated.
2. If not, it scans the knowledge base for rules until it finds one with the objective in its
THEN portion in the shape of premises.
3. Once the hypotheses are specified and recognized, the system checks to see if they are in
the knowledge base; if not, a new objective is set. This new objective is known as a sub-
goal to be proven.
4. The system repeats the above stages until it discovers the premise that's not given by any
regulation. It is referred to as a primitive premise.
5. Finally, when the system locates the primitive, it prompts the user for additional data,
which is employed to establish both the sub-goal and the initial goal.

iii. Examples of backward chaining

Some Facts:
1. Ravi enjoys a wide variety of foods.
2. Bananas are food.
3. Pizza is food.
4. A food is anything that anyone consumes and isn't harmed by.
5. Sam eats Idli and is still alive.
6. Bill eats everything Sam eats.

Goal:

Ravi likes Idli.


Conversion into FOL:
The first sentence can be expressed as: ∀x Food(x) -> Likes(Ravi, x)
The second sentence can be expressed as: Food(Banana)
The third sentence can be expressed as: Food(Pizza)
The fourth sentence can be expressed as: ∀x ∀y Eats(x, y) ^ ¬Harmed(y) -> Food(x)
The fifth sentence can be expressed as: Eats(Idli, Sam) -> ¬Harmed(Sam)
The sixth sentence can be expressed as: ∀x Eats(x, Sam) -> Eats(x, Bill)
iv. Backward-Chaining proof
Iteration 1:
Separation of Facts:
● Food ( Banana )
● Food ( Pizza )
● Eats ( Idli, Sam ) -> ¬ Harmed ( Sam )
Iteration 2:

1. We assume the statement to be established is true, i.e., Likes(Ravi, Idli) is now treated
as true.
2. From the first rule, we can deduce that Food(Idli) must be true.
3. From the fourth rule, Food(Idli) holds because Eats(Idli, Sam) ^ ¬Harmed(Sam) holds,
using the separated facts.
Because every sub-goal reduces to known facts, the assertion we wished to prove is legitimate.
Hence Proved!
Simple example:-
% Define some facts
parent(john, mary).
parent(john, lisa).
parent(mary, ann).
parent(lisa, pat).
% Define rules
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
% Query using backward chaining
?- ancestor(john, pat).
In this example, we have a knowledge base that represents parent-child relationships, along
with two rules for ancestor/2 that define the relationship between ancestors and descendants.
When you query ancestor(john, pat)., Prolog will use backward chaining to satisfy this goal. It
will find that john is the parent of lisa, and lisa is the parent of pat, which satisfies the goal, so it
will return true. If you query ancestor(john, ann)., Prolog will recursively apply the rule to find
that john is the parent of mary, and mary is the parent of ann, and it will return true. In Prolog,
you can use various built-in predicates like call/1, assert/1, and retract/1 to control the flow of
backward chaining and manipulate the knowledge base during querying. Backward chaining
allows you to find solutions to specific goals or queries based on the available facts and rules in
the knowledge base.
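The goal-directed search Prolog performs here can be sketched in Python as a tiny backward chainer over the same ancestor facts (an illustration of the idea, not Prolog's actual engine):

```python
# A tiny backward chainer for the ancestor example above.
parents = {("john", "mary"), ("john", "lisa"), ("mary", "ann"), ("lisa", "pat")}

def ancestor(x, y):
    """Prove ancestor(x, y) by working backward from the goal."""
    if (x, y) in parents:                       # base rule: parent(X, Y)
        return True
    # recursive rule: parent(X, Z), ancestor(Z, Y) -- try each child Z of X
    return any(ancestor(z, y) for (p, z) in parents if p == x)

print(ancestor("john", "pat"))  # True  (john -> lisa -> pat)
print(ancestor("pat", "john"))  # False (pat has no children)
```

As in Prolog, the goal ancestor(john, pat) is split into sub-goals that are checked against the facts, and the search backtracks whenever a sub-goal fails.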
v. Advantages
Backward Chaining, like Forward Chaining, has several advantages, a few of which are listed
below:

● The problem typically begins by developing a hypothesis and afterward determining


whether or not it can be proven.
● Because it is concentrated on a specific objective, it only employs parts of the knowledge
base that are pertinent to the hypothesis.
● It is appropriate for issues requiring an expert system for diagnosis, prescribing, and
troubleshooting.
● Backward chaining is a faster procedure because it only analyzes and verifies the
necessary rules.
vi. Disadvantages
Backward Chaining is an immensely helpful reasoning method that conducts task analysis
swiftly and effectively assists the system in reaching the target state. However, this method has a
few disadvantages, including:

● It may pursue a specific line of reasoning and inquiry even if it demonstrates to be


impractical in the end.
● Backward chaining can be difficult to execute at times.
Differences Between Forward and Backward Chaining
The following is the distinction between forward and backward chaining:

● As the name implies, forward chaining begins with known facts and moves forward by
implementing logical rules to extract more information, and it keeps going until it reaches
the objective, whereas backward chaining begins with the goal and moves backward by
implementing logical deduction rules to evaluate the information that satisfies the
objective.
● Backward chaining is a goal-driven inference method, whereas forward chaining is a
data-driven inference technique.
● Forward chaining is referred to as the down-up method, whereas backward chaining is
referred to as the top-down approach.
● Backward chaining employs a depth-first search strategy, whereas forward chaining
employs a breadth-first search strategy.
● Both forward and reverse chaining use the Modus ponens reasoning rule.
● Forward chaining is useful for tasks like planning, design process tracking, diagnostic,
and categorization, whereas backward chaining is useful for tasks like categorization and
diagnosis.
● Forward-chaining may include multiple ASK questions from the information source,
whereas backward chaining may include fewer ASK questions.
● Forward chaining is slow because it tests all of the rules, whereas backward chaining is
quick because it only examines the rules that are needed.
● In artificial intelligence, backward and forward chaining are essential techniques of
logical thinking. The primary distinctions between these ideas are methodology, strategy,
technique, speed, and tactical direction.
● Forward chaining is essential for coders who want to create effective computer-based
systems using data-driven methods. Backward chaining is essential for coders who want
to use goal-driven algorithms to create effective solutions in complicated database
systems.

Conclusion: Goal sentences from the given set of statements in FOPL were proved by applying
forward and backward chaining.

DATE:
SIGN AND REMARK:

R1 R2 R3 R4 Total Signature
(3 Marks) (4 Marks) (4 Marks) (4 Marks) (15 Marks)
EXPERIMENT NO. 9

Date of Performance :

Date of Submission :
AIM: Create a Bayesian network for the given problem statement and draw inferences from it. (You
can use a belief and decision networks tool for modeling Bayesian networks.)
THEORY:
Bayesian Belief Network in artificial intelligence. A Bayesian belief network is a key computer
technology for dealing with probabilistic events and for solving problems that involve uncertainty.
We can define a Bayesian network as: "A Bayesian network is a probabilistic graphical
model which represents a set of variables and their conditional dependencies using a directed
acyclic graph." It is also called a Bayes network, belief network, decision network, or
Bayesian model. Bayesian networks are probabilistic because they are built from a
probability distribution and use probability theory for prediction and anomaly detection.
Real world applications are probabilistic in nature, and to represent the relationship between
multiple events, we need a Bayesian network. It can also be used in various tasks including
prediction, anomaly detection, diagnostics, automated insight, reasoning, time series prediction,
and decision making under uncertainty.
Bayesian Network can be used for building models from data and experts opinions, and it
consists of
Two parts: Directed Acyclic Graph, Table of conditional probabilities.
The generalized form of Bayesian network that represents and solve decision problems under
uncertain knowledge is known as an Influence diagram. A Bayesian network graph is made up of
nodes and Arcs (directed links), where:
o Each node corresponds to the random variables, and a variable can be continuous or discrete.
o Arcs or directed arrows represent the causal relationships or conditional probabilities between
random variables. These directed links connect pairs of nodes in the graph. A link from one
node to another means the first node directly influences the second; if there is no directed
link, the nodes are independent of each other.
o In the above diagram, A, B, C, and D are random variables represented by the
nodes of the network graph.
o If we are considering node B, which is connected with node A by a directed arrow,
then node A is called the parent of Node B.
o Node C is independent of node A.
The Bayesian network has mainly two components: the causal component and the actual numbers.
Each node in the Bayesian network has a conditional probability distribution P(Xi | Parents(Xi)),
which determines the effect of the parents on that node. A Bayesian network is based on the
joint probability distribution and conditional probability. So let's first understand the joint
probability distribution:
Joint probability distribution:
If we have variables x1, x2, x3, ..., xn, then the probabilities of the different combinations of
x1, x2, x3, ..., xn are known as the joint probability distribution.

P[x1, x2, x3, ..., xn] can be written in the following way in terms of the joint probability
distribution:
= P[x1 | x2, x3, ..., xn] P[x2, x3, ..., xn]
= P[x1 | x2, x3, ..., xn] P[x2 | x3, ..., xn] .... P[xn-1 | xn] P[xn]

In general, for each variable Xi we can write the equation as:
P(Xi | Xi-1, ..., X1) = P(Xi | Parents(Xi))
Explanation of Bayesian network:
Let's understand the Bayesian network through an example by creating a directed acyclic
graph:
Example: Harry installed a new burglar alarm at his home to detect burglary. The alarm responds
reliably to a burglary but also responds to minor earthquakes. Harry has two neighbors,
David and Sophia, who have taken responsibility for informing Harry at work when they hear the
alarm. David always calls Harry when he hears the alarm, but sometimes he gets confused with the
phone ringing and calls then too. On the other hand, Sophia likes to listen to loud music, so
sometimes she misses hearing the alarm. Here we would like to compute the probability of the
burglar alarm.

Problem:
Calculate the probability that alarm has sounded, but there is neither a burglary, nor an
earthquake occurred, and David and Sophia both called the Harry.

Solution:
o The Bayesian network for the above problem is given below. The network structure is showing
that burglary and earthquake is the parent node of the alarm and directly affecting the
probability of alarm's going off, but David and Sophia's calls depend on alarm
probability.
o The network represents that David and Sophia do not directly perceive the burglary, do not
notice minor earthquakes, and do not confer before calling.
o The conditional distributions for each node are given as a conditional probability table, or CPT.
o Each row in the CPT must sum to 1 because the entries in the row represent an
exhaustive set of cases for the variable.
o In a CPT, a boolean variable with k boolean parents contains 2^k probabilities. Hence, if there
are two parents, the CPT will contain 4 probability values.
List of all events occurring in this network:
o Burglary (B)
o Earthquake(E)
o Alarm(A)
o David Calls(D)
o Sophia calls(S)

The events of the problem statement can be written as the probability P[D, S, A, B, E], which we
can rewrite using the joint probability distribution:


P[D, S, A, B, E]= P[D | S, A, B, E]. P[S, A, B, E]
=P[D | S, A, B, E]. P[S | A, B, E]. P[A, B, E]
= P [D| A]. P [ S| A, B, E]. P[ A, B, E]
= P[D | A]. P[ S | A]. P[A| B, E]. P[B, E]
= P[D | A ]. P[S | A]. P[A| B, E]. P[B |E]. P[E]

Let's take the observed probability for the Burglary and earthquake component:
P(B= True) = 0.002, which is the probability of burglary.
P(B= False)= 0.998, which is the probability of no burglary.
P(E= True)= 0.001, which is the probability of a minor earthquake
P(E= False)= 0.999, Which is the probability that an earthquake not occurred.
We can provide the conditional probabilities as per the below tables:
Conditional probability table for Alarm A:
The Conditional probability of Alarm A depends on Burglar and earthquake:
B       E       P(A=True)   P(A=False)
True    True    0.94        0.06
True    False   0.95        0.05
False   True    0.31        0.69
False   False   0.001       0.999
Conditional probability table for David Calls:
The Conditional probability of David that he will call depends on the probability of Alarm.
A P(D= True) P(D= False)
True 0.91 0.09
False 0.05 0.95
Conditional probability table for Sophia Calls:
The Conditional probability of Sophia that she calls is depending on its Parent Node
"Alarm."
A P(S= True) P(S= False)
True 0.75 0.25
False 0.02 0.98

From the formula of joint distribution, we can write the problem statement in the form of
probability
distribution:
P(S, D, A, ¬B, ¬E) = P (S|A) *P (D|A)*P (A|¬B ^ ¬E) *P (¬B) *P (¬E).
= 0.75* 0.91* 0.001* 0.998*0.999
= 0.00068045.
Hence, a Bayesian network can answer any query about the domain by using Joint
distribution.
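As a check, the hand computation above can be reproduced in a few lines of Python using the CPT values from the tables (a plain sketch of this single query; no Bayesian-network library is required):

```python
# P(S, D, A, ~B, ~E) = P(S|A) * P(D|A) * P(A|~B,~E) * P(~B) * P(~E)
# Values taken from the CPTs and priors given above.
p_S_given_A = 0.75
p_D_given_A = 0.91
p_A_given_notB_notE = 0.001
p_notB = 0.998
p_notE = 0.999

joint = p_S_given_A * p_D_given_A * p_A_given_notB_notE * p_notB * p_notE
print(round(joint, 8))  # 0.00068045
```

This matches the 0.00068045 obtained by hand, confirming that the query factorizes over the network structure.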
# This function helps to calculate probability distribution, which goes into BBN (note, can
handle up to 2 parents)

# Create Network

# Convert the BBN to a join tree

# Set options for graph looks

# Generate graph

# Update margins and print the graph

# Define a function for printing marginal probabilities

# Use the function to print marginal probabilities

# To add evidence of events that happened so probability distribution can be recalculated

# Use function to add evidence

SIGN AND REMARK:

DATE:
R1 R2 R3 R4 Total Signature
(3 Marks) (4 Marks) (4 Marks) (4 Marks) (15 Marks)

EXPERIMENT NO. 10

Date of Performance :

Date of Submission :

AIM: Implement a Planning Agent (Planning Programming)


THEORY:
What is planning in AI?
● The planning in Artificial Intelligence is about the decision-making tasks performed
by the robots or computer programs to achieve a specific goal.
● The execution of planning is about choosing a sequence of actions with a high
likelihood to complete the specific task.
Blocks-World planning problem
● The blocks-world problem is known as the Sussman Anomaly.
● Noninterleaved planners of the early 1970s were unable to solve this problem; hence
it is considered anomalous.
● When two subgoals G1 and G2 are given, a noninterleaved planner produces
either a plan for G1 concatenated with a plan for G2, or vice versa.
● In the blocks-world problem, three blocks labeled 'A', 'B', and 'C' are allowed to rest on a
flat surface. The given condition is that only one block can be moved at a time to
achieve the goal.
● The start state and goal state are shown in the following diagram.

Components of Planning System


The planning consists of following important steps:
● Choose the best rule to apply next based on the best available heuristics.
● Apply the chosen rule for computing the new problem state.
● Detect when a solution has been found.
● Detect dead ends so that they can be abandoned and the system’s effort is directed
in more fruitful directions.
● Detect when an almost correct solution has been found.
Goal stack planning

This is one of the most important planning algorithms, which is specifically used by
STRIPS.
● The stack is used in an algorithm to hold the action and satisfy the goal. A knowledge
base is used to hold the current state, actions.
● Goal stack is similar to a node in a search tree, where the branches are created if
there is a choice of an action.
The important steps of the algorithm are as stated below:
i. Start by pushing the original goal on the stack. Repeat this until the stack becomes
empty. If stack top is a compound goal, then push its unsatisfied sub goals on the stack.
ii. If stack top is a single unsatisfied goal then, replace it by an action and
push the action’s precondition on the stack to satisfy the condition.
iii. If stack top is an action, pop it from the stack, execute it and change the
knowledge base by the effects of the action.
iv. If stack top is a satisfied goal, pop it from the stack.

Program:- Define the initial state and its pickup and putdown actions and predicate
functions, and hence implement the planning function.
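As an illustration, the pickup/putdown idea can be sketched in Python as follows (the state encoding, the combined move action, and the Sussman-anomaly instance are assumptions for illustration, not STRIPS itself):

```python
# Blocks-world sketch: a state maps each block to what it rests on
# ("table" or another block). Only clear blocks can be moved, one at a time.

def clear(state, b):
    """A block is clear if nothing rests on it."""
    return all(on != b for on in state.values())

def move(state, b, dest):
    """pickup + putdown in one step: move clear block b onto dest."""
    assert clear(state, b), f"{b} is not clear"
    assert dest == "table" or clear(state, dest), f"{dest} is not clear"
    new = dict(state)
    new[b] = dest
    return new

# Sussman-anomaly start: C on A; A and B on the table. Goal: A on B on C.
state = {"A": "table", "B": "table", "C": "A"}
plan = [("C", "table"), ("B", "C"), ("A", "B")]   # interleaves both subgoals
for b, dest in plan:
    state = move(state, b, dest)

print(state)  # {'A': 'B', 'B': 'C', 'C': 'table'}
```

The plan interleaves work on the two subgoals "A on B" and "B on C", which is exactly what the noninterleaved planners described above could not do.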

CONCLUSION:
The planning process of any organization is essential to the overall success of the
company. All levels of planning must do their part to incorporate situational analysis,
alternative goals and plans, goals and plan evaluation, goal and plan selection,
implementation, and monitor control into their organization to achieve the desired
results. In addition to these steps, managers must also consider the various impacts such
as legal issues, ethics, and corporate social responsibility as well as internal and external
factors when devising plans for their organization. As proven by the ongoing success of
the Boeing Company, these steps and considerations in the hands of the right managers
ensure a promising future.

SIGN AND REMARK:


DATE:

R1 R2 R3 R4 Total Signature
(3 Marks) (4 Marks) (4 Marks) (4 Marks) (15 Marks)
EXPERIMENT NO. 11

Date of Performance :

Date of Submission :

AIM: Design prototype for Expert System.


THEORY:

Designing a prototype for an expert system involves several steps and considerations. An expert

system is a computer program that uses knowledge and inference techniques to solve specific

problems or provide expert-level advice in a particular domain. Here's a high-level outline of

how you can design a prototype for an expert system:

Define the Problem Domain:


● Clearly define the problem or domain your expert system will address. This could
be anything from medical diagnosis to technical support.
Identify Expertise:
● Determine the source of expertise for your system. This could involve consulting
domain experts or gathering data from books, research papers, or databases.
Knowledge Acquisition:
● Gather and organize knowledge from experts or existing sources. This knowledge
can be represented as rules, facts, and heuristics. Common techniques for
knowledge representation include:
● Rule-based systems: If-Then rules
● Knowledge graphs: Representing knowledge as nodes and edges
● Frames: Structured representation of objects and their properties
● Ontologies: Formal representation of knowledge with relationships and
semantics
Knowledge Base:
● Create a knowledge base to store the acquired knowledge. This is where all the
rules, facts, and data will be stored for the expert system to use during inference.
Inference Engine:
● Develop the core inference engine responsible for making decisions and providing
recommendations based on the knowledge in the knowledge base. Common
inference techniques include:
● Forward chaining: Start with available facts and derive conclusions.
● Backward chaining: Start with a goal and work backward to find the
necessary facts.
● Bayesian networks: Probability-based reasoning.
● Fuzzy logic: Dealing with uncertainty.
User Interface:
● Design a user-friendly interface for interacting with the expert system. Depending
on your target audience, this could be a command-line interface, a web
application, or a mobile app.
Testing and Validation:
● Test the expert system with various scenarios and use cases to ensure it provides
accurate and reliable recommendations or solutions.
Knowledge Refinement:
● Continuously update and refine the knowledge base as new information becomes
available or the domain evolves.
Deployment:
● Deploy the expert system in a real-world setting where it can be used by
individuals or organizations to solve problems or provide expert advice.
Monitoring and Maintenance:
● Implement monitoring tools to keep track of system performance and user
feedback. Regularly update and maintain the system to ensure its accuracy and
relevance.
User Training:
● Provide training and documentation to users on how to effectively use the expert
system.
Scaling and Optimization:
● As the system gains more users and data, consider scalability and performance
optimization.

Remember that developing an expert system can be a complex task, and the choice of

programming languages and technologies may vary depending on your specific requirements.

Additionally, you may need to consider ethical and legal implications, especially if the system is

used in critical domains like healthcare or finance.

Practical implementation of an expert system prototype involves several steps and

considerations. Below, I'll outline a simplified example of how you might practically implement

an expert system prototype for a medical diagnosis domain using Python. Keep in mind that real-

world expert systems can be much more complex and may require a dedicated team and

resources.
Define the Problem Domain:

● In this example, let's create an expert system prototype for diagnosing common
illnesses based on symptoms.
Identify Expertise:
● Consult medical experts or gather medical literature to identify symptoms,
diseases, and diagnostic rules.
Knowledge Acquisition:
● Create a knowledge base with symptom-disease associations. For instance, if
"fever" and "cough" are symptoms, they may suggest "common cold" as a
possible disease.
Knowledge Base:
● Store the knowledge in a structured format, such as a Python dictionary or
database.

knowledge_base = {
"fever": ["common cold", "flu"],
"cough": ["common cold", "bronchitis"],
# Add more symptom-disease associations
}
Inference Engine:
● Develop an inference engine to make diagnoses based on user-entered symptoms.
You can use a simple rule-based approach for this prototype.

User Interface:
● Create a basic command-line interface for users to input their symptoms and
receive a diagnosis.
Testing and Validation:
● Test the system with various symptom combinations to ensure it provides
accurate diagnoses based on the knowledge base.
Knowledge Refinement:
● Continuously update and refine the knowledge base with new symptom-disease
associations or medical guidelines.
Deployment:
● Deploy the prototype locally for testing or share it within a controlled
environment.
Monitoring and Maintenance:
● Monitor user interactions and feedback to identify any issues or improvements
needed.
User Training:
● Provide instructions on how to use the system effectively.
Scaling and Optimization:
● If you plan to expand the system or make it more efficient, consider optimizing
the code and potentially migrating to a more robust platform or framework.
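Putting the Knowledge Base, Inference Engine, and User Interface steps together, a minimal sketch might look like this (the symptom-disease associations are illustrative only, not medical advice; the scoring rule is an assumption):

```python
# Minimal rule-based diagnosis sketch: count how many of the user's symptoms
# each disease explains and rank candidate diseases by that count.
knowledge_base = {
    "fever": ["common cold", "flu"],
    "cough": ["common cold", "bronchitis"],
    "headache": ["flu", "migraine"],
}

def diagnose(symptoms):
    """Return candidate diseases sorted by number of matching symptoms."""
    scores = {}
    for s in symptoms:
        for disease in knowledge_base.get(s, []):
            scores[disease] = scores.get(disease, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(diagnose(["fever", "cough"]))  # ['common cold', 'flu', 'bronchitis']
```

A simple user interface would wrap diagnose in a loop that reads comma-separated symptoms from the command line; refining the knowledge base then only means editing the dictionary, not the engine.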

Real-world expert systems in domains like medicine would require a much more extensive and

accurate knowledge base, as well as compliance with medical regulations. Additionally, you may

want to implement more advanced inference techniques and deploy the system in a secure and
accessible manner.

EXPERIMENT NO. 12

Date of Performance :

Date of Submission :

Aim : Case study of any existing successful AI system


Case Study: Google's AlphaGo

Background: Go is a complex and ancient board game with an enormous number of possible

moves, making it significantly more challenging for computers to play compared to chess. For

many years, Go was considered one of the last frontiers of human strategic gameplay that AI

couldn't master.

Problem Statement: Google's DeepMind set out to develop an AI system capable of playing Go

at a world-class level and defeating the world champion.

AI Solution:

Deep Reinforcement Learning: AlphaGo employed a combination of deep neural networks


and reinforcement learning. It used neural networks to evaluate board positions and
predict the best moves based on past experience.
Monte Carlo Tree Search (MCTS): AlphaGo used MCTS, a search algorithm that simulates
millions of possible future moves and outcomes, to select the best moves during
gameplay.
Training Data: The AI system was trained on a massive dataset of expert-level Go games to
learn patterns, strategies, and tactics. It also engaged in self-play to improve its gameplay
continuously.
Learning from Human Experts: AlphaGo played thousands of games against top human Go
players to learn and adapt its strategies.

Results:

● AlphaGo vs. Lee Sedol: In 2016, AlphaGo faced the world champion Go player, Lee
Sedol, in a five-game match. AlphaGo won four out of five games, demonstrating its
superiority in strategic gameplay.
● Advancements in AI: The success of AlphaGo marked a significant milestone in AI
research, showcasing the potential of deep learning, reinforcement learning, and neural
networks in solving complex, real-world problems.
● Impacts Beyond Games: The techniques developed for AlphaGo have applications in
fields like healthcare, finance, and logistics, where AI can be used for complex decision-
making and optimization.
● AlphaGo Zero: DeepMind later developed AlphaGo Zero, a version of the AI system that
learned entirely from self-play without any human data. It achieved an even higher level
of performance, further highlighting the capabilities of AI in mastering complex tasks.

Google's AlphaGo serves as an exemplary case study of how AI, particularly deep reinforcement

learning and neural networks, can excel in tasks previously considered too challenging for

machines. It not only revolutionized the field of AI but also demonstrated the potential for AI to

tackle complex problems across various domains.


EXPERIMENT NO. 13

Date of Performance :

Date of Submission :

CONTENT BEYOND SYLLABUS


AIM: Explore and analyze the application of Prolog for solving classic problems like the
monkey banana problem, and optimize solutions through the effective use of data structures.
13.1 Monkey Banana problem

Moves
1. Grasp banana
2. Climb box
3. Push box
4. Walk around
Program :
Making use of various states, make the monkey reach the banana by defining the states and the moves between them.
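Alongside the Prolog program, the same state-space search can be sketched in Python; a state is (monkey position, level, box position, has-banana), and breadth-first search over the four moves finds a plan (the position names and the banana hanging at "middle" are illustrative assumptions):

```python
from collections import deque

# State: (monkey_pos, level, box_pos, has_banana); the banana hangs at "middle".
START = ("door", "floor", "window", False)

def moves(state):
    pos, level, box, has = state
    if has:
        return
    if level == "floor":
        for p in ("door", "window", "middle"):        # walk around
            if p != pos:
                yield f"walk to {p}", (p, "floor", box, has)
        if pos == box:
            for p in ("door", "window", "middle"):    # push box
                if p != box:
                    yield f"push box to {p}", (p, "floor", p, has)
            yield "climb box", (pos, "on_box", box, has)
    elif level == "on_box" and box == "middle":
        yield "grasp banana", (pos, level, box, True)

def plan(start):
    """Breadth-first search for the shortest action sequence to the banana."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, actions = queue.popleft()
        if state[3]:                                  # has_banana
            return actions
        for name, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [name]))

print(plan(START))
# ['walk to window', 'push box to middle', 'climb box', 'grasp banana']
```

The four moves listed above (walk around, push box, climb box, grasp banana) appear as the branches of the search, and BFS returns the shortest plan.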

OUTPUT:

13.2 Data Structures

13.2.1 List and Members

OUTPUT:
13.2.2 Segmentation

OUTPUT:
CONCLUSION: Successfully implemented and learned that Prolog is useful for solving familiar
problems like the monkey banana puzzle and data structures. Using the right data structures made
the solution even better, revealing how smart planning and problem-solving methods matter for
dealing with tough situations.

SIGN AND REMARK:


Date:

EXPERIMENT NO. 14

Date of Performance :

Date of Submission :
CONTENT BEYOND SYLLABUS

AIM: Explore and analyze the application of Prolog for solving classic problems like flight
planning.
14. Flight Problem

Program: Implement a flight route planner.
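Alongside the Prolog program, the route-finding idea can be sketched in Python (the flight network below is invented for illustration):

```python
from collections import deque

# Hypothetical direct flights: city -> directly reachable cities.
flights = {
    "mumbai": ["delhi", "goa"],
    "delhi": ["london", "dubai"],
    "goa": ["dubai"],
    "dubai": ["london", "new_york"],
    "london": ["new_york"],
}

def route(src, dst):
    """BFS over direct flights; returns the itinerary with the fewest hops."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in flights.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None            # no route exists

print(route("mumbai", "new_york"))  # ['mumbai', 'delhi', 'london', 'new_york']
```

Where a Prolog version would backtrack through route/2 clauses depth-first, the breadth-first search here guarantees the itinerary with the fewest connecting flights.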


OUTPUT:

CONCLUSION: Successfully implemented and learned that Prolog is useful for solving familiar
problems like finding optimal flight routes. It also reveals how smart planning and
problem-solving methods matter for dealing with tough situations.

SIGN AND REMARK:


DATE:
