
International Journal of Innovative Technology and Exploring Engineering (IJITEE)

ISSN: 2278-3075 (Online), Volume-9 Issue-9, July 2020

Artificial Intelligence based Pacman Game


Himakar Sai Chowdary Maddipati, Aravind Kundurthi, Pothula Mahima Raaj, Kapudasi
Srilatha, Ravikishan Surapaneni

Abstract: This paper is about implementing the Pac-Man game with AI. Pac-Man is a very challenging video game that is useful for conducting AI (Artificial Intelligence) research. The reason we have implemented various AI algorithms for the pacman game is that it helps us study AI through visualizations, by which we can understand AI more effectively. The main aim is to build an intelligent pacman agent which finds optimal paths through the maze to reach a particular goal, such as a particular food position, while escaping from ghosts. For that, we have implemented AI search algorithms like Depth First Search, Breadth First Search, A* Search and Uniform Cost Search. We have also implemented multi-agents like the Reflex agent, Minimax agent and Alpha-beta agent. Through these multiagent algorithms, we can make pacman react to its environment and escape from the ghosts to get a high score. We have also visualized the above AI algorithms so that anyone can learn and understand them easily. For visualisation of the algorithms, we have used the Python libraries Matplotlib and NetworkX.

Keywords: Artificial Intelligence, search algorithms, multi agents, pacman.

I. INTRODUCTION

Pac-Man is one of the most popular arcade games in the world. The player controls Pac-Man, who must eat all the dots inside an enclosed maze while avoiding four coloured ghosts. Eating large flashing dots called power pellets causes the ghosts to turn blue, allowing Pac-Man to eat them for bonus points. Here, the main aim is to score points by collecting food from the maze while escaping from the ghosts roaming around it. If a ghost captures pacman, the game is over.

So, first we will design an intelligent pacman agent which finds optimal paths through the maze to reach the goal state, eating all the dots and escaping from the ghosts in the minimum number of steps. To design this intelligent agent, we implemented several search algorithms. The search algorithms are categorized into two types: uninformed search algorithms (here we cover DFS, BFS and UCS) and informed search algorithms (here we implement A* search). The main difference between them is that uninformed search algorithms are given no information about the problem, whereas informed search algorithms are.

Fig. 1. Interface of Pacman game.

We will compare the performance of the above algorithms for pacman to find the optimal path. We have also provided visualisations of these algorithms, which help in understanding their working clearly. For visualisation we have used the Matplotlib and NetworkX libraries of Python. We have also implemented multiagent algorithms like the Reflex agent, Minimax agent and Alpha-beta agent. Using these algorithms, the pacman agent tries to escape from the ghost agents and eat all the food in the maze to win the game.
Revised Manuscript Received on July 30, 2020.
* Correspondence Author
Himakar Sai Chowdary Maddipati*, Department of Computer Science and Engineering, VR Siddhartha Engineering College, Kanuru, India. Email: maddipati.himakar@gmail.com
Aravind Kundurthi, Department of Computer Science and Engineering, VR Siddhartha Engineering College, Kanuru, India. Email: aravindkundurthi005@gmail.com
Pothula Mahima Raaj, Department of Computer Science and Engineering, VR Siddhartha Engineering College, Kanuru, India. Email: mahi.raaj1999@gmail.com
Kapudasi Srilatha, Department of Computer Science and Engineering, VR Siddhartha Engineering College, Kanuru, India. Email: ksrilatha2222@gmail.com
Ravikishan Surapaneni, Department of Computer Science and Engineering, VR Siddhartha Engineering College, Kanuru, India. Email: suraki@vrsiddhartha.ac.in

© The Authors. Published by Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

II. RELATED WORK

Sebastian Thrun implemented artificial intelligence for the game of chess by designing the NeuroChess program, which learns to play chess by itself from the final outcomes of games. NeuroChess learns various board evaluation functions, which are implemented using artificial neural networks. César Villacís, Walter Fuertes, Mónica Santillán, Hernán Aules, Ana Tacuri, Margarita Zambrano and Edgar Salguero proposed a case study of an optimized Tic-Tac-Toe. They implemented AI for the video game Tic-Tac-Toe. The model was implemented using several AI algorithms and a graphical UI including Semiotics, which provided an attractive environment.


Its main purpose is stimulating the cognitive development of children. Veenus Chhabra and Kuldeep Tomar implemented artificial intelligence for the game of ludo, which is played by two, three or four players who race their four tokens from an initial state to an end state according to the roll of a die. They implemented ludo using Q-learning, which is a type of reinforcement learning. Sergey Karakovskiy and Julian Togelius implemented the Mario AI benchmark, a benchmark for reinforcement learning algorithms and game AI techniques. The benchmark is built around the classic game Super Mario Bros and is completely open source. It was used in various competitions and international conferences, and students from various parts of the world implemented many different solutions to beat it.

III. SEARCH ALGORITHMS

Uninformed search is a class of search algorithms which operates in a brute-force way. Uninformed search algorithms have no information about the state or search space other than how to traverse the tree, which is why uninformed search is also called blind search.

A. Depth First Search

Depth-first search always tries to expand the deepest node in the stack of the search tree. It is a search algorithm which uses a stack (LIFO data structure): through the stack, the most recently generated node is chosen for expansion.

Algorithm for DFS:

function DFS(problem) returns a solution or failure
  initialize the stack using the initial state of the problem
  initialize the visited list to null
  loop do
    if the stack is empty then return failure
    choose a leaf node and remove it from the stack
    if the node contains a goal state then return the node as a solution
    add the node to the visited list
    expand the chosen node and add the resulting nodes to the stack,
      but only if they are not in the visited list
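As a concrete illustration, the pseudocode above can be written as the following minimal Python sketch. It assumes a search-problem object exposing getStartState(), isGoalState(state) and getSuccessors(state) (the latter returning (successor, action, stepCost) triples), in the style of the well-known pacman framework; treat the exact method names as assumptions rather than the paper's actual code.

def depth_first_search(problem):
    # Assumed interface: getStartState, isGoalState, getSuccessors.
    # Each stack entry holds a state and the action sequence that reached it.
    stack = [(problem.getStartState(), [])]
    visited = set()
    while stack:
        state, actions = stack.pop()   # LIFO: most recently generated node first
        if problem.isGoalState(state):
            return actions             # the moves pacman should execute
        if state in visited:
            continue
        visited.add(state)
        for successor, action, _ in problem.getSuccessors(state):
            if successor not in visited:
                stack.append((successor, actions + [action]))
    return None                        # failure: no goal reachable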
The results of running the depth first search algorithm are shown in Table I.

TABLE I: RESULTS OF DFS

Maze          Cost   Nodes Expanded   Score
tinyMaze       10          14          500
mediumMaze    130         144          380
bigMaze       210         390          300

But DFS does not give us the best solution, as the solutions shown above are not least-cost solutions. The visualization of the DFS algorithm is shown in Fig. 2. The Python libraries NetworkX and Matplotlib are used to create graphs of the nodes expanded during the search process; a visualization of the stack is also shown.

Fig. 2. Visualization of DFS.

B. Breadth First Search

In the breadth-first search algorithm, the root node is expanded first, then all the children of the root node, then their successors, and so on. All nodes are expanded level by level, which means that every node of a particular level is expanded before moving to the next level. BFS uses a queue (FIFO data structure) for the frontier.
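Replacing the stack with a FIFO queue turns the same skeleton into BFS. The sketch below uses the same assumed problem interface as the DFS example above.

from collections import deque

def breadth_first_search(problem):
    # FIFO frontier: nodes are expanded level by level.
    start = problem.getStartState()
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()   # oldest node first
        if problem.isGoalState(state):
            return actions
        for successor, action, _ in problem.getSuccessors(state):
            if successor not in visited:      # mark on insertion to avoid duplicates
                visited.add(successor)
                frontier.append((successor, actions + [action]))
    return None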
Breadth first search returns a least-cost solution in terms of the effort taken by pacman to reach the food dot; the results are shown in Table II. But here the number of expanded nodes is relatively large (268 and 618 for mediumMaze and bigMaze respectively), which consumes a large amount of time in finding the best solution.

TABLE II: RESULTS OF BFS

Maze          Cost   Nodes Expanded   Score
tinyMaze        8          68          502
mediumMaze     68         268          442
bigMaze       210         618          300

The visualization of the BFS algorithm is shown in Fig. 3. The Python libraries NetworkX and Matplotlib are used to create graphs of the nodes expanded during the search process; a visualization of the queue is also shown.

Fig. 3. Visualization of BFS.

C. Uniform Cost Search

The main property of the uniform cost search algorithm is that, instead of expanding the deepest node, it expands the node having the least path cost. This is done using a priority queue data structure, in which the items are ordered by cost.


The priority queue stores items in the form (c1, c2, i), where c1 is the cost associated with the node, c2 is a count specifying the number of items inserted into the priority queue, and i is the item denoting the state and the actions associated with the node. Nodes are arranged in ascending order of their costs, and in each iteration the node with the minimum cost is removed from the queue and its successor nodes are expanded. The cost of each node is calculated from the distance between the current state and the initial state.
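A minimal Python sketch of this (c1, c2, i) priority-queue scheme is given below, using the standard heapq module and an insertion counter as the c2 tie-breaker. The problem interface is the same assumed one as in the earlier sketches.

import heapq
from itertools import count

def uniform_cost_search(problem):
    # Entries are (c1, c2, i): path cost, insertion count (tie-breaker),
    # and the item carrying the state together with its action sequence.
    counter = count()
    start = problem.getStartState()
    frontier = [(0, next(counter), (start, []))]
    best_cost = {start: 0}
    while frontier:
        cost, _, (state, actions) = heapq.heappop(frontier)  # minimum-cost node
        if problem.isGoalState(state):
            return actions
        for successor, action, step_cost in problem.getSuccessors(state):
            new_cost = cost + step_cost
            if successor not in best_cost or new_cost < best_cost[successor]:
                best_cost[successor] = new_cost
                heapq.heappush(frontier,
                               (new_cost, next(counter),
                                (successor, actions + [action])))
    return None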
We have run the UCS algorithm for tinyMaze, mediumMaze and bigMaze. The results are shown in Table III.

TABLE III: RESULTS OF UCS

Maze          Cost   Nodes Expanded   Score
tinyMaze        8          15          502
mediumMaze     68         269          442
bigMaze       210         620          300

From the results, we can observe that, compared with DFS and BFS, UCS takes more time to expand the nodes and find the goal state. Compared with BFS, the number of nodes expanded is slightly higher and the cost and score remain the same, but the time taken by UCS is greater.

The visualization of the UCS algorithm is also implemented using the NetworkX and Matplotlib Python libraries, and is shown in Fig. 4.

Fig. 4. Visualisation of Uniform Cost Search.

D. A* Search

A* search is a type of informed search algorithm, which knows information about the problem beforehand. It evaluates nodes by adding x(n), the cost to reach the node, and y(n), the cost to get from the node to the goal: f(n) = x(n) + y(n). As x(n) is the path cost from the start node to node n, and y(n) is the cost of the optimal path from n to the goal, f(n) is the total cost of the cheapest solution through n. So, to find the cheapest solution, we should consider the node with the lowest value of f(n).

The only difference between A* search and UCS is that UCS considers the cost of reaching the node, whereas A* search considers the sum of the cost of reaching the node and the cost of reaching the goal from that node (x + y) instead of x alone. Here, the heuristic function used is the Manhattan distance heuristic, which estimates which node or state is closer to the goal state.
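For illustration, here is a minimal A* sketch with a Manhattan-distance heuristic. Passing the goal position explicitly is an assumption made for this sketch (the framework typically hands the heuristic the problem object instead), and states are assumed to be (x, y) grid positions.

import heapq
from itertools import count

def manhattan_heuristic(position, goal):
    # Manhattan distance between two (x, y) grid positions.
    return abs(position[0] - goal[0]) + abs(position[1] - goal[1])

def a_star_search(problem, goal, heuristic=manhattan_heuristic):
    counter = count()
    start = problem.getStartState()
    # Frontier ordered by f(n) = x(n) + y(n): path cost plus heuristic estimate.
    frontier = [(heuristic(start, goal), next(counter), (start, [], 0))]
    best_cost = {start: 0}
    while frontier:
        _, _, (state, actions, cost) = heapq.heappop(frontier)
        if problem.isGoalState(state):
            return actions
        for successor, action, step_cost in problem.getSuccessors(state):
            new_cost = cost + step_cost
            if successor not in best_cost or new_cost < best_cost[successor]:
                best_cost[successor] = new_cost
                priority = new_cost + heuristic(successor, goal)
                heapq.heappush(frontier,
                               (priority, next(counter),
                                (successor, actions + [action], new_cost)))
    return None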
We have also compared the performance of the A* search and UCS algorithms on tinyMaze, mediumMaze and bigMaze; the comparison is shown in Table IV.

TABLE IV: UCS AND A* SEARCH COMPARISON

Algorithm   Maze         Nodes Expanded   Cost   Time
UCS         bigMaze           620          210   7.3s
A*          bigMaze           549          210   0.6s
UCS         mediumMaze        269           68   1.6s
A*          mediumMaze        221           68   0.1s

s = seconds.

Out of these, A* search is the most efficient in terms of nodes expanded and also cost in finding the goal state. For implementing A* search in Python, we have used the same priority queue data structure as in the UCS algorithm; the main difference is that in A* search we have used the Manhattan heuristic. The visualization of the A* search algorithm is also implemented using the NetworkX and Matplotlib Python libraries, and is shown in Fig. 5.

Fig. 5. Visualisation of A* Search.

IV. MULTIAGENTS

A. Reflex Agent

We first create a reflex agent which chooses, at its turn, a random action from the legal ones. Note that this is different from a random search agent, since a reflex agent does not build a sequence of actions but chooses one action and executes it. The random reflex agent appears in the listing below:

import random

class RandomAgent(Agent):   # Agent is the framework's base class
    # This gets the action to be performed using this agent.
    def getAction(self, gameState):
        # Possible directions in which pacman can move.
        legalMoves = gameState.getLegalActions()
        # Pick randomly among the legal moves.
        chosenIndex = random.choice(range(0, len(legalMoves)))
        return legalMoves[chosenIndex]


This reflex agent chooses its current action based only on its current perception. The reflex agent gets all its legal actions, computes the scores of the states reachable with these actions, and selects the action that results in the state with the maximum score. In case several states share the maximum score, it chooses one of them randomly. The reflex agent algorithm worked successfully for all three mazes (testClassic, openClassic, mediumClassic). The resultant scores for the various mazes are shown in Table V.

TABLE V: RESULTS OF REFLEX AGENT

Maze Form       Ghosts   Pellets count   Score
testClassic        1           8          562
openClassic        1          86         1247
mediumClassic      2          98         1253
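A sketch of such a score-maximizing reflex agent is given below. The successor-scoring calls (generatePacmanSuccessor, getScore) follow the conventions of the pacman framework, but the class as written here is an illustrative assumption, not the paper's exact code.

import random

class ScoreReflexAgent(Agent):   # Agent is the framework's base class
    # Picks the legal action whose one-step successor state has the
    # highest score, breaking ties randomly.
    def getAction(self, gameState):
        legalMoves = gameState.getLegalActions()
        scores = [gameState.generatePacmanSuccessor(move).getScore()
                  for move in legalMoves]
        bestScore = max(scores)
        bestIndices = [i for i, s in enumerate(scores) if s == bestScore]
        return legalMoves[random.choice(bestIndices)]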
B. Minimax Agent

In case the world in which the agent plans ahead includes other agents which plan against it, adversarial search can be used. One agent is called MAX and the other one MIN. UTILITY(s, p) (also called the payoff function or objective function) gives the final numeric value for a game that ends in terminal state s for player p. For example, in chess the values can be +1, 0 or ½. The game tree is a tree where the nodes are game states and the edges are moves. MAX's actions are added first; then, for each resulting state, MIN's actions are added, and so on. A game tree can be seen in Fig. 7.

Fig. 7. Tree diagram for understanding the Minimax algorithm.

Optimal decisions in games must give the best move for MAX in the initial state, then MAX's moves in all the states resulting from each possible response by MIN, and so on. The minimax value ensures the optimal strategy for MAX. For the game tree in Fig. 7, the best move is a1 and the minimax value of the game is 3. The algorithm for computing this value is given in the listings below.
MINIMAX(s) =
  UTILITY(s)                                   if TERMINAL-TEST(s)
  max_{a ∈ ACTIONS(s)} MINIMAX(RESULT(s, a))   if PLAYER(s) = MAX
  min_{a ∈ ACTIONS(s)} MINIMAX(RESULT(s, a))   if PLAYER(s) = MIN

The following functions provide a brief overview of the MINIMAX algorithm:

function MINIMAX-DECISION(state) returns an action
  return the action a in ACTIONS(state) maximizing MIN-VALUE(RESULT(state, a))

function MAX-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v = -INFINITY
  for each a in ACTIONS(state) do
    v = MAX(v, MIN-VALUE(RESULT(state, a)))
  return v

function MIN-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v = +INFINITY
  for each a in ACTIONS(state) do
    v = MIN(v, MAX-VALUE(RESULT(state, a)))
  return v

RESULT(state, a) is the state which results from applying action a in state. The minimax algorithm generates the entire game search space. Imperfect real-time decisions involve the use of a cutoff test based on limiting the depth of the search. When the cutoff test is met, the tree leaves are evaluated using a heuristic evaluation function instead of the utility function.

H-MINIMAX(s, d) =
  EVAL(s)                                                 if CUTOFF-TEST(s, d)
  max_{a ∈ ACTIONS(s)} H-MINIMAX(RESULT(s, a), d + 1)     if PLAYER(s) = MAX
  min_{a ∈ ACTIONS(s)} H-MINIMAX(RESULT(s, a), d + 1)     if PLAYER(s) = MIN

The Minimax algorithm worked successfully for all the mazes (testClassic, openClassic, mediumClassic). The scores for the various mazes are shown in Table VI.

TABLE VI: RESULTS OF MINIMAX AGENT

Maze Form       Ghosts   Pellets count   Score
testClassic        1           8          558
openClassic        1          86         1088
mediumClassic      2          98         1660
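A compact Python rendering of the depth-limited H-MINIMAX recursion is sketched below. It treats pacman (agent 0) as MAX and each ghost as MIN; cutoff_test and evaluate are assumed placeholder helpers for the depth/terminal test and the heuristic evaluation function.

def h_minimax(state, depth, agentIndex=0):
    # Depth-limited minimax: pacman (agent 0) is MAX, every ghost is MIN.
    if cutoff_test(state, depth):              # assumed CUTOFF-TEST helper
        return evaluate(state)                 # assumed heuristic EVAL function
    actions = state.getLegalActions(agentIndex)
    if not actions:                            # no moves: treat as a leaf
        return evaluate(state)
    nextAgent = (agentIndex + 1) % state.getNumAgents()
    nextDepth = depth + 1 if nextAgent == 0 else depth  # one ply per full round
    values = [h_minimax(state.generateSuccessor(agentIndex, a),
                        nextDepth, nextAgent)
              for a in actions]
    return max(values) if agentIndex == 0 else min(values)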
C. Alpha-beta pruning

In order to limit the number of game states explored in the game tree, alpha-beta pruning can be applied, where

α = the value of the best (highest-value) choice found so far at any choice point along the path for MAX
β = the value of the best (lowest-value) choice found so far at any choice point along the path for MIN

The following functions provide a brief overview of the alpha-beta pruning procedure:

function ALPHA-BETA-SEARCH(state) returns an action
  v = MAX-VALUE(state, -INF, +INF)
  return the action in ACTIONS(state) with value v

function MAX-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v = -INF
  for each a in ACTIONS(state) do
    v = MAX(v, MIN-VALUE(RESULT(state, a), α, β))
    if v >= β then return v
    α = MAX(α, v)
  return v

function MIN-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v = +INF
  for each a in ACTIONS(state) do
    v = MIN(v, MAX-VALUE(RESULT(state, a), α, β))
    if v <= α then return v
    β = MIN(β, v)
  return v

The alpha-beta pruning algorithm worked successfully for all three mazes (testClassic, openClassic, mediumClassic).
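The pruning conditions translate directly into Python. The sketch below mirrors the pseudocode for a single MAX player against one MIN opponent (a single-ghost simplification assumed for brevity); cutoff_test and evaluate are the same assumed helpers as in the minimax sketch.

def max_value(state, alpha, beta, depth):
    if cutoff_test(state, depth):
        return evaluate(state)
    v = float('-inf')
    for a in state.getLegalActions(0):         # pacman = MAX
        v = max(v, min_value(state.generateSuccessor(0, a),
                             alpha, beta, depth))
        if v >= beta:
            return v                           # prune: MIN already has a better option
        alpha = max(alpha, v)
    return v

def min_value(state, alpha, beta, depth):
    if cutoff_test(state, depth):
        return evaluate(state)
    v = float('inf')
    for a in state.getLegalActions(1):         # the (single) ghost = MIN
        v = min(v, max_value(state.generateSuccessor(1, a),
                             alpha, beta, depth + 1))
        if v <= alpha:
            return v                           # prune: MAX already has a better option
        beta = min(beta, v)
    return v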


The statistics for the various mazes are shown in Table VII.

TABLE VII: RESULTS OF ALPHA-BETA PRUNING

Maze Form       Ghosts   Pellets count   Score
testClassic        1           8          548
openClassic        1          86          907
mediumClassic      2          98         1470

V. CONCLUSION

We have successfully implemented search algorithms like Depth First Search, Breadth First Search, Uniform Cost Search and A* Search, and also implemented multiagents like the Reflex agent, Minimax agent and Alpha-beta agent. We have compared the performance of the above search algorithms: of DFS and BFS, BFS is better in terms of performance, and of Uniform Cost Search and A* Search, A* is better in terms of performance. We have also compared the performance of the multiagent algorithms and conclude that the Reflex agent performs best among the multiagents. Finally, we have also successfully implemented visualizations for all the search algorithms.


AUTHORS PROFILE

Himakar Sai Chowdary Maddipati is pursuing a 4/4 B.Tech in Computer Science and Engineering at VR Siddhartha Engineering College, Kanuru, India. His areas of interest include Artificial Intelligence, Machine Learning and Virtual Reality.

Aravind Kundurthi is pursuing a 4/4 B.Tech in Computer Science and Engineering at VR Siddhartha Engineering College, Kanuru, India. His areas of interest include Python, C and Machine Learning.

Pothula Mahima Raaj is pursuing a 4/4 B.Tech in Computer Science and Engineering at VR Siddhartha Engineering College, Kanuru, India. Her areas of interest include Business Analytics and Cyber Security.

Kapudasi Srilatha is pursuing a 4/4 B.Tech in Computer Science and Engineering at VR Siddhartha Engineering College, Kanuru, India. Her areas of interest include Java programming, Python and Web Technologies.

Ravikishan Surapaneni has been working as an Associate Professor in the CSE department of VR Siddhartha Engineering College for 20 years. His areas of interest include data analytics, and he has published more than 20 papers in various reputed journals.
