Parallel and Distributed AI
Chapter 16: Rich & Knight
Dr. Suthikshn Kumar
Parallel and Distributed computing
• Significant advances in parallel and distributed
computing
• What are the implications of these advances for
AI?
• Three areas:
– Psychological modeling
– Improving efficiency
– Helping to organize systems in modular fashion
• Parallel computers might be used to significantly
increase the speed at which these systems run.
Psychological Modeling
• The production system was originally proposed as a model of
human information processing.
• The ideas of short term and long-term memory, independently
operating productions, matching, and so forth arose out of
psychological literature.
• In the human brain, individual neurons are quite slow compared
to digital computer circuits, but there are vast numbers of these richly
connected components, and they operate concurrently.
• Some models stress the parallel aspect, in which all productions
match and fire simultaneously
• SOAR is a production system with a dual mission:
– An architecture for building AI systems
– A model of human intelligence
• SOAR incorporates both sequential and parallel aspects of
production systems.
Parallelism in Reasoning systems
• AI programs consume significant space and time
resources
• It is therefore important that AI algorithms make use of
advances in parallel computation.
• There are several sources of parallelism for speedup in
production systems:
– Production level parallelism, in which all the productions match
themselves against working memory in parallel
– Condition level parallelism, in which all of the conditions of a single
production are matched in parallel
– Action level parallelism, in which all of the actions of a single
production are executed in parallel
– Task level parallelism, in which several cycles of the production
system are executed simultaneously.
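The first of these sources can be sketched in Python. This is a toy illustration, assuming invented productions and a set-valued working memory: every production is matched against working memory concurrently, yielding the conflict set.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical productions: each has conditions (predicates over working
# memory) and an action that would add new working-memory elements.
productions = [
    {"name": "p1",
     "conditions": [lambda wm: "goal" in wm, lambda wm: "a" in wm],
     "action": lambda wm: {"b"}},
    {"name": "p2",
     "conditions": [lambda wm: "b" in wm],
     "action": lambda wm: {"c"}},
]

def matches(production, wm):
    # Condition-level parallelism could also evaluate these concurrently;
    # here each production's conditions are checked sequentially.
    return all(cond(wm) for cond in production["conditions"])

def parallel_match(wm):
    # Production-level parallelism: all productions match against the
    # working memory at the same time.
    with ThreadPoolExecutor() as pool:
        flags = pool.map(lambda p: matches(p, wm), productions)
        return [p for p, ok in zip(productions, flags) if ok]

working_memory = {"goal", "a"}
conflict_set = parallel_match(working_memory)
print([p["name"] for p in conflict_set])  # only p1 matches initially
```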
Parallelizing AI Architectures
• The amount of task level parallelism available is
completely dependent on the nature of the task.
• In medical diagnosis, each production firing
might depend on the previous production
firing, forcing a long, sequential chain of
reasoning to occur.
• If the system is diagnosing five patients
simultaneously, productions involving different
patients would not interact with one another and
could be executed in parallel.
Parallelizing AI programming
languages
• Writing parallel programs is a difficult task.
• Parallel implementations of AI programming
languages will make effective speedups more practical.
• Parallel LISP: Multilisp, QLISP
• Parallel Prolog: Concurrent Prolog, PARLOG
• PROLOG has two types of parallelism:
– OR-parallelism: multiple paths to the same goal are
taken in parallel
– AND-parallelism: the portions of a conjunctive goal are
pursued in parallel.
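Both forms can be sketched in Python as a propositional toy (not real Prolog: goals are plain strings, the knowledge base is invented, and there is no unification). Alternative clauses for a goal are tried concurrently (OR-parallelism); each clause succeeds only if all its subgoals do, which is the part AND-parallelism would additionally run concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical knowledge base: each goal maps to alternative "clauses"
# (OR), and each clause is a conjunction of subgoals (AND).
kb = {
    "ancestor(x,z)": [["parent(x,z)"],                    # clause 1
                      ["parent(x,y)", "ancestor(y,z)"]],  # clause 2
    "parent(x,z)": [[]],   # an empty body: treated as a known fact
    "parent(x,y)": [[]],
    "ancestor(y,z)": [["parent(y,z)"]],
    "parent(y,z)": [[]],
}

def solve(goal):
    clauses = kb.get(goal)
    if clauses is None:
        return False
    with ThreadPoolExecutor() as pool:
        # OR-parallelism: try every alternative clause concurrently.
        # AND-parallelism would also run each clause's subgoals
        # concurrently; here a clause's subgoals run sequentially.
        results = pool.map(lambda body: all(solve(g) for g in body), clauses)
    return any(results)

print(solve("ancestor(x,z)"))  # True: both alternative clauses succeed
```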
Parallelizing AI algorithms
• Examples:
– Ten authors can write a book much faster than one author
– Ten women cannot bear a child any faster than one can!
– Likewise, throwing more processors at an AI problem may not bring
the desired benefits.
• Many problems can be solved efficiently by parallel methods.
• It is not always easy to convert a sequential algorithm into an
efficient parallel one.
• Some AI algorithms whose parallel aspects have been studied are:
– Best-first search
– Alpha-beta pruning
– Constraint satisfaction
– NLP
– Theorem proving
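As one illustration of the first item, best-first search can expand its few best frontier nodes concurrently. This is a toy sketch under invented assumptions (the graph, heuristic values, and batch size k are made up), not a tuned parallel search.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

# Toy graph and hypothetical heuristic values; in a real system,
# successor generation and node evaluation are the expensive,
# parallelizable steps.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 3, "A": 1, "B": 2, "G": 0}

def parallel_best_first(start, goal, k=2):
    frontier = [(h[start], start)]
    visited = set()
    with ThreadPoolExecutor() as pool:
        while frontier:
            # Pop up to k best nodes and expand them in parallel.
            batch = [heapq.heappop(frontier)
                     for _ in range(min(k, len(frontier)))]
            if any(node == goal for _, node in batch):
                return True
            expansions = pool.map(lambda item: graph[item[1]], batch)
            for (_, node), succs in zip(batch, expansions):
                if node in visited:
                    continue
                visited.add(node)
                for s in succs:
                    heapq.heappush(frontier, (h[s], s))
    return False

print(parallel_best_first("S", "G"))  # True: G is reachable from S
```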
Custom parallel hardware
• One approach is to code an algorithm in a programming
language supported by a general-purpose parallel
computer.
• The other approach is to build custom parallel hardware that
directly implements a single algorithm.
• This has led to striking performance increases, as
demonstrated in the SPHINX speech recognition system,
where real-time performance was achieved through the
use of a beam search accelerator.
• The Deep Thought chess machine uses a parallel tree-
search algorithm for searching game trees.
Distributed Reasoning Systems
• We define a distributed reasoning system to be
one that is composed of a set of separate
modules (often called agents) and a set of
communication paths between them.
• This definition admits systems everywhere along a
spectrum that ranges from tightly coupled systems,
in which there is a completely centralized control
mechanism and a shared knowledge base, to
ones in which both control and knowledge are
fully distributed.
• Such systems can exhibit varying levels of granularity.
Advantages of Distributed
Reasoning Systems
1. System modularity: It is easier to build and maintain a collection of
quasi-independent modules than one huge one
2. Efficiency: Not all knowledge is needed for all tasks. By modularizing it,
we gain the ability to focus the problem-solving system’s efforts in ways
that are most likely to pay off.
3. Fast computer architectures : As problem solvers get more complex,
they need more and more cycles. Although machines continue to get
faster, the real speedups are beginning to come not from a single
processor with a huge associated memory, but from clusters of smaller
processors, each with its own memory. Distributed systems are better
able to exploit such architectures.
4. Heterogeneous Reasoning: The problem-solving techniques and
knowledge representation formalisms that are best for working on one
part of a problem may not be the best for working on another part.
5. Multiple perspectives: Different people, with different perspectives, can build different parts of the system
6. Distributed problems: Some problems are inherently distributed.
7. Reliability
Architecture for Distributed
Reasoning must provide:
1. A mechanism for ensuring that the activities of
the various agents in the system are
coordinated so that the overall problem-solving
system achieves its goals.
2. A communication structure that enables
information to be passed back and forth
among agents.
3. Distributed versions of the necessary
reasoning techniques.
Co-ordination and Co-operation
• One agent is in charge: The master agent makes a plan
and distributes pieces of the plan to other slave agents,
who then do as they are told and report back their results.
• One agent is in charge and that agent decomposes the
problem into subproblems, but then negotiation occurs to
decide which agents will take responsibility for which
subtasks.
• No one agent is in charge, although there is a single
shared goal among all the agents. They must co-operate
both in forming a plan and executing it.
• No one agent is in charge, and there is no guarantee
that a single goal will be shared among all the agents.
They may even compete with each other.
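The first of these modes can be sketched as follows. Agent names and the decomposition are invented; the master plans and assigns, while the slaves simply execute and report back.

```python
from concurrent.futures import ThreadPoolExecutor

def slave(subtask):
    # A slave does as it is told and reports its result.
    return f"done:{subtask}"

def master(goal):
    # The master decomposes the goal into independent subtasks
    # and hands each piece to a slave agent.
    subtasks = [f"{goal}-part{i}" for i in range(3)]
    with ThreadPoolExecutor() as pool:
        reports = list(pool.map(slave, subtasks))
    return reports

print(master("survey"))  # three completed subtask reports
```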
Planning for Multi-agent execution
• A single agent:
1. Decomposes the goal into subgoals and
2. Assigns the subgoals to the various other
agents.
This kind of reasoning is called multi-agent
planning.
Planning and Negotiation: Contract
nets
• A single agent performs the problem decomposition
but then negotiates with the other agents to determine
who will take on which subtasks.
• The contract net mechanism supports this kind of
interaction.
• In a contract net, there are two roles that an agent can
assume:
1. Manager, who decomposes the problem, looks for
contractors to attack pieces of the problem, and
monitors the problem’s execution.
2. Contractor, who executes a subtask, possibly by
actually doing the job and possibly by recursively
becoming a manager and subcontracting subparts of
the job to other contractors.
Contract Nets
• Managers and contractors find each other
through a process of bidding:
1. A manager announces a task.
2. Contractors evaluate the task with respect to
their own abilities and the resource
requirements necessary to accomplish it.
3. Contractors make bids to the manager.
4. The manager chooses a single contractor and
waits for the results.
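The bidding steps above can be sketched in Python. The class names and the lowest-cost award rule are assumptions for illustration; real contract nets use richer task announcements and bid evaluations.

```python
class Contractor:
    def __init__(self, name, cost_per_unit):
        self.name = name
        self.cost_per_unit = cost_per_unit

    def bid(self, task):
        # Evaluate the announced task against our own abilities and
        # resource requirements (here, a simple cost estimate).
        return self.cost_per_unit * task["size"]

    def execute(self, task):
        return f"{self.name} completed {task['name']}"

class Manager:
    def announce(self, task, contractors):
        # 1. Announce the task; 2. collect bids;
        # 3. award the task to the single best (lowest-cost) bidder.
        bids = [(c.bid(task), c) for c in contractors]
        _, winner = min(bids, key=lambda b: b[0])
        return winner.execute(task)

contractors = [Contractor("c1", 5), Contractor("c2", 3)]
task = {"name": "analyze-sample", "size": 10}
print(Manager().announce(task, contractors))  # c2 wins with the lower bid
```

A contractor could, instead of executing the subtask itself, become a manager in turn and subcontract pieces of it, which is the recursive case described above.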
Distributed Control and
Communication
• Distributed Planning: There is no centralized controller.
• Assumptions:
– An agent is rational if it behaves in a manner that is optimal with
respect to its goals.
– Bounded rationality is a property of an agent that behaves in a
manner that is as nearly optimal with respect to its goals as its
resources allow.
• Planning with communication: FA/C (Functionally
Accurate, Cooperative) is a specific technique that
several communicating agents can use for distributed
problem solving.
• Planning without communication.
Planning without Communication:
Game Theory
• If agents cannot communicate, then we can use
standard notions of game theory to describe how
each of them should act.
• The most basic technique is the payoff matrix.
• We assume that there are two agents, P and Q,
and that there are only two actions that each of
them can perform (a and b for P, c and d for Q).
• The matrix then shows the payoff to each agent for
each of the possible joint actions. Each cell below
is written as (payoff to P, payoff to Q):

            Q: c      Q: d
   P: a    (2, 4)    (1, 2)
   P: b    (1, 3)    (5, 1)

• Each agent's goal is to maximize its own payoff.
• For example, P comes out best if it makes move b
and Q makes move d.
• On the other hand, Q comes out best if P makes
move a and Q makes move c.
• Each must make its decision independently.
• P should choose move a rather than move b. Why?
• Since Q's payoff is higher in the c column in both
rows, Q can be predicted to choose c.
• Given that Q will choose c, P sees that it does
better to choose move a (payoff 2) than move b (payoff 1).
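The argument can be checked mechanically. This sketch assumes a 2x2 payoff matrix consistent with the discussion, with each cell holding (P's payoff, Q's payoff); it finds Q's dominant move first, then P's best response to it.

```python
# Hypothetical payoff matrix: payoff[move_P][move_Q] = (P's payoff, Q's payoff).
payoff = {
    "a": {"c": (2, 4), "d": (1, 2)},
    "b": {"c": (1, 3), "d": (5, 1)},
}

def dominant_move_for_Q():
    # c dominates d for Q if Q's payoff in column c is higher in every row.
    if all(payoff[m]["c"][1] > payoff[m]["d"][1] for m in payoff):
        return "c"
    if all(payoff[m]["d"][1] > payoff[m]["c"][1] for m in payoff):
        return "d"
    return None  # neither column dominates

def best_response_for_P(q_move):
    # P maximizes its own payoff given Q's predicted move.
    return max(payoff, key=lambda m: payoff[m][q_move][0])

q = dominant_move_for_Q()           # Q can be predicted to play c
print(q, best_response_for_P(q))    # P then does better with a
```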
Game-playing programs
• Both payoff matrices and tree-search algorithms
can be generalized to more than two players.
• In board games, players usually take turns
making moves, whereas payoff matrices model
the kind of simultaneous decision making
common in the real world.
• Such games are usually zero-sum, meaning that one
player's gain is another player's loss.
• Payoff matrices are sometimes zero-sum, but
need not be.
Communication: Blackboards and
messages
• Communication architectures supporting
distributed reasoning:
– Blackboard systems: in which communication takes
place through a shared knowledge structure called a
blackboard. Modules can post items on the
blackboard, and they can read and act on messages
that are posted by other modules.
– Message passing systems: in which one reasoning
module sends messages to one or more other
modules whose names are explicitly known.
Blackboard systems
• The HEARSAY-II speech understanding system uses
the blackboard approach
• The entire system consists of:
– A set of independent modules, called knowledge
sources, that contain the system's domain-specific
knowledge
– A blackboard, which is the shared data structure
through which the knowledge sources communicate
with each other
– A control system, which determines the order in which
knowledge sources will operate on the entries on the
blackboard.
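A minimal sketch of the blackboard cycle (not HEARSAY-II itself; the entries and knowledge sources are invented): knowledge sources post entries to the shared structure, and a simple control loop runs any source that can still contribute.

```python
# The shared data structure through which knowledge sources communicate.
blackboard = {"signal": "raw-audio"}

def ks_segmenter(bb):
    # Reads the signal entry, posts a syllable hypothesis.
    if "signal" in bb and "syllables" not in bb:
        bb["syllables"] = ["hy", "po"]
        return True
    return False

def ks_word_builder(bb):
    # Reads the syllable entry posted by another module, posts a word.
    if "syllables" in bb and "word" not in bb:
        bb["word"] = "".join(bb["syllables"])
        return True
    return False

knowledge_sources = [ks_word_builder, ks_segmenter]

# Control system: repeatedly run any knowledge source that can act on
# the current blackboard entries, until none makes progress.
progress = True
while progress:
    progress = any(ks(blackboard) for ks in knowledge_sources)

print(blackboard["word"])  # "hypo"
```

Note that `ks_word_builder` never calls `ks_segmenter` directly; the modules interact only through what is posted on the blackboard.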
Message passing systems
• MACE is an example of a message-passing distributed system.
• A MACE system is composed of five kinds of components:
1. Problem-solving agents, which are specialized to a problem domain.
2. System agents, which provide such facilities as command interpretation,
error handling, and tracing
3. Facilities, which are built-in functions that agents can use for such things
as pattern matching and simulation.
4. A description database, which maintains descriptions of the agents
5. Kernels, of which there is one for each processor, which handle such
functions as message routing and I/O transfers.
Although MACE directly supports a message-passing communication protocol,
it can be used to simulate a blackboard system.
A single problem-solving agent, or a collection of them, can be used to
simulate each blackboard knowledge source.
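A minimal message-passing sketch (not MACE itself; agent names are invented): each agent owns a named mailbox, and a sender must know the recipient's name explicitly, in contrast to posting on a shared blackboard.

```python
import queue

# One named mailbox per agent; in MACE, kernels would route these
# messages between processors.
mailboxes = {"solver": queue.Queue(), "logger": queue.Queue()}

def send(recipient, message):
    # The sender must explicitly name the receiving agent.
    mailboxes[recipient].put(message)

def receive(agent):
    return mailboxes[agent].get()

send("solver", {"from": "user", "task": "diagnose"})
msg = receive("solver")
send("logger", {"from": "solver", "result": f"handled {msg['task']}"})
print(receive("logger")["result"])
```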
Distributed Reasoning Algorithms
• JTMS: Justification-based truth maintenance system
• JTMS works by considering an entire knowledge base and labeling
the nodes in the knowledge base so that the labeling is consistent
and well founded.
• In a single agent system, a justification is created as a part of
reasoning process.
• In a distributed truth maintenance system,
justification across agents works as follows:
– Assume A1 solves a problem and reports the result to A2
– Then A1 also reports to A2 a justification that says "Because A1 says
so."
– This justification is treated by A2 essentially like a premise justification.
– But A1 must also remember the justification, and it must remember that
it sent this justification to A2.
– If the justification ever becomes invalid in A1, then A1 must send a
message to A2 saying that A1 no longer says so.
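The protocol above can be sketched as follows (the Agent class and the fact strings are invented): A1 records every justification it exports, so it can later send the "no longer says so" retraction.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}    # fact -> justification
        self.exported = []   # (recipient, fact) pairs this agent must remember

    def tell(self, other, fact):
        # The receiver treats this essentially like a premise justification.
        other.beliefs[fact] = f"because {self.name} says so"
        # The sender must remember that it sent this justification.
        self.exported.append((other, fact))

    def retract(self, fact):
        # If the justification becomes invalid here, notify every agent
        # we sent it to: this agent "no longer says so".
        for other, f in self.exported:
            if f == fact:
                other.beliefs.pop(f, None)

a1, a2 = Agent("A1"), Agent("A2")
a1.tell(a2, "patient-has-flu")
print(a2.beliefs["patient-has-flu"])    # "because A1 says so"
a1.retract("patient-has-flu")
print("patient-has-flu" in a2.beliefs)  # False: A2 drops the belief
```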
Summary
• We have discussed parallel and distributed aspects of
AI.
• We examined psychological factors as well as efficiency
concerns.
• The book “The Society of Mind” (Minsky) explores
the notion that single minds are also distributed
systems, composed of collections of heterogeneous
agents that simultaneously cooperate and compete.
• Another source of analogies is the structure of human
organizations, such as societies and corporations. A
team, a corporation, or a government is a distributed,
goal-oriented system.