
UNIT-4

SOFTWARE AGENT
• Architecture for Intelligent Agent-
What is Intelligent Agent
1. An intelligent agent has some level of autonomy that allows it to perform specific,
predictable, and repetitive tasks for users or applications.

2. It is termed 'intelligent' because of its ability to learn while performing its tasks.

3. The two main functions of intelligent agents are perception and action. Perception is done through sensors, while actions are initiated through actuators.

4. Intelligent agents consist of sub-agents that form a hierarchical structure. Lower-level tasks are performed by these sub-agents.

5. The higher-level agents and lower-level agents form a complete system that can solve difficult problems through intelligent behaviors or responses.

Characteristics of Intelligent Agent


Intelligent agents have the following distinguishing characteristics:

1. They have some level of autonomy that allows them to perform certain
tasks on their own.

2. They have a learning ability that enables them to learn even as tasks are
carried out.

3. They can interact with other entities such as agents, humans, and
systems.
4. They can accommodate new rules incrementally.

5. They exhibit goal-oriented behavior.

6. They are knowledge-based. They use knowledge regarding communications, processes, and entities.

Structure of Intelligent Agent


The IA structure consists of three main parts: architecture, agent function, and
agent program.

1. Architecture: This refers to the machinery or devices, consisting of actuators and sensors, on which the intelligent agent executes. Examples include a personal computer, a car, or a camera.

2. Agent function: This is the function that maps a percept sequence to an action. A percept sequence is the history of everything the intelligent agent has perceived.

3. Agent program: This is the implementation of the agent function. The agent function is produced by running the agent program on the physical architecture.
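
These notes contain no code, but a small Python sketch may help illustrate the distinction: the agent program is the piece of code that realises the agent function, i.e. the mapping from the percept sequence to an action. The two-location vacuum world, its percepts, and the rules below are illustrative assumptions, not part of any standard library.

```python
# Minimal sketch of an agent program realising an agent function.
# The vacuum-cleaner world and its percepts are illustrative assumptions.

class SimpleVacuumAgent:
    def __init__(self):
        self.percept_sequence = []   # history of everything the agent has perceived

    def agent_function(self, percept):
        """Agent function: maps the percept sequence to an action."""
        self.percept_sequence.append(percept)
        location, status = percept           # e.g. ("A", "Dirty")
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = SimpleVacuumAgent()
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]:
    print(percept, "->", agent.agent_function(percept))
```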

Working of Intelligent Agent


Intelligent agents work through three main components: sensors, actuators,
and effectors. Getting an overview of these components can improve our
understanding of how intelligent agents work.

1. Sensors: These are devices that detect changes in the environment and send this information to other devices. In artificial intelligence, intelligent agents observe the system's environment through sensors.

2. Actuators: These are components that convert energy into motion. They control and move the system. Examples include rails, motors, and gears.

3. Effectors: These act on the environment. Examples include legs, fingers, wheels, display screens, and arms.

Conclusion
Intelligent agents make work easier by performing certain time-consuming
and difficult tasks on behalf of systems or users. These agents are making the
automation of certain tasks possible.

With increased technological advancement, there will be enhanced development of intelligent agents. This will further translate into complex AI-driven devices that will solve current global challenges. There seems to be no limit to this intriguing technology.

• Agent Communication-
Communication is necessary to allow collaboration, negotiation, cooperation, etc. between independent entities. This requires well-defined, agreed-upon, and commonly understood semantics; there cannot be any interoperability without standards.

Agent communication is based on message passing, where agents communicate by formulating and sending individual messages to each other. The FIPA ACL specifies a standard message language by setting out the encoding, semantics and pragmatics of the messages. The standard does not set out a specific mechanism for the internal transportation of messages. Instead, since different agents might run on different platforms and use different networking technologies, FIPA only specifies how messages are transported and encoded between different remote platforms. The syntax of the ACL is very close to the KQML communication language.
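
As a rough illustration (a sketch, not an official FIPA library), an ACL-style message can be represented as a structured object whose fields carry the performative, the sender and receiver, and the content. The field names below follow the FIPA ACL message parameters; the `deliver` helper is a hypothetical placeholder for the transport layer.

```python
# Sketch of a FIPA-ACL-style message as a plain data structure.
# Field names follow the FIPA ACL message parameters; transport is simulated.

from dataclasses import dataclass

@dataclass
class ACLMessage:
    performative: str            # e.g. "inform", "request", "propose"
    sender: str
    receiver: str
    content: str                 # the statement, request, or proposal
    language: str = "fipa-sl"    # content language
    ontology: str = ""           # shared vocabulary the content refers to
    conversation_id: str = ""    # groups the messages of one dialogue

def deliver(message: ACLMessage, inbox: list) -> None:
    """Hypothetical transport: a real platform would send this over the network."""
    inbox.append(message)

buyer_inbox: list = []
deliver(ACLMessage("propose", sender="seller", receiver="buyer",
                   content="price(car, 9500)", ontology="car-trading"),
        buyer_inbox)
print(buyer_inbox[0].performative, buyer_inbox[0].content)
```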

• Negotiation and Bargaining-


What is Negotiation
The term negotiation refers to a strategic discussion that resolves an issue in a
way that both parties find acceptable. In a negotiation, each party tries to
persuade the other to agree with their point of view. Negotiations involve
some give and take, which means one party will always come out on top of the
negotiation. The other, though, must concede—even if that concession is
nominal.

By negotiating, all involved parties try to avoid arguing but agree to reach
some form of compromise. Negotiating parties vary and can include buyers
and sellers, an employer and prospective employee, or governments of two or
more countries.

Working of Negotiation
Negotiations involve two or more parties who come together to reach some
end goal through compromise or resolution that is agreeable to all those
involved. One party will put its position forward, while the other will either
accept the conditions presented or counter with its own position. The process
continues until both parties agree to a resolution.
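
A minimal Python sketch of this offer and counter-offer loop is given below; the reservation prices and the fixed concession step are illustrative assumptions, not a standard negotiation protocol.

```python
# Sketch of an alternating-offers negotiation between a buyer and a seller.
# Limits, starting offers, and the concession step are illustrative assumptions.

def negotiate(buyer_limit=9000, seller_limit=8000,
              buyer_offer=7000, seller_offer=10000, step=300):
    """Each side concedes by `step` per round until one side's offer is acceptable."""
    for round_no in range(1, 21):
        if buyer_offer >= seller_limit:      # seller accepts the buyer's offer
            return round_no, buyer_offer
        if seller_offer <= buyer_limit:      # buyer accepts the seller's offer
            return round_no, seller_offer
        buyer_offer += step                  # buyer raises the bid
        seller_offer -= step                 # seller lowers the asking price
    return None                              # no agreement within the allowed rounds

print(negotiate())   # (5, 8200): the parties converge on a price in round 5
```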

Participants learn as much as possible about the other party's position before
a negotiation begins, including what the strengths and weaknesses of that
position are, how to prepare to defend their positions, and any
counter-arguments the other party will likely make.

The length of time it takes for negotiations to take place depends on the
circumstances. Negotiation can take as little as a few minutes, or, in more
complex cases, much longer. For example, a buyer and seller may negotiate for
minutes or hours for the sale of a car. But the governments of two or more
countries may take months or years to negotiate the terms of a trade deal.

Key Factors in Negotiation


When it comes to negotiation, there are some key elements or factors that
come into play if you're going to be successful:

1. The Parties Involved: Who are the parties in the negotiation, and what
are their interests? What is the background of all involved, and how
does that affect their position in the discussion?

2. Relationships: What is the relationship between the parties and their intermediaries in the negotiation? How are the parties connected, and what role does that play in the terms of the negotiation process?

3. Communication: How will the needs of the parties involved be best communicated in order to secure their agreements through negotiation? What is the most effective way to convey the desired outcomes and needs? How can the parties be certain they are being heard?

4. Alternatives: Are there any alternatives to what either party initially wants? If a direct agreement is not possible, will the parties need to seek substitute outcomes?

5. Realistic Options: What options may be possible to achieve an outcome? Have the parties expressed where there may be flexibility in their demands?

6. Legitimate Claims: Are what each party requests and promises legitimate? What evidence do the parties offer to substantiate their claims and show their demands are valid? How will they guarantee they will follow through on the results of the negotiation?

7. Level of Commitment: What is the amount of commitment required to deliver the outcome of the negotiations? What is at stake for each party, and do the negotiations consider the effort that will need to be made to achieve the negotiated results?

Bargaining
In the social sciences, bargaining or haggling is a type of negotiation in which
the buyer and seller of a good or service debate the price or nature of a
transaction. If the bargaining produces agreement on terms, the transaction
takes place.

Although the most apparent aspect of bargaining in markets is as an alternative pricing strategy to fixed prices, it can also include making arrangements for credit or bulk purchasing, as well as serving as an important method of clienteling.

Bargaining has largely disappeared in parts of the world where retail stores
with fixed prices are the most common place to purchase goods. However, for
expensive goods such as homes, antiques and collectibles, jewellery and
automobiles, bargaining can remain commonplace.

• Argumentation Among Agents-


The study of artificial intelligence (AI) is in many ways connected with the study of argumentation. Though both fields have developed separately, the last 20 years have witnessed an increase of mutual influence and exchange of ideas. From this development, both fields stand to profit: argumentation theory providing a rich source of ideas that may be used in the computerization of theoretical and practical reasoning and of argumentative interaction, and artificial intelligence providing the systems for testing these ideas. In fact, combining argumentation theory with AI offers argumentation theory a laboratory for examining implementations of its rules and concepts.

By their interdisciplinary nature, approaches to argumentation in AI integrate insights from different perspectives. In the theoretical systems perspective, the focus is on theoretical and formal models of argumentation, for instance, extending the long tradition of philosophical and formal logic. In the artificial systems perspective, the aim is to build computer programs that model or support argumentative tasks, for instance, in online dialogue games or in expert systems (computer programs that reproduce the reasoning of a professional expert, e.g., in the law or in medicine). The natural systems perspective helps to ground research by concentrating on argumentation in its natural form, for instance, in the human mind or in an actual debate.

Since the 1990s, the main areas of AI that have been of interest for argumentation theory are those of defeasible reasoning, multi-agent systems, and models of legal argumentation. A great many articles about these overlapping areas have appeared in journals in the realm of computation. The biennial COMMA conference series focuses on the study of computational models of argument.
Non-monotonic Logic-
Today many artificial intelligence publications directly address issues related
to argumentation. A relevant development predating such contemporary
work is the area of non-monotonic logic. A logic is non-monotonic when a
conclusion that, according to the logic, follows from given premises need not
also follow when premises are added. In contrast, classical logic is monotonic.
For instance, in a standard classical analysis, from premises "Edith goes to
Vienna or Rome" and "Edith does not go to Rome," it follows that "Edith goes to
Vienna," irrespective of possible additional premises. In a non-monotonic
logic, it is possible to draw tentative conclusions, while keeping open the
possibility that additional information may lead to the retraction of such
conclusions. The standard example of non-monotonicity used in the literature
of the 1980s concerns the flying of birds. Typically, birds fly, so if you hear
about a bird, you will conclude that it can fly.

Reiter’s Logic for Default Reasoning


A prominent proposal in non-monotonic logic is Raymond Reiter’s (1980)
logic for default reasoning. In his system, non-monotonic inference steps are
applications of a set of given default rules. Reiter’s first example of a default
rule expresses that birds typically fly:

BIRD(x) : M FLY(x) / FLY(x)

Here the M should be read as “it is consistent to assume.” The default rule
expresses that if x is a bird, and it is consistent to assume that x can fly, then
by default one can conclude that x can fly. One can then add exceptions, for
instance, by expressing in classical logic that if x is a penguin, x cannot fly:

PENGUIN(x) → ¬FLY(x)

The general default rule can be applied to a specific bird, by instantiating the
variable x by an instance t. In this situation, from just the premise BIRD(t), one
can conclude (by default) FLY(t), but when one has a second premise
PENGUIN(t), the conclusion FLY(t) does not follow.

A more general form of a default rule is α : M β/γ, where the element α is the
prerequisite of the rule, β the justification, and γ the consequent. A special
case occurs when the justification and consequent coincide, as in the bird
example above; then we speak of a “normal default rule.”
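
To make the bird example concrete, here is a small Python sketch of default reasoning with a normal default rule. It is only an illustration of the idea, not Reiter's full formal system, and the set-of-tuples knowledge representation is an assumption made for this example.

```python
# Illustration of the normal default rule BIRD(x) : M FLY(x) / FLY(x).
# Hard facts are applied first; the default fires only where it stays consistent.

def conclusions(facts):
    known = set(facts)
    # Classical rule: PENGUIN(x) -> not FLY(x)
    for pred, x in list(known):
        if pred == "PENGUIN":
            known.add(("NOT_FLY", x))
    # Default rule: from BIRD(x), conclude FLY(x) if that is consistent to assume
    for pred, x in list(known):
        if pred == "BIRD" and ("NOT_FLY", x) not in known:
            known.add(("FLY", x))
    return known

# From BIRD(t) alone, FLY(t) follows by default ...
print(("FLY", "t") in conclusions({("BIRD", "t")}))                    # True
# ... but adding the premise PENGUIN(t) retracts that conclusion.
print(("FLY", "t") in conclusions({("BIRD", "t"), ("PENGUIN", "t")}))  # False
```

Adding a premise removes a previously drawn conclusion, which is exactly the non-monotonic behaviour described above.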

Logic Programming-
Logic programming is a programming paradigm that is based on logic. This
means that a logic programming language has sentences that follow logic, so
that they express facts and rules. Computation using logic programming is
done by making logical inferences based on all available data. In order for
computer programs to make use of logic programming, there must be a base
of existing logic, called predicates. Predicates are used to build atomic
formulas or atoms, which state true facts. Predicates and atoms are used to
create formulas and perform queries. Logic languages most often rely on
queries in order to display relevant data. These queries can exist as part of
machine learning, which can be run without the need for manual intervention.
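
These notes do not use an actual logic-programming language, but a minimal Python sketch can mimic the facts, rules, and queries pattern described above. The `parent`/`grandparent` relations and the tiny rule function are illustrative assumptions, not a real Prolog engine.

```python
# Minimal illustration of facts, a rule, and queries in plain Python.
# The family relations used here are illustrative assumptions.

facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}

def grandparent_rule(facts):
    """Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    derived = set()
    for rel1, x, y in facts:
        for rel2, y2, z in facts:
            if rel1 == rel2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

def query(goal, facts):
    """A query succeeds if the goal is a stated fact or can be derived by the rule."""
    return goal in facts | grandparent_rule(facts)

print(query(("grandparent", "alice", "carol"), facts))  # True
print(query(("grandparent", "bob", "alice"), facts))    # False
```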

There are multiple different logic programming languages. The most common
language, Prolog (from the French programmation en logique, or
programming in logic), can also interface with other programming languages
such as Java and C. On top of being the most popular logic programming
language, Prolog was also one of the first such languages, with the first Prolog
programs created in the early 1970s for natural language interpretation. Prolog was
developed using first-order logic, also called predicate logic, which allows for
the use of variables rather than propositions. Prolog is widely used in artificial
intelligence (AI) applications to derive conclusions and can quickly process large
amounts of data. Prolog can be run with or without manual inputs, meaning
it can be programmed to run automatically as part of data processing.

Logic programming, and especially Prolog, can help businesses and organizations through:

Natural language processing: Natural language processing (NLP) allows for better interactions between humans and computers. NLP can listen to human language in real time, and then process and translate it for computers. This allows technology to "understand" natural language.
However, NLP is not limited just to spoken language. Instead, NLP can also be
utilized to read and understand documentation, both in physical print or from
word processing programs. NLP is used by technologies such as Amazon Alexa
and Google Home to process and understand spoken instructions, as well as
by email applications to filter spam emails and warn of phishing attempts.

Database management: Logic programming can be used for the creation, maintenance, and querying of NoSQL databases. Logic programming can
create databases out of big data. The programming can identify which
information has been programmed as relevant, and store it in the appropriate
area. Users can then query these databases with specific questions, such as
“What’s the best route to get to New York,” and logic languages can quickly sift
through all of the data, run analyses, and return the relevant result with no
additional work required by the user.

Predictive analysis: With large data sets, logic languages can search for
inconsistencies or areas of differentiation in order to make predictions. This
can be useful in identifying potentially dangerous activities (such as going for
a bike ride in the middle of a thunderstorm) or for predicting failures of
industrial machines. It can also be used to analyze photos and make
predictions around the images, such as predicting the identity of objects in
satellite photos, or recognizing the patterns that differentiate craters from
regular land masses.

• Trust and Reputation in Multi-agent systems-

Artificial intelligence (AI) can help find solutions to many of society's problems. This can only be achieved if the technology is of high quality, and developed and used in ways that earn people's trust. Therefore, an EU strategic framework based on EU values will give citizens the confidence to accept AI-based solutions, while encouraging businesses to develop and deploy them.
This is why the European Commission has proposed a set of actions to boost
excellence in AI, and rules to ensure that the technology is trustworthy.

The Regulation on a European Approach for Artificial Intelligence and the update of the Coordinated Plan on AI will guarantee the safety and fundamental rights of people and businesses, while strengthening investment and innovation across EU countries.

Building trust through the first-ever legal framework on AI-


The Commission is proposing new rules to make sure that AI systems used in the EU are safe, transparent, ethical, unbiased and under human control. To this end, AI systems are categorised by risk:

Unacceptable

Anything considered a clear threat to EU citizens will be banned: from social scoring by governments to toys using voice assistance that encourage dangerous behaviour in children.

High risk
• Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk

• Educational or vocational training, which may determine access to education and the professional course of someone's life (e.g. scoring of exams)

• Safety components of products (e.g. AI application in robot-assisted surgery)

• Employment, workers management and access to self-employment (e.g. CV sorting software for recruitment procedures)

• Essential private and public services (e.g. credit scoring denying citizens
opportunity to obtain a loan)

• Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence)

• Migration, asylum and border control management (e.g. verification of authenticity of travel documents)

• Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts)

All high-risk AI systems will be carefully assessed before being put on the market and throughout their lifecycle.

Limited risk
AI systems such as chatbots are subject to minimal transparency obligations,
intended to allow those interacting with the content to make informed
decisions. The user can then decide to continue or step back from using the
application.

Minimal risk
Free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category, where the new rules do not intervene, as these systems represent only minimal or no risk to citizens' rights or safety.
