Artificial Intelligence
Dr. E. Sudheer Kumar
Assistant Professor
School of Computer Science and Engineering
Vellore Institute of Technology, Chennai
1
Artificial intelligence (AI)
• Artificial intelligence (AI) is the intelligence of machines or
software, as opposed to the intelligence of humans or animals.
• Artificial intelligence is the science of making machines that can
think like humans.
• Artificial Intelligence is composed of two words, Artificial and
Intelligence, where Artificial means "man-made" and Intelligence
means "thinking power"; hence AI means "a man-made thinking power."
• Artificial Intelligence exists when a machine can exhibit human-
based skills such as learning, reasoning, and solving problems.
2
What is Artificial Intelligence?
• making computers that think?
• the automation of activities we associate with human thinking,
like decision making, learning ... ?
• the art of creating machines that perform functions that
require intelligence when performed by people ?
• the study of mental faculties through the use of computational
models ?
3
What is Artificial Intelligence?
• the study of computations that make it possible to
perceive, reason and act ?
• a field of study that seeks to explain and emulate
intelligent behaviour in terms of computational processes ?
• a branch of computer science that is concerned with the
automation of intelligent behaviour ?
• anything in Computing Science that we don't yet know how
to do properly ? (!)
4
What is Artificial Intelligence?
                 HUMAN                              RATIONAL
THOUGHT          Systems that think like humans     Systems that think rationally
BEHAVIOUR        Systems that act like humans       Systems that act rationally
5
Acting humanly: The Turing Test approach
The computer would need to possess the following capabilities:
• NATURAL LANGUAGE PROCESSING - to enable it to communicate successfully in English
• KNOWLEDGE REPRESENTATION - to store what it knows or hears
• AUTOMATED REASONING - to use the stored information to answer questions and to draw
new conclusions
• MACHINE LEARNING - to adapt to new circumstances and to detect and extrapolate
patterns.
Turing’s test deliberately avoided direct physical interaction between the interrogator and
the computer, because physical simulation of a person is unnecessary for intelligence.
However, the so-called total Turing Test includes a video signal so that the interrogator can
test the subject’s perceptual abilities, as well as the opportunity for the interrogator to pass
physical objects “through the hatch.” To pass the total Turing Test, the computer will need
• COMPUTER VISION - to perceive objects, and
• ROBOTICS - to manipulate objects and move about (a toy sketch of the conversational part
follows this slide).
6
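A toy sketch (my own illustration, not from the slides) of only the conversational part of the
test: a keyword-matching responder in the spirit of early chatbot programs. The keywords and
replies are hypothetical, and pattern matching like this falls far short of the NLP, knowledge
representation, reasoning, and learning capabilities listed above.

# Toy keyword-matching responder (illustration only; keywords are hypothetical).
RULES = {
    "hello": "Hello! How are you today?",
    "weather": "I have not looked outside, but I hope it is pleasant.",
    "chess": "I enjoy a good game of chess. Do you play?",
}

def respond(message):
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:          # first matching keyword wins
            return reply
    return "That is interesting. Tell me more."

print(respond("Do you like chess?"))   # -> "I enjoy a good game of chess. Do you play?"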
Thinking humanly: The cognitive modeling
approach
There are three ways to get inside the actual workings of human minds:
• Through introspection—trying to catch our own thoughts
as they go by
• Through psychological experiments—observing a person
in action
• Through brain imaging—observing the brain in action.
If the program’s input–output behavior matches corresponding human
behavior, that is evidence that some of the program’s mechanisms could
also be operating in humans.
7
Thinking rationally: The “laws of
thought” approach
• The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,”
that is, irrefutable reasoning processes.
• LOGIC - these laws of thought were supposed to govern the operation of the mind; their study
initiated the field called logic.
• LOGICIST - programs existed that could, in principle, solve any solvable problem described
in logical notation. (Although if no solution exists, the program might loop forever.) The so-
called logicist tradition within artificial intelligence hopes to build on such programs to
create intelligent systems (a toy sketch of such inference follows this slide).
Obstacles:
• It is not easy to take informal knowledge and state it in the formal terms required by logical
notation, particularly when the knowledge is less than 100% certain.
• There is a big difference between solving a problem “in principle” and solving it in practice.
Even problems with just a few hundred facts can exhaust the computational resources of
any computer unless it has some guidance as to which reasoning steps to try first.
8
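A toy sketch (my own illustration, not from the slides) of reasoning stated in logical
notation: a few facts and if-then rules, with forward chaining that applies modus ponens until
no new conclusion appears. The facts and rules are hypothetical.

# Forward chaining over (premise, conclusion) rules using modus ponens.
facts = {"Socrates is a man"}                          # known facts (hypothetical)
rules = [("Socrates is a man", "Socrates is mortal")]  # if premise then conclusion

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            # modus ponens: from premise and premise -> conclusion, derive conclusion
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))   # {'Socrates is a man', 'Socrates is mortal'}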
Acting rationally: The rational agent
approach
• AGENT - An agent is just something that acts. Of course, all computer
programs do something, but computer agents are expected to do more:
operate autonomously, perceive their environment, persist over a prolonged
time period, adapt to change, and create and pursue goals.
• RATIONAL AGENT - A rational agent is one that acts so as to achieve the
best outcome or, when there is uncertainty, the best expected outcome (a
toy sketch follows this slide).
The rational-agent approach has two advantages over the other approaches.
• It is more general than the “laws of thought” approach because correct
inference is just one of several possible mechanisms for achieving rationality.
• It is more amenable to scientific development than are approaches based on
human behavior or human thought.
9
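A toy sketch (my own illustration, not from the slides) of the rational-agent idea: under
uncertainty, choose the action whose expected outcome is best. The actions, probabilities, and
utilities below are hypothetical.

# Rational agent: pick the action with the highest expected utility.
# Hypothetical model: each action maps to (probability, utility) outcome pairs.
OUTCOMES = {
    "go_left":  [(0.8, 10), (0.2, -5)],    # expected utility 7.0
    "go_right": [(0.5, 20), (0.5, -30)],   # expected utility -5.0
}

def expected_utility(action):
    return sum(p * u for p, u in OUTCOMES[action])

def rational_agent():
    # A fuller agent would first update its outcome model from its percepts.
    return max(OUTCOMES, key=expected_utility)

print(rational_agent())   # -> "go_left"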
Areas of AI and Some Dependencies
[Diagram] Core areas of AI and some dependencies among them: Search, Logic,
Knowledge Representation, Planning, Machine Learning, NLP, Vision, Robotics,
and Expert Systems.
10
Why Artificial Intelligence?
• With the help of AI, you can create software or devices that solve
real-world problems easily and accurately, such as health, marketing,
and traffic issues.
• With the help of AI, you can create your own personal virtual assistant,
such as Cortana, Google Assistant, Siri, etc.
• With the help of AI, you can build robots that can work in
environments where human survival would be at risk.
• AI opens a path to other new technologies, new devices, and new
opportunities.
11
Goals of Artificial Intelligence
1. Replicate human intelligence
2. Solve knowledge-intensive tasks
3. Form an intelligent connection of perception and action
4. Build a machine which can perform tasks that require human
intelligence, such as:
   1. Proving a theorem
   2. Playing chess
   3. Planning a surgical operation
   4. Driving a car in traffic
5. Create a system which can exhibit intelligent behavior, learn new
things by itself, demonstrate, explain, and advise its users.
12
What Comprises Artificial Intelligence?
Intelligence is an intangible faculty of our brain: a combination of
reasoning, learning, problem-solving, perception, language
understanding, etc.
To achieve the above capabilities in a machine or software, Artificial
Intelligence requires the following disciplines:
13
Advantages of Artificial Intelligence
• High accuracy with fewer errors
• High-Speed
• High reliability
• Useful for risky areas
• Digital Assistant
• Useful as a public utility
14
Disadvantages of Artificial Intelligence
• High Cost
• Can't think out of the box
• No feelings and emotions
• Increased dependency on machines
• No Original Creativity
15
Evolution of AI
Pre-1950s: Theoretical Foundations
The roots of AI can be traced back to antiquity, with philosophers attempting to explain the
human mind as a symbolic system. However, the modern field of AI truly began to take shape
in the mid-20th century.
• 1843: Ada Lovelace, known as the world's first computer programmer, proposed the idea
that machines could manipulate symbols in addition to numbers, laying a fundamental
concept for AI.
• 1936: Alan Turing proposed the concept of a "universal machine" (later known as the Turing
Machine), a theoretical device that could carry out any computation given enough time and
resources. This forms the basis of the digital computer and of the theory of computability.
• 1943: Warren McCulloch and Walter Pitts proposed the first mathematical model of a
neural network, opening up the possibility of learning machines.
• 1949: Donald Hebb proposed a learning theory, now known as Hebbian learning, which
became a fundamental concept in the development of artificial neural networks.
16
1950s-1960s: Birth and Early Developments
The field of AI was officially born in the mid-20th century, starting with the coining of the term
"Artificial Intelligence."
• 1950: Alan Turing proposed the Turing Test to determine a machine's ability to exhibit intelligent
behavior. In the same year, Claude Shannon published a paper on programming a computer to play chess.
• 1956: The Dartmouth Conference officially coined the term "Artificial Intelligence." The
participants, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, became
the leaders of AI research for several decades.
• 1957: Frank Rosenblatt invented the perceptron, an early artificial neural network capable of learning.
• 1958: John McCarthy developed Lisp, a programming language that became popular in AI
research.
• 1959: Arthur Samuel developed a self-learning program to play checkers, demonstrating the
power of machine learning.
• 1965: Joseph Weizenbaum created ELIZA, a natural language processing computer program,
demonstrating the potential of AI in understanding and generating human language.
17
1970s-1980s: AI Winter and the Rise of Expert Systems
• Despite the initial excitement, the lack of significant progress led to a
period known as the "AI Winter."
• 1970s-1980s: The AI Winter was characterized by reduced funding and
interest in AI research due to its failure to achieve its ambitious goals.
• 1972: Dendral, one of the first expert systems, was developed, marking a
shift in AI research towards solving specific problems.
• 1980: The Japanese Fifth Generation Computer Systems project aimed to
develop an "intelligent" computer, but ultimately it failed to meet its
objectives.
• 1986: The backpropagation algorithm was reintroduced, leading to a
resurgence in neural network research.
18
1990s-2010s: The Internet Era and Machine Learning
The advent of the internet provided a massive amount of data, fueling the
development of machine learning algorithms.
• 1997: IBM's Deep Blue defeated world chess champion Garry Kasparov, marking
a significant moment in the development of AI.
• 1999: Sony released AIBO, a robotic pet, demonstrating the capabilities of AI.
• 2011: IBM's Watson showcased the power of AI in understanding natural
language by winning the TV quiz show Jeopardy!, beating human champions.
• 2016: Google's AlphaGo program beat the world champion Go player, Lee Sedol,
marking a significant milestone in AI's ability to learn and make decisions.
• 2018: The journey of Generative pre-trained transformers (GPT) began in this
year when OpenAI, a leading AI company in the United States, introduced the
first GPT model. This marked a significant milestone in the field of generative
artificial intelligence.
19
2020s: GPT-3 and Beyond
• GPT Characteristics: GPT models are a type of large language model (LLM) that utilise the transformer
architecture. They are trained on vast amounts of unlabelled text data, which enables them to generate
content that closely resembles human writing. As of 2023, most LLMs share these characteristics and are
often broadly referred to as GPTs.
• GPT-n Series: OpenAI has released a series of increasingly advanced GPT models, known as the "GPT-n"
series. Each model in this series has been more capable than its predecessor, thanks to increases in size
(number of trainable parameters) and training. These models have formed the foundation for more task-
specific GPT systems, including models that are fine-tuned for following instructions. One such application
of these models is the ChatGPT chatbot service.
• March 2023: GPT-4 was released. At the time of its release, it was the most capable model in the
GPT-n series from OpenAI.
• Other GPT Models: The term "GPT" has also been adopted by other organizations in their model names and
descriptions. For instance, EleutherAI has created a series of GPT foundation models, and Cerebras has
recently developed seven models. Additionally, companies across various industries have developed GPT
models tailored to their specific needs. Examples include Salesforce's "EinsteinGPT" for customer
relationship management (CRM) and Bloomberg's "BloombergGPT" for finance.
20
• Future Developments
OpenAI's latest LLM was released to the public on September 12, 2024. The release included the o1-
preview and the o1-mini models.
• How smart is o1? OpenAI has touted a list of STEM benchmarks that show off o1’s reasoning
abilities, including:
• A similar performance to PhD students in benchmark tests on physics, chemistry, and biology.
• Placing in the top 500 students in the US qualifier for the USA Math Olympiad.
• Ranking in the 89th percentile in Codeforces, a competitive coding test.
• OpenAI's Future Direction: OpenAI has become more reserved about its operations and is less
likely to share its models like GPT-4 or GPT-5 with the open-source community. However, reports
suggest that OpenAI is developing a new open-source AI model for public release.
21
THE STATE OF THE ART
• Robotic vehicles
• Speech recognition
• Autonomous planning and
scheduling
• Game playing
• Spam fighting
• Logistics planning
• Robotics
• Machine Translation
25
Application of AI
26