
"I have a PowerPoint file with multiple topics covered in different slides. Please go through the PPT and explain each topic in detail, including key points, definitions, examples, diagrams (if applicable), and any important formulas or concepts. The explanation should be structured, easy to understand, and suitable for someone learning the topics for the first time."
Here’s a structured, beginner-friendly guide to the topics covered in the PPT, with clear
definitions, key points, simple examples, and core formulas where applicable. Each
section maps to the slides and expands them into easy-to-understand notes suitable for
first-time learners.[1]

Introduction to AI

 Artificial Intelligence refers to non-natural (human-made) systems that can understand, think, and learn to perform tasks intelligently, often mimicking human abilities in problem solving and decision making.[1]

 Core idea: Build computer programs that can perceive the world, reason about it,
and act to achieve goals effectively in varied environments. [1]

What is intelligence

 Intelligence: the capacity to learn, understand, solve novel problems, act rationally,
and act like humans across tasks requiring higher mental processes such as
creativity, reasoning, and pattern recognition. [1]

 Dictionary views (Webster/Oxford): emphasize learning, adapting to situations, and applying knowledge; examples include classification, induction, deduction, language processing, and optimization.[1]
What intelligence involves

 Interaction with the real world: perceive (vision, speech), understand (interpret), and
act (control, robotics) to have real effects. [1]

 Reasoning and planning: solve new problems, handle uncertainty, and make
decisions; learning and adaptation: improve behavior based on changing
environments.[1]

Intelligent systems

 An intelligent system acquires and applies knowledge to perceive (sense), reason (infer), learn (improve), and decide under incomplete information (make inferences/decisions with partial data).[1]

Cognitive science

 Cognitive science studies how the mind acquires knowledge and understands
through thought, experience, and senses, often using computational models to
explain learning and reasoning (e.g., why people remember in different ways). [1]

 Examples: learning builds associations; reasoning differs among people (e.g., shopping decisions), reflecting cognition in action.[1]

Definitions of AI (perspectives)

 McCarthy (1956): AI is the science and engineering of making intelligent machines, especially computer programs.[1]

 Other views: making computers do human-level intelligent tasks; designing intelligent agents that perceive and act to maximize success; computational intelligence focuses on agent design.[1]

 Four classic foci: systems that think like humans, think rationally, act like humans,
and act rationally—leading to approaches like neural networks (human-like) and
logic-based systems (rational).[1]

General AI goals

 Replicate human intelligence, solve knowledge-intensive tasks, connect perception to action, and enhance communication; engineering goal: build intelligent machines; science goal: understand biological intelligence.[1]

AI system components
 Core components: heuristic search, knowledge representation, AI programming
languages/tools (e.g., LISP/PROLOG) working together to solve complex problems. [1]

Historical development of AI

 1943–55 (Conception): neural models (McCulloch & Pitts), Turing’s “Computing Machinery and Intelligence,” the Turing Test, early theorem provers and neural networks.[1]

 1956 (Birth): Dartmouth conference; the name “Artificial Intelligence” adopted; Logic Theorist and problem-solving programs.[1]

 1952–69 (Adolescence): LISP (1958), Logic Theorist, General Problem Solver, reasoning programs.[1]

 1969–79 (Youthfulness): weak vs strong AI, expert systems, PROLOG, natural language understanding.[1]

 1980–present (Maturity): probability and Bayesian methods, backpropagation, expert systems boom/bust, soft computing, agents, renewed interest in human-level AI.[1]

Types of AI

 Narrow (Weak) AI: specialized for one task (e.g., Siri, recommender systems, self-
driving components, speech/image recognition); fails outside its trained scope. [1]

 General AI: would perform any intellectual task like humans; currently a research
goal with no existing systems matching full human flexibility. [1]

 Strong/Super AI: hypothetical systems surpassing human intelligence across tasks with capabilities like autonomous reasoning, planning, and communication beyond human level.[1]

Human vs artificial intelligence (high-level)

 Humans adapt via multi-sensory cognition, associative thinking, and flexible memory; AI uses programmed structures, data, and binary representations, aiming to capture and apply human knowledge at scale.[1]

Applications of AI

 Perception: machine vision, speech understanding, tactile sensing. [1]


 Robotics: autonomous navigation, manipulation; NLP: understanding, speech,
generation, translation; expert systems, machine learning, theorem proving,
symbolic math, and game playing.[1]

Intelligent agents

 Agent: perceives via sensors and acts via actuators (human agent: eyes/hands;
robotic agent: cameras/motors; software agent: files/keystrokes and outputs) in a
perceive–think–act loop.[1]

 Architecture vs agent program: the architecture is the hardware/platform with sensors/actuators; the agent program maps percept histories to actions; Agent = Architecture + Agent Program.[1]

Rational agents

 Rational agent: has preferences, models uncertainty, and selects actions to maximize a performance measure given its knowledge and percept sequence; judged by performance, prior knowledge, available actions, and percept history.[1]

 Reinforcement learning ties rewards to actions; rationality means doing the “right”
thing as defined by performance criteria.[1]

Types of agents

 Simple reflex: condition–action rules based only on current percept; works in fully
observable settings; prone to loops in partial observability. [1]

 Model-based reflex: maintains internal state (a model) to handle partial observability by tracking how the world evolves and how actions affect it.[1]

 Goal-based: choose actions to reduce distance to explicit goals; requires search/planning; behavior is flexible and easier to change.[1]

 Utility-based: choose actions that maximize expected utility (happiness) over states when multiple goals/uncertainty exist; uses a utility function mapping states to real numbers.[1]

 Learning agent: improves performance using four parts—learning element (improves), critic (feedback), performance element (acts), problem generator (seeks informative experiences).[1]

Problem solving and search


 Problem solving: define the search space, initial and goal states, and find a path via production rules (valid moves) when direct solutions are not possible.[1]

 Key terms: problem, search space, search, well-defined problem (initial, goal, transitions), and solution path.[1]

 Representations: lexical, structural, procedural, semantic; state-space representation and problem reduction; good representations are complete, concise, transparent, suppress irrelevant details, and computable.[1]

Production systems

 A production system solves problems using: a set of production rules, one or more
knowledge bases, a control strategy, and a rule applier; valued for expressiveness,
modularity, and conflict resolution mechanisms. [1]

 Characteristics: graph/tree data structures; control must be systematic and make progress.[1]

Problem characteristics and nature

 Desirable traits: decomposability, predictable dynamics, clear solution criteria; consider knowledge needs and human–computer interaction requirements.[1]

 AI problem types: path-finding, decomposable/recoverable/predictable problems, quality-sensitive problems, state-finding, interactive, and knowledge-intensive problems.[1]

Search fundamentals

 Terminology: search space, start/goal states, search tree, actions, transition model,
path cost, solution, and optimal solution. [1]

 Algorithm properties: completeness (finds a solution if one exists), optimality (lowest cost), time complexity, space complexity.[1]

Uninformed search

 Uninformed (blind) search has no domain knowledge—it explores the space systematically using only the tree structure and goal tests (e.g., BFS, DFS, DLS, IDDFS, uniform-cost, bidirectional).[1]
 Informed (heuristic) search uses problem-specific knowledge to guide search;
heuristics aim for good solutions in reasonable time without guarantees of best-
possible outcomes.[1]

Breadth-first search (BFS)

 Strategy: explore level by level from the root using a FIFO queue; complete and
optimal when path cost correlates with depth. [1]

 Complexity: time T(b) = 1 + b + b^2 + b^3 + ⋯ + b^d = O(b^d), space O(b^d); pros: finds the minimal number of steps; cons: high memory and time if the goal is deep.[1]
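The level-by-level strategy above can be sketched in a few lines of Python (an illustrative sketch, not from the slides; the adjacency-list graph is a made-up example):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explore level by level with a FIFO queue.
    Returns a path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])   # queue of paths, shortest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:   # avoid revisiting states
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Because the FIFO queue drains each depth level before the next, the first path that reaches the goal is guaranteed to use the fewest steps.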

Depth-first search (DFS)

 Strategy: go deep along a path before backtracking using a stack; not optimal and
may loop in infinite spaces without checks. [1]

 Complexity: time O(b^m) where m = maximum depth, space O(bm); pros: low memory, can be fast on lucky paths; cons: can miss shallow solutions or loop.[1]

Depth-limited search (DLS)

 DFS with a depth cutoff ℓ to avoid infinite descent; fails with “cutoff” if no solution
within ℓ.[1]

 Properties: complete if a solution lies above depth ℓ; time O(b^ℓ), space O(bℓ); not optimal; pro: memory efficient; con: can be incomplete or miss better solutions.[1]

Uniform-cost search (UCS)

 For weighted graphs: expands the path with lowest cumulative cost using a priority
queue; complete and optimal when costs are nonnegative. [1]

 Complexity: depends on the optimal cost C* and the minimum step cost ε; time/space O(b^(1+⌊C*/ε⌋)); pros: optimal; cons: may be slow if many low-cost paths exist.[1]
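A minimal UCS sketch using a priority queue (illustrative only; the weighted graph is a made-up example with nonnegative costs, as the optimality condition requires):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the cheapest frontier path.
    graph maps node -> list of (neighbor, step_cost); costs must be >= 0."""
    frontier = [(0, start, [start])]   # (path_cost, node, path)
    best = {}                          # cheapest cost at which a node was expanded
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # first pop of goal is the optimal path
        if node in best and best[node] <= cost:
            continue                   # already expanded more cheaply
        best[node] = cost
        for neighbor, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": [("G", 1)]}
print(ucs(graph, "A", "G"))  # → (3, ['A', 'B', 'C', 'G'])
```

Note that UCS finds A→B→C→G (cost 3) rather than the direct-looking A→C→G (cost 6), because it orders the frontier by cumulative cost, not by depth.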

Iterative deepening DFS (IDDFS)

 Repeats DLS with increasing depth limits; combines BFS’s optimal shallow-solution
finding with DFS’s low memory.[1]

 Properties: complete for finite branching; time O(b^d), space O(bd); pro: efficient memory; con: repeated work across iterations.[1]
Bidirectional search

 Runs forward from start and backward from goal, stopping when frontiers meet;
very fast when both directions are well-defined. [1]

 Properties (with BFS): complete, optimal, time/space O(b^(d/2)) since each side searches only half the depth; pros: fast, less memory; cons: harder to implement, needs a known goal state.[1]

Expert systems

 Definition: knowledge-intensive programs that use domain knowledge and inference to perform expert-level tasks like diagnosis, design, analysis, advising, and scheduling.[1]

 Classic quotes: Feigenbaum—an intelligent program using knowledge and inference; BCS—embodies expert knowledge to offer intelligent advice or decisions.[1]

Expert system characteristics

 Deep domain knowledge, heuristic search, ability to explain reasoning, handle uncertainty/irrelevance, accept updates, communicate in natural language, and mainly symbolic processing.[1]

Human experts vs expert systems

 Humans: deep tacit knowledge, use heuristics and common sense, explain
decisions, flexible, learn over time, limited availability, variable performance. [1]

 Expert systems: depend on encoded knowledge, limited common sense, consistent performance, scalable availability, economical once built, durable knowledge, can tutor via explicit rules; limited creativity and self-learning unless extended.[1]

Role of knowledge in expert systems

 Domain-specific knowledge drives problem solving; knowledge acquired from experts is encoded as facts, procedural rules, and heuristics to simulate expert reasoning.[1]

Analysis of knowledge

 Components:

o Facts: statements of truth (e.g., “Dog is an animal”).[1]

o Procedural rules: invariant sequences (e.g., check traffic before merging).[1]

o Heuristics: rules of thumb gathered through experience (e.g., prefer a controlled emergency landing) that give expert systems flexibility beyond rigid algorithms.[1]

Levels of knowledge analysis

 Identification (interviews), conceptualization (core concepts), epistemological (structures, taxonomies), logical analysis (reasoning flow), implementational (procedures for building/testing).[1]

Expert system architecture

 Main parts: knowledge base (facts/rules), inference engine (applies rules), explanation facility (why/how), knowledge update facility (manual, direct by expert, machine learning), user interface (often via NLP).[1]

User roles and interface

 Roles: tester (validate), tutor (supply/update knowledge), pupil (learn from system),
customer (use for tasks).[1]

 UI bridges human and system representations; accepts natural language inputs and
adapts to user’s expertise level; translates internal symbolic forms into user-
understandable outputs.[1]

Knowledge storage and generation

 Knowledge base: a warehouse of domain facts, procedural and heuristic rules, organized for retrieval and inference; knowledge acquisition is performed by knowledge engineers from experts and documents.[1]

 Knowledge engineering: gather, transform, encode, and validate knowledge; requires deep understanding of the domain and careful representation choices.[1]

Inference engine and chaining

 Inference engine manages rule application, priority, and deriving new knowledge
while avoiding combinatorial explosion by using compiled, higher-level knowledge
where possible.[1]

 Reasoning paradigms:

o Forward chaining: data to goals (bottom-up).[1]

o Backward chaining: goal to data (top-down).[1]
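Forward chaining can be sketched as a tiny rule engine (a hypothetical toy, not from the slides; the rule names and facts are invented for illustration):

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire any rule whose premises are all known,
    adding its conclusion as a new fact, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)   # data-driven: new fact may enable more rules
                changed = True
    return facts

# Each rule is (premises, conclusion) — a made-up diagnostic example.
rules = [
    (["fever", "cough"], "flu_suspected"),
    (["flu_suspected", "positive_test"], "flu"),
]
derived = forward_chain(["fever", "cough", "positive_test"], rules)
print("flu" in derived)  # → True
```

Backward chaining would run the same rules in reverse: start from the hypothesis "flu" and recursively check whether its premises can be established from the known facts.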

Knowledge update and explanation

 Updates: manual by KE, direct by expert, or via machine learning for self-updating
knowledge; all expand the knowledge base over time. [1]

 Explanation facility: shows steps and justifications of reasoning to build user trust,
verify correct knowledge use, and debug decision modules. [1]

Programming languages for expert systems

 LISP (functional) and PROLOG (logic) are traditional AI languages supporting symbolic processing, large knowledge bases, inference engines, NLP, and dynamic knowledge acquisition.[1]

Expert system development process

 Steps: problem identification (is expert judgment needed?), prototyping, formalization/design, implementation (acquisition, update, testing), evaluation (compare with expert behavior), and long-term evolution (expand/correct knowledge, adapt to domain changes).[1]

 Practical concerns: representation/inference revisions, component-level revision, knowledge base partitioning (maximize cohesion, minimize coupling), integration/verification by KEs and experts.[1]

Evaluation and long-term evolution

 Evaluate against expert responses for completeness and consistency; long-term: add functionality, correct and expand knowledge, handle domain expansion and external changes, enabling joint growth of expert and system.[1]

Advantages and limitations of expert systems

 Advantages: reproducible, safe in risky environments, fewer errors, steady performance, quick responses.[1]

 Limitations: wrong KB ⇒ wrong outputs, limited creativity, costly to build/maintain, knowledge acquisition is hard, no self-learning without explicit mechanisms.[1]

Knowledge and knowledge representation


 Knowledge: facts, principles, beliefs, and heuristics; a belief may be true or false, and a hypothesis is a belief not known to be true.[1]

 Organization/management: indexed clusters, property lists, associative memory ideas (HAM) to group related concepts efficiently.[1]

Knowledge engineering views

 Transfer view (move expert knowledge into a system) and modeling view (encode
as facts and rules) guide how KEs structure the knowledge base. [1]

What is knowledge representation (KR)

 KR encodes real-world information so machines can understand and reason; not just
storage—enables learning from experience to behave intelligently; central to
knowledge-based agents (KB of sentences).[1]

What to represent

 Objects, events, performance (how-to), meta-knowledge (knowledge about knowledge), facts (truths), and a formal knowledge base of sentences for reasoning.[1]

Reasoning in AI

 Reasoning draws logical conclusions and predictions from knowledge, facts, and
beliefs to enable rational behavior in machines. [1]

 Types:

o Deductive: general to specific; true premises guarantee true conclusions (e.g., All humans eat veggies; Suresh is human ⇒ Suresh eats veggies).[1]

o Inductive: specific to general via generalization; premises support but do not guarantee conclusions (e.g., observed white pigeons ⇒ expect all pigeons are white).[1]

o Abductive/common-sense/monotonic/non-monotonic: addressed conceptually as reasoning modes for incomplete or changing knowledge bases.[1]

Frames and scripts

 Frames: structured templates for situations that enable reasoning with incomplete information by defaulting shared properties of a class and specializing them for instances.[1]
 Scripts: specialized frames for event sequences (e.g., Trip to Zoo) with entry
conditions, results, props, roles, scenes; used to predict and interpret routine
activities via causal chains, though not always reliable for novel events. [1]
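A frame's default-and-specialize behavior can be illustrated with a small class (an assumed sketch, not from the slides; the Bird/Penguin slots are invented for illustration):

```python
class Frame:
    """A frame: named slots with defaults inherited from a parent frame."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot locally, else fall back to the parent's default."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent:
            return self.parent.get(slot)
        return None   # slot unknown even with defaults

bird = Frame("Bird", can_fly=True, legs=2)              # class frame with defaults
penguin = Frame("Penguin", parent=bird, can_fly=False)  # instance specializes a slot
print(penguin.get("can_fly"))  # → False (overridden)
print(penguin.get("legs"))     # → 2 (inherited default)
```

This is exactly the "reasoning with incomplete information" idea: the penguin frame never states how many legs it has, yet the question is still answerable from the class default.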

Perception: vision, speech, action

 Perception: interpret sensory inputs (vision, sound, touch) to guide actions; core to
building reactive robots and interactive systems. [1]

 Vision tasks: signal processing (enhancement), measurement (2D extent), pattern recognition (classification), image understanding (multi-object scene analysis and 3D modeling).[1]

 Speech recognition: converts voice to text and can identify speakers; measured by
speaker dependence, continuous vs isolated words, real-time vs offline, grammar
breadth, and vocabulary size.[1]

 Action: navigation and manipulation are the primary action domains for agents and
robots interacting with environments.[1]

Key formulas and complexity reminders

 BFS: time O(b^d), space O(b^d); optimal if path cost increases with depth.[1]

 DFS: time O(b^m) with maximum depth m, space O(bm); not optimal.[1]

 DLS: time O(b^ℓ), space O(bℓ); complete if a solution lies above depth ℓ.[1]

 UCS: time/space roughly O(b^(1+⌊C*/ε⌋)); optimal with nonnegative costs.[1]

 IDDFS: time O(b^d), space O(bd); optimal under the same depth-cost assumption as BFS.[1]

 Bidirectional (with BFS): time/space O(b^(d/2)) over two shallower frontiers; complete and optimal with a correct meeting test and goal definition.[1]

Simple examples to connect concepts

 Simple reflex agent: vacuum robot that sucks dirt when “dirty” is perceived, without
memory of past states.[1]
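The vacuum example above reduces to a few condition–action rules (a minimal sketch; the two-location percept format is an assumption for illustration):

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: acts on the current percept only, with no memory.
    percept = (location, status) for a two-square world, locations 'A' and 'B'."""
    location, status = percept
    if status == "Dirty":
        return "Suck"                         # rule 1: clean the current square
    return "Right" if location == "A" else "Left"   # rule 2: move to the other square

print(reflex_vacuum_agent(("A", "Dirty")))  # → Suck
print(reflex_vacuum_agent(("A", "Clean")))  # → Right
```

Because the agent keeps no state, it cannot tell whether the other square is already clean, which is why simple reflex agents loop or act suboptimally in partially observable settings.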

 Model-based agent: robot tracking where it has been in a partially observable maze
to avoid revisiting unseen hazards.[1]
 Goal-based agent: pathfinding to a target location using distance estimates to select
actions that move closer to the goal.[1]

 Utility-based agent: self-driving policy balancing safety, comfort, and time—chooses actions maximizing expected utility under uncertainty.[1]

 Forward vs backward chaining: medical expert system can start from observed
symptoms to deduce diseases (forward) or test if a disease hypothesis is supported
by required symptoms and test results (backward). [1]

If needed, diagrams to sketch

 Agent loop: Percepts → Agent program → Actions, with environment feeding sensors
and receiving actuators.[1]

 Search trees: root, frontier, levels for BFS/DFS; UCS with priority by path cost;
bidirectional meeting in the middle.[1]

 Expert system architecture: user interface ↔ explanation facility; knowledge base + inference engine; knowledge update facility feeding the knowledge base.[1]

 Frame/script: class slots and default values; script scenes with roles and props (Trip
to Zoo example).[1]

1. Artificial-Intelligence.pptx
