Introduction to Artificial Intelligence (AI) - Detailed Notes
Introduction to AI
**Definition of AI**
- Artificial Intelligence (AI) is a branch of computer science that aims to create machines that mimic
human intelligence.
- It involves developing algorithms that allow computers to perform tasks such as problem-solving,
learning, understanding language, and recognizing patterns.
**Goals of AI**
1. To create expert systems: Systems that exhibit intelligent behavior and advise users.
2. To implement human intelligence in machines: Make machines learn, reason, and make
decisions.
3. To solve complex problems with high accuracy and efficiency.
4. To automate repetitive tasks and enhance productivity.
AI Approaches
- **Top-down (Symbolic) Approach**: Uses explicit rules and logic to simulate intelligent behavior. Example: Expert systems (a rule-based sketch follows this list).
- **Bottom-up (Sub-symbolic) Approach**: Mimics the structure of the human brain using artificial
neural networks.
- **Deductive Approach**: Derives specific conclusions by applying logic to general rules and known facts.
- **Inductive Approach**: Generalizes rules or models from data and past experiences.
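To make the symbolic/deductive idea concrete, here is a minimal forward-chaining sketch in Python: it repeatedly applies if-then rules to a set of known facts until nothing new can be derived. The facts and rules (e.g., `has_fever`, `has_flu`) are made-up illustrations, not part of any particular expert system.

```python
# Minimal forward-chaining sketch: derive new facts from known facts using if-then rules.
# The facts and rules below are made-up examples for illustration only.

facts = {"has_fever", "has_cough"}

# Each rule: if all conditions are already known facts, conclude the consequent.
rules = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes 'has_flu' and 'needs_rest'
```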
AI Techniques
- **Search Algorithms**: Used for problem-solving by systematically exploring possible solutions (e.g., A*, DFS, BFS); a BFS sketch follows this list.
- **Knowledge Representation**: Representing information about the world in a form that a computer
can use (e.g., semantic networks, frames).
- **Machine Learning**: Enables systems to learn and improve from data without being explicitly
programmed.
- **Neural Networks**: Inspired by the human brain, used for pattern recognition and classification
tasks.
- **Natural Language Processing (NLP)**: Enables machines to understand and interact using
human languages.
- **Robotics**: Designing and building robots that perform tasks in the real world.
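As a concrete example of the search techniques listed above, the following is a minimal breadth-first search (BFS) sketch; the toy graph and node names are illustrative assumptions, not a standard benchmark.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns a shortest path (by edge count) from start to goal."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # no path exists

# Toy graph (illustrative only)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```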
Branches of AI
- **Machine Learning**: Study of algorithms that learn from data.
- **Natural Language Processing (NLP)**: Deals with understanding and generation of human
language.
- **Robotics**: Focuses on creating intelligent robots that can perform tasks.
- **Computer Vision**: Enables machines to interpret visual data.
- **Expert Systems**: Mimic decision-making abilities of human experts.
- **Fuzzy Logic Systems**: Deal with reasoning that is approximate rather than fixed and exact.
Applications of AI
- **Healthcare**: Diagnosis systems, robot-assisted surgeries, and drug discovery.
- **Education**: Personalized learning, intelligent tutoring systems.
- **Finance**: Credit scoring, fraud detection, algorithmic trading.
- **Transportation**: Self-driving cars, traffic management systems.
- **Customer Service**: Chatbots, virtual assistants.
- **Entertainment**: Game AI, recommendation systems (e.g., Netflix, YouTube).
Intelligent Agents that Learn by Example
- These agents use **Machine Learning (ML)** to learn from past data (examples).
- They improve their performance over time based on experience.
- Example: A spam email filter that learns from labeled emails (sketched below).
- Learning types include supervised, unsupervised, and reinforcement learning.
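A minimal supervised-learning sketch of such a spam filter, assuming scikit-learn is available; the training emails and their labels are made-up examples chosen only to illustrate the workflow.

```python
# Supervised-learning sketch of a spam filter using scikit-learn.
# The emails and labels below are tiny made-up examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",        # spam
    "limited offer, claim money",  # spam
    "meeting agenda for monday",   # not spam
    "lunch with the project team", # not spam
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()              # turn text into word-count features
X = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(X, labels) # learn from the labeled examples

test = vectorizer.transform(["free money offer"])
print(classifier.predict(test))  # likely ['spam']
```

With more labeled emails, the same pipeline keeps improving, which is exactly the "learning by example" idea above.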
Definition of a Rational Agent
- A **rational agent** is one that acts to achieve the best outcome based on its knowledge and
goals.
- It uses percepts (inputs from environment) to make decisions.
- The agent aims to maximize its expected performance measure, given its percept history and built-in knowledge (a toy percept-action loop is sketched below).
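A toy sketch of this percept-action loop, loosely based on the classic two-square vacuum-cleaner world; the world layout, action names, and scoring rule are illustrative assumptions.

```python
# Toy "vacuum world": a rational agent uses percepts (location, dirt status)
# to choose actions that maximize a performance measure (clean squares over time).
world = {"A": "dirty", "B": "dirty"}   # two squares, both start dirty
location = "A"
score = 0

def agent(percept):
    loc, status = percept
    if status == "dirty":
        return "suck"
    return "move_right" if loc == "A" else "move_left"

for _ in range(6):
    percept = (location, world[location])   # what the agent can observe
    action = agent(percept)
    if action == "suck":
        world[location] = "clean"
    elif action == "move_right":
        location = "B"
    elif action == "move_left":
        location = "A"
    score += sum(1 for s in world.values() if s == "clean")  # performance measure

print(world, score)  # both squares end up clean; a higher score means better behavior
```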
Reflex-Based Agent
- Reacts directly to current percepts without considering history.
- Uses simple condition-action rules (if-then rules).
- Does not store past data or adapt to changes.
- Example: A thermostat turns on the heater if the temperature is low (sketched below).
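A minimal sketch of such a reflex agent: only condition-action rules, no memory of past percepts. The temperature thresholds are assumptions chosen for illustration.

```python
# Simple reflex agent: maps the current percept directly to an action via if-then rules.
def thermostat_agent(temperature):
    if temperature < 18:      # illustrative threshold
        return "turn_heater_on"
    if temperature > 22:      # illustrative threshold
        return "turn_heater_off"
    return "do_nothing"

for reading in [15, 20, 25]:
    print(reading, "->", thermostat_agent(reading))
```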
Utility-Based Agent
- Considers **performance measures** and preferences.
- Chooses the action that maximizes **expected utility**.
- More flexible than reflex agents because it can trade off competing goals and handle uncertain outcomes.
- Example: A self-driving car choosing between multiple safe and fast routes (a route-selection sketch follows below).
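A minimal sketch of expected-utility maximization over candidate routes; the routes, outcome probabilities, and utility weights are made-up assumptions, not real driving data.

```python
# Utility-based agent sketch: pick the action with the highest expected utility.
routes = {
    # route: list of (probability, travel_time_minutes, risk_score) outcomes
    "highway":  [(0.7, 30, 0.2), (0.3, 60, 0.2)],  # usually fast, but may jam
    "backroad": [(1.0, 45, 0.1)],                  # slower, but predictable and safer
}

def utility(travel_time, risk):
    # Higher utility = better. Penalize time and risk; the weights are assumptions.
    return -(travel_time + 100 * risk)

def expected_utility(outcomes):
    return sum(p * utility(t, r) for p, t, r in outcomes)

best_route = max(routes, key=lambda r: expected_utility(routes[r]))
print(best_route)  # 'backroad' under these assumed weights
```

Changing the utility weights (e.g., penalizing risk less) changes which route is chosen, which is what "considering preferences" means for this kind of agent.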
Environment in Which an Agent Operates
- **Fully Observable**: The agent can access all relevant information. (e.g., chess game)
- **Partially Observable**: The agent has limited information. (e.g., driving a car in fog)
- **Deterministic**: The outcome of actions is predictable. (e.g., solving a math problem)
- **Stochastic**: Outcomes involve randomness. (e.g., stock market)
- **Episodic**: Each decision is independent. (e.g., image classification)
- **Sequential**: Current decisions affect future outcomes. (e.g., playing a game)
- **Static**: Environment doesn't change while agent is thinking. (e.g., crossword puzzle)
- **Dynamic**: Environment can change during decision making. (e.g., real-time driving)
- **Discrete**: Finite number of actions or states. (e.g., board games)
- **Continuous**: Infinite possible actions or states. (e.g., robotic movement)