
Key Concepts in Machine Learning

Chapter 3 covers the basics of Machine Learning, defining it as a subset of AI that learns from data. It explains key concepts such as supervised, unsupervised, and reinforcement learning, as well as essential components of ML models and techniques like feature scaling and cross-validation. The chapter also discusses various algorithms and methods, including decision trees, random forests, and recommendation systems.


Chapter 3: Basics of Machine Learning - 20 Questions

Q: Define Machine Learning and explain its significance in AI.

A: Machine Learning (ML) is a subset of AI that enables systems to learn from data without explicit
programming. Significance includes automation of decision-making, pattern recognition, and AI applications
like speech recognition.

Q: Differentiate between Supervised, Unsupervised, and Reinforcement Learning with examples.

A: Supervised Learning: Trained on labeled data (e.g., Email spam detection). Unsupervised Learning:
Identifies patterns in unlabeled data (e.g., Customer segmentation). Reinforcement Learning: Learns via
rewards and penalties (e.g., Chess-playing AI).

Q: What are the key components of a Machine Learning model?

A: Dataset, Features, Model, Loss Function, Optimizer, and Evaluation Metrics.

Q: Explain the concepts of training, validation, and testing datasets in Machine Learning.

A: Training: Used to train the model. Validation: Tunes hyperparameters. Testing: Evaluates final model
performance.
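As a sketch of how such a three-way split is commonly made (assuming scikit-learn; the feature matrix and labels below are hypothetical placeholders):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset: 100 samples, 3 features (for illustration only)
X = np.arange(300).reshape(100, 3)
y = np.arange(100) % 2

# First carve out a held-out test set, then split the rest into train/validation
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# Result: 60% train, 20% validation, 20% test
print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```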

Q: What is overfitting and underfitting? How can they be prevented?

A: Overfitting: Model learns noise instead of the underlying pattern (high training accuracy, low test accuracy). Underfitting:
Model is too simple to capture the pattern (poor accuracy on both sets). Prevention: for overfitting, more data, regularization, cross-validation, or a simpler model; for underfitting, a more complex model or better features.

Q: Differentiate between classification and regression with suitable examples.

A: Classification: Predicts categorical output (e.g., Spam detection). Regression: Predicts continuous output
(e.g., House price prediction).

Q: Apply Linear Regression to predict the test score for a student who studied for 5 hours.

A: Using Linear Regression on the dataset, the predicted score for 5 hours is 65.
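The original dataset is not reproduced here, but a minimal scikit-learn sketch with a hypothetical hours/scores dataset (constructed so the fit happens to reproduce the stated figure of 65) looks like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical study-hours data (illustrative; not the dataset from the question)
hours = np.array([[1], [2], [3], [4]])
scores = np.array([25, 35, 45, 55])  # perfectly linear: score = 10*hours + 15

model = LinearRegression().fit(hours, scores)
predicted = model.predict([[5]])[0]
print(round(predicted, 1))  # 65.0 for this toy data
```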

Q: Implement K-Means Clustering to group customers based on spending habits.

A: Applying K-Means clustering with k=2, customers are grouped based on income and spending score.
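A minimal sketch with scikit-learn's KMeans, assuming a hypothetical income/spending-score matrix (the actual customer data is not given):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customers: [annual income (k$), spending score]
customers = np.array([
    [15, 80], [16, 75], [18, 90],   # low income, high spending
    [80, 20], [85, 15], [90, 25],   # high income, low spending
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # the two spending profiles fall into separate clusters
```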

Q: Compute accuracy, precision, recall, and F1-score for the given confusion matrix.

A: Accuracy = 0.85, Precision = 0.80, Recall = 0.89, F1-score = 0.84.
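The confusion matrix itself is not reproduced in these notes, but one matrix consistent with the stated figures is TP=40, FP=10, FN=5, TN=45; the metric formulas can then be sketched as:

```python
# Hypothetical confusion matrix consistent with the figures quoted above
TP, FP, FN, TN = 40, 10, 5, 45

accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 2), round(precision, 2), round(recall, 2), round(f1, 2))
# 0.85 0.8 0.89 0.84
```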


Q: Implement a Decision Tree classifier to predict whether a customer will buy a product.

A: Using a Decision Tree on the dataset, a customer with Age=40, Salary=60000 is predicted to buy the
product.
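A scikit-learn sketch with hypothetical age/salary training data (the question's actual dataset is not given; this toy data is separable so the tree predicts "buy" for the quoted customer):

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical purchase data: [age, salary] -> bought (1) or not (0)
X = [[25, 30000], [30, 40000], [32, 45000],
     [46, 70000], [50, 80000], [55, 90000]]
y = [0, 0, 0, 1, 1, 1]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
prediction = clf.predict([[40, 60000]])[0]
print(prediction)  # 1 -> predicted to buy
```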

Q: What are feature scaling techniques, and why are they important in Machine Learning?

A: Feature scaling techniques include Normalization and Standardization. They ensure that features
contribute equally to the model and improve convergence in optimization algorithms like Gradient Descent.
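Both techniques are available in scikit-learn; a small sketch on a single hypothetical feature column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Hypothetical feature column with a large range
x = np.array([[10.0], [20.0], [30.0], [40.0]])

normalized   = MinMaxScaler().fit_transform(x)    # rescales to [0, 1]
standardized = StandardScaler().fit_transform(x)  # zero mean, unit variance

print(normalized.ravel())                        # values from 0.0 to 1.0
print(standardized.mean(), standardized.std())   # ~0.0, ~1.0
```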

Q: Explain the concept of bias-variance tradeoff in Machine Learning.

A: High bias leads to underfitting (too simple model), while high variance leads to overfitting (too complex
model). The goal is to find a balance between the two for optimal performance.

Q: What are hyperparameters in Machine Learning? Give examples.

A: Hyperparameters are configuration settings that are set before training the model, such as learning rate,
number of hidden layers in a neural network, and the number of clusters in K-Means.

Q: What is cross-validation, and why is it used?

A: Cross-validation is a technique to evaluate a model's performance by dividing the dataset into training and
testing subsets multiple times. It helps in reducing overfitting and ensures better generalization.
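A k-fold sketch using scikit-learn's `cross_val_score` (the model and dataset here — logistic regression on the built-in Iris data — are illustrative choices, not from the notes):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: five different train/test splits, five scores
scores = cross_val_score(model, X, y, cv=5)
print(len(scores), scores.mean())
```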

Q: Explain the working of the Random Forest algorithm.

A: Random Forest is an ensemble learning method that trains multiple decision trees on bootstrap samples and
aggregates their predictions (majority vote for classification, averaging for regression) to improve accuracy and reduce overfitting.

Q: What is Principal Component Analysis (PCA) in Machine Learning?

A: PCA is a dimensionality reduction technique that transforms data into a set of orthogonal components to
reduce redundancy while retaining important information.
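A sketch with scikit-learn's PCA on hypothetical data whose third feature is redundant (a sum of the first two), so two components retain essentially all of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical 3-feature data; third column = sum of the first two (redundant)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
X = np.hstack([X, (X[:, 0] + X[:, 1]).reshape(-1, 1)])

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape, round(pca.explained_variance_ratio_.sum(), 4))
```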

Q: Differentiate between Bagging and Boosting in ensemble learning.

A: Bagging (e.g., Random Forest) reduces variance by training multiple models in parallel. Boosting (e.g.,
AdaBoost, XGBoost) reduces bias by training models sequentially, giving more weight to misclassified
samples.
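Both families are available in scikit-learn; a side-by-side sketch on synthetic data (dataset and estimator counts chosen only for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: independent trees trained in parallel on bootstrap samples
bagging = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
# Boosting: trees trained sequentially, upweighting misclassified samples
boosting = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

print(bagging.score(X_te, y_te), boosting.score(X_te, y_te))
```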

Q: Explain the role of activation functions in Neural Networks.

A: Activation functions introduce non-linearity into neural networks, enabling them to learn complex patterns.
Common functions include ReLU, Sigmoid, and Tanh.
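The three functions named above are simple enough to sketch directly in NumPy:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)        # 0 for negative inputs, identity otherwise

def sigmoid(z):
    return 1 / (1 + np.exp(-z))    # squashes inputs into (0, 1)

def tanh(z):
    return np.tanh(z)              # squashes inputs into (-1, 1)

z = np.array([-2.0, 0.0, 2.0])
print(relu(z))     # [0. 0. 2.]
print(sigmoid(z))  # values in (0, 1), 0.5 at z = 0
print(tanh(z))     # values in (-1, 1), 0.0 at z = 0
```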

Q: What is Gradient Descent, and how does it work?

A: Gradient Descent is an optimization algorithm that minimizes the loss function by iteratively updating
model parameters in the direction of the negative gradient.
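The update rule can be sketched on a toy one-parameter loss, f(w) = (w - 3)^2, whose minimum is at w = 3 and whose gradient is 2(w - 3):

```python
# Minimal gradient descent on f(w) = (w - 3)^2
w = 0.0    # initial parameter
lr = 0.1   # learning rate (a hyperparameter)

for _ in range(100):
    grad = 2 * (w - 3)   # gradient of the loss at the current w
    w -= lr * grad       # step in the direction of the negative gradient

print(round(w, 4))  # 3.0 -- converged to the minimum
```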

Q: Explain the difference between Content-Based and Collaborative Filtering in recommendation systems.

A: Content-Based Filtering recommends items based on item features and user preferences, while
Collaborative Filtering relies on user-item interactions and similarities between users or items.
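A toy sketch of the user-based collaborative-filtering idea, assuming NumPy and a hypothetical user-item rating matrix (cosine similarity between rating vectors picks the most similar user):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical ratings (rows: users, cols: items; 0 = unrated)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],   # similar tastes to user 0
    [1, 0, 5, 4],   # opposite tastes
], dtype=float)

# Find the user most similar to user 0; their liked items become candidates
sims = [cosine(ratings[0], ratings[u]) for u in (1, 2)]
most_similar = (1, 2)[int(np.argmax(sims))]
print(most_similar)  # 1
```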
