Lab File - Applications of Machine Learning in Industry
Course Code: CSAI 3106
Experiment 1: Find-S Algorithm
Aim
To implement the Find-S Algorithm in Machine Learning.
Theory
The Find-S algorithm is a concept-learning algorithm. Starting from an initial specific hypothesis, it
generalizes an attribute only when a positive training example disagrees with it, so the result is the
most specific hypothesis consistent with all observed positive training examples; negative examples are ignored.
Code
# Find-S Algorithm Implementation
def learn_S(concepts, target):
    # Start with the first training example (assumed positive) as the hypothesis
    specific_h = concepts[0].copy()
    for i, example in enumerate(concepts):
        if target[i] == "Yes":  # only positive examples are used
            for x in range(len(specific_h)):
                # Generalize any attribute that disagrees with the example
                if example[x] != specific_h[x]:
                    specific_h[x] = "?"
    return specific_h

concepts = [["Sunny", "Warm", "Normal", "Strong", "Warm", "Same"],
            ["Sunny", "Warm", "High", "Strong", "Warm", "Same"],
            ["Rainy", "Cold", "High", "Strong", "Warm", "Change"],
            ["Sunny", "Warm", "High", "Strong", "Cool", "Change"]]
target = ["Yes", "Yes", "No", "Yes"]
print("Final Hypothesis:", learn_S(concepts, target))
Sample Input/Output
Input: Concepts and target list
Output: Final Hypothesis: ['Sunny', 'Warm', '?', 'Strong', '?', '?']
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 2: Candidate Elimination Algorithm
Aim
To implement Candidate Elimination Algorithm in Machine Learning.
Theory
The Candidate Elimination Algorithm is a supervised concept-learning algorithm that maintains a
version space bounded by a most specific boundary S and a most general boundary G. Positive examples
generalize S, while negative examples specialize G, until the boundaries enclose every hypothesis
consistent with the training data.
Code
# Candidate Elimination Algorithm
# scikit-learn does not provide this algorithm, so it is normally
# implemented manually with version spaces (see the sketch below).
print("Candidate Elimination is not directly available in sklearn; "
      "it is usually implemented manually with version spaces.")
Sample Input/Output
Output: The Candidate Elimination Algorithm does not produce predictions directly; it maintains the
version-space boundaries S (most specific) and G (most general).
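Since scikit-learn does not provide Candidate Elimination, a minimal manual sketch of the simplified version-space update is given below. It is illustrative only: it reuses the training data from Experiment 1 and keeps one candidate hypothesis per attribute in the general boundary rather than maintaining the full boundary sets.
# Simplified Candidate Elimination (illustrative sketch)
def candidate_elimination(concepts, target):
    n = len(concepts[0])
    specific_h = concepts[0].copy()               # S boundary (assumes first example is positive)
    general_h = [["?"] * n for _ in range(n)]     # one general hypothesis per attribute
    for i, example in enumerate(concepts):
        if target[i] == "Yes":
            # Positive example: generalize S where it disagrees, relax G accordingly
            for x in range(n):
                if example[x] != specific_h[x]:
                    specific_h[x] = "?"
                    general_h[x][x] = "?"
        else:
            # Negative example: specialize G just enough to exclude the example
            for x in range(n):
                if example[x] != specific_h[x]:
                    general_h[x][x] = specific_h[x]
                else:
                    general_h[x][x] = "?"
    # Keep only the informative (non-trivial) general hypotheses
    general_h = [h for h in general_h if h != ["?"] * n]
    return specific_h, general_h

concepts = [["Sunny", "Warm", "Normal", "Strong", "Warm", "Same"],
            ["Sunny", "Warm", "High", "Strong", "Warm", "Same"],
            ["Rainy", "Cold", "High", "Strong", "Warm", "Change"],
            ["Sunny", "Warm", "High", "Strong", "Cool", "Change"]]
target = ["Yes", "Yes", "No", "Yes"]
S, G = candidate_elimination(concepts, target)
print("S boundary:", S)
print("G boundary:", G)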
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 3: Decision Tree Algorithm
Aim
To implement Decision Tree Algorithm in Machine Learning.
Theory
Decision Tree is a tree-like structure used for classification and regression. It splits the
dataset into subsets based on the value of input features.
Code
# Decision Tree Classifier
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
X, y = iris.data, iris.target
clf = DecisionTreeClassifier()
clf.fit(X, y)
print("Prediction:", clf.predict([X[0]]))
Sample Input/Output
Input: First instance from Iris dataset
Output: Prediction: [0]
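To see which splits the fitted tree actually learned, scikit-learn's export_text can be used; the short snippet below is an optional, self-contained extension of the code above.
# Print the learned decision rules (optional extension)
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
iris = load_iris()
clf = DecisionTreeClassifier().fit(iris.data, iris.target)
print(export_text(clf, feature_names=list(iris.feature_names)))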
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 4: Backpropagation Algorithm
Aim
To implement Backpropagation Algorithm in Machine Learning.
Theory
Backpropagation is an algorithm used for training artificial neural networks. It calculates
the gradient of the loss function with respect to all the weights in the network.
Code
# Backpropagation with a Neural Network using MLPClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits
digits = load_digits()
X, y = digits.data, digits.target
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=300)
mlp.fit(X, y)
print("Prediction:", mlp.predict([X[0]]))
Sample Input/Output
Input: First digit instance
Output: Prediction: [0]
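MLPClassifier performs backpropagation internally; the gradient computation it hides can be sketched from scratch with NumPy for a single hidden layer. The sketch below is illustrative only: the XOR data, layer size, learning rate, and epoch count are assumed choices, not part of the original experiment.
# From-scratch backpropagation for one hidden layer (illustrative sketch)
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 0.5

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error w.r.t. each weight
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print("Predictions:", out.round().ravel())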
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 5: Naïve Bayes
Aim
To implement Naïve Bayes in Machine Learning.
Theory
Naïve Bayes is a probabilistic classifier based on Bayes' Theorem with strong (naïve)
independence assumptions between the features.
Code
# Naive Bayes using GaussianNB
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
X, y = load_iris(return_X_y=True)
model = GaussianNB()
model.fit(X, y)
print("Prediction:", model.predict([X[0]]))
Sample Input/Output
Input: First instance from Iris dataset
Output: Prediction: [0]
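GaussianNB hides the actual probability computation; the naïve independence assumption can be illustrated with the short NumPy sketch below, which multiplies per-feature Gaussian likelihoods by the class prior for a single query point (illustrative only; the query point is simply the first Iris instance).
# Manual Gaussian Naive Bayes for one query point (illustrative sketch)
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
x_query = X[0]
posteriors = []
for c in np.unique(y):
    Xc = X[y == c]
    prior = len(Xc) / len(X)
    mean, var = Xc.mean(axis=0), Xc.var(axis=0)
    # Naive assumption: features are conditionally independent given the class,
    # so the joint likelihood is a product of one-dimensional Gaussian densities
    likelihood = np.prod(np.exp(-(x_query - mean) ** 2 / (2 * var)) /
                         np.sqrt(2 * np.pi * var))
    posteriors.append(prior * likelihood)
print("Predicted class:", int(np.argmax(posteriors)))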
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 6: Naïve Bayes Classifier
Aim
To implement Naïve Bayes Classifier in Machine Learning.
Theory
This experiment applies the Naïve Bayes classifier to a classification task using scikit-learn's
GaussianNB, repeating the setup of Experiment 5 on a different test instance.
Code
# Same as Experiment 5 (Naive Bayes Classifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
model = GaussianNB()
model.fit(X, y)
print("Prediction:", model.predict([X[1]]))
Sample Input/Output
Input: Second instance from Iris dataset
Output: Prediction: [0]
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 7: Expectation-Maximization (EM) Algorithm
Aim
To implement Expectation-Maximization (EM) Algorithm in Machine Learning.
Theory
EM is an iterative algorithm used to find maximum likelihood or maximum a posteriori
estimates of parameters in statistical models.
Code
# EM using GaussianMixture
from sklearn.mixture import GaussianMixture
from sklearn.datasets import load_iris
X, _ = load_iris(return_X_y=True)
gm = GaussianMixture(n_components=3)
gm.fit(X)
print("Predicted Probabilities:", gm.predict_proba([X[0]]))
Sample Input/Output
Input: First instance from Iris dataset
Output: Predicted Probabilities: [[0.1 0.8 0.1]] (illustrative; the component order and values vary between runs)
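GaussianMixture runs the EM iterations internally; a minimal NumPy sketch of the E and M steps for a one-dimensional, two-component mixture is given below (illustrative only; the synthetic data, initial guesses, and iteration count are assumed).
# EM for a two-component 1-D Gaussian mixture (illustrative sketch)
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])

pi = np.array([0.5, 0.5])      # mixing weights
mu = np.array([-1.0, 1.0])     # means
var = np.array([1.0, 1.0])     # variances

def gaussian(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: responsibility of each component for each point
    r = np.stack([pi[k] * gaussian(x, mu[k], var[k]) for k in range(2)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the responsibilities
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print("Estimated means:", mu)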
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 8: KNN Algorithm
Aim
To implement KNN Algorithm in Machine Learning.
Theory
K-Nearest Neighbors is a simple, instance-based learning algorithm used for classification
and regression.
Code
# K-Nearest Neighbors Classifier
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
print("Prediction:", knn.predict([X[2]]))
Sample Input/Output
Input: Third instance from Iris dataset
Output: Prediction: [0]
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 9: Locally Weighted Regression
Aim
To implement Locally Weighted Regression in Machine Learning.
Theory
LWR is a non-parametric regression method that assigns weights to training examples
based on their distance to the query point.
Code
# Locally Weighted Regression (simplified)
import numpy as np

def kernel(x, xi, tau):
    # Gaussian weight: points closer to the query get larger weights
    return np.exp(-np.sum((x - xi) ** 2) / (2 * tau ** 2))

def predict(x_query, X, y, tau=0.5):
    weights = np.array([kernel(x_query, xi, tau) for xi in X])
    W = np.diag(weights)
    # Weighted least squares solved in closed form
    theta = np.linalg.pinv(X.T @ W @ X) @ X.T @ W @ y
    return x_query @ theta

X = np.array([[1, 1], [1, 2], [1, 3]])   # first column is the bias term
y = np.array([1, 2, 3])
x_query = np.array([1, 2.5])
print("Prediction:", predict(x_query, X, y))
Sample Input/Output
Input: Query point x = [1, 2.5]
Output: Prediction: ~2.5
Result
The implementation was successful, and the algorithm performed as expected.
Experiment 10: Logistic Regression, CNN, Random Forest Classifier, Linear Regression, SVM
Aim
To implement Logistic Regression, CNN, Random Forest Classifier, Linear Regression, and SVM in
Machine Learning.
Theory
This includes various ML algorithms: Logistic Regression for binary classification, CNN for
image processing, Random Forest for ensemble learning, Linear Regression for regression,
and SVM for maximum-margin classification.
Code
# Logistic Regression, Random Forest, Linear Regression, SVM
# (a CNN requires a deep-learning library; see the Keras sketch below)
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

log_reg = LogisticRegression(max_iter=200)
log_reg.fit(X, y)
print("Logistic Regression:", log_reg.predict([X[0]]))

rf = RandomForestClassifier()
rf.fit(X, y)
print("Random Forest:", rf.predict([X[1]]))

lin_reg = LinearRegression()
lin_reg.fit(X, y)
print("Linear Regression:", lin_reg.predict([X[2]]))

svm = SVC()
svm.fit(X, y)
print("SVM:", svm.predict([X[3]]))
Sample Input/Output
Input: Various classifiers applied to instances from the Iris dataset
Output:
Logistic Regression: [0]
Random Forest: [0]
Linear Regression: [0.9] (real-valued output, not a class label; the exact value depends on the fit)
SVM: [0]
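The Aim also lists CNN, which the scikit-learn code above does not cover; a minimal convolutional network sketch using Keras is shown below. It assumes TensorFlow is installed, and the layer sizes and single training epoch are illustrative choices rather than part of the original experiment.
# Minimal CNN on MNIST with Keras (assumes TensorFlow is installed)
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])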
Result
The implementation was successful, and the algorithm performed as expected.