
DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

CS3411 – FUNDAMENTALS OF DATA SCIENCE AND MACHINE LEARNING

Master Laboratory Manual


DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

BONAFIDE CERTIFICATE

This is to certify that this is a bonafide record of work done by

Mr./Ms………………………………………………... REG.NO. ……………………

of B.E / ELECTRONICS AND COMMUNICATION ENGINEERING

in CS3411 - FUNDAMENTALS OF DATA SCIENCE AND MACHINE LEARNING

in the IV semester during ………………………………

Staff-In-Charge                                                        Head of the Department

Submitted for practical examination held on .


Internal Examiner External Examiner
DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

VISION OF THE INSTITUTE:

To be an eminent centre for Academia, Industry and Research by imparting knowledge, relevant
practices and inculcating human values to address global challenges through novelty and sustainability.

MISSION OF THE INSTITUTE:

IM1: To create next generation leaders through effective teaching-learning methodologies and instill a
scientific spark in them to meet the global challenges.

IM2: To transform lives through deployment of emerging technology, novelty and sustainability.

IM3: To inculcate human values and ethical principles to cater to the societal needs.

IM4: To contribute towards the research ecosystem by providing a suitable, effective platform for
interaction between industry, academia and R & D establishments.

IM5: To nurture incubation centres enabling structured entrepreneurship and start-ups.

VISION OF THE DEPARTMENT:

To excel in the emerging areas of Electronics and Communication Engineering by imparting knowledge,
relevant practices and inculcating human values to transform the students into potential resources that cater
to industrial and societal development through sustainable technology growth.

MISSION OF THE DEPARTMENT:

DM1: To provide strong fundamentals and technical skills through effective teaching-learning methodologies.
DM2: To transform lives of the students by fostering ethical values, creativity and innovation to become
Entrepreneurs and establish Start-ups.
DM3: To habituate the students to focus on sustainable solutions to improve the quality of life and welfare of the
society.
DM4: To provide an ambience for research through collaborations with industry and academia.
DM5: To inculcate learning of emerging technologies for pursuing higher studies leading to lifelong learning.
PROGRAM OUTCOMES (POs):

PO1: Engineering knowledge


Apply the knowledge of mathematics, science, engineering fundamentals, and an
engineering specialization to the solution of complex engineering problems.

PO2: Problem analysis


Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural
sciences, and engineering sciences.
PO3: Design/development of solutions
Design solutions for complex engineering problems and design system components or
processes that meet the specified needs with appropriate consideration for the public health
and safety, and the cultural, societal, and environmental considerations.
PO4: Conduct investigations of complex problems
Use research-based knowledge and research methods including design of experiments,
analysis and interpretation of data, and synthesis of the information to provide valid
conclusions.
PO5: Modern tool usage
Create, select, and apply appropriate techniques, resources, and modern engineering
and IT tools including prediction and modeling to complex engineering activities with an
understanding of the limitations.
PO6: The engineer and society
Apply reasoning informed by the contextual knowledge to assess societal, health,
safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.
PO7: Environment and sustainability
Understand the impact of the professional engineering solutions in societal and
environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.

PO8: Ethics
Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
PO9: Individual and team work
Function effectively as an individual, and as a member or leader in diverse teams, and
in multidisciplinary settings.

PO10: Communication
Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective
reports and design documentation, make effective presentations, and give and receive clear
instructions.
PO11: Project management and finance
Demonstrate knowledge and understanding of the engineering and management
principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.

PO12: Life-long learning


Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

PROGRAM EDUCATIONAL OBJECTIVES (PEOs)

PEO1: Contribute to the industry as an Engineer through sound knowledge acquired in core engineering
to develop new processes and implement the solutions for industrial problems.

PEO2: Establish an organization / industry as an Entrepreneur with professionalism, leadership quality,
teamwork, and ethical values to meet the societal needs.

PEO3: Create a better future by pursuing higher education / research and develop the sustainable
products / solutions to meet the demand.

PROGRAMME SPECIFIC OUTCOMES (PSOs)

The students will demonstrate the abilities

PSO1: To analyze, design and develop quality solutions in Fundamentals of Data Science and Machine
Learning by adapting the emerging technologies.

PSO2: To innovate ideas and solutions for real-time problems in industrial and domestic automation
using Data Science and Machine Learning / IoT tools.
CS3411 - FUNDAMENTALS OF DATA SCIENCE AND MACHINE LEARNING

LIST OF EXPERIMENTS:

1. Implement naïve Bayes models


2. Implement Bayesian Networks
3. Build Regression models
4. Build decision trees and random forests
5. Build SVM models
6. Implement ensembling techniques
7. Implement clustering algorithms
8. Implement EM for Bayesian networks
9. Build simple NN models
10. Build deep learning NN models

COURSE OUTCOMES:

Upon completion of the course the students will be able to

CO1: Understand the benefits, uses, and different facets of Data Science.

CO2: Master the Data Science process, including goal definition, data analysis, and presentation.

CO3: Perform and interpret correlation and regression analyses.

CO4: Differentiate between various learning algorithms and address issues of overfitting, underfitting, and generalization.

CO5: Build, train, and optimize neural networks while addressing deep learning challenges.
INDEX OF CONTENTS

EX.NO | DATE | NAME OF THE EXPERIMENT | CO's MAPPED | PO's & PSO's MAPPED | SIGN
1  |  | Implement naïve Bayes models |  |  |
2  |  | Implement Bayesian Networks |  |  |
3  |  | Build Regression models |  |  |
4  |  | Build decision trees and random forests |  |  |
5  |  | Build SVM models |  |  |
6  |  | Implement ensembling techniques |  |  |
7  |  | Implement clustering algorithms |  |  |
8  |  | Implement EM for Bayesian networks |  |  |
9  |  | Build simple NN models |  |  |
10 |  | Build deep learning NN models |  |  |
11 |  | Implementation of Uninformed search algorithms (BFS, DFS) |  |  |
12 |  | Implementation of Informed search algorithms (A*, memory-bounded A*) |  |  |
EXP.NO: 11 DATE :

IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS (BFS, DFS)

AIM:
To implement the uninformed search algorithms BFS (Breadth-First Search) and DFS (Depth-First
Search) using Python.

APPARATUS REQUIRED:

1. IDLE PYTHON OR VS CODE


2. PC

ALGORITHM:

Step 1 : Start.
Step 2 : (BFS) Initialize an empty queue and mark all vertices as unvisited.
Step 3 : Add the starting vertex to the queue and mark it as visited.
Step 4 : While the queue is not empty:
Dequeue the next vertex from the queue.
For each adjacent vertex that is not visited, mark it as visited
and add it to the queue.
Step 5 : (DFS) Starting from the source vertex, print and mark it as visited, then recursively
visit each unvisited neighbour, backtracking when none remain.
Step 6 : Exit.

PROGRAM:
A) Breadth-First Search (BFS)

graph = {'5': ['3', '7'], '3': ['2', '4'], '7': ['8'], '2': [], '4': ['8'], '8': []}

visited = []  # List for visited nodes.

queue = []    # Initialize a queue

def bfs(visited, graph, node):  # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:  # Creating loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("The Breadth-First Search of the graph is:")
bfs(visited, graph, '5')  # function calling

OUTPUT:
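With the listing above, the traversal should print the vertices in the order:

5 3 7 2 4 8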

B) Depth-First Search (DFS)

# Using a Python dictionary to act as an adjacency list
graph = {'5': ['3', '7'], '3': ['2', '4'], '7': ['8'], '2': [], '4': ['8'], '8': []}

visited = set()  # Set to keep track of visited nodes of the graph.

def dfs(visited, graph, node):  # function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("The Depth-First Search of the graph is:")
dfs(visited, graph, '5')  # function calling

OUTPUT:
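With the listing above, the vertices should print one per line in the order: 5, 3, 2, 4, 8, 7.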

RESULT:

Thus the implementation of uninformed search algorithms of BFS and DFS is written and executed
successfully.

EXP.NO: 12 DATE :

IMPLEMENTATION OF INFORMED SEARCH ALGORITHMS (A*, MEMORY-BOUNDED A*)

AIM:
To implement informed search algorithms of A* and memory-bounded A* using Python.

APPARATUS REQUIRED:

1. IDLE PYTHON OR VS CODE

2. PC

ALGORITHM:

Step 1 : Start
Step 2 : Initialize the start and goal nodes.
Step 3 : Create an empty open list and a closed list.
Step 4 : Add the start node to the open list.
Step 5 : While the open list is not empty:
Find the node with the lowest f score in open list.
If this node is the goal, return the path.
Otherwise, move the node to the closed list and consider its neighbors.
For each neighbor, calculate its g score (the cost to move to this node
from the start) and h score (the heuristic estimate of the cost to
move from this node to the goal).
If the neighbor is not in the open list or its new g score is lower than its
old g score, update the neighbor's f, g, and h scores and set its parent
to the current node.
Add the neighbor to the open list if it is not already there.
Step 6 : If the goal node is not reached, there is no path.
Step 7 : Exit

PROGRAM:

a) A*

print("A* Implementation in Python:\n")

def f(g, h, n):
    return g[n] + h[n]

# remove from the front list, add to the visited list
def update(to_remove, to_add, m):
    to_remove.remove(m)
    to_add.append(m)

def a_star_algo(cost, heuristic, start, goals):
    path = []       # optimal path
    pathSet = []
    closed_list = []      # closed list, ex: S, A, ...
    open_list = [start]   # open list
    path_len = {}
    path_len[start] = 0
    # for back-tracking:
    parent_node = {}
    parent_node[start] = start
    while len(open_list) > 0:
        # get the node with the least f
        node = None
        for n in open_list:
            if node == None or f(path_len, heuristic, n) < f(path_len, heuristic, node):
                node = n
        if node == None:  # path does not exist
            break
        if node in goals:
            f_n = f(path_len, heuristic, node)
            reconstruct = []
            aux = node
            while parent_node[aux] != aux:
                reconstruct.append(aux)
                aux = parent_node[aux]
            reconstruct.append(start)
            reconstruct.reverse()
            pathSet.append((reconstruct, f_n))
            update(open_list, closed_list, node)
            continue
        # explore the current node
        path_cost = cost[node]
        for adj_node in range(0, len(path_cost)):
            weight = path_cost[adj_node]
            if weight > 0:
                if adj_node not in open_list and adj_node not in closed_list:
                    open_list.append(adj_node)
                    parent_node[adj_node] = node
                    path_len[adj_node] = path_len[node] + weight
                else:
                    if path_len[adj_node] > path_len[node] + weight:
                        path_len[adj_node] = path_len[node] + weight
                        parent_node[adj_node] = node
                        if adj_node in closed_list:
                            update(closed_list, open_list, adj_node)
        update(open_list, closed_list, node)

    if len(pathSet) > 0:
        # keep the cheapest of the goal paths found
        pathSet = sorted(pathSet, key=lambda x: x[1])
        path = pathSet[0][0]
    return path

# Driver code / Input
give_cost = [[0, 1, 2.1], [1, 0, 1], [3.1, 1, 0]]
start = 0
give_goals = [2, 3]
heuristic = [1, 2.1, 0]
getPath = a_star_algo(give_cost, heuristic, start, give_goals)
print(getPath)

OUTPUT:
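With the sample cost matrix and heuristic above, the optimal path printed should be [0, 1, 2] (total cost 2 via node 1, which beats the direct 0-to-2 edge of cost 2.1).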

b) Memory-Bounded A*

import copy
from heapq import heappush, heappop

# we have defined a 3 x 3 board, therefore n = 3
n = 3

# bottom, left, top, right
row = [1, 0, -1, 0]
col = [0, -1, 0, 1]

class priorityQueue:

    def __init__(self):
        self.heap = []

    # Inserts a new key 'k'
    def push(self, k):
        heappush(self.heap, k)

    # remove minimum element
    def pop(self):
        return heappop(self.heap)

    # Check if queue is empty
    def empty(self):
        if not self.heap:
            return True
        else:
            return False

class node:

    def __init__(self, parent, mat, empty_tile_pos, cost, level):
        self.parent = parent                  # parent node of current node
        self.mat = mat                        # matrix
        self.empty_tile_pos = empty_tile_pos  # position of empty tile
        self.cost = cost                      # total misplaced tiles
        self.level = level                    # number of moves so far

    def __lt__(self, nxt):
        return self.cost < nxt.cost

# Calculate number of non-blank tiles not in their goal position
def calculateCost(mat, final) -> int:
    count = 0
    for i in range(n):
        for j in range(n):
            if (mat[i][j]) and (mat[i][j] != final[i][j]):
                count += 1
    return count

def newNode(mat, empty_tile_pos, new_empty_tile_pos,
            level, parent, final) -> node:
    new_mat = copy.deepcopy(mat)
    x1 = empty_tile_pos[0]
    y1 = empty_tile_pos[1]
    x2 = new_empty_tile_pos[0]
    y2 = new_empty_tile_pos[1]
    new_mat[x1][y1], new_mat[x2][y2] = new_mat[x2][y2], new_mat[x1][y1]

    # Set number of misplaced tiles
    cost = calculateCost(new_mat, final)
    new_node = node(parent, new_mat, new_empty_tile_pos, cost, level)
    return new_node

# print the N x N matrix
def printMatrix(mat):
    for i in range(n):
        for j in range(n):
            print("%d " % (mat[i][j]), end=" ")
        print()

def isSafe(x, y):
    return x >= 0 and x < n and y >= 0 and y < n

def printPath(root):
    if root == None:
        return
    printPath(root.parent)
    printMatrix(root.mat)
    print()

def solve(initial, empty_tile_pos, final):
    pq = priorityQueue()

    # Create the root node
    cost = calculateCost(initial, final)
    root = node(None, initial, empty_tile_pos, cost, 0)
    pq.push(root)

    while not pq.empty():
        minimum = pq.pop()

        # If minimum is the answer node
        if minimum.cost == 0:
            # Print the path from root to destination
            printPath(minimum)
            return

        # Produce all possible children
        for i in range(4):
            new_tile_pos = [
                minimum.empty_tile_pos[0] + row[i],
                minimum.empty_tile_pos[1] + col[i], ]
            if isSafe(new_tile_pos[0], new_tile_pos[1]):
                # Create a child node
                child = newNode(minimum.mat,
                                minimum.empty_tile_pos,
                                new_tile_pos,
                                minimum.level + 1,
                                minimum, final,)
                # Add child to list of live nodes
                pq.push(child)

# Driver Code
# 0 represents the blank space

# Initial state
initial = [[2, 8, 3],
           [1, 6, 4],
           [7, 0, 5]]

# Final State
final = [[1, 2, 3],
         [8, 0, 4],
         [7, 6, 5]]

# Blank tile position during start state
empty_tile_pos = [2, 1]

# Function call
solve(initial, empty_tile_pos, final)

OUTPUT:

RESULT:

Thus the implementation of A* and memory-bounded A* using Python is written and executed successfully.
EXP.NO: 01 DATE :

IMPLEMENT NAIVE BAYES MODELS

AIM:
To implement a naïve Bayes model using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:

Step 1: Start
Step 2: Import the dataset and necessary dependencies
Step 3: Calculate the prior probability of the classes, P(y)
Step 4: Calculate the likelihood table for all features
Step 5: Calculate the posterior probability for each class using the naive Bayes equation
Step 6: End
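
Note: Steps 3 to 5 can be made concrete by computing the quantities by hand. The following is a minimal illustrative sketch (not part of the original listing), assuming the same iris data used in the program below:

# Hand-rolled Gaussian naive Bayes (illustrative sketch)
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
classes = np.unique(y)
priors = {c: np.mean(y == c) for c in classes}  # Step 3: prior P(y)
# Step 4: per-class feature means and variances (the "likelihood table")
stats = {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0)) for c in classes}

def log_posterior(x, c):
    # Step 5: log prior + sum of log Gaussian likelihoods (up to a constant)
    mu, var = stats[c]
    return np.log(priors[c]) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

sample = X[0]
print("predicted class:", max(classes, key=lambda c: log_posterior(sample, c)))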

PROGRAM:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
gnb = GaussianNB()
y_pred = gnb.fit(X_train, y_train).predict(X_test)
print("Number of mislabeled points out of a total %d points : %d"
      % (X_test.shape[0], (y_test != y_pred).sum()))

OUTPUT:
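For this split (test_size=0.5, random_state=0), the program should report: Number of mislabeled points out of a total 75 points : 4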

RESULT:

Thus the implementation of the naïve Bayes model using Python is written and executed successfully.
EXP.NO: 02 DATE :

IMPLEMENT BAYESIAN NETWORKS

AIM:
To implement Bayesian Networks using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:

Step 1: Start
Step 2: Construct the Bayesian network by specifying the nodes and their
conditional probability distributions.
Step 3: Identify the set of observed variables (i.e., the evidence).
Step 4: For each unobserved variable, compute its posterior probability
given the evidence using Bayes' rule and the conditional
probability distributions of the variable and its parents.
Step 5: Return the posterior probabilities of interest.
Step 6: Stop

PROGRAM:

import pandas as pd              # for data manipulation
import networkx as nx            # for drawing graphs
import matplotlib.pyplot as plt  # for drawing graphs
# for creating Bayesian Belief Networks (BBN)
from pybbn.graph.dag import Bbn
from pybbn.graph.edge import Edge, EdgeType
from pybbn.graph.jointree import EvidenceBuilder
from pybbn.graph.node import BbnNode
from pybbn.graph.variable import Variable
from pybbn.pptc.inferencecontroller import InferenceController

# Set Pandas options to display more columns
pd.options.display.max_columns = 50

# Read in the weather data csv
df = pd.read_csv('weatherAUS.csv', encoding='utf-8')

# Drop records where target RainTomorrow=NaN
df = df[pd.isnull(df['RainTomorrow']) == False]

# For other columns with missing values, fill them in with the column mean
df = df.fillna(df.mean())

# Create bands for variables that we want to use in the model
df['WindGustSpeedCat'] = df['WindGustSpeed'].apply(lambda x: '0.<=40' if x <= 40 else
                                                   '1.40-50' if 40 < x <= 50 else '2.>50')
df['Humidity9amCat'] = df['Humidity9am'].apply(lambda x: '1.>60' if x > 60 else '0.<=60')
df['Humidity3pmCat'] = df['Humidity3pm'].apply(lambda x: '1.>60' if x > 60 else '0.<=60')

# Create nodes by manually typing in probabilities
H9am = BbnNode(Variable(0, 'H9am', ['<=60', '>60']), [0.30658, 0.69342])
H3pm = BbnNode(Variable(1, 'H3pm', ['<=60', '>60']), [0.92827, 0.07173, 0.55760, 0.44240])
W = BbnNode(Variable(2, 'W', ['<=40', '40-50', '>50']), [0.58660, 0.24040, 0.17300])
RT = BbnNode(Variable(3, 'RT', ['No', 'Yes']), [0.92314, 0.07686, 0.89072, 0.10928, 0.76008,
0.23992, 0.64250, 0.35750, 0.49168, 0.50832, 0.32182, 0.67818])
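
Note: the listing stops after defining the nodes. A possible continuation (a sketch, assuming pybbn's graph and join-tree API behaves as in its published examples) wires the nodes into a network and reads off the marginal probabilities:

# Wire the nodes into a network and run exact inference (sketch)
bbn = Bbn() \
    .add_node(H9am).add_node(H3pm).add_node(W).add_node(RT) \
    .add_edge(Edge(H9am, H3pm, EdgeType.DIRECTED)) \
    .add_edge(Edge(H3pm, RT, EdgeType.DIRECTED)) \
    .add_edge(Edge(W, RT, EdgeType.DIRECTED))

# Convert the BBN to a join tree
join_tree = InferenceController.apply(bbn)

# Print the marginal probability table for each node
for node in join_tree.get_bbn_nodes():
    print(node)
    print(join_tree.get_bbn_potential(node))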

OUTPUT:

<ipython-input-26-4b316ae6c79a>:23: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
  df=df.fillna(df.mean())

RESULT:

Thus the implementation of Bayesian Networks using Python is written and executed successfully.

EXP.NO: 03 DATE :

BUILD REGRESSION MODELS

AIM:
To build Regression Models using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:

Step 1: Start the program.
Step 2: Import the necessary modules.
Step 3: Create two arrays namely x and y.
Step 4: Fit a LinearRegression model and, using the plot() function, display the output
of the code (see the sketch after the program).
Step 5: Display the result.
Step 6: Stop the program.

PROGRAM:

import numpy as np
from sklearn.linear_model import LinearRegression

x = [[0, 1], [5, 1], [15, 2], [25, 5], [35, 11], [45, 15], [55, 34], [60, 35]]
y = [4, 5, 20, 14, 32, 22, 38, 43]
x, y = np.array(x), np.array(y)
model = LinearRegression().fit(x, y)
r_sq = model.score(x, y)
print(f"coefficient of determination: {r_sq}")
print(f"intercept: {model.intercept_}")
print(f"coefficients: {model.coef_}")
y_pred = model.predict(x)
print(f"predicted response:\n{y_pred}")

OUTPUT:

RESULT:

Thus the building of regression models using Python is written and executed successfully.

EXP.NO: 04 DATE :

BUILD DECISION TREES AND RANDOM FORESTS

AIM:

To build decision trees and random forests using Python.
APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:

Step 1: Start the program.
Step 2: Import the necessary modules.
Step 3: Generate sample data and fit a decision tree classifier to it.
Step 4: Use the matplotlib package to visualize the classifier's decision boundaries.
Step 5: Stop the program.

PROGRAM:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');

from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier().fit(X, y)

def visualize_classifier(model, X, y, ax=None, cmap='rainbow'):
    ax = ax or plt.gca()

    # Plot the training points
    ax.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=cmap,
               clim=(y.min(), y.max()), zorder=3)
    ax.axis('tight')
    ax.axis('off')
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()

    # fit the estimator
    model.fit(X, y)
    xx, yy = np.meshgrid(np.linspace(*xlim, num=200),
                         np.linspace(*ylim, num=200))
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

    # Create a color plot with the results
    n_classes = len(np.unique(y))
    contours = ax.contourf(xx, yy, Z, alpha=0.3,
                           levels=np.arange(n_classes + 1) - 0.5,
                           cmap=cmap, clim=(y.min(), y.max()),
                           zorder=1)

    ax.set(xlim=xlim, ylim=ylim)

visualize_classifier(DecisionTreeClassifier(), X, y)
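
Note: the listing stops at a single decision tree. To cover the random-forest half of the experiment, a minimal sketch (an addition, not in the original listing) reusing the same data and visualizer:

from sklearn.ensemble import RandomForestClassifier

# An ensemble of 100 randomized trees generally smooths the decision boundaries
forest = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_classifier(forest, X, y)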

OUTPUT:

RESULT:

Thus the building of decision trees and random forests using Python is written and executed successfully.
EXP.NO: 05 DATE :

BUILD SVM MODELS

AIM:
To build SVM models using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:
Step 1: Start
Step 2: Load the dataset you want to use for training the model.
Step 3: Preprocess the data to make it ready for training the model.
Step 4: Split the data into two sets: one for training the model and the other for testing it.
Step 5: Scale the features so that they are on comparable ranges.
Step 6: Once the SVM model is defined, train it using the training set.
Step 7: Use cross-validation techniques like grid search to find the optimal values
for the hyperparameters.
Step 8: Stop.

PROGRAM:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)

from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

OUTPUT:

RESULT:
Thus the building of SVM models using Python is written and executed successfully.

EXP.NO: 06 DATE :

IMPLEMENT ENSEMBLING TECHNIQUES

AIM:
To implement ensembling techniques using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:

Step 1: In bagging, multiple models are trained on bootstrap samples and their predictions
are combined to make the final prediction.
Step 2: In boosting, multiple weak models are trained sequentially, each correcting the
errors of the previous model.
Step 3: In stacking, a meta-model is trained to combine the predictions of the base models
into the final prediction (see the sketch after the program).
Step 4: In averaging, multiple models are trained independently and their predictions are
averaged to make the final prediction.
Step 5: In blending, multiple models are trained independently and their predictions are
combined using a weighted average.
Step 6: In an ensemble of ensembles, multiple ensembles are created using different
techniques and combined to create a final prediction.

PROGRAM:

# importing utility modules
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# importing machine learning models for prediction
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.linear_model import LinearRegression

# loading train data set in dataframe from train_data.csv file
df = pd.read_csv("train_data.csv")

# getting target data from the dataframe
target = df["target"]

# getting train data from the dataframe
train = df.drop("target", axis=1)

# Splitting the train data into training and validation datasets
X_train, X_test, y_train, y_test = train_test_split(
    train, target, test_size=0.20)

# initializing all the model objects with default parameters
model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

# training all the models on the training dataset
model_1.fit(X_train, y_train)
model_2.fit(X_train, y_train)
model_3.fit(X_train, y_train)

# predicting the output on the validation dataset
pred_1 = model_1.predict(X_test)
pred_2 = model_2.predict(X_test)
pred_3 = model_3.predict(X_test)

# final prediction after averaging on the prediction of all 3 models
pred_final = (pred_1 + pred_2 + pred_3) / 3.0

# printing the mean squared error between real value and predicted value
print(mean_squared_error(y_test, pred_final))
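
Note: the program above demonstrates the averaging technique. For stacking (Step 3 of the algorithm), a minimal sketch, assuming scikit-learn's StackingRegressor is available and reusing the train/validation split from the program:

from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge

# Base models feed their predictions to a meta-model (Ridge here)
stack = StackingRegressor(
    estimators=[('lr', LinearRegression()), ('rf', RandomForestRegressor())],
    final_estimator=Ridge())
stack.fit(X_train, y_train)
print(mean_squared_error(y_test, stack.predict(X_test)))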

OUTPUT:

RESULT:
Thus the implementation of ensembling techniques using Python is written and executed successfully.
EXP.NO: 07 DATE :

IMPLEMENT CLUSTERING ALGORITHMS

AIM:
To implement clustering algorithms using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:

Step 1: Start the program.
Step 2: Import the necessary modules.
Step 3: Generate a sample dataset for clustering.
Step 4: Construct the clusters using AffinityPropagation() (a k-means variant is sketched
after the program).
Step 5: Display the result.

PROGRAM:

# affinity propagation clustering
from numpy import unique
from numpy import where
from sklearn.datasets import make_classification
from sklearn.cluster import AffinityPropagation
from matplotlib import pyplot

# define dataset
X, _ = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1, random_state=4)

# define the model
model = AffinityPropagation(damping=0.9)

# fit the model
model.fit(X)

# assign a cluster to each example
yhat = model.predict(X)

# retrieve unique clusters
clusters = unique(yhat)

# create scatter plot for samples from each cluster
for cluster in clusters:
    # get row indexes for samples with this cluster
    row_ix = where(yhat == cluster)
    # create scatter of these samples
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])

# show the plot
pyplot.show()
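
Note: for comparison with Step 4, the same data can be clustered with k-means. A minimal sketch (an addition, reusing X, unique, where and pyplot from the program above):

from sklearn.cluster import KMeans

# k-means with 2 clusters, matching the two underlying classes in X
model_km = KMeans(n_clusters=2, n_init=10, random_state=4)
yhat_km = model_km.fit_predict(X)
for cluster in unique(yhat_km):
    row_ix = where(yhat_km == cluster)
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()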

OUTPUT:

RESULT:
Thus the implementation of clustering algorithms using Python is written and executed successfully.
EXP.NO: 08 DATE :

IMPLEMENT EM FOR BAYESIAN NETWORKS

AIM:
To implement EM for Bayesian Networks using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC

ALGORITHM:

Step 1: Start the program.
Step 2: Import the necessary modules.
Step 3: Read the csv file containing the data.
Step 4: Display the data and compute EM for the Bayesian network.
Step 5: Stop the program.

PROGRAM:

import numpy as np
import pandas as pd
import csv
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.models import BayesianModel
from pgmpy.inference import VariableElimination

heartDisease = pd.read_csv('heart.csv')
heartDisease = heartDisease.replace('?', np.nan)

print('Sample instances from the dataset are given below')
print(heartDisease.head())

print('\n Attributes and datatypes')
print(heartDisease.dtypes)

model = BayesianModel([('age', 'heartdisease'), ('sex', 'heartdisease'), ('exang', 'heartdisease'),
                       ('cp', 'heartdisease'), ('heartdisease', 'restecg'), ('heartdisease', 'chol')])

print('\nLearning CPD using Maximum likelihood estimators')
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

print('\n Inferencing with Bayesian Network:')
HeartDiseasetest_infer = VariableElimination(model)

print('\n 1. Probability of HeartDisease given evidence= restecg')
q1 = HeartDiseasetest_infer.query(variables=['heartdisease'], evidence={'restecg': 1})
print(q1)

print('\n 2. Probability of HeartDisease given evidence= cp ')
q2 = HeartDiseasetest_infer.query(variables=['heartdisease'], evidence={'cp': 2})
print(q2)
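
Note: the listing fits the CPDs with maximum likelihood. To estimate them with EM as the experiment title suggests (useful when the data contains missing values), recent pgmpy releases also provide an ExpectationMaximization estimator. A hedged sketch, assuming that estimator is available in the installed pgmpy version:

from pgmpy.estimators import ExpectationMaximization

# EM alternates computing expected counts with re-estimating the CPDs,
# which tolerates missing entries and latent variables.
em = ExpectationMaximization(model, heartDisease)
cpds = em.get_parameters()
print(cpds)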

OUTPUT:

RESULT:
Thus the implementation of EM for Bayesian Networks using Python is written and executed successfully.
EXP.NO: 09 DATE :

BUILD SIMPLE NN MODELS

AIM:
To build simple NN models using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC
3. DATA SET

ALGORITHM:

Step 1 : Start
Step 2 : Create a sigmoid function.
Step 3 : Initialize the required parameters, weight and bias.
Step 4 : Create a forward_propagation function which takes x and the initialized parameters
as input and returns a2, cache.
Step 5 : Create a calculate_cost function which takes a2, y as input parameters and returns
the cost.
Step 6 : Now create a backward_propagation function which takes x, y, cache and
parameters as input parameters and returns grads.
Step 7 : Create an update_parameters function which takes parameters, grads, learning_rate
as input parameters and returns the updated values.
Step 8 : Now create a model which takes the input parameters x, y, n_x, n_h, n_y,
num_of_iters, learning_rate and returns the parameters as output.
Step 9 : Create a predict function which takes x and parameters as input parameters and
returns the prediction as output.
Step 10 : Stop

PROGRAM:

# Import python libraries required in this example:
import numpy as np
from scipy.special import expit as activation_function
from scipy.stats import truncnorm

# DEFINE THE NETWORK

# Generate random numbers within a truncated (bounded) normal distribution:
def truncated_normal(mean=0, sd=1, low=0, upp=10):
    return truncnorm(
        (low - mean) / sd, (upp - mean) / sd, loc=mean, scale=sd)

# Create the 'Nnetwork' class and define its arguments:
# Set the number of neurons/nodes for each layer
# and initialize the weight matrices:
class Nnetwork:

    def __init__(self,
                 no_of_in_nodes,
                 no_of_out_nodes,
                 no_of_hidden_nodes,
                 learning_rate):
        self.no_of_in_nodes = no_of_in_nodes
        self.no_of_out_nodes = no_of_out_nodes
        self.no_of_hidden_nodes = no_of_hidden_nodes
        self.learning_rate = learning_rate
        self.create_weight_matrices()

    def create_weight_matrices(self):
        """ A method to initialize the weight matrices of the neural network """
        rad = 1 / np.sqrt(self.no_of_in_nodes)
        X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)
        self.weights_in_hidden = X.rvs((self.no_of_hidden_nodes,
                                        self.no_of_in_nodes))
        rad = 1 / np.sqrt(self.no_of_hidden_nodes)
        X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)
        self.weights_hidden_out = X.rvs((self.no_of_out_nodes,
                                         self.no_of_hidden_nodes))

    def train(self, input_vector, target_vector):
        pass  # More work is needed to train the network

    def run(self, input_vector):
        """
        running the network with an input vector 'input_vector'.
        'input_vector' can be tuple, list or ndarray
        """
        # Turn the input vector into a column vector:
        input_vector = np.array(input_vector, ndmin=2).T
        # activation_function() implements the expit function,
        # which is an implementation of the sigmoid function:
        input_hidden = activation_function(self.weights_in_hidden @ input_vector)
        output_vector = activation_function(self.weights_hidden_out @ input_hidden)
        return output_vector

# RUN THE NETWORK AND GET A RESULT

# Initialize an instance of the class:
simple_network = Nnetwork(no_of_in_nodes=2,
                          no_of_out_nodes=2,
                          no_of_hidden_nodes=4,
                          learning_rate=0.6)

# Run simple_network for arrays, lists and tuples with shape (2)
# and get a result:
simple_network.run([(3, 4)])
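
Note: the train() method above is intentionally left as a stub. One possible single-example backpropagation step (an illustrative sketch, not from the original manual):

def train_step(net, input_vector, target_vector):
    # One gradient-descent update for a single training example (sketch)
    x = np.array(input_vector, ndmin=2).T
    t = np.array(target_vector, ndmin=2).T
    hidden = activation_function(net.weights_in_hidden @ x)
    output = activation_function(net.weights_hidden_out @ hidden)
    # error signals scaled by the sigmoid derivative a * (1 - a)
    delta_out = (t - output) * output * (1.0 - output)
    delta_hidden = (net.weights_hidden_out.T @ delta_out) * hidden * (1.0 - hidden)
    net.weights_hidden_out += net.learning_rate * delta_out @ hidden.T
    net.weights_in_hidden += net.learning_rate * delta_hidden @ x.T

train_step(simple_network, [3, 4], [0.1, 0.9])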

OUTPUT:
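The run() call returns a 2 x 1 array of output activations; the exact values vary from run to run because the weight matrices are randomly initialized.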

RESULT:
Thus the building of simple NN models using Python is written and executed successfully.

EXP.NO: 10 DATE :

BUILD DEEP LEARNING NN MODELS

AIM:
To build deep learning NN models using Python.

APPARATUS REQUIRED:

1. GOOGLE COLAB
2. PC
3. DATA SET

ALGORITHM:
Step 1: Start
Step 2: Import the necessary libraries
Step 3: Set the random seed for reproducibility
Step 4: Define the model architecture
Step 5: Initialize the weights and biases for the input and hidden layers
Step 6: Initialize the weights and biases for the hidden and output layers
Step 7: Define the input data, labels and hyperparameters
Step 8: Define the TensorFlow variables for the weights and biases
Step 9: Define the forward propagation and backpropagation algorithms
Step 10: Train the model and make some predictions with the trained model
Step 11: End

PROGRAM:

from sklearn.neural_network import MLPClassifier

X = [[0., 0.], [1., 1.]]
y = [0, 1]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
                    hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)
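
Note: after fitting, the model can classify new samples; a short usage note (an addition to the original listing):

# Predict labels for two unseen points
print(clf.predict([[2., 2.], [-1., -2.]]))  # expected output: [1 0]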

OUTPUT:

RESULT:
Thus the building of deep learning NN models using Python is written and executed successfully.
