Ex. No : 1 IMPLEMENTATION OF REGRESSION MODEL USING NEURAL NETWORK.
Date : 11/08/2023
AIM:
To write a Python program to implement a regression model using a neural
network.
PROCEDURE:
Step 1: Import packages and classes.
Step 2: Provide the data and split it into input (X) and output (y) variables.
Step 3: Define the Keras Model.
Step 4: Compile the Keras Model.
Step 5: Fit the Keras Model on the Dataset.
Step 6: Evaluate the Keras Model.
SOURCE CODE:
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# load the dataset and split it into input (X) and output (y) variables
dataset = loadtxt('/content/DL dataset.csv', delimiter=',')
X = dataset[:,0:8]
y = dataset[:,8]
# define the Keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile, fit and evaluate the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=150, batch_size=10)
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
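Note: the final Dense layer above uses a sigmoid activation with binary cross-entropy, which is a binary-classification head for the diabetes data. If an actual regression output were required, the usual change is a linear output unit with a mean-squared-error loss. A minimal sketch of that variant is given below; it assumes the same 8-feature input X and a hypothetical continuous target y_cont, and is not part of the recorded program.
# Sketch only: regression-style head (y_cont is an assumed continuous target)
reg_model = Sequential()
reg_model.add(Dense(12, input_shape=(8,), activation='relu'))
reg_model.add(Dense(8, activation='relu'))
reg_model.add(Dense(1))                # linear output for regression
reg_model.compile(loss='mse', optimizer='adam', metrics=['mae'])
# reg_model.fit(X, y_cont, epochs=150, batch_size=10)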
OUTPUT:
Accuracy: 77.86
X values :
Pregnancies Glucose BloodPressure SkinThickness Insulin BMI DiabetesPedigreeFunction Age
6 148 72 35 0 33.6 0.627 50
1 85 66 29 0 26.6 0.351 31
8 183 64 0 0 23.3 0.672 32
1 89 66 23 94 28.1 0.167 21
0 137 40 35 168 43.1 2.288 33
5 116 74 0 0 25.6 0.201 30
3 78 50 32 88 31 0.248 26
10 115 0 0 0 35.3 0.134 29
2 197 70 45 543 30.5 0.158 53
Y values :
Outcome
1
0
1
0
1
0
1
0
1
RESULT:
The implementation of the regression model using a neural network is successfully executed and verified.
Ex. No : 2 IMPLEMENTATION OF ADDITION OF TWO NUMBERS.
Date : 21/08/2023
AIM:
To write a Python program to implement the addition of two numbers using a neural network.
PROCEDURE:
Step 1: Import packages and classes.
Step 2: Prepare the data.
Step 3: Define the Neural Network.
Step 4: Compile the Model.
Step 5: Train the Model.
Step 6: Test the Model.
Step 7: Print the Values.
SOURCE CODE:
import tensorflow as tf
from tensorflow import keras
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
num_samples = 1000
X_train = np.random.rand(num_samples, 2)
y_train = X_train[:, 0] + X_train[:, 1]
model = Sequential()
model.add(Dense(8, input_shape=(2,), activation='relu'))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
batch_size = 32
epochs = 100
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1)
test_input = np.array([[1, 2], [0.3, 0.4]])
predicted_sum = model.predict(test_input)
print("Predicted sums of [[1, 2], [0.3, 0.4]] array = “)
print(predicted_sum)
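As a quick check (not part of the recorded program), the predictions can be compared with the exact sums; the sketch below reuses test_input and predicted_sum from above.
# Sketch only: compare the network's predictions with the exact sums
true_sums = test_input[:, 0] + test_input[:, 1]
abs_errors = np.abs(predicted_sum.flatten() - true_sums)
print("Exact sums      =", true_sums)
print("Absolute errors =", abs_errors)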
OUTPUT:
Predicted sums of [[1, 2], [0.3, 0.4]] array = [[2.9054897] [0.7184528]]
RESULT:
The implementation of the addition of two numbers in Python is successfully executed and verified.
Ex. No : 3
IMPLEMENTATION OF PERCEPTRON.
Date :04/09/2023
AIM:
To write a Python program to implement a Perceptron.
PROCEDURE:
Step 1: Import packages and classes.
Step 2: Define the Dataset.
Step 3: Define the Model.
Step 4: Fit the Model.
Step 5: Define the Model Evaluation Method.
Step 6: Evaluate the Model.
Step 7: Summarize the result.
Step 8: Define New Data.
Step 9: Make a Prediction.
Step 10: Summarize the Prediction.
SOURCE CODE:
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from numpy import mean
from numpy import std
X, y = make_classification(n_samples=1000, n_features=10, n_informative=10,
n_redundant=0, random_state=1)
model = Perceptron()
model.fit(X, y)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
row = [0.12777556, -3.64400522, -2.23268854, -1.82114386, 1.75466361, 0.1243966, 1.03397657, 2.35822076, 1.01001752, 0.56768485]
yhat = model.predict([row])
print('Predicted Class: %d' % yhat)
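For reference, the perceptron learning rule itself is only a few lines. The sketch below is a from-scratch illustration assuming a binary {0, 1} target; it is separate from the scikit-learn Perceptron used above, and all names in it are illustrative.
# Sketch only: classic perceptron update rule (not the library code)
import numpy as np

def perceptron_train(X, y, lr=1.0, epochs=10):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            y_pred = 1 if np.dot(w, xi) + b > 0 else 0
            update = lr * (yi - y_pred)   # zero when the sample is classified correctly
            w += update * xi
            b += update
    return w, b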
OUTPUT:
Mean Accuracy : 0.847 (0.052)
Predicted class : 1 (Here 1 refers to True and 0 refers to False)
RESULT:
The implementation of the Perceptron is successfully executed and verified.
Ex. No : 4 IMPLEMENTATION OF FEED FORWARD NEURAL NETWORK.
Date : 11/09/2023
AIM:
To write a Python program to implement a feed-forward neural network.
PROCEDURE:
Step 1: Import Packages and Libraries.
Step 2: Loading the MNIST dataset from Keras.
Step 3: Data Exploration.
Step 4: Data Preprocessing.
Step 5: Building the Neural Network Model.
Step 6: Model Summary.
Step 7: Compiling the Model.
Step 8: Model Training.
Step 9: Evaluating the Model.
Step 10: Visualizing a Random Test Image.
Step 11: Making Predictions.
Step 12: Plotting Training History.
SOURCE CODE:
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
%matplotlib inline
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
plt.matshow(x_train[0])
x_train = x_train / 255
x_test = x_test / 255
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')])
model.summary()
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history=model.fit(x_train, y_train,validation_data=(x_test,y_test),epochs=10)
test_loss,test_acc=model.evaluate(x_test,y_test)
print("Loss=%.3f" %test_loss)
print("Accuracy=%.3f" %test_acc)
n=random.randint(0,9999)
plt.imshow(x_test[n])
plt.show()
predicted_value=model.predict(x_test)
print("hand written number in the image is= %d",
%np.argmax(predicted_value[n]))
history.history.keys()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Graph Visualization')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('accuracy/loss')
plt.xlabel('epoch')
plt.legend(['accuracy', 'val_accuracy', 'loss', 'val_loss'], loc='upper left')
plt.show()
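Once trained, the network can be saved and reloaded for later inference. A minimal sketch is shown below; the file name mnist_ffnn.h5 is only an illustration and is not part of the recorded program.
# Sketch only: persist the trained model and reload it for inference
model.save('mnist_ffnn.h5')                              # file name is illustrative
reloaded = keras.models.load_model('mnist_ffnn.h5')
print(np.argmax(reloaded.predict(x_test[:1]), axis=1))   # predict one test digit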
OUTPUT:
Prediction Result :
Handwritten number in the image is = 6
Loss = 0.112
Accuracy = 0.967
Graph Visualization :
RESULT:
The implementation of the Feed Forward Neural Network is successfully
executed and verified.
Ex. No : 5
IMPLEMENTATION OF TRANSFER LEARNING.
Date :09/10/2023
AIM:
To write a Python program to implement Transfer Learning.
PROCEDURE:
Step 1: Set up the Kaggle API.
Step 2: Download the Dogs vs Cats Dataset.
Step 3: Extract the downloaded dataset.
Step 4: Prepare the data and build a model using VGG16 as the base.
Step 5: Preprocess and augment data using generators.
Step 6: Train the model and visualize its performance.
Step 7: Make predictions on a test image.
SOURCE CODE:
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!kaggle datasets download -d salader/dogs-vs-cats
import zipfile
zip_ref = zipfile.ZipFile('/content/dogs-vs-cats.zip', 'r')
zip_ref.extractall('/content')
zip_ref.close()
import tensorflow
from tensorflow import keras
from keras import Sequential
from keras.layers import Dense,Flatten
from keras.applications.vgg16 import VGG16
conv_base = VGG16(
weights='imagenet',
include_top = False,
input_shape=(150,150,3)
)
conv_base.summary()
model = Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(256,activation='relu'))
model.add(Dense(1,activation='sigmoid'))
model.summary()
conv_base.trainable = False
train_ds = keras.utils.image_dataset_from_directory(
directory = '/content/train',
labels='inferred',
label_mode = 'int',
batch_size=32,
image_size=(150,150)
)
validation_ds = keras.utils.image_dataset_from_directory(
directory = '/content/test',
labels='inferred',
label_mode = 'int',
batch_size=32,
image_size=(150,150)
)
def process(image, label):
    image = tensorflow.cast(image/255., tensorflow.float32)
    return image, label
train_ds = train_ds.map(process)
validation_ds = validation_ds.map(process)
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
history = model.fit(train_ds,epochs=10,validation_data=validation_ds)
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'],color='red',label='train')
plt.plot(history.history['val_accuracy'],color='blue',label='validation')
plt.legend()
plt.show()
plt.plot(history.history['loss'],color='red',label='train')
plt.plot(history.history['val_loss'],color='blue',label='validation')
plt.legend()
plt.show()
import cv2
from google.colab.patches import cv2_imshow
img = cv2.imread('/content/dog123.jpeg')
cv2_imshow(img)
from keras.preprocessing import image
import numpy as np
test_image = image.load_img('/content/dog123.jpeg',target_size=(150,150))
plt.imshow(test_image)
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image,axis=0)
result = model.predict(test_image)
if result >= 0.5:
    print("Dog")
else:
    print("Cat")
OUTPUT:
Graph Visualization :
Dog :
RESULT:
The implementation of Transfer Learning is successfully executed and
verified.
Ex. No : 6 IMPLEMENTATION OF IMAGE CLASSIFICATION USING CNN.
Date : 16/10/2023
AIM:
To write a Python program to implement image classification using CNN.
PROCEDURE:
Step 1: Import The Libraries.
Step 2: Visualize the Images.
Step 3: Prepare the Dataset.
Step 4: Building the CNN Model.
Step 5: Compile and Fit the Model.
Step 6: Plot the Error and Accuracy.
Step 7: Test the Model.
SOURCE CODE:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from keras.layers import *
from keras.models import *
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
import os, shutil
import warnings
warnings.filterwarnings('ignore')
import os
!pip install kaggle
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d misrakahmed/vegetable-image-dataset
from zipfile import ZipFile
dataset="/content/vegetable-image-dataset.zip"
with ZipFile(dataset, 'r') as zip:
    zip.extractall()
    print("Data is extracted")
train_path = "/content/Vegetable Images/train"
validation_path = "/content/Vegetable Images/validation"
test_path = "/content/Vegetable Images/test"
image_categories = os.listdir('/content/Vegetable Images/train')
def plot_images(image_categories):
    plt.figure(figsize=(12, 12))
    for i, cat in enumerate(image_categories):
        image_path = train_path + '/' + cat
        images_in_folder = os.listdir(image_path)
        first_image_of_folder = images_in_folder[0]
        first_image_path = image_path + '/' + first_image_of_folder
        img = image.load_img(first_image_path)
        img_arr = image.img_to_array(img)/255.0
        plt.subplot(4, 4, i+1)
        plt.imshow(img_arr)
        plt.title(cat)
        plt.axis('off')
    plt.show()
# Call the function
plot_images(image_categories)
# 1. Train Set
train_gen = ImageDataGenerator(rescale = 1.0/255.0)
train_image_generator = train_gen.flow_from_directory(
train_path,
target_size=(150, 150),
batch_size=32,
class_mode='categorical')
# 2. Validation Set
val_gen = ImageDataGenerator(rescale = 1.0/255.0)
val_image_generator = train_gen.flow_from_directory(
validation_path,
target_size=(150, 150),
batch_size=32,
class_mode='categorical')
# 3. Test Set
test_gen = ImageDataGenerator(rescale = 1.0/255.0)
test_image_generator = train_gen.flow_from_directory(
test_path,
target_size=(150, 150),
batch_size=32,
class_mode='categorical')
class_map = dict([(v, k) for k, v in train_image_generator.class_indices.items()])
print(class_map)
# Expected output of print(class_map):
# {0: 'Bean', 1: 'Bitter_Gourd', 2: 'Bottle_Gourd', 3: 'Brinjal', 4: 'Broccoli',
#  5: 'Cabbage', 6: 'Capsicum', 7: 'Carrot', 8: 'Cauliflower', 9: 'Cucumber',
#  10: 'Papaya', 11: 'Potato', 12: 'Pumpkin', 13: 'Radish', 14: 'Tomato'}
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=3, strides=1, padding='same',
activation='relu', input_shape=[150, 150, 3]))
model.add(MaxPooling2D(2, ))
model.add(Conv2D(filters=64, kernel_size=3, strides=1, padding='same',
activation='relu'))
model.add(MaxPooling2D(2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(128, activation='relu'))
model.add(Dense(15, activation='softmax'))
model.summary()
early_stopping = keras.callbacks.EarlyStopping(patience=5)
model.compile(optimizer='Adam', loss='categorical_crossentropy',
metrics='accuracy')
hist = model.fit(train_image_generator,
epochs=100,
verbose=1,
validation_data=val_image_generator,
steps_per_epoch = 15000//32,
validation_steps = 3000//32,
callbacks=early_stopping)
h = hist.history
plt.style.use('ggplot')
plt.figure(figsize=(10, 5))
plt.plot(h['loss'], c='red', label='Training Loss')
plt.plot(h['val_loss'], c='red', linestyle='--', label='Validation Loss')
plt.plot(h['accuracy'], c='blue', label='Training Accuracy')
plt.plot(h['val_accuracy'], c='blue', linestyle='--', label='Validation Accuracy')
plt.xlabel("Number of Epochs")
plt.legend(loc='best')
plt.show()
model.evaluate(test_image_generator)
test_image_path = '/content/Vegetable Images/test/Cauliflower/1050.jpg'
def generate_predictions(test_image_path):
    test_img = image.load_img(test_image_path, target_size=(150, 150))
    test_img_arr = image.img_to_array(test_img)/255.0
    test_img_input = test_img_arr.reshape((1, test_img_arr.shape[0],
                                           test_img_arr.shape[1], test_img_arr.shape[2]))
    predicted_label = np.argmax(model.predict(test_img_input))
    predicted_vegetable = class_map[predicted_label]
    plt.figure(figsize=(4, 4))
    plt.imshow(test_img_arr)
    plt.title("Predicted Label: {}".format(predicted_vegetable))
    plt.grid()
    plt.axis('off')
    plt.show()
generate_predictions(test_image_path)
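To reduce overfitting, the training generator is often given simple augmentation. The sketch below shows an augmented variant of the training generator; the parameter values and names are illustrative and not part of the recorded program.
# Sketch only: training generator with basic augmentation (values illustrative)
augmented_train_gen = ImageDataGenerator(
    rescale=1.0/255.0,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True)
augmented_train_images = augmented_train_gen.flow_from_directory(
    train_path, target_size=(150, 150), batch_size=32, class_mode='categorical')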
OUTPUT:
Visualizing Image from Each Label :
Graph Visualization :
Predicted Result :
RESULT:
The implementation of Image Classification using CNN is successfully
executed and verified.
Ex. No : 7
IMPLEMENTATION OF LSTM.
Date :30/10/2023
AIM:
To write a Python program to implement an LSTM for stock price prediction.
PROCEDURE:
Step 1: Import Libraries and Dataset.
Step 2: Accessing the Open Stock Price Column.
Step 3: Feature Scaling.
Step 4: Getting the Input and Output.
Step 5: Reshaping the Input.
Step 6: Building the Model.
Step 7: Making a Prediction and Visualization (Test Set).
Step 8: Getting Prediction and Visualization (Training Set).
SOURCE CODE:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
training_set = pd.read_csv('/content/drive/MyDrive/google stock prediction dataset/trainset.csv')
test_set = pd.read_csv('/content/drive/MyDrive/google stock prediction dataset/testset.csv')
training_set.head()
training_set = training_set.iloc[:,1:2].values
test_set = test_set.iloc[:, 1:2].values
training_set
sc = MinMaxScaler()
training_set = sc.fit_transform(training_set)
training_set
X_train = training_set[0:1257]
y_train = training_set[1:1258]
print(f"Before Rehshape {X_train.shape}")
X_train = np.reshape(X_train, (1257, 1, 1))
print(f"After Rehshape {X_train.shape}")
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
model = Sequential()
model.add(LSTM(units = 4, activation = 'sigmoid', input_shape = (None, 1)))
model.add(Dense(units = 1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train, y_train, batch_size = 32, epochs = 200)
real_stock_price = test_set
inputs = real_stock_price
inputs = sc.transform(test_set)
inputs = np.reshape(inputs, (125, 1, 1))
predicted_stock_price = model.predict(inputs)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
plt.plot(real_stock_price, color='red', label='Real Google Stock Price')
plt.plot(predicted_stock_price, color='blue', label='Predicted Google Stock Price')
plt.title('Google Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Google Stock Price')
plt.legend()
plt.show()
training_set = pd.read_csv('/content/drive/MyDrive/google stock prediction dataset/trainset.csv')
training_set = training_set.iloc[:, 1:2].values
real_stock_price = training_set
inputs = real_stock_price
inputs = sc.transform(inputs)
inputs = np.reshape(inputs, (1259, 1, 1))
predicted_stock_price = model.predict(inputs)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
plt.plot(real_stock_price, color='red', label='Real Google Stock Price')
plt.plot(predicted_stock_price, color='blue', label='Predicted Google Stock Price')
plt.title('Google Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Google Stock Price')
plt.legend()
plt.show()
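The program above maps each single previous price to the next one. A more typical LSTM setup feeds a sliding window of the previous prices as each input sequence; the sketch below shows that preparation only as an illustration, with a window length of 60 chosen arbitrarily, and is not part of the recorded program.
# Sketch only: sliding-window sequences for the LSTM (window length illustrative)
window = 60
scaled = sc.transform(training_set)           # reuse the fitted MinMaxScaler
X_seq, y_seq = [], []
for i in range(window, len(scaled)):
    X_seq.append(scaled[i - window:i, 0])     # previous 60 scaled prices
    y_seq.append(scaled[i, 0])                # next scaled price
X_seq = np.array(X_seq).reshape(-1, window, 1)
y_seq = np.array(y_seq)
# model.fit(X_seq, y_seq, batch_size=32, epochs=200)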
OUTPUT:
X_train :
[[0.01011148]
[0.01388614]
[0.01690727]
...
[0.9805695 ]
[0.97637719]
[0.97543954]]
[[0.01388614]
[0.01690727]
[0.02109298]
...
[0.97637719]
[0.97543954]
[0.9674549 ]]
y_train :
[[0.01388614]
[0.01690727]
[0.02109298]
...
[0.97637719]
[0.97543954]
[0.9674549 ]]
Before Reshape :
(1257, 1)
After Reshape :
(1257, 1, 1)
Test Set Graph Visualization :
Training Set Graph Visualization :
RESULT:
The implementation of LSTM is successfully executed and verified.
Ex. No: 8
IMPLEMENTATION OF AUTO ENCODER.
Date :06/11/2023
AIM:
To write a Python program to implement Auto Encoder.
PROCEDURE:
Step 1: Importing The Libraries.
Step 2: Loading The Dataset.
Step 3: Data Preprocessing.
Step 4: Define Autoencoder Architecture.
Step 5: Build Autoencoder, Encoder, and Decoder Models.
Step 6: Compile and Train the Autoencoder.
Step 7: Generate Encoded and Decoded Images.
Step 8: Visualization.
Step 9: Displaying original and Reconstructed Images.
SOURCE CODE:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
from keras import layers
from keras.datasets import mnist
(x_train, _), (x_test, _) = mnist.load_data()
print(x_train.shape)
print(x_test.shape)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
encoded_dimensions = 32
input_image = keras.Input(shape=(x_train.shape[1],))
encoded = layers.Dense(encoded_dimensions, activation='relu')(input_image)
decoded = layers.Dense(x_train.shape[1], activation='sigmoid')(encoded)
autoencoder = keras.Model(inputs = input_image, outputs = decoded)
#Encoder Model
encoder = keras.Model(inputs = input_image, outputs = encoded)
#Decoder Model
encoded_input = keras.Input(shape=(encoded_dimensions,))
decoder_layer = autoencoder.layers[-1]
decoder = keras.Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True,
validation_data = (x_test, x_test))
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
n=1
plt.figure(figsize=(20, 4))
for i in range(0, n):
    # Original Images
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # Reconstructed Images
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
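The same idea extends to a deeper (stacked) autoencoder. The sketch below adds intermediate layers around the same 32-dimensional code; it is illustrative only and is not part of the recorded program.
# Sketch only: a deeper (stacked) autoencoder around the same 32-dim code
deep_input = keras.Input(shape=(784,))
h = layers.Dense(128, activation='relu')(deep_input)
code = layers.Dense(encoded_dimensions, activation='relu')(h)
h = layers.Dense(128, activation='relu')(code)
deep_output = layers.Dense(784, activation='sigmoid')(h)
deep_autoencoder = keras.Model(deep_input, deep_output)
deep_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# deep_autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
#                      validation_data=(x_test, x_test))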
OUTPUT:
x_train Shape :
(60000, 784)
x_test Shape :
(10000, 784)
Original Image and Reconstructed Image :
RESULT:
The implementation of Auto Encoder is successfully executed and
verified.