Assignment No 2

The document outlines a TensorFlow Keras implementation for training a neural network on the MNIST dataset, including data loading, preprocessing, model creation, and training. The model achieves a final accuracy of approximately 95.3% on the test set after 10 epochs. Additionally, it includes visualizations of training accuracy and loss over epochs.


import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import random

# Load the MNIST dataset of handwritten digits
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz


11490434/11490434 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
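
As a quick sanity check (not part of the original notebook), the array shapes can be printed; MNIST ships as 60,000 training and 10,000 test grayscale images of 28x28 pixels:

# Illustrative check of the standard MNIST split sizes
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)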

# Normalize pixel intensities from [0, 255] to [0, 1]
x_train = x_train / 255
x_test = x_test / 255
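
Dividing the uint8 arrays by 255 implicitly promotes them to float64. An explicit float32 cast (a common variant, not used in this notebook) halves memory use and matches Keras' default compute dtype:

# Optional variant (use instead of the division above)
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0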

# Define the network: flatten the 28x28 images, one hidden ReLU layer, softmax output
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
model.summary()

/usr/local/lib/python3.10/dist-packages/keras/src/layers/reshaping/flatten.py:37: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(**kwargs)
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ flatten (Flatten)                    │ (None, 784)                 │               0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense (Dense)                        │ (None, 128)                 │         100,480 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_1 (Dense)                      │ (None, 10)                  │           1,290 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 101,770 (397.54 KB)
Trainable params: 101,770 (397.54 KB)
Non-trainable params: 0 (0.00 B)
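
The parameter counts follow directly from the layer sizes: each Dense layer has inputs × units weights plus one bias per unit. A quick check (illustrative, not in the original notebook):

# Dense layer parameters = inputs * units + biases
dense_params = 784 * 128 + 128        # 100,480
dense_1_params = 128 * 10 + 10        # 1,290
print(dense_params + dense_1_params)  # 101,770 total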

model.compile(optimizer='sgd',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(x_train, y_train,
                    validation_data=(x_test, y_test),
                    epochs=10,
                    verbose=1)

Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 7s 3ms/step - accuracy: 0.7322 - loss: 1.0378 - val_accuracy: 0.9058 - val_loss: 0.3532
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 8s 2ms/step - accuracy: 0.9022 - loss: 0.3540 - val_accuracy: 0.9185 - val_loss: 0.2905
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 7s 3ms/step - accuracy: 0.9171 - loss: 0.2998 - val_accuracy: 0.9299 - val_loss: 0.2554
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 9s 2ms/step - accuracy: 0.9265 - loss: 0.2636 - val_accuracy: 0.9370 - val_loss: 0.2316
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 5s 3ms/step - accuracy: 0.9342 - loss: 0.2355 - val_accuracy: 0.9403 - val_loss: 0.2141
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 10s 3ms/step - accuracy: 0.9394 - loss: 0.2148 - val_accuracy: 0.9429 - val_loss: 0.1985
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 5s 3ms/step - accuracy: 0.9454 - loss: 0.1986 - val_accuracy: 0.9454 - val_loss: 0.1864
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - accuracy: 0.9484 - loss: 0.1852 - val_accuracy: 0.9492 - val_loss: 0.1759
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 6s 3ms/step - accuracy: 0.9537 - loss: 0.1676 - val_accuracy: 0.9509 - val_loss: 0.1681
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 8s 2ms/step - accuracy: 0.9558 - loss: 0.1588 - val_accuracy: 0.9534 - val_loss: 0.1580
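
The loss is sparse_categorical_crossentropy because y_train holds raw integer class indices (0-9), so no one-hot encoding is needed. The one-hot alternative would look like this (a sketch, not part of the assignment):

# Hypothetical one-hot variant of the same compile step (not run in this notebook)
from tensorflow.keras.utils import to_categorical
y_train_oh = to_categorical(y_train, num_classes=10)
y_test_oh = to_categorical(y_test, num_classes=10)
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])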

# e. Evaluate the network
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Loss=%.3f" % test_loss)
print("Accuracy=%.3f" % test_acc)

# Display a random test image and the model's prediction for it
n = random.randint(0, 9999)
predicted_value = model.predict(x_test)
plt.imshow(x_test[n])
plt.show()
print('Predicted value:', predicted_value[n])


313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.9456 - loss: 0.1839


Loss=0.158
Accuracy=0.953

313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step

Predicted value: [1.12933776e-04 2.62869246e-08 1.26581244e-05 2.97227634e-05
 2.76144474e-05 1.91331674e-05 1.56903557e-08 9.93339777e-01
 9.16245699e-06 6.44897390e-03]
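
The softmax output is a 10-way probability vector; the predicted digit is the index of the largest entry (here index 7, with probability ≈ 0.993). A small follow-up (not in the original notebook) makes this explicit:

# Reduce the probability vector to a single predicted digit
import numpy as np
print('Predicted digit:', np.argmax(predicted_value[n]))
print('Actual digit:', y_test[n])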

# f. Plot the training loss and accuracy

# Plot training and validation accuracy per epoch
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()


# Plot training and validation loss per epoch
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
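
The two curves can also be drawn side by side in a single figure (an optional variant, not part of the assignment):

# Optional: combine the accuracy and loss curves into one figure
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history['accuracy'], label='Train')
ax1.plot(history.history['val_accuracy'], label='Validation')
ax1.set_title('model accuracy')
ax1.set_xlabel('epoch')
ax1.set_ylabel('accuracy')
ax1.legend(loc='upper left')
ax2.plot(history.history['loss'], label='Train')
ax2.plot(history.history['val_loss'], label='Validation')
ax2.set_title('model loss')
ax2.set_xlabel('epoch')
ax2.set_ylabel('loss')
ax2.legend(loc='upper right')
plt.tight_layout()
plt.show()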

