Deep Learning
Lecture-4
Dr. Abdul Jaleel
Associate Professor
Handwritten Digits Classification
Using a Neural Network
Handwritten Digits
MNIST Handwritten Digit Classification Dataset
Handwritten Digits Classification:
A Simple Neural Network
Handwritten Digits Classification
Handwritten Digits as Input
Handwritten Digits as Input: Array Flattening
Handwritten Digits as Input: 28x28 Array
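Each MNIST digit arrives as a 28x28 grid of pixel intensities. To feed it to a plain Dense layer, the grid has to be flattened into a single vector of 28*28 = 784 values. A minimal NumPy sketch of that flattening step (the array here is a placeholder, not a real digit):

import numpy as np

image = np.zeros((28, 28), dtype=np.uint8)   # one grayscale digit image, 28x28 pixels
flat = image.reshape(28 * 28)                # flattened into a 784-element vector
print(image.shape, flat.shape)               # (28, 28) (784,)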
Handwritten Digits Classification
Using a Neural Network
(Python Implementation)
1) We will first classify handwritten digits using a simple neural network that has only input and output layers.
2) We will then add a hidden layer and see how the performance of the model improves.
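Before building either model, the MNIST data has to be loaded. A minimal sketch using keras.datasets (variable names are illustrative); printing one training image gives the 28x28 pixel array summarized below:

from tensorflow import keras

(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
print(X_train.shape)   # (60000, 28, 28) -- 60,000 training images, 28x28 pixels each
print(X_test.shape)    # (10000, 28, 28) -- 10,000 test images
print(X_train[0])      # raw pixel intensities (0-255) of the first training image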
(Output: the 28x28 NumPy array of one training image, dtype=uint8. Each entry is a pixel intensity from 0 to 255; most entries are 0 (background), and the digit's strokes appear as higher values such as 154, 241, and 253.)
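Since the raw intensities range from 0 to 255, a common preprocessing step (assumed here, as in most Keras MNIST examples) is to scale them to the 0-1 range and, for the model without a Flatten layer, reshape each image into a 784-element vector:

X_train = X_train / 255.0
X_test = X_test / 255.0

# Flatten each 28x28 image into a vector of 784 values
X_train_flattened = X_train.reshape(len(X_train), 28 * 28)
X_test_flattened = X_test.reshape(len(X_test), 28 * 28)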
Very simple neural network with no hidden layers
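A sketch of the no-hidden-layer model: a single Dense output layer with 10 units (one per digit class, 0-9) connected directly to the 784 flattened inputs. The activation, optimizer, and loss below follow common Keras MNIST examples and are assumptions, since the slide shows only the heading:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(10, input_shape=(784,), activation='sigmoid')   # output layer only
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train_flattened, y_train, epochs=5)
model.evaluate(X_test_flattened, y_test)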
Neural Network Using a Hidden Layer
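A sketch of the same model with one hidden layer inserted between the flattened input and the 10-unit output. The hidden-layer size (100 ReLU units) is illustrative; evaluating such a model on the test set returns a [loss, accuracy] pair like the one printed below:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(100, input_shape=(784,), activation='relu'),   # hidden layer
    keras.layers.Dense(10, activation='sigmoid')                      # output layer, one unit per digit
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train_flattened, y_train, epochs=10)
model.evaluate(X_test_flattened, y_test)   # returns [loss, accuracy]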
313/313 [==============================] - 0s 1ms/step - loss: 0.0966 - accuracy: 0.9716
[0.09658893942832947, 0.9715999960899353]
Using a Flatten layer so that we don't have to call .reshape on the input dataset
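A sketch of the same network built with keras.layers.Flatten as the first layer, so the raw 28x28 images can be passed in directly without calling .reshape (layer sizes as in the previous sketch; the slides show only the resulting output):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),   # replaces the manual .reshape step
    keras.layers.Dense(100, activation='relu'),
    keras.layers.Dense(10, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# After training (the 10-epoch log appears further below), evaluating on the
# test set returns the [loss, accuracy] pair printed next:
model.evaluate(X_test, y_test)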
313/313 [==============================] - 0s 1ms/step - loss: 0.0813 - accuracy: 0.9779
[0.08133944123983383, 0.9779000282287598]
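The ten-epoch training run that produces the log below. Each epoch processes 1875 batches (60,000 training images at the default batch size of 32), and model.fit returns a History object, which is why a History instance appears as the notebook's Out[] value after the log:

model.fit(X_train, y_train, epochs=10)   # the returned History object is shown as the Out[] value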
Epoch 1/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.2959 - accuracy: 0.9185
Epoch 2/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1368 - accuracy: 0.9603
Epoch 3/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0995 - accuracy: 0.9703
Epoch 4/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0771 - accuracy: 0.9772
Epoch 5/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0628 - accuracy: 0.9806
Epoch 6/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0519 - accuracy: 0.9841
Epoch 7/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0442 - accuracy: 0.9865
Epoch 8/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0369 - accuracy: 0.9886
Epoch 9/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0300 - accuracy: 0.9910
Epoch 10/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0264 - accuracy: 0.9917
Out[59]:
<tensorflow.python.keras.callbacks.History at 0x1fe24629e80>
Task for next class:
Execute the code given in Lectures 2, 3, and 4.
Prepare notes on what you have understood so far.
There will be a graded activity in the next class.