Snake Identification Model

The document describes using a convolutional neural network to perform binary image classification (venomous vs. non-venomous) on a snake image dataset. It loads a pretrained VGG16 model for feature extraction, trains a small dense classifier on the extracted features, and evaluates the model's performance on test data. Key steps include: 1) loading and preprocessing the train and test datasets, 2) extracting features from the images with VGG16 and flattening them, 3) training a classifier on the extracted features, 4) evaluating the trained model on test data and generating predictions, and 5) computing the confusion matrix and classification report to analyze the results.



import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np

from google.colab import drive


drive.mount('/content/drive')

Mounted at /content/drive

# Define paths to your dataset (the test split doubles as the validation set here)
train_data_dir = '/content/drive/MyDrive/archive/Snake Images/train'
validation_data_dir = '/content/drive/MyDrive/archive/Snake Images/test'
test_data_dir = '/content/drive/MyDrive/archive/Snake Images/test'

# Define image dimensions and batch size
img_width, img_height = 150, 150
batch_size = 32

# Data preprocessing and augmentation
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)

validation_datagen = ImageDataGenerator(rescale=1. / 255)

# Load dataset. shuffle=False keeps each generator's batch order aligned with its
# .labels attribute, which matters below when features are extracted with predict()
# and paired with those labels. (The default shuffle=True scrambles this pairing,
# which caps validation accuracy near chance, as the training log below shows.)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False
)

validation_generator = validation_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False
)

Found 1785 images belonging to 2 classes.


Found 269 images belonging to 2 classes.

# Load pre-trained VGG16 model (ImageNet weights, convolutional base only)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))

Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5


58889256/58889256 [==============================] - 0s 0us/step

# Feature extraction: run every image through the frozen VGG16 base once
train_features = base_model.predict(train_generator)
validation_features = base_model.predict(validation_generator)

56/56 [==============================] - 438s 8s/step


9/9 [==============================] - 79s 9s/step

# Flatten extracted features (4x4x512 feature maps -> 8192-dim vectors)
train_features_flat = np.reshape(train_features, (train_features.shape[0], -1))
validation_features_flat = np.reshape(validation_features, (validation_features.shape[0], -1))

# Define your classifier (fully connected layers)
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=train_features_flat.shape[1:]),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Compile the model
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model on the extracted features
history = model.fit(train_features_flat, train_generator.labels,
                    epochs=50,
                    batch_size=batch_size,
                    validation_data=(validation_features_flat, validation_generator.labels))

Epoch 22/50
56/56 [==============================] - 3s 55ms/step - loss: 0.1432 - accuracy: 0.9490 - val_loss: 1.3318 - val_accuracy: 0.5167
Epoch 23/50
56/56 [==============================] - 3s 51ms/step - loss: 0.1199 - accuracy: 0.9658 - val_loss: 1.4194 - val_accuracy: 0.5353
Epoch 24/50
56/56 [==============================] - 3s 60ms/step - loss: 0.1117 - accuracy: 0.9636 - val_loss: 1.3955 - val_accuracy: 0.5204
Epoch 25/50
56/56 [==============================] - 4s 77ms/step - loss: 0.1039 - accuracy: 0.9658 - val_loss: 1.5249 - val_accuracy: 0.5353
Epoch 26/50
56/56 [==============================] - 4s 75ms/step - loss: 0.0931 - accuracy: 0.9709 - val_loss: 1.4648 - val_accuracy: 0.5539
Epoch 27/50
56/56 [==============================] - 3s 60ms/step - loss: 0.0820 - accuracy: 0.9754 - val_loss: 1.6223 - val_accuracy: 0.5279
Epoch 28/50
56/56 [==============================] - 4s 66ms/step - loss: 0.0805 - accuracy: 0.9748 - val_loss: 1.5921 - val_accuracy: 0.5390
Epoch 29/50
56/56 [==============================] - 5s 84ms/step - loss: 0.0777 - accuracy: 0.9787 - val_loss: 1.7463 - val_accuracy: 0.5242
Epoch 30/50
56/56 [==============================] - 4s 71ms/step - loss: 0.0706 - accuracy: 0.9804 - val_loss: 1.6777 - val_accuracy: 0.5242
Epoch 31/50
56/56 [==============================] - 4s 71ms/step - loss: 0.0657 - accuracy: 0.9826 - val_loss: 1.8249 - val_accuracy: 0.5279
Epoch 32/50
56/56 [==============================] - 4s 73ms/step - loss: 0.0641 - accuracy: 0.9838 - val_loss: 1.7091 - val_accuracy: 0.5353
Epoch 33/50
56/56 [==============================] - 4s 70ms/step - loss: 0.0497 - accuracy: 0.9866 - val_loss: 1.6487 - val_accuracy: 0.5613
Epoch 34/50
56/56 [==============================] - 4s 69ms/step - loss: 0.0660 - accuracy: 0.9787 - val_loss: 2.1088 - val_accuracy: 0.5056
Epoch 35/50
56/56 [==============================] - 4s 70ms/step - loss: 0.0550 - accuracy: 0.9832 - val_loss: 1.7918 - val_accuracy: 0.5204
Epoch 36/50
56/56 [==============================] - 5s 89ms/step - loss: 0.0454 - accuracy: 0.9888 - val_loss: 1.6865 - val_accuracy: 0.5316
Epoch 37/50
56/56 [==============================] - 4s 72ms/step - loss: 0.0428 - accuracy: 0.9899 - val_loss: 1.9269 - val_accuracy: 0.5428
Epoch 38/50
56/56 [==============================] - 4s 71ms/step - loss: 0.0546 - accuracy: 0.9832 - val_loss: 1.7999 - val_accuracy: 0.5390
Epoch 39/50
56/56 [==============================] - 5s 84ms/step - loss: 0.0605 - accuracy: 0.9793 - val_loss: 1.7046 - val_accuracy: 0.5539
Epoch 40/50
56/56 [==============================] - 5s 81ms/step - loss: 0.0500 - accuracy: 0.9815 - val_loss: 1.7393 - val_accuracy: 0.5428
Epoch 41/50
56/56 [==============================] - 4s 72ms/step - loss: 0.0535 - accuracy: 0.9804 - val_loss: 2.0305 - val_accuracy: 0.5390
Epoch 42/50
56/56 [==============================] - 4s 79ms/step - loss: 0.0402 - accuracy: 0.9854 - val_loss: 1.7947 - val_accuracy: 0.5465
Epoch 43/50
56/56 [==============================] - 5s 88ms/step - loss: 0.0329 - accuracy: 0.9910 - val_loss: 2.2354 - val_accuracy: 0.5167
Epoch 44/50
56/56 [==============================] - 4s 72ms/step - loss: 0.0455 - accuracy: 0.9860 - val_loss: 1.9019 - val_accuracy: 0.5167
Epoch 45/50
56/56 [==============================] - 4s 72ms/step - loss: 0.0490 - accuracy: 0.9810 - val_loss: 1.8153 - val_accuracy: 0.5613
Epoch 46/50
56/56 [==============================] - 5s 90ms/step - loss: 0.0512 - accuracy: 0.9810 - val_loss: 2.2398 - val_accuracy: 0.5353
Epoch 47/50
56/56 [==============================] - 4s 71ms/step - loss: 0.0609 - accuracy: 0.9793 - val_loss: 1.5205 - val_accuracy: 0.4944
Epoch 48/50
56/56 [==============================] - 4s 71ms/step - loss: 0.0921 - accuracy: 0.9664 - val_loss: 1.7049 - val_accuracy: 0.5390
Epoch 49/50
56/56 [==============================] - 5s 82ms/step - loss: 0.0797 - accuracy: 0.9725 - val_loss: 1.8777 - val_accuracy: 0.5353
Epoch 50/50
56/56 [==============================] - 5s 80ms/step - loss: 0.0498 - accuracy: 0.9843 - val_loss: 2.0736 - val_accuracy: 0.5539
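The log shows heavy overfitting: training accuracy climbs past 0.98 while validation accuracy hovers near chance. Beyond the label-alignment fix noted earlier, early stopping is a cheap guard. A minimal sketch, reusing the model and feature arrays defined above (the EarlyStopping settings are illustrative, not from the original run):

# Optional: early stopping, halting once val_loss stops improving.
# Illustrative settings; restore_best_weights rolls back to the best epoch.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=5,
    restore_best_weights=True
)

history = model.fit(train_features_flat, train_generator.labels,
                    epochs=50,
                    batch_size=batch_size,
                    validation_data=(validation_features_flat, validation_generator.labels),
                    callbacks=[early_stop])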

https://colab.research.google.com/drive/1Te4mMvCo9pjYuaL7_XGtEXjgG9AdijHR#scrollTo=R3Lhw5NR5s1m&printMode=true 2/5
21/03/2024, 20:32 Untitled18.ipynb - Colaboratory
# Evaluate the model on test data (if available)
test_generator = validation_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False
)

test_features = base_model.predict(test_generator)
test_features_flat = np.reshape(test_features, (test_features.shape[0], -1))

test_loss, test_acc = model.evaluate(test_features_flat, test_generator.labels, verbose=2)

print('\nTest accuracy:', test_acc)

Found 269 images belonging to 2 classes.


9/9 [==============================] - 66s 7s/step
9/9 - 0s - loss: 2.1603 - accuracy: 0.5093 - 114ms/epoch - 13ms/step

Test accuracy: 0.5092936754226685

# Generate predictions and threshold the sigmoid outputs at 0.5
predictions = model.predict(test_features_flat)
predicted_classes = np.where(predictions > 0.5, 1, 0)

9/9 [==============================] - 0s 10ms/step

print(predicted_classes)

[1]
[1]

# Confusion matrix
conf_matrix = confusion_matrix(test_generator.labels, predicted_classes)
print('\nConfusion Matrix:')
print(conf_matrix)

Confusion Matrix:
[[43 85]
[47 94]]
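
Row-normalizing the matrix gives per-class recall directly. A minimal sketch, reusing conf_matrix from the cell above:

# Row-normalize the confusion matrix: each row (true class) sums to 1, so the
# diagonal holds per-class recall. Sketch only; reuses `conf_matrix` above.
per_class = conf_matrix / conf_matrix.sum(axis=1, keepdims=True)
print('Per-class recall:', np.diag(per_class))  # ~0.34 non-venomous, ~0.67 venomous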

# Classification report
class_names = ['Non-Venomous', 'Venomous']
print('\nClassification Report:')
print(classification_report(test_generator.labels, predicted_classes, target_names=class_names))

Classification Report:
              precision    recall  f1-score   support

Non-Venomous       0.48      0.34      0.39       128
    Venomous       0.53      0.67      0.59       141

    accuracy                           0.51       269
   macro avg       0.50      0.50      0.49       269
weighted avg       0.50      0.51      0.50       269

# Fine-tuning (optional): unfreeze the VGG16 convolutional base
for layer in base_model.layers:
    layer.trainable = True

# Compile the model again after unfreezing. Note that `model` is the standalone
# classifier head trained on cached features, so unfreezing base_model and
# recompiling alone does not retrain anything; the metrics below are unchanged.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])
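
To actually fine-tune, the convolutional base and the classifier head have to sit in one trainable graph that is fit on the raw images, not on pre-extracted features. A minimal sketch under that assumption (fine_tune_model is hypothetical, not part of the original notebook):

# Hypothetical end-to-end fine-tuning sketch (not in the original notebook).
# The unfrozen VGG16 base and a fresh dense head are stacked so gradients
# reach the convolutional layers, and training runs on the image generators.
fine_tune_model = tf.keras.models.Sequential([
    base_model,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

fine_tune_model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # low LR to avoid wrecking pretrained weights
                        loss='binary_crossentropy',
                        metrics=['accuracy'])

# For this step a generator built with shuffle=True would normally be preferred.
fine_tune_history = fine_tune_model.fit(train_generator,
                                        epochs=10,
                                        validation_data=validation_generator)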

# Classification report again: identical to the one above, since the classifier
# itself was never retrained after unfreezing the base
class_names = ['Non-Venomous', 'Venomous']
print('\nClassification Report:')
print(classification_report(test_generator.labels, predicted_classes, target_names=class_names))

Classification Report:
              precision    recall  f1-score   support

Non-Venomous       0.48      0.34      0.39       128
    Venomous       0.53      0.67      0.59       141

    accuracy                           0.51       269
   macro avg       0.50      0.50      0.49       269
weighted avg       0.50      0.51      0.50       269

import matplotlib.pyplot as plt

# Get the first batch of images and labels from the training set
images, labels = next(train_generator)

# Display the first 19 images in the batch with their labels
plt.figure(figsize=(50, 50))
for i in range(19):
    ax = plt.subplot(6, 6, i + 1)
    plt.imshow(images[i])
    plt.title(f'Label: {labels[i]}')
    plt.axis('off')
plt.show()
