A practical implementation of transfer learning using Keras and MobileNet for image classification. This project demonstrates how to leverage pre-trained deep learning models and adapt them to classify new categories through transfer learning techniques.
This repository contains a Jupyter notebook that showcases:
- Using pre-trained MobileNet for image classification
- Identifying classification limitations in pre-trained models
- Implementing transfer learning to extend model capabilities
- Training a custom classifier with minimal data
Pre-trained models like MobileNet are trained on large datasets (e.g., ImageNet, with 1000 classes), but they can misclassify images that closely resemble one of their training classes while actually belonging to a category the model has never seen.
Example from this project:
- MobileNet correctly classifies various dog breeds (German Shepherd, Labrador, Poodle)
- However, it misclassifies a Blue Tit (European bird) as a Chickadee (North American bird) due to their visual similarity
This project demonstrates how to retrain the model using transfer learning to correctly classify these edge cases.
The project uses MobileNet, a lightweight convolutional neural network designed for mobile and embedded vision applications:
- Size: ~17MB
- Efficiency: Optimized for speed and low computational cost
- Design: Uses depthwise separable convolutions
- Paper: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (Howard et al., 2017)
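The efficiency claim above comes from MobileNet's use of depthwise separable convolutions. As a rough illustration (not code from the notebook), the snippet below compares the parameter count of a standard convolution against a depthwise separable one on the same input and output shape:

```python
from tensorflow.keras import Input, Model, layers

# Same input (32x32, 32 channels) and output (64 filters, 3x3 kernel)
# for both layer types; only the convolution structure differs.
inp = Input(shape=(32, 32, 32))

standard = Model(inp, layers.Conv2D(64, 3, padding="same")(inp))
separable = Model(inp, layers.SeparableConv2D(64, 3, padding="same")(inp))

print(standard.count_params())   # 3*3*32*64 + 64      = 18,496
print(separable.count_params())  # 3*3*32 + 32*64 + 64 =  2,400
```

The separable variant factors the 3x3x32x64 kernel into a per-channel 3x3 depthwise pass plus a 1x1 pointwise mix, cutting parameters (and multiply-adds) by roughly 8x here.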
Load MobileNet → Test on dog images → Success ✓
→ Test on bird images → Misclassification ✗
The notebook first loads MobileNet with ImageNet weights and tests it on various images:
- German Shepherd - Correctly classified
- Labrador - Correctly classified
- Poodle - Correctly classified
- Blue Tit - Incorrectly classified as Chickadee
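The baseline test follows the standard Keras Applications workflow. A minimal sketch (the helper name is mine; the test image filenames come from this repository):

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import (
    MobileNet, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

def load_and_preprocess(img_path):
    """Load an image and preprocess it the way MobileNet expects."""
    img = image.load_img(img_path, target_size=(224, 224))  # MobileNet input size
    x = image.img_to_array(img)
    return preprocess_input(np.expand_dims(x, axis=0))  # shape (1, 224, 224, 3)

# model = MobileNet(weights="imagenet")
# preds = model.predict(load_and_preprocess("German_Shepherd.jpg"))
# print(decode_predictions(preds, top=3)[0])
```

`decode_predictions` maps the 1000-way softmax output back to human-readable ImageNet labels, which is how the misclassification of the Blue Tit as a Chickadee shows up.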
Uses google_images_download package to automatically download training images:
- 100 images of Blue Tits
- 100 images of Crows
- Filters: JPEG format, size >400x300 pixels
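The download settings above map onto google_images_download's arguments dictionary roughly as follows (the helper function is illustrative, and note the package scrapes Google Images, so it may no longer work reliably):

```python
# Download settings per species, mirroring the filters listed above.
def download_args(keyword, limit=100):
    return {
        "keywords": keyword,
        "limit": limit,        # 100 images per class
        "format": "jpg",       # JPEG format only
        "size": ">400*300",    # minimum resolution filter
    }

# from google_images_download import google_images_download
# downloader = google_images_download.googleimagesdownload()
# for species in ("blue tit", "crow"):
#     downloader.download(download_args(species))
```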
Model Architecture Modification:
Base Model: MobileNet (pre-trained, top layer removed)
↓
Global Average Pooling
↓
Dense Layer (1024 neurons, ReLU)
↓
Dense Layer (1024 neurons, ReLU)
↓
Dense Layer (512 neurons, ReLU)
↓
Output Layer (2 classes: Blue Tit & Crow, Softmax)

Training Strategy:
- Freeze base layers: Keep MobileNet's pre-trained weights unchanged
- Train only top layers: Only the newly added dense layers are trainable
- Optimizer: Adam
- Loss function: Categorical cross-entropy
- Batch size: 32
- Epochs: 10
- Training time: <2 minutes on GTX 1070 GPU
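Put together, the architecture and training strategy above look roughly like this in Keras (a sketch, not the notebook's exact code; the function name is mine):

```python
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

def build_transfer_model(weights="imagenet"):
    """MobileNet base (top removed) plus the custom classifier head."""
    base = MobileNet(weights=weights, include_top=False,
                     input_shape=(224, 224, 3))
    x = GlobalAveragePooling2D()(base.output)
    x = Dense(1024, activation="relu")(x)
    x = Dense(1024, activation="relu")(x)
    x = Dense(512, activation="relu")(x)
    out = Dense(2, activation="softmax")(x)  # Blue Tit vs. Crow

    model = Model(inputs=base.input, outputs=out)
    for layer in base.layers:  # freeze the pre-trained backbone
        layer.trainable = False
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_transfer_model()
# model.fit(train_generator, batch_size=32, epochs=10)
```

With the backbone frozen, only the four dense layers of the new head are updated, which is why training finishes so quickly.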
After training, the model can correctly classify:
- Blue Tits (previously misclassified)
- Crows (new category)
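Mapping the retrained model's 2-way softmax output back to species names can be sketched as below; the label order here is an assumption (in practice it follows the training generator's class_indices):

```python
import numpy as np

CLASS_NAMES = ["blue_tit", "crow"]  # assumed order; check class_indices

def predict_species(model, batch):
    """batch: preprocessed images, shape (N, 224, 224, 3)."""
    probs = model.predict(batch)               # shape (N, 2)
    return [CLASS_NAMES[i] for i in np.argmax(probs, axis=1)]
```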
keras
tensorflow
numpy
IPython
google_images_download
Clone the repository:
git clone https://github.com/ferhat00/Deep-Learning.git
cd Deep-Learning/Transfer\ Learning\ CNN/
Install dependencies:
pip install keras tensorflow numpy google_images_download
Open the Jupyter notebook:
jupyter notebook "Transfer Learning in Keras using MobileNet.ipynb"
Run the cells sequentially to:
- Test pre-trained MobileNet
- Download training images
- Build and train the transfer learning model
- Test predictions on new images
Deep-Learning/
├── Transfer Learning CNN/
│   ├── Transfer Learning in Keras using MobileNet.ipynb  # Main notebook
│   ├── German_Shepherd.jpg                               # Test image
│   ├── labrador1.jpg                                     # Test image
│   ├── poodle1.jpg                                       # Test image
│   ├── blue_tit.jpg                                      # Test image (misclassified initially)
│   ├── crow.jpg                                          # Test image
│   ├── MobileNet architecture.PNG                        # Architecture diagram
│   └── mobilenet_v1.png                                  # Model structure visualization
└── README.md
- Transfer Learning: Leveraging pre-trained models to solve new problems with minimal data
- Feature Extraction: Using frozen convolutional layers as feature extractors
- Fine-tuning: Adding and training custom classification layers
- Data Augmentation: Using ImageDataGenerator for efficient batch processing
- Model Architecture: Understanding and modifying deep learning architectures
- Target size: 224x224 pixels (MobileNet input size)
- Preprocessing: MobileNet-specific preprocessing function
- Normalization: Pixel values scaled to [-1, 1] by MobileNet's preprocess_input function
- Trainable parameters: Only the last ~20 layers
- Non-trainable parameters: Base MobileNet convolutional layers
- Data flow: Images → ImageDataGenerator → Batches → Model
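The data flow above can be sketched with ImageDataGenerator; the "data/" directory layout (one subfolder per class) is an assumption on my part:

```python
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Apply MobileNet's own preprocessing to every image as it is loaded.
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

def make_generator(data_dir):
    """Yield batches of (images, one-hot labels) from class subfolders."""
    return datagen.flow_from_directory(
        data_dir,
        target_size=(224, 224),    # MobileNet input size
        batch_size=32,
        class_mode="categorical")  # one-hot labels for the 2 classes

# train_generator = make_generator("data/")
# model.fit(train_generator, epochs=10)
```

flow_from_directory infers the class labels from the subfolder names, so the downloaded Blue Tit and Crow images just need to sit in separate folders.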
The transfer learning approach successfully:
- ✅ Reuses the visual features MobileNet learned on ImageNet
- ✅ Correctly classifies previously misclassified Blue Tits
- ✅ Adds new classification capability (Crows)
- ✅ Achieves high accuracy with minimal training data (<200 images total)
- ✅ Trains in under 2 minutes on modern GPUs
This project can be extended to:
- Add more bird species or animal categories
- Implement data augmentation techniques
- Fine-tune more layers for better accuracy
- Deploy the model as a web service or mobile app
- Use other pre-trained models (ResNet, VGG, EfficientNet)
Ferhat
This project is open source and available for educational purposes.
This project demonstrates the power of transfer learning: adapting powerful pre-trained models to solve specific problems with minimal data and computational resources.