
Deep Learning Homework: CNN Architectures

Master MIATE

Project
Apply the same preprocessing to the folders containing the Flower data, then implement the
following project. A loading sketch is given below.
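The handout does not fix a loading pipeline, so here is a minimal sketch using tf.keras.utils.image_dataset_from_directory. The 'flowers' directory path, the 32x32 target size (matching the input_shape used in the models below), and the 80/20 split are assumptions, not part of the assignment:

import numpy as np
import tensorflow as tf

# Assumed layout: flowers/<class_name>/*.jpg (one subfolder per class)
ds = tf.keras.utils.image_dataset_from_directory(
    'flowers', image_size=(32, 32), batch_size=32, shuffle=True, seed=42)
print(ds.class_names)  # pass num_classes=len(ds.class_names) to the builders below

# Materialize in a single pass so images and labels stay aligned,
# and scale pixel values to [0, 1]
batches = list(ds.as_numpy_iterator())
images = np.concatenate([x for x, _ in batches]).astype('float32') / 255.0
labels = np.concatenate([y for _, y in batches])

# Simple 80/20 train/test split (an assumption; adjust to the course setup)
split = int(0.8 * len(images))
train_images, train_labels = images[:split], labels[:split]
test_images, test_labels = images[split:], labels[split:]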

1 Setup
Install required packages:
pip install tensorflow matplotlib numpy
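Optionally, verify the install before starting (a quick check; the version and device list will vary by machine):

import tensorflow as tf
print(tf.__version__)                          # e.g. 2.x
print(tf.config.list_physical_devices('GPU'))  # [] on CPU-only machines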

2 Feed-Forward Neural Network


Implement a simple FFNN for image classification:
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ffnn(input_shape=(32, 32, 3), num_classes=10):
    model = models.Sequential([
        layers.Flatten(input_shape=input_shape),
        layers.Dense(256, activation='relu'),
        layers.Dense(128, activation='relu'),
        layers.Dense(num_classes, activation='softmax')
    ])
    return model

# Compile and train
ffnn = build_ffnn()
ffnn.compile(optimizer='adam',
             loss='sparse_categorical_crossentropy',
             metrics=['accuracy'])
ffnn_history = ffnn.fit(train_images, train_labels, epochs=10,
                        validation_data=(test_images, test_labels))

3 Single-Block CNN
Implement a CNN with one convolutional block:
def build_single_block_cnn(input_shape=(32, 32, 3), num_classes=10):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(num_classes, activation='softmax')
    ])
    return model

# Train with and without dropout (a dropout variant is sketched below)
single_cnn = build_single_block_cnn()
single_cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
single_cnn_history = single_cnn.fit(train_images, train_labels, epochs=10,
                                    validation_data=(test_images, test_labels))
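The comment above asks for runs with and without dropout, but build_single_block_cnn has no dropout layer. One minimal variant (a sketch; the dropout_rate parameter, placement, and the single_cnn_dropout name are ours, not fixed by the handout):

def build_single_block_cnn_dropout(input_shape=(32, 32, 3), num_classes=10,
                                   dropout_rate=0.5):
    # Same single-block CNN, with one Dropout layer before the classifier head
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(dropout_rate),  # added for the dropout comparison
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(num_classes, activation='softmax')
    ])
    return model

single_cnn_dropout = build_single_block_cnn_dropout()
single_cnn_dropout.compile(optimizer='adam',
                           loss='sparse_categorical_crossentropy',
                           metrics=['accuracy'])
single_cnn_dropout_history = single_cnn_dropout.fit(
    train_images, train_labels, epochs=10,
    validation_data=(test_images, test_labels))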

4 Multi-Block CNN
Implement a deeper CNN with multiple blocks:
def build_multi_block_cnn(input_shape=(32, 32, 3), num_classes=10, dropout_rate=0.5):
    model = models.Sequential([
        # Block 1
        layers.Conv2D(32, (3, 3), activation='relu', padding='same',
                      input_shape=input_shape),
        layers.Conv2D(32, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(dropout_rate),

        # Block 2
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(dropout_rate),

        # Classifier
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(dropout_rate),
        layers.Dense(num_classes, activation='softmax')
    ])
    return model

# Compare with and without dropout (see the comparison sketch below)
multi_cnn = build_multi_block_cnn(dropout_rate=0.5)
multi_cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
multi_cnn_history = multi_cnn.fit(train_images, train_labels, epochs=20,
                                  validation_data=(test_images, test_labels))
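Section 6 plots no_dropout_history and with_dropout_history, which the listings never define. One minimal way to produce them (a sketch; setting dropout_rate=0.0 makes every Dropout layer a pass-through, which serves as the no-dropout baseline):

# With dropout: reuse the run above
with_dropout_history = multi_cnn_history

# Without dropout: dropout_rate=0.0 disables all Dropout layers
multi_cnn_no_dropout = build_multi_block_cnn(dropout_rate=0.0)
multi_cnn_no_dropout.compile(optimizer='adam',
                             loss='sparse_categorical_crossentropy',
                             metrics=['accuracy'])
no_dropout_history = multi_cnn_no_dropout.fit(
    train_images, train_labels, epochs=20,
    validation_data=(test_images, test_labels))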

5 Feature Map Visualization


Create a function to visualize feature maps:
import numpy as np
import matplotlib.pyplot as plt

def visualize_feature_maps(model, image, layer_names=None):
    # Default to every convolutional layer in the model
    if layer_names is None:
        layer_names = [layer.name for layer in model.layers
                       if 'conv' in layer.name]

    outputs = [layer.output for layer in model.layers
               if layer.name in layer_names]
    viz_model = models.Model(inputs=model.inputs, outputs=outputs)
    feature_maps = viz_model.predict(image[np.newaxis, ...])

    for layer_name, fmap in zip(layer_names, feature_maps):
        print(f"Layer: {layer_name}, Feature map shape: {fmap.shape}")

        # Plot the first few channels
        plt.figure(figsize=(15, 15))
        for i in range(min(16, fmap.shape[-1])):
            plt.subplot(4, 4, i + 1)
            plt.imshow(fmap[0, :, :, i], cmap='viridis')
            plt.axis('off')
        plt.suptitle(layer_name)
        plt.show()

# Example usage
sample_image = train_images[0]
visualize_feature_maps(multi_cnn, sample_image)

6 Training Results Visualization


Add this function to visualize the training history:
def plot_training_history(history, title='Training Results'):
    """Plot training and validation accuracy and loss."""
    plt.figure(figsize=(12, 4))

    # Plot accuracy
    plt.subplot(1, 2, 1)
    plt.plot(history.history['accuracy'], label='Train Accuracy')
    plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
    plt.title(f'{title} - Accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.ylim([0, 1])
    plt.legend()

    # Plot loss
    plt.subplot(1, 2, 2)
    plt.plot(history.history['loss'], label='Train Loss')
    plt.plot(history.history['val_loss'], label='Validation Loss')
    plt.title(f'{title} - Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()

    plt.tight_layout()
    plt.show()

# Example usage for each model:
plot_training_history(ffnn_history, 'FFNN Training')
plot_training_history(single_cnn_history, 'Single-Block CNN Training')
plot_training_history(multi_cnn_history, 'Multi-Block CNN Training')

# For comparing models with/without dropout
plt.figure(figsize=(8, 6))
plt.plot(no_dropout_history.history['val_accuracy'],
         label='No Dropout Val Accuracy')
plt.plot(with_dropout_history.history['val_accuracy'],
         label='With Dropout Val Accuracy')
plt.title('Dropout Effect on Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

Expected Output
The code above generates two types of plots:
• Model Training Curves: side-by-side plots of accuracy and loss for both the training and validation sets
• Comparative Plots: a direct comparison of validation accuracy between different configurations

7 Questions
1. Compare the performance of FFNN vs. CNN architectures. What differences do you observe?

2. How does dropout affect the training process and final accuracy in both single-block and multi-block
CNNs?

3. Analyze the feature maps from different layers. What patterns do you notice as you go deeper in the
network?

4. Which architecture would you recommend for a real-world image classification problem and why?

5. Experiment with different dropout rates (0.2, 0.5, 0.8) and report your findings; a starter loop is sketched below.
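A possible starting point for Question 5, reusing build_multi_block_cnn and the training arrays from above (a sketch; the 20-epoch budget mirrors Section 4):

sweep_histories = {}
for rate in [0.2, 0.5, 0.8]:
    model = build_multi_block_cnn(dropout_rate=rate)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    sweep_histories[rate] = model.fit(
        train_images, train_labels, epochs=20,
        validation_data=(test_images, test_labels), verbose=0)

# Overlay the validation curves to compare the three rates
for rate, hist in sweep_histories.items():
    plt.plot(hist.history['val_accuracy'], label=f'dropout={rate}')
plt.xlabel('Epoch')
plt.ylabel('Validation Accuracy')
plt.legend()
plt.show()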
