TensorFlow Cheat Sheet for Deep Learning Model Building

This cheat sheet collects TensorFlow/Keras recipes for building common neural networks: Feedforward Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, Long Short-Term Memory networks, Gated Recurrent Units, and Transfer Learning with VGG16, plus snippets for batch normalization, data augmentation, early stopping, learning-rate scheduling, and hyperparameter tuning. Each section gives code for model creation, layer stacking, and compilation with appropriate loss functions and metrics. It is aimed at data science practitioners who want to implement deep learning models quickly.


TENSORFLOW CHEAT SHEET
Model Building for Deep Learning

DATA SCIENCE BRAIN
@datasciencebrain
Our website: deepakjosecodes.com
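All snippets below assume the standard Keras imports; a minimal sketch, assuming TensorFlow 2.x:

import tensorflow as tf
from tensorflow.keras import layers, models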

01 FEEDFORWARD NEURAL NETWORK

model = models.Sequential()
model.add(layers.Flatten(input_shape=(input_size,)))  # Adjust input_size based on your data

# Add hidden layers
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dropout(0.2))  # Optional: add dropout for regularization

# Add output layer
model.add(layers.Dense(output_size, activation='softmax'))  # Adjust output_size based on your problem

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # Use 'categorical_crossentropy' for one-hot encoded labels
              metrics=['accuracy'])
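Training then follows the usual Keras pattern; a minimal sketch, where X_train/y_train and X_test/y_test are placeholders for your own arrays:

# Train with a held-out validation split, then evaluate on the test set
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.1)
test_loss, test_acc = model.evaluate(X_test, y_test)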

02 CONVOLUTIONAL NEURAL NETWORK

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(img_height, img_width, channels)))
model.add(layers.MaxPooling2D((2, 2)))

# Add more convolutional and pooling layers as needed

model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(output_size, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
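"Add more convolutional and pooling layers as needed" usually means repeating Conv2D + MaxPooling2D blocks with growing filter counts before the Flatten() call; a minimal sketch of one common pattern (the filter sizes are illustrative only):

# A second conv block, inserted where the comment sits, before layers.Flatten()
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))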

03 RECURRENT NEURAL NETWORK

model = models.Sequential()
model.add(layers.SimpleRNN(128, activation='relu',
                           input_shape=(timesteps, features)))

# Add more recurrent layers or use LSTM/GRU layers

model.add(layers.Dense(output_size, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
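SimpleRNN (like the LSTM and GRU layers below) expects 3-D input of shape (samples, timesteps, features); a minimal sketch with dummy data, reusing the output_size placeholder:

import numpy as np

# 1000 example sequences, each with 50 time steps of 8 features
timesteps, features = 50, 8
X_train = np.random.rand(1000, timesteps, features)
y_train = np.random.randint(0, output_size, size=1000)  # integer class labels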

04 LONG SHORT-TERM MEMORY

model = models.Sequential()
model.add(layers.LSTM(128, activation='relu',
                      input_shape=(timesteps, features)))

# Add more LSTM layers if needed

model.add(layers.Dense(output_size, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
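When stacking LSTM layers ("# Add more LSTM layers if needed"), every recurrent layer except the last needs return_sequences=True so it passes the full sequence on to the next one; a minimal sketch:

model = models.Sequential()
model.add(layers.LSTM(128, return_sequences=True,
                      input_shape=(timesteps, features)))
model.add(layers.LSTM(64))  # final recurrent layer returns only the last state
model.add(layers.Dense(output_size, activation='softmax'))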

05 GATED RECURRENT UNIT

model = models.Sequential()
model.add(layers.GRU(128, activation='relu',
                     input_shape=(timesteps, features)))

# Add more GRU layers if needed

model.add(layers.Dense(output_size, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
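GRU is a drop-in replacement for LSTM in these snippets (only the layer class changes), and the same return_sequences rule from section 04 applies when stacking. model.summary() is a quick way to compare the two, since a GRU layer has fewer parameters than an LSTM of the same width:

model.summary()  # prints layer output shapes and parameter counts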

06 TRANSFER LEARNING (E.G., VGG16)

from tensorflow.keras.applications import VGG16

# Load pre-trained VGG16 model without the top layer
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(img_height, img_width, channels))

# Freeze convolutional layers
for layer in base_model.layers:
    layer.trainable = False

model = models.Sequential()
model.add(base_model)

# Add custom classification layers
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(output_size, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
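Once the new head has been trained with the base frozen, a common second stage is to unfreeze the top few VGG16 layers and fine-tune at a much lower learning rate; a minimal sketch (the layer count and rate are illustrative, not prescriptive):

# Unfreeze the last few convolutional layers for fine-tuning
for layer in base_model.layers[-4:]:
    layer.trainable = True

# Re-compile with a small learning rate so the pre-trained weights shift gently
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])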

07 BATCH NORMALIZATION

model.add(layers.BatchNormalization())
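On its own, this line simply appends the layer wherever the model currently ends; in practice BatchNormalization usually sits directly after a Conv2D or Dense layer. A minimal sketch of the typical placement:

model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.BatchNormalization())  # normalize the previous layer's outputs
model.add(layers.MaxPooling2D((2, 2)))
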
08 DATA AUGMENTATION
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    shear_range=0.2
)

# X_train is your training data; fit() is only required when the generator
# uses featurewise statistics (e.g. featurewise_center, zca_whitening)
datagen.fit(X_train)

model.fit(datagen.flow(X_train, y_train, batch_size=batch_size),
          epochs=epochs)
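Augmentation should only touch the training set; validation images go in unmodified so the metric stays comparable between epochs. A minimal sketch, assuming X_val/y_val hold your validation split:

model.fit(datagen.flow(X_train, y_train, batch_size=batch_size),
          validation_data=(X_val, y_val),  # raw, un-augmented validation data
          epochs=epochs)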

09 EARLY STOPPING

from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', patience=3,
                               restore_best_weights=True)

model.fit(X_train, y_train, epochs=epochs,
          validation_data=(X_val, y_val), callbacks=[early_stopping])

10 LEARNING RATE SCHEDULER
from tensorflow.keras.callbacks import LearningRateScheduler

def scheduler(epoch, lr):
    # Decay the learning rate by 10% every 10 epochs
    if epoch % 10 == 0 and epoch != 0:
        return lr * 0.9
    else:
        return lr

lr_scheduler = LearningRateScheduler(scheduler)

model.fit(X_train, y_train, epochs=epochs,
          validation_data=(X_val, y_val), callbacks=[lr_scheduler])


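Callbacks compose: early stopping, the learning-rate scheduler, and any others can be passed together in a single list; a minimal sketch reusing the objects from sections 09 and 10:

model.fit(X_train, y_train, epochs=epochs,
          validation_data=(X_val, y_val),
          callbacks=[early_stopping, lr_scheduler])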

11 HYPERPARAMETER TUNING (GRIDSEARCHCV)
from sklearn.model_selection import GridSearchCV
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

# Define your model creation function
def create_model(optimizer='adam', hidden_units=128, dropout_rate=0.2):
    model = models.Sequential()
    model.add(layers.Flatten(input_shape=(input_size,)))

    # Add hidden layers
    model.add(layers.Dense(hidden_units, activation='relu'))
    model.add(layers.Dropout(dropout_rate))

    # Add output layer
    model.add(layers.Dense(output_size, activation='softmax'))

    model.compile(optimizer=optimizer,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Create a KerasClassifier with your model creation function
model = KerasClassifier(build_fn=create_model, epochs=10, batch_size=32, verbose=0)

# Define the hyperparameters to search
param_grid = {
    'optimizer': ['adam', 'sgd', 'rmsprop'],
    'hidden_units': [64, 128, 256],
    'dropout_rate': [0.2, 0.5, 0.8]
}

# Use GridSearchCV for hyperparameter search
grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)
grid_result = grid.fit(X_train, y_train)

# Print the best parameters and corresponding accuracy
print("Best Parameters: ", grid_result.best_params_)
print("Best Accuracy: ", grid_result.best_score_)

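Note: the tensorflow.keras.wrappers.scikit_learn module was removed in newer TensorFlow releases; the standalone scikeras package is the maintained replacement. A minimal sketch of the swap, assuming scikeras is installed (pip install scikeras); in scikeras, arguments of create_model are routed with a model__ prefix:

from scikeras.wrappers import KerasClassifier

model = KerasClassifier(model=create_model, epochs=10, batch_size=32, verbose=0)
param_grid = {
    'model__optimizer': ['adam', 'sgd', 'rmsprop'],
    'model__hidden_units': [64, 128, 256],
    'model__dropout_rate': [0.2, 0.5, 0.8]
}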
Never miss a post! Turn on notifications.

Check out our YouTube channel for machine learning projects and other
amazing data science content: youtube.com/@dsbrain

DATA SCIENCE BRAIN
@datasciencebrain