Copy of GradientDescent_implementation.ipynb
Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. We'll start with some functions that will help us plot and visualize the data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Some helper functions for plotting and drawing lines
def plot_points(X, y):
    admitted = X[np.argwhere(y==1)]
    rejected = X[np.argwhere(y==0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s=25, color='blue', edgecolor='k')
    plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s=25, color='red', edgecolor='k')

def display(m, b, color='g--'):
    plt.xlim(-0.05, 1.05)
    plt.ylim(-0.05, 1.05)
    x = np.arange(-10, 10, 0.1)
    plt.plot(x, m*x+b, color)
Reading and plotting the data like in the figure
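The cell below expects a file at /content/data.csv containing two feature columns (roughly in the 0-1 range) followed by a 0/1 label. If you are running this copy without the original file, a minimal stand-in like the following can be generated first; the column layout and the header names x1, x2, y are assumptions made for illustration, not part of the original dataset.

# Hypothetical stand-in for data.csv (only needed if the original file is missing):
# two features in [0, 1] and a linearly separable 0/1 label; uses np and pd from above.
rng = np.random.default_rng(0)
n = 100
features = rng.random((n, 2))
labels = (features[:, 0] + features[:, 1] > 1).astype(int)
pd.DataFrame({'x1': features[:, 0], 'x2': features[:, 1], 'y': labels}).to_csv('/content/data.csv', index=False)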
data = pd.read_csv('/content/data.csv')
X = data.iloc[:, :2].values # Features (e.g., scores)
y = data.iloc[:, 2].values # Labels (0 or 1 for rejection/admission)
# Plot the points
plt.figure(figsize=(8, 6))
plot_points(X, y)
# Show the plot
plt.show()
TODO: Implementing the basic functions
Now it's your turn to shine. Implement the following formulas, as explained in the text.
Sigmoid activation function

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

Output (prediction) formula

$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$

Error function

$$Error(y, \hat{y}) = -y \log(\hat{y}) - (1 - y) \log(1 - \hat{y})$$

The function that updates the weights

$$w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$

$$b \longrightarrow b + \alpha (y - \hat{y})$$
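Where do these updates come from? Since $\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$ and $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$, differentiating the error above with the chain rule gives

$$\frac{\partial E}{\partial w_i} = (\hat{y} - y)\, x_i, \qquad \frac{\partial E}{\partial b} = \hat{y} - y$$

so stepping against the gradient with learning rate $\alpha$ yields exactly the updates shown.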
# Implement the following functions

# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Output (prediction) formula
def output_formula(features, weights, bias):
    linear_combination = np.dot(features, weights) + bias  # w1*x1 + w2*x2 + b
    return sigmoid(linear_combination)  # sigma(z)

# Error (log-loss) formula
def error_formula(y, output):
    output = np.clip(output, 1e-15, 1 - 1e-15)  # Avoid log(0)
    return - (y * np.log(output) + (1 - y) * np.log(1 - output))

# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    error = y - output  # The difference between actual and predicted
    # Update weights: w_i <- w_i + alpha*(y - y_hat)*x_i
    weights += learnrate * np.dot(x.T, error)
    # Update bias: b <- b + alpha*(y - y_hat)
    bias += learnrate * np.sum(error)
    return weights, bias
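As a quick sanity check (not part of the original notebook), you can evaluate the functions on a single made-up sample; the weights and inputs below are illustrative only.

# Tiny hand-made example: two features, two weights, zero bias
x_test = np.array([0.2, 0.8])
w_test = np.array([0.5, -0.5])
b_test = 0.0

print(sigmoid(0))                              # 0.5
print(output_formula(x_test, w_test, b_test))  # sigmoid(-0.3) ≈ 0.426
print(error_formula(1, output_formula(x_test, w_test, b_test)))  # -log(0.426) ≈ 0.854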
Training function
This function iterates the gradient descent algorithm through all the data for a number of epochs. It also plots the data and some of the boundary lines obtained as we run the algorithm.
np.random.seed(44)

epochs = 1000
learnrate = 0.01

def train(features, targets, epochs, learnrate, graph_lines=False):
    errors = []
    n_records, n_features = features.shape
    last_loss = None
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
    bias = 0
    for e in range(epochs):
        for x, y in zip(features, targets):
            output = output_formula(x, weights, bias)
            error = y - output
            weights += learnrate * error * x  # w_i <- w_i + alpha*(y - y_hat)*x_i
            bias += learnrate * error         # b <- b + alpha*(y - y_hat)
        # Printing out the log-loss error on the training set
        out = output_formula(features, weights, bias)
        loss = np.mean(error_formula(targets, out))
        errors.append(loss)
        if e % (epochs // 10) == 0:
            print("\n========== Epoch", e, "==========")
            if last_loss and last_loss < loss:
                print("Train loss: ", loss, " WARNING - Loss Increasing")
            else:
                print("Train loss: ", loss)
            last_loss = loss
            predictions = out > 0.5
            accuracy = np.mean(predictions == targets)
            print("Accuracy: ", accuracy)
        if graph_lines and e % (epochs // 100) == 0:
            display(-weights[0]/weights[1], -bias/weights[1])

    # Plotting the solution boundary
    plt.title("Solution boundary")
    display(-weights[0]/weights[1], -bias/weights[1], 'black')

    # Plotting the data
    plot_points(features, targets)
    plt.show()

    # Plotting the error
    plt.title("Error Plot")
    plt.xlabel('Number of epochs')
    plt.ylabel('Error')
    plt.plot(errors)
    plt.show()
Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
train(X, y, epochs, learnrate, True)
========== Epoch 0 ==========
Train loss: 0.7098665378927036
Accuracy: 0.41414141414141414
========== Epoch 100 ==========
Train loss: 0.32513202435815564
Accuracy: 0.9393939393939394
========== Epoch 200 ==========
Train loss: 0.24637366443016112
Accuracy: 0.9393939393939394
========== Epoch 300 ==========
Train loss: 0.21316037815874506
Accuracy: 0.9191919191919192
========== Epoch 400 ==========
Train loss: 0.19474896531988117
Accuracy: 0.9292929292929293
========== Epoch 500 ==========
Train loss: 0.18301781748808638
Accuracy: 0.9191919191919192
========== Epoch 600 ==========
Train loss: 0.17488517089875472
Accuracy: 0.9191919191919192
========== Epoch 700 ==========
Train loss: 0.1689187251395508
Accuracy: 0.9191919191919192
========== Epoch 800 ==========
Train loss: 0.16435985225696728
Accuracy: 0.9191919191919192
========== Epoch 900 ==========
Train loss: 0.16076817638181243
Accuracy: 0.9191919191919192
Do the same process for other activation functions
# ReLU activation function
def relu(x):
    return np.maximum(0, x)

# Derivative of ReLU
def relu_derivative(x):
    return np.where(x > 0, 1, 0)

# Tanh activation function
def tanh(x):
    return np.tanh(x)

# Derivative of Tanh
def tanh_derivative(x):
    return 1 - np.tanh(x)**2

# Leaky ReLU activation function
def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

# Derivative of Leaky ReLU
def leaky_relu_derivative(x, alpha=0.01):
    return np.where(x > 0, 1, alpha)

# Output (prediction) formula using ReLU
def output_formula_relu(features, weights, bias):
    linear_combination = np.dot(features, weights) + bias
    return relu(linear_combination)

# Output (prediction) formula using Tanh
def output_formula_tanh(features, weights, bias):
    linear_combination = np.dot(features, weights) + bias
    return tanh(linear_combination)

# Output (prediction) formula using Leaky ReLU
def output_formula_leaky_relu(features, weights, bias):
    linear_combination = np.dot(features, weights) + bias
    return leaky_relu(linear_combination)
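Before retraining, it can help to see how these activations behave; the short cell below (an addition, not in the original notebook) plots each one over a small input range using the functions defined above.

# Compare the activation functions over a range of inputs
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(8, 5))
plt.plot(z, sigmoid(z), label='sigmoid')
plt.plot(z, tanh(z), label='tanh')
plt.plot(z, relu(z), label='ReLU')
plt.plot(z, leaky_relu(z), label='leaky ReLU')
plt.legend()
plt.title('Activation functions')
plt.show()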
def train(features, targets, epochs, learnrate, graph_lines=False, activation='sigmoid'):
    errors = []
    n_records, n_features = features.shape
    last_loss = None
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
    bias = 0
    for e in range(epochs):
        for x, y in zip(features, targets):
            linear_combination = np.dot(x, weights) + bias
            # Choose activation function
            if activation == 'sigmoid':
                output = sigmoid(linear_combination)
            elif activation == 'relu':
                output = relu(linear_combination)
            elif activation == 'tanh':
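                # --- The notebook export is cut off at this point. ---
                # What follows is only a sketch of how the cell could continue: it
                # assumes the remaining branches mirror the ones above and that the
                # same simple update rule w_i <- w_i + alpha*(y - y_hat)*x_i is
                # reused for every activation, as the section title suggests.
                # It is not the original code.
                output = tanh(linear_combination)
            elif activation == 'leaky_relu':
                output = leaky_relu(linear_combination)
            # Same per-sample update as in the sigmoid-only train() above
            error = y - output
            weights += learnrate * error * x
            bias += learnrate * error
        # Per-epoch loss tracking, accuracy printing, and boundary plotting could
        # then follow the same pattern as the earlier train() function.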