Lab 4

This document provides a Python implementation of an Artificial Neural Network (ANN) with methods for forward propagation, backpropagation, and training. It initializes weights and biases, applies the sigmoid activation function, and updates parameters based on the computed gradients. The example demonstrates training the model on the XOR dataset and prints the predicted outputs after training.


Implement the Artificial Neural Network training process in Python using forward propagation and backpropagation.

In [ ]: import numpy as np

In [ ]: class NeuralNetwork:
            def __init__(self, input_size, hidden_size, output_size):
                self.input_size = input_size
                self.hidden_size = hidden_size
                self.output_size = output_size

                # Initialize weights randomly and biases to zero
                self.W1 = np.random.randn(input_size, hidden_size)
                self.b1 = np.zeros((1, hidden_size))
                self.W2 = np.random.randn(hidden_size, output_size)
                self.b2 = np.zeros((1, output_size))

            def sigmoid(self, x):
                return 1 / (1 + np.exp(-x))

            def sigmoid_derivative(self, x):
                # Expects the sigmoid activation a, not the pre-activation z:
                # sigma'(z) = a * (1 - a)
                return x * (1 - x)

            def forward(self, X):
                # Forward propagation: two affine layers, each followed by a sigmoid
                self.z1 = np.dot(X, self.W1) + self.b1
                self.a1 = self.sigmoid(self.z1)
                self.z2 = np.dot(self.a1, self.W2) + self.b2
                self.a2 = self.sigmoid(self.z2)
                return self.a2

            def backward(self, X, y, learning_rate):
                # Backpropagation
                m = X.shape[0]

                # Compute gradients. Note: dz2 = a2 - y is the exact gradient of
                # binary cross-entropy w.r.t. z2 for a sigmoid output; the loss
                # printed in train() is MSE, so the reported loss and the gradient
                # used come from different objectives.
                dz2 = self.a2 - y
                dW2 = (1 / m) * np.dot(self.a1.T, dz2)
                db2 = (1 / m) * np.sum(dz2, axis=0, keepdims=True)
                dz1 = np.dot(dz2, self.W2.T) * self.sigmoid_derivative(self.a1)
                dW1 = (1 / m) * np.dot(X.T, dz1)
                db1 = (1 / m) * np.sum(dz1, axis=0, keepdims=True)

                # Update weights and biases by gradient descent
                self.W2 -= learning_rate * dW2
                self.b2 -= learning_rate * db2
                self.W1 -= learning_rate * dW1
                self.b1 -= learning_rate * db1

            def train(self, X, y, epochs, learning_rate):
                for epoch in range(epochs):
                    # Forward propagation
                    output = self.forward(X)

                    # Backpropagation and parameter update
                    self.backward(X, y, learning_rate)

                    # Print loss every 100 epochs
                    if epoch % 100 == 0:
                        loss = np.mean(np.square(y - output))
                        print(f'Epoch {epoch}, Loss: {loss}')
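
For reference, the backward pass above is a direct restatement of the chain rule for this two-layer sigmoid network. With $a_1 = \sigma(XW_1 + b_1)$ and $a_2 = \sigma(a_1 W_2 + b_2)$, the gradients computed in backward() are

$$\delta_2 = a_2 - y, \qquad dW_2 = \tfrac{1}{m}\, a_1^{\top}\delta_2, \qquad db_2 = \tfrac{1}{m}\textstyle\sum_i \delta_2^{(i)}$$

$$\delta_1 = \delta_2 W_2^{\top} \odot a_1 \odot (1 - a_1), \qquad dW_1 = \tfrac{1}{m}\, X^{\top}\delta_1, \qquad db_1 = \tfrac{1}{m}\textstyle\sum_i \delta_1^{(i)}$$

where $\odot$ denotes element-wise multiplication and $m$ is the number of training examples.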

In [ ]: # Example usage: learn the XOR function
        input_size = 2
        hidden_size = 3
        output_size = 1

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y = np.array([[0], [1], [1], [0]])

        model = NeuralNetwork(input_size, hidden_size, output_size)
        model.train(X, y, epochs=1500, learning_rate=0.1)

        # Test the trained model
        print("\nTest the trained model:")
        test_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        print("Input data:", test_data)
        print("Predicted output:", model.forward(test_data))

Epoch 0, Loss: 0.2540284888880736
Epoch 100, Loss: 0.2481118936377926
Epoch 200, Loss: 0.24707045881310424
Epoch 300, Loss: 0.2459173217627494
Epoch 400, Loss: 0.24458841431077494
Epoch 500, Loss: 0.2430246840949699
Epoch 600, Loss: 0.24117169551706386
Epoch 700, Loss: 0.23897885970688926
Epoch 800, Loss: 0.23639776505085808
Epoch 900, Loss: 0.23338085662821298
Epoch 1000, Loss: 0.22988274121414448
Epoch 1100, Loss: 0.22586593026811916
Epoch 1200, Loss: 0.22131081917450088
Epoch 1300, Loss: 0.2162270455628404
Epoch 1400, Loss: 0.21066141124563656

Test the trained model:
Input data: [[0 0]
 [0 1]
 [1 0]
 [1 1]]
Predicted output: [[0.3011205 ]
 [0.56285686]
 [0.57052136]
 [0.59377489]]
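
Note that the loss is still falling at epoch 1400 and the predictions have not yet separated toward the XOR targets of 0 and 1. A minimal follow-up sketch (hypothetical, not part of the original lab) trains a fresh model for longer with a larger learning rate and thresholds the sigmoid outputs at 0.5; with this architecture the XOR loss typically drops well below 0.1, though sigmoid networks can occasionally stall in a poor local minimum depending on the random initialization:

In [ ]: # Hypothetical follow-up: train longer, then threshold the outputs at 0.5
        model2 = NeuralNetwork(input_size, hidden_size, output_size)
        model2.train(X, y, epochs=10000, learning_rate=0.5)

        # Round probabilities to hard 0/1 class labels
        predictions = (model2.forward(test_data) > 0.5).astype(int)
        print("Thresholded predictions:", predictions.ravel())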
