
practical-2

March 2, 2024

[1]: import numpy as np

[2]: def sigmoid(x):
         return 1 / (1 + np.exp(-x))

     def sigmoid_derivative(x):
         # Expects x to already be a sigmoid output, so this computes s * (1 - s)
         return x * (1 - x)

     # Input dataset: the four XOR patterns and their target outputs
     inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
     expected_output = np.array([[0], [1], [1], [0]])

     epochs = 10000
     lr = 0.1
     inputLayerNeurons, hiddenLayerNeurons, outputLayerNeurons = 2, 2, 1  # Corrected the error here
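
Since `sigmoid_derivative` is only ever applied to values that have already passed through `sigmoid`, it computes s(z) * (1 - s(z)) without re-evaluating the exponential. A quick finite-difference comparison (a hypothetical sanity cell, not part of the original notebook) confirms the identity at a sample point:

[ ]: # Hypothetical check: analytic derivative vs. a finite-difference estimate
     z = 0.5
     h = 1e-6
     analytic = sigmoid_derivative(sigmoid(z))               # s(z) * (1 - s(z))
     numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)   # central difference
     print(analytic, numeric)  # both print roughly 0.2350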

[3]: # Random weights and bias initialization
     hidden_weights = np.random.uniform(size=(inputLayerNeurons, hiddenLayerNeurons))
     hidden_bias = np.random.uniform(size=(1, hiddenLayerNeurons))
     output_weights = np.random.uniform(size=(hiddenLayerNeurons, outputLayerNeurons))
     output_bias = np.random.uniform(size=(1, outputLayerNeurons))
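
Because these draws are unseeded, every run starts from different weights, and the printed values below will not reproduce exactly. Calling `np.random.seed` before the initialization cell (an optional addition, not in the original run) pins them down:

[ ]: # Optional reproducibility: run this before the weight-initialization cell
     np.random.seed(42)  # any fixed integer works; 42 is an arbitrary choice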

[4]: print("Initial hidden weights:")
     print(hidden_weights)
     print("Initial hidden biases:")
     print(hidden_bias)
     print("Initial output weights:")
     print(output_weights)
     print("Initial output biases:")
     print(output_bias)

Initial hidden weights:
[[0.98547328 0.36156278]
 [0.89834258 0.68994383]]
Initial hidden biases:
[[0.40637087 0.99482403]]
Initial output weights:
[[0.92217867]
 [0.33421019]]
Initial output biases:
[[0.93382078]]
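
The printed arrays make the layer shapes concrete: a 2x2 hidden weight matrix, a 1x2 hidden bias, a 2x1 output weight matrix, and a 1x1 output bias. A quick assertion cell (an optional addition, not in the original notebook) documents those shapes:

[ ]: # Optional shape check for the 2-2-1 network
     assert hidden_weights.shape == (inputLayerNeurons, hiddenLayerNeurons)   # (2, 2)
     assert hidden_bias.shape == (1, hiddenLayerNeurons)                      # (1, 2)
     assert output_weights.shape == (hiddenLayerNeurons, outputLayerNeurons)  # (2, 1)
     assert output_bias.shape == (1, outputLayerNeurons)                      # (1, 1)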

[5]: # Training algorithm: each epoch runs a forward pass, backpropagation,
     # and a weight update; all three steps must sit inside the loop
     for i in range(epochs):  # Added 'i' as the iteration variable
         # Forward propagation
         hidden_layer_activation = np.dot(inputs, hidden_weights)
         hidden_layer_activation += hidden_bias
         hidden_layer_output = sigmoid(hidden_layer_activation)

         output_layer_activation = np.dot(hidden_layer_output, output_weights)
         output_layer_activation += output_bias
         predicted_output = sigmoid(output_layer_activation)

         # Backpropagation: error and deltas for each layer
         error = expected_output - predicted_output
         d_predicted_output = error * sigmoid_derivative(predicted_output)

         error_hidden_layer = d_predicted_output.dot(output_weights.T)
         d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)

         # Updating weights and biases (+= is a descent step because
         # error = expected_output - predicted_output)
         output_weights += hidden_layer_output.T.dot(d_predicted_output) * lr
         output_bias += np.sum(d_predicted_output, axis=0, keepdims=True) * lr
         hidden_weights += inputs.T.dot(d_hidden_layer) * lr
         hidden_bias += np.sum(d_hidden_layer, axis=0, keepdims=True) * lr
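
To watch convergence, the mean squared error over the four patterns can be computed once training finishes (a hypothetical diagnostic cell, not in the original notebook; a well-trained network drives this value toward zero):

[ ]: # Hypothetical diagnostic: mean squared error after training
     mse = np.mean(np.square(expected_output - predicted_output))
     print("Final MSE:", mse)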

[10]: print("Final hidden weights:")
      print(hidden_weights)
      print("Final hidden bias:")
      print(hidden_bias)
      print("Final output weights:")
      print(output_weights)
      print("Final output bias:")
      print(output_bias)

Final hidden weights:
[[0.98499546 0.3613384 ]
 [0.89787441 0.68970486]]
Final hidden bias:
[[0.40370383 0.99394531]]
Final output weights:
[[0.90981752]
 [0.32070904]]
Final output bias:
[[0.91687507]]

[11]: print("\nOutput from neural network after 10,000 epochs:", end=' ')
print(*np.squeeze(predicted_output))

Output from neural network after 10,000 epochs: 0.8495805633464564 0.874478376593645 0.8741471924116296 0.8876724776316712
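
The sigmoid outputs are continuous values in (0, 1); thresholding at 0.5 (a hypothetical post-processing cell, not in the original notebook) converts them to binary XOR predictions. A converged run produces outputs close to the targets [0, 1, 1, 0]:

[ ]: # Hypothetical post-processing: threshold sigmoid outputs at 0.5
     predictions = (predicted_output > 0.5).astype(int)
     print(predictions.ravel())  # a converged run prints [0 1 1 0]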

[ ]:
