Practical 2
March 2, 2024
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Expects the sigmoid activation a = sigmoid(x); d/dx sigmoid(x) = a * (1 - a)
    return x * (1 - x)
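A point worth noting: sigmoid_derivative takes the already-computed activation a = sigmoid(x), for which da/dx = a(1 - a). A minimal sanity check of that identity against a numerical derivative (the step size and tolerance are arbitrary choices):

# Verify that sigmoid_derivative(sigmoid(x)) matches a central-difference estimate.
x = np.linspace(-5, 5, 11)
a = sigmoid(x)
analytic = sigmoid_derivative(a)  # pass the activation, not the raw input
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6
assert np.allclose(analytic, numeric, atol=1e-6)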
# XOR input patterns and their expected outputs
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
expected_output = np.array([[0], [1], [1], [0]])

epochs = 10000
lr = 0.1  # learning rate
# 2-2-1 network: two inputs, two hidden neurons, one output
inputLayerNeurons, hiddenLayerNeurons, outputLayerNeurons = 2, 2, 1

# Random initialization of weights and biases
hidden_weights = np.random.uniform(size=(inputLayerNeurons, hiddenLayerNeurons))
hidden_bias = np.random.uniform(size=(1, hiddenLayerNeurons))
output_weights = np.random.uniform(size=(hiddenLayerNeurons, outputLayerNeurons))
output_bias = np.random.uniform(size=(1, outputLayerNeurons))
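The initial values printed below come from unseeded draws, so they change on every run. If reproducible runs are wanted, a minimal sketch of a seeded variant (assuming NumPy 1.17+ for default_rng; the seed 42 is arbitrary):

# Hypothetical seeded variant for reproducible initialization.
rng = np.random.default_rng(42)
hidden_weights = rng.uniform(size=(inputLayerNeurons, hiddenLayerNeurons))
hidden_bias = rng.uniform(size=(1, hiddenLayerNeurons))
output_weights = rng.uniform(size=(hiddenLayerNeurons, outputLayerNeurons))
output_bias = rng.uniform(size=(1, outputLayerNeurons))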
Initial hidden weights:
[[…]
 [0.89834258 0.68994383]]
Initial hidden biases:
[[0.40637087 0.99482403]]
Initial output weights:
[[0.92217867]
 [0.33421019]]
Initial output biases:
[[0.93382078]]
# Training loop: forward pass, backpropagation, and weight updates
for _ in range(epochs):
    # Forward pass
    hidden_layer_output = sigmoid(np.dot(inputs, hidden_weights) + hidden_bias)
    predicted_output = sigmoid(np.dot(hidden_layer_output, output_weights) + output_bias)

    # Backpropagation
    error = expected_output - predicted_output
    d_predicted_output = error * sigmoid_derivative(predicted_output)
    error_hidden_layer = d_predicted_output.dot(output_weights.T)
    d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)

    # Update weights and biases
    output_weights += hidden_layer_output.T.dot(d_predicted_output) * lr
    output_bias += np.sum(d_predicted_output, axis=0, keepdims=True) * lr
    hidden_weights += inputs.T.dot(d_hidden_layer) * lr
    hidden_bias += np.sum(d_hidden_layer, axis=0, keepdims=True) * lr
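One optional check after the loop finishes: the mean-squared error over the four XOR patterns, sketched here with the variables defined above, should be close to zero once training has converged:

# Mean-squared error of the trained network on the four XOR patterns.
mse = np.mean((expected_output - predicted_output) ** 2)
print(f"MSE after {epochs} epochs: {mse:.4f}")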
Final hidden bias:
[[0.40370383 0.99394531]]
Final output weights:
[[0.90981752]
 [0.32070904]]
Final output bias:
[[0.91687507]]
[11]: print("\nOutput from neural network after 10,000 epochs:", end=' ')
print(*np.squeeze(predicted_output))
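The printed values are sigmoid activations, not hard labels; a minimal sketch (reusing predicted_output and the variables above) that thresholds them at 0.5 to recover binary XOR predictions:

# Threshold sigmoid outputs at 0.5 to obtain hard 0/1 XOR predictions.
binary_predictions = (predicted_output > 0.5).astype(int)
for x, y_hat, y in zip(inputs, binary_predictions, expected_output):
    print(f"input {x} -> predicted {y_hat[0]} (expected {y[0]})")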