ASSIGNMENT 3
Question 1
Problem Statement: Design an ADALINE model to implement the ANDNOT function. Train
the model using the delta learning rule and test its accuracy.
The ANDNOT function is defined as:
ANDNOT(A, B) = A AND ¬B
ANDNOT Function Truth Table:
A    B    ANDNOT(A, B)
0    0    0
0    1    0
1    0    1
1    1    0
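ANDNOT is linearly separable: only the input (1, 0) maps to 1, so a single line can separate the one positive case from the other three, and a single ADALINE is enough. As a quick sanity check, the table can be generated directly in Python:

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(a and not b))  # A AND (NOT B)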
Question:
1. Implement an ADALINE network using Python to learn the ANDNOT function.
2. Use the given truth table for training.
3. Initialize weights randomly and update them using the delta rule (stated after
this list).
4. Use a small learning rate (e.g., 0.1), iterate until the error is minimized, and
plot the error at each iteration.
5. Test the trained model on the given input values and display the results.
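For reference, the delta (LMS) rule used in the program below performs gradient descent on the squared error of the linear output y = w·x + b. After each training sample with target t, the update is

w ← w + η (t − y) x
b ← b + η (t − y)

where η is the learning rate. Unlike the perceptron rule, the error term uses the raw linear output, not a thresholded one.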
Program
import numpy as np
import matplotlib.pyplot as plt

# 1. Define training data
X = np.array([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
])
y = np.array([0, 0, 1, 0])  # ANDNOT outputs

# 2. Initialize weights and bias randomly
np.random.seed(42)  # For reproducibility
weights = np.random.randn(2)
bias = np.random.randn()
learning_rate = 0.1

# 3. Training parameters
epochs = 100
errors = []

# 4. Training using the Delta Rule
for epoch in range(epochs):
    total_error = 0
    for xi, target in zip(X, y):
        net_input = np.dot(xi, weights) + bias
        output = net_input  # Linear output for ADALINE
        error = target - output
        weights += learning_rate * error * xi
        bias += learning_rate * error
        total_error += error ** 2  # Squared error
    errors.append(total_error)
    # Early stopping if the error is sufficiently small
    if total_error < 0.001:
        break

# 5. Plot error over epochs
plt.plot(errors)
plt.title('Error over epochs')
plt.xlabel('Epochs')
plt.ylabel('Total Squared Error')
plt.grid(True)
plt.show()

# 6. Testing the trained model
print("Trained weights:", weights)
print("Trained bias:", bias)
print("\nTesting Results:")
for xi in X:
    net_input = np.dot(xi, weights) + bias
    output = 1 if net_input >= 0.5 else 0  # Threshold at 0.5
    print(f"Input: {xi} => Predicted: {output}")
Output
Question 2
Problem Statement: Implement a MADALINE network that learns the XNOR function using
two ADALINE units as the first layer and a hard limiter (majority function) as the output layer.
Train the network using the Least Mean Square (LMS) rule.
XNOR Function Truth Table:
A    B    XNOR(A, B)
0    0    1
0    1    0
1    0    0
1    1    1
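Unlike ANDNOT, XNOR is not linearly separable: no single line can place (0, 0) and (1, 1) on one side and (0, 1) and (1, 0) on the other, so one ADALINE cannot realize it, which is why a two-unit first layer is needed. The function itself is simply an equality test:

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(a == b))  # XNOR is 1 exactly when A == B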
Question:
1. Implement a MADALINE model in Python to learn the XNOR function.
2. Structure of the MADALINE:
- First Layer (ADALINE units): Two neurons with weight updates using the Delta
rule.
- Second Layer (Majority Rule): A hard limiter that decides the final output based
on the two ADALINE outputs (see the note after this list).
3. Training Phase:
- Initialize weights randomly.
- Use the Least Mean Square (LMS) rule to train the ADALINE units.
- Adjust weights until the error is minimized and plot the error at each iteration.
4. Testing Phase:
- Evaluate the trained MADALINE on the XNOR truth table inputs.
- Print the final outputs.
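A note on the second layer as implemented in the program below: with only two units, the "majority" hard limiter is realized as an agreement detector. It outputs 1 when the two linear activations agree in sign (both >= 0 or both < 0) and 0 otherwise, i.e.

final_output = 1 if (net1 >= 0) == (net2 >= 0) else 0

so the output layer itself computes an XNOR of the two units' sign bits.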
Program
import numpy as np
import matplotlib.pyplot as plt

# 1. Define training data
X = np.array([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
])
y = np.array([1, 0, 0, 1])  # XNOR outputs

# 2. Initialize weights and biases randomly for two ADALINE units
np.random.seed(42)  # Reproducibility
weights1 = np.random.randn(2)
bias1 = np.random.randn()
weights2 = np.random.randn(2)
bias2 = np.random.randn()
learning_rate = 0.1
epochs = 100
errors = []

# 3. Hard limiter (majority/agreement) function
def majority_rule(out1, out2):
    # Output 1 if the two activations agree in sign (both >= 0 or both < 0)
    if out1 >= 0 and out2 >= 0:
        return 1
    elif out1 < 0 and out2 < 0:
        return 1
    else:
        return 0

# 4. Training
for epoch in range(epochs):
    total_error = 0
    for xi, target in zip(X, y):
        # First layer outputs (linear activations)
        net1 = np.dot(xi, weights1) + bias1
        net2 = np.dot(xi, weights2) + bias2
        # Final output through the hard limiter
        final_output = majority_rule(net1, net2)
        # Error based on the final output
        error = target - final_output
        # Update each ADALINE unit separately, but only when the final
        # output is wrong; each unit is corrected toward the target
        # using its own thresholded output
        if error != 0:
            weights1 += learning_rate * (target - (1 if net1 >= 0 else 0)) * xi
            bias1 += learning_rate * (target - (1 if net1 >= 0 else 0))
            weights2 += learning_rate * (target - (1 if net2 >= 0 else 0)) * xi
            bias2 += learning_rate * (target - (1 if net2 >= 0 else 0))
        total_error += error ** 2  # Squared error
    errors.append(total_error)
    # Early stopping once all inputs are classified correctly
    if total_error == 0:
        break

# 5. Plot error over epochs
plt.plot(errors)
plt.title('Error over epochs')
plt.xlabel('Epochs')
plt.ylabel('Total Squared Error')
plt.grid(True)
plt.show()

# 6. Testing Phase
print("Trained weights and biases:")
print(f"Weights1: {weights1}, Bias1: {bias1}")
print(f"Weights2: {weights2}, Bias2: {bias2}\n")
print("Testing Results:")
for xi in X:
    net1 = np.dot(xi, weights1) + bias1
    net2 = np.dot(xi, weights2) + bias2
    final_output = majority_rule(net1, net2)
    print(f"Input: {xi} => Predicted: {final_output}")
Output