Tutorial Sheet-1
1. Can a single-neuron perceptron learn to correctly classify the following data, where each example consists of
three binary input values and a binary classification value: (111,1), (110,1), (011,1), (010,0), (000,0)?
Either show that this is impossible or construct such a perceptron.
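As a quick numerical check (not part of the original sheet), the perceptron learning rule can simply be run on these five patterns; if they are linearly separable it converges to a correct classifier. The initial weights, bias, and learning rate below are arbitrary choices.

```python
# Perceptron learning rule on the five patterns from problem 1.
# Initial weights, bias, and learning rate are arbitrary choices.
data = [((1, 1, 1), 1), ((1, 1, 0), 1), ((0, 1, 1), 1),
        ((0, 1, 0), 0), ((0, 0, 0), 0)]
w, b, eta = [0.0, 0.0, 0.0], 0.0, 1.0

for epoch in range(100):
    errors = 0
    for x, t in data:
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if y != t:                      # misclassified: apply the update rule
            errors += 1
            w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]
            b += eta * (t - y)
    if errors == 0:                     # a full pass with no mistakes
        break

print("weights:", w, "bias:", b, "errors in last pass:", errors)
```

If the loop exits with zero errors, the final weights define a separating hyperplane and answer the question constructively.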
2. A perceptron with a single neuron uses a unipolar binary activation function, weights w1 = 0.2 and w2 = -0.5, and a
threshold θ = -0.2. The input vector is x = [0, 1]^T, the desired output is 0.6, and the learning rate is 0.4. What is the
equation of the decision boundary after one iteration of the perceptron learning rule (PLR)?
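A sketch of one PLR iteration under one common convention (net = w·x − θ, output 1 if net ≥ 0, threshold trained as a weight on a constant input of −1); the sheet may intend a different threshold convention, so treat the numbers as illustrative.

```python
# One perceptron-learning-rule step for problem 2, assuming the convention
# net = w.x - theta, y = 1 if net >= 0 else 0 (other conventions exist).
w = [0.2, -0.5]
theta = -0.2
x = [0.0, 1.0]
t = 0.6          # desired output as given on the sheet
eta = 0.4

net = sum(wi * xi for wi, xi in zip(w, x)) - theta
y = 1.0 if net >= 0 else 0.0
e = t - y
w = [wi + eta * e * xi for wi, xi in zip(w, x)]
theta = theta - eta * e  # threshold acts as a weight on a constant input of -1

print("w =", w, "theta =", theta)
# decision boundary: w[0]*x1 + w[1]*x2 = theta
```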
3. Consider a single sigmoid threshold unit with three inputs x1, x2, and x3. The output y is
y = g(w0 + w1 x1 + w2 x2 + w3 x3), where g(z) = 1 / (1 + e^(-z)). We input values of either 0 or 1 for each of
these inputs. Assign values to the weights w0, w1, w2, and w3 so that the output of the sigmoid unit is greater
than 0.5 if and only if (x1 AND x2) OR x3.
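One candidate answer can be verified by brute force over all eight binary inputs. The specific weights below (w0 = −30, w1 = 20, w2 = 20, w3 = 40) are just one possible assignment, not the only one.

```python
import itertools
import math

# One possible weight assignment for problem 3; any weights for which
# w0 + w1*x1 + w2*x2 + w3*x3 > 0 exactly when (x1 AND x2) OR x3 will do.
w0, w1, w2, w3 = -30.0, 20.0, 20.0, 40.0

def g(z):
    return 1.0 / (1.0 + math.exp(-z))

ok = True
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    y = g(w0 + w1 * x1 + w2 * x2 + w3 * x3)
    target = (x1 and x2) or x3
    ok = ok and ((y > 0.5) == bool(target))
print("all eight cases match:", ok)
```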
4. In the network shown, the output of each unit is a constant C multiplied by the weighted sum of inputs.
Can any function represented by this network also be represented by a single unit ANN (Perceptron)? If
so, draw the equivalent perceptron, showing the weights and activation function. Otherwise, explain
why not.
[Figures: for problem 4, a two-layer network with inputs X1 and X2, hidden weights w1–w4, and output weights w5 and w6; for problem 5, a 4 × 4 grid of + and − regions on axes labelled 1–4.]
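Independently of the exact figure, the key fact behind problem 4 can be checked numerically: if every unit outputs C times a weighted sum of its inputs, a two-layer network computes C·w2ᵀ(C·W1 x) = (C²·w2ᵀW1) x, i.e. a single unit with the same activation and collapsed weights. All numbers below are made-up illustration values.

```python
import numpy as np

C = 2.0                                   # made-up constant for illustration
W1 = np.array([[0.3, -0.7], [1.2, 0.5]])  # hypothetical hidden-layer weights
w2 = np.array([0.4, -1.1])                # hypothetical output weights
x = np.array([0.9, -0.2])

hidden = C * (W1 @ x)          # each hidden unit: C times a weighted sum
two_layer = C * (w2 @ hidden)  # output unit: C times a weighted sum

w_eq = C * (w2 @ W1)           # collapsed weights for a single unit
single = C * (w_eq @ x)        # the same C-times-weighted-sum activation

print(two_layer, single)
```

The same algebra holds for any C, weights, and input, which is what the equivalence argument in the problem rests on.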
5. Design a two-layer neural network (with unipolar binary transfer functions) to classify the region shown into
+ and − regions.
6. Consider the simple, single-input, single-output neural network shown in Figure 1 below. Assuming
sigmoidal hidden-unit and linear output-unit activation functions, what values of the weights will
approximate the function in Figure 2?
[Figure 1: a 1–2–1 network with input x, bias inputs of 1, and weights w1–w7. Figure 2: the target function to approximate.]
7. In the figure shown below, three lines passing through the origin split the x-y plane into six regions. Is it
possible to design a two-layer network with a single neuron in the output layer which separates the shaded
regions from the other regions? The activation functions of the neurons are unipolar binary activation functions
(hardlim). If possible, design the network. If not, describe why not in one or two sentences.
[Figure: three lines through the origin dividing the plane into six regions, with some regions shaded.]
8. Apply the steepest descent method to the following function F(x) and find the value of x after one iteration. The
starting value is x0 = [−2, 1]^T and the learning rate is 0.05.
F(x) = x1² + 4x1x2 + 20x2² + x1 + x2 + 1
Will this algorithm converge to a solution or not? Justify your answer.
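A numerical sketch of one steepest-descent step for problem 8, using the gradient ∇F = [2x1 + 4x2 + 1, 4x1 + 40x2 + 1]^T, which follows directly from F; for a quadratic, the eigenvalues of the Hessian give the standard α < 2/λmax stability condition.

```python
import numpy as np

A = np.array([[2.0, 4.0], [4.0, 40.0]])   # Hessian of F
d = np.array([1.0, 1.0])                  # linear-term coefficients

def grad(x):
    # gradient of F(x) = x1^2 + 4*x1*x2 + 20*x2^2 + x1 + x2 + 1
    return A @ x + d

x0 = np.array([-2.0, 1.0])
alpha = 0.05
x1 = x0 - alpha * grad(x0)
print("x after one step:", x1)            # [-2.05, -0.65]

lam_max = np.linalg.eigvalsh(A).max()
print("stable iff alpha <", 2.0 / lam_max)
```

Comparing the given rate 0.05 with the printed 2/λmax limit settles the convergence question.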
9. What is the maximum stable learning rate for the function
F(x) = 5x1² − 6x1x2 + 5x2² + 4x1 + 4x2?
Sketch a contour plot of the function. Also consider the function F(x) = x1² + x2². Find the derivative of the function
at the point [0.5, 0.5]^T in the direction of the vector p = [−3, 4]^T. Also find the second derivative along the vector p.
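Both parts of problem 9 can be cross-checked numerically. For the quadratic, the maximum stable steepest-descent rate is 2/λmax of the Hessian; for the directional derivatives the sketch normalizes p, which is the usual convention (the sheet may intend the unnormalized form).

```python
import numpy as np

# Part 1: Hessian of F(x) = 5x1^2 - 6x1x2 + 5x2^2 + 4x1 + 4x2
H = np.array([[10.0, -6.0], [-6.0, 10.0]])
rate = 2.0 / np.linalg.eigvalsh(H).max()
print("max stable rate:", rate)

# Part 2: F(x) = x1^2 + x2^2 at [0.5, 0.5] along p = [-3, 4]
x = np.array([0.5, 0.5])
p = np.array([-3.0, 4.0])
u = p / np.linalg.norm(p)            # normalized direction
g = 2.0 * x                          # gradient [2x1, 2x2]
H2 = 2.0 * np.eye(2)                 # Hessian of x1^2 + x2^2
first = g @ u
second = u @ H2 @ u
print("first derivative along p:", first)
print("second derivative along p:", second)
```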
10. The following network uses logsig activation functions in the hidden layer and purelin in the output layer.
The initial weights are shown in the figure. The training sample is x = 1, with targets t1 = 0.8 and t2 = 0.6. Use
backpropagation to determine the hidden layer weights after one epoch. Assume a learning rate of 0.8.
[Figure: input x feeds hidden neurons 1 and 2 through w1 = 0.5 and w2 = 0.3; the hidden outputs feed output neurons 3 and 4 through w3 = 0.1, w4 = 0.4, w5 = 0.2, and w6 = 0.1, producing outputs y1 and y2.]
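The exact wiring of w3–w6 depends on the figure; the sketch below assumes hidden neuron 1 feeds the outputs via w3 and w4, while hidden neuron 2 feeds them via w5 and w6, with no biases and plain gradient descent on the squared error. Treat it as a template for the hand computation rather than the definitive answer.

```python
import math

def logsig(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed wiring (the figure may differ): hidden 1 -> outputs via w3, w4;
# hidden 2 -> outputs via w5, w6. No biases, one sample, learning rate 0.8.
w1, w2 = 0.5, 0.3            # input -> hidden
w3, w4, w5, w6 = 0.1, 0.4, 0.2, 0.1
x, t1, t2, eta = 1.0, 0.8, 0.6, 0.8

# forward pass
a1, a2 = logsig(w1 * x), logsig(w2 * x)
y1 = w3 * a1 + w5 * a2       # purelin output neurons
y2 = w4 * a1 + w6 * a2

# backward pass: purelin output deltas, logsig hidden deltas
d3, d4 = t1 - y1, t2 - y2
d1 = a1 * (1 - a1) * (w3 * d3 + w4 * d4)
d2 = a2 * (1 - a2) * (w5 * d3 + w6 * d4)

# hidden-layer weight updates (one epoch = one pass over the single sample)
w1 += eta * d1 * x
w2 += eta * d2 * x
print("updated hidden weights:", w1, w2)
```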
11. For the neural network shown, the hidden layer neurons have tansig activation functions and the output unit is
linear. The training set is shown in the table. The initial values of all weights are 0.5. Compute the value of
weight c after the first update, using backpropagation in batch mode, with a learning constant of 0.1.
12. The figure shows a two-layer neural network. The activation functions of the hidden layer neurons are tansig and
those of the output layer are linear (identity function). The learning rate is 0.4, w1 = w2 = w3 = w5 = 0.2,
w4 = w6 = w8 = w10 = 0.1, w7 = 0.5, and w9 = 0.4. Find the modified error of neuron 4. Find the weight w4 after
applying backpropagation for one epoch. (Penalty for showing steps which are not required.)