Solution Manual: Neural Networks and Learning Machines (Haykin)


CHAPTER 1
Rosenblatt’s Perceptron

Problem 1.1

(1) If wᵀ(n)x(n) > 0, then y(n) = +1.


If also x(n) belongs to C1, then d(n) = +1.
Under these conditions, the error signal is
e(n) = d(n) - y(n) = 0
and from Eq. (1.22) of the text:
w(n + 1) = w(n) + ηe(n)x(n) = w(n)
This result is the same as line 1 of Eq. (1.5) of the text.

(2) If wᵀ(n)x(n) < 0, then y(n) = -1.


If also x(n) belongs to C2, then d(n) = -1.
Under these conditions, the error signal e(n) remains zero, and so from Eq. (1.22)
we have
w(n + 1) = w(n)
This result is the same as line 2 of Eq. (1.5).

(3) If wᵀ(n)x(n) > 0 and x(n) belongs to C2, we have


y(n) = +1
d(n) = -1
The error signal e(n) is -2, and so Eq. (1.22) yields
w(n + 1) = w(n) - 2ηx(n)
which has the same form as the first line of Eq. (1.6), except for the scaling factor 2.

(4) Finally, if wᵀ(n)x(n) < 0 and x(n) belongs to C1, then


y(n) = -1
d(n) = +1
In this case, the use of Eq. (1.22) yields
w(n + 1) = w(n) + 2ηx(n)
which has the same mathematical form as line 2 of Eq. (1.6), except for the scaling
factor 2.
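
The rule can also be exercised numerically. The following is a minimal NumPy sketch, not taken from the textbook; the learning rate η = 0.1 and the vectors w and x are arbitrary illustration values.

```python
import numpy as np

def perceptron_step(w, x, d, eta=0.1):
    """One application of Eq. (1.22): w(n+1) = w(n) + eta * e(n) * x(n),
    with y(n) = sgn(w(n)^T x(n)) and e(n) = d(n) - y(n)."""
    y = 1.0 if np.dot(w, x) > 0 else -1.0   # perceptron output
    e = d - y                               # error signal
    return w + eta * e * x

# Illustrative values: here w^T x = 0.3 > 0, so y = +1.
w = np.array([0.5, -0.2])
x = np.array([1.0, 1.0])

print(perceptron_step(w, x, d=+1))   # case (1): e = 0, w is unchanged
print(perceptron_step(w, x, d=-1))   # case (3): e = -2, w moves away from x by 2*eta*x
```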

Problem 1.2

The output signal is defined by

y = tanh(v/2)
  = tanh(b/2 + (1/2) ∑_i w_i x_i)

Equivalently, we may write


b + ∑_i w_i x_i = y′        (1)

where

y′ = 2 tanh⁻¹(y)

Equation (1) is the equation of a hyperplane.
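
The equivalence between the tanh output and the hyperplane of Eq. (1) can be checked numerically. The sketch below uses arbitrary weights, bias, and input; none of these values come from the text.

```python
import numpy as np

def tanh_perceptron(x, w, b):
    """Output y = tanh(v/2), where v = b + sum_i w_i * x_i."""
    v = b + np.dot(w, x)
    return np.tanh(v / 2.0)

w, b = np.array([1.0, -2.0]), 0.5      # illustrative parameters
x = np.array([0.3, 0.7])
y = tanh_perceptron(x, w, b)

# Inverting the activation recovers Eq. (1): b + w^T x = 2*artanh(y) = y'
print(b + np.dot(w, x), 2.0 * np.arctanh(y))   # both expressions give the same value
```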

Problem 1.3

(a) AND operation: Truth Table 1


Inputs        Output
x1    x2      y
1     1       1
0     1       0
1     0       0
0     0       0

This operation may be realized using the perceptron of Fig. 1:

Figure 1 (Problem 1.3): a perceptron with inputs x1 and x2, weights w1 = w2 = 1, bias b = -1.5, and a hard limiter producing the output y.

The hard limiter input is

v = w1 x1 + w2 x2 + b
  = x1 + x2 - 1.5

If x1 = x2 = 1, then v = 0.5, and y = 1


If x1 = 0, and x2 = 1, then v = -0.5, and y = 0
If x1 = 1, and x2 = 0, then v = -0.5, and y = 0
If x1 = x2 = 0, then v = -1.5, and y = 0

These conditions agree with truth table 1.
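
The check above is easy to reproduce in code. The sketch below assumes a hard limiter that outputs 1 for v > 0 and 0 otherwise, matching the truth-table convention used here.

```python
def and_perceptron(x1, x2, w1=1.0, w2=1.0, b=-1.5):
    """AND perceptron of Fig. 1: v = w1*x1 + w2*x2 + b; y = 1 if v > 0, else 0."""
    v = w1 * x1 + w2 * x2 + b
    return 1 if v > 0 else 0

for x1 in (1, 0):
    for x2 in (1, 0):
        print(x1, x2, and_perceptron(x1, x2))   # reproduces Truth Table 1
```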

OR operation: Truth Table 2


Inputs        Output
x1    x2      y
1     1       1
0     1       1
1     0       1
0     0       0

The OR operation may be realized using the perceptron of Fig. 2:

Figure 2 (Problem 1.3): a perceptron with inputs x1 and x2, weights w1 = w2 = 1, bias b = -0.5, and a hard limiter producing the output y.

In this case, the hard limiter input is

v = x1 + x2 - 0.5

If x1 = x2 = 1, then v = 1.5, and y = 1


If x1 = 0, and x2 = 1, then v = 0.5, and y = 1
If x1 = 1, and x2 = 0, then v = 0.5, and y = 1
If x1 = x2 = 0, then v = -0.5, and y = 0

These conditions agree with truth table 2.
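
The same sketch with the bias changed to b = -0.5 reproduces Truth Table 2:

```python
def or_perceptron(x1, x2):
    """OR perceptron of Fig. 2: v = x1 + x2 - 0.5; y = 1 if v > 0, else 0."""
    v = x1 + x2 - 0.5
    return 1 if v > 0 else 0

print([or_perceptron(x1, x2) for (x1, x2) in [(1, 1), (0, 1), (1, 0), (0, 0)]])   # [1, 1, 1, 0]
```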



COMPLEMENT operation: Truth Table 3


Input x     Output y
1           0
0           1

The COMPLEMENT operation may be realized as in Figure 3:

Figure 3 (Problem 1.3): a perceptron with a single input x, weight w = -1, bias b = 0.5, and a hard limiter producing the output y.

The hard limiter input is

v = wx + b = -x + 0.5

If x = 1, then v = -0.5, and y = 0


If x = 0, then v = 0.5, and y = 1

These conditions agree with truth table 3.
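
A corresponding single-input sketch, using the weight and bias from the equation above:

```python
def complement_perceptron(x):
    """COMPLEMENT perceptron of Fig. 3: v = -x + 0.5; y = 1 if v > 0, else 0."""
    v = -x + 0.5
    return 1 if v > 0 else 0

print(complement_perceptron(1), complement_perceptron(0))   # 0 1, matching Truth Table 3
```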

(b) EXCLUSIVE OR operation: Truth table 4


Inputs        Output
x1    x2      y
1     1       0
0     1       1
1     0       1
0     0       0

This operation is not linearly separable, and therefore it cannot be solved by a single perceptron.

Problem 1.4

The Gaussian classifier consists of a single unit with a single weight and zero bias, determined in
accordance with Eqs. (1.37) and (1.38) of the textbook, respectively, as follows:

w = (1/σ²)(µ1 - µ2)
  = -20

b = (1/(2σ²))(µ2² - µ1²)
  = 0
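
A numerical sketch of these two formulas. The means and variance below (µ1 = -10, µ2 = +10, σ² = 1) are assumptions chosen only so that the stated values w = -20 and b = 0 are reproduced; the actual distribution parameters are given in the problem statement of the textbook.

```python
# Assumed illustration values (not quoted from the problem statement):
mu1, mu2, sigma2 = -10.0, 10.0, 1.0

w = (mu1 - mu2) / sigma2                  # Eq. (1.37): w = (mu1 - mu2) / sigma^2
b = (mu2**2 - mu1**2) / (2.0 * sigma2)    # Eq. (1.38): b = (mu2^2 - mu1^2) / (2 sigma^2)

print(w, b)   # -20.0  0.0
```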

Problem 1.5

Using the condition

C = σ²I

in Eqs. (1.37) and (1.38) of the textbook, we get the following formulas for the weight vector and
bias of the Bayes classifier:

w = (1/σ²)(µ1 - µ2)

b = (1/(2σ²))(µ2ᵀµ2 - µ1ᵀµ1)
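
A vector-valued sketch of this classifier; the example means, variance, and test points are arbitrary and not from the text. The decision is taken from the sign of wᵀx + b.

```python
import numpy as np

def bayes_classifier(x, mu1, mu2, sigma2):
    """Bayes classifier for two Gaussian classes with C = sigma^2 * I:
    decide class 1 if w^T x + b > 0, else class 2."""
    w = (mu1 - mu2) / sigma2
    b = (np.dot(mu2, mu2) - np.dot(mu1, mu1)) / (2.0 * sigma2)
    return 1 if np.dot(w, x) + b > 0 else 2

mu1, mu2, sigma2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0]), 1.0
print(bayes_classifier(np.array([0.9, 1.2]), mu1, mu2, sigma2))     # 1
print(bayes_classifier(np.array([-1.1, -0.8]), mu1, mu2, sigma2))   # 2
```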
