
Tutorial-2

1. The following network uses logsig (sigmoid, or unipolar continuous) activation functions in the hidden layer and purelin in the output layer. The initial weights are shown in the figure. The training samples are x = 1, t1 = 0.8, and t2 = 0.6. Use backpropagation to determine the hidden-layer weights after one epoch. Assume a learning rate of 0.8. Neurons 1 and 2 are hidden neurons, and neurons 3 and 4 are output neurons.

[Figure: a 1-2-2 feedforward network. Input x feeds hidden neurons 1 and 2 through w1 = 0.5 and w2 = 0.3. Hidden neurons 1 and 2 feed output neurons 3 and 4 (outputs y1 and y2) through w3 = 0.2, w4 = 0.4, w5 = 0.1, and w6 = 0.1.]
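For reference, one backpropagation pass can be sketched in numpy as below. This is a sketch of the technique, not the worked solution: since the figure's cross-connections are ambiguous in the extracted text, an output-weight layout of W2 = [[w3, w4], [w5, w6]] and a no-bias network are assumed.

```python
import numpy as np

# One backprop pass on the 1-2-2 network above -- a sketch, not the worked
# solution. Assumed wiring (the figure's cross-connections are ambiguous):
# W1 holds the input->hidden weights (w1, w2); W2 holds the hidden->output
# weights with row i feeding output neuron i, i.e. W2 = [[w3, w4], [w5, w6]].
# No biases are assumed, since the figure shows none.

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

x = 1.0
t = np.array([0.8, 0.6])               # targets t1, t2
lr = 0.8                               # learning rate

W1 = np.array([0.5, 0.3])              # input -> hidden
W2 = np.array([[0.2, 0.4],
               [0.1, 0.1]])            # hidden -> output (assumed layout)

# Forward pass: logsig hidden layer, purelin (linear) output layer.
a1 = logsig(W1 * x)                    # hidden activations
a2 = W2 @ a1                           # network outputs y1, y2

# Backward pass with squared-error loss.
s2 = -2.0 * (t - a2)                   # output sensitivity (purelin: f' = 1)
s1 = a1 * (1.0 - a1) * (W2.T @ s2)     # hidden sensitivity (logsig: f' = a(1-a))

# Gradient-descent weight updates.
W2 -= lr * np.outer(s2, a1)
W1 -= lr * s1 * x

print("hidden-layer weights after one step:", W1)
```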

2. A 2-dimensional data set is shown in the matrix below. Use PCA to reduce the dimension from 2 to 1.

 6 − 4
− 3 5 
 
− 2 6 
 
 7 − 3

3. Consider the following 3 points in 2-d space: (1, 1), (2, 2), (3, 3). What is the first principal component? Suppose we now project the original data points onto the 1-d space defined by the principal component computed in the first part; what is the variance of the projected data?
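A quick numeric check for this problem: the three points are collinear along x2 = x1, so the first principal component should point along [1, 1]/sqrt(2). The sketch below prints the sample variance (ddof=1); the population convention gives a smaller value, so use whichever your course assumes.

```python
import numpy as np

# Numeric check: (1,1), (2,2), (3,3) are collinear along x2 = x1, so the
# first principal component should point along [1, 1]/sqrt(2). The printed
# variance uses the sample convention (ddof=1); the population convention
# (ddof=0) gives a smaller number.
P = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
Pc = P - P.mean(axis=0)                # center the points

C = Pc.T @ Pc / (len(P) - 1)           # sample covariance
vals, vecs = np.linalg.eigh(C)
pc1 = vecs[:, -1]                      # first principal component

proj = Pc @ pc1                        # 1-d projections onto pc1
print("first PC:", pc1)
print("variance of projected data:", proj.var(ddof=1))
```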

4. Compare the performance of Newton's method and gradient descent on the function $F(x) = \frac{1}{2}x_1^2 + \frac{1}{2}x_2^2 - x_1 x_2$ from the initial guess $x_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. Find the nature of the equilibrium point.
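The comparison can be explored with the short sketch below. The step size 0.1 and the 25 iterations are illustrative assumptions; the key observation is that this quadratic's Hessian is singular, which is the crux of the Newton-versus-gradient-descent comparison.

```python
import numpy as np

# F(x) = 0.5*x1^2 + 0.5*x2^2 - x1*x2, x0 = [1, 0]. A sketch: the step size
# 0.1 and the iteration count are illustrative assumptions.
def grad(x):
    return np.array([x[0] - x[1], x[1] - x[0]])

H = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])           # constant Hessian of the quadratic

# Eigenvalues 0 and 2: H is singular and positive semidefinite, so the
# stationary points form the whole line x1 = x2 (a weak minimum), and the
# plain Newton step -inv(H) @ grad is undefined.
print("Hessian eigenvalues:", np.linalg.eigvalsh(H))

x = np.array([1.0, 0.0])
for _ in range(25):
    x = x - 0.1 * grad(x)              # gradient descent step
print("gradient descent reaches:", x)  # converges toward the line x1 = x2
```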

5. For the function $F(x) = x_1^2 - x_1^2 x_2^2 + x_1 x_2$, perform one iteration of Newton's method from the initial guess $x_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

Find the second-order Taylor series expansion of $F(x)$ about $x_0$. Is this quadratic function minimized at the point found in the first part?
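The Newton step and the Taylor-expansion question can both be checked with the numpy sketch below; the gradient and Hessian are hand-derived from F as reconstructed above, so verify them against your own derivation.

```python
import numpy as np

# One Newton iteration for F(x) = x1^2 - x1^2*x2^2 + x1*x2 from x0 = [1, 1].
# The gradient and Hessian are hand-derived from F; this is a sketch to
# check a by-hand iteration, not a substitute for it.
def grad(x):
    x1, x2 = x
    return np.array([2*x1 - 2*x1*x2**2 + x2,
                     x1 - 2*x1**2*x2])

def hess(x):
    x1, x2 = x
    return np.array([[2 - 2*x2**2,  1 - 4*x1*x2],
                     [1 - 4*x1*x2, -2*x1**2   ]])

x0 = np.array([1.0, 1.0])
x_new = x0 - np.linalg.solve(hess(x0), grad(x0))   # Newton step
print("after one Newton iteration:", x_new)

# The Newton step lands at the stationary point of the second-order Taylor
# expansion F(x0) + g.T @ d + 0.5 * d.T @ H @ d; that quadratic is minimized
# there only if H is positive definite, so check the eigenvalues:
print("Hessian eigenvalues at x0:", np.linalg.eigvalsh(hess(x0)))
```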
