
BACKPROPAGATION IN NEURAL NETWORKS
Backpropagation is a process involved in training a neural network. It takes the error from a forward propagation and feeds this loss backward through the neural network's layers to fine-tune the weights.

Backpropagation is the essence of neural-net training. It is the practice of fine-tuning the weights of a neural net based on the error rate (i.e., the loss) obtained in the previous epoch (i.e., iteration). Proper tuning of the weights ensures lower error rates, making the model more reliable by improving its generalization.

To keep this example as focused as possible, we will only briefly touch on related concepts such as loss functions and optimization functions.
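Before the worked example, here is a minimal, self-contained sketch of the idea in Python: a forward pass, an error measurement, and a weight update repeated over epochs. The input, target, initial weight, and learning rate here are assumed toy values, not the numbers used in the example below.

import math

# Toy, assumed setup: one sigmoid neuron with a single weight.
x, target = 0.5, 0.2   # assumed input and target
w, lr = 0.8, 0.5       # assumed initial weight and learning rate

for epoch in range(3):
    out = 1 / (1 + math.exp(-(w * x)))             # forward pass
    loss = 0.5 * (target - out) ** 2               # squared-error loss
    grad = -(target - out) * out * (1 - out) * x   # chain rule: dE/dw
    w -= lr * grad                                 # fine-tune the weight
    print(f"epoch {epoch}: loss={loss:.6f}, w={w:.6f}")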

Advantages of Using the Backpropagation Algorithm in Neural Networks

● No previous knowledge of the neural network is needed, making the algorithm easy to implement.
● It is straightforward to program, since there are no parameters to tune apart from the inputs.
● It does not need prior knowledge of the features of the function being learned, which speeds up the process.
● The model is flexible because of its simplicity, and it is applicable to many scenarios.

Limitations of Using the Backpropagation Algorithm in Neural Networks
● Training data can impact the performance of the model, so high-quality
data is essential.
● Noisy data can also affect backpropagation, potentially tainting its
results.
● It can take a while to train backpropagation models and get them up to
speed.
● Backpropagation requires a matrix-based approach, which can lead to
other issues.
Input values

x1 = 0.05
x2 = 0.10

Initial weights

w1 = 0.15    w5 = 0.40
w2 = 0.20    w6 = 0.45
w3 = 0.25    w7 = 0.50
w4 = 0.30    w8 = 0.55

Bias values

b1 = 0.35    b2 = 0.60

Target values

T1 = 0.01
T2 = 0.99
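These givens can be transcribed directly into Python; the short snippets that follow reuse these names to verify each step of the walkthrough.

x1, x2 = 0.05, 0.10                      # input values
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30  # weights into the hidden layer
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55  # weights into the output layer
b1, b2 = 0.35, 0.60                      # bias values
T1, T2 = 0.01, 0.99                      # target values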
Now, we first calculate the values of H1 and H2 by a forward pass.

Forward Pass

To find the value of H1, we multiply the inputs by their weights and add the bias b1:

H1=x1×w1+x2×w2+b1

H1=0.05×0.15+0.10×0.20+0.35

H1=0.3775

To calculate the final output of H1, we apply the sigmoid function:

out_H1 = 1/(1 + e^(-0.3775)) = 0.593269992

We will calculate the value of H2 in the same way as H1

H2=x1×w3+x2×w4+b1

H2=0.05×0.25+0.10×0.30+0.35

H2=0.3925

To calculate the final output of H2, we apply the sigmoid function:

out_H2 = 1/(1 + e^(-0.3925)) = 0.596884378
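The hidden-layer forward pass can be checked in Python, continuing from the constants defined above:

import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

net_H1 = x1 * w1 + x2 * w2 + b1   # 0.3775
net_H2 = x1 * w3 + x2 * w4 + b1   # 0.3925
out_H1 = sigmoid(net_H1)          # 0.593269992
out_H2 = sigmoid(net_H2)          # 0.596884378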


Now, we calculate the values of y1 and y2 in the same way as we calculated H1 and H2.

To find the value of y1, we multiply the input values, i.e., the outputs of H1 and H2, by their weights and add the bias b2:

y1=H1×w5+H2×w6+b2

y1=0.593269992×0.40+0.596884378×0.45+0.60

y1=1.10590597

To calculate the final output of y1, we apply the sigmoid function:

out_y1 = 1/(1 + e^(-1.10590597)) = 0.75136507

We will calculate the value of y2 in the same way as y1


y2=H1×w7+H2×w8+b2

y2=0.593269992×0.50+0.596884378×0.55+0.60

y2=1.2249214

To calculate the final output of y2, we apply the sigmoid function:

out_y2 = 1/(1 + e^(-1.2249214)) = 0.77292847
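The output-layer forward pass can be verified the same way, continuing from out_H1 and out_H2 above:

net_y1 = out_H1 * w5 + out_H2 * w6 + b2   # 1.10590597
net_y2 = out_H1 * w7 + out_H2 * w8 + b2   # 1.2249214
out_y1 = sigmoid(net_y1)                  # 0.75136507
out_y2 = sigmoid(net_y2)                  # 0.77292847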

Our target values are 0.01 and 0.99, but the outputs out_y1 and out_y2 do not match the target values T1 and T2.

Now, we will find the total error, which is simply the sum of the squared differences between the outputs and the target outputs. The error at each output is calculated as

E = 1/2 × (target − output)²

so

E_y1 = 1/2 × (0.01 − 0.75136507)² = 0.274811083
E_y2 = 1/2 × (0.99 − 0.77292847)² = 0.023560026

So, the total error is

E_total = E_y1 + E_y2 = 0.274811083 + 0.023560026 = 0.298371109
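The same error calculation in Python, continuing from the outputs above:

E_y1 = 0.5 * (T1 - out_y1) ** 2   # 0.274811083
E_y2 = 0.5 * (T2 - out_y2) ** 2   # 0.023560026
E_total = E_y1 + E_y2             # 0.298371109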


Now, we will backpropagate this error to update the weights using a backward pass.

Backward Pass at the Output Layer

To update a weight, we calculate the error corresponding to that weight with the help of the total error. The error on a weight w is calculated by differentiating the total error with respect to w.

We work backward through the network, so we first consider the last weight, w5.


E_total does not contain w5 directly, so we cannot partially differentiate it with respect to w5 in a single step. Instead, we split the derivative into multiple terms using the chain rule, so that each term can be differentiated easily:

∂E_total/∂w5 = (∂E_total/∂out_y1) × (∂out_y1/∂y1) × (∂y1/∂w5)

Now, we calculate each term one by one:

∂E_total/∂out_y1 = −(T1 − out_y1) = −(0.01 − 0.75136507) = 0.74136507

∂out_y1/∂y1 = out_y1 × (1 − out_y1) = 0.75136507 × (1 − 0.75136507) = 0.186815602

∂y1/∂w5 = out_H1 = 0.593269992

Putting these values together, we find the final result:

∂E_total/∂w5 = 0.74136507 × 0.186815602 × 0.593269992 = 0.082167041
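The three chain-rule factors, computed in Python from the values above:

dE_dout_y1 = -(T1 - out_y1)            # 0.74136507
dout_y1_dnet = out_y1 * (1 - out_y1)   # 0.186815602
dnet_dw5 = out_H1                      # 0.593269992
dE_dw5 = dE_dout_y1 * dout_y1_dnet * dnet_dw5   # 0.082167041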

Now, we will calculate the updated weight w5new with the following formula, using a learning rate η = 0.5 (the value implied by the results below):

w5new = w5 − η × ∂E_total/∂w5 = 0.40 − 0.5 × 0.082167041 = 0.35891648
In the same way, we calculate w6new, w7new, and w8new, which gives us the following values:

w5new=0.35891648

w6new=0.408666186

w7new=0.511301270

w8new=0.561370121
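These four updates can be reproduced in Python. The learning rate of 0.5 is inferred from the results above (0.40 − 0.5 × 0.082167041 = 0.35891648); the gradients for w6, w7, and w8 follow the same chain rule, pairing each output's error terms with the corresponding hidden activation.

eta = 0.5                              # learning rate implied by the results
dE_dout_y2 = -(T2 - out_y2)            # error term at the second output
dout_y2_dnet = out_y2 * (1 - out_y2)   # sigmoid derivative at y2

w5_new = w5 - eta * dE_dw5                              # 0.35891648
w6_new = w6 - eta * dE_dout_y1 * dout_y1_dnet * out_H2  # 0.408666186
w7_new = w7 - eta * dE_dout_y2 * dout_y2_dnet * out_H1  # 0.511301270
w8_new = w8 - eta * dE_dout_y2 * dout_y2_dnet * out_H2  # 0.561370121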
