Neural Networks Explained for Class 12

The document provides an overview of neural networks, explaining their structure, components, and functioning, including input, hidden, and output layers. It discusses various types of neural networks such as perceptrons, feedforward, convolutional, recurrent, and generative adversarial networks, along with their applications. Additionally, it highlights the future impact of neural networks on society, emphasizing advancements in deep learning, accuracy, efficiency, support for autonomous systems, and personalized experiences.


Grade XII Artificial Intelligence Term 2 Notes

UNIT 6: Understanding Neural Networks


What is a neural network?
A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions. The main advantage of neural networks is that they can extract features from data automatically, without needing any input from the programmer.
Neural networks make our work easier by auto-replying to emails, suggesting email replies, filtering spam, tagging images on Facebook, showing items of our interest on e-shopping portals, and much more. Among the best-known examples of neural networks are Google's search algorithm and chatbots.
PARTS OF A NEURAL NETWORK:
1. Input Layer: This layer consists of units representing the input fields. Each unit corresponds to a
specific feature or attribute of the problem being solved.
2. Hidden Layers: These layers, which may include one or more, are located between the input and
output layers. Each hidden layer contains nodes or artificial neurons, which process the input data.
These nodes are interconnected, and each connection has an associated weight.
The process of training deep neural networks is called Deep Learning. The term "deep" in deep learning refers to the number of hidden layers (also called the depth) of a neural network. A neural network with more than three layers (counting the input and output layers) can be considered a deep learning algorithm.
3. Output Layer: This layer consists of one or more units representing the target field(s). The output
units generate the final predictions or outputs of the neural network. Each node is connected to others,
and each connection is assigned a weight. If the output of a node exceeds a specified threshold value,
the node is activated, and its output is passed to the next layer of the network. Otherwise, no data is
transmitted to the subsequent layer.
Diagram of neural network
COMPONENTS OF A NEURAL NETWORK

1. Neurons:

• Neurons (also known as nodes) are the fundamental building blocks of a neural network.

• They receive inputs from other neurons or external sources.

• Each neuron computes a weighted sum of its inputs, applies an activation function, and produces an
output.

2. Weights:

• Weights represent the strength of connections between neurons.

• Each connection (synapse) has an associated weight.

• Weights convey the importance of that feature in predicting the final output.

• During training, neural networks learn optimal weights to minimize error.

3. Activation Functions:

• Activation functions in neural networks are like decision-makers for each neuron: they decide whether a neuron should be activated (send a signal) or not, based on the input it receives.

• Common activation functions include the Sigmoid function, the Tanh function, and ReLU (Rectified Linear Unit). ReLU is a simple and computationally efficient non-linear function used in deep learning that outputs the input directly if it is positive, and zero for any negative input. Mathematically, it is defined as f(x) = max(0, x).

• These functions help neural networks learn and make decisions by adding non-linearities to the model, allowing them to understand complex patterns in data.
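As an illustration, the three activation functions named above can be written in a few lines of Python using only the standard library:

```python
import math

def sigmoid(x):
    """Squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes any input into the range (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Outputs the input if positive, otherwise zero: f(x) = max(0, x)."""
    return max(0.0, x)

print(relu(2.5))    # 2.5
print(relu(-1.3))   # 0.0
print(sigmoid(0))   # 0.5
```

Note how only ReLU passes positive values through unchanged; sigmoid and tanh compress every input into a fixed range.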

4. Bias:

• Bias terms are constants added to the weighted sum before applying the activation function.
• They allow the network to shift the activation function left or right, so a neuron can still fire even when the weighted sum of its inputs alone would not cross the threshold.
5. Connections:

• Connections represent the synapses between neurons.


• Each connection has an associated weight, which determines its influence on the output of the
connected neurons.
• Biases (constants) are also associated with each neuron, affecting its activation threshold.
6. Learning Rule:
• Neural networks learn by adjusting their weights and biases.
• The learning rule specifies how these adjustments occur during training.
• Backpropagation, a common learning algorithm, computes gradients and updates weights to minimize
the network’s error.
7. Propagation Functions:

• These functions define how signals propagate through the network during both forward and backward
passes. Forward pass is known as Forward Propagation and backward pass is known as Back
propagation.
• Forward Propagation
In forward propagation, input data flows through the network layers and activations are computed. The predicted output is then compared to the actual target (ground truth), resulting in an error (loss).
• Back Propagation
Back propagation (short for "backward propagation of errors") is the essence of neural network training. It is the practice of fine-tuning the weights of a neural network based on the error (loss) obtained in the previous epoch (iteration). Proper tuning of the weights lowers the error rate, making the model more reliable by improving its generalization and helping the network refine its predictions over time. In back propagation, gradients of the loss are propagated backwards through the network and used to update the weights with an optimization algorithm such as gradient descent.
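The forward and backward passes above can be sketched for a single linear neuron. This is a minimal illustration, not a full network: the dataset (which follows y = 2x + 1), the learning rate, and the epoch count are all invented for the example, and the "network" is just y = w*x + b trained by gradient descent on a squared-error loss.

```python
# Forward and backward passes for one linear neuron y = w*x + b,
# trained by gradient descent on a squared-error loss.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (input, target) pairs
w, b = 0.0, 0.0        # initial weight and bias
learning_rate = 0.05

for epoch in range(500):
    for x, target in data:
        y = w * x + b                    # forward pass: prediction
        error = y - target               # dL/dy for L = 0.5 * (y - target)^2
        w -= learning_rate * error * x   # backward pass: weight update
        b -= learning_rate * error       # bias update

print(round(w, 2), round(b, 2))  # converges towards w = 2.0, b = 1.0
```

Each pass through the data nudges w and b in the direction that reduces the error, which is exactly the "adjust weights and biases to minimize error" idea described above.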

WORKING OF A NEURAL NETWORK


The figure above illustrates that a neuron first combines its inputs linearly using weights 'w' and a bias 'b', expressed mathematically as w * x + b. Each input is assigned a weight to indicate its importance. The total input is calculated by multiplying each input by its weight and adding the results together, along with the bias. This total then goes through an activation function, which decides whether the node should "fire" (activate). If it fires, the output is passed to the next layer. This process repeats layer by layer; because data flows only forwards, such a network is called a feedforward network.
Example:
Let us work through a simple perceptron calculation and decide whether the output is active (1) or inactive (0).
Let the features be represented as x1, x2 and x3 as inputs.
Input Layer:
Feature 1, x1 = 2 (Priority 2)
Feature 2, x2 = 3 (Priority 3)
Feature 3, x3 = 1 (Priority 1)
(Just assigned priority)
Hidden Layer:
Weight 1, w1 = 0.4
Weight 2, w2 = 0.2
Weight 3, w3 = 0.6
(Weight assigned based on priority)
Let bias = 0.1
Let threshold = 3.0
Output: Using the formula (perceptron value):
∑wixi + bias = w1x1 + w2x2 + w3x3 + bias
= (0.4*2) + (0.2*3) + (0.6*1) + 0.1
= 0.8 + 0.6 + 0.6 + 0.1
= 2.1
Now, we apply the threshold value:
If output > threshold, then output = 1 (active)
If output ≤ threshold, then output = 0 (inactive)
In this case:
Output (2.1) < threshold (3.0)
So, the output of the hidden layer is:
Output = 0
This means that the neuron in the hidden layer is inactive.
Picture representation of above calculation
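The arithmetic above can be checked in a few lines of Python. This is a direct transcription of the numbers in the example, not a general perceptron implementation:

```python
# Perceptron calculation from the worked example.
inputs  = [2, 3, 1]        # x1, x2, x3
weights = [0.4, 0.2, 0.6]  # w1, w2, w3
bias = 0.1
threshold = 3.0

# Weighted sum: w1*x1 + w2*x2 + w3*x3 + bias
total = sum(w * x for w, x in zip(weights, inputs)) + bias
output = 1 if total > threshold else 0

print(round(total, 2))  # 2.1
print(output)           # 0 -> the neuron stays inactive
```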
Using Neural Network concept decide “Whether you should go for surfing or not” based on the
following scenario.

Imagine you are standing on the shore, contemplating whether to go for surfing or not. You glance out
at the ocean, assessing the conditions. Are the waves big and inviting? Is the line-up crowded with
fellow surfers? And perhaps most importantly, has there been any recent shark activity?
Note: The decision to go or not to go is our predicted outcome, ŷ or y-hat. (The estimated or predicted
values in a regression or other predictive model are termed the ŷ.)
If Output is 1, then Yes (go for surfing), else 0 implies No (do not go for surfing)
Solution:
1. Let us assume that there are three factors influencing your decision-making:
• Are the waves good? (Yes: 1, No: 0)
• Is the line-up empty? (Yes: 1, No: 0)
• Has there been a recent shark attack? (Yes: 0, No: 1)
2. Then, let us assume the following, giving us the following inputs:
• X1 = 1, since the waves are pumping
• X2 = 0, since the crowds are out
• X3 = 1, since there has not been a recent shark attack
3. Now, we need to assign some weights to determine importance. Larger weights signify that particular
variables are of greater importance to the decision or outcome.
• W1 = 5, since large swells do not come often (Priority 1)
• W2 = 2, since you are used to the crowds (Priority 3)
• W3 = 4, since you have a fear of sharks (Priority 2)
Let Bias= -3 and Threshold =3
Then
ŷ = (1*5) + (0*2) + (1*4) – 3 = 6
Since ŷ > threshold (6>3) output of this node would be 1. In this instance, you would go surfing.
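As a sketch, the surf decision can be reproduced in code with the same inputs, weights, bias, and threshold as above:

```python
# Surf decision: compute y-hat and compare against the threshold.
x = [1, 0, 1]   # good waves, crowded line-up, no recent shark attack
w = [5, 2, 4]   # importance of each factor
bias = -3
threshold = 3

y_hat = sum(wi * xi for wi, xi in zip(w, x)) + bias
decision = 1 if y_hat > threshold else 0

print(y_hat)     # 6
print(decision)  # 1 -> go surfing
```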

TYPES OF NEURAL NETWORKS


i) Standard Neural Network (Perceptron):

The perceptron, created by Frank Rosenblatt in 1958, is a simple neural network with a single layer of
input nodes fully connected to a layer of output nodes. It uses Threshold Logic Units (TLUs) as artificial
neurons.
Application: Useful for binary classification tasks like spam detection and basic decision-making.
ii) Feed Forward Neural Network (FFNN):

FFNNs, also known as multi-layer perceptrons (MLPs), have an input layer, one or more hidden layers, and an output layer. Data flows in only one direction, from input to output. They use activation functions and weights to process information in a forward manner. FFNNs handle noisy data well and are relatively straightforward to implement, making them versatile tools in various AI applications.
Application: used in tasks like image recognition, natural language processing (NLP), and regression.
iii) Convolutional Neural Network (CNN):
CNNs utilize filters to extract features from images, enabling robust recognition of patterns and objects. CNNs differ from standard neural networks by incorporating a three-dimensional arrangement of neurons, which is particularly effective for processing visual data.
Application: Dominant in computer vision for tasks such as object detection, image
recognition, style transfer, and medical imaging.

iv) Recurrent Neural Network (RNN):


RNNs are designed for sequential data and feature feedback connections that allow information to persist across time steps, letting data flow in a loop. If a prediction is wrong, the learning rate is used to make small adjustments during backpropagation, so the network gradually moves towards the right prediction.
Application: Used in NLP for language modeling, machine translation, chatbots, as well as in speech
recognition, time series prediction, and sentiment analysis.

v) Generative Adversarial Network (GAN):


GANs consist of two neural networks – a generator and a discriminator – trained simultaneously. The
generator creates new data instances, while the discriminator evaluates them for authenticity. They are
used for unsupervised learning and can generate new data samples. GANs are used for generating
realistic data, such as images and videos.
Application: Widely employed in generating synthetic data for various tasks like image generation, style
transfer, and data augmentation.
Future of neural networks and their impact on society:
The following are four ways in which neural networks will impact our society in the future.
i) By Advancing Deep Learning
ii) By Improving accuracy and efficiency
iii) By Supporting Autonomous system
iv) By Personalizing experience

i) Advancing Deep Learning


Future Outlook:

• Deep learning will move toward more efficient architectures like transformers and graph neural networks (GNNs).
• The emergence of self-supervised and unsupervised learning will reduce dependency on labeled data.
• Integration with neuroscience may lead to more brain-inspired models, enhancing cognitive abilities.

Societal Impact:

• Accelerates scientific discovery in medicine, physics, and biology.
• Enables breakthroughs in natural language understanding (e.g., real-time translation, summarization).
• May reshape education and workforce skills by introducing adaptive learning systems.

ii) Improving Accuracy and Efficiency

Future Outlook:

• Neural networks will become more energy-efficient through techniques like quantization, pruning, and neuromorphic computing.
• Enhanced training methods will reduce computational costs and environmental impact.
• Models will become more interpretable and robust, improving trustworthiness.

Societal Impact:

• Reliable AI systems will be used in critical sectors such as healthcare, law enforcement, and finance.
• Reduction in algorithmic bias will promote fairness and equality.
• Greater efficiency allows wider adoption in developing regions and smaller enterprises.

iii) Supporting Autonomous Systems

Future Outlook:

• Neural networks will power next-generation autonomous vehicles, drones, and robots.
• Advancements in sensor fusion, perception, and decision-making will enhance autonomy.
• Real-time learning will allow systems to adapt dynamically to new environments.

Societal Impact:

• Could lead to safer, more efficient transportation systems and delivery services.
• Robotics in agriculture, construction, and elder care will help address labor shortages.
• Raises ethical and regulatory challenges, particularly in accountability and safety.
iv) Personalizing Experience

Future Outlook:

• AI systems will deliver highly personalized content, from entertainment to education.
• Neural networks will drive emotion-aware and context-aware interactions.
• Personal assistants will evolve into proactive agents that anticipate user needs.

Societal Impact:

• Enhances user engagement in digital platforms, e-commerce, and learning environments.
• Could improve mental health support, fitness coaching, and lifestyle management.
• Risks include privacy erosion, addiction, and filter bubbles.

Multiple Choice Questions:


1. What is a neural network?
A. A biological network of neurons in the brain.
B. A type of computer hardware used for complex calculations.
C. A mathematical equation for linear regression.
D. A machine learning model inspired by the human brain.
2. What are neurons in a neural network?
A. Cells in the human brain.
B. Nodes that make up the layers of a neural network.
C. Mathematical functions that process inputs and produce outputs.
D. None of the above.
3. What is the role of activation functions in neural networks?
A. They determine the learning rate of the network.
B. They introduce non-linearity, allowing the network to learn complex patterns.
C. They control the size of the neural network.
D. They define the input features of the network.
4. What is backpropagation in neural networks?
A. The process of adjusting weights and biases to minimize the error in the network.
B. The process of propagating signals from the output layer to the input layer.
C. The process of initializing the weights and biases of the network.
D. The process of defining the architecture of the neural network.
5. Which type of neural network is commonly used for image recognition?
A. Feedforward neural network.
B. Convolutional neural network.
C. Recurrent neural network.
D. Perceptron.
6. How do neural networks learn from data?
A. By adjusting their weights and biases based on the error in their predictions.
B. By memorizing the training data.
C. By using a fixed set of rules.
D. By ignoring the input data.

Ans:
1. D. A machine learning model inspired by the human brain.
2. C. Mathematical functions that process inputs and produce outputs.
3. B. They introduce non-linearity, allowing the network to learn complex patterns.
4. A. The process of adjusting weights and biases to minimize the error in the network.
5. B. Convolutional neural network.
6. A. By adjusting their weights and biases based on the error in their predictions.

Competency Based Questions:


1. You are a data scientist working for a healthcare company, tasked with developing a neural network
model to predict the likelihood of a patient having a heart attack based on their medical history. The
dataset you have been provided with contains various features such as age, gender, family medical
history, blood pressure, cholesterol levels, and previous medical conditions. Your task is to determine
which type of neural network model would you build so that you can accurately predict whether a
patient is at risk of having a heart attack.
Ans: A Standard Neural Network (Perceptron), which is useful for binary classification tasks such as spam detection and basic decision-making. Here too, we have to predict whether a patient is at risk of having a heart attack or not (binary classification).
2. You are planning to go out for dinner and are deciding between two restaurants. You consider three factors: food quality, ambience, and distance. Using the neural network concept, determine the inputs and weights, compute the perceptron value, and make the decision.

Restaurant A (Ravis): High ambience, good food but far away from your home.
Let Bias = 3 and Threshold = 10
Inputs              Priority   Weight
X1 (food quality)       2         4
X2 (ambience)           1         6
X3 (distance)           3         2

Restaurant B (Royal): very low ambience, good food, and near your home.
Let Bias = 3 and Threshold = 10
Inputs              Priority   Weight
X1 (food quality)       2         4
X2 (ambience)           3         2
X3 (distance)           1         6
Case 1: If you want quality food (X1 = 1), good ambience (X2 = 1), and distance does not matter (X3 = 0):
Restaurant A perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 1 x 4 + 1 x 6 + 0 x 2 + 3 = 13
Restaurant B perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 1 x 4 + 1 x 2 + 0 x 6 + 3 = 9
You will prefer Ravis (perceptron value > threshold, 13 > 10).
Case 2: If you want quality food (X1 = 1), ambience does not matter (X2 = 0), and a short distance (X3 = 1):
Restaurant A perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 1 x 4 + 0 x 6 + 1 x 2 + 3 = 9
Restaurant B perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 1 x 4 + 0 x 2 + 1 x 6 + 3 = 13
You will prefer Royal (perceptron value > threshold, 13 > 10).
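Both cases can be verified with a small helper function. This is a sketch that simply transcribes the weights and inputs from the tables above:

```python
# Compute a perceptron value: weighted sum of inputs plus bias.
def perceptron(inputs, weights, bias):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

bias, threshold = 3, 10
weights_a = [4, 6, 2]  # Ravis: food quality, ambience, distance
weights_b = [4, 2, 6]  # Royal

# Case 1: quality food, good ambience, distance does not matter
case1 = [1, 1, 0]
print(perceptron(case1, weights_a, bias))  # 13 -> Ravis wins (13 > 10)
print(perceptron(case1, weights_b, bias))  # 9

# Case 2: quality food, no ambience, short distance
case2 = [1, 0, 1]
print(perceptron(case2, weights_a, bias))  # 9
print(perceptron(case2, weights_b, bias))  # 13 -> Royal wins (13 > 10)
```

The same helper works for the college comparison in the next question; only the weight lists and inputs change.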

3. You are deciding between two colleges and are considering three factors: academic reputation,
campus facilities, and tuition fees. Using a neural network concept: What would be the inputs and
weights? Apply the formula and calculate the outcome.

College A: good academic reputation, good campus facilities but high tuition fees.
Let Bias = 3 and Threshold = 10
Inputs                     Priority   Weight
X1 (academic reputation)       1         8
X2 (campus facilities)         2         4
X3 (tuition fees)              3         2

College B: poor academic reputation, good campus facilities, and very low tuition fees.
Let Bias = 3 and Threshold = 10
Inputs                     Priority   Weight
X1 (academic reputation)       3         2
X2 (campus facilities)         2         4
X3 (tuition fees)              1         8

Case 1: If you want a reputed college (X1 = 1), good campus facilities (X2 = 1), and fees do not matter (X3 = 0):
College A perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 1 x 8 + 1 x 4 + 0 x 2 + 3 = 15
College B perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 1 x 2 + 1 x 4 + 0 x 8 + 3 = 9
You will prefer College A (perceptron value > threshold, 15 > 10).

Case 2: If reputation does not matter (X1 = 0), facilities do not matter (X2 = 0), and you want low fees (X3 = 1):
College A perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 0 x 8 + 0 x 4 + 1 x 2 + 3 = 5
College B perceptron value: X1 x W1 + X2 x W2 + X3 x W3 + bias = 0 x 2 + 0 x 4 + 1 x 8 + 3 = 11
You will prefer College B (perceptron value > threshold, 11 > 10).

_________________________
