Deep Learning MCQ

Unit I: Introduction to Neural Networks

1. Which of the following serves as the main inspiration for neural networks?
- a) DNA
- b) Human brain
- c) Internet
- d) Quantum mechanics

2. What is a perceptron primarily used for?

- a) Multiclass classification
- b) Binary classification
- c) Regression
- d) Clustering

3. The output of a perceptron is typically:

- a) Continuous
- b) Binary
- c) Discrete
- d) Categorical

4. What kind of function is used to classify inputs in a perceptron?

- a) Non-linear function
- b) Linear function
- c) Polynomial function
- d) Exponential function

5. How does a perceptron update its weights during training?

- a) Backpropagation
- b) Gradient Descent
- c) Perceptron learning rule
- d) Genetic Algorithm
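
To make Q5 concrete, below is a minimal sketch of the perceptron learning rule, assuming NumPy; the helper name `perceptron_train`, the step activation, and the AND-gate data are illustrative choices, not part of any standard library.

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=10):
    """Perceptron learning rule: w += lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # step activation
            err = target - pred                # nonzero only on mistakes
            w += lr * err * xi
            b += lr * err
    return w, b

# Binary AND problem: linearly separable, so the rule converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])  # [0, 0, 0, 1]
```

Because AND is linearly separable the rule converges here; on XOR (Q21 below) it never will.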

6. What is the basic building block of a neural network?

- a) Node
- b) Perceptron
- c) Matrix
- d) Hyperplane

7. What are the inputs to a neuron in a neural network called?

- a) Signals
- b) Features
- c) Outputs
- d) Weights

8. Which of the following is NOT a hyperparameter in neural networks?

- a) Learning rate
- b) Number of hidden layers
- c) Biases
- d) Batch size
9. Which of the following is an activation function?
- a) Sigmoid
- b) Gradient Descent
- c) Softmax
- d) Rectified Linear Unit (ReLU)

10. Which activation function is suitable for binary classification problems?

- a) ReLU
- b) Sigmoid
- c) Tanh
- d) Softmax
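
A quick sketch of the sigmoid from Q10, assuming NumPy: it squashes any real score into (0, 1), so its output reads naturally as a class probability in binary classification.

```python
import numpy as np

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^(-z)), with outputs strictly between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5, the decision boundary
print(sigmoid(10.0))  # ~0.99995, a confident positive prediction
```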

11. Multiclass classification in perceptrons is achieved by:

- a) Multiple output neurons
- b) Backpropagation
- c) Single-layer perceptron
- d) Non-linear activation functions

12. What is the main purpose of activation functions in a neural network?

- a) Transform input signals
- b) Make neural networks differentiable
- c) Add non-linearity to the model
- d) Increase learning rate

13. A perceptron can classify data points that are:

- a) Linearly separable
- b) Non-linearly separable
- c) Polynomially separable
- d) Randomly distributed

14. The output of a neuron is determined by:

- a) Weight updates
- b) Input-output relationships
- c) Activation function
- d) Error rate

15. Neural networks can be simplified by:

- a) Reducing the number of neurons
- b) Adjusting the learning rate
- c) Making assumptions about data
- d) Using batch normalization

16. Which of the following is NOT a typical neural network parameter?

- a) Weights
- b) Biases
- c) Neurons
- d) Cost function

17. Which function is most suitable for regression tasks?

- a) ReLU
- b) Softmax
- c) Linear
- d) Sigmoid

18. How is bias used in neural networks?

- a) To shift the activation function
- b) To normalize data
- c) To minimize error
- d) To prevent overfitting

19. The perceptron algorithm aims to:

- a) Minimize the error rate
- b) Increase dimensionality
- c) Decrease convergence time
- d) Maximize classification accuracy

20. An assumption often made in neural networks is:

- a) Linearly separable data
- b) Non-linear relationships
- c) Equal weight distribution
- d) Batch learning

21. What kind of problems can a single-layer perceptron NOT solve?

- a) XOR problem
- b) AND problem
- c) OR problem
- d) NOT problem
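
Q21's XOR limitation can be checked by brute force: no linear rule of the form sign(w1*x1 + w2*x2 + b) labels all four XOR rows correctly. A small NumPy sketch, where the search grid is arbitrary:

```python
import numpy as np

# XOR truth table: the two positive points sit on one diagonal and the two
# negative points on the other, so no single line separates the classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

best = 0
for w1 in np.linspace(-2, 2, 21):
    for w2 in np.linspace(-2, 2, 21):
        for b in np.linspace(-2, 2, 21):
            pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
            best = max(best, int((pred == y).sum()))
print(best)  # 3: no linear rule in the grid gets all 4 rows right
```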

22. In neural networks, parameters are learned by:

- a) Forward propagation
- b) Backpropagation
- c) Activation functions
- d) Hyperparameter tuning

23. Which of the following activation functions is linear?

- a) Sigmoid
- b) Tanh
- c) Identity
- d) ReLU

24. A hyperparameter that determines how much the model adjusts during each update
is called:
- a) Momentum
- b) Learning rate
- c) Epoch
- d) Batch size

25. The weights of a neuron in a neural network represent:

- a) Output of the neuron
- b) Strength of the input signal
- c) Activation of the neuron
- d) Error rate

26. In a neural network, what is the function of the bias?

- a) Scale inputs
- b) Shift activation threshold
- c) Improve convergence
- d) Increase non-linearity

27. Neural networks are typically used for:

- a) Sorting algorithms
- b) Classification and regression
- c) Database management
- d) File compression

28. Which of the following is a supervised learning algorithm?

- a) Perceptron
- b) K-Means
- c) DBSCAN
- d) PCA

29. Perceptrons use which of the following techniques to make decisions?

- a) Decision trees
- b) Hyperplane separation
- c) Nearest neighbor
- d) Clustering

30. The output layer in a neural network for multiclass classification typically
uses:
- a) ReLU
- b) Softmax
- c) Sigmoid
- d) Tanh

31. What is one limitation of a perceptron?

- a) Can only classify linearly separable data
- b) Requires multiple layers
- c) Uses complex weight matrices
- d) Cannot handle large datasets

32. What happens if a neural network does not have an activation function?
- a) It becomes non-linear
- b) It becomes a simple linear model
- c) It has zero error
- d) It overfits the data

33. What is the term for updating the weights of a perceptron?

- a) Forward propagation
- b) Gradient descent
- c) Backpropagation
- d) Hebbian learning

34. Which of these is a property of the sigmoid activation function?

- a) Outputs values between -1 and 1
- b) Outputs values between 0 and 1
- c) Outputs values between -infinity and infinity
- d) Outputs only positive values

35. A parameter of a neural network is:

- a) Number of layers
- b) Weights and biases
- c) Number of epochs
- d) Batch size

36. The primary function of the learning rate is to:

- a) Control the size of weight updates
- b) Set the number of neurons
- c) Define the structure of the network
- d) Influence the activation function

37. Which of the following is an assumption made to simplify neural networks?

- a) All inputs are equally important
- b) Data is linearly separable
- c) Data is normalized
- d) Batch sizes are small

38. How are hyperparameters chosen in a neural network?

- a) Automatically through learning
- b) Manually through experimentation
- c) Based on output accuracy
- d) Fixed by the algorithm

39. The function of weights in a neural network is to:

- a) Define the importance of inputs
- b) Normalize data
- c) Control learning speed
- d) Ensure model accuracy

40. Which of the following activation functions is most suitable for multiclass
classification?
- a) ReLU
- b) Softmax
- c) Sigmoid
- d) Tanh
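
Since Q30 and Q40 both land on softmax, here is a short NumPy sketch of it: softmax turns a vector of raw class scores into a probability distribution that sums to 1, one probability per class.

```python
import numpy as np

def softmax(z):
    """Exponentiate and normalize so outputs are positive and sum to 1."""
    e = np.exp(z - np.max(z))  # subtracting the max improves numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw scores for three classes
print(softmax(scores))              # approx [0.659, 0.242, 0.099]
```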

---

Unit II: Applying Neural Networks

1. What does "flow of information" in a neural network refer to?

- a) Data transfer between layers
- b) Backpropagation
- c) Weight updates
- d) Model training

2. Which type of neural network is commonly used for image recognition tasks?
- a) Convolutional Neural Network (CNN)
- b) Recurrent Neural Network (RNN)
- c) Single Layer Perceptron
- d) Deep Belief Network

3. In image recognition, the count of pixels helps in:

- a) Measuring image size
- b) Defining input dimensions
- c) Weight initialization
- d) Error calculation

4. What is the key to learning in neural networks?

- a) Feedforward propagation
- b) Weight matrices
- c) Backpropagation
- d) Number of layers

5. Which algorithm is responsible for information flow in neural networks?

- a) Gradient Descent
- b) Feedforward
- c) Genetic Algorithm
- d) Support Vector Machine

6. What is the role of feedforward in a neural network?

- a) To propagate input data to the output layer
- b) To compute gradients
- c) To minimize the cost function
- d) To initialize weights

7. Vectorized feedforward is used to:

- a) Speed up the feedforward process
- b) Simplify backpropagation
- c) Increase accuracy
- d) Reduce overfitting

8. Which operation is optimized in vectorized feedforward implementation?

- a) Matrix multiplication
- b) Gradient descent
- c) Activation function
- d) Weight initialization

9. The image recognition process in neural networks relies on:

- a) Counting pixels
- b) Vectorizing images
- c) Feature extraction
- d) Gradient descent

10. What is the benefit of using matrix operations in feedforward propagation?

- a) Improves computation speed
- b) Reduces model complexity
- c) Increases model accuracy
- d) Enhances feature extraction
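
Q7 through Q10 circle the same point: a dense layer's forward pass is one matrix multiplication over the whole batch, with no per-neuron Python loop. A minimal NumPy sketch, with layer sizes and random data chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# One dense layer: 4 inputs feeding 3 neurons (sizes are illustrative).
W = rng.normal(size=(4, 3))  # weight matrix, shape (n_inputs, n_neurons)
b = np.zeros(3)              # one bias per neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(8, 4))  # a batch of 8 examples
A = sigmoid(X @ W + b)       # (8, 4) @ (4, 3) -> (8, 3), one matrix multiply
print(A.shape)               # (8, 3)
```

Note how the weight matrix shape answers Q12 below as well: it is fixed entirely by the neuron counts of the two layers it connects.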

11. In a neural network, the connection between two layers is represented by:
- a) Weights
- b) Biases
- c) Neurons
- d) Activation functions

12. How is dimensionality of weight matrices determined?

- a) By the number of neurons in each layer
- b) By input-output relationship
- c) By activation function
- d) By learning rate

13. Information flow between layers in a neural network is controlled by:

- a) Activation functions
- b) Weight matrices
- c) Loss functions
- d) Learning rate

14. The count of pixels in an image is directly related to:

- a) Input dimensions of the neural network
- b) Loss function complexity
- c) Weight matrix size
- d) Model accuracy

15. Feedforward propagation in a neural network is:

- a) A process of passing inputs through the network
- b) A method of updating weights
- c) A loss minimization technique
- d) A feature extraction method

16. Which of the following increases the efficiency of feedforward propagation?

- a) Vectorized implementation
- b) Small batch sizes
- c) Complex loss functions
- d) Higher learning rates

17. The process of image recognition in neural networks typically involves:

- a) Weight updates
- b) Pixel count analysis
- c) Matrix operations
- d) Backpropagation
18. Why are neural networks used in image recognition tasks?
- a) They can handle high-dimensional data like images
- b) They require no activation functions
- c) They minimize memory usage
- d) They avoid overfitting

19. What does "vectorization" help achieve in neural networks?

- a) Faster computation
- b) Improved accuracy
- c) Reduced weight updates
- d) Simplified activation functions

20. Which matrix is learned during the training of a neural network?

- a) Weight matrix
- b) Identity matrix
- c) Covariance matrix
- d) Correlation matrix

21. The weight matrix in a neural network connects:

- a) Input and output neurons
- b) Layers of neurons
- c) Neurons and biases
- d) Activation functions and outputs

22. Which neural network algorithm uses matrix multiplication for efficient
computation?
- a) Feedforward
- b) Gradient Descent
- c) Backpropagation
- d) Genetic Algorithm

23. How does feedforward propagation differ from backpropagation?

- a) Feedforward is the forward pass, backpropagation is the error correction
- b) Backpropagation computes output, feedforward computes weights
- c) Feedforward optimizes the loss function, backpropagation trains the model
- d) Backpropagation is used for feature extraction

24. Information flow in a neural network is primarily controlled by:

- a) Activation functions
- b) Weights and biases
- c) Input data
- d) Learning rate

25. Which of the following methods is used to represent the connection between
neurons?
- a) Weights
- b) Biases
- c) Activation functions
- d) Loss function
26. The count of pixels in an image helps define:
- a) Input layer size
- b) Learning rate
- c) Batch size
- d) Bias values
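
For Q26 (and Q3 and Q14 earlier), a short NumPy illustration of how pixel count fixes the input layer size; the 28x28 and 32x32x3 shapes are assumed examples, not requirements:

```python
import numpy as np

# A 28x28 grayscale image flattens to a 784-dimensional vector,
# so the input layer needs 784 neurons.
image = np.zeros((28, 28))
print(image.reshape(-1).shape)                  # (784,)

# With color channels the count multiplies: 32x32 RGB -> 3072 inputs.
print(np.zeros((32, 32, 3)).reshape(-1).shape)  # (3072,)
```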

27. The process of propagating input data to output through a neural network is
called:
- a) Feedforward
- b) Backpropagation
- c) Gradient Descent
- d) Feature extraction

28. What helps neural networks handle high-dimensional inputs like images?
- a) Matrix operations
- b) Feature engineering
- c) Sigmoid activation
- d) Dropout layers

29. How does the weight matrix affect the neural network's output?
- a) By adjusting the strength of the connections between neurons
- b) By normalizing inputs
- c) By modifying the learning rate
- d) By determining the loss function

30. The learning process in a neural network involves:

- a) Adjusting weight matrices
- b) Increasing input dimensions
- c) Optimizing the activation function
- d) Reducing the number of neurons

31. A common application of neural networks is:

- a) Image recognition
- b) Sorting algorithms
- c) Data compression
- d) Database queries

32. What is a key advantage of vectorized feedforward implementation?

- a) Speeds up computation
- b) Reduces model complexity
- c) Improves accuracy
- d) Minimizes error rate

33. Which of the following matrices is important for information flow in a neural
network?
- a) Weight matrix
- b) Covariance matrix
- c) Transition matrix
- d) Identity matrix
34. Information flow in a neural network is impacted by:
- a) Weight matrix dimensions
- b) Batch size
- c) Input data normalization
- d) Activation function type

35. Vectorized implementation in feedforward propagation is crucial for:

- a) Speeding up training
- b) Reducing dimensionality
- c) Simplifying activation functions
- d) Decreasing memory usage

36. Which component defines the connection between neurons in different layers?
- a) Weights
- b) Biases
- c) Input data
- d) Output layer

37. Why is vectorization used in feedforward propagation?

- a) To optimize matrix operations
- b) To handle non-linearity
- c) To minimize error
- d) To improve learning rate

38. How do weight matrices impact neural network performance?

- a) They determine the strength of neuron connections
- b) They simplify input data
- c) They ensure that the network is nonlinear
- d) They reduce the size of training data

39. What is the primary purpose of feedforward in a neural network?

- a) To propagate inputs through layers
- b) To minimize the loss function
- c) To update weights
- d) To compute gradients

40. How is dimensionality related to weight matrices in neural networks?

- a) It defines the number of parameters to learn
- b) It impacts the model accuracy
- c) It determines the activation function
- d) It affects the learning rate

---

Unit III: Training Neural Networks

1. What does training a neural network primarily involve?

- a) Adjusting weights and biases
- b) Increasing the number of neurons
- c) Changing the activation function
- d) Modifying the input data

2. The loss function in a neural network measures:

- a) The error between the predicted and actual values
- b) The number of neurons
- c) The strength of connections between layers
- d) The learning rate

3. Which of the following is a popular loss function for binary classification?

- a) Cross-entropy
- b) Mean squared error
- c) Hinge loss
- d) Categorical cross-entropy
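
A sketch of the binary cross-entropy loss from Q3, assuming NumPy; the `eps` clipping is a common guard against log(0), and the sample labels and predictions are made up:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean of -[y*log(p) + (1-y)*log(1-p)] over the batch."""
    p = np.clip(y_pred, eps, 1 - eps)  # keep log() finite
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.8, 0.4])      # confident, correct guesses cost little
print(binary_cross_entropy(y_true, y_pred))  # approx 0.34
```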

4. What does "backpropagation" aim to do in neural networks?

- a) Adjust weights to minimize the loss function
- b) Propagate inputs through the network
- c) Compute the output layer
- d) Initialize weights randomly

5. Which function is commonly used in the output layer of a binary classification network?

- a) Sigmoid
- b) ReLU
- c) Tanh
- d) Softmax

6. A neural network learns by updating:

- a) Weights and biases
- b) Activation functions
- c) Input data
- d) Neuron count

7. The complexity of the loss function depends on:

- a) The number of layers and neurons
- b) The input data size
- c) The optimization algorithm used
- d) The learning rate chosen

8. During the training phase, the neural network updates its weights to:
- a) Minimize the loss function
- b) Increase the accuracy
- c) Reduce the number of neurons
- d) Simplify input data

9. Which optimizer is commonly used to minimize the loss function during training?
- a) Gradient Descent
- b) Genetic Algorithm
- c) K-Means
- d) Support Vector Machines
10. The backpropagation algorithm works by computing:
- a) Gradients of the loss function with respect to weights
- b) The output layer
- c) Input values
- d) The activation function

11. What is the primary purpose of the sigmoid activation function in backpropagation?

- a) To introduce non-linearity
- b) To initialize weights
- c) To compute loss
- d) To speed up learning
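
Related to Q11: during backpropagation the chain rule needs each activation's local derivative, and the sigmoid's has the convenient closed form sigma'(z) = sigma(z) * (1 - sigma(z)). A NumPy sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 2.0])
s = sigmoid(z)
ds = s * (1 - s)  # local gradient used by the backward pass
print(ds)         # peaks at 0.25 when z = 0, shrinks in the saturated tails
```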

12. Training in batches during backpropagation is known as:

- a) Mini-batch gradient descent
- b) Full-batch gradient descent
- c) Online learning
- d) Batch normalization
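
A minimal sketch of the mini-batch gradient descent named in Q12, applied to a toy linear regression with mean squared error; the data, sizes, and learning rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(100, 3))
y = X @ true_w + 0.1 * rng.normal(size=100)  # noisy linear data

w = np.zeros(3)
lr, batch_size = 0.1, 10
for epoch in range(5):
    idx = rng.permutation(len(X))            # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        grad = 2 * X[batch].T @ (X[batch] @ w - y[batch]) / batch_size
        w -= lr * grad                       # one update per mini-batch
print(w)  # close to [1.0, -2.0, 0.5]
```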

13. What happens to the weight matrix during each iteration of training?
- a) It gets updated to minimize the loss
- b) It remains constant
- c) It is initialized randomly again
- d) It gets normalized

14. The loss function in a neural network helps to:

- a) Quantify the error in predictions
- b) Adjust the number of neurons
- c) Modify the input dimensions
- d) Increase the activation function's complexity

15. What does updating the biases in a neural network help with?
- a) Shifting the activation function
- b) Changing the number of layers
- c) Optimizing the input data
- d) Minimizing the error rate

16. In backpropagation, the gradients are used to:

- a) Update weights and biases
- b) Compute activation functions
- c) Analyze the input data
- d) Adjust the loss function

17. What is the advantage of using mini-batches during training?

- a) Reduces computation cost
- b) Increases training accuracy
- c) Speeds up gradient descent
- d) Improves model complexity
18. The goal of training a neural network is to:
- a) Minimize the loss function
- b) Maximize the number of layers
- c) Adjust the input data
- d) Change the activation function

19. Sigmoid backpropagation is typically used for:

- a) Binary classification problems
- b) Image recognition tasks
- c) Multi-class classification
- d) Regression models

20. During backpropagation, the gradient of the loss function is:

- a) Used to update weights and biases
- b) Propagated from output to input
- c) Used to initialize weights
- d) Used for forward propagation

21. Batch processing during backpropagation helps to:

- a) Improve the convergence of the model
- b) Reduce overfitting
- c) Avoid the vanishing gradient problem
- d) Increase model accuracy

22. Which of the following is crucial for adjusting the weights in a neural
network?
- a) Gradients of the loss function
- b) Input data dimensions
- c) Number of neurons in the output layer
- d) Learning rate

23. The learning rate controls:

- a) How much weights and biases are updated during training
- b) The number of layers in the neural network
- c) The size of input data
- d) The output of the activation function

24. What does batch size refer to in neural network training?

- a) The number of data points processed before weight updates
- b) The total number of neurons in the network
- c) The number of input features
- d) The size of the weight matrix
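
Q23 and Q24 reduce to one line of arithmetic: the gradient gives the direction, the learning rate scales the step, and one update fires per batch. A tiny sketch with made-up numbers; the largest rate overshoots past zero, which is the failure mode Q26 asks about:

```python
# One vanilla gradient descent update: new_w = w - lr * gradient
w, grad = 0.8, 2.5
for lr in (0.01, 0.1, 1.0):
    print(lr, w - lr * grad)  # -> 0.775, 0.55, -1.7 (the last step overshoots)
```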

25. Which of the following optimizers is known for adaptive learning rates?
- a) Adam
- b) Gradient Descent
- c) Genetic Algorithm
- d) Linear Regression
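
A sketch of a single Adam update for Q25, showing where the adaptive learning rate comes from: running averages of the gradient (`m`) and squared gradient (`v`) rescale each parameter's step. The constants follow commonly cited defaults; treat this as an illustration, not a reference implementation:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; m and v are running moment estimates, t counts steps."""
    m = b1 * m + (1 - b1) * grad     # first moment: smoothed gradient
    v = b2 * v + (1 - b2) * grad**2  # second moment: smoothed squared gradient
    m_hat = m / (1 - b1**t)          # bias correction for early steps
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step size
    return w, m, v

w, m, v = np.array([0.5, -0.3]), np.zeros(2), np.zeros(2)
w, m, v = adam_step(w, np.array([0.2, -0.1]), m, v, t=1)
print(w)  # each weight moved by roughly lr in its own gradient direction
```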

26. What happens when the learning rate is too high during training?
- a) The model may fail to converge
- b) The loss function decreases smoothly
- c) Training takes too long
- d) The gradients become too small

27. The purpose of training a neural network in batches is to:

- a) Speed up training and reduce memory usage
- b) Reduce the learning rate
- c) Simplify the loss function
- d) Change the activation function

28. Which component defines how much each neuron contributes to the output?
- a) Weights
- b) Biases
- c) Activation functions
- d) Input data

29. The output of a neural network in binary classification typically uses:

- a) Sigmoid activation function
- b) ReLU activation function
- c) Tanh activation function
- d) Softmax activation function

30. In neural networks, bias values are used to:

- a) Shift the activation function
- b) Determine the input layer size
- c) Optimize the learning rate
- d) Control gradient descent

31. Backpropagation uses the chain rule to:

- a) Compute gradients for weight updates
- b) Increase input data size
- c) Analyze the loss function
- d) Initialize neuron connections

32. What is a key feature of batch training in neural networks?

- a) Faster convergence
- b) Increased accuracy
- c) Fewer weight updates
- d) More complex loss functions

33. The purpose of backpropagation is to:

- a) Adjust the weights to minimize the error
- b) Propagate inputs forward through the network
- c) Compute the output layer values
- d) Initialize biases randomly

34. During neural network training, overfitting can be reduced by:

- a) Using regularization techniques
- b) Increasing the learning rate
- c) Decreasing batch size
- d) Simplifying the loss function

35. Which optimizer adjusts the learning rate during training based on past
gradient updates?
- a) Adam
- b) SGD (Stochastic Gradient Descent)
- c) Genetic Algorithm
- d) Momentum-based optimization

36. Neural networks are trained by minimizing:

- a) The loss function
- b) The number of neurons
- c) The learning rate
- d) The number of input features

37. Which term describes updating weights after processing all training examples?
- a) Full-batch gradient descent
- b) Stochastic gradient descent
- c) Mini-batch gradient descent
- d) Online learning

38. The process of propagating the error back through the network to update weights
is called:
- a) Backpropagation
- b) Feedforward propagation
- c) Weight initialization
- d) Sigmoid activation

39. What is the goal of using mini-batches in backpropagation?

- a) To speed up training and reduce overfitting
- b) To increase the learning rate
- c) To add more neurons
- d) To make the loss function more complex

40. Which loss function is commonly used for multi-class classification problems?
- a) Categorical cross-entropy
- b) Mean squared error
- c) Hinge loss
- d) Binary cross-entropy
---

Unit I: Introduction to Neural Networks (Answer Key)

1. b) Human brain
2. b) Binary classification
3. b) Binary
4. b) Linear function
5. c) Perceptron learning rule
6. b) Perceptron
7. b) Features
8. c) Biases
9. a) Sigmoid, c) Softmax, and d) Rectified Linear Unit (ReLU) (all are activation functions; only Gradient Descent is not)
10. b) Sigmoid
11. a) Multiple output neurons
12. c) Add non-linearity to the model
13. a) Linearly separable
14. c) Activation function
15. a) Reducing the number of neurons
16. d) Cost function
17. c) Linear
18. a) To shift the activation function
19. a) Minimize the error rate
20. b) Non-linear relationships
21. a) XOR problem
22. b) Backpropagation
23. c) Identity
24. b) Learning rate
25. b) Strength of the input signal
26. b) Shift activation threshold
27. b) Classification and regression
28. a) Perceptron
29. b) Hyperplane separation
30. b) Softmax
31. a) Can only classify linearly separable data
32. b) It becomes a simple linear model
33. b) Gradient descent
34. b) Outputs values between 0 and 1
35. b) Weights and biases
36. a) Control the size of weight updates
37. b) Data is linearly separable
38. b) Manually through experimentation
39. a) Define the importance of inputs
40. b) Softmax

---

Unit II: Applying Neural Networks (Answer Key)

1. a) Data transfer between layers
2. a) Convolutional Neural Network (CNN)
3. b) Defining input dimensions
4. c) Backpropagation
5. b) Feedforward
6. a) To propagate input data to the output layer
7. a) Speed up the feedforward process
8. a) Matrix multiplication
9. c) Feature extraction
10. a) Improves computation speed
11. a) Weights
12. a) By the number of neurons in each layer
13. b) Weight matrices
14. a) Input dimensions of the neural network
15. a) A process of passing inputs through the network
16. a) Vectorized implementation
17. c) Matrix operations
18. a) They can handle high-dimensional data like images
19. a) Faster computation
20. a) Weight matrix
21. b) Layers of neurons
22. a) Feedforward
23. a) Feedforward is the forward pass, backpropagation is the error correction
24. b) Weights and biases
25. a) Weights
26. a) Input layer size
27. a) Feedforward
28. a) Matrix operations
29. a) By adjusting the strength of the connections between neurons
30. a) Adjusting weight matrices
31. a) Image recognition
32. a) Speeds up computation
33. a) Weight matrix
34. a) Weight matrix dimensions
35. a) Speeding up training
36. a) Weights
37. a) To optimize matrix operations
38. a) They determine the strength of neuron connections
39. a) To propagate inputs through layers
40. a) It defines the number of parameters to learn
