ML Q Bank

Question bank

Uploaded by Ayub Shaik

© All Rights Reserved

1. Explain the candidate elimination algorithm with the following example.

Origin   Manufacturer   Color   Decade   Type      Example Type

Japan    Honda          Blue    1980     Economy   Positive
Japan    Toyota         Green   1970     Sports    Negative
Japan    Toyota         Blue    1990     Economy   Positive
USA      Chrysler       Red     1980     Economy   Negative
Japan    Honda          White   1980     Economy   Positive
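As a worked check for question 1, the version-space trace can be sketched in code. This is an illustrative Python sketch, not part of the original bank: it assumes conjunctive hypotheses over the five attributes (Origin, Manufacturer, Color, Decade, Type), with '?' matching any value; the helper names are mine.

```python
# Illustrative Candidate-Elimination sketch for question 1
# (assumed representation: conjunctive hypotheses, '?' matches any value).
data = [
    (("Japan", "Honda",    "Blue",  "1980", "Economy"), True),
    (("Japan", "Toyota",   "Green", "1970", "Sports"),  False),
    (("Japan", "Toyota",   "Blue",  "1990", "Economy"), True),
    (("USA",   "Chrysler", "Red",   "1980", "Economy"), False),
    (("Japan", "Honda",    "White", "1980", "Economy"), True),
]
n = 5  # number of attributes

def matches(h, x):
    return all(hv == "?" or hv == xv for hv, xv in zip(h, x))

def more_general_or_equal(h1, h2):
    # h1 covers everything h2 covers
    return all(a == "?" or a == b for a, b in zip(h1, h2))

S = None            # most specific boundary (a single hypothesis here)
G = [("?",) * n]    # most general boundary
for x, positive in data:
    if positive:
        if S is None:
            S = x   # seed S with the first positive example
        elif not matches(S, x):
            # minimally generalize S to cover the positive example
            S = tuple(sv if sv == xv else "?" for sv, xv in zip(S, x))
        # drop G members that fail to cover the positive example
        G = [g for g in G if matches(g, x)]
    else:
        # (assumes a positive example has been seen first, as here)
        specialized = []
        for g in G:
            if not matches(g, x):
                specialized.append(g)
                continue
            # minimally specialize g to exclude x while staying above S
            for i in range(n):
                if g[i] == "?" and S[i] != "?" and S[i] != x[i]:
                    specialized.append(g[:i] + (S[i],) + g[i + 1:])
        # prune members subsumed by a strictly more general one
        G = [g for g in specialized
             if not any(h != g and more_general_or_equal(h, g)
                        for h in specialized)]

print("S =", S)   # ('Japan', '?', '?', '?', 'Economy')
print("G =", G)   # [('Japan', '?', '?', '?', 'Economy')]
```

After the five examples, S and G converge to the single hypothesis (Japan, ?, ?, ?, Economy): Japanese economy cars.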
2. What is the procedure for building a Decision Tree using the ID3 algorithm with Gain and Entropy? Illustrate with an example.
3. What are the issues in Decision Tree learning? How are they overcome?
4. What are the important objectives of machine learning?
5. Discuss different important examples of machine learning.
6. What do you mean by Gain and Entropy? How are they used to build the Decision Tree in the ID3 algorithm? Illustrate using an example.
8. Explain the Find-S algorithm with the given example. Give its applications.
Example:

Origin   Manufacturer   Color   Decade   Type      Example Type
Japan    Honda          Blue    1980     Economy   Positive
Japan    Toyota         Green   1970     Sports    Negative
Japan    Toyota         Blue    1990     Economy   Positive
USA      Chrysler       Red     1980     Economy   Negative
Japan    Honda          White   1980     Economy   Positive
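For question 8, Find-S keeps a single most-specific hypothesis and generalizes it on each positive example, ignoring negatives. A minimal Python sketch over the same dataset (illustrative; not part of the bank):

```python
# Illustrative Find-S sketch for question 8 ('?' matches any value).
data = [
    (("Japan", "Honda",    "Blue",  "1980", "Economy"), True),
    (("Japan", "Toyota",   "Green", "1970", "Sports"),  False),
    (("Japan", "Toyota",   "Blue",  "1990", "Economy"), True),
    (("USA",   "Chrysler", "Red",   "1980", "Economy"), False),
    (("Japan", "Honda",    "White", "1980", "Economy"), True),
]

h = None
for x, positive in data:
    if not positive:
        continue            # Find-S ignores negative examples
    if h is None:
        h = list(x)         # start from the first positive example
    else:
        # generalize each attribute that disagrees with x to '?'
        h = [hv if hv == xv else "?" for hv, xv in zip(h, x)]

print(h)  # ['Japan', '?', '?', '?', 'Economy']
```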

9. Consider the following set of training examples:

Instance   Classification   a1   a2
1          +                T    T
2          +                T    T
3          -                T    F
4          +                F    F
5          -                F    T
6          -                F    T
- What is the entropy of this collection of training examples with respect to the target function Classification?
- What is the information gain of a2 relative to these training examples?
- What are the issues in decision tree learning? How are they overcome?
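The two computations in question 9 can be verified with a small Python check (illustrative; the helper names are mine). With 3 positive and 3 negative instances the entropy is 1.0, and a2 splits the set into subsets that are each half positive, so its gain is zero:

```python
from math import log2
from collections import Counter

def entropy(labels):
    # H(S) = -sum p_i * log2(p_i) over class proportions p_i
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

# (Classification, a1, a2) for instances 1..6
rows = [("+", "T", "T"), ("+", "T", "T"), ("-", "T", "F"),
        ("+", "F", "F"), ("-", "F", "T"), ("-", "F", "T")]
labels = [r[0] for r in rows]

def gain(attr):
    # information gain = entropy(S) - weighted entropy of the partition
    n = len(rows)
    remainder = sum(
        len([r for r in rows if r[attr] == v]) / n
        * entropy([r[0] for r in rows if r[attr] == v])
        for v in set(r[attr] for r in rows))
    return entropy(labels) - remainder

print(entropy(labels))      # 1.0 (3 positive, 3 negative)
print(round(gain(2), 4))    # Gain(a2) = 0.0
print(round(gain(1), 4))    # Gain(a1) = 0.0817
```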

10. How is Naïve Bayes algorithm useful for learning and classifying text?
11. What are Bayesian Belief nets? Where are they used? Can they solve all types of problems?
12. What is a probabilistic graphical model? What is the difference between Markov networks
and Bayesian networks?
13. What is learning? Write any four learning techniques and, in each case, give the expression for weight updating.
14. How does the candidate elimination algorithm work? What are its limitations?
15. What is the entropy, and what is its significance?
16. What is a decision tree? In the ID3 algorithm, what is the expected information gain, and
how is it used? What is the gain ratio, and what is the advantage of using the gain ratio over
using the expected information gain? Describe a strategy that can be used to avoid
overfitting in decision trees.

17. How can a decision tree be converted into a rule set? Illustrate with an example. What are
the advantages of the rule set representation over the decision tree representation?
18. Describe the Naive Bayesian method of classification. What assumptions does this method
make about the attributes and the classification? Give an example where this assumption is
not justified. What is the Laplacian correction, and why is it necessary?

19. Discuss Supervised and Unsupervised Learning and give two examples.
20. What are the advantages and disadvantages of decision trees?
21. Explain how the backpropagation algorithm works for a multilayer feed-forward network.
22. Explain the perceptron and the Delta training rule.
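For question 22, the Delta (Widrow-Hoff) rule updates a linear unit's weight by eta*(t - o)*x after each example, where o = w*x is the unit's output. A minimal sketch (illustrative; the toy data t = 2x and the learning rate are assumptions of this note):

```python
# Delta-rule sketch: one linear unit learning t = 2x (assumed toy data).
eta = 0.1                            # assumed learning rate
w = 0.0
samples = [(1.0, 2.0), (2.0, 4.0)]   # (input x, target t)
for _ in range(100):                 # epochs
    for x, t in samples:
        o = w * x                    # linear output, no threshold
        w += eta * (t - o) * x       # Delta rule: w <- w + eta*(t - o)*x
print(w)  # converges to ~2.0
```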
23. What are the steps in the backpropagation algorithm? Why is a multilayer neural network required?
24. What are the steps in the reproduction cycle? Which types of applications are suitable for using GA?
25. Describe in brief (any two)
1. Lazy and eager learning
2. Genetic programming and parallelizing GA
26. What is a linearly inseparable problem? Design a two-layer network of perceptrons to implement A OR B.
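For question 26, note that OR is linearly separable, so a single threshold unit already computes it; a hidden layer is only forced by inseparable functions such as XOR. A sketch with assumed weights w1 = w2 = 1 and bias -0.5 (illustrative, not part of the bank):

```python
# Threshold unit: fires when w . x + bias > 0 (assumed weights for OR).
def perceptron(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron((a, b), (1, 1), -0.5))  # equals a OR b
```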
27. Consider a multilayer feed-forward neural network. Enumerate and explain the steps in the backpropagation algorithm used to train the network.
28. Explain Bayesian belief network and conditional independence with example.
29. Explain salient features of a Genetic Algorithm. Describe basic genetic algorithm using all
the necessary steps of fitness function evaluation.
30. What methods for dimensionality reduction do you know and how do they compare with
each other?
31. What are some good ways for performing feature selection that do not involve exhaustive
search?
32. What are the advantages and disadvantages of neural networks?
33. Explain Principal Component Analysis (PCA).
34. What is the maximal margin classifier? How can this margin be achieved, and why is it beneficial?
35. How do we train an SVM? What about hard-margin and soft-margin SVMs?
36. What is a kernel? Explain the kernel trick.
37. Which kernels do you know? How do you choose a kernel?
38. What is an Artificial Neural Network?
39. How do you train an ANN? What is backpropagation?
40. How does a neural network with three layers (one input layer, one hidden layer, and one output layer) compare to logistic regression?
41. What is Principal Component Analysis (PCA)? Under what conditions is PCA effective?
How is it related to eigenvalue decomposition (EVD)?
42. What are the differences between Factor Analysis and Principal Component Analysis?
43. How will you use SVD to perform PCA? When is SVD better than EVD for PCA?
44. Why do we need to center data for PCA and what can happen if we don’t do it?
45. Do we need to normalize data for PCA? Why?
46. Is PCA a linear model or not? Why?
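Tying questions 41-46 together, the PCA pipeline (center, form the covariance matrix, take the leading eigenvector) can be sketched in pure Python on a toy 2-D dataset; the data and helper names are assumptions of this note. The points lie on the line y = x, so the first principal component should be (1/sqrt(2), 1/sqrt(2)); skipping the centering step (question 44) would instead make the first direction point toward the mean.

```python
from math import sqrt, hypot

pts = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]   # assumed toy data on y = x

# 1) center the data: PCA measures variance around the mean
mx = sum(x for x, _ in pts) / len(pts)
my = sum(y for _, y in pts) / len(pts)
centered = [(x - mx, y - my) for x, y in pts]

# 2) 2x2 sample covariance matrix [[cxx, cxy], [cxy, cyy]]
m = len(pts) - 1
cxx = sum(x * x for x, _ in centered) / m
cyy = sum(y * y for _, y in centered) / m
cxy = sum(x * y for x, y in centered) / m

# 3) leading eigenvalue/eigenvector of a symmetric 2x2 matrix, closed form
lam = (cxx + cyy) / 2 + sqrt(((cxx - cyy) / 2) ** 2 + cxy ** 2)
vx, vy = lam - cyy, cxy        # unnormalized eigenvector (non-degenerate case)
norm = hypot(vx, vy)
pc1 = (vx / norm, vy / norm)
print(pc1)  # ≈ (0.707, 0.707)
```

In practice the eigendecomposition is replaced by an SVD of the centered data matrix (question 43), which is numerically more stable since the covariance matrix is never formed explicitly.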

47. Why do you need to use cluster analysis?

48. Give examples of some cluster analysis methods.

49. Differentiate between partitioning method and hierarchical methods.


50. Explain K-Means and its objective.
51. How do you select K for K-Means?
52. How would you assess the quality of clustering?

53. What is regression analysis?

54. What is linear regression? Why is it called linear?

55. What is a Perceptron?

56. How do Multi-Layer Perceptrons work?

57. What is the basic idea of a Support Vector Machine?

58. What is the Kernel trick? (This relates to search spaces.)

59. Discuss various Artificial Neural Network Architectures.

60. Explain Support Vector Classification in detail.

61. Explain the backpropagation algorithm and derive expressions for the weight-update relations.

62. How do K-Means and K-Medoids/PAM form clusters? What is the main difference between K-Means and K-Medoids with respect to constructing clusters?

63. Assume the following dataset is given: (2,2), (4,4), (5,5), (6,6), (7,7), (9,9), (0,6), (6,0). K-Means is used with k=3 to cluster the dataset, and Manhattan distance, d((x1,y1),(x2,y2)) = |x1-x2| + |y1-y2|, is used as the distance function to compute distances between centroids and objects in the dataset. K-Means' initial clusters C1, C2, and C3 are as follows:

C1: {(2,2), (4,4), (6,6)}

C2: {(0,6), (6,0)}

C3: {(5,5), (7,7), (9,9)}

Now K-Means is run for a single iteration; what are the new clusters and what are their centroids?
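One iteration for question 63 can be checked with a short Python sketch (illustrative; one assumption of this note is the tie-breaking rule — (0,6) and (6,0) are equally far from the C1 and C2 centroids under Manhattan distance, and this sketch breaks such ties toward the lower-numbered centroid):

```python
points = [(2, 2), (4, 4), (5, 5), (6, 6), (7, 7), (9, 9), (0, 6), (6, 0)]
clusters = [[(2, 2), (4, 4), (6, 6)],      # C1
            [(0, 6), (6, 0)],              # C2
            [(5, 5), (7, 7), (9, 9)]]      # C3

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def centroid(cluster):
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

# current centroids: (4,4), (3,3), (7,7)
centroids = [centroid(c) for c in clusters]

# one iteration: reassign every point to its nearest centroid
new_clusters = [[] for _ in centroids]
for p in points:
    nearest = min(range(len(centroids)),
                  key=lambda j: manhattan(p, centroids[j]))  # ties -> lowest j
    new_clusters[nearest].append(p)

print(new_clusters)                         # C2 keeps only (2,2)
print([centroid(c) for c in new_clusters])  # (3.75,3.75), (2,2), (22/3,22/3)
```

Under this tie-breaking the new clusters are C1 = {(4,4), (5,5), (0,6), (6,0)}, C2 = {(2,2)}, C3 = {(6,6), (7,7), (9,9)}, with centroids (3.75, 3.75), (2, 2), and (22/3, 22/3); a different tie rule would move the tied points to C2 instead.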
