
Unit-3 Part-1

Decision Tree Learning


• Decision Tree Learning Algorithm
• Inductive bias
• Inductive Inference with decision trees
• Entropy & Information Theory
• Information Gain
• ID-3 Algorithm
• Issues in Decision Tree Learning
Decision Tree
• Decision Tree is a supervised learning technique that can be used for both classification and regression problems, but it is mostly preferred for solving classification problems.
• It is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.
• In a decision tree, there are two types of nodes: the decision node and the leaf node.
• Decision nodes are used to make a decision and have multiple branches.
• Leaf nodes are the outputs of those decisions and do not contain any further branches.
• The decisions or tests are performed on the basis of the features of the given dataset.
Decision Tree
• It is a graphical representation for getting all the possible solutions
to a problem/decision based on given conditions.
• It is called a decision tree because, similar to a tree, it starts with the
root node, which expands on further branches and constructs a tree-
like structure.
• In order to build a tree, we use the CART algorithm, which stands
for Classification and Regression Tree algorithm.
• A decision tree can contain categorical data (YES/NO) as well
as numeric data.
Decision Tree Terminologies
• Root Node: The root node is where the decision tree starts. It represents the entire dataset, which further gets divided into two or more homogeneous sets.
• Leaf Node: Leaf nodes are the final output nodes; the tree cannot be split further once a leaf node is reached.
• Splitting: Splitting is the process of dividing the decision node/root node into sub-nodes according to the given conditions.
• Branch/Sub-Tree: A subtree formed by splitting the tree.
• Pruning: Pruning is the process of removing unwanted branches from the tree.
• Parent/Child Node: A node that splits into sub-nodes is called the parent node, and its sub-nodes are called child nodes.
How does the Decision Tree algorithm work?
• Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
• Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
• Step-3: Divide S into subsets that contain the possible values of the best attribute.
• Step-4: Generate the decision tree node that contains the best attribute.
• Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; such final nodes are the leaf nodes.
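As a rough illustration of this workflow, here is a minimal Python sketch (a sketch only, with a made-up toy dataset and feature names) that fits a decision tree using scikit-learn's DecisionTreeClassifier and prints the learned rules:

from sklearn.tree import DecisionTreeClassifier, export_text

# Toy dataset (illustrative values): each row is [Outlook, Humidity] encoded
# as integers, and the target is 1 (play) or 0 (don't play).
X = [[0, 1], [0, 0], [1, 1], [2, 0], [2, 1], [1, 0]]
y = [0, 1, 1, 1, 0, 1]

# criterion="entropy" uses information gain as the attribute selection measure;
# criterion="gini" (the default) uses the Gini index instead.
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X, y)

# Show the learned decision rules and classify an unseen example.
print(export_text(clf, feature_names=["Outlook", "Humidity"]))
print(clf.predict([[0, 0]]))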
Attribute Selection Measures
• While implementing a decision tree, the main issue that arises is how to select the best attribute for the root node and for the sub-nodes. To solve this problem, there is a technique called the Attribute Selection Measure, or ASM. Using this measure, we can easily select the best attribute for the nodes of the tree. There are two popular ASM techniques:
• Information Gain
• Gini Index
Information Gain:
• Information gain measures the change in entropy after the segmentation of a dataset based on an attribute.
• It calculates how much information a feature provides us about a
class.
• According to the value of information gain, we split the node and
build the decision tree.
• A decision tree algorithm always tries to maximize the value of
information gain, and a node/attribute having the highest information
gain is split first. It can be calculated using the below formula:

Information Gain = Entropy(S) − [(Weighted Avg.) × Entropy(each feature)]


Entropy:
• Entropy is a metric to measure the impurity in a given attribute. It
specifies randomness in data. Entropy can be calculated as:

• Entropy(S) = −P(yes)·log₂ P(yes) − P(no)·log₂ P(no)
• where:
• S = the set of samples
• P(yes) = probability of yes in S
• P(no) = probability of no in S
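For example (with illustrative numbers), if S contains 9 'yes' samples and 5 'no' samples, then Entropy(S) = −(9/14)·log₂(9/14) − (5/14)·log₂(5/14) ≈ 0.940, i.e., the set is close to maximally impure.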
Gini Index:
• Gini index is a measure of impurity or purity used while creating a
decision tree in the CART(Classification and Regression Tree)
algorithm.
• An attribute with a low Gini index should be preferred over one with a high Gini index.
• It only creates binary splits, and the CART algorithm uses the Gini
index to create binary splits.
• The Gini index can be calculated using the formula: Gini(S) = 1 − ∑ (pᵢ)², where pᵢ is the probability of class i in S.
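As a small illustration, a minimal Python sketch of this computation (the function name is just for the example) could be:

from collections import Counter

def gini_index(labels):
    # Gini impurity of a list of class labels: 1 - sum(p_i^2).
    total = len(labels)
    return 1.0 - sum((count / total) ** 2 for count in Counter(labels).values())

# A pure node has Gini 0; a 50/50 binary node has Gini 0.5.
print(gini_index(["yes", "yes", "yes"]))       # 0.0
print(gini_index(["yes", "no", "yes", "no"]))  # 0.5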
ID3
• ID3 stands for Iterative Dichotomiser 3 and is named such because the algorithm iteratively (repeatedly) dichotomizes (divides) features into two or more groups at each step.
• ID3 uses a top-down greedy approach to build a decision tree. In
simple words, the top-down approach means that we start building
the tree from the top and the greedy approach means that at each
iteration we select the best feature at the present moment to create a
node.
How does ID3 select the best feature?
• ID3 uses Information Gain or just Gain to find the best feature.
• Information Gain calculates the reduction in the entropy and
measures how well a given feature separates or classifies the target
classes. The feature with the highest Information Gain is selected as
the best one.
• Entropy is a measure of disorder, uncertainty, impurity, or information content, and the entropy of a dataset is the measure of disorder in the target feature of the dataset.
• In the case of binary classification (where the target column has only two classes), entropy is 0 if all values in the target column are homogeneous (similar) and 1 if the target column has an equal number of values for both classes.
Entropy
• If we denote our dataset as S, its entropy is calculated as:
• Entropy(S) = − ∑ pᵢ · log₂(pᵢ) ; i = 1 to n
• where n is the total number of classes in the target column (in our case n = 2, i.e., YES and NO), and pᵢ is the probability of class i, i.e., the ratio of the number of rows with class i in the target column to the total number of rows in the dataset.
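A minimal Python sketch of this entropy computation (for illustration only) might be:

import math
from collections import Counter

def entropy(labels):
    # Entropy of a target column: -sum(p_i * log2(p_i)) over the classes present.
    total = len(labels)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(labels).values())

# A homogeneous column has entropy 0; a 50/50 binary column has entropy 1.
print(entropy(["yes"] * 4))                 # 0.0
print(entropy(["yes", "no", "yes", "no"]))  # 1.0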
Information Gain
• Information Gain for a feature column A is calculated as:
• IG(S, A) = Entropy(S) - ∑((|Sᵥ| / |S|) * Entropy(Sᵥ))
• where Sᵥ is the set of rows in S for which the feature column A has
value v, |Sᵥ| is the number of rows in Sᵥ and likewise |S| is the
number of rows in S.
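A corresponding Python sketch (the helper names and the toy data are assumptions for the example) could be:

import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(rows, labels, attribute_index):
    # IG(S, A) = Entropy(S) - sum over values v of A of (|Sv| / |S|) * Entropy(Sv).
    total = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute_index], []).append(label)
    weighted = sum(len(subset) / total * entropy(subset) for subset in subsets.values())
    return entropy(labels) - weighted

# Toy rows are (Outlook, Windy); labels are the target class (made-up data).
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0))  # Outlook separates perfectly: 1.0
print(information_gain(rows, labels, 1))  # Windy carries no information: 0.0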
ID3 Steps
1. Calculate the Information Gain of each feature.
2. Considering that all rows don't belong to the same class, split the dataset S into subsets using the feature for which the Information Gain is maximum.
3. Make a decision tree node using the feature with the maximum Information Gain.
4. If all rows belong to the same class, make the current node a leaf node with the class as its label.
5. Repeat for the remaining features until we run out of features, or the decision tree has all leaf nodes.
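Putting the steps together, a compact recursive ID3 sketch in Python (the toy data and function names are assumptions, not production code) might look like this:

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    n = len(labels)
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr], []).append(label)
    return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())

def id3(rows, labels, attributes):
    # Returns a nested dict tree: {attribute_index: {value: subtree_or_class_label}}.
    if len(set(labels)) == 1:            # Step 4: a pure node becomes a leaf.
        return labels[0]
    if not attributes:                   # No features left: use the majority class.
        return Counter(labels).most_common(1)[0][0]
    # Steps 1-3: pick the attribute with maximum information gain and split on it.
    best = max(attributes, key=lambda a: information_gain(rows, labels, a))
    remaining = [a for a in attributes if a != best]
    tree = {best: {}}
    for value in set(row[best] for row in rows):
        subset = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = zip(*subset)
        tree[best][value] = id3(list(sub_rows), list(sub_labels), remaining)
    return tree

# Toy example: attribute 0 = Outlook, attribute 1 = Windy (made-up data).
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["no", "no", "yes", "yes"]
print(id3(rows, labels, [0, 1]))   # e.g. {0: {'sunny': 'no', 'rain': 'yes'}}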
Computing Information Gain for Continuous-Valued Attributes
• Continuous attributes can be represented as floating-point variables, for example, the temperature, width, height, or weight of a body.
• Step 1: Sort the data in ascending order.
• Step 2: Find the midpoint between each pair of adjacent values where the target class changes and calculate the information gain for each such candidate threshold (i.e., only midpoints between values with different class labels are considered as split candidates).
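A small Python sketch of Steps 1-2 (the data values are made up for illustration):

def candidate_thresholds(values, labels):
    # Midpoints between adjacent sorted values where the target class changes;
    # these are the only split points that need to be evaluated for information gain.
    pairs = sorted(zip(values, labels))            # Step 1: sort in ascending order
    thresholds = []
    for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
        if c1 != c2 and v1 != v2:                  # Step 2: class label changes here
            thresholds.append((v1 + v2) / 2)
    return thresholds

# Toy temperatures with a yes/no target.
temps = [40, 48, 60, 72, 80, 90]
target = ["no", "no", "yes", "yes", "yes", "no"]
print(candidate_thresholds(temps, target))   # [54.0, 85.0]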
Pruning: Getting an Optimal Decision tree

• Pruning is the process of deleting unnecessary nodes from a tree in order to get the optimal decision tree.
• A tree that is too large increases the risk of overfitting, while a small tree may not capture all the important features of the dataset. A technique that decreases the size of the learned tree without reducing accuracy is therefore known as pruning. There are mainly two types of tree pruning techniques used:
• Cost Complexity Pruning (Rule Post Pruning)
• Reduced Error Pruning (Node Pruning)
How to avoid Overfitting?
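One common way to avoid overfitting is the cost-complexity pruning mentioned above. A minimal scikit-learn sketch (the dataset choice and the particular alpha are illustrative assumptions) might look like this:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grow a full, unpruned tree (it tends to overfit the training set).
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Candidate alpha values along the cost-complexity pruning path.
path = full_tree.cost_complexity_pruning_path(X_train, y_train)

# Refit with a moderate alpha; larger alphas prune more aggressively.
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]
pruned_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)

print("full tree leaves:  ", full_tree.get_n_leaves(), " test acc:", full_tree.score(X_test, y_test))
print("pruned tree leaves:", pruned_tree.get_n_leaves(), " test acc:", pruned_tree.score(X_test, y_test))

In practice, the alpha value would be chosen by cross-validation rather than picked from the middle of the path as done here.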
Inductive Bias
• The inductive bias (also known as learning bias) of a learning
algorithm is the set of assumptions that the learner uses to predict
outputs of given inputs that it has not encountered. In machine
learning, one aims to construct algorithms that are able to learn to
predict a certain target output.
• In the case of decision trees, the depth of the tree acts as the inductive bias. If the depth of the tree is too low, then there is too much generalization in the model.
• The bias of a model is a measure of how close our prediction is, on average, to the actual value for an average model. Note that bias is not a measure of a single model; it encapsulates the scenario in which we collect many datasets, create a model for each dataset, and average the error over all of the models.
Inductive Inference
• Decision tree learning is a method that uses inductive inference to
approximate a target function, which will produce discrete values.
• It is widely used, robust to noisy data, and considered a practical
method for learning disjunctive expressions.
• Inductive reasoning takes you from the specific to the general.
• Drawing results/conclusions based on sample evidence is called inductive inference.
Advantages of the Decision Tree

• It is simple to understand, as it follows the same process that a human follows while making a decision in real life.
• It can be very useful for solving decision-related problems.
• It helps to think about all the possible outcomes of a problem.
• It requires less data cleaning compared to other algorithms.
Disadvantages of the Decision Tree

• The decision tree contains lots of layers, which makes it complex.
• It may have an overfitting issue, which can be resolved using the Random Forest algorithm.
• For more class labels, the computational complexity of the decision tree may increase.
