Classification: Basic Concepts and Decision Trees
Classification: Definition

- Given a collection of records (training set): each record contains a set of attributes, and one of the attributes is the class.
- Find a model for the class attribute as a function of the values of the other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
- A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it. (A minimal sketch of this workflow follows.)
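For illustration, a minimal sketch of the split-then-validate workflow, assuming scikit-learn is available (the toy records and labels below are invented for the example):

```python
# A minimal sketch of train/test splitting and model validation.
# Assumes scikit-learn; the toy records and labels are invented.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = [[1, 125], [0, 100], [0, 70], [1, 120], [0, 95],
     [0, 60], [1, 220], [0, 85], [0, 75], [0, 90]]  # attribute values
y = [0, 0, 0, 0, 1, 0, 0, 1, 0, 1]                  # class attribute

# Divide the given data set into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # build the model
print(accuracy_score(y_test, model.predict(X_test)))    # validate it
```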
Illustrating Classification Task

(Figure: a learning algorithm induces a model from the training set; the model is then applied to the test set.)
Examples of Classification Task

- Predicting tumor cells as benign or malignant
- Classifying credit card transactions as legitimate or fraudulent
- Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
- Categorizing news stories as finance, weather, entertainment, sports, etc.
Classification Using Distance

- Place items in the class to which they are closest.
- Must determine the distance between an item and a class.
- Classes represented by:
  - Centroid: central value
  - Medoid: representative point
  - Individual points
Algorithm: K Nearest Neighbors (KNN)

- Training set includes classes.
- Examine the K items nearest to the item to be classified.
- The new item is placed in the class with the largest number of close items.
- O(q) for each tuple to be classified. (Here q is the size of the training set; a sketch follows.)
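A minimal KNN sketch in plain Python (the helper below is illustrative, not from any library):

```python
# A minimal KNN sketch: one pass over the q training items per query.
from collections import Counter

def knn_classify(train, labels, query, k=3):
    # Squared Euclidean distance from the query to every training item: O(q).
    nearest = sorted(range(len(train)),
                     key=lambda i: sum((a - b) ** 2
                                       for a, b in zip(train[i], query)))
    # Place the new item in the class with the most of the K closest items.
    return Counter(labels[i] for i in nearest[:k]).most_common(1)[0][0]

train = [(1.0, 1.0), (1.2, 0.8), (4.0, 4.2), (3.8, 4.0)]
labels = ["A", "A", "B", "B"]
print(knn_classify(train, labels, (1.1, 0.9)))  # -> "A"
```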
Classification Techniques
Decision Tree based Methods
Rule-based Methods
Memory based reasoning
Neural Networks
Naïve Bayes and Bayesian Belief Networks
Support Vector Machines
Example of a Decision Tree

Training Data (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)

Refund = Yes: NO
Refund = No:
  MarSt = Single, Divorced:
    TaxInc < 80K: NO
    TaxInc > 80K: YES
  MarSt = Married: NO
Another Example of Decision Tree

The same training data also fits this tree:

MarSt = Married: NO
MarSt = Single, Divorced:
  Refund = Yes: NO
  Refund = No:
    TaxInc < 80K: NO
    TaxInc > 80K: YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task

(Figure: induction, where a tree-induction algorithm learns a decision tree from the training set; deduction, where the tree is applied to test records.)
Apply Model to Test Data

Test Data:

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree:

Refund = Yes: NO
Refund = No:
  MarSt = Single, Divorced:
    TaxInc < 80K: NO
    TaxInc > 80K: YES
  MarSt = Married: NO

- Refund = No, so follow the "No" branch to MarSt.
- MarSt = Married, so follow the "Married" branch to a leaf.
- The leaf predicts NO: assign Cheat to "No".
Decision Tree Induction

Many algorithms:
- Hunt's Algorithm (one of the earliest)
- CART
- ID3, C4.5
- SLIQ, SPRINT
General Structure of Hunt's Algorithm

Let D_t be the set of training records that reach a node t.

General procedure:
- If D_t contains records that all belong to the same class y_t, then t is a leaf node labeled as y_t.
- If D_t is an empty set, then t is a leaf node labeled by the default class, y_d.
- If D_t contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each subset (see the sketch below).
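A minimal sketch of this recursion in Python; find_best_test and its partition method are hypothetical helpers standing in for the attribute-test selection discussed in the following slides:

```python
# A sketch of Hunt's recursive procedure (not any library's exact API).
from collections import Counter

def hunt(records, labels, default_class):
    if not records:                    # D_t is empty: leaf with default y_d
        return {"leaf": default_class}
    if len(set(labels)) == 1:          # all records in one class y_t: leaf
        return {"leaf": labels[0]}
    majority = Counter(labels).most_common(1)[0][0]
    test = find_best_test(records, labels)   # hypothetical test selection
    if test is None:                   # nothing left to split on
        return {"leaf": majority}
    return {"test": test,
            "children": {outcome: hunt(subset, sublabels, majority)
                         for outcome, (subset, sublabels)
                         in test.partition(records, labels).items()}}
```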
Hunt's Algorithm (applied to the training data above)

- Step 1: A single node with the majority label: Don't Cheat.
- Step 2: Split on Refund. Refund = Yes: Don't Cheat. Refund = No: Don't Cheat (still mixed).
- Step 3: Under Refund = No, split on Marital Status. Married: Don't Cheat. Single, Divorced: still mixed.
- Step 4: Under Single, Divorced, split on Taxable Income. < 80K: Don't Cheat. >= 80K: Cheat.
Tree Induction

Greedy strategy:
- Split the records based on an attribute test that optimizes a certain criterion.

Issues:
- Determine how to split the records
  - How to specify the attribute test condition?
  - How to determine the best split?
- Determine when to stop splitting
How to Specify Test Condition?

Depends on attribute types:
- Nominal
- Ordinal
- Continuous

Depends on the number of ways to split:
- 2-way split
- Multi-way split
Splitting Based on Nominal Attributes

Multi-way split: use as many partitions as there are distinct values.

  CarType: Family / Sports / Luxury

Binary split: divides values into two subsets; need to find the optimal partitioning.

  CarType: {Sports, Luxury} / {Family}   OR   CarType: {Family, Luxury} / {Sports}
Splitting Based on Ordinal Attributes

Multi-way split: use as many partitions as there are distinct values.

  Size: Small / Medium / Large

Binary split: divides values into two subsets; need to find the optimal partitioning.

  Size: {Small, Medium} / {Large}   OR   Size: {Medium, Large} / {Small}

What about this split?

  Size: {Small, Large} / {Medium}

(It groups non-adjacent values and so violates the order among the values; such splits are usually not allowed for ordinal attributes.)
Splitting Based on Continuous Attributes

Different ways of handling:
- Discretization to form an ordinal categorical attribute:
  - Static: discretize once at the beginning.
  - Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering. (A small sketch of percentile bucketing follows this list.)
- Binary decision: (A < v) or (A ≥ v):
  - Consider all possible splits and find the best cut.
  - Can be more compute intensive.
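For instance, a minimal sketch of static equal-frequency bucketing with NumPy (the income values are those from the training data above):

```python
# Static discretization: equal-frequency (percentile) buckets, done once.
import numpy as np

income = np.array([125, 100, 70, 120, 95, 60, 220, 85, 75, 90])  # in K

cuts = np.percentile(income, [25, 50, 75])  # bucket boundaries
buckets = np.digitize(income, cuts)         # ordinal bucket index 0..3
print(cuts, buckets)
```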
Splitting Based on Continuous Attributes

(Figure: a binary split, e.g. Taxable Income > 80K?, versus a multi-way split into income ranges.)
Tree Induction

Greedy strategy:
- Split the records based on an attribute test that optimizes a certain criterion.

Issues:
- Determine how to split the records
  - How to specify the attribute test condition?
  - How to determine the best split?
- Determine when to stop splitting
How to determine the Best Split

Before splitting: 10 records of class 0, 10 records of class 1.

(Figure: several candidate test conditions on different attributes.)

Which test condition is the best?
How to determine the Best Split

Greedy approach: nodes with a homogeneous class distribution are preferred.

Need a measure of node impurity:
- Non-homogeneous: high degree of impurity
- Homogeneous: low degree of impurity
Measures of Node Impurity
Gini Index
Entropy
Misclassification error
How to Find the Best Split

- Before splitting, compute the impurity of the parent node: M0.
- Splitting on A? produces nodes N1 (Yes) and N2 (No) with impurities M1 and M2; M12 is their weighted average.
- Splitting on B? produces nodes N3 (Yes) and N4 (No) with impurities M3 and M4; M34 is their weighted average.
- Compare Gain = M0 - M12 vs. M0 - M34, and choose the attribute test with the larger gain.
Measure of Impurity: GINI

Gini index for a given node t:

  GINI(t) = 1 - Σ_j [p(j|t)]^2

(NOTE: p(j|t) is the relative frequency of class j at node t.)

- Maximum (1 - 1/n_c, for n_c classes) when records are equally distributed among all classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples (two classes, six records):

  C1 = 0, C2 = 6: Gini = 0.000
  C1 = 1, C2 = 5: Gini = 0.278
  C1 = 2, C2 = 4: Gini = 0.444
  C1 = 3, C2 = 3: Gini = 0.500
Examples for computing GINI

  GINI(t) = 1 - Σ_j [p(j|t)]^2

C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0

C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6
  Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278

C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6
  Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444
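A minimal sketch of this computation in plain Python, reproducing the three values above:

```python
# Gini index of a node from its per-class record counts.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

print(gini([0, 6]))  # 0.0
print(gini([1, 5]))  # ~0.278
print(gini([2, 4]))  # ~0.444
```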
Splitting Based on GINI

- Used in CART, SLIQ, SPRINT.
- When a node p is split into k partitions (children), the quality of the split is computed as

    GINI_split = Σ_{i=1..k} (n_i / n) GINI(i)

  where n_i = number of records at child i, and n = number of records at node p.
Binary Attributes: Computing GINI Index

- Splits into two partitions.
- Effect of weighing partitions: larger and purer partitions are sought.

Parent: C1 = 6, C2 = 6, Gini = 0.500

Split on B? into Node N1 (Yes) and Node N2 (No):

       N1   N2
  C1   5    1
  C2   2    4

Gini(N1) = 1 - (5/7)^2 - (2/7)^2 = 0.408
Gini(N2) = 1 - (1/5)^2 - (4/5)^2 = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
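A short plain-Python sketch of the weighted computation; it reproduces the 0.371 above:

```python
# Weighted Gini of a split from per-child class counts.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def gini_split(children):
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

print(gini_split([[5, 2], [1, 4]]))  # ~0.371, vs. parent Gini of 0.500
```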
Categorical Attributes: Computing Gini Index

- For each distinct value, gather counts for each class in the dataset.
- Use the count matrix to make decisions.

Multi-way split:

         Family  Sports  Luxury
  C1     1       2       1
  C2     4       1       1
  Gini = 0.393

Two-way split (find the best partition of values):

         {Sports, Luxury}  {Family}
  C1     3                 1
  C2     2                 4
  Gini = 0.400

         {Family, Luxury}  {Sports}
  C1     2                 2
  C2     1                 5
  Gini = 0.419
Continuous Attributes: Computing Gini Index

- Use binary decisions based on one value v.
- Several choices for the splitting value: the number of possible splitting values = the number of distinct values.
- Each splitting value has a count matrix associated with it: class counts in each of the partitions, A < v and A ≥ v.
- Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. Computationally inefficient! Repetition of work.
Continuous Attributes: Computing Gini Index...

For efficient computation, for each attribute:
- Sort the attribute on values.
- Linearly scan these values, each time updating the count matrix and computing the Gini index.
- Choose the split position that has the least Gini index.

Sorted values (Taxable Income) and class labels:

  Cheat:   No   No   No   Yes  Yes  Yes  No   No   No   No
  Income:  60   70   75   85   90   95   100  120  125  220

Candidate split positions and their Gini index:

  v:     55     65     72     80     87     92     97     110    122    172    230
  Gini:  0.420  0.400  0.375  0.343  0.417  0.400  0.300  0.343  0.375  0.400  0.420

The best split is v = 97, with Gini = 0.300.
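A minimal sketch of the single-scan computation for the data above, in plain Python (candidate cuts taken at exact midpoints):

```python
# Linear scan over sorted values, updating class counts incrementally.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

income = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
cheat  = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]

n = len(income)
total = [cheat.count("Yes"), cheat.count("No")]
left = [0, 0]
best_v, best_gini = None, 1.0
for i in range(1, n):
    left[0 if cheat[i - 1] == "Yes" else 1] += 1   # move one record left
    right = [total[0] - left[0], total[1] - left[1]]
    v = (income[i - 1] + income[i]) / 2            # candidate split value
    g = i / n * gini(left) + (n - i) / n * gini(right)
    if g < best_gini:
        best_v, best_gini = v, g
print(best_v, best_gini)  # 97.5 0.3
```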
Alternative Splitting Criteria based on INFO

Entropy at a given node t:

  Entropy(t) = -Σ_j p(j|t) log p(j|t)

(NOTE: p(j|t) is the relative frequency of class j at node t.)

- Measures the homogeneity of a node.
- Maximum (log n_c) when records are equally distributed among all classes, implying the least information.
- Minimum (0.0) when all records belong to one class, implying the most information.
- Entropy-based computations are similar to the GINI index computations.
Examples for computing Entropy

  Entropy(t) = -Σ_j p(j|t) log2 p(j|t)

C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Entropy = -0 log2 0 - 1 log2 1 = -0 - 0 = 0

C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6
  Entropy = -(1/6) log2 (1/6) - (5/6) log2 (5/6) = 0.65

C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6
  Entropy = -(2/6) log2 (2/6) - (4/6) log2 (4/6) = 0.92
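A minimal sketch in Python, reproducing the three entropy values above:

```python
# Entropy of a node (base-2 logs) from per-class record counts.
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

print(entropy([0, 6]))  # 0.0 (0 log 0 is taken as 0; may print as -0.0)
print(entropy([1, 5]))  # ~0.65
print(entropy([2, 4]))  # ~0.92
```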
Splitting Based on INFO...

Information Gain:

  GAIN_split = Entropy(p) - Σ_{i=1..k} (n_i / n) Entropy(i)

where the parent node p is split into k partitions and n_i is the number of records in partition i.

- Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
- Used in ID3 and C4.5.
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
Splitting Based on INFO...

Gain Ratio:

  GainRATIO_split = GAIN_split / SplitINFO
  SplitINFO = -Σ_{i=1..k} (n_i / n) log (n_i / n)

where the parent node p is split into k partitions and n_i is the number of records in partition i.

- Adjusts information gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
- Used in C4.5.
- Designed to overcome the disadvantage of information gain. (A sketch follows.)
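A minimal sketch combining both criteria; the parent/children counts reuse the binary-split example from earlier:

```python
# Gain ratio = information gain / SplitINFO (entropy of partition sizes).
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def gain_ratio(parent, children):
    n = sum(parent)
    gain = entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)
    split_info = entropy([sum(c) for c in children])  # penalizes many small parts
    return gain / split_info if split_info else 0.0

print(gain_ratio([6, 6], [[5, 2], [1, 4]]))  # ~0.20
```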
Splitting Criteria based on Classification Error

Classification error at a node t:

  Error(t) = 1 - max_i P(i|t)

- Measures the misclassification error made by a node.
- Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.
Examples for Computing Error

  Error(t) = 1 - max_i P(i|t)

C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Error = 1 - max(0, 1) = 1 - 1 = 0

C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6
  Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6

C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6
  Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3
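A minimal sketch in Python, reproducing the three values above:

```python
# Classification error of a node from per-class record counts.
def classification_error(counts):
    n = sum(counts)
    return 1.0 - max(counts) / n if n else 0.0

print(classification_error([0, 6]))  # 0.0
print(classification_error([1, 5]))  # ~0.167 (= 1/6)
print(classification_error([2, 4]))  # ~0.333 (= 1/3)
```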
Comparison among Splitting Criteria

(Figure: for a 2-class problem, Gini, entropy, and misclassification error as functions of the fraction p of records in one class; all three peak at p = 0.5 and vanish at p = 0 and p = 1.)

Misclassification Error vs Gini
Parent: C1 = 7, C2 = 3, Gini = 0.42

Split on A? into Node N1 (Yes) and Node N2 (No):

       N1   N2
  C1   3    4
  C2   0    3

Gini(N1) = 1 - (3/3)^2 - (0/3)^2 = 0
Gini(N2) = 1 - (4/7)^2 - (3/7)^2 = 0.489
Gini(Children) = 3/10 × 0 + 7/10 × 0.489 = 0.342

The split improves the Gini index (0.342 < 0.42), yet the misclassification error is unchanged: 3/10 before the split and 7/10 × 3/7 = 3/10 after.
Tree Induction

Greedy strategy:
- Split the records based on an attribute test that optimizes a certain criterion.

Issues:
- Determine how to split the records
  - How to specify the attribute test condition?
  - How to determine the best split?
- Determine when to stop splitting
Stopping Criteria for Tree Induction

- Stop expanding a node when all the records belong to the same class.
- Stop expanding a node when all the records have similar attribute values.
- Early termination (to be discussed later).
Decision Tree Based Classification

Advantages:
- Inexpensive to construct
- Extremely fast at classifying unknown records
- Easy to interpret for small-sized trees
- Accuracy comparable to other classification techniques for many simple data sets
Example: C4.5

- Simple depth-first construction.
- Uses information gain.
- Sorts continuous attributes at each node.
- Needs the entire data set to fit in memory, so it is unsuitable for large data sets (would require out-of-core sorting).
- You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz
Practical Issues of Classification

- Underfitting and Overfitting
- Missing Values
- Costs of Classification
Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

- Circular points: 0.5 ≤ sqrt(x1^2 + x2^2) ≤ 1
- Triangular points: sqrt(x1^2 + x2^2) > 1 or sqrt(x1^2 + x2^2) < 0.5
Underfitting and Overfitting

(Figure: training and test error vs. number of nodes; test error starts rising again as the tree grows, i.e., overfitting.)

Underfitting: when the model is too simple, both training and test errors are large.
Overfitting due to Noise

The decision boundary is distorted by noise points.
Overfitting due to Insufficient Examples

The lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region: an insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.
Notes on Overfitting
Overfitting results in decision trees that
are more complex than necessary
Training error no longer provides a good
estimate of how well the tree will perform
on previously unseen records
Need new ways for estimating errors
Estimating Generalization Errors

- Re-substitution errors: error on the training set ( e(t) )
- Generalization errors: error on the test set ( e'(t) )

Methods for estimating generalization errors:
- Optimistic approach: e'(t) = e(t)
- Pessimistic approach:
  - For each leaf node: e'(t) = e(t) + 0.5
  - Total errors: e'(T) = e(T) + N × 0.5 (N: number of leaf nodes)
  - For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances): training error = 10/1000 = 1%; generalization error = (10 + 30 × 0.5)/1000 = 2.5%
- Reduced error pruning (REP): uses a validation data set to estimate the generalization error
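A one-function sketch of the pessimistic estimate, reproducing the 2.5% above:

```python
# Pessimistic estimate: training errors plus a 0.5 penalty per leaf.
def pessimistic_error(train_errors, n_leaves, n_instances):
    return (train_errors + 0.5 * n_leaves) / n_instances

print(pessimistic_error(10, 30, 1000))  # 0.025 -> 2.5%
```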
Occam's Razor

- Given two models with similar generalization errors, one should prefer the simpler model over the more complex model.
- For complex models, there is a greater chance that the model was fitted accidentally to errors in the data.
- Therefore, one should include model complexity when evaluating a model.
Minimum Description Length (MDL)

(Figure: a sender who knows the labeled table (X, y) transmits the classes to a receiver who has only X; the sender can encode either the raw labels or a model plus its misclassifications.)

- Cost(Model, Data) = Cost(Data|Model) + Cost(Model)
  - Cost is the number of bits needed for encoding.
  - Search for the least costly model.
- Cost(Data|Model) encodes the misclassification errors.
- Cost(Model) uses node encoding (number of children) plus splitting-condition encoding.
How to Address Overfitting

Pre-Pruning (Early Stopping Rule):
- Stop the algorithm before it becomes a fully grown tree.
- Typical stopping conditions for a node:
  - Stop if all instances belong to the same class.
  - Stop if all the attribute values are the same.
- More restrictive conditions:
  - Stop if the number of instances is less than some user-specified threshold.
  - Stop if the class distribution of instances is independent of the available features (e.g., using the χ² test).
  - Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).
How to Address Overfitting

Post-pruning:
- Grow the decision tree to its entirety.
- Trim the nodes of the decision tree in a bottom-up fashion.
- If the generalization error improves after trimming, replace the sub-tree by a leaf node; the class label of the leaf node is determined from the majority class of instances in the sub-tree.
- Can use MDL for post-pruning.
Example of Post-Pruning

Before splitting (node counts: Class = Yes: 20, Class = No: 10):
- Training error = 10/30
- Pessimistic error = (10 + 0.5)/30 = 10.5/30

After splitting on A? into children A1, A2, A3, A4 (each child labeled with its majority class, Yes or No):
- Training error = 9/30
- Pessimistic error = (9 + 4 × 0.5)/30 = 11/30

Since 11/30 > 10.5/30: PRUNE!
Examples of Post-pruning

- Case 1: C0: 11, C1: 3 vs. C0: 2, C1: 4
- Case 2: C0: 14, C1: 3 vs. C0: 2, C1: 2

- Optimistic error? Don't prune in either case.
- Pessimistic error? Don't prune case 1, prune case 2.
- Reduced error pruning? Depends on the validation set.
Handling Missing Attribute Values
Missing values affect decision tree
construction in three different ways:
Affects how impurity measures are computed
Affects how to distribute instance with missing
value to child nodes
Affects how a test instance with missing value
is classified
Computing Impurity Measure

Class counts by Refund value (one record has a missing Refund value):

              Class = Yes  Class = No
  Refund=Yes  0            3
  Refund=No   2            4
  Refund=?    1            0

Before splitting: Entropy(Parent) = -0.3 log(0.3) - 0.7 log(0.7) = 0.8813

Split on Refund (using the 9 records with a known Refund value):
- Entropy(Refund=Yes) = 0
- Entropy(Refund=No) = -(2/6) log(2/6) - (4/6) log(4/6) = 0.9183
- Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551

Gain = 0.9 × (0.8813 - 0.551) = 0.2973
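A minimal sketch of this computation in Python (base-2 logs, counts as in the table above):

```python
# Entropy gain with a missing value: entropies from known-value records,
# gain scaled by the fraction of records with a known Refund (9/10).
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

parent = [3, 7]              # Cheat = Yes/No over all 10 records
children = [[0, 3], [2, 4]]  # Refund = Yes / No (the 9 known records)

n_total, n_known = 10, 9
child_entropy = sum(sum(c) / n_total * entropy(c) for c in children)
gain = n_known / n_total * (entropy(parent) - child_entropy)
print(round(gain, 4))  # ~0.2973
```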
Distribute Instances

When the record with Refund = ? reaches the Refund node:
- Probability that Refund = Yes is 3/9.
- Probability that Refund = No is 6/9.
- Assign the record to the left child (Refund = Yes) with weight = 3/9 and to the right child (Refund = No) with weight = 6/9.
Classify Instances

A new record with a missing Marital Status value reaches the MarSt node of the tree (Refund → MarSt → TaxInc, as before). Weighted class counts at that node:

              Married  Single  Divorced  Total
  Class=No    3        1       0         4
  Class=Yes   6/9      1       1         2.67
  Total       3.67     2       1         6.67

- Probability that Marital Status = Married is 3.67/6.67.
- Probability that Marital Status = {Single, Divorced} is 3/6.67.
Scalable Decision Tree Induction Methods

- SLIQ (EDBT'96, Mehta et al.): builds an index for each attribute; only the class list and the current attribute list reside in memory.
- SPRINT (VLDB'96, J. Shafer et al.): constructs an attribute list data structure.
- RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti): builds an AVC-list (attribute, value, class label).
- PUBLIC (VLDB'98, Rastogi & Shim): integrates tree splitting and tree pruning: stop growing the tree earlier.
- BOAT (PODS'99, Gehrke, Ganti, Ramakrishnan & Loh): uses bootstrapping to create several small samples.