Data Mining
Cluster Analysis: Advanced Concepts and Algorithms
Lecture Notes for Chapter 8
Introduction to Data Mining, 2nd Edition
by Tan, Steinbach, Karpatne, Kumar
Outline
Prototype-based
– Fuzzy c-means
– Mixture Model Clustering
– Self-Organizing Maps
Density-based
– Grid-based clustering
– Subspace clustering: CLIQUE
– Kernel-based: DENCLUE
Graph-based
– Chameleon
– Jarvis-Patrick
– Shared Nearest Neighbor (SNN)
Characteristics of Clustering Algorithms
Hard (Crisp) vs Soft (Fuzzy) Clustering
Hard (Crisp) vs. Soft (Fuzzy) clustering
– For soft clustering, allow a point to belong to more than one cluster
– For K-means, generalize objective function
$SSE = \sum_{j=1}^{k} \sum_{i=1}^{m} w_{ij}\, dist(x_i, c_j)^2$, subject to $\sum_{j=1}^{k} w_{ij} = 1$
$w_{ij}$: weight with which object $x_i$ belongs to cluster $c_j$
– To minimize SSE, repeat the following steps:
Fix $c_j$ and determine $w_{ij}$ (cluster assignment)
Fix $w_{ij}$ and recompute $c_j$
– Hard clustering: $w_{ij} \in \{0, 1\}$
Soft (Fuzzy) Clustering: Estimating Weights
Consider a point x = 2.5 located between two centroids, c1 = 1 and c2 = 5.
$SSE(x) = w_{x1}(2.5 - 1)^2 + w_{x2}(5 - 2.5)^2 = 2.25\,w_{x1} + 6.25\,w_{x2}$
SSE(x) has a minimum value of 2.25 when wx1 = 1, wx2 = 0
Fuzzy C-means
Objective function (p: fuzzifier, p > 1):
$SSE = \sum_{j=1}^{k} \sum_{i=1}^{m} w_{ij}^{p}\, dist(x_i, c_j)^2$, subject to $\sum_{j=1}^{k} w_{ij} = 1$
$w_{ij}$: weight with which object $x_i$ belongs to cluster $c_j$
$p$ is an exponent applied to the weights, not a superscript label, and controls how "fuzzy" the clustering is
– To minimize the objective function, repeat the following:
Fix $c_j$ and determine $w_{ij}$
Fix $w_{ij}$ and recompute $c_j$
– Fuzzy c-means clustering: $w_{ij} \in [0, 1]$
Bezdek, James C. Pattern recognition with fuzzy objective function algorithms. Kluwer Academic Publishers, 1981.
Fuzzy C-means
Consider again the point x = 2.5 between centroids c1 = 1 and c2 = 5, with fuzzifier p = 2.
$SSE(x) = w_{x1}^{2}(2.5 - 1)^2 + w_{x2}^{2}(5 - 2.5)^2 = 2.25\,w_{x1}^{2} + 6.25\,w_{x2}^{2}$
SSE(x) has a minimum value of 1.654 when wx1 = 0.74, wx2 = 0.26
Fuzzy C-means
Objective function (p: fuzzifier, p > 1):
$SSE = \sum_{j=1}^{k} \sum_{i=1}^{m} w_{ij}^{p}\, dist(x_i, c_j)^2$, subject to $\sum_{j=1}^{k} w_{ij} = 1$
Initialization: choose the weights $w_{ij}$ randomly, subject to the constraint that $\sum_{j=1}^{k} w_{ij} = 1$
Repeat:
– Update centroids: $c_j = \sum_{i=1}^{m} w_{ij}^{p}\, x_i \Big/ \sum_{i=1}^{m} w_{ij}^{p}$
– Update weights: $w_{ij} = \big(1/dist(x_i, c_j)^2\big)^{1/(p-1)} \Big/ \sum_{l=1}^{k} \big(1/dist(x_i, c_l)^2\big)^{1/(p-1)}$
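The update equations above translate directly into a short alternating procedure. Below is a minimal NumPy sketch of fuzzy c-means (an illustration, not the textbook's code); the data matrix X, the number of clusters k, and the fuzzifier p are assumed inputs, and the weights are initialized randomly subject to the sum-to-one constraint.

```python
import numpy as np

def fuzzy_cmeans(X, k, p=2, n_iter=100, seed=0):
    """Minimal fuzzy c-means sketch: alternate the weight and centroid updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = rng.random((n, k))
    W /= W.sum(axis=1, keepdims=True)                        # each row of weights sums to 1
    for _ in range(n_iter):
        Wp = W ** p
        C = (Wp.T @ X) / Wp.sum(axis=0)[:, None]             # c_j = sum_i w_ij^p x_i / sum_i w_ij^p
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)                           # guard against division by zero
        inv = (1.0 / d2) ** (1.0 / (p - 1))
        W = inv / inv.sum(axis=1, keepdims=True)             # w_ij = (1/d_ij^2)^(1/(p-1)) / sum_l (...)
    return C, W

# Weight update for the slide's example: x = 2.5, centroids c1 = 1 and c2 = 5, p = 2
x, c = 2.5, np.array([1.0, 5.0])
inv = 1.0 / (x - c) ** 2                                     # exponent 1/(p-1) = 1 when p = 2
w = inv / inv.sum()
print(w.round(2))                                            # -> [0.74 0.26], giving SSE(x) of about 1.654
```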
Fuzzy K-means Applied to Sample Data
(Figure: fuzzy k-means clustering of the sample data; shading shows each point's maximum membership weight, on a color scale from 0.5 to 0.95.)
An Example Application: Image Segmentation
Modified versions of fuzzy c-means have been
used for image segmentation
– Especially fMRI images (functional magnetic
resonance images)
References
– Gong, Maoguo, Yan Liang, Jiao Shi, Wenping Ma, and Jingjing Ma. "Fuzzy c-means clustering with local information and kernel metric for image segmentation." IEEE Transactions on Image Processing 22, no. 2 (2013): 573-584.
– Ahmed, Mohamed N., Sameh M. Yamany, Nevin Mohamed, Aly A. Farag, and Thomas Moriarty. "A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data." IEEE Transactions on Medical Imaging 21, no. 3 (2002): 193-199.
(Figure, from left to right: original images, fuzzy c-means, EM, BCFCM.)
Clustering Using Mixture Models
Idea is to model the set of data points as arising from a
mixture of distributions
– Typically, normal (Gaussian) distribution is used
– But other distributions have been very profitably used
Clusters are found by estimating the parameters of the
statistical distributions using the Expectation-
Maximization (EM) algorithm
k-means is a special case of this approach
Provides a compact representation of clusters
The probabilities with which a point belongs to each cluster provide functionality similar to fuzzy clustering.
Probabilistic Clustering: Example
Informal example: consider
modeling the points that
generate the following
histogram.
Looks like a combination of two
normal (Gaussian) distributions
Suppose we can estimate the
mean and standard deviation of each normal distribution.
– This completely describes the two clusters
– We can compute the probabilities with which each point belongs to each
cluster
– Can assign each point to the
cluster (distribution) for which it
is most probable.
Probabilistic Clustering: EM Algorithm
Initialize the parameters
Repeat
For each point, compute its probability under each distribution
Using these probabilities, update the parameters of each distribution
Until there is no change
Very similar to K-means
Consists of assignment and update steps
Can use random initialization
– Problem of local minima
For normal distributions, typically use K-means to initialize
If using normal distributions, can find elliptical as well as
spherical shapes.
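As a usage note, mixture-model clustering of this kind is available off the shelf; the snippet below is a minimal sketch using scikit-learn's GaussianMixture (assuming scikit-learn is installed). With full covariance matrices, EM can recover elliptical clusters that plain K-means would split or merge.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# One elongated (elliptical) Gaussian blob and one spherical blob
X = np.vstack([
    rng.multivariate_normal([0, 0], [[3.0, 2.0], [2.0, 2.0]], size=200),
    rng.multivariate_normal([6, 0], [[1.0, 0.0], [0.0, 1.0]], size=200),
])

gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
soft = gm.predict_proba(X)     # probability of each point under each cluster (soft assignment)
hard = gm.predict(X)           # most probable cluster for each point (hard assignment)
print(gm.means_, gm.weights_)  # estimated cluster means and mixing weights
```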
Probabilistic Clustering: Updating Centroids
Update formula for centroids, assuming an estimate of the statistical parameters
$c_j = \sum_{i=1}^{m} x_i \, p(C_j \mid x_i) \Big/ \sum_{i=1}^{m} p(C_j \mid x_i)$
$x_i$ is a data point, $C_j$ is a cluster, $c_j$ is a centroid, and $m$ is the number of data points
Very similar to the fuzzy k-means formula
– Weights are probabilities
– Weights are not raised to a power
– Probabilities are calculated using Bayes rule: $p(C_j \mid x_i) = p(x_i \mid C_j)\, p(C_j) \Big/ \sum_{l=1}^{k} p(x_i \mid C_l)\, p(C_l)$
Need to assign weights to each cluster
– Weights may not be equal
– Similar to prior probabilities
– Can be estimated: $p(C_j) = \frac{1}{m} \sum_{i=1}^{m} p(C_j \mid x_i)$
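To make these formulas concrete, here is a minimal NumPy sketch of one EM pass for a mixture of spherical Gaussians with a fixed, shared variance (an illustrative simplification; a full implementation would also update the covariances). The E-step applies Bayes rule to get p(C_j | x_i), and the M-step applies the centroid and prior updates shown above.

```python
import numpy as np

def em_step(X, means, priors, sigma=1.0):
    """One EM iteration for a mixture of spherical Gaussians with fixed sigma."""
    # E-step: responsibilities p(C_j | x_i) via Bayes rule
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    likelihood = np.exp(-d2 / (2 * sigma ** 2))        # p(x_i | C_j), up to a constant factor
    numer = likelihood * priors                        # p(x_i | C_j) p(C_j)
    resp = numer / numer.sum(axis=1, keepdims=True)    # p(C_j | x_i)
    # M-step: centroid and prior updates from the slide
    means = (resp.T @ X) / resp.sum(axis=0)[:, None]   # c_j = sum_i x_i p(C_j|x_i) / sum_i p(C_j|x_i)
    priors = resp.mean(axis=0)                         # p(C_j) = (1/m) sum_i p(C_j|x_i)
    return means, priors, resp
```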
More Detailed EM Algorithm
Probabilistic Clustering Applied to Sample Data
(Figure: mixture-model clustering of the sample data; shading shows each point's maximum cluster-membership probability, on a color scale from 0.5 to 0.95.)
Probabilistic Clustering: Dense and Sparse Clusters
(Figure: a 2-D data set (x vs. y) containing one dense cluster and one sparse cluster, with one point marked "?".)
Problems with EM
Convergence can be slow
Only guarantees finding local maxima
Makes some significant statistical assumptions
Number of parameters for the Gaussian distribution grows as O(d²), where d is the number of dimensions
– Parameters associated with covariance matrix
– K-means only estimates cluster means, which grow
as O(d)
Alternatives to EM
Method of moments / Spectral methods
– ICML 2014 workshop bibliography
https://sites.google.com/site/momentsicml2014/bibliography
Markov chain Monte Carlo (MCMC)
Other approaches
SOM: Self-Organizing Maps
Self-organizing maps (SOM)
– Centroid based clustering scheme
– Like K-means, a fixed number of clusters are specified
– However, the spatial relationship
of clusters is also specified,
typically as a grid
– Points are considered one by one
– Each point is assigned to the
closest centroid, and this centroid is updated
– Other centroids are updated based
on their spatial proximity to the closest centroid
Kohonen, Teuvo. Self-Organizing Maps. Springer Series in Information Sciences, vol. 30. Springer, 1995.
SOM: Self-Organizing Maps
Updates are weighted by distance
– Centroids farther away are affected less
The impact of the updates decreases with time
– At some point the centroids will not change much
SOM can be viewed as a type of dimensionality reduction
– If a 2D (3D) grid is used, the results can be easily visualized, and it can facilitate the
interpretation of clusters
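A minimal sketch of the SOM update just described, with centroids arranged on a rectangular grid (an illustrative simplification of Kohonen's algorithm; the grid size, learning rate, and neighborhood width are assumed parameters). The learning rate and neighborhood radius shrink over time, which is why the centroids eventually stop changing much.

```python
import numpy as np

def train_som(X, grid_shape=(5, 5), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Centroids live on a 2-D grid; the winner and its grid neighbors move toward each point."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    centroids = X[rng.choice(len(X), rows * cols, replace=False)].astype(float)
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1 - frac)                  # update strength decays over time
        sigma = sigma0 * (1 - frac) + 0.5      # neighborhood radius shrinks over time
        x = X[rng.integers(len(X))]            # points are considered one by one
        winner = np.argmin(((centroids - x) ** 2).sum(axis=1))
        gdist2 = ((grid - grid[winner]) ** 2).sum(axis=1)
        h = np.exp(-gdist2 / (2 * sigma ** 2)) # centroids far away on the grid are affected less
        centroids += lr * h[:, None] * (x - centroids)
    return centroids.reshape(rows, cols, -1)
```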
SOM Clusters of LA Times Document Data
Another SOM Example: 2D Points
Issues with SOM
High computational complexity
No guarantee of convergence
Choice of grid and other parameters is somewhat
arbitrary
Lack of a specific objective function
Grid-based Clustering
A type of density-based clustering
Subspace Clustering
Until now, we found clusters by considering all of the
attributes
Some clusters may involve only a subset of attributes, i.e.,
subspaces of the data
Clusters in subspaces
Clique – A Subspace Clustering Algorithm
A grid-based clustering algorithm that
methodically finds subspace clusters
– Partitions the data space into rectangular units of
equal volume in all possible subspaces
– Measures the density of each unit by the fraction of
points it contains
– A unit is dense if the fraction of overall points it contains is above a user-specified threshold, ξ
– A cluster is a set of contiguous (touching) dense units
Clique Algorithm
It is impractical to check volume units in all
possible subspaces, since there is an
exponential number of such units
Monotone property of density-based clusters:
– If a set of points cannot form a density-based cluster in k dimensions, then the same set of points cannot form a density-based cluster in any superset of those dimensions
Very similar to Apriori algorithm
Can find overlapping clusters
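The sketch below illustrates the Apriori-style search on a two-level example (an illustration with assumed parameter names, not the full CLIQUE algorithm): dense units are found in each single dimension first, and only 2-D units whose 1-D projections are both dense are ever examined, which is exactly what the monotone property licenses.

```python
import numpy as np
from itertools import combinations

def dense_units(X, n_intervals=10, threshold=0.05):
    """Find dense grid units in 1-D subspaces, then check only the 2-D candidates
    whose 1-D projections are dense (monotone / Apriori-style pruning)."""
    n, d = X.shape
    # Discretize each attribute into equal-width intervals
    widths = np.ptp(X, axis=0) + 1e-12
    bins = np.minimum((np.floor((X - X.min(axis=0)) / widths * n_intervals)).astype(int),
                      n_intervals - 1)

    # Level 1: dense units in single dimensions
    dense1 = set()
    for dim in range(d):
        counts = np.bincount(bins[:, dim], minlength=n_intervals)
        for interval in np.nonzero(counts / n > threshold)[0]:
            dense1.add((dim, int(interval)))

    # Level 2: only 2-D units built from two dense 1-D units are candidates
    dense2 = set()
    for (dim1, i1), (dim2, i2) in combinations(sorted(dense1), 2):
        if dim1 == dim2:
            continue
        mask = (bins[:, dim1] == i1) & (bins[:, dim2] == i2)
        if mask.sum() / n > threshold:
            dense2.add(((dim1, i1), (dim2, i2)))
    return dense1, dense2
```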
Clique Algorithm
Limitations of Clique
Time complexity is exponential in number of
dimensions
– Especially if “too many” dense units are generated at
lower stages
May fail if clusters are of widely differing
densities, since the threshold is fixed
– Determining appropriate threshold and unit interval
length can be challenging
Denclue (DENsity CLUstering)
Based on the notion of kernel-density estimation
– Contribution of each point to the density is given by
an influence or kernel function
(Formula and plot of the Gaussian kernel; e.g., $K(x, y) = e^{-dist(x, y)^2 / (2\sigma^2)}$)
– Overall density is the sum of the contributions of all
points
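A minimal sketch of the kernel-density idea with a Gaussian kernel (the grid resolution and the bandwidth sigma are illustrative choices): the density at each query location is simply the sum of the kernel contributions of all data points.

```python
import numpy as np

def kernel_density(points, query, sigma=1.0):
    """Overall density at each query location = sum of Gaussian kernel contributions."""
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

# Evaluate the density surface of some 2-D data on a regular grid
points = np.random.default_rng(0).normal(size=(100, 2))
xs, ys = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = kernel_density(points, grid, sigma=0.5).reshape(xs.shape)
```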
Example of Density from Gaussian Kernel
DENCLUE Algorithm
Find the density function
Identify local maxima (density attractors)
Assign each point to a density attractor
– Follow direction of maximum increase in density
Define clusters as groups consisting of points associated with the same density attractor
Discard clusters whose density attractor has a density less than a user-specified minimum, ξ
Combine clusters that are connected by a path of points whose density is above ξ
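A rough sketch of the attractor-finding step, using a mean-shift-style hill climb on the Gaussian kernel density (a simplification of DENCLUE's hill climbing; the tolerance and the radius used to group nearly identical attractors are illustrative, and the low-density-discard and cluster-combining steps are omitted).

```python
import numpy as np

def climb_to_attractor(x, points, sigma=1.0, tol=1e-4, max_steps=200):
    """Follow the direction of maximum density increase (mean-shift update for a
    Gaussian kernel) until the point stops moving; the fixed point is the attractor."""
    for _ in range(max_steps):
        w = np.exp(-((points - x) ** 2).sum(axis=1) / (2 * sigma ** 2))
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

def denclue_like(points, sigma=1.0, group_radius=0.5):
    """Points whose density attractors (nearly) coincide get the same cluster label."""
    attractors = [climb_to_attractor(p, points, sigma) for p in points]
    labels, centers = -np.ones(len(points), dtype=int), []
    for i, a in enumerate(attractors):
        for j, c in enumerate(centers):
            if np.linalg.norm(a - c) < group_radius:
                labels[i] = j
                break
        else:
            centers.append(a)
            labels[i] = len(centers) - 1
    return labels
```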
Graph-Based Clustering: General Concepts
Graph-Based clustering needs a proximity graph
– Each edge between two nodes has a weight which is the
proximity between the two points
Many hierarchical clustering algorithms can be viewed in
graph terms
– MIN (single link) merges the pair of subgraphs connected by the lowest-weight edge
– Group Average merges subgraph pairs that have high average
connectivity
In the simplest case, clusters are connected
components in the graph.
Graph-Based Clustering: Chameleon
Based on several key ideas
– Sparsification of the proximity graph
– Partitioning the data into clusters that are
relatively pure subclusters of the “true” clusters
– Merging based on preserving characteristics of
clusters
Graph-Based Clustering: Sparsification
The amount of data that needs to be
processed is drastically reduced
– Sparsification can eliminate more than 99% of the
entries in a proximity matrix
– The amount of time required to cluster the data is
drastically reduced
– The size of the problems that can be handled is
increased
Graph-Based Clustering: Sparsification …
Clustering may work better
– Sparsification techniques keep the connections to the most
similar (nearest) neighbors of a point while breaking the
connections to less similar points.
– This reduces the impact of noise and outliers and sharpens
the distinction between clusters.
Sparsification facilitates the use of graph
partitioning algorithms (or algorithms based
on graph partitioning algorithms)
– Chameleon and Hypergraph-based Clustering
Limitations of Current Merging Schemes
Existing merging schemes in hierarchical
clustering algorithms are static in nature
– MIN:
Merge two clusters based on their closeness (or minimum
distance)
– GROUP-AVERAGE:
Merge two clusters based on their average connectivity
Limitations of Current Merging Schemes
(Figure: four example pairs of clusters, (a)-(d). Closeness schemes will merge (a) and (b); average connectivity schemes will merge (c) and (d).)
Chameleon: Clustering Using Dynamic Modeling
Adapt to the characteristics of the data set to find the
natural clusters
Use a dynamic model to measure the similarity between
clusters
– Main properties are the relative closeness and relative inter-
connectivity of the cluster
– Two clusters are combined if the resulting cluster shares certain
properties with the constituent clusters
– The merging scheme preserves self-similarity
Relative Interconnectivity
Relative Closeness
Chameleon: Steps
Preprocessing Step:
Represent the data by a Graph
– Given a set of points, construct the k-nearest-neighbor (k-NN)
graph to capture the relationship between a point and its k
nearest neighbors
– Concept of neighborhood is captured dynamically (even if region
is sparse)
Phase 1: Use a multilevel graph partitioning
algorithm on the graph to find a large number of
clusters of well-connected vertices
– Each cluster should contain mostly points from one “true” cluster,
i.e., be a sub-cluster of a “real” cluster
Chameleon: Steps …
Phase 2: Use Hierarchical Agglomerative
Clustering to merge sub-clusters
– Two clusters are combined if the resulting cluster shares certain
properties with the constituent clusters
– Two key properties used to model cluster similarity:
Relative Interconnectivity: Absolute interconnectivity of two
clusters normalized by the internal connectivity of the
clusters
Relative Closeness: Absolute closeness of two clusters
normalized by the internal closeness of the clusters
Experimental Results: CHAMELEON
Experimental Results: CURE (10 clusters)
Experimental Results: CURE (15 clusters)
Experimental Results: CHAMELEON
Experimental Results: CURE (9 clusters)
Experimental Results: CURE (15 clusters)
Experimental Results: CHAMELEON
Spectral Clustering
Spectral clustering is a graph-based clustering
approach
– Does a graph partitioning of the proximity graph of a data
set
– Breaks the graph into components, such that
The nodes in a component are strongly connected to other
nodes in the component
The nodes in a component are weakly connected to nodes in
other components
See simple example below (W is the proximity matrix)
Spectral Clustering
For the simple graph below, the proximity matrix
can be written as
– Because the graph consists of two connected components, finding clusters is easy.
– More generally, we need an automated approach
Must be able to handle graphs where the components are not
completely separate
Spectral graph partitioning provides such an approach
Based on eigenvalue decomposition of a slight modification
of the proximity matrix.
Clustering via Spectral Graph Partitioning
Uses an eigenvalue-based approach to do the graph partitioning
– Based on the Laplacian matrix (L) of a graph, which is derived
from the proximity matrix (W)
W is also known as the weighted adjacency matrix
Define a diagonal matrix D
$D_{ii} = \sum_{j} W_{ij}, \qquad D_{ij} = 0 \text{ for } i \ne j$
The kth diagonal entry of D is the sum of the weights of the edges of the kth node of W
– See example below
Clustering via Spectral Graph Partitioning …
Define a matrix called the Laplacian of the graph, L
$\mathbf{L} = \mathbf{D} - \mathbf{W}$
L has the following properties:
– It is symmetric
– $\mathbf{v}^T \mathbf{L} \mathbf{v} \ge 0$, i.e., the matrix is positive semi-definite and all eigenvalues are non-negative
– The smallest eigenvalue is 0
Can write L in terms of its eigenvalue decomposition as
– $\mathbf{L} = \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{T}$, where $\boldsymbol{\Lambda}$ is a diagonal matrix of the eigenvalues and $\mathbf{V}$ is the matrix of eigenvectors
A Slightly More Complicated Example
Spectral Graph Clustering Algorithm
Given the Laplacian of a graph, it is easy to
define a spectral graph clustering algorithm
We simply apply k-means to the matrix consisting of the first k eigenvectors of L (those corresponding to the k smallest eigenvalues)
Note that we cluster the rows of that matrix
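Putting the pieces together, here is a minimal NumPy/scikit-learn sketch of the procedure just described (an illustration, not the book's code): form W and D, compute L = D − W, take the eigenvectors belonging to the k smallest eigenvalues, and run k-means on the rows of that matrix.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """W: symmetric weighted adjacency (proximity) matrix of the graph."""
    D = np.diag(W.sum(axis=1))             # degree matrix
    L = D - W                              # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
    U = eigvecs[:, :k]                     # eigenvectors of the k smallest eigenvalues
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)  # cluster the rows

# Two near-cliques joined by a single weak edge
W = np.array([[0.0, 1.0, 1.0, 0.1, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 1.0, 0.0]])
print(spectral_clustering(W, k=2))         # e.g. [0 0 0 1 1 1] (label names may differ)
```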
Application of K-means and Spectral Clustering to a 2-D Ring
Strengths and Limitations
Can detect clusters of different shape and sizes
Sensitive to how graph is created and sparsified
Sensitive to outliers
Time complexity depends on the sparsity of the
data matrix
– Improved by sparsification
Graph-Based Clustering: SNN Approach
Shared Nearest Neighbor (SNN) graph: the weight of an edge is
the number of shared nearest neighbors between vertices given
that the vertices are connected
(Figure: vertices i and j, each linked to its near neighbors; because i and j share 4 neighbors, the SNN edge between them has weight 4.)
Graph-Based Clustering: SNN Approach
Shared Nearest Neighbor (SNN) graph: the weight of an edge is
the number of shared neighbors between vertices given that the
vertices are connected
If two points are similar to many of the same points, then they are
likely similar to one another, even if a direct measurement of
similarity does not indicate this.
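A minimal sketch of computing SNN edge weights from k-nearest-neighbor lists (illustrative; here "connected" is taken to mean the two points appear in each other's k-NN lists, and the weight of such an edge is the number of neighbors they share).

```python
import numpy as np

def snn_graph(X, k=5):
    """Return an n x n matrix of SNN weights for pairs that are in each other's k-NN lists."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)
    knn = [set(np.argsort(d2[i])[:k]) for i in range(n)]    # k nearest neighbors of each point
    snn = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if j in knn[i] and i in knn[j]:                  # only connected vertices get an SNN edge
                snn[i, j] = snn[j, i] = len(knn[i] & knn[j])
    return snn
```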
Creating the SNN Graph
(Figure, left: sparse graph, where link weights are similarities between neighboring points. Right: shared nearest neighbor graph, where link weights are the number of shared nearest neighbors.)
Jarvis-Patrick Clustering
First, the k-nearest neighbors of all points are found
– In graph terms this can be regarded as breaking all but the k
strongest links from a point to other points in the proximity graph
A pair of points is put in the same cluster if
– the two points share more than T neighbors, and
– the two points are in each other's k-nearest-neighbor lists
For instance, we might choose a nearest neighbor list of
size 20 and put points in the same cluster if they share
more than 10 near neighbors
Jarvis-Patrick clustering is too brittle
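A small sketch of Jarvis-Patrick clustering (illustrative; it reuses the mutual k-NN convention from the SNN sketch above and assumes SciPy is available): link two points when they are in each other's k-NN lists and share more than T neighbors, then report the connected components as clusters.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def jarvis_patrick(X, k=20, T=10):
    """Clusters are connected components of the 'share more than T of k neighbors' graph."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)
    knn = [set(np.argsort(d2[i])[:k]) for i in range(n)]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if j in knn[i] and i in knn[j] and len(knn[i] & knn[j]) > T:
                adj[i, j] = adj[j, i] = True
    _, labels = connected_components(csr_matrix(adj), directed=False)
    return labels
```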
When Jarvis-Patrick Works Reasonably Well
(Figure, left: original points. Right: Jarvis-Patrick clustering with a threshold of 6 shared neighbors out of 20 nearest neighbors.)
When Jarvis-Patrick Does NOT Work Well
(Figure, left: smallest threshold, T, that does not merge clusters. Right: threshold of T - 1.)
SNN Density-Based Clustering
Combines:
– Graph based clustering (similarity definition based on
number of shared nearest neighbors)
– Density based clustering (DBScan-like approach)
SNN density measures whether a point is
surrounded by similar points (with respect to its
nearest neighbors)
SNN Clustering Algorithm
1. Compute the similarity matrix
This corresponds to a similarity graph with data points for nodes and
edges whose weights are the similarities between data points
2. Sparsify the similarity matrix by keeping only the k most similar
neighbors
This corresponds to only keeping the k strongest links of the similarity
graph
3. Construct the shared nearest neighbor graph from the sparsified
similarity matrix.
At this point, we could apply a similarity threshold and find the
connected components to obtain the clusters (Jarvis-Patrick
algorithm)
4. Find the SNN density of each point.
Using a user-specified parameter, Eps, find the number of points that have an SNN similarity of Eps or greater to each point. This is the SNN density of the point
SNN Clustering Algorithm …
5. Find the core points
Using a user specified parameter, MinPts, find the core
points, i.e., all points that have an SNN density greater
than MinPts
6. Form clusters from the core points
If two core points are within a “radius”, Eps, of each
other they are placed in the same cluster
7. Discard all noise points
All non-core points that are not within a “radius” of Eps of
a core point are discarded
8. Assign all non-noise, non-core points to clusters
This can be done by assigning such points to the
nearest core point
(Note that steps 4-8 are DBSCAN)
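A condensed sketch of steps 1-8 (illustrative, with simplified bookkeeping): given an SNN similarity matrix such as the one produced by the earlier snn_graph sketch, compute each point's SNN density, select core points, connect core points that are within Eps of each other, discard noise, and attach the remaining points to their most similar core point.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def snn_dbscan(snn, eps=5, min_pts=10):
    """snn: matrix of SNN similarities (e.g., from the snn_graph sketch above)."""
    n = snn.shape[0]
    density = (snn >= eps).sum(axis=1)                 # step 4: SNN density of each point
    core = density >= min_pts                          # step 5: core points
    labels = -np.ones(n, dtype=int)                    # -1 marks noise (step 7)
    core_idx = np.where(core)[0]
    if core_idx.size == 0:
        return labels
    # Step 6: core points within a "radius" Eps of each other share a cluster
    adj = snn[np.ix_(core_idx, core_idx)] >= eps
    _, core_labels = connected_components(csr_matrix(adj), directed=False)
    labels[core_idx] = core_labels
    # Step 8: attach non-core, non-noise points to their most similar core point
    for i in np.where(~core)[0]:
        j = core_idx[np.argmax(snn[i, core_idx])]
        if snn[i, j] >= eps:
            labels[i] = labels[j]
    return labels
```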
SNN Density
(Figure: (a) all points, (b) points with high SNN density, (c) medium SNN density, (d) low SNN density.)
SNN Clustering Can Handle Differing Densities
(Figure, left: original points. Right: SNN clustering.)
SNN Clustering Can Handle Other Difficult Situations
SNN Clusters in Sea-Surface Temperature (SST) Time-series Data
Steinbach, et al (KDD 2003)
(Figure: 107 SNN clusters for detrended monthly Z SST (1958-1998), plotted on a latitude-longitude map of the globe.)
SST Clusters that Correspond to El Nino Climate Indices
Steinbach, et al (KDD 2003)
(Figure: El Niño-related SST clusters 75, 78, 67, and 94 plotted on a latitude-longitude map, alongside the El Niño regions defined by Earth scientists.)

Niño Region (SNN cluster)   Longitude range   Latitude range
1+2 (94)                    90°W-80°W         10°S-0°
3 (67)                      150°W-90°W        5°S-5°N
3.4 (78)                    170°W-120°W       5°S-5°N
4 (75)                      160°E-150°W       5°S-5°N

SNN clusters of SST that are highly correlated with El Niño indices, ~0.93 correlation.
SNN Clusters in Sea-Level-Pressure (SLP) Time-series Data
Steinbach, et al (KDD 2003)
(Figure, left: 26 SLP clusters found via shared nearest neighbor clustering (100 NN, 1982-1994), plotted on a latitude-longitude map. Right: SNN density of the SLP time series data, i.e., the SNN density of points on the globe.)
Pairs of SLP Clusters that Correspond to El-Nino SOI
(Figure, left: centroids of SLP clusters 15 and 20 (near Darwin, Australia and Tahiti), 1982-1993. Right: centroid of cluster 20 minus centroid of cluster 15 versus the SOI, with correlation 0.78.)
Limitations of SNN Clustering
Complexity of SNN Clustering is high
– O(n × time to find the number of neighbors within Eps of a point)
– In worst case, this is O(n2)
– For lower dimensions, there are more efficient ways to find
the nearest neighbors
R* Tree
k-d Trees
Parameterization is not easy
Characteristics of Data, Clusters, and
Clustering Algorithms
A cluster analysis is affected by characteristics of
– Data
– Clusters
– Clustering algorithms
Looking at these characteristics gives us a number of dimensions that we can use to describe clustering algorithms and the results that they produce
Characteristics of Data
High dimensionality
– Dimensionality reduction
Types of attributes
– Binary, discrete, continuous, asymmetric
– Mixed attribute types (e.g., some are continuous, others nominal)
Differences in attribute scales
– Normalization techniques
Size of data set
Noise and Outliers
Properties of the data space
– Can you define a meaningful centroid or a meaningful notion of density?
Characteristics of Clusters
Data distribution
– Parametric models
Shape
– Globular or arbitrary shape
Differing sizes
Differing densities
Level of separation among clusters
Relationship among clusters
Subspace clusters
Characteristics of Clustering Algorithms
Order dependence
Non-determinism
Scalability
Number of parameters
Which Clustering Algorithm?
Type of Clustering
– Taxonomy vs flat
Type of Cluster
– Prototype vs connected regions vs density-based
Characteristics of Clusters
– Subspace clusters, spatial inter-relationships
Characteristics of Data Sets and Attributes
Noise and Outliers
Number of Data Objects
Number of Attributes
Algorithmic Considerations
Comparison of MIN and EM-Clustering
We assume EM clustering using the Gaussian (normal) distribution.
MIN is hierarchical, EM clustering is partitional.
Both MIN and EM clustering are complete.
MIN has a graph-based (contiguity-based) notion of a cluster, while
EM clustering has a prototype (or model-based) notion of a cluster.
MIN will not be able to distinguish poorly separated clusters, but EM
can manage this in many situations.
MIN can find clusters of different shapes and sizes; EM clustering
prefers globular clusters and can have trouble with clusters of
different sizes.
MIN has trouble with clusters of different densities, while EM can often handle this.
Neither MIN nor EM clustering finds subspace clusters.
Comparison of MIN and EM-Clustering
MIN can handle outliers, but noise can join clusters; EM clustering
can tolerate noise, but can be strongly affected by outliers.
EM can only be applied to data for which a centroid is meaningful;
MIN only requires a meaningful definition of proximity.
EM will have trouble as dimensionality increases and the number
of its parameters (the number of entries in the covariance matrix)
increases as the square of the number of dimensions; MIN can
work well with a suitable definition of proximity.
EM is designed for Euclidean data, although versions of EM
clustering have been developed for other types of data. MIN is
shielded from the data type by the fact that it uses a similarity
matrix.
MIN makes no distribution assumptions; the version of EM we are
considering assumes Gaussian distributions.
Comparison of MIN and EM-Clustering
EM has an O(n) time complexity; MIN is O(n² log(n)).
Because of random initialization, the clusters found by EM
can vary from one run to another; MIN produces the same
clusters unless there are ties in the similarity matrix.
Neither MIN nor EM clustering automatically determines the number of clusters.
MIN does not have any user-specified parameters; EM
has the number of clusters and possibly the weights of
the clusters.
EM clustering can be viewed as an optimization problem;
MIN uses a graph model of the data.
Neither EM nor MIN clustering is order dependent.
Comparison of DBSCAN and K-means
Both are partitional.
K-means is complete; DBSCAN is not.
K-means has a prototype-based notion of a cluster; DBSCAN uses a density-based notion.
K-means can find clusters that are not well-separated.
DBSCAN will merge clusters that touch.
DBSCAN handles clusters of different shapes and sizes;
K-means prefers globular clusters.
Comparison of DBSCAN and K-means
DBSCAN can handle noise and outliers; K-means
performs poorly in the presence of outliers
K-means can only be applied to data for which a centroid
is meaningful; DBSCAN requires a meaningful definition
of density
DBSCAN works poorly on high-dimensional data; K-
means works well for some types of high-dimensional
data
Both techniques were designed for Euclidean data, but
extended to other types of data
DBSCAN makes no distribution assumptions; K-means is
really assuming spherical Gaussian distributions
Comparison of DBSCAN and K-means
K-means has an O(n) time complexity; DBSCAN is O(n²)
Because of random initialization, the clusters found by K-
means can vary from one run to another; DBSCAN
always produces the same clusters
DBSCAN automatically determines the number of
clusters; K-means does not
K-means has only one parameter; DBSCAN has two.
K-means clustering can be viewed as an optimization
problem and as a special case of EM clustering; DBSCAN
is not based on a formal model.