PPT1
Data Preprocessing
Prepared By
Dr. G.N.V.G. Sirisha
Assistant Professor
◼ Data Cleaning
◼ Data Integration
◼ Data Reduction
◼ Data Transformation and Data Discretization
◼ Summary
Data Cleaning
◼ Data in the Real World Is Dirty: lots of potentially incorrect data, e.g., faulty
instruments, human or computer error, transmission error
◼ incomplete: lacking attribute values, lacking certain attributes of interest, or
containing only aggregate data
◼ noisy: containing errors or outliers, e.g., due to faulty instruments or technology limitations
◼ inconsistent: containing discrepancies in codes or names
How to Handle Noisy Data?
◼ Binning
◼ first sort data and partition into (equal-frequency) bins
◼ then one can smooth by bin means, bin medians, or bin boundaries, etc. (see the
sketch at the end of this slide)
◼ Regression
◼ smooth by fitting the data into regression functions
◼ Clustering
◼ detect and remove outliers
◼ Combined computer and human inspection
◼ detect suspicious values and have them checked by a human (e.g., possible outliers)
(Figures: binning, clustering, and regression applied to noisy data)
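To make the binning idea concrete, here is a minimal Python sketch (added for illustration; the sample values are made up and the data length is assumed divisible by the number of bins) that partitions sorted data into equal-frequency bins and smooths by bin means and by bin boundaries:

```python
# Minimal equal-frequency binning and smoothing sketch (illustrative only).
data = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
n_bins = 3
bin_size = len(data) // n_bins  # assumes len(data) is divisible by n_bins
bins = [data[i * bin_size:(i + 1) * bin_size] for i in range(n_bins)]

# Smoothing by bin means: every value in a bin is replaced by the bin's mean.
by_means = [[sum(b) / len(b)] * len(b) for b in bins]

# Smoothing by bin boundaries: each value is replaced by the closer boundary.
by_boundaries = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
                 for b in bins]

print("bins:         ", bins)
print("by means:     ", by_means)
print("by boundaries:", by_boundaries)
```

Smoothing by bin medians works the same way, substituting each bin's median for its mean.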
Data Cleaning as a Process
◼ Data discrepancy detection
◼ Use metadata (e.g., domain, range, dependency, distribution)
◼ Data scrubbing: use simple domain knowledge (e.g., postal code, spell-check) to detect
errors and make corrections
◼ Data Quality
◼ Data Cleaning
◼ Data Integration
◼ Data Reduction
◼ Summary
Data Integration
Combines data from multiple sources into a coherent store
◼ Schema integration: e.g., A.cust-id ≡ B.cust-#
◼ Integrate metadata from different sources
◼ Entity identification problem
◼ Identify real world entities from multiple data sources, e.g., Bill Clinton = William
Clinton
◼ Detecting and resolving data value conflicts
◼ For the same real world entity, attribute values from different sources are
different
◼ Possible reasons: different representations, different scales, e.g., metric vs. British
units
Handling Redundancy in Data Integration
◼ Redundant data often occur when integrating multiple databases
◼ Object identification: The same attribute or object may have different
names in different databases
◼ Derivable data: One attribute may be a “derived” attribute in another
table, e.g., annual revenue
◼ Redundant attributes may be detected by correlation analysis and covariance
analysis
◼ Careful integration of the data from multiple sources may help reduce/avoid
redundancies and inconsistencies and improve mining speed and quality
Correlation Analysis (Nominal Data)
◼ χ² (chi-square) test

χ² = Σ (Observed − Expected)² / Expected
◼ The larger the χ² value, the more likely the variables are related
◼ The cells that contribute the most to the χ² value are those whose actual count is
very different from the expected count
◼ Correlation does not imply causality
◼ # of hospitals and # of car-theft in a city are correlated
◼ Both are causally linked to the third variable: population
Chi-Square Calculation: An Example
◼ χ² (chi-square) calculation (numbers in parentheses are expected counts,
calculated based on the data distribution in the two categories)

                          Play chess   Not play chess   Sum (row)
Like science fiction      250 (90)     200 (360)        450
Not like science fiction  50 (210)     1000 (840)       1050
Sum (col.)                300          1200             1500
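A short Python check of the table above (added here, not part of the original slides): each expected count is row total × column total / grand total, and χ² sums (observed − expected)² / expected over all cells.

```python
# Chi-square statistic for the "like science fiction" vs. "play chess" table.
observed = [[250, 200],    # like science fiction
            [50, 1000]]    # not like science fiction

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 2))  # ≈ 507.94
```

The result, with 1 degree of freedom, far exceeds 10.828 (the χ² threshold at the 0.001 significance level), so "like science fiction" and "play chess" are strongly correlated in this group.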
Correlation Analysis (Numeric Data)
◼ Correlation coefficient (Pearson's product-moment coefficient):

r_A,B = Σ_{i=1..n} (a_i − Ā)(b_i − B̄) / ((n − 1)·σ_A·σ_B) = (Σ_{i=1..n} a_i·b_i − n·Ā·B̄) / ((n − 1)·σ_A·σ_B)

◼ Example data for two attributes A and B:
A    B
10   20
12   18
14   16
16   14
18   12
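As a quick illustration (a sketch added here), the coefficient for the small A/B table above can be computed directly; since A increases by 2 exactly as B decreases by 2, r comes out as −1:

```python
# Pearson correlation coefficient for the example attributes A and B.
A = [10, 12, 14, 16, 18]
B = [20, 18, 16, 14, 12]

n = len(A)
mean_a, mean_b = sum(A) / n, sum(B) / n
# Sample standard deviations (n - 1 in the denominator, matching the formula above).
std_a = (sum((a - mean_a) ** 2 for a in A) / (n - 1)) ** 0.5
std_b = (sum((b - mean_b) ** 2 for b in B) / (n - 1)) ** 0.5

r = sum((a - mean_a) * (b - mean_b) for a, b in zip(A, B)) / ((n - 1) * std_a * std_b)
print(r)  # -1.0: perfectly negatively correlated
```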
Correlation (viewed as linear relationship)
◼ Correlation measures the linear relationship between objects
◼ To compute correlation, we standardize data objects, A and B, and
then take their dot product
correlation( A, B) = A'•B'
Visually Evaluating Correlation
Covariance (Numeric Data)
◼ Covariance: Cov(A, B) = E[(A − Ā)(B − B̄)] = Σ_{i=1..n} (a_i − Ā)(b_i − B̄) / n
◼ Correlation coefficient: r_A,B = Cov(A, B) / (σ_A · σ_B)
▪ where n is the number of tuples, Ā and B̄ are the respective mean or expected values of
A and B, and σ_A and σ_B are the respective standard deviations of A and B.
◼ Positive covariance: If CovA,B > 0, then A and B both tend to be larger than their expected
values.
◼ Negative covariance: If CovA,B < 0 then if A is larger than its expected value, B is likely to be
smaller than its expected value.
◼ Independence: CovA,B = 0 but the converse is not true:
◼ Some pairs of random variables may have a covariance of 0 but are not independent. Only under
some additional assumptions (e.g., the data follow multivariate normal distributions) does a
covariance of 0 imply independence
Covariance Example
◼ It can be simplified in computation as Cov(A, B) = E(A·B) − Ā·B̄
◼ Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8),
(5, 10), (4, 11), (6, 14).
◼ Question: If the stocks are affected by the same industry trends, will their prices rise
or fall together?
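Working the example through with the simplified formula (the arithmetic below is added for completeness):

$$\bar{A} = \tfrac{2+3+5+4+6}{5} = 4, \qquad \bar{B} = \tfrac{5+8+10+11+14}{5} = 9.6$$

$$\mathrm{Cov}(A,B) = E(A \cdot B) - \bar{A}\,\bar{B} = \tfrac{2\cdot 5 + 3\cdot 8 + 5\cdot 10 + 4\cdot 11 + 6\cdot 14}{5} - 4 \times 9.6 = 42.4 - 38.4 = 4$$

Since Cov(A, B) > 0, the two stocks tend to rise and fall together.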
Data Reduction Strategies
◼ Dimensionality reduction (e.g., wavelet transforms, PCA, attribute subset selection)
◼ Numerosity reduction (e.g., regression, histograms, clustering, sampling)
◼ Data compression
Data Reduction 1: Dimensionality Reduction
◼ Curse of dimensionality
◼ When dimensionality increases, data becomes increasingly sparse
◼ Density and distance between points, which are critical to clustering and outlier analysis,
become less meaningful
◼ The possible combinations of subspaces will grow exponentially
◼ Dimensionality reduction
◼ Avoid the curse of dimensionality
◼ Help eliminate irrelevant features and reduce noise
◼ Reduce time and space required in data mining
◼ Allow easier visualization
◼ Dimensionality reduction techniques
◼ Wavelet transforms
◼ Principal Component Analysis
◼ Supervised and nonlinear techniques (e.g., feature selection)
Wavelet Transformation
◼ Discrete wavelet transform (DWT) for linear signal processing, multi-resolution
analysis
◼ Compressed approximation: store only a small fraction of the strongest of the
wavelet coefficients
◼ Similar to discrete Fourier transform (DFT), but better lossy compression,
localized in space
◼ Method:
◼ Length, L, must be an integer power of 2 (padding with 0’s, when necessary)
◼ Each transform has 2 functions: smoothing, difference
◼ Applies to pairs of data, resulting in two sets of data of length L/2
◼ Applies the two functions recursively, until it reaches the desired length
(Example wavelet families: Haar-2, Daubechies-4)
Wavelet Decomposition
◼ Wavelets: A math tool for space-efficient hierarchical decomposition of
functions
◼ S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to Ŝ = [2¾, −1¼, ½, 0,
0, −1, −1, 0]
◼ Compression: many small detail coefficients can be replaced by 0’s, and only
the significant coefficients are retained
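A minimal Python sketch of this averaging-and-differencing scheme (added for illustration; a plain Haar-style decomposition, assuming the signal length is a power of 2) reproduces Ŝ for the example above:

```python
# One full Haar wavelet decomposition: repeatedly replace the signal by pairwise
# averages, collecting the pairwise half-differences as detail coefficients.
def haar_decompose(signal):
    approx, details = list(signal), []
    while len(approx) > 1:
        pairs = [(approx[i], approx[i + 1]) for i in range(0, len(approx), 2)]
        # Coarser-level details are prepended, so the finest details end up last.
        details = [(a - b) / 2 for a, b in pairs] + details
        approx = [(a + b) / 2 for a, b in pairs]
    return approx + details

S = [2, 2, 0, 2, 3, 5, 4, 4]
print(haar_decompose(S))  # [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
```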
Why Wavelet Transform?
◼ Use hat-shape filters
◼ Emphasize region where points cluster
◼ Multi-resolution
◼ Detect arbitrary shaped clusters at different scales
◼ Efficient
◼ Complexity O(N)
Eigenvalues and Eigenvectors
• Ax = λx ⟺ (A − λI)x = 0
• How to calculate x and λ:
• Calculate det(A − λI); this yields a polynomial of degree n
• Determine the roots of det(A − λI) = 0; the roots are the eigenvalues λ
• Solve (A − λI)x = 0 for each λ to obtain the corresponding eigenvectors x
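For a concrete (added) example, numpy carries out these steps numerically; the 2×2 matrix below is made up:

```python
import numpy as np

# Eigenvalues and eigenvectors of a small symmetric matrix A, i.e., solutions of Ax = λx.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# det(A - λI) = λ² - 4λ + 3 = (λ - 3)(λ - 1), so the eigenvalues are 3 and 1;
# the eigenvector for λ = 3 is proportional to [1, 1], for λ = 1 to [1, -1].
print(np.round(eigenvalues, 6))
print(np.round(eigenvectors, 6))
```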
Principal Components
1. principal component (PC1)
• The eigenvalue with the largest absolute value will indicate that the data have the
largest variance along its eigenvector, the direction along which there is greatest
variation
2. principal component (PC2)
• The direction with maximum variation left in data, orthogonal to the PC1
• In general, only a few directions capture most of the variability in the data.
Principal Component Analysis
• Let X̄ be the mean vector (taking the mean of all rows of X)
• Adjust the original data by the mean (subtract the mean): X′ = X − X̄
• Compute the covariance matrix C of the adjusted X
• Find the eigenvectors and eigenvalues of C
• For matrix C, an eigenvector is a column vector e having the same direction as Ce,
i.e., Ce = λe; λ is called an eigenvalue of C
(Figure: scree plot of the variance (%) captured by each principal component, PC1–PC10)
Transformed Data
• Eigenvalue λⱼ corresponds to the variance along component j
• Thus, sort the eigenvectors by λⱼ in decreasing order
• Take the first p eigenvectors eⱼ, where p is the number of top eigenvalues kept
• Each data point xᵢ is projected onto these eigenvectors:

[y_i1, y_i2, ..., y_ip]ᵀ = [e_1ᵀ; e_2ᵀ; ...; e_pᵀ] · (x_i − x̄),  i.e.,  y_ij = e_jᵀ (x_i − x̄)
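Putting the steps together, here is a minimal PCA sketch in Python (added for illustration; the data and the helper name `pca` are made up): mean-adjust X, compute the covariance matrix, eigendecompose it, and project onto the top p eigenvectors.

```python
import numpy as np

def pca(X, p):
    """Project the rows of X onto the top-p principal components."""
    X_adj = X - X.mean(axis=0)              # subtract the mean vector
    C = np.cov(X_adj, rowvar=False)         # covariance matrix of the adjusted data
    eigvals, eigvecs = np.linalg.eigh(C)    # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]       # sort eigenvalues in decreasing order
    top = eigvecs[:, order[:p]]             # first p eigenvectors, as columns
    return X_adj @ top                      # y_ij = e_j^T (x_i - x_bar)

# Small made-up 2-D data set that varies mostly along one direction.
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])
print(pca(X, p=1))   # each row reduced to its coordinate along PC1
```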
An Example
• Mean1 = 24.1, Mean2 = 53.8

X1    X2    X1′      X2′
19    63    −5.1      9.25
39    74    14.9     20.25
30    87     5.9     33.25
30    23     5.9    −30.75
15    35    −9.1    −18.75
15    43    −9.1    −10.75

(Figure: scatter plot of the original data, X1 vs. X2)

Covariance Matrix
C =

(Figure: scatter plot of the mean-adjusted data)
Attribute Subset Selection
◼ Another way to reduce dimensionality of data
◼ Redundant attributes
◼ Duplicate much or all of the information contained in one or more other
attributes
◼ E.g., purchase price of a product and the amount of sales tax paid
◼ Irrelevant attributes
◼ Contain no information that is useful for the data mining task at hand
◼ E.g., students' ID is often irrelevant to the task of predicting students' GPA
Heuristic Search in Attribute Selection
◼ There are 2^d possible attribute combinations of d attributes
◼ Typical heuristic attribute selection methods:
◼ Best single attribute under the attribute independence assumption: choose by
significance tests
◼ Best step-wise feature selection: start with the best single attribute, then repeatedly
add the next best attribute given those already selected (see the sketch after this slide)
◼ Attribute creation (feature generation)
◼ Attribute extraction: domain-specific
◼ Mapping data to a new space (e.g., Fourier or wavelet transformation; not covered here)
◼ Attribute construction
◼ Combining features: e.g., creating a new feature "area" from "height"
and "width"
◼ Data discretization
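A rough sketch of best step-wise (forward) selection (added here; the `score` function is a hypothetical placeholder, e.g., cross-validated accuracy of a model trained on the candidate subset):

```python
def forward_select(attributes, score):
    """Greedy forward selection: score(subset) -> float, higher is better."""
    selected, best = [], score([])
    remaining = list(attributes)
    while remaining:
        gains = [(score(selected + [a]), a) for a in remaining]
        top_score, top_attr = max(gains)
        if top_score <= best:          # no remaining attribute improves the score: stop
            break
        selected.append(top_attr)
        remaining.remove(top_attr)
        best = top_score
    return selected

# Toy usage: pretend only attributes "A1" and "A3" are informative.
useful = {"A1": 0.3, "A2": 0.0, "A3": 0.2, "A4": 0.0}
print(forward_select(["A1", "A2", "A3", "A4"],
                     score=lambda subset: sum(useful[a] for a in subset)))
# ['A1', 'A3']
```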
Data Reduction 2: Numerosity Reduction
◼ Reduce data volume by choosing alternative, smaller forms of data
representation
◼ Parametric methods (e.g., regression)
◼ Assume the data fits some model, estimate model parameters, store
only the parameters, and discard the data (except possible outliers)
◼ Ex. Log-linear models—obtain the value at a point in m-D space as a product over
appropriate marginal subspaces
◼ Multiple regression: Y = b0 + b1 X1 + b2 X2
◼ Many nonlinear functions can be transformed into the above
◼ Log-linear models:
◼ Approximate discrete multidimensional probability distributions
Y = 23.6 + 3.5·X
Example (2-D) Dataset that can be fit using Linear Regression Model
Example (2-D) Dataset that cannot be fit using Linear Regression Model
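As a small sketch of parametric reduction by regression (added; the data points below are invented to lie roughly on the example line above), the whole set of points is replaced by just two stored coefficients:

```python
import numpy as np

# Invented 2-D data that roughly follows a line.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([27.0, 30.8, 34.0, 37.9, 41.0, 44.7])

b1, b0 = np.polyfit(x, y, deg=1)   # least-squares slope and intercept
print(round(b0, 1), round(b1, 1))  # 23.6 3.5 — the dataset is now summarized by (b0, b1)
```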
Histogram Analysis
◼ Divide data into buckets and store average (sum) for each bucket
◼ Partitioning rules:
◼ Equal-width: equal bucket range
◼ Equal-frequency (or equal-depth): each bucket contains roughly the same number of values
(Figure: example equal-width histogram with buckets of width 10,000 over values 10,000–100,000)
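A short equal-width bucketing sketch with numpy (added for illustration; the prices are randomly generated):

```python
import numpy as np

# Equal-width histogram: bucket counts over a fixed price range.
prices = np.random.default_rng(seed=0).integers(10_000, 100_000, size=500)
counts, edges = np.histogram(prices, bins=9, range=(10_000, 100_000))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{int(lo):>6}-{int(hi):>6}: {c}")
```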
Clustering
◼ Partition data set into clusters based on similarity, and store cluster representation
(e.g., centroid and diameter) only
◼ Can be very effective if data is clustered but not if data is “smeared”
◼ Can have hierarchical clustering and be stored in multi-dimensional index tree
structures
◼ There are many choices of clustering definitions and clustering algorithms
◼ Cluster analysis will be studied in depth in Chapter 10
Sampling
◼ Sampling: obtaining a small sample s to represent the whole data set N
◼ Allow a mining algorithm to run in complexity that is potentially sub-linear to the size
of the data
◼ Key principle: Choose a representative subset of the data
◼ Simple random sampling may have very poor performance in the presence of skew
◼ Develop adaptive sampling methods, e.g., stratified sampling:
◼ Note: Sampling may not reduce database I/Os (data is read a page at a time)
Types of Sampling
◼ Simple random sampling
◼ There is an equal probability of selecting any particular item
◼ Stratified sampling:
◼ Partition the data set, and draw samples from each partition (proportionally, i.e.,
approximately the same percentage of the data)
◼ Used in conjunction with skewed data
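A minimal sketch contrasting the two on skewed data (added; pure Python with made-up records, where "rare" is the minority stratum):

```python
import random
from collections import defaultdict

random.seed(1)
# Made-up skewed data: 90 records of class "common" and 10 of class "rare".
data = [("common", i) for i in range(90)] + [("rare", i) for i in range(10)]

# Simple random sampling without replacement: "rare" may be missed entirely.
srs = random.sample(data, k=10)

# Stratified sampling: draw ~10% from each stratum, so "rare" is always represented.
strata = defaultdict(list)
for label, value in data:
    strata[label].append((label, value))
stratified = [rec for group in strata.values()
              for rec in random.sample(group, k=max(1, len(group) // 10))]

print(sum(1 for label, _ in srs if label == "rare"),         # often 0
      sum(1 for label, _ in stratified if label == "rare"))  # always 1 here
```

On skewed data like this, the simple random sample often misses the rare stratum entirely, while the stratified sample keeps it proportionally represented.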
(Figures: sampling from the raw data with and without replacement; stratified sampling; cluster sampling)
Data Cube Aggregation
◼ The lowest level of a data cube (base cuboid)
◼ The aggregated data for an individual entity of interest
◼ E.g., a customer in a phone calling data warehouse
◼ Multiple levels of aggregation in data cubes
◼ Further reduce the size of data to deal with
◼ Reference appropriate levels
◼ Use the smallest representation which is enough to solve the task
◼ Queries regarding aggregated information should be answered using data cube,
when possible
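For instance (an added sketch using pandas; the call records are invented), per-call base-cuboid data can be rolled up to customer-per-quarter totals, and quarterly queries are then answered from the much smaller aggregated table:

```python
import pandas as pd

# Invented per-call records: the base-cuboid level of detail.
calls = pd.DataFrame({
    "customer": ["C1", "C1", "C1", "C2", "C2", "C2"],
    "quarter":  ["Q1", "Q1", "Q2", "Q1", "Q2", "Q2"],
    "minutes":  [12, 30, 7, 45, 3, 20],
})

# Aggregate to a higher level of the cube: total minutes per customer per quarter.
per_quarter = calls.groupby(["customer", "quarter"], as_index=False)["minutes"].sum()
print(per_quarter)
```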
Data Reduction 3: Data Compression
◼ String compression
◼ There are extensive theories and well-tuned algorithms
◼ Audio/video compression
◼ Typically lossy compression, with progressive refinement
◼ Sometimes small fragments of the signal can be reconstructed without reconstructing the
whole
◼ Time sequence data is not audio
◼ Typically short and varies slowly with time
(Figure: lossy compression — original data vs. its approximated reconstruction)
Chapter 3: Data Preprocessing
◼ Data Preprocessing: An Overview
◼ Data Quality
◼ Data Cleaning
◼ Data Integration
◼ Data Reduction
◼ Data Transformation and Data Discretization
◼ Summary
Data Transformation
◼ A function that maps the entire set of values of a given attribute to a new set of replacement values s.t.
each old value can be identified with one of the new values
◼ Methods
◼ Smoothing: Remove noise from data
◼ Attribute/feature construction
◼ New attributes constructed from the given ones
◼ Aggregation: Summarization, data cube construction
◼ Normalization: Scaled to fall within a smaller, specified range
◼ min-max normalization
◼ z-score normalization
◼ normalization by decimal scaling
◼ Discretization: Concept hierarchy climbing
Normalization
◼ Min-max normalization: to [new_minA, new_maxA]
v′ = ((v − minA) / (maxA − minA)) × (new_maxA − new_minA) + new_minA
◼ Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0].
Then $73,600 is mapped to ((73,600 − 12,000) / (98,000 − 12,000)) × (1.0 − 0) + 0 = 0.716
◼ Z-score normalization (μ: mean, σ: standard deviation):
v′ = (v − μ_A) / σ_A
◼ Ex. Let μ = 54,000, σ = 16,000. Then v′ = (73,600 − 54,000) / 16,000 = 1.225
◼ Normalization by decimal scaling
v′ = v / 10^j, where j is the smallest integer such that max(|v′|) < 1
Normalization by Decimal Scaling Example
◼ 100, 12,000, 13,500, 1,200 → max |v| = 13,500, so j = 5 and the normalized values are 0.001, 0.12, 0.135, 0.012
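The three normalization methods side by side in a short Python sketch (added for illustration; it reproduces the worked values above):

```python
def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mean_a, std_a):
    return (v - mean_a) / std_a

def decimal_scaling(values):
    j = 0
    while max(abs(v) for v in values) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in values]

print(min_max(73_600, 12_000, 98_000))                # ≈ 0.716
print(z_score(73_600, 54_000, 16_000))                # 1.225
print(decimal_scaling([100, 12_000, 13_500, 1_200]))  # [0.001, 0.12, 0.135, 0.012]
```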
Discretization
◼ Discretization without using class labels: the same data can be partitioned by equal-width
binning, equal-frequency binning, or clustering; K-means clustering leads to better results
◼ Discretization by correlation analysis: bottom-up merge — find the best neighboring
intervals (those having similar distributions of classes, i.e., low χ² values) and merge them recursively
Summary
◼ Data quality: accuracy, completeness, consistency, timeliness, believability,
interpretability
◼ Data cleaning: e.g. missing/noisy values, outliers
◼ Data integration from multiple sources:
◼ Entity identification problem
◼ Remove redundancies
◼ Detect inconsistencies
◼ Data reduction
◼ Dimensionality reduction
◼ Numerosity reduction
◼ Data compression
Thank You