This document provides an overview of data preprocessing techniques discussed in Chapter 3 of the textbook "Data Mining: Concepts and Techniques". It covers topics such as data quality, data cleaning, data integration, data reduction, and data transformation. Data reduction techniques like dimensionality reduction aim to obtain a reduced representation of data that uses less space but produces similar analytical results. Dimensionality reduction methods include wavelet transforms, principal component analysis, and feature selection. Wavelet transforms decompose a signal into different frequency subbands and allow clusters to become more distinguishable at different resolution levels.
This presentation introduces data preprocessing in the field of data mining. Images, examples, and other material are adapted from "Data Mining: Concepts and Techniques" by Jiawei Han, Micheline Kamber, and Jian Pei.
2. 2
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
3. 3
Data Quality: Why Preprocess the Data?
Measures for data quality: A multidimensional view
Accuracy: correct or wrong, accurate or not
Completeness: not recorded, unavailable, …
Consistency: some modified but some not, dangling, …
Timeliness: timely update?
Believability: how much are the data trusted to be correct?
Interpretability: how easily the data can be
understood?
4. 4
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove
outliers, and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation
5. 5
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
6. 6
Data Cleaning
Data in the Real World Is Dirty: lots of potentially incorrect data,
e.g., faulty instruments, human or computer error, transmission errors
incomplete: lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data
e.g., Occupation=“ ” (missing data)
noisy: containing noise, errors, or outliers
e.g., Salary=“−10” (an error)
inconsistent: containing discrepancies in codes or names, e.g.,
Age=“42”, Birthday=“03/07/2010”
Was rating “1, 2, 3”, now rating “A, B, C”
discrepancy between duplicate records
Intentional (e.g., disguised missing data)
Jan. 1 as everyone’s birthday?
7. 7
Incomplete (Missing) Data
Data is not always available
E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
Missing data may be due to
equipment malfunction
inconsistent with other recorded data and thus deleted
data not entered due to misunderstanding
certain data may not be considered important at the
time of entry
history or changes of the data not registered
Missing data may need to be inferred
8. 8
How to Handle Missing Data?
Ignore the tuple: usually done when class label is missing
(when doing classification)—not effective when the % of
missing values per attribute varies considerably
Fill in the missing value manually: tedious + infeasible?
Fill in it automatically with
a global constant: e.g., “unknown”, a new class?!
the attribute mean
the attribute mean for all samples belonging to the
same class: smarter
the most probable value: inference-based such as
Bayesian formula or decision tree
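As a minimal pandas sketch of the automatic fill strategies above (the income and class columns and their values are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "income": [31000, None, 52000, None, 40000],
    "class":  ["low", "low", "high", "high", "low"],
})

# Global constant: flag the missing value with a sentinel
df["const"] = df["income"].fillna(-1)

# Attribute mean over all tuples
df["mean"] = df["income"].fillna(df["income"].mean())

# Attribute mean per class: smarter, uses the class label
df["class_mean"] = df["income"].fillna(
    df.groupby("class")["income"].transform("mean"))
print(df)
```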
9. 9
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitation
inconsistency in naming convention
Other data problems which require data cleaning
duplicate records
incomplete data
inconsistent data
10. 10
How to Handle Noisy Data?
Binning
first sort data and partition into (equal-frequency) bins
then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human (e.g.,
deal with possible outliers)
11. 11
Data Cleaning as a Process
Data discrepancy detection
Use metadata (e.g., domain, range, dependency, distribution)
Check field overloading
Check uniqueness rule, consecutive rule and null rule
Use commercial tools
Data scrubbing: use simple domain knowledge (e.g., postal
code, spell-check) to detect errors and make corrections
Data auditing: by analyzing data to discover rules and
relationship to detect violators (e.g., correlation and clustering
to find outliers)
Data migration and integration
Data migration tools: allow transformations to be specified
ETL (Extraction/Transformation/Loading) tools: allow users to
specify transformations through a graphical user interface
Integration of the two processes
Iterative and interactive (e.g., Potter's Wheel)
12. 12
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
13. 13
Data Integration
Data integration:
Combines data from multiple sources into a coherent store
Schema integration: e.g., A.cust-id ≡ B.cust-#
Integrate metadata from different sources
Entity identification problem:
Identify real world entities from multiple data sources, e.g., Bill
Clinton = William Clinton
Detecting and resolving data value conflicts
For the same real world entity, attribute values from different
sources are different
Possible reasons: different representations, different scales, e.g.,
metric vs. British units
14. 14
Handling Redundancy in Data Integration
Redundant data occur often when integrating multiple databases
Object identification: The same attribute or object
may have different names in different databases
Derivable data: One attribute may be a “derived”
attribute in another table, e.g., annual revenue
Redundant attributes may be detected by correlation analysis and covariance analysis
Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
15. 15
Correlation Analysis (Nominal Data)
χ² (chi-square) test
The larger the χ² value, the more likely the variables are related
The cells that contribute the most to the χ² value are those whose actual count is very different from the expected count
Correlation does not imply causality
# of hospitals and # of car thefts in a city are correlated
Both are causally linked to the third variable: population
χ² = Σ (Observed − Expected)² / Expected
16. 16
Chi-Square Calculation: An Example
χ² (chi-square) calculation (numbers in parentheses are expected counts calculated based on the data distribution in the two categories)
χ² = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 = 507.93

                           Play chess   Not play chess   Sum (row)
Like science fiction        250 (90)       200 (360)        450
Not like science fiction     50 (210)     1000 (840)       1050
Sum (col.)                  300           1200             1500

The large χ² value shows that like_science_fiction and play_chess are correlated in the group.
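A quick numpy check of this calculation (no continuity correction, matching the slide):

```python
import numpy as np

observed = np.array([[250, 200],     # like science fiction
                     [50, 1000]])    # not like science fiction

row = observed.sum(axis=1, keepdims=True)   # 450, 1050
col = observed.sum(axis=0, keepdims=True)   # 300, 1200
expected = row * col / observed.sum()       # [[90, 360], [210, 840]]

chi2 = ((observed - expected) ** 2 / expected).sum()
print(round(chi2, 2))   # 507.94 (the slide truncates to 507.93)
```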
17. 17
Correlation Analysis (Numeric Data)
Correlation coefficient (also called Pearson’s product
moment coefficient)
where n is the number of tuples, Ā and B̄ are the respective means of A and B, σA and σB are the respective standard deviations of A and B, and Σ aᵢbᵢ is the sum of the AB cross-product:
r(A,B) = Σᵢ (aᵢ − Ā)(bᵢ − B̄) / ((n − 1)·σA·σB) = (Σᵢ aᵢbᵢ − n·Ā·B̄) / ((n − 1)·σA·σB)
If r(A,B) > 0, A and B are positively correlated (A's values increase as B's do); the higher the value, the stronger the correlation.
r(A,B) = 0: uncorrelated (no linear relationship); r(A,B) < 0: negatively correlated
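A small numpy sketch of the formula on five hypothetical (a, b) pairs (sample standard deviations, matching the n − 1 in the denominator):

```python
import numpy as np

a = np.array([2.0, 3, 5, 4, 6])
b = np.array([5.0, 8, 10, 11, 14])
n = len(a)

# Sum-of-products form from the slide
r = (np.sum(a * b) - n * a.mean() * b.mean()) / (
        (n - 1) * a.std(ddof=1) * b.std(ddof=1))
print(round(r, 2), round(np.corrcoef(a, b)[0, 1], 2))   # 0.94 0.94
```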
19. 19
Correlation (viewed as linear
relationship)
Correlation measures the linear relationship
between objects
To compute correlation, we standardize data
objects, A and B, and then take their dot product
a′ₖ = (aₖ − mean(A)) / std(A)
b′ₖ = (bₖ − mean(B)) / std(B)
correlation(A, B) = A′ · B′
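The same value from the standardize-then-dot-product view (population standard deviation here, so the normalizer is n rather than n − 1):

```python
import numpy as np

a = np.array([2.0, 3, 5, 4, 6])
b = np.array([5.0, 8, 10, 11, 14])

a1 = (a - a.mean()) / a.std()   # standardized A
b1 = (b - b.mean()) / b.std()   # standardized B
print(round(a1 @ b1 / len(a), 2))   # 0.94, as before
```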
20. 20
Covariance (Numeric Data)
Covariance is similar to correlation:
Cov(A, B) = E((A − Ā)(B − B̄)) = Σᵢ (aᵢ − Ā)(bᵢ − B̄) / n
where n is the number of tuples, Ā and B̄ are the respective means (expected values) of A and B, and σA and σB are the respective standard deviations of A and B.
Positive covariance: If Cov(A,B) > 0, then A and B both tend to be larger than their expected values.
Negative covariance: If Cov(A,B) < 0, then if A is larger than its expected value, B is likely to be smaller than its expected value.
Independence: If A and B are independent, Cov(A,B) = 0, but the converse is not true:
Some pairs of random variables may have a covariance of 0 but are not
independent. Only under some additional assumptions (e.g., the data follow
multivariate normal distributions) does a covariance of 0 imply independence
Correlation coefficient: r(A,B) = Cov(A, B) / (σA·σB)
21. Co-Variance: An Example
It can be simplified in computation as Cov(A, B) = E(A·B) − Ā·B̄
Suppose two stocks A and B have the following values in one week:
(2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
Question: If the stocks are affected by the same industry trends, will
their prices rise or fall together?
E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4
E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6
Cov(A,B) = (2×5+3×8+5×10+4×11+6×14)/5 − 4 × 9.6 = 4
Thus, A and B rise together since Cov(A, B) > 0.
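Checking the arithmetic with numpy, via the simplified form above:

```python
import numpy as np

a = np.array([2.0, 3, 5, 4, 6])     # stock A
b = np.array([5.0, 8, 10, 11, 14])  # stock B

# Cov(A, B) = E(A·B) − E(A)·E(B)
cov = np.mean(a * b) - a.mean() * b.mean()
print(cov)   # 4.0 > 0, so A and B tend to rise together
```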
22. 22
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
23. 23
Data Reduction Strategies
Data reduction: Obtain a reduced representation of the data set that
is much smaller in volume but yet produces the same (or almost the
same) analytical results
Why data reduction? — A database/data warehouse may store
terabytes of data. Complex data analysis may take a very long time to
run on the complete data set.
Data reduction strategies
Dimensionality reduction, e.g., remove unimportant attributes
Wavelet transforms
Principal Components Analysis (PCA)
Feature subset selection, feature creation
Numerosity reduction (some simply call it: Data Reduction)
Regression and Log-Linear Models
Histograms, clustering, sampling
Data cube aggregation
Data compression
24. 24
Data Reduction 1: Dimensionality
Reduction
Curse of dimensionality
When dimensionality increases, data becomes increasingly sparse
Density and distance between points, which are critical to clustering and
outlier analysis, become less meaningful
The possible combinations of subspaces will grow exponentially
Dimensionality reduction
Avoid the curse of dimensionality
Help eliminate irrelevant features and reduce noise
Reduce time and space required in data mining
Allow easier visualization
Dimensionality reduction techniques
Wavelet transforms
Principal Component Analysis
Supervised and nonlinear techniques (e.g., feature selection)
25. 25
Mapping Data to a New Space
[Figure: two sine waves; two sine waves + noise; frequency-domain view]
Fourier transform
Wavelet transform
26. 26
What Is Wavelet Transform?
Decomposes a signal into
different frequency subbands
Applicable to n-dimensional signals
Data are transformed to
preserve relative distance
between objects at different
levels of resolution
Allow natural clusters to
become more distinguishable
Used for image compression
27. 27
Wavelet Transformation
Discrete wavelet transform (DWT) for linear signal
processing, multi-resolution analysis
Compressed approximation: store only a small fraction of the strongest wavelet coefficients
Similar to discrete Fourier transform (DFT), but better
lossy compression, localized in space
Method:
Length, L, must be an integer power of 2 (padding with 0’s, when
necessary)
Each transform has two functions: smoothing and differencing
Applied to pairs of data, resulting in two sets of data of length L/2
The two functions are applied recursively until the desired length is reached
[Figure: Haar-2 and Daubechies-4 wavelet functions]
28. 28
Wavelet Decomposition
Wavelets: A math tool for space-efficient hierarchical
decomposition of functions
S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to Ŝ = [2¾, −1¼, ½, 0, 0, −1, −1, 0]
Compression: many small detail coefficients can be
replaced by 0’s, and only the significant coefficients are
retained
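A sketch of the recursive averaging/differencing behind this example (one common Haar convention; sign and scaling conventions vary):

```python
def haar(s):
    # One level: pairwise averages (smoothing) and half-differences (detail)
    if len(s) == 1:               # length must be a power of 2
        return s
    avg = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
    dif = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
    return haar(avg) + dif        # recurse on the smoothed half

print(haar([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0] = [2¾, −1¼, ½, 0, 0, −1, −1, 0]
```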
30. 30
Why Wavelet Transform?
Uses hat-shaped filters
Emphasizes regions where points cluster
Suppresses weaker information at their boundaries
Effective removal of outliers
Insensitive to noise, insensitive to input order
Multi-resolution
Detect arbitrary shaped clusters at different scales
Efficient
Complexity O(N)
Only applicable to low dimensional data
31. 31
Principal Component Analysis (PCA)
[Figure: data points in the (x1, x2) plane with first principal component e]
Find a projection that captures the largest amount of variation in data
The original data are projected onto a much smaller space, resulting
in dimensionality reduction. We find the eigenvectors of the
covariance matrix, and these eigenvectors define the new space
32. 32
Given N data vectors in n dimensions, find k ≤ n orthogonal vectors
(principal components) that can best be used to represent the data
Normalize input data: Each attribute falls within the same range
Compute k orthonormal (unit) vectors, i.e., principal components
Each input data (vector) is a linear combination of the k principal
component vectors
The principal components are sorted in order of decreasing
“significance” or strength
Since the components are sorted, the size of the data can be
reduced by eliminating the weak components, i.e., those with low
variance (i.e., using the strongest principal components, it is
possible to reconstruct a good approximation of the original data)
Works for numeric data only
Principal Component Analysis (Steps)
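A minimal numpy sketch of these steps on hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 tuples, n = 3 attributes

X = X - X.mean(axis=0)                  # normalize (center) the input
cov = np.cov(X, rowvar=False)           # covariance matrix
vals, vecs = np.linalg.eigh(cov)        # orthonormal eigenvectors

order = np.argsort(vals)[::-1]          # sort by decreasing "significance"
k = 2
W = vecs[:, order[:k]]                  # keep the k strongest components
print((X @ W).shape)                    # (100, 2): reduced representation
```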
33. 33
Attribute Subset Selection
Another way to reduce dimensionality of data
Redundant attributes
Duplicate much or all of the information contained in
one or more other attributes
E.g., purchase price of a product and the amount of
sales tax paid
Irrelevant attributes
Contain no information that is useful for the data
mining task at hand
E.g., students' ID is often irrelevant to the task of
predicting students' GPA
34. 34
Heuristic Search in Attribute Selection
There are 2^d possible attribute combinations of d attributes
Typical heuristic attribute selection methods:
Best single attribute under the attribute independence
assumption: choose by significance tests
Best step-wise feature selection (a greedy sketch follows at the end of this list):
The best single attribute is picked first
Then the next best attribute conditioned on the first, ...
Step-wise attribute elimination:
Repeatedly eliminate the worst attribute
Best combined attribute selection and elimination
Optimal branch and bound:
Use attribute elimination and backtracking
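A toy sketch of best step-wise (forward) selection; the score function here is a hypothetical stand-in for whatever criterion is actually used (e.g., significance tests or cross-validated accuracy):

```python
def forward_select(attrs, score, k):
    # Greedily add the attribute that most improves score(subset)
    selected = []
    while len(selected) < k:
        best = max((a for a in attrs if a not in selected),
                   key=lambda a: score(selected + [a]))
        selected.append(best)
    return selected

# Toy score that happens to prefer {"age", "income"} (hypothetical)
score = lambda s: len(set(s) & {"age", "income"}) - 0.01 * len(s)
print(forward_select(["id", "age", "income", "zip"], score, 2))
# ['age', 'income']
```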
35. 35
Attribute Creation (Feature Generation)
Create new attributes (features) that can capture the
important information in a data set more effectively than
the original ones
Three general methodologies
Attribute extraction
Domain-specific
Mapping data to new space (see: data reduction)
E.g., Fourier transformation, wavelet
transformation, manifold approaches (not covered)
Attribute construction
Combining features (see: discriminative frequent
patterns in Chapter 7)
Data discretization
36. 36
Data Reduction 2: Numerosity
Reduction
Reduce data volume by choosing alternative, smaller
forms of data representation
Parametric methods (e.g., regression)
Assume the data fits some model, estimate model
parameters, store only the parameters, and discard
the data (except possible outliers)
Ex.: Log-linear models: obtain a value at a point in m-D space as the
product of values on appropriate marginal subspaces
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling, …
37. 37
Parametric Data Reduction:
Regression and Log-Linear Models
Linear regression
Data modeled to fit a straight line
Often uses the least-square method to fit the line
Multiple regression
Allows a response variable Y to be modeled as a
linear function of multidimensional feature vector
Log-linear model
Approximates discrete multidimensional probability
distributions
38. 38
Regression Analysis
Regression analysis: A collective name for
techniques for the modeling and analysis
of numerical data consisting of values of a
dependent variable (also called
response variable or measurement) and
of one or more independent variables (aka.
explanatory variables or predictors)
The parameters are estimated so as to give
a "best fit" of the data
Most commonly the best fit is evaluated by
using the least squares method, but
other criteria have also been used
Used for prediction
(including forecasting of
time-series data), inference,
hypothesis testing, and
modeling of causal
relationships
[Figure: regression line y = x + 1; observed value Y1 vs. predicted Y1′ at X1]
39. 39
Linear regression: Y = w X + b
Two regression coefficients, w and b, specify the line and are to be
estimated by using the data at hand
Applying the least-squares criterion to the known values Y1, Y2, …, and X1, X2, …
Multiple regression: Y = b0 + b1 X1 + b2 X2
Many nonlinear functions can be transformed into the above
Log-linear models:
Approximate discrete multidimensional probability distributions
Estimate the probability of each point (tuple) in a multi-dimensional
space for a set of discretized attributes, based on a smaller subset
of dimensional combinations
Useful for dimensionality reduction and data smoothing
Regression Analysis and Log-Linear Models
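For instance, the least-squares estimates of w and b for Y = wX + b on hypothetical data (numpy):

```python
import numpy as np

x = np.array([2.0, 3, 5, 4, 6])
y = np.array([5.0, 8, 10, 11, 14])

w, b = np.polyfit(x, y, deg=1)   # least-squares line fit
print(round(w, 2), round(b, 2))  # 2.0 1.6, i.e., Y ≈ 2.0·X + 1.6
```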
40. 40
Histogram Analysis
Divide data into buckets and
store average (sum) for each
bucket
Partitioning rules:
Equal-width: equal bucket
range
Equal-frequency (or equal-
depth)
[Figure: equal-width histogram; x-axis: values 10,000–100,000; y-axis: counts 0–40]
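A small numpy sketch contrasting the two partitioning rules on hypothetical skewed data:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.lognormal(mean=10, sigma=0.5, size=1000)  # skewed "prices"

counts, edges = np.histogram(data, bins=10)          # equal-width buckets
print(counts)                                        # very uneven counts

edges_eq = np.quantile(data, np.linspace(0, 1, 11))  # equal-frequency edges
counts_eq, _ = np.histogram(data, bins=edges_eq)
print(counts_eq)                                     # ~100 per bucket
```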
41. 41
Clustering
Partition data set into clusters based on similarity, and
store cluster representation (e.g., centroid and diameter)
only
Can be very effective if data is clustered but not if data
is “smeared”
Can have hierarchical clustering and be stored in multi-
dimensional index tree structures
There are many choices of clustering definitions and
clustering algorithms
Cluster analysis will be studied in depth in Chapter 10
42. 42
Sampling
Sampling: obtaining a small sample s to represent the
whole data set N
Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
Key principle: Choose a representative subset of the data
Simple random sampling may have very poor
performance in the presence of skew
Develop adaptive sampling methods, e.g., stratified
sampling:
Note: Sampling may not reduce database I/Os (page at a
time)
43. 43
Types of Sampling
Simple random sampling
There is an equal probability of selecting any particular
item
Sampling without replacement
Once an object is selected, it is removed from the
population
Sampling with replacement
A selected object is not removed from the population
Stratified sampling:
Partition the data set, and draw samples from each
partition (proportionally, i.e., approximately the same
percentage of the data)
Used in conjunction with skewed data
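A pandas sketch of these sampling types on a hypothetical, skewed data set:

```python
import pandas as pd

df = pd.DataFrame({"income": range(1000),
                   "region": ["north"] * 900 + ["south"] * 100})

srswor = df.sample(n=50)                # simple random, without replacement
srswr = df.sample(n=50, replace=True)   # with replacement

# Stratified: draw ~5% from each partition, preserving the proportions
strat = df.groupby("region").sample(frac=0.05)
print(strat["region"].value_counts())   # north 45, south 5
```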
46. 46
Data Cube Aggregation
The lowest level of a data cube (base cuboid)
The aggregated data for an individual entity of interest
E.g., a customer in a phone calling data warehouse
Multiple levels of aggregation in data cubes
Further reduce the size of data to deal with
Reference appropriate levels
Use the smallest representation which is enough to
solve the task
Queries regarding aggregated information should be
answered using data cube, when possible
47. 47
Data Reduction 3: Data Compression
String compression
There are extensive theories and well-tuned algorithms
Typically lossless, but only limited manipulation is
possible without expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be
reconstructed without reconstructing the whole
Time sequences are not audio: they are typically short and vary slowly with time
Dimensionality and numerosity reduction may also be
considered as forms of data compression
49. 49
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
50. 50
Data Transformation
A function that maps the entire set of values of a given attribute to a
new set of replacement values s.t. each old value can be identified
with one of the new values
Methods
Smoothing: Remove noise from data
Attribute/feature construction
New attributes constructed from the given ones
Aggregation: Summarization, data cube construction
Normalization: Scaled to fall within a smaller, specified range
min-max normalization
z-score normalization
normalization by decimal scaling
Discretization: Concept hierarchy climbing
51. 51
Normalization
Min-max normalization: to [new_min_A, new_max_A]
v′ = ((v − min_A) / (max_A − min_A)) · (new_max_A − new_min_A) + new_min_A
Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to ((73,600 − 12,000) / (98,000 − 12,000)) · (1.0 − 0) + 0 = 0.716
Z-score normalization (μ: mean, σ: standard deviation):
v′ = (v − μ_A) / σ_A
Ex. Let μ = 54,000, σ = 16,000. Then (73,600 − 54,000) / 16,000 = 1.225
Normalization by decimal scaling:
v′ = v / 10^j, where j is the smallest integer such that max(|v′|) < 1
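A minimal sketch of the three normalizations, checking the worked examples above:

```python
import numpy as np

def min_max(v, lo, hi, new_lo=0.0, new_hi=1.0):
    return (v - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

def z_score(v, mu, sigma):
    return (v - mu) / sigma

def decimal_scale(v):
    # smallest integer j such that max(|v'|) < 1
    j = int(np.ceil(np.log10(np.max(np.abs(v)) + 1)))
    return v / 10 ** j

print(round(min_max(73_600, 12_000, 98_000), 3))  # 0.716
print(round(z_score(73_600, 54_000, 16_000), 3))  # 1.225
print(decimal_scale(np.array([986, -120])))       # [ 0.986 -0.12 ]
```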
52. 52
Discretization
Three types of attributes
Nominal—values from an unordered set, e.g., color, profession
Ordinal—values from an ordered set, e.g., military or academic
rank
Numeric—real numbers, e.g., integer or real numbers
Discretization: Divide the range of a continuous attribute into intervals
Interval labels can then be used to replace actual data values
Reduce data size by discretization
Supervised vs. unsupervised
Split (top-down) vs. merge (bottom-up)
Discretization can be performed recursively on an attribute
Prepare for further analysis, e.g., classification
53. 53
Data Discretization Methods
Typical methods: All the methods can be applied recursively
Binning
Top-down split, unsupervised
Histogram analysis
Top-down split, unsupervised
Clustering analysis (unsupervised, top-down split or
bottom-up merge)
Decision-tree analysis (supervised, top-down split)
Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)
54. 54
Simple Discretization: Binning
Equal-width (distance) partitioning
Divides the range into N intervals of equal size: uniform grid
if A and B are the lowest and highest values of the attribute, the
width of intervals will be W = (B − A)/N
The most straightforward, but outliers may dominate presentation
Skewed data is not handled well
Equal-depth (frequency) partitioning
Divides the range into N intervals, each containing approximately
same number of samples
Good data scaling
Managing categorical attributes can be tricky
55. 55
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26,
28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
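A short Python sketch reproducing the bins and both smoothing rules from this example:

```python
import numpy as np

prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = np.array_split(prices, 3)      # equal-frequency bins, 4 values each

for b in bins:
    by_mean = [round(float(b.mean()))] * len(b)
    lo, hi = int(b[0]), int(b[-1])    # replace by the closer boundary
    by_bound = [lo if v - lo <= hi - v else hi for v in b]
    print(b.tolist(), "->", by_mean, "or", by_bound)
# [4, 8, 9, 15]    -> [9, 9, 9, 9]     or [4, 4, 4, 15]
# [21, 21, 24, 25] -> [23, 23, 23, 23] or [21, 21, 25, 25]
# [26, 28, 29, 34] -> [29, 29, 29, 29] or [26, 26, 26, 34]
```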
57. 57
Discretization by Classification &
Correlation Analysis
Classification (e.g., decision tree analysis)
Supervised: Given class labels, e.g., cancerous vs. benign
Using entropy to determine split point (discretization point)
Top-down, recursive split
Details to be covered in Chapter 7
Correlation analysis (e.g., ChiMerge: χ²-based discretization)
Supervised: use class information
Bottom-up merge: find the best neighboring intervals (those having similar distributions of classes, i.e., low χ² values) to merge
Merge performed recursively, until a predefined stopping condition
58. 58
Concept Hierarchy Generation
Concept hierarchy organizes concepts (i.e., attribute values)
hierarchically and is usually associated with each dimension in a data
warehouse
Concept hierarchies facilitate drilling and rolling in data warehouses to view data at multiple granularities
Concept hierarchy formation: Recursively reduce the data by collecting
and replacing low level concepts (such as numeric values for age) by
higher level concepts (such as youth, adult, or senior)
Concept hierarchies can be explicitly specified by domain experts
and/or data warehouse designers
Concept hierarchy can be automatically formed for both numeric and
nominal data. For numeric data, use discretization methods shown.
59. 59
Concept Hierarchy Generation
for Nominal Data
Specification of a partial/total ordering of attributes
explicitly at the schema level by users or experts
street < city < state < country
Specification of a hierarchy for a set of values by explicit
data grouping
{Urbana, Champaign, Chicago} < Illinois
Specification of only a partial set of attributes
E.g., only street < city, not others
Automatic generation of hierarchies (or attribute levels) by
the analysis of the number of distinct values
E.g., for a set of attributes: {street, city, state, country}
60. 60
Automatic Concept Hierarchy Generation
Some hierarchies can be automatically generated based on
the analysis of the number of distinct values per attribute in
the data set
The attribute with the most distinct values is placed at
the lowest level of the hierarchy
Exceptions, e.g., weekday, month, quarter, year
country (15 distinct values)
province_or_state (365 distinct values)
city (3,567 distinct values)
street (674,339 distinct values)
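A sketch of this distinct-value heuristic on a hypothetical location table (pandas):

```python
import pandas as pd

df = pd.DataFrame({
    "street":  ["Main St", "Oak Ave", "Elm St", "High St"],
    "city":    ["Urbana", "Champaign", "Columbus", "Columbus"],
    "state":   ["IL", "IL", "OH", "OH"],
    "country": ["USA", "USA", "USA", "USA"],
})

# Fewest distinct values -> top of the hierarchy
levels = df.nunique().sort_values()
print(list(levels.index))   # ['country', 'state', 'city', 'street']
```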
61. 61
Chapter 3: Data Preprocessing
Data Preprocessing: An Overview
Data Quality
Major Tasks in Data Preprocessing
Data Cleaning
Data Integration
Data Reduction
Data Transformation and Data Discretization
Summary
62. 62
Summary
Data quality: accuracy, completeness, consistency, timeliness,
believability, interpretability
Data cleaning: e.g., missing/noisy values, outliers
Data integration from multiple sources:
Entity identification problem
Remove redundancies
Detect inconsistencies
Data reduction
Dimensionality reduction
Numerosity reduction
Data compression
Data transformation and data discretization
Normalization
Concept hierarchy generation
63. 63
References
D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse environments. Comm. of ACM, 42:73–78, 1999.
A. Bruce, D. Donoho, and H.-Y. Gao. Wavelet analysis. IEEE Spectrum, Oct. 1996.
T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley, 2003.
J. Devore and R. Peck. Statistics: The Exploration and Analysis of Data. Duxbury Press, 1997.
H. Galhardas, D. Florescu, D. Shasha, E. Simon, and C.-A. Saita. Declarative data cleaning: Language, model, and algorithms. VLDB'01.
M. Hua and J. Pei. Cleaning disguised missing data: A heuristic approach. KDD'07.
H. V. Jagadish et al. Special issue on data reduction techniques. Bulletin of the Technical Committee on Data Engineering, 20(4), Dec. 1997.
H. Liu and H. Motoda (eds.). Feature Extraction, Construction, and Selection: A Data Mining Perspective. Kluwer Academic, 1998.
J. E. Olson. Data Quality: The Accuracy Dimension. Morgan Kaufmann, 2003.
D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999.
V. Raman and J. Hellerstein. Potter's Wheel: An interactive framework for data cleaning and transformation. VLDB'01.
T. Redman. Data Quality: The Field Guide. Digital Press (Elsevier), 2001.
R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality research. IEEE Trans. Knowledge and Data Engineering, 7:623–640, 1995.