Data Mining: Concepts and Techniques


Chapter 2

Jiawei Han
Department of Computer Science
University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
2006 Jiawei Han and Micheline Kamber, All rights reserved
Chapter 2: Data Preprocessing

Why preprocess the data?

Descriptive data summarization

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary

Why Data Preprocessing?

Data in the real world is dirty:

  incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
    e.g., occupation = " "

  noisy: containing errors or outliers
    e.g., Salary = "-10"

  inconsistent: containing discrepancies in codes or names
    e.g., Age = "42", Birthday = "03/07/1997"
    e.g., Was rating "1, 2, 3", now rating "A, B, C"
    e.g., discrepancy between duplicate records

Why Is Data Dirty?

Incomplete data may come from
  "Not applicable" data value when collected
  Different considerations between the time when the data was collected and when it is analyzed
  Human/hardware/software problems

Noisy data (incorrect values) may come from
  Faulty data collection instruments
  Human or computer error at data entry
  Errors in data transmission

Inconsistent data may come from
  Different data sources
  Functional dependency violation (e.g., modify some linked data)

Duplicate records also need data cleaning

Why Is Data Preprocessing Important?

No quality data, no quality mining results!
  Quality decisions must be based on quality data
    e.g., duplicate or missing data may cause incorrect or even misleading statistics
  Data warehouse needs consistent integration of quality data
Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse

Major Tasks in Data Preprocessing

Data cleaning
  Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
  Integration of multiple databases, data cubes, or files
Data transformation
  Normalization and aggregation
Data reduction
  Obtains a reduced representation in volume that produces the same or similar analytical results
Data discretization
  Part of data reduction, but with particular importance, especially for numerical data

Forms of Data Preprocessing


Chapter 2: Data Preprocessing

Why preprocess the data?

Descriptive data summarization

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Mining Data Descriptive Characteristics

Motivation
  To better understand the data: central tendency, variation and spread
Data dispersion characteristics
  median, max, min, quantiles, outliers, variance, etc.
Numerical dimensions correspond to sorted intervals
  Data dispersion: analyzed with multiple granularities of precision
  Boxplot or quantile analysis on sorted intervals
Dispersion analysis on computed measures
  Folding measures into numerical dimensions
  Boxplot or quantile analysis on the transformed cube

Measuring the Central Tendency

Mean (algebraic measure) (sample vs. population):
  $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, $\mu = \frac{\sum x}{N}$
  Weighted arithmetic mean: $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
  Trimmed mean: chopping extreme values
Median: A holistic measure
  Middle value if odd number of values, or average of the middle two values otherwise
  Estimated by interpolation (for grouped data): $median = L_1 + \left(\frac{n/2 - (\sum f)_l}{f_{median}}\right) c$
Mode
  Value that occurs most frequently in the data
  Unimodal, bimodal, trimodal
  Empirical formula: $mean - mode = 3 \times (mean - median)$
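These measures are straightforward to compute. A minimal Python sketch using only the standard library (the sample and weight values are made up for the illustration):

```python
from statistics import mean, median, mode

values = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]
weights = [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1]

# Arithmetic mean, median (average of the two middle values here), and mode
print(mean(values), median(values), mode(values))

# Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i)
w_mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Trimmed mean: drop the k smallest and k largest values before averaging
k = 1
print(w_mean, mean(sorted(values)[k:-k]))
```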

Measuring the Dispersion of Data

Quartiles, outliers and boxplots
  Quartiles: Q1 (25th percentile), Q3 (75th percentile)
  Inter-quartile range: IQR = Q3 - Q1
  Five number summary: min, Q1, M, Q3, max
  Boxplot: ends of the box are the quartiles, the median is marked, whiskers extend outward, and outliers are plotted individually
  Outlier: usually, a value more than 1.5 x IQR higher/lower than the quartiles
Variance and standard deviation (sample: s, population: σ)
  Variance (algebraic, scalable computation):
    $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right]$
    $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - \mu^2$
  Standard deviation s (or σ) is the square root of variance s² (or σ²)
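A small Python sketch of these dispersion measures (the data values are illustrative only):

```python
import statistics

values = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]

# Quartiles, inter-quartile range, and the usual 1.5 * IQR outlier fences
q1, med, q3 = statistics.quantiles(values, n=4)
iqr = q3 - q1
five_number = (min(values), q1, med, q3, max(values))
outliers = [x for x in values if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

# Sample (n - 1) vs. population (N) variance and standard deviation
print(five_number, outliers)
print(statistics.variance(values), statistics.stdev(values))
print(statistics.pvariance(values), statistics.pstdev(values))
```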

Boxplot Analysis

Five-number summary of a distribution:
  Minimum, Q1, M, Q3, Maximum
Boxplot
  Data is represented with a box
  The ends of the box are at the first and third quartiles, i.e., the height of the box is the IQR
  The median is marked by a line within the box
  Whiskers: two lines outside the box extend to Minimum and Maximum

Histogram Analysis

Graph displays of basic statistical class descriptions
  Frequency histograms
    A univariate graphical method
    Consists of a set of rectangles that reflect the counts or frequencies of the classes present in the given data

Scatter plot

Provides a first look at bivariate data to see clusters of points, outliers, etc.
Each pair of values is treated as a pair of coordinates and plotted as points in the plane

Positively and Negatively Correlated Data


Not Correlated Data


Graphic Displays of Basic Statistical Descriptions

Histogram: (shown before)
Boxplot: (covered before)
Quantile plot: each value xi is paired with fi indicating that approximately 100 fi % of the data are ≤ xi
Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another
Scatter plot: each pair of values is a pair of coordinates and plotted as points in the plane
Loess (local regression) curve: adds a smooth curve to a scatter plot to provide better perception of the pattern of dependence

Chapter 2: Data Preprocessing

Why preprocess the data?

Descriptive data summarization

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Data Cleaning

Importance
  "Data cleaning is one of the three biggest problems in data warehousing" (Ralph Kimball)
  "Data cleaning is the number one problem in data warehousing" (DCI survey)
Data cleaning tasks
  Fill in missing values
  Identify outliers and smooth out noisy data
  Correct inconsistent data
  Resolve redundancy caused by data integration

Missing Data

Data is not always available
  E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Missing data may be due to
  equipment malfunction
  inconsistency with other recorded data, and thus deleted
  data not entered due to misunderstanding
  certain data may not have been considered important at the time of entry
  no record of the history or changes of the data
Missing data may need to be inferred.

How to Handle Missing Data?

Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
Fill in the missing value manually: tedious + infeasible?
Fill it in automatically with
  a global constant: e.g., "unknown", a new class?!
  the attribute mean
  the attribute mean for all samples belonging to the same class: smarter
  the most probable value: inference-based, such as a Bayesian formula or decision tree
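To make the automatic fill-in options concrete, a minimal pandas sketch (the DataFrame, column names, and class labels are invented for the example):

```python
import pandas as pd

df = pd.DataFrame({
    "income": [30_000, None, 52_000, None, 41_000, 75_000],
    "class":  ["low",  "low", "high", "high", "low",  "high"],
})

# Global constant: replace missing values with a sentinel such as -1 or "unknown"
constant_fill = df["income"].fillna(-1)

# Attribute mean over all samples
mean_fill = df["income"].fillna(df["income"].mean())

# Attribute mean per class (often smarter than the global mean)
class_mean_fill = df.groupby("class")["income"].transform(lambda s: s.fillna(s.mean()))

print(pd.concat([df, class_mean_fill.rename("income_imputed")], axis=1))
```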

Noisy Data

Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
  faulty data collection instruments
  data entry problems
  data transmission problems
  technology limitations
  inconsistency in naming conventions
Other data problems which require data cleaning
  duplicate records
  incomplete data
  inconsistent data

How to Handle Noisy Data?

Binning
first sort data and partition into (equal-frequency) bins
then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
Regression
smooth by fitting the data into regression functions
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human (e.g.,
deal with possible outliers)


Simple Discretization Methods: Binning

Equal-width (distance) partitioning
  Divides the range into N intervals of equal size: uniform grid
  If A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B - A)/N
  The most straightforward, but outliers may dominate the presentation
  Skewed data is not handled well
Equal-depth (frequency) partitioning
  Divides the range into N intervals, each containing approximately the same number of samples
  Good data scaling
  Managing categorical attributes can be tricky

Binning Methods for Data Smoothing


Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26,
28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34

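A short Python sketch of equal-frequency binning with mean and boundary smoothing, reproducing the price example above:

```python
prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
n_bins = 3
size = len(prices) // n_bins
bins = [prices[i * size:(i + 1) * size] for i in range(n_bins)]

# Smoothing by bin means: every value in a bin is replaced by the bin mean
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: each value snaps to the closer of its bin's min/max
by_bounds = [[min(b) if x - min(b) <= max(b) - x else max(b) for x in b] for b in bins]

print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```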

Regression
(Figure: data points in the plane smoothed by fitting the regression line y = x + 1)

Cluster Analysis


Chapter 2: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Data Integration

Data integration:
  Combines data from multiple sources into a coherent store
Schema integration: e.g., A.cust-id ≡ B.cust-#
  Integrate metadata from different sources
Entity identification problem:
  Identify real world entities from multiple data sources, e.g., Bill Clinton = William Clinton
Detecting and resolving data value conflicts
  For the same real world entity, attribute values from different sources are different
  Possible reasons: different representations, different scales, e.g., metric vs. British units

Handling Redundancy in Data Integration

Redundant data occur often when integrating multiple databases
  Object identification: the same attribute or object may have different names in different databases
  Derivable data: one attribute may be a "derived" attribute in another table, e.g., annual revenue
Redundant attributes may be detected by correlation analysis
Careful integration of the data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality

Correlation Analysis (Numerical Data)

Correlation coefficient (also called Pearson's product-moment coefficient):

  $r_{A,B} = \frac{\sum (A - \bar{A})(B - \bar{B})}{(n-1)\sigma_A \sigma_B} = \frac{\sum (AB) - n\bar{A}\bar{B}}{(n-1)\sigma_A \sigma_B}$

where n is the number of tuples, $\bar{A}$ and $\bar{B}$ are the respective means of A and B, $\sigma_A$ and $\sigma_B$ are the respective standard deviations of A and B, and $\sum(AB)$ is the sum of the AB cross-product.

If $r_{A,B} > 0$, A and B are positively correlated (A's values increase as B's do); the higher the value, the stronger the correlation.
$r_{A,B} = 0$: independent; $r_{A,B} < 0$: negatively correlated
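A small sketch computing this coefficient directly from the formula (the two attribute vectors are invented):

```python
import math

A = [3.0, 5.0, 2.0, 8.0, 7.0]
B = [30.0, 55.0, 24.0, 80.0, 66.0]
n = len(A)

mean_a, mean_b = sum(A) / n, sum(B) / n
# Sample standard deviations (n - 1 in the denominator)
std_a = math.sqrt(sum((a - mean_a) ** 2 for a in A) / (n - 1))
std_b = math.sqrt(sum((b - mean_b) ** 2 for b in B) / (n - 1))

# r = (sum(A*B) - n * mean_A * mean_B) / ((n - 1) * std_A * std_B)
r = (sum(a * b for a, b in zip(A, B)) - n * mean_a * mean_b) / ((n - 1) * std_a * std_b)
print(r)  # close to +1 for these strongly positively correlated values
```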

Correlation Analysis (Categorical Data)

χ² (chi-square) test:

  $\chi^2 = \sum \frac{(Observed - Expected)^2}{Expected}$

The larger the χ² value, the more likely the variables are related
The cells that contribute the most to the χ² value are those whose actual count is very different from the expected count
Correlation does not imply causality
  # of hospitals and # of car thefts in a city are correlated
  Both are causally linked to a third variable: population

Chi-Square Calculation: An Example


                             Play chess    Not play chess    Sum (row)
  Like science fiction       250 (90)      200 (360)         450
  Not like science fiction   50 (210)      1000 (840)        1050
  Sum (col.)                 300           1200              1500

χ² (chi-square) calculation (numbers in parentheses are expected counts, calculated based on the data distribution in the two categories):

  $\chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} = 507.93$

It shows that like_science_fiction and play_chess are correlated in the group
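The same contingency-table computation in a few lines of Python (only the observed counts from the table above are hard-coded):

```python
# Observed counts: rows = like / not like science fiction, cols = play / not play chess
observed = [[250, 200], [50, 1000]]

row_sums = [sum(row) for row in observed]        # [450, 1050]
col_sums = [sum(col) for col in zip(*observed)]  # [300, 1200]
total = sum(row_sums)                            # 1500

# Expected count for each cell: row_sum * col_sum / total
expected = [[r * c / total for c in col_sums] for r in row_sums]

chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))
print(round(chi2, 2))  # 507.94 (the slide truncates to 507.93)
```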

Data Transformation

Smoothing: remove noise from data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scaled to fall within a small, specified range
  min-max normalization
  z-score normalization
  normalization by decimal scaling
Attribute/feature construction
  New attributes constructed from the given ones

Data Transformation: Normalization

Min-max normalization: to [new_min_A, new_max_A]

  $v' = \frac{v - min_A}{max_A - min_A}(new\_max_A - new\_min_A) + new\_min_A$

  Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to (73,600 - 12,000) / (98,000 - 12,000) x (1.0 - 0) + 0 = 0.716

Z-score normalization (μ: mean, σ: standard deviation):

  $v' = \frac{v - \mu_A}{\sigma_A}$

  Ex. Let μ = 54,000, σ = 16,000. Then (73,600 - 54,000) / 16,000 = 1.225

Normalization by decimal scaling:

  $v' = \frac{v}{10^j}$, where j is the smallest integer such that Max(|v'|) < 1
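The three normalizations in Python, using the income figures from the examples above (the decimal-scaling values are made up):

```python
def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    """Min-max normalization to [new_min, new_max]."""
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mu, sigma):
    """Z-score normalization."""
    return (v - mu) / sigma

def decimal_scaling(values):
    """Divide by 10^j, with j the smallest integer making every |v'| < 1."""
    j = len(str(int(max(abs(v) for v in values))))
    return [v / 10 ** j for v in values]

print(min_max(73_600, 12_000, 98_000))    # ~0.716
print(z_score(73_600, 54_000, 16_000))    # 1.225
print(decimal_scaling([-986, 120, 917]))  # [-0.986, 0.12, 0.917]
```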

Chapter 2: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Data Reduction Strategies

Why data reduction?
  A database/data warehouse may store terabytes of data
  Complex data analysis/mining may take a very long time to run on the complete data set
Data reduction
  Obtain a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results
Data reduction strategies
  Data cube aggregation
  Dimensionality reduction, e.g., remove unimportant attributes
  Data compression
  Numerosity reduction, e.g., fit data into models
  Discretization and concept hierarchy generation

Data Cube Aggregation

The lowest level of a data cube (base cuboid)
  The aggregated data for an individual entity of interest
  E.g., a customer in a phone calling data warehouse
Multiple levels of aggregation in data cubes
  Further reduce the size of data to deal with
Reference appropriate levels
  Use the smallest representation which is enough to solve the task
Queries regarding aggregated information should be answered using the data cube, when possible

Cube: A Lattice of Cuboids


  0-D (apex) cuboid: all
  1-D cuboids: time, item, location, supplier
  2-D cuboids: (time, item), (time, location), (time, supplier), (item, location), (item, supplier), (location, supplier)
  3-D cuboids: (time, item, location), (time, item, supplier), (time, location, supplier), (item, location, supplier)
  4-D (base) cuboid: (time, item, location, supplier)

Attribute Subset Selection

Feature selection (i.e., attribute subset selection):
  Select a minimum set of features such that the probability distribution of the different classes given the values for those features is as close as possible to the original distribution given the values of all features
  Reduces the number of patterns, which makes them easier to understand
Heuristic methods (due to the exponential number of choices):
  Step-wise forward selection
  Step-wise backward elimination
  Combining forward selection and backward elimination
  Decision-tree induction

Attribute Subset Selection


Forward selection
  Initial attribute set: {A1, A2, A3, A4, A5, A6}
  Initial reduced set: {}
  => {A1}
  => {A1, A4}
  => Reduced attribute set: {A1, A4, A6}

Backward elimination
  Initial attribute set: {A1, A2, A3, A4, A5, A6}
  => {A1, A3, A4, A5, A6}
  => {A1, A4, A5, A6}
  => Reduced attribute set: {A1, A4, A6}
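A sketch of step-wise forward selection; `score(subset)` stands for whatever evaluation measure is used (e.g., a significance test or cross-validated accuracy) and is an assumption of this example, not part of the original slides:

```python
def forward_selection(attributes, score, k):
    """Greedily grow a subset: at each step add the attribute that helps most."""
    selected, remaining = [], list(attributes)
    while remaining and len(selected) < k:
        # Pick the attribute whose addition gives the best score
        best = max(remaining, key=lambda a: score(selected + [a]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: the score simply prefers the hypothetical "good" subset {A1, A4, A6}
toy_score = lambda s: len(set(s) & {"A1", "A4", "A6"}) - 0.01 * len(s)
print(forward_selection(["A1", "A2", "A3", "A4", "A5", "A6"], toy_score, k=3))
# ['A1', 'A4', 'A6']
```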

Example of Decision Tree Induction


Initial attribute set: {A1, A2, A3, A4, A5, A6}

(Figure: the induced decision tree splits on A4, then on A1 and A6, ending in Class 1 / Class 2 leaves; attributes that never appear in the tree are discarded)

Reduced attribute set: {A1, A4, A6}

Heuristic Feature Selection Methods

There are 2^d possible sub-features of d features
Several heuristic feature selection methods:
  Best single features under the feature independence assumption: choose by significance tests
  Best step-wise feature selection:
    The best single feature is picked first
    Then the next best feature conditioned on the first, ...
  Step-wise feature elimination:
    Repeatedly eliminate the worst feature
  Best combined feature selection and elimination
  Optimal branch and bound:
    Use feature elimination and backtracking

Data Compression

String compression
  There are extensive theories and well-tuned algorithms
  Typically lossless
  But only limited manipulation is possible without expansion
Audio/video compression
  Typically lossy compression, with progressive refinement
  Sometimes small fragments of signal can be reconstructed without reconstructing the whole
Time sequences (not audio)
  Typically short and vary slowly with time

Data Compression
If the original data can be reconstructed from the compressed data without any loss of information, the data reduction is called lossless
If we can reconstruct only an approximation of the original data, then the data reduction is called lossy

(Diagram: Original Data -> Compressed Data -> Original Data for lossless compression; Original Data -> Compressed Data -> Original Data Approximated for lossy compression)

Dimensionality Reduction: Wavelet Transformation

Discrete wavelet transform (DWT): linear signal processing, multi-resolutional analysis (e.g., Haar-2, Daubechies-4)
Compressed approximation: store only a small fraction of the strongest of the wavelet coefficients
Similar to discrete Fourier transform (DFT), but better lossy compression, localized in space
Method:
  Length, L, must be an integer power of 2 (padding with 0s, when necessary)
  Each transform has 2 functions: smoothing, difference
  Applies to pairs of data, resulting in two sets of data of length L/2
  Applies the two functions recursively, until the desired length is reached
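A minimal sketch of the pairwise smoothing/difference step for an (unnormalized) Haar transform; this is an illustrative toy, not the exact variant used in the book:

```python
def haar_transform(data):
    """Recursively apply pairwise averaging (smoothing) and differencing."""
    assert len(data) & (len(data) - 1) == 0, "length must be a power of 2 (pad with 0s)"
    output, length = list(data), len(data)
    while length > 1:
        # Smoothing coefficients go to the front, detail coefficients behind them
        smooth = [(output[2 * i] + output[2 * i + 1]) / 2 for i in range(length // 2)]
        detail = [(output[2 * i] - output[2 * i + 1]) / 2 for i in range(length // 2)]
        output[:length] = smooth + detail
        length //= 2
    return output

print(haar_transform([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
```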

Dimensionality Reduction: Principal Component Analysis (PCA)

Given N data vectors from n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data
Steps
  Normalize input data: each attribute falls within the same range
  Compute k orthonormal (unit) vectors, i.e., principal components
  Each input data vector is a linear combination of the k principal component vectors
  The principal components are sorted in order of decreasing significance or strength
  Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (using the strongest principal components, it is possible to reconstruct a good approximation of the original data)
Works for numeric data only
Used when the number of dimensions is large
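A compact numpy sketch of these steps (random data stands in for a real table; the dimensions and the choice of k = 1 are arbitrary for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # N = 100 vectors in n = 3 dimensions
k = 1                           # keep only the strongest component

# 1. Normalize (here: center the data; scaling to unit variance is also common)
Xc = X - X.mean(axis=0)

# 2. Orthonormal principal components = eigenvectors of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))

# 3. Sort components by decreasing variance and keep the top k
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:k]]

# 4. Project (reduce) and reconstruct an approximation of the original data
reduced = Xc @ components
approx = reduced @ components.T + X.mean(axis=0)
print(reduced.shape, approx.shape)   # (100, 1) (100, 3)
```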

Principal Component Analysis


(Figure: a 2-D point cloud in the X1-X2 plane with its two principal component axes Y1 and Y2 for the given set of data originally mapped to X1 and X2; Y1 points along the direction of greatest variance)

Numerosity Reduction
Reduce data volume by choosing alternative, smaller forms of data representation
Parametric methods
  Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
  Example: log-linear models obtain a value at a point in m-D space as the product of appropriate marginal subspaces
Non-parametric methods
  Do not assume models
  Major families: histograms, clustering, sampling

Data Reduction Method (1): Regression and Log-Linear Models

Linear regression: data are modeled to fit a straight line
  Often uses the least-squares method to fit the line
Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model: approximates discrete multidimensional probability distributions
  It can be used to estimate the probability of each point in a multidimensional space for a set of discretized attributes, based on a smaller subset of dimensional combinations

Regression Analysis and Log-Linear Models

Linear regression: Y = w X + b
  Two regression coefficients, w and b, specify the line and are estimated by using the data at hand
  Apply the least-squares criterion to the known values of Y1, Y2, ..., X1, X2, ...
Multiple regression: Y = b0 + b1 X1 + b2 X2
  Many nonlinear functions can be transformed into the above
Log-linear models:
  The multi-way table of joint probabilities is approximated by a product of lower-order tables
  Probability: $p(a, b, c, d) = \alpha_{ab}\,\beta_{ac}\,\chi_{ad}\,\delta_{bcd}$
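A quick sketch of estimating w and b with the least-squares criterion (closed form for the one-variable case; the data points are made up):

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.9, 3.1, 3.9, 5.2, 6.1]
n = len(xs)

mean_x, mean_y = sum(xs) / n, sum(ys) / n

# Least squares: w = sum((x - mx)(y - my)) / sum((x - mx)^2), b = my - w * mx
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(f"Y = {w:.2f} X + {b:.2f}")   # roughly Y = 1.05 X + 0.89
```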

Data Reduction Method (2): Histograms

Divide data into buckets and store the average (sum) for each bucket
Partitioning rules:
  Equal-width: equal bucket range
  Equal-frequency (or equal-depth): equal number of values per bucket
  V-optimal: the histogram with the least variance (weighted sum of the original values that each bucket represents)
  MaxDiff: set bucket boundaries between each pair of values for the pairs having the β-1 largest differences

Data Reduction Method (3): Clustering

Partition data set into clusters based on similarity, and store cluster
representation (e.g., centroid and diameter) only

Can be very effective if data is clustered but not if data is smeared

Can have hierarchical clustering and be stored in multi-dimensional


index tree structures

There are many choices of clustering definitions and clustering


algorithms

Cluster analysis will be studied in depth in Chapter 7


Data Reduction Method (4): Sampling

Sampling: obtaining a small sample s to represent the whole data set N
Allows a mining algorithm to run in complexity that is potentially sub-linear to the size of the data
Choose a representative subset of the data
  Simple random sampling may have very poor performance in the presence of skew
Develop adaptive sampling methods
  Stratified sampling:
    Approximate the percentage of each class (or subpopulation of interest) in the overall database
    Used in conjunction with skewed data
Note: sampling may not reduce database I/Os (page at a time)
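A short sketch of simple random sampling (with and without replacement) and stratified sampling; the records and class labels are invented:

```python
import random
from collections import defaultdict

random.seed(42)
data = [{"id": i, "cls": "A" if i % 10 else "B"} for i in range(1, 101)]

# Simple random sample without replacement (SRSWOR) and with replacement (SRSWR)
srswor = random.sample(data, k=10)
srswr = [random.choice(data) for _ in range(10)]

# Stratified sample: keep roughly 10% of each class (skewed classes stay represented)
by_class = defaultdict(list)
for rec in data:
    by_class[rec["cls"]].append(rec)
stratified = [rec for recs in by_class.values()
              for rec in random.sample(recs, k=max(1, len(recs) // 10))]

print(len(srswor), len(srswr), len(stratified))
```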

Sampling: with or without Replacement

(Figure: samples drawn from the raw data with and without replacement)

Sampling: Cluster or Stratified Sampling

(Figure: the raw data is partitioned into clusters/strata and a cluster/stratified sample is drawn from each group)

Chapter 2: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Discretization

Three types of attributes:
  Nominal: values from an unordered set, e.g., color, profession
  Ordinal: values from an ordered set, e.g., military or academic rank
  Continuous: real numbers, e.g., integer or real numbers
Discretization:
  Divide the range of a continuous attribute into intervals
  Some classification algorithms only accept categorical attributes
  Reduce data size by discretization
  Prepare for further analysis

Discretization and Concept Hierarchy

Discretization
  Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals
  Interval labels can then be used to replace actual data values
  Supervised vs. unsupervised
  Split (top-down) vs. merge (bottom-up)
  Discretization can be performed recursively on an attribute
Concept hierarchy formation
  Recursively reduce the data by collecting and replacing low-level concepts (such as numeric values for age) by higher-level concepts (such as young, middle-aged, or senior)

Discretization and Concept Hierarchy Generation for Numeric Data

Typical methods (all the methods can be applied recursively):
  Binning (covered above): top-down split, unsupervised
  Histogram analysis (covered above): top-down split, unsupervised
  Clustering analysis (covered above): either top-down split or bottom-up merge, unsupervised
  Entropy-based discretization: supervised, top-down split
  Interval merging by χ² analysis: unsupervised, bottom-up merge
  Segmentation by natural partitioning: top-down split, unsupervised

Entropy-Based Discretization

Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the information after partitioning is

  $I(S, T) = \frac{|S_1|}{|S|} Entropy(S_1) + \frac{|S_2|}{|S|} Entropy(S_2)$

Entropy is calculated based on the class distribution of the samples in the set. Given m classes, the entropy of S1 is

  $Entropy(S_1) = -\sum_{i=1}^{m} p_i \log_2(p_i)$

where pi is the probability of class i in S1

The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
The process is recursively applied to the partitions obtained until some stopping criterion is met
Such a boundary may reduce data size and improve classification accuracy
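A sketch of picking the boundary T that minimizes this weighted entropy for a single numeric attribute (toy values and class labels; only the first split, no stopping criterion):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Return the boundary T minimizing |S1|/|S| * H(S1) + |S2|/|S| * H(S2)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_t, best_info = None, float("inf")
    for i in range(1, n):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2   # candidate boundary: midpoint
        left = [lbl for v, lbl in pairs if v <= t]
        right = [lbl for v, lbl in pairs if v > t]
        info = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if info < best_info:
            best_t, best_info = t, info
    return best_t, best_info

values = [1, 2, 3, 10, 11, 12]
labels = ["low", "low", "low", "high", "high", "high"]
print(best_split(values, labels))   # (6.5, 0.0): a perfect class separation
```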

Interval Merge by χ² Analysis

Merging-based (bottom-up) vs. splitting-based methods
Merge: find the best neighboring intervals and merge them to form larger intervals recursively
ChiMerge [Kerber AAAI 1992, see also Liu et al. DMKD 2002]
  Initially, each distinct value of a numerical attribute A is considered to be one interval
  χ² tests are performed for every pair of adjacent intervals
  Adjacent intervals with the least χ² values are merged together, since low χ² values for a pair indicate similar class distributions
  This merge process proceeds recursively until a predefined stopping criterion is met (such as significance level, max-interval, max inconsistency, etc.)

Segmentation by Natural Partitioning

A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals.
  If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
  If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
  If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals

Example of 3-4-5 Rule


Step 1: The profit values range from Min = -$351 to Max = $4,700; the 5th percentile (Low) is -$159 and the 95th percentile (High) is $1,838.
Step 2: The most significant digit is msd = 1,000, so Low is rounded down to -$1,000 and High is rounded up to $2,000.
Step 3: The interval (-$1,000 - $2,000) covers 3 distinct values at the msd, so it is partitioned into 3 equi-width intervals: (-$1,000 - 0), (0 - $1,000), ($1,000 - $2,000).
Step 4: The boundaries are adjusted to the actual Min and Max: since Min = -$351, the first interval shrinks to (-$400 - 0); since Max = $4,700 exceeds $2,000, a new interval ($2,000 - $5,000) is added. Each interval is then partitioned recursively:
  (-$400 - 0): (-$400 - -$300), (-$300 - -$200), (-$200 - -$100), (-$100 - 0)
  (0 - $1,000): (0 - $200), ($200 - $400), ($400 - $600), ($600 - $800), ($800 - $1,000)
  ($1,000 - $2,000): ($1,000 - $1,200), ($1,200 - $1,400), ($1,400 - $1,600), ($1,600 - $1,800), ($1,800 - $2,000)
  ($2,000 - $5,000): ($2,000 - $3,000), ($3,000 - $4,000), ($4,000 - $5,000)

Concept Hierarchy Generation for Categorical Data

Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts
  street < city < state < country
Specification of a hierarchy for a set of values by explicit data grouping
  {Urbana, Champaign, Chicago} < Illinois
Specification of only a partial set of attributes
  E.g., only street < city, not others
Automatic generation of hierarchies (or attribute levels) by the analysis of the number of distinct values
  E.g., for a set of attributes: {street, city, state, country}

Automatic Concept Hierarchy Generation

Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set
  The attribute with the most distinct values is placed at the lowest level of the hierarchy
  Exceptions, e.g., weekday, month, quarter, year

  country (15 distinct values)
    province_or_state (365 distinct values)
      city (3,567 distinct values)
        street (674,339 distinct values)
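A tiny sketch of this heuristic: order the attributes by their distinct-value counts, from most (bottom of the hierarchy) to fewest (top); the counts are the ones from the example above:

```python
distinct_counts = {
    "country": 15,
    "province_or_state": 365,
    "city": 3_567,
    "street": 674_339,
}

# Fewest distinct values -> highest level of the concept hierarchy
hierarchy = sorted(distinct_counts, key=distinct_counts.get)
print(" < ".join(reversed(hierarchy)))  # street < city < province_or_state < country
```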

Chapter 2: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Summary


Summary

Data preparation or preprocessing is a big issue for both data warehousing and data mining
Descriptive data summarization is needed for quality data preprocessing
Data preparation includes
  Data cleaning and data integration
  Data reduction and feature selection
  Discretization
A lot of methods have been developed, but data preprocessing is still an active area of research

References

D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse environments. Communications of the ACM, 42:73-78, 1999

T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley & Sons, 2003

T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining database structure; or, how to build a data quality browser. SIGMOD'02

H. V. Jagadish et al. Special issue on data reduction techniques. Bulletin of the Technical Committee on Data Engineering, 20(4), December 1997

D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999

E. Rahm and H. H. Do. Data cleaning: problems and current approaches. IEEE Bulletin of the Technical Committee on Data Engineering, 23(4)

V. Raman and J. Hellerstein. Potter's Wheel: an interactive framework for data cleaning and transformation. VLDB'2001

T. Redman. Data Quality: Management and Technology. Bantam Books, 1992

Y. Wand and R. Wang. Anchoring data quality dimensions in ontological foundations. Communications of the ACM, 39:86-95, 1996

R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality research. IEEE Trans. Knowledge and Data Engineering, 7:623-640, 1995