A Discretization Algorithm For Uncertain Data
J. Ge¹, Y. Xia¹, and Y. Tu²

¹ Department of Computer and Information Science,
Indiana University – Purdue University, Indianapolis, USA
{jiaqge,yxia}@cs.iupui.edu
² Computer Science and Engineering,
University of South Florida
ytu@cse.usf.edu
1 Introduction
Data discretization is a commonly used technique in data mining. Data discretization
reduces the number of values for a given continuous attribute by dividing the range of
the attribute into intervals. Interval labels are then used to replace actual data values.
Replacing numerous values of a continuous attribute by a small number of interval
labels thereby simplifies the original data. This leads to a concise, easy-to-use, knowledge-level representation of mining results [32]. Discretization is often performed
prior to the learning process and has played an important role in data mining and knowledge discovery. For example, many classification algorithms such as AQ [1], CLIP [2], and CN2 [3] are designed only for categorical data; therefore, numerical data are usually discretized first before being processed by these classification algorithms.
Assume A is a continuous attribute of a dataset. A can be discretized into n intervals as D = {[d0, d1), [d1, d2), …, [dn-1, dn]}, where di denotes an interval endpoint. D is then called a discretization scheme on attribute A. A good
discretization algorithm not only produces a concise view of continuous attributes so that experts and users can have a better understanding of the data, but also helps machine learning and data mining applications to be more effective and efficient [4]. A number of discretization algorithms have been proposed in the literature, most of which focus on certain data. However, data tends to be uncertain in many applications [9], [10], [11], [12], [13]. Uncertainty can originate from diverse sources such as data collection errors, measurement precision limitations, data sampling errors, obsolete sources, and transmission errors. This uncertainty can degrade the performance of various data mining algorithms if it is not handled well. In previous work, uncertainty in data is commonly treated as a random variable with a probability distribution. Thus, an uncertain attribute value is often represented as an interval with a probability distribution function over the interval [14], [15].
In this paper, we propose a data discretization technique called Uncertain Class-Attribute Interdependency Maximization (UCAIM) for uncertain data. It is based on the CAIM discretization algorithm, which we extend with a new mechanism to handle uncertainty. A probability distribution function (pdf) is commonly used to model data uncertainty, and a pdf can be represented either by a formula or by samples. We adopt the concept of probabilistic cardinality to build the quanta matrix for uncertain data. Based on the quanta matrix, we define a new criterion value, ucaim, to measure the interdependency between uncertain attributes and uncertain class memberships. The optimal discretization scheme is determined by searching for the one with the largest ucaim value. In the experiments, we applied the discretization algorithm as the preprocessing step of an uncertain naïve Bayesian classifier [16], and measured the discretization quality by its classification accuracy. The results illustrate that applying UCAIM as a front-end discretization algorithm significantly improves the classification performance.
The paper is organized as follows. In Section 2, we discuss related work. Section 3 introduces the uncertain data model. In Section 4, we present the UCAIM algorithm in detail. The experimental results are shown in Section 5, and Section 6 concludes the paper.
2 Related Work
Discretization algorithms can be divided into top-down and bottom-up methods according to how they generate discretization schemes [6]. Both top-down and bottom-up discretization algorithms can be further subdivided into unsupervised and supervised methods [17]. Equal Width and Equal Frequency [5] are well-known unsupervised top-down algorithms, while supervised top-down algorithms include MDLP [7], CADD (class-attribute dependent discretizer algorithm) [18], Information
P(A_ij^un ∈ [l, r]) = ∫_l^r (1 / (√(2π)·σ)) · e^(−(x−μ)² / (2σ²)) dx    (1)
Here, p_ij is the probability distribution of the uncertain attribute A_ij^un, which can be seen as a random variable.
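As a rough illustration of formula (1) (our own sketch, not code from the paper), the interval probability of a Gaussian-modelled uncertain value can be evaluated through the normal CDF; the function name and example parameters below are hypothetical.

```python
import math

def gaussian_interval_prob(mu, sigma, left, right):
    """P(A in [left, right]) for a value modelled as N(mu, sigma^2),
    i.e. the integral of the Gaussian pdf in formula (1)."""
    def cdf(x):
        # standard normal CDF expressed with the error function
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(right) - cdf(left)

# Example: an uncertain attribute value modelled as N(0.9, 0.1^2)
print(gaussian_interval_prob(0.9, 0.1, 0.5, 1.0))  # about 0.84
```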
In case the uncertainty cannot be modeled by a mathematical formula, a sample-based method is often used to model the probability distribution:

A_ij^un = {(x_1 : p_1), (x_2 : p_2), …, (x_i : p_i), …, (x_n : p_n)}

where X = {x_1, x_2, …, x_i, …, x_n} is the set of all possible values for attribute A_ij^un, and p_i is the probability that A_ij^un = x_i.
Not only can the attributes be uncertain; class labels may also contain uncertainty. Instead of having an accurate class label, a class membership may be a probability distribution of the form:

{(c_1 : p_1), (c_2 : p_2), …, (c_n : p_n)}

Here, {c_1, c_2, …, c_n} is the set containing all possible class labels, and p_i is the probability that instance t_i belongs to class c_i.
Table 1 shows an example of an uncertain database. Both attributes and class labels
of the dataset are uncertain. Their precise values are unavailable and we only have
knowledge of the probability distribution. For attribute 1, the uncertainty is represented as a Gaussian distribution, with parameters (μ, σ) modeling the pdf. For attribute 2, all possible values with their corresponding probabilities are listed for each instance. Note that the uncertainty of the class label is always represented in the sample format, as class values are discrete.
P(A_ij^un ∈ [left, right]) = Σ_{k: A_ij^un.x_k ∈ [left, right]} A_ij^un.p_k    (2)

where A_ij^un.x_k is a possible value of A_ij^un, and A_ij^un.p_k is the probability that A_ij^un = x_k.
We assume that class uncertainty is independent of the probability distributions of the attribute values. Thus, for a tuple t_i belonging to class C, the probability that its attribute value A_ij^un falls in the interval [left, right] is:
P(A_ij^un ∈ [left, right], c = C) = P(A_ij^un ∈ [left, right]) · p(c = C)    (3)

where P(A_ij^un ∈ [left, right]) is defined in formulas (1) and (2), and p(c = C) is the probability that t_i belongs to class C.
For each class C, we compute the sum of the probabilities that an uncertain attribute A_ij^un falls in the partition [left, right] over all the tuples in dataset D. This summation is called the probabilistic cardinality. For example, the probabilistic cardinality of partition P = [a, b) for class C is calculated as:
P_C(P) = Σ_{t_i ∈ D} P(A_ij^un ∈ [a, b)) · P(c_i = C)    (4)
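To make formulas (2)–(4) concrete, here is a minimal sketch (our own illustration; the tuple layout and helper names are assumptions, not the authors' code) that computes the probabilistic cardinality of a partition [a, b) for one class, assuming sample-based attribute pdfs and sample-based class memberships.

```python
def sample_interval_prob(samples, left, right):
    """Formula (2): P(A in [left, right)) for a sample-based uncertain
    attribute given as a list of (value, probability) pairs."""
    return sum(p for x, p in samples if left <= x < right)

def probabilistic_cardinality(dataset, cls, left, right):
    """Formula (4): sum over all tuples of
    P(attribute in [left, right)) * P(tuple belongs to cls)."""
    total = 0.0
    for attr_samples, class_dist in dataset:       # one uncertain attribute per tuple
        p_attr = sample_interval_prob(attr_samples, left, right)
        p_class = class_dist.get(cls, 0.0)         # uncertain class membership
        total += p_attr * p_class                  # independence assumption of formula (3)
    return total

# Tiny example: two tuples with sampled attribute values and uncertain labels
dataset = [
    ([(0.1, 0.7), (0.6, 0.3)], {0: 0.8, 1: 0.2}),
    ([(0.9, 1.0)],             {0: 0.1, 1: 0.9}),
]
print(probabilistic_cardinality(dataset, 0, 0.0, 0.5))  # 0.7 * 0.8 = 0.56
```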
The discretization algorithm aims to find the minimal number of discrete intervals
while minimizing the loss of class-attribute interdependency. Suppose F is a continuous numeric attribute, and there exists a discretization scheme D on F, which divides
the whole continuous domain of attribute F into n discrete intervals bounded by the
endpoints as:
D: {[d0, d1), [d1, d2), [d2, d3),…, [dn-1,dn] } (5)
where d0 is the minimal value and dn is the maximal value of attribute F; d1, d2,…, dn-1
are cutting points arranged in ascending order.
For a certain dataset, every value of attribute F is precise; therefore it falls into exactly one of the n intervals defined in (5). However, the value of an uncertain attribute can be an interval or a series of values with an associated probability distribution; therefore, it may fall into multiple intervals. The class membership for a specific interval in (5) varies with the discretization scheme D.
The class variable and the discretization variable of attribute F are treated as two random variables defining a two-dimensional quanta matrix (also known as the contingency table). Table 2 is an example of a quanta matrix.
In Table 2, q_ir is the probabilistic cardinality of the uncertain attribute A_F^un that belongs to the ith class and has its value within the interval [d_{r-1}, d_r]. Thus, according to formula (4), q_ir can be calculated as:

q_ir = P(c = C_i, A_F^un ∈ [d_{r-1}, d_r])    (6)
Table 2. Quanta matrix for uncertain attribute A_F^un and discretization scheme D
M_i+ is the sum of the probabilistic cardinalities of the objects belonging to the ith class, and M_+r is the total probabilistic cardinality of uncertain attribute A_F^un within the interval [d_{r-1}, d_r], for i = 1, 2, …, S and r = 1, 2, …, n.
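The quanta matrix of Table 2 can be assembled column by column from probabilistic cardinalities. The sketch below reuses probabilistic_cardinality from the previous snippet and is again only our reading of the construction; for simplicity it treats every interval as half-open, although the paper closes the last interval.

```python
def build_quanta_matrix(dataset, classes, endpoints):
    """q[i][r]: probabilistic cardinality of class classes[i] within the
    interval [endpoints[r], endpoints[r+1]) (formula (6)), plus the marginals."""
    n = len(endpoints) - 1
    q = [[probabilistic_cardinality(dataset, c, endpoints[r], endpoints[r + 1])
          for r in range(n)]
         for c in classes]
    m_plus_r = [sum(q[i][r] for i in range(len(classes))) for r in range(n)]  # column sums M_+r
    m_i_plus = [sum(row) for row in q]                                        # row sums M_i+
    return q, m_i_plus, m_plus_r
```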
The estimated joint probability that the uncertain attribute value A_F^un is within the interval D_r = [d_{r-1}, d_r] and belongs to class C_i can be calculated as:

p_ir = p(C_i, D_r | A_F^un) = q_ir / M    (7)

where M is the total probabilistic cardinality of the quanta matrix. The CAIM criterion, which measures the interdependency between the classes and the discretization scheme D of attribute A_F^un, is defined as:

CAIM(C, D | A_F^un) = (Σ_{r=1}^{n} max_r² / M_+r) / n    (8)
where n is the number of intervals, r iterates through all intervals (r = 1, 2, …, n), max_r is the maximum value among all q_ir values within the rth column of the quanta matrix (i = 1, 2, …, S), and M_+r is the total probabilistic cardinality of the continuous values of attribute F that are within the interval D_r = [d_{r-1}, d_r].
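For later comparison with the uncertain criterion, a compact sketch of formula (8) on such a quanta matrix is given below (our own helper; column totals are assumed to be non-zero).

```python
def caim(q):
    """CAIM value of a quanta matrix q (one row per class), formula (8):
    (sum over columns of max_r^2 / M_+r) / n."""
    n = len(q[0])                        # number of intervals
    value = 0.0
    for r in range(n):
        column = [row[r] for row in q]
        max_r = max(column)              # cardinality of the main class in interval r
        m_plus_r = sum(column)           # total cardinality M_+r of interval r
        value += max_r ** 2 / m_plus_r
    return value / n

print(caim([[3.0], [2.0]]))                 # single interval: 3^2 / (3 + 2) = 1.8
print(caim([[0.59, 2.41], [1.31, 0.69]]))   # two intervals: about 1.388
```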
From the definition, we can see that the caim value increases when the values of max_r grow, which corresponds to an increase of the interdependence between the class labels and the discrete intervals. Thus the CAIM algorithm finds the optimal discretization scheme by searching for the scheme with the highest caim value. Since the maximal value max_r is the most significant part of the CAIM criterion, the class that max_r corresponds to is called the main class, and the larger max_r is, the more interdependent this main class and the interval D_r = [d_{r-1}, d_r] are.
Although CAIM performs well on certain datasets, it encounters new challenges in the uncertain case. For each interval, the CAIM algorithm only takes the main class into account and does not consider the distribution over all other classes, which leads to problems when dealing with uncertain data. In an uncertain dataset, each instance no longer has a deterministic class label, but may have a probability distribution over
all possible classes and this reduces the interdependency between attribute and class.
We use the probability cardinality to build the quanta matrix for uncertain attributes,
and we observe that the original caim criterion causes problems when handling uncer-
tain quanta matrix. Below we give one such example. Suppose a simple uncertain
dataset containing 5 instances is shown in Table 3. Its corresponding quanta matrix is
shown in Table 4.
From Table 3, we can calculate the probability distribution of the attribute value x within each class.
Table 4. Quanta matrix of the example dataset with the single interval [0, 1]

class    [0, 1]
0        3
1        2
According to formula (8), the caim value for the quanta matrix in Table 4 is caim = 3²/(3 + 2) = 1.8. From the distribution of attribute values in each class, we can see that the attribute values of instances in class 0 have a high probability around x = 0.9, while those of instances in class 1 are mainly located at the small end, around x = 0.1 and 0.2. Obviously, x = 0.5 is a reasonable cutting point to generate the discretization scheme {[0, 0.5), [0.5, 1]}. After the splitting, the quanta matrix is as shown in Table 5, and its corresponding caim value is:
caim = (1.31²/(1.31 + 0.59) + 2.41²/(2.41 + 0.69)) / 2 = 1.38
Table 5. Quanta matrix after splitting at x = 0.5

class    [0, 0.5)    [0.5, 1]
0        0.59        2.41
1        1.31        0.69
The goal of the CAIM algorithm is to find the discretization scheme with the highest caim value, so {[0, 0.5), [0.5, 1]} will not be accepted as a better discretization scheme, because the caim value decreases from 1.8 to 1.38 after splitting at x = 0.5. Data uncertainty obscures the interdependence between classes and attribute values by flattening the probability distributions. Therefore, when the original CAIM criterion is applied to uncertain data, it exhibits two problems. First, it usually does not create enough intervals in the discretization scheme, i.e., it stops splitting too early, which causes the loss of much class-attribute interdependence. Second, in order to increase the caim value, the algorithm may generate intervals with very small probabilistic cardinalities, which reduces its robustness.
For uncertain data, the attribute-class interdependence is in the form of a probability distribution. The original caim definition in formula (8) ignores this distribution and only considers the main class. Therefore, we need to revise the original definition to handle uncertain data. Since uncertainty blurs the attribute-class interdependence and reduces the difference between the main class and the rest of the classes, we try to make the CAIM value more sensitive to changes of values in the quanta matrix. We propose the uncertain CAIM criterion, UCAIM, which is defined as follows:
UCAIM(C, D | A_F^un) = (Σ_{r=1}^{n} max_r² · Offset_r / M_+r) / n    (9)

Offset_r = (Σ_{i=1, q_ir ≠ max_r}^{S} (max_r − q_ir)) / (S − 1)    (10)
In formula (9), max_r is the maximum value among all q_ir values (the maximum value within the rth column of the quanta matrix, i = 1, 2, …, S), and M_+r is the total probabilistic cardinality of the continuous values of attribute F that are within the interval D_r = [d_{r-1}, d_r]. Offset_r, defined in (10), is the average of the offsets, or differences, of all other q_ir values from max_r.
Because the larger the attribute-class interdependence, the larger the value max_r/M_+r, CAIM uses this ratio to identify splitting points in formula (8). In the UCAIM definition we propose, Offset_r shows how significant the main class is compared to the other classes. When Offset_r is large, it means that within interval r the probability that an instance belongs to the main class is much higher than for the other classes, so the interdependence between interval r and the main class is also high. Therefore, we propose the ucaim definition in formula (9) for the following reasons:
1) Compared with max_r/M_+r, we multiply it by the factor Offset_r to make the value Offset_r · max_r/M_+r more sensitive to interdependence changes, which are usually less pronounced for uncertain data.
2) The value max_r/M_+r may be large merely because M_+r is small, which happens when few instances fall into interval r. Offset_r, however, does not have this problem, because it measures the relative relationship between the main class and the other classes.
Now we apply the new definition to the sample uncertain dataset in Table 3. For the original quanta matrix in Table 4, the ucaim value is

ucaim = 3² × (3 − 2) / 5 = 1.8
For the quanta matrix after splitting, as in Table 5, we have

ucaim = (1.31² × (1.31 − 0.59)/(1.31 + 0.59) + 2.41² × (2.41 − 0.69)/(2.41 + 0.69)) / 2 ≈ 1.94

which is larger than 1.8, so the split at x = 0.5 is now preferred.
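These two values can be checked numerically. The sketch below (our illustration, mirroring the caim helper shown earlier) implements formulas (9) and (10) and reproduces the numbers above.

```python
def ucaim(q):
    """UCAIM value of a quanta matrix q (one row per class), formulas (9) and (10)."""
    s, n = len(q), len(q[0])
    value = 0.0
    for r in range(n):
        column = [row[r] for row in q]
        max_r = max(column)
        m_plus_r = sum(column)
        # Offset_r: average distance of the non-maximal entries from max_r (formula (10))
        offset_r = sum(max_r - v for v in column if v != max_r) / (s - 1)
        value += max_r ** 2 * offset_r / m_plus_r
    return value / n

print(ucaim([[3.0], [2.0]]))                 # Table 4: 3^2 * (3 - 2) / 5 = 1.8
print(ucaim([[0.59, 2.41], [1.31, 0.69]]))   # Table 5: about 1.94 > 1.8, the split is kept
```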
Algorithm
1. Find the maximal and minimal possible values of the uncertain attribute A_F^un, recorded as d0 and dn.
2. Create a set B of all potential boundary endpoints. For an uncertain attribute modelled with a sample-based pdf, simply sort all distinct possible values and use them as the set; for an uncertain attribute modelled with a formula-based pdf, use the mean of each distribution to build the set.
3. Set the initial discretization scheme to D: {[d0, dn]} and set GlobalUCAIM = 0.
4. Initialize k = 1.
5. Tentatively add an inner boundary from B that is not already in D, and calculate the corresponding UCAIM value.
6. After all tentative additions have been tested, accept the one with the highest UCAIM value.
7. If UCAIM > GlobalUCAIM or k < S, update D with the accepted boundary and set GlobalUCAIM = UCAIM; otherwise terminate.
8. Set k = k + 1 and go to step 5.
Output: D
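A condensed Python sketch of the top-down search above is given below. It reuses the build_quanta_matrix and ucaim helpers from the earlier snippets and reflects only our reading of the pseudocode, not the authors' implementation; S is the number of classes.

```python
def ucaim_discretize(dataset, classes, candidate_points, d0, dn):
    """Greedy top-down UCAIM discretization following steps 1-8 above."""
    boundaries = [d0, dn]                    # step 3: initial scheme {[d0, dn]}
    global_ucaim = 0.0
    k = 1                                    # step 4
    s = len(classes)
    while True:
        best_point, best_value = None, -1.0
        for p in candidate_points:           # step 5: try each unused inner boundary
            if p in boundaries:
                continue
            trial = sorted(boundaries + [p])
            q, _, _ = build_quanta_matrix(dataset, classes, trial)
            value = ucaim(q)
            if value > best_value:           # step 6: keep the best tentative split
                best_point, best_value = p, value
        if best_point is None:               # no candidate left to add
            break
        if best_value > global_ucaim or k < s:   # step 7: acceptance test
            boundaries = sorted(boundaries + [best_point])
            global_ucaim = best_value
        else:
            break                            # step 7: terminate
        k += 1                               # step 8
    return boundaries                        # output: discretization scheme D
```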
5 Experiments
In this section, we present the experimental results of the UCAIM discretization algorithm on eight datasets. We compare our technique with the traditional CAIM discretization algorithm to show the effectiveness of UCAIM on uncertain data.
The datasets selected to test the UCAIM algorithm are: the Iris Plants dataset (iris), the Johns Hopkins University Ionosphere dataset (ionosphere), the Pima Indians Diabetes dataset (pima), the Glass Identification dataset (glass), the Wine dataset (wine), the Breast Cancer Wisconsin Original dataset (breast), the Vehicle Silhouettes dataset (vehicle), and the Statlog Heart dataset (heart). All these datasets were obtained from the UCI ML repository [30], and their detailed information is shown in Table 7.
To simulate class label uncertainty, we assume the original class of each instance is the main class and assign it a probability pmc, with a uniform distribution over all other classes. As a comparison, we assume the real data is not centered at the original value but at the perturbed value, with the noise following the same distributions as described above.
We use the accuracy of an uncertain naïve Bayesian classifier to evaluate the quality of the discretization algorithms. As the purpose of our experiment is to compare discretization algorithms, we ignore nominal attributes when building the classifier. In the experiments, we first compare our UCAIM algorithm for uncertain data with the original algorithm, CAIM-O, which does not take uncertainty into account. We also compare UCAIM with a discretization algorithm named CAIM-M, which simply applies CAIM-O to the uncertain quanta matrix (without using the Offset).
Table 8. Accuracies of the uncertain Naïve Bayesian classifier with different discretization
algorithms
[Figure 1: bar charts of classification accuracy (0–100%) for UCAIM, CAIM-M, and CAIM-O on the eight datasets at several uncertainty levels; one panel corresponds to f = 2, pmc = 0.8.]
Fig. 1. Classification accuracies with different discretization methods under different uncertainty levels
Table 9. Average classification accuracies with different discretization methods under different uncertainty levels
From Table 8, Table 9 and Figure 1, we can see that UCAIM outperforms the other two algorithms in most cases. In particular, UCAIM yields a more significant performance improvement on datasets with higher uncertainty. That is because UCAIM utilizes extra information, such as the probability distribution of uncertain data, and employs the new criterion to recover the class-attribute interdependency, which is not obvious when data is uncertain. Therefore, the discretization process of UCAIM is more sophisticated and comprehensive, and the discretized data can help data mining algorithms such as the naïve Bayesian classifier to attain higher accuracy.
6 Conclusion
In this paper, we propose a new discretization algorithm for uncertain data. We employ both formula-based and sample-based probability distribution functions to model data uncertainty. We use probabilistic cardinality to build the uncertain quanta matrix, which is then used to calculate the ucaim criterion and to find the optimal discretization scheme with the highest class-attribute interdependency. Experiments show that our algorithm can help the naïve Bayesian classifier reach higher classification accuracy. We also observe that the proper use of data uncertainty information can significantly improve the quality of data mining results, and we plan to explore more data mining approaches for various uncertainty models in the future.
References
1. Kaufman, K.A., Michalski, R.S.: Learning from inconsistent and noisy data: the AQ18 approach. In: Proceedings of the 11th International Symposium on Methodologies for Intelligent Systems (1999)
2. Cios, K.J., et al.: Hybrid inductive machine learning: an overview of the CLIP algorithm. In:
Jain, L.C., Kacprzyk, J. (eds.) New Learning Paradigms in Soft Computing, pp. 276–322.
Springer, Heidelberg (2001)
3. Clark, P., Niblett, T.: The CN2 Algorithm. Machine Learning 3(4), 261–283 (1989)
4. Catlett, J.: On Changing Continuous Attributes into Ordered Discrete Attributes. In: Kodratoff, Y. (ed.) EWSL 1991. LNCS, vol. 482, pp. 164–178. Springer, Heidelberg (1991)
5. Liu, H., Hussain, F., Tan, C.L., Dash, M.: Discretization: An Enabling Technique. Data Mining and Knowledge Discovery 6, 393–423 (2002)
28. Qin, B., Xia, Y., Li, F.: DTU: A Decision Tree for Classifying Uncertain Data. In: Theeramunkong, T., Kijsirikul, B., Cercone, N., Ho, T.-B. (eds.) PAKDD 2009. LNCS, vol. 5476, pp. 4–15. Springer, Heidelberg (2009)
29. Cheng, R., Kalashnikov, D., Prabhakar, S.: Evaluating Probabilistic Queries over Imprecise Data. In: Proceedings of the ACM SIGMOD, pp. 551–562 (2003)
30. Asuncion, A., Newman, D.: UCI machine learning repository (2007),
http://www.ics.uci.edu/mlearn/MLRepository.html
31. Aggarwal, C.C., Yu, P.S.: Outlier Detection with Uncertain Data. In: SIAM International
Conference on Data Mining (2009)
32. Han, J., Kamber, M.: Data Mining: Concepts and Techniques, 2nd edn. Morgan Kaufmann,
San Francisco (2006)