Integrating uncertainty in data mining
  • Author: Yi Xia
  • Adviser: Richard R. Muntz
  • Publisher: University of California at Los Angeles, Computer Science Department, 405 Hilgard Avenue, Los Angeles, CA, United States
  • ISBN: 978-0-542-50903-2
  • Order Number: AAI3202794
  • Pages: 165
Abstract

Uncertainty arises in data mining in several ways. It exists not only in the data sets to be mined but also in the knowledge that is mined, as well as in the process of applying uncertain knowledge to new data sets. Incorporating measures of uncertainty into data and knowledge management raises significant research challenges. This dissertation focuses on three specific problems involving uncertainty.

(1) Efficient filtering of a large data set based on uncertain knowledge. We choose Bayesian networks to represent uncertain knowledge and use the results of probabilistic queries against a Bayesian network to filter a large data set. Because a probabilistic query on a Bayesian network can be computationally expensive, we propose three techniques that exploit properties of the network and the data set to speed up the filtering process: network pruning, computation reuse by tuple reordering, and early termination (the reuse idea is illustrated in the first sketch below).

(2) Mining frequent itemsets from uncertain data sets. We define an uncertain data set as one in which each value is associated with a tag specifying a belief about the truth of that value. We propose an algorithm (EST) to efficiently discover frequent itemsets from such an uncertain data set, and evaluate it with respect to the number of false positive and false negative frequent itemsets it generates. As an extension, we study privacy-preserving association rule mining, where data are distorted to preserve privacy, and propose an algorithm (RE) to efficiently discover frequent itemsets from a data set whose items are subject to different degrees of randomization (see the second and third sketches below).

(3) Classifying uncertain data. To accommodate local uncertainty specific to each data item, we extend Naive Bayes classification algorithms for multi-dimensional data and Hidden Markov Model algorithms for sequential data, based on Pearl's virtual evidence theory (see the final sketch below). Empirical results show that classification accuracy can be improved by explicitly incorporating local uncertainty measures. A further benefit of explicitly modeling local uncertainty is the ability to learn a clean model separately from uncertain data; the learned clean model, combined with varying local uncertainty measures, can then classify data collected under conditions different from those of the training data.
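To make the filtering technique in (1) concrete, the following is a minimal Python sketch, assuming a toy two-child Bayesian network with made-up conditional probability tables; it is not the dissertation's implementation. It shows computation reuse by tuple reordering: sorting the data set brings identical evidence tuples together, so each probabilistic query is evaluated only once per distinct evidence value. Network pruning and early termination are omitted.

from itertools import groupby

# Toy Bayesian network: Cloudy -> Sprinkler, Cloudy -> Rain (all binary).
# The CPT numbers below are hypothetical, chosen only for illustration.
P_C = {1: 0.5, 0: 0.5}
P_S_given_C = {(1, 1): 0.1, (0, 1): 0.9, (1, 0): 0.5, (0, 0): 0.5}  # keyed by (s, c)
P_R_given_C = {(1, 1): 0.8, (0, 1): 0.2, (1, 0): 0.2, (0, 0): 0.8}  # keyed by (r, c)

def posterior_c(s, r):
    """P(C=1 | S=s, R=r) by brute-force enumeration over C."""
    joint = {c: P_C[c] * P_S_given_C[(s, c)] * P_R_given_C[(r, c)] for c in (0, 1)}
    return joint[1] / (joint[0] + joint[1])

def filter_tuples(data, threshold=0.7):
    """Keep tuples whose query result exceeds the threshold.

    Reordering: sorting makes identical evidence tuples adjacent, so one
    posterior computation per group is reused for every duplicate in it.
    """
    kept = []
    for evidence, group in groupby(sorted(data)):
        p = posterior_c(*evidence)   # one query per distinct evidence tuple
        if p > threshold:
            kept.extend(group)       # reuse the result for all duplicates
    return kept

data = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1), (0, 0)]
print(filter_tuples(data))           # only (0, 1) passes the 0.7 threshold

On a realistic network the query would go through an inference engine rather than enumeration, but the reuse pattern is the same: the cost of inference is paid per distinct evidence tuple, not per row of the data set.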
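The abstract does not spell out EST itself, so the next sketch shows only the common expected-support formulation for problem (2): each item in a transaction carries a belief tag, read here as an existence probability, and an itemset counts as frequent when its expected support meets a minimum threshold. The transactions, the within-transaction independence assumption, and the minsup value are all illustrative assumptions.

from itertools import combinations
from math import prod

# Uncertain transactions: each item maps to a belief tag, interpreted here as
# the probability that the item truly occurs (hypothetical data).
transactions = [
    {"bread": 0.9, "milk": 0.8},
    {"bread": 0.6, "milk": 0.7, "eggs": 0.4},
    {"milk": 1.0, "eggs": 0.5},
]

def expected_support(itemset):
    """Expected number of transactions containing the whole itemset,
    assuming item occurrences are independent within a transaction."""
    return sum(
        prod(t[i] for i in itemset) if all(i in t for i in itemset) else 0.0
        for t in transactions
    )

def frequent_itemsets(minsup=1.0, max_size=2):
    """Exhaustively score all small itemsets; EST would prune this search."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, max_size + 1):
        for itemset in combinations(items, k):
            es = expected_support(itemset)
            if es >= minsup:
                result[itemset] = es
    return result

print(frequent_itemsets())   # e.g. ('bread', 'milk') has expected support 1.14

Because supports are expected rather than exact, any threshold test can both admit spurious itemsets and miss true ones, which motivates the false positive / false negative evaluation mentioned above.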
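For the privacy-preserving extension, the abstract gives no detail on RE, so the sketch below shows only the classic randomized-response correction on which such schemes are typically built: if each binary item is kept with probability p and flipped otherwise, the observed frequency lam relates to the true frequency pi by lam = p*pi + (1 - p)*(1 - pi), which can be inverted to estimate pi. All parameters and data are made up.

import random

def randomize(bits, p):
    """Keep each bit with probability p, flip it otherwise (Warner's scheme)."""
    return [b if random.random() < p else 1 - b for b in bits]

def estimate_true_frequency(randomized_bits, p):
    """Invert the distortion: lam = p*pi + (1-p)*(1-pi), solved for pi."""
    lam = sum(randomized_bits) / len(randomized_bits)
    return (lam - (1 - p)) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 3000 + [0] * 7000              # true item frequency: 0.30
noisy = randomize(true_bits, p=0.8)
print(round(estimate_true_frequency(noisy, p=0.8), 3))   # close to 0.30

Extending such estimates to itemsets whose items are distorted with different randomization degrees is the setting the RE algorithm addresses.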
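For (3), Pearl's virtual evidence replaces a hard observation of an attribute with a likelihood vector over its possible values, so the per-attribute Naive Bayes factor P(x_j | c) becomes sum_v P(X_j = v | c) * lambda_j(v). The sketch below is a minimal illustration with hand-set, hypothetical parameters, not the dissertation's extended classifier.

# Binary class, two binary attributes; all parameters are made up.
prior = {0: 0.5, 1: 0.5}
# cond[c][j][v] = P(X_j = v | class = c)
cond = {
    0: [{0: 0.8, 1: 0.2}, {0: 0.6, 1: 0.4}],
    1: [{0: 0.3, 1: 0.7}, {0: 0.2, 1: 0.8}],
}

def classify(virtual_evidence):
    """virtual_evidence[j][v] = lambda_j(v), a local likelihood for value v.

    A hard observation X_j = 1 is the special case lambda_j = {0: 0.0, 1: 1.0};
    an unreliable reading uses a softer vector.
    """
    score = {}
    for c, p in prior.items():
        for j, lam in enumerate(virtual_evidence):
            # Virtual-evidence factor: sum over values instead of picking one.
            p *= sum(cond[c][j][v] * lam[v] for v in (0, 1))
        score[c] = p
    z = sum(score.values())
    return {c: s / z for c, s in score.items()}

# Attribute 0 is observed noisily (believed to be 1 with weight 0.7);
# attribute 1 is observed as a hard 1.
print(classify([{0: 0.3, 1: 0.7}, {0: 0.0, 1: 1.0}]))

Because the lambda vectors are supplied per data item at classification time, the conditional tables themselves stay clean; this separation is what allows a model learned under one noise condition to be applied under another, as the abstract notes.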

Contributors
  • University of California, Los Angeles
