2020 •
Today, thanks to the rapid growth of information technology and Internet applications in many fields, huge databases have been created. Records accumulate so quickly, and grow so large, that storing and processing the information becomes difficult. Effectively exploiting the information in large databases is therefore an urgent issue and plays an important role in solving practical problems. In addition to traditional information-exploitation methods, researchers have developed attribute reduction methods that shrink the data space and eliminate irrelevant attributes. Our attribute reduction is based on the dependence between attributes in traditional rough set theory and in fuzzy rough sets. The author built a tool, based on the inclusion degree and a tolerance-based contingency table, to solve the problem of finding approximation sets in set-valued information systems.
In this paper, we introduce a new automated Fuzzy-Based Rough Decision Model algorithm. Our algorithm consists of three phases: (1) automatic fuzzification of attributes, (2) elimination of redundant attributes using rough set theory, and (3) generation of fuzzy-rough rules, with automatic calculation of a fitness value (confidence) and support for each rule. In phase one, the user inputs the number of fuzzy sets for each attribute; the algorithm determines the maximum and minimum values of each attribute and automatically calculates the width (∆) that divides the universe of discourse of each attribute into "n" intervals according to the number of fuzzy sets, and it also calculates the width (δi) from (∆). In phase two, we use rough set techniques to reduce the number of attributes produced by phase one and to generate fuzzy-rough rules. In phase three, the algorithm automatically calculates the confidence (weight, or fitness value) and support of each fuzzy-rough rule, and then the total fitness value of all linguistic rules. We report the fitness value obtained by applying our algorithm to the dataset in (X. Hu, T. Lin, and J. Han) [ ]. Rough set theory was first introduced by Pawlak in the early 1980s [ , , , ] and has since been applied in many areas, such as machine learning, knowledge discovery, and expert systems. It provides powerful tools for analyzing and mining imprecise and ambiguous data, and many rough set models have been developed in the rough set community over the last decades [ , , ]. Experience with traditional rough set models on large datasets in data mining applications has exposed a strong drawback: classical rough set theory assumes that all attribute values are discrete. In real-life datasets, attribute values can be both symbolic and real-valued, so the traditional theory has difficulty handling them.
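The phase-one fuzzification described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the choice of triangular membership functions are assumptions, and δi is assumed here to be half of ∆ (the abstract does not specify the relationship).

```python
# Sketch of automatic fuzzification: the user supplies only the number of
# fuzzy sets; min/max and the widths delta (Δ) and delta_i (δi) are derived.

def fuzzify_attribute(values, n_sets):
    """Partition an attribute's universe of discourse into n_sets intervals
    and build one triangular membership function per interval."""
    lo, hi = min(values), max(values)
    delta = (hi - lo) / n_sets          # width Δ of each interval
    delta_i = delta / 2.0               # assumed: δi taken as half of Δ

    def make_mf(center):
        # Triangular membership centered on the interval midpoint.
        def mf(x):
            return max(0.0, 1.0 - abs(x - center) / delta) if delta else 1.0
        return mf

    centers = [lo + delta * (k + 0.5) for k in range(n_sets)]
    return [make_mf(c) for c in centers]

mfs = fuzzify_attribute([1.0, 3.0, 5.0, 7.0, 9.0], n_sets=4)
print(round(mfs[0](2.0), 3))  # membership of 2.0 in the first fuzzy set -> 1.0
```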
Methods are therefore needed that support set approximations and attribute reduction for datasets with real-valued attributes. This can be done by combining (integrating) fuzzy sets and rough sets in a Fuzzy-Based Rough Model [ , , , , , ]. Another major drawback of the traditional Fuzzy-Based Rough Model is that the linguistic values (fuzzy sets) for the numeric values of each attribute must be specified through the membership functions of the linguistic terms, under which every element of the universe of discourse belongs to a fuzzy set with some grade (degree of membership). The user must therefore define the membership-function parameters from his own point of view, which differs from one user to another. We therefore propose a new automated Fuzzy-Based Rough Decision Model algorithm that defines these parameters automatically: the user specifies only the number of fuzzy sets (linguistic values); the maximum and minimum values of each attribute are then determined automatically; the algorithm calculates the width (∆) that divides the universe of discourse "u" of each attribute into "n" intervals according to the number of fuzzy sets; and finally it calculates the width (δi) from (∆). A further strong drawback of traditional rough set theory is the inefficiency of rough set methods and algorithms for computing core attributes and reducts and for identifying dispensable attributes, which limits the suitability of the traditional rough set model in data mining applications. Closer investigation reveals that most existing rough set models [ , , , ] do not integrate with relational database systems: many computationally intensive operations are performed on flat files rather than exploiting high-performance database set operations.
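The dispensable-attribute idea underlying phase two can be sketched with the classical dependency degree: an attribute is dispensable when removing it leaves the degree of dependency of the decision on the condition attributes unchanged. The data layout and helper names below are illustrative assumptions; only the dependency-degree notion comes from standard rough set theory.

```python
# Sketch of rough-set dependency-based attribute reduction.

def positive_region(rows, cond, dec):
    """Objects whose cond-equivalence class maps to a single decision value."""
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in cond), []).append(i)
    pos = set()
    for members in classes.values():
        if len({rows[i][dec] for i in members}) == 1:
            pos.update(members)
    return pos

def dependency(rows, cond, dec):
    """Degree of dependency gamma(cond, dec) = |POS| / |U|."""
    return len(positive_region(rows, cond, dec)) / len(rows)

# Toy decision table: condition attributes a, b and decision d.
rows = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "no"},
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
]
full = dependency(rows, ["a", "b"], "d")
without_b = dependency(rows, ["a"], "d")
print(full == without_b)  # b is dispensable in this toy table -> True
```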
Moreover, little attention has been paid to designing new rough set models that effectively combine database technologies to generate core attributes and reducts, so as to make their computation efficient and scalable on large datasets. To overcome this problem, a New Rough Sets Model Based on Database Systems has been introduced [ , ], which redefines rough set concepts such as core attributes and reducts in terms of relational algebra, so that they can be computed with very efficient set-oriented database operations, such as Cardinality (to denote the count) and Projection. The paper is organized as follows: Section gives an overview of rough set theory based on the model proposed by Pawlak [ , ], with some examples. In Section , we give an …
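The database-oriented core test can be sketched with plain Python sets standing in for the relational Projection and Cardinality operators. The exact criterion used below is an assumption modeled on database-oriented rough set formulations: an attribute a belongs to the core iff Card(Π(C − {a} ∪ {D})) ≠ Card(Π(C − {a})), i.e. dropping a makes the projected table inconsistent with respect to the decision.

```python
# Sketch of core-attribute computation via projection cardinalities.

def card_projection(rows, attrs):
    """Cardinality of the projection of the table onto attrs (distinct rows)."""
    return len({tuple(row[a] for a in attrs) for row in rows})

def core_attributes(rows, cond, dec):
    core = []
    for a in cond:
        rest = [c for c in cond if c != a]
        # a is a core attribute iff adding the decision column changes
        # the number of distinct rows of the projection without a.
        if card_projection(rows, rest + [dec]) != card_projection(rows, rest):
            core.append(a)
    return core

rows = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "no"},
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
]
print(core_attributes(rows, ["a", "b"], "d"))  # -> ['a']
```

In a relational database these two cardinalities are simply `COUNT(DISTINCT ...)` over projections, which is what makes the computation set-oriented and scalable.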
Rough set theory is a method for dealing with the vagueness and uncertainty inherent in decision making. Data mining is a discipline with important contributions to data analysis, the discovery of new meaningful knowledge, and autonomous decision making, and rough set theory offers a viable approach for extracting decision rules from data. This paper introduces the fundamental concepts of rough set theory and other aspects of data mining, with a discussion of data representation in rough set theory, including attribute-value pair blocks, information tables, reducts, the indiscernibility relation, and decision tables. The rough set approach to lower and upper approximations and to certain and possible rule sets is also introduced. Finally, some applications of data mining systems based on rough set theory are described.
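The lower and upper approximations mentioned above can be sketched directly from the indiscernibility relation. The table contents and names below are illustrative assumptions; the construction itself (equivalence classes fully inside vs. merely touching the target set) is the standard definition.

```python
# Sketch of lower/upper approximation from an indiscernibility relation.

def equivalence_classes(rows, attrs):
    """Group object indices by their values on attrs (indiscernibility)."""
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(classes.values())

def approximations(rows, attrs, target):
    """Lower: classes fully inside target. Upper: classes that touch it."""
    lower, upper = set(), set()
    for cls in equivalence_classes(rows, attrs):
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

rows = [
    {"color": "red"},    # object 0
    {"color": "red"},    # object 1
    {"color": "blue"},   # object 2
]
target = {0, 2}  # set of objects we want to approximate
lower, upper = approximations(rows, ["color"], target)
print(sorted(lower), sorted(upper))  # -> [2] [0, 1, 2]
```

Objects in the lower approximation support certain rules; objects only in the upper approximation support possible rules.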
International Journal of Computer Science and Information Technologies
Retrieving the Missing Information from Information Systems Using Rough Set, Covering Based Rough Set and Soft Set
2016 •
In this paper, we study various aspects of rough set theory, covering-based rough sets, and soft set theory for handling missing information in information systems. Rough set theory, based on the indiscernibility relation, helps in developing automated computational systems with a mathematical model for understanding and handling imperfect knowledge. Covering-based rough sets extend the basic rough set model, and soft set theory can be applied to decision-making problems that involve uncertainty. These concepts are applied to a real-life malaria disease dataset to fill in missing information in the malaria disease information table, and the results are compared.
2011 •
The paper deals with decision rule synthesis in the domain of expert systems and knowledge-based systems. These systems incorporate expert knowledge, which is often expressed in If-Then form. Because it is very hard for experts to formally articulate their knowledge, automated decision rule composing algorithms have been used, often based on rough set theory. Originally, rules are composed from a data table using an equivalence relation; in this paper we investigate rules based on a dominance relation. The main goal of this paper is to single out the possible benefits and advantages of dominance-relation-based rules over equivalence-relation-based rules.
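The dominance relation contrasted with the equivalence relation above can be sketched as follows: object x dominates y when x is at least as good as y on every criterion. The criteria and values are illustrative assumptions; dominance-based rough set approaches build rules from unions of ordered decision classes rather than from plain equivalence classes.

```python
# Sketch of the dominance relation used in dominance-based rule induction.

def dominates(x, y, criteria):
    """True if x is at least as good as y on every (gain-type) criterion."""
    return all(x[c] >= y[c] for c in criteria)

students = {
    "s1": {"math": 5, "physics": 4},
    "s2": {"math": 4, "physics": 4},
    "s3": {"math": 3, "physics": 5},
}
criteria = ["math", "physics"]
print(dominates(students["s1"], students["s2"], criteria))  # -> True
print(dominates(students["s1"], students["s3"], criteria))  # -> False (worse on physics)
```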
International Journal of Fuzzy Systems
A Novel Decision-Making Method Based on Rough Fuzzy Information
2017 •
Proceedings of World Academy of …
An Intelligent Approach of Rough Set in Knowledge Discovery Databases
2007 •
Abstract: Knowledge Discovery in Databases (KDD) has evolved into an important and active area of research because of the theoretical challenges and practical applications associated with the problem of discovering (or extracting) interesting and previously unknown knowledge ...
International Journal of Database Theory and Application
A Rough Set Based Classification Model for the Generation of Decision Rules
2014 •
Journal of Fuzzy Set Valued Analysis
Fuzzy-rough set and fuzzy ID3 decision approaches to knowledge discovery in datasets
2012 •