Attribute Reduction Based on Lift and Random Sampling
Abstract
1. Introduction
2. Preliminaries
2.1. Basic Concept of Rough Set
2.2. Attribute Reduction
- (1) A meets the constraint $C_{\rho}$;
- (2) $\forall B \subset A$, $B$ does not meet the constraint $C_{\rho}$.
- (1) The greater the degree of the constraint obtained by $\rho$, the better the level of approximation being described. One example of such a metric is the definition of approximation quality.
- (2) The less the degree of the constraint we derive by utilizing $\rho$, the less uncertainty remains in the approximation of the dataset. One example of such a metric is the decision error rate.
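For concreteness, one standard instance of the first family of measures is the approximation quality (dependency degree); assuming the usual equivalence-class granulation, it can be written as:

```latex
\gamma_{A}(D) = \frac{\left|\mathrm{POS}_{A}(D)\right|}{|U|}
             = \frac{\left|\bigcup_{X \in U/\mathrm{IND}(D)} \underline{A}(X)\right|}{|U|},
```

and a natural constraint $C_{\rho}$ is then $\gamma_{A}(D) \ge \gamma_{AT}(D)$: the candidate subset $A$ must describe the decision classes at least as well as the full attribute set $AT$.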
2.3. Obtaining Reduct
Algorithm 1: Forward Greedy Searching for Attribute Reduction (FGSAR)
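Because Algorithm 1 is the baseline that every later strategy modifies, a minimal Python sketch of forward greedy searching may help fix ideas. The equivalence-class granulation, the `gamma` measure, and the stopping constraint below are our illustrative assumptions, not the paper's exact formulation:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group sample indices into the equivalence classes induced by attrs."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def gamma(rows, labels, attrs):
    """Approximation quality: fraction of samples in the positive region."""
    if not attrs:
        return 0.0
    pos = sum(len(block) for block in partition(rows, attrs)
              if len({labels[i] for i in block}) == 1)  # consistent block
    return pos / len(rows)

def fgsar(rows, labels, attributes):
    """Forward greedy searching: repeatedly add the attribute that raises
    the measure most, until the constraint (matching the approximation
    quality of the full attribute set) is satisfied."""
    target = gamma(rows, labels, attributes)
    reduct = []
    while gamma(rows, labels, reduct) < target:
        best = max((a for a in attributes if a not in reduct),
                   key=lambda a: gamma(rows, labels, reduct + [a]))
        reduct.append(best)
    return reduct
```

For example, `fgsar([(1, 0), (1, 1), (0, 1)], ['y', 'n', 'n'], [0, 1])` returns `[1]`, since attribute 1 alone already separates the decision classes.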
2.4. Strategy Based on Sample
- Bucket-based searching for attribute reduction (BBSAR). Liu et al. [10] considered the bucket method for quickly obtaining a reduct. Their technique applies a hash function so that every sample in the dataset is mapped into one of several separate buckets. By the intrinsic properties of such a mapping, only samples from the same bucket need to be compared, rather than samples from the entire dataset, so the computational burden of information granulation can be reduced. The procedure for obtaining a reduct by the bucket-based strategy is therefore identical to Algorithm 1, except for the device of information granulation; a minimal sketch of that device follows.
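In the hedged sketch below, the dictionary's hashing routes each sample to a bucket keyed by its attribute values, so consistency is checked bucket by bucket instead of across the whole dataset. The concrete key and consistency test are illustrative choices, not necessarily those of [10]:

```python
from collections import defaultdict

def bucket_positive_region(rows, labels, attrs):
    """Hash-based granulation: samples sharing a bucket are exactly the
    candidates that must be compared with one another."""
    buckets = defaultdict(list)
    for i, row in enumerate(rows):
        buckets[tuple(row[a] for a in attrs)].append(i)  # hashed routing
    pos = []
    for members in buckets.values():
        # only within-bucket comparisons are needed
        if len({labels[i] for i in members}) == 1:
            pos.extend(members)
    return pos
```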
- Positive approximation for attribute reduction (PAAR). Qian et al. [11] presented a method based on positive approximation to calculate a reduct rapidly. The essence of positive approximation is to gradually compress the sample space, with the compression guided by the values associated with the constraint $C_{\rho}$. The precise steps of attribute reduction based on positive approximation are as follows (a compact sketch is given after the list).
- (1) Initialize the hypothetical reduct $A = \emptyset$ and the compressed sample space $U_1 = U$.
- (2) Using the constraint $C_{\rho}$, analyse every hypothetical candidate $b \in AT - A$ over the compressed space $U_i$.
- (3) According to the acquired constraint values, choose one qualified attribute $b$ and merge it into $A$.
- (4) Based on $A$, calculate the positive region and then reconstruct the compressed sample space $U_{i+1}$.
- (5) If the specified constraint is met, output the reduct $A$; otherwise, return to step (2).
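A compact sketch of this loop, reusing `gamma` and `partition` from the FGSAR sketch, is given below; the compression rule, dropping samples once they enter the positive region of $A$, is our reading of [11]:

```python
def paar(rows, labels, attributes):
    """Positive-approximation accelerated reduction: candidate attributes
    are scored only over the compressed sample space, which shrinks each
    round."""
    target = gamma(rows, labels, attributes)
    reduct, space = [], list(range(len(rows)))   # step (1): A = {}, U1 = U
    while gamma(rows, labels, reduct) < target:
        sub_rows = [rows[i] for i in space]
        sub_labels = [labels[i] for i in space]
        # steps (2)-(3): evaluate candidates on the compressed space only
        best = max((a for a in attributes if a not in reduct),
                   key=lambda a: gamma(sub_rows, sub_labels, reduct + [a]))
        reduct.append(best)
        # step (4): drop samples that entered the positive region of A
        settled = set()
        for block in partition(sub_rows, reduct):
            if len({sub_labels[j] for j in block}) == 1:
                settled.update(space[j] for j in block)
        space = [i for i in space if i not in settled]
    return reduct
```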
- Random sampling accelerator for attribute reduction (RSAR). Chen et al. [12] examined the above-mentioned strategies and found that: (1) regardless of the searching technique, information granulation over the entire dataset is required; (2) information granulation over the dataset must be regenerated in each iteration throughout the entire searching process. In this respect, the sample distribution may influence the effectiveness of searching, so the two strategies above have their own restrictions, since both are directly tied to the sample distribution. The restriction of BBSAR is that the bucket strategy becomes inefficient, and thus time-consuming, when the samples are too concentrated. The restriction of PAAR is that the sample distribution strongly affects the construction of the positive approximation, and hence the effectiveness of attribute reduction. In view of this, Chen et al. [12] developed a new random sampling strategy. The exact structure of random sampling used to derive a reduct is as follows (a sketch is given after the list).
- (1) Randomly divide the samples into $n$ sample groups of equal size: $U_1, U_2, \ldots, U_n$.
- (2) Compute the reduct $A_1$ over $U_1$; $A_1$ then provides advice for computing the reduct $A_2$ over $U_2$, and so on.
- (3) Take the reduct $A_n$ and use it as the ultimate reduct over the entire dataset.
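Under the same assumptions as the earlier sketches, the random-sampling skeleton can be written as follows; treating the previous reduct as the starting point of the next round is our reading of how one reduct "provides advice" for the next in [12]:

```python
import random

def rsar(rows, labels, attributes, n_groups=5, seed=0):
    """Random sampling accelerator: each random group is reduced in turn,
    and every round starts from the reduct advised by the previous one."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    groups = [idx[k::n_groups] for k in range(n_groups)]  # near-equal split
    reduct = []
    for g in groups:
        g_rows = [rows[i] for i in g]
        g_labels = [labels[i] for i in g]
        target = gamma(g_rows, g_labels, attributes)
        # grow the advised reduct only if it fails the constraint here
        while gamma(g_rows, g_labels, reduct) < target:
            best = max((a for a in attributes if a not in reduct),
                       key=lambda a: gamma(g_rows, g_labels, reduct + [a]))
            reduct.append(best)
    return reduct
```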
3. Lift for Attribute Reduction
3.1. Theoretical Foundations
- (1) Collect all cluster centers into the first sample group $U_1$.
- (2) Randomly divide the remaining samples into $n - 1$ sample groups of equal size: $U_2, U_3, \ldots, U_n$.
- (3) Compute the first reduct $A_1$ over $U_1$; $A_1$ then provides advice for computing the second reduct $A_2$ over $U_2$; $A_2$ in turn provides advice for computing $A_3$ over $U_3$, and so on.
- (4) Take the $n$-th reduct $A_n$ and use it as the ultimate reduct over the entire dataset (a sketch of the center-collection step follows).
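A hedged sketch of the center-collection step (1) is shown below. Following the clustering idea of LIFT [16], it runs k-means within each label and keeps the samples nearest to the centers; the `ratio` parameter controlling the cluster count and the nearest-sample mapping are our illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def lift_first_group(X, y, ratio=0.1, seed=0):
    """Cluster the samples of each label and return the indices of the
    samples nearest to the cluster centers; these representatives form U1."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    centers_idx = []
    for label in np.unique(y):
        members = np.where(y == label)[0]
        k = max(1, int(ratio * len(members)))      # clusters per label
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[members])
        for c in km.cluster_centers_:
            # map each synthetic center back to its nearest real sample
            d = np.linalg.norm(X[members] - c, axis=1)
            centers_idx.append(int(members[np.argmin(d)]))
    return sorted(set(centers_idx))
```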
3.2. Detailed Algorithm
Algorithm 2: Attribute Reduction Based on Lift and Random Sampling (ARLRS)
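Read as a composition of the previous sketches, Algorithm 2 might look as follows; `lift_first_group`, `gamma`, and the chained greedy loop are the hypothetical pieces defined above, and the categorical `gamma` stands in for whatever granulation the paper actually applies to real-valued data:

```python
import random

def arlrs(X, y, attributes, n_groups=5, seed=0):
    """Sketch of Algorithm 2: lift-selected representatives form the first
    group; the remaining samples are split at random, and the reducts are
    chained from group to group exactly as in the RSAR sketch."""
    rows = [tuple(r) for r in X]
    u1 = lift_first_group(X, y, seed=seed)          # step (1): centers -> U1
    rest = [i for i in range(len(rows)) if i not in set(u1)]
    random.Random(seed).shuffle(rest)               # step (2): random groups
    size = max(1, len(rest) // max(1, n_groups - 1))
    groups = [u1] + [rest[k:k + size] for k in range(0, len(rest), size)]
    reduct = []
    for g in groups:                                # steps (3)-(4)
        g_rows = [rows[i] for i in g]
        g_labels = [y[i] for i in g]
        target = gamma(g_rows, g_labels, attributes)
        while gamma(g_rows, g_labels, reduct) < target:
            best = max((a for a in attributes if a not in reduct),
                       key=lambda a: gamma(g_rows, g_labels, reduct + [a]))
            reduct.append(best)
    return reduct
```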
4. Analysis of Experiment
4.1. Datasets
4.2. Basic Experiment Setting
4.3. Time Consumption
4.4. Classification Performance
4.5. Discussion
- (1) The higher the dimension of the dataset, the more information the selected samples can carry, and the better that information can guide the subsequent attribute reduction. Therefore, the ARLRS algorithm performs well in handling big data.
- (2) Owing to the low value density of big data, it is often necessary to preferentially extract relevant and useful information from massive datasets. Introducing Lift addresses this need well: the representative samples found by LIFT give the otherwise unclear information under each label a usable structure from the perspective of samples, so that different samples form certain associations. Such preprocessing is very effective for big data, so ARLRS has clear advantages in processing big data.
5. Conclusions and Future Perspectives
- (1) In light of the uncertainty existing in constraints and classification performance, the resultant reduct may lead to over-fitting. In future investigations, we will therefore try to balance the efficiency of attribute reduction against classification performance.
- (2) The attribute reduction strategy presented in this research is applied only from the samples' perspective. We will therefore investigate novel algorithms that take both samples and attributes into account to further improve efficiency.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Chen, D.G.; Yang, Y.Y.; Dong, Z. An incremental algorithm for attribute reduction with variable precision rough sets. Appl. Soft Comput. 2016, 45, 129–149.
- Jiang, Z.H.; Yang, X.B.; Yu, H.L.; Liu, D.; Wang, P.X.; Qian, Y.H. Accelerator for multi-granularity attribute reduction. Knowl.-Based Syst. 2019, 177, 145–158.
- Ju, H.R.; Yang, X.B.; Yu, H.L.; Li, T.J.; Yu, D.J.; Yang, J.Y. Cost-sensitive rough set approach. Inf. Sci. 2016, 355–356, 282–298.
- Qian, Y.H.; Wang, Q.; Cheng, H.H.; Liang, J.Y.; Dang, C.Y. Fuzzy-rough feature selection accelerator. Fuzzy Sets Syst. 2015, 258, 61–78.
- Wei, W.; Cui, J.B.; Liang, J.Y.; Wang, J.H. Fuzzy rough approximations for set-valued data. Inf. Sci. 2016, 360, 181–201.
- Wei, W.; Wu, X.Y.; Liang, J.Y.; Cui, J.B.; Sun, Y.J. Discernibility matrix based incremental attribute reduction for dynamic data. Knowl.-Based Syst. 2018, 140, 142–157.
- Dong, L.J.; Chen, D.G. Incremental attribute reduction with rough set for dynamic datasets with simultaneously increasing samples and attributes. Int. J. Mach. Learn. Cybern. 2020, 11, 213–227.
- Zhang, A.; Chen, Y.; Chen, L.; Chen, G.T. On the NP-hardness of scheduling with time restrictions. Discret. Optim. 2017, 28, 54–62.
- Guan, L.H. A heuristic algorithm of attribute reduction in incomplete ordered decision systems. J. Intell. Fuzzy Syst. 2019, 36, 3891–3901.
- Liu, Y.; Huang, W.L.; Jiang, Y.L.; Zeng, Z.Y. Quick attribute reduct algorithm for neighborhood rough set model. Inf. Sci. 2014, 271, 65–81.
- Qian, Y.H.; Liang, J.Y.; Pedrycz, W.; Dang, C.Y. Positive approximation: An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618.
- Chen, Z.; Liu, K.Y.; Yang, X.B.; Fujita, H. Random sampling accelerator for attribute reduction. Int. J. Approx. Reason. 2022, 140, 75–91.
- Wang, K.; Thrampoulidis, C. Binary classification of Gaussian mixtures: Abundance of support vectors, benign overfitting, and regularization. SIAM J. Math. Data Sci. 2022, 4, 260–284.
- Bejani, M.M.; Ghatee, M. A systematic review on overfitting control in shallow and deep neural networks. Artif. Intell. Rev. 2021, 54, 6391–6438.
- Park, Y.; Ho, J.C. Tackling overfitting in boosting for noisy healthcare data. IEEE Trans. Knowl. Data Eng. 2021, 33, 2995–3006.
- Zhang, M.L.; Wu, L. LIFT: Multi-label learning with label-specific features. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 107–120.
- Hu, Q.H.; Zhang, L.; Chen, D.G.; Pedrycz, W.; Yu, D. Gaussian kernel based fuzzy rough sets: Model, uncertainty measures and applications. Int. J. Approx. Reason. 2010, 51, 453–471.
- Sun, L.; Wang, L.Y.; Ding, W.P.; Qian, Y.H.; Xu, J.C. Feature selection using fuzzy neighborhood entropy-based uncertainty measures for fuzzy neighborhood multigranulation rough sets. IEEE Trans. Fuzzy Syst. 2021, 29, 19–33.
- Wang, Z.C.; Hu, Q.H.; Qian, Y.H.; Chen, D.E. Label enhancement-based feature selection via fuzzy neighborhood discrimination index. Knowl.-Based Syst. 2022, 250, 109119.
- Li, W.T.; Wei, Y.L.; Xu, W.H. General expression of knowledge granularity based on a fuzzy relation matrix. Fuzzy Sets Syst. 2022, 440, 149–163.
- Liu, F.L.; Zhang, B.W.; Ciucci, D.; Wu, W.Z.; Min, F. A comparison study of similarity measures for covering-based neighborhood classifiers. Inf. Sci. 2018, 448–449, 1–17.
- Ma, X.A.; Yao, Y.Y. Min-max attribute-object bireducts: On unifying models of reducts in rough set theory. Inf. Sci. 2019, 501, 68–83.
- Xu, T.H.; Wang, G.Y.; Yang, J. Finding strongly connected components of simple digraphs based on granulation strategy. Int. J. Approx. Reason. 2020, 118, 64–78.
- Jia, X.Y.; Rao, Y.; Shang, L.; Li, T.J. Similarity-based attribute reduction in rough set theory: A clustering perspective. Int. J. Mach. Learn. Cybern. 2020, 11, 1047–1060.
- Ding, W.P.; Pedrycz, W.; Triguero, I.; Cao, Z.H.; Lin, C.T. Multigranulation supertrust model for attribute reduction. IEEE Trans. Fuzzy Syst. 2021, 29, 1395–1408.
- Chu, X.L.; Sun, B.D.; Chu, X.D.; Wu, J.Q.; Han, K.Y.; Zhang, Y.; Huang, Q.C. Multi-granularity dominance rough concept attribute reduction over hybrid information systems and its application in clinical decision-making. Inf. Sci. 2022, 597, 274–299.
- Yuan, Z.; Chen, H.M.; Yang, X.L.; Li, T.R.; Liu, K.Y. Fuzzy complementary entropy using hybrid-kernel function and its unsupervised attribute reduction. Knowl.-Based Syst. 2021, 231, 107398.
- Zhang, Q.Y.; Chen, Y.; Zhang, G.Q.; Li, Z.; Chen, L.; Wen, C.F. New uncertainty measurement for categorical data based on fuzzy information structures: An application in attribute reduction. Inf. Sci. 2021, 580, 541–577.
- Ding, W.P.; Wang, J.D.; Wang, J.H. Multigranulation consensus fuzzy-rough based attribute reduction. Knowl.-Based Syst. 2020, 198, 105945.
- Chen, Y.; Yang, X.B.; Li, J.H.; Wang, P.X.; Qian, Y.H. Fusing attribute reduction accelerators. Inf. Sci. 2022, 587, 354–370.
- Yan, W.W.; Ba, J.; Xu, T.H.; Yu, H.L.; Shi, J.L.; Han, B. Beam-influenced attribute selector for producing stable reduct. Mathematics 2022, 10, 533.
- Ganguly, S.; Bhowal, P.; Oliva, D.; Sarkar, R. BLeafNet: A Bonferroni mean operator based fusion of CNN models for plant identification using leaf image classification. Ecol. Inform. 2022, 69, 101585.
- Zhang, C.F.; Feng, Z.L. Convolutional analysis operator learning for multifocus image fusion. Signal Process. Image Commun. 2022, 103, 116632.
- Jiang, Z.H.; Dou, H.L.; Song, J.J.; Wang, P.X.; Yang, X.B.; Qian, Y.H. Data-guided multi-granularity selector for attribute reduction. Appl. Intell. 2021, 51, 876–888.
- Kanungo, T.; Mount, D.M.; Netanyahu, N.S.; Piatko, C.D.; Silverman, R.; Wu, A.Y. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 881–892.
- Quinlan, J.R. Simplifying decision trees. Int. J. Hum.-Comput. Stud. 1999, 51, 497–510.
- Street, N.; Wolberg, W.; Mangasarian, O. Nuclear feature extraction for breast tumor diagnosis. Int. Symp. Electron. Imaging Sci. Technol. 1993, 861–870.
- Ayres-de-Campos, D.; Bernardes, J.; Garrido, A.; Marques-de-Sá, J.; Pereira-Leite, L. SisPorto 2.0: A program for automated analysis of cardiotocograms. J. Matern.-Fetal Med. 2000, 9, 311–318.
- Gorman, R.P.; Sejnowski, T.J. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Netw. 1988, 1, 75–89.
- Johnson, B.A.; Iizuka, K. Integrating OpenStreetMap crowdsourced data and Landsat time-series imagery for rapid land use/land cover (LULC) mapping: Case study of the Laguna de Bay area of the Philippines. Appl. Geogr. 2016, 67, 140–149.
- Antal, B.; Hajdu, A. An ensemble-based system for automatic screening of diabetic retinopathy. Knowl.-Based Syst. 2014, 60, 20–27.
- Díaz-Chito, K.; Hernàndez, A.; López, A. A reduced feature set for driver head pose estimation. Appl. Soft Comput. 2016, 45, 98–107.
- Johnson, B.; Tateishi, R.; Xie, Z. Using geographically-weighted variables for image classification. Remote Sens. Lett. 2012, 3, 491–499.
- Evett, I.W.; Spiehler, E.J. Rule induction in forensic science. Knowl.-Based Syst. 1989, 152–160. Available online: https://dl.acm.org/doi/abs/10.5555/67040.67055 (accessed on 15 August 2022).
- Sigillito, V.; Wing, S.; Hutton, L.; Baker, K. Classification of radar returns from the ionosphere using neural networks. Johns Hopkins APL Tech. Dig. 1989, 10, 876–890.
- Bock, R.K.; Chilingarian, A.; Gaug, M.; Hakl, F.; Hengstebeck, T.; Jiřina, M.; Klaschka, J.; Kotrč, E.; Savický, P.; Towers, S.; et al. Methods for multidimensional event classification: A case study using images from a Cherenkov gamma-ray telescope. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip. 2004, 516, 511–528.
- Sakar, B.E.; Isenkul, M.E.; Sakar, C.O.; Sertbas, A.; Gurgen, F.; Delil, S.; Apaydin, H.; Kursun, O. Collection and analysis of a Parkinson speech dataset with multiple types of sound recordings. IEEE J. Biomed. Health Inform. 2013, 17, 828–834.
- Mansouri, K.; Ringsted, T.; Ballabio, D.; Todeschini, R.; Consonni, V. Quantitative structure–activity relationship models for ready biodegradability of chemicals. J. Chem. Inf. Model. 2013, 53, 867–878.
- Dietterich, T.G.; Lathrop, R.H.; Lozano-Pérez, T. Solving the multiple instance problem with axis-parallel rectangles. Artif. Intell. 1997, 89, 31–71.
- Malerba, D.; Esposito, F.; Semeraro, G. A further comparison of simplification methods for decision-tree induction. In Learning from Data; Springer: New York, NY, USA, 1996; pp. 365–374.
- Johnson, B.; Xie, Z. Classifying a high resolution image of an urban area using super-object information. ISPRS J. Photogramm. Remote Sens. 2013, 83, 40–49.
- Fernandes, K.; Cardoso, J.S.; Fernandes, J. Transfer learning with partial observability applied to cervical cancer screening. Pattern Recognit. Image Anal. 2017, 243–250.
- Sun, L.; Wang, T.X.; Ding, W.P.; Xu, J.C.; Lin, Y.J. Feature selection using Fisher score and multilabel neighborhood rough sets for multilabel classification. Inf. Sci. 2021, 578, 887–912.
- Luo, S.; Miao, D.Q.; Zhang, Z.F.; Zhang, Y.J.; Hu, S.D. A neighborhood rough set model with nominal metric embedding. Inf. Sci. 2020, 520, 373–388.
- Chu, X.L.; Sun, B.Z.; Li, X.; Han, K.Y.; Wu, J.Q.; Zhang, Y.; Huang, Q.C. Neighborhood rough set-based three-way clustering considering attribute correlations: An approach to classification of potential gout groups. Inf. Sci. 2020, 535, 28–41.
- Shu, W.H.; Qian, W.B.; Xie, Y.H. Incremental feature selection for dynamic hybrid data using neighborhood rough set. Knowl.-Based Syst. 2020, 194, 105516.
- Wan, J.H.; Chen, H.M.; Yuan, Z.; Li, T.R.; Sang, B.B. A novel hybrid feature selection method considering feature interaction in neighborhood rough set. Knowl.-Based Syst. 2021, 227, 107167.
- Sang, B.B.; Chen, H.M.; Yang, L.; Li, T.R.; Xu, W.H.; Luo, W.H. Feature selection for dynamic interval-valued ordered data based on fuzzy dominance neighborhood rough set. Knowl.-Based Syst. 2021, 227, 107223.
- Hu, Q.H.; Xie, Z.X.; Yu, D. Hybrid attribute reduction based on a novel fuzzy-rough model and information granulation. Pattern Recognit. 2007, 40, 3509–3521.
- Jensen, R.; Shen, Q. Fuzzy–rough attribute reduction with application to web categorization. Fuzzy Sets Syst. 2004, 141, 469–485.
- Xu, S.P.; Yang, X.B.; Yu, H.L.; Yu, D.J.; Yang, J.Y.; Tsang, E.C. Multi-label learning with label-specific feature reduction. Knowl.-Based Syst. 2016, 104, 52–61.
- Chen, Y.; Liu, K.Y.; Song, J.J.; Yang, X.B.; Qian, Y.H. Attribute group for attribute reduction. Inf. Sci. 2020, 535, 64–80.
ID | Datasets | # Samples | # Attributes | # Labels | Attributes Type |
---|---|---|---|---|---|
1 | Australian Credit Approval [36] | 690 | 14 | 2 | Real |
2 | Breast Cancer Wisconsin (Diagnostic) [37] | 569 | 30 | 2 | Real |
3 | Cardiotocography [38] | 2126 | 21 | 10 | Real |
4 | Connectionist Bench (Sonar, Mines vs. Rocks) [39] | 208 | 60 | 2 | Real |
5 | Crowdsourced Mapping [40] | 10,845 | 28 | 6 | Real |
6 | Diabetic Retinopathy Debrecen [41] | 1151 | 19 | 2 | Integer & Real |
7 | DrivFace [42] | 606 | 6400 | 3 | Real |
8 | Forest Type Mapping [43] | 523 | 27 | 4 | Integer & Real |
9 | Glass Identification [44] | 214 | 9 | 6 | Real |
10 | Ionosphere [45] | 351 | 34 | 2 | Integer & Real |
11 | MAGIC Gamma Telescope [46] | 19,020 | 28 | 6 | Real |
12 | Parkinson Multiple Sound Recording [47] | 1208 | 26 | 2 | Real |
13 | QSAR Biodegradation [48] | 1055 | 41 | 2 | Real |
14 | Musk (Version 1) [49] | 476 | 166 | 2 | Integer |
15 | Page Blocks Classification [50] | 5473 | 10 | 5 | Integer & Real |
16 | Urban Land Cover [51] | 675 | 147 | 9 | Integer & Real |
17 | Quality Assessment of Digital Colposcopies [52] | 287 | 62 | 2 | Real |
ID | ARLRS | RSAR | FGSAR | PAAR | BBSAR | AGAR |
---|---|---|---|---|---|---|
1 | 1.3439 | 1.6046 | 1.9683 | 1.9533 | 3.0378 | 2.0001 |
2 | 2.8514 | 3.2616 | 3.7553 | 3.6474 | 4.9878 | 6.3496 |
3 | 1.7156 | 2.3740 | 3.6215 | 3.3452 | 4.0248 | 2.6923 |
4 | 3.114 | 3.8712 | 4.6534 | 4.4532 | 4.9458 | 6.3526 |
5 | 0.0331 | 0.0351 | 0.3561 | 0.3424 | 0.4138 | 0.0493 |
6 | 1.5043 | 1.6542 | 3.3462 | 3.3977 | 2.7745 | 3.4527 |
7 | 21.5071 | 21.6074 | 22.9543 | 22.6464 | 32.7441 | 24.2342 |
8 | 3.2707 | 4.9173 | 10.2732 | 10.2480 | 4.4253 | 4.5429 |
9 | 36.6825 | 39.9173 | 55.4524 | 54.2809 | 46.355 | 40.6939 |
10 | 2.2156 | 2.3740 | 3.5643 | 3.5754 | 4.0248 | 2.5551 |
11 | 32.6542 | 34.7645 | 35.3475 | 35.3342 | 35.4578 | 34.9578 |
12 | 2.5127 | 2.7545 | 3.1642 | 3.0008 | 2.8361 | 3.147 |
13 | 9.6467 | 11.9454 | 13.6468 | 13.4522 | 12.5647 | 12.3949 |
14 | 16.9655 | 18.3361 | 19.7468 | 19.2312 | 19.1485 | 18.5435 |
15 | 44.4554 | 46.1265 | 48.4641 | 48.8529 | 45.5659 | 48.4844 |
16 | 3.6125 | 4.0415 | 4.8353 | 4.1237 | 4.7264 | 4.9645 |
17 | 38.4124 | 41.5415 | 43.8515 | 43.4529 | 45.4554 | 44.5455 |
ID | RSAR | FGSAR | PAAR | BBSAR | AGAR |
---|---|---|---|---|---|
1 | 16.37 | 31.19 | 41.17 | 30.34 | 32.81 |
2 | 12.58 | 24.07 | 21.82 | 20.53 | 19.31 |
3 | 27.73 | 35.12 | 27.82 | 18.21 | 26.34 |
4 | 19.63 | 33.14 | 30.13 | 25.75 | 7.22 |
5 | 2.36 | 23.38 | 2.65 | 2.71 | 17.78 |
6 | 9.06 | 14.64 | 42.91 | 34.52 | 27.98 |
7 | 13.15 | 19.01 | 18.62 | 4.57 | 5.99 |
8 | 8.11 | 33.85 | 32.42 | 34.91 | 9.86 |
9 | 20.13 | 37.84 | 38.03 | 24.24 | 13.29 |
10 | 10.23 | 11.09 | 22.97 | 23.13 | 30.23 |
11 | 6.07 | 7.02 | 7.88 | 7.91 | 6.59 |
12 | 25.97 | 20.59 | 16.27 | 11.40 | 20.16 |
13 | 19.24 | 9.39 | 7.71 | 23.22 | 22.17 |
14 | 25.09 | 18.28 | 13.71 | 12.93 | 6.77 |
15 | 3.62 | 8.27 | 6.00 | 2.44 | 8.31 |
16 | 15.23 | 13.88 | 14.76 | 17.84 | 20.76 |
17 | 31.25 | 4.33 | 6.12 | 7.32 | 15.9 |
avg | 15.63 | 20.29 | 20.64 | 17.79 | 17.14 |
ID | ARLRS | RSAR | FGSAR | PAAR | BBSAR | AGAR |
---|---|---|---|---|---|---|
1 | 7.14 | 7.32 | 8.03 | 8.03 | 8.03 | 7.17 |
2 | 8.67 | 8.21 | 9.19 | 9.19 | 9.19 | 8.11 |
3 | 7.38 | 8.66 | 7.57 | 7.57 | 7.57 | 8.45 |
4 | 7.13 | 7.43 | 8.17 | 8.17 | 8.17 | 7.43 |
5 | 14.34 | 15.32 | 14.38 | 14.38 | 14.38 | 16.64 |
6 | 9.46 | 9.21 | 8.32 | 8.32 | 8.32 | 8.76 |
7 | 8.74 | 9.87 | 7.15 | 7.15 | 7.15 | 9.75 |
8 | 8.65 | 9.07 | 9.25 | 9.25 | 9.25 | 8.98 |
9 | 7.91 | 9.87 | 8.57 | 8.57 | 8.57 | 8.07 |
10 | 8.65 | 8.97 | 8.78 | 8.78 | 8.78 | 9.21 |
11 | 23.87 | 24.58 | 25.32 | 25.32 | 25.32 | 27.98 |
12 | 7.03 | 10.34 | 10.01 | 10.01 | 10.01 | 9.28 |
13 | 6.98 | 9.01 | 8.34 | 8.34 | 8.34 | 7.32 |
14 | 8.75 | 8.65 | 8.90 | 8.90 | 8.90 | 10.76 |
15 | 13.66 | 15.32 | 16.24 | 16.24 | 16.24 | 15.23 |
16 | 14.31 | 15.09 | 15.65 | 15.65 | 15.65 | 16.32 |
17 | 8.02 | 8.06 | 8.19 | 8.19 | 8.19 | 9.15 |
ID | ARLRS | RSAR | FGSAR | PAAR | BBSAR | AGAR |
---|---|---|---|---|---|---|
1 | 0.8555 | 0.8742 | 0.8168 | 0.8168 | 0.8557 | 0.9175 |
2 | 0.9785 | 0.9000 | 0.8534 | 0.8534 | 0.9347 | 0.9456 |
3 | 0.9399 | 0.9320 | 0.8824 | 0.8824 | 0.8356 | 0.8943 |
4 | 0.9569 | 0.9642 | 0.8074 | 0.8074 | 0.9741 | 0.9456 |
5 | 0.8848 | 0.8351 | 0.8022 | 0.8022 | 0.8149 | 0.8624 |
6 | 0.8449 | 0.8272 | 0.9074 | 0.9074 | 0.8736 | 0.8488 |
7 | 0.9089 | 0.9003 | 0.8075 | 0.8075 | 0.8572 | 0.8197 |
8 | 0.8437 | 0.8326 | 0.8731 | 0.8731 | 0.8186 | 0.8279 |
9 | 0.8441 | 0.8741 | 0.8507 | 0.8507 | 0.9117 | 0.7986 |
10 | 0.9575 | 0.9320 | 0.9269 | 0.9269 | 0.8356 | 0.8943 |
11 | 0.9145 | 0.8487 | 0.8366 | 0.8366 | 0.8869 | 0.8747 |
12 | 0.9035 | 0.9055 | 0.9266 | 0.9266 | 0.8845 | 0.8647 |
13 | 0.8364 | 0.8641 | 0.8644 | 0.8644 | 0.9153 | 0.8279 |
14 | 0.9634 | 0.9534 | 0.9128 | 0.9128 | 0.8934 | 0.8634 |
15 | 0.9314 | 0.9169 | 0.8674 | 0.8674 | 0.8467 | 0.8469 |
16 | 0.9299 | 0.9064 | 0.8796 | 0.8796 | 0.9074 | 0.8534 |
17 | 0.8036 | 0.8712 | 0.8642 | 0.8642 | 0.8541 | 0.8779 |
ID | ARLRS | RSAR | FGSAR | PAAR | BBSAR | AGAR |
---|---|---|---|---|---|---|
1 | 0.8396 | 0.8614 | 0.8318 | 0.8318 | 0.8496 | 0.8779 |
2 | 0.8984 | 0.8401 | 0.8302 | 0.8302 | 0.7121 | 0.7326 |
3 | 0.8351 | 0.8741 | 0.9164 | 0.9164 | 0.8813 | 0.8801 |
4 | 0.8433 | 0.8381 | 0.8041 | 0.8041 | 0.8701 | 0.8912 |
5 | 0.9103 | 0.8701 | 0.9328 | 0.9328 | 0.9488 | 0.8468 |
6 | 0.9434 | 0.9362 | 0.9723 | 0.9723 | 0.9356 | 0.9147 |
7 | 0.8841 | 0.8766 | 0.8452 | 0.8452 | 0.8141 | 0.7998 |
8 | 0.8321 | 0.8311 | 0.9723 | 0.9723 | 0.7318 | 0.7323 |
9 | 0.9194 | 0.9248 | 0.9049 | 0.9049 | 0.8722 | 0.8413 |
10 | 0.8954 | 0.8888 | 0.7763 | 0.7763 | 0.8052 | 0.7113 |
11 | 0.9165 | 0.8971 | 0.9460 | 0.9460 | 0.8741 | 0.8492 |
12 | 0.9365 | 0.9207 | 0.9194 | 0.9194 | 0.8279 | 0.7812 |
13 | 0.9153 | 0.8803 | 0.7436 | 0.7436 | 0.7799 | 0.8940 |
14 | 0.8915 | 0.8766 | 0.8622 | 0.8622 | 0.8348 | 0.8786 |
15 | 0.9255 | 0.8903 | 0.8802 | 0.8802 | 0.8584 | 0.9205 |
16 | 0.9044 | 0.8909 | 0.8633 | 0.8633 | 0.8261 | 0.8759 |
17 | 0.8759 | 0.9204 | 0.8816 | 0.8816 | 0.9362 | 0.8294 |