Meta-features for meta-learning

Published: 15 March 2022
Abstract

    Meta-learning is increasingly used to support the recommendation of machine learning algorithms and their configurations. These recommendations are based on meta-data, consisting of performance evaluations of algorithms and characterizations of prior datasets. These characterizations, also called meta-features, describe properties of the data that are predictive of the performance of machine learning algorithms trained on them. Unfortunately, despite being used in many studies, meta-features are not uniformly described, organized and computed, making many empirical studies irreproducible and hard to compare. This paper addresses this problem by systematizing and standardizing data characterization measures for classification datasets used in meta-learning. Moreover, it presents an extensive list of meta-features and characterization tools, which can serve as a guide for new practitioners. By identifying particularities and subtle issues related to the characterization measures, this survey points out possible future directions for the development of meta-features for meta-learning.
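    As a concrete illustration of what such dataset characterization involves, the sketch below extracts a handful of meta-features from a classification dataset using the open-source pymfe package, one characterization tool of the kind discussed here. The chosen feature groups, summary functions, and toy dataset are illustrative assumptions, not a recommendation from the paper.

    import numpy as np
    from sklearn.datasets import load_iris
    from pymfe.mfe import MFE

    # A toy classification dataset to characterize.
    X, y = load_iris(return_X_y=True)

    # A few illustrative meta-feature groups: simple/general measures,
    # statistical measures and information-theoretic measures, each
    # summarized across attributes by its mean and standard deviation.
    mfe = MFE(groups=["general", "statistical", "info-theory"],
              summary=["mean", "sd"])
    mfe.fit(X, y)
    names, values = mfe.extract()

    # Each (name, value) pair is one meta-feature of the dataset, e.g.
    # the number of classes, mean attribute correlation or class entropy.
    for name, value in zip(names, values):
        print(f"{name}: {value}")

    The resulting vector of named values is exactly the kind of meta-data that, together with algorithm performance evaluations on prior datasets, feeds a meta-learner that recommends algorithms for new datasets.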

    Cited By

    • (2024) Do We Really Need Imputation in AutoML Predictive Modeling? ACM Transactions on Knowledge Discovery from Data 18(6), 1–64. https://doi.org/10.1145/3643643
    • (2023) ADGym. Proceedings of the 37th International Conference on Neural Information Processing Systems, 70179–70207. https://doi.org/10.5555/3666122.3669197
    • (2023) Selecting Top-k Data Science Models by Example Dataset. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 2686–2695. https://doi.org/10.1145/3583780.3615051
    • (2023) Model Performance Prediction: A Meta-Learning Approach for Concept Drift Detection. Hybrid Artificial Intelligent Systems, 51–62. https://doi.org/10.1007/978-3-031-40725-3_5
    • (2022) An Ontological Approach for Recommending a Feature Selection Algorithm. Web Engineering, 300–314. https://doi.org/10.1007/978-3-031-09917-5_20

        Published In

        Knowledge-Based Systems, Volume 240, Issue C, March 2022, 830 pages

        Publisher

        Elsevier Science Publishers B.V., Netherlands

        Author Tags

        1. Meta-features
        2. Characterization measures
        3. Meta-learning
        4. Classification problems

        Qualifiers

        • Research-article
