
Techniques for Automated Machine Learning

Published: 17 January 2021, in ACM SIGKDD Explorations Newsletter, Volume 22, Issue 2 (December 2020)

Abstract

Automated machine learning (AutoML) aims to find optimal machine learning solutions automatically, given a problem description, its task type, and datasets. It can relieve data scientists of the burden of laborious manual tuning and give domain experts without extensive machine learning experience access to off-the-shelf solutions. In this paper, we portray AutoML as a bi-level optimization problem, in which one optimization problem is nested within another to search the search space for the optimum, and we review current developments in AutoML in terms of three categories: automated feature engineering (AutoFE), automated model and hyperparameter tuning (AutoMHT), and automated deep learning (AutoDL). State-of-the-art techniques in the three categories are presented. We propose an iterative solver to generalize AutoML techniques, summarize popular AutoML frameworks, and conclude with current open challenges of AutoML.
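As a sketch of this bi-level view (notation assumed here for illustration, not taken from the paper): let α denote a configuration drawn from the search space A (engineered features, model choice, hyperparameters, or architecture) and w the model parameters. The outer problem selects α by validation loss, while the inner problem fits w by training loss:

\[
\min_{\alpha \in \mathcal{A}} \; \mathcal{L}_{\text{val}}\!\left(w^{*}(\alpha), \alpha\right)
\quad \text{subject to} \quad
w^{*}(\alpha) = \operatorname*{arg\,min}_{w} \; \mathcal{L}_{\text{train}}(w, \alpha).
\]

In this reading, an iterative solver alternates between proposing candidate configurations α from A and (possibly approximately) solving the inner training problem to evaluate them.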

