Empirical Analysis of Machine Learning Algorithms for Multiclass Prediction

Published: 01 January 2022

Abstract

With the emergence of big data and the interest in deriving valuable insights from ever-growing, ever-changing streams of data, machine learning has proven a more effective analytic technique than traditional methodologies. Big data has become a source of substantial business value for almost every industry, and machine learning plays an indispensable role by providing smart data analysis capabilities that uncover hidden patterns. These patterns can then be used to automate aspects of decision-making through machine learning classifiers. This paper presents a state-of-the-art comparative analysis of machine learning and deep learning-based classifiers for multiclass prediction. The experimental setup consists of 11 datasets from different domains, publicly available in the UCI and Kaggle repositories. The classifiers include naïve Bayes (NB), decision trees (DTs), random forest (RF), gradient boosted decision trees (GBDTs), and deep learning-based convolutional neural networks (CNNs). The results show that the ensemble-based GBDTs outperform the other algorithms in terms of accuracy, precision, and recall. RF and CNNs show nearly identical performance on most datasets and outperform the traditional NB and DTs, while NB performs worst overall. Notably, DTs show the lowest precision score on the Titanic dataset, largely because they are prone to overfitting and use a greedy approach to attribute selection.
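The kind of comparison the abstract describes can be sketched with scikit-learn. The snippet below is an illustrative sketch only: the wine dataset, the default hyperparameters, and the macro-averaged metrics are assumptions chosen to produce a minimal runnable example, not the paper's actual experimental setup (which used 11 UCI/Kaggle datasets).

```python
# Illustrative sketch: comparing multiclass classifiers on
# accuracy, macro-averaged precision, and macro-averaged recall.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Stand-in multiclass dataset (3 classes); the paper used 11 UCI/Kaggle datasets.
X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

models = {
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(random_state=42),
    "GBDT": GradientBoostingClassifier(random_state=42),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        # Macro averaging treats every class equally, which suits
        # multiclass evaluation across imbalanced class distributions.
        "precision": precision_score(y_te, pred, average="macro"),
        "recall": recall_score(y_te, pred, average="macro"),
    }

for name, m in results.items():
    print(f"{name}: acc={m['accuracy']:.3f} "
          f"prec={m['precision']:.3f} rec={m['recall']:.3f}")
```

A CNN baseline would require a deep learning framework and per-dataset input shaping, so it is omitted here; the ensemble-vs-single-learner contrast the abstract emphasizes is already visible from this table of scores.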


Published In

Wireless Communications & Mobile Computing, Volume 2022

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Publisher

John Wiley and Sons Ltd.

United Kingdom


Qualifiers

  • Research-article
