Abstract
An Extreme Learning Machine (ELM) trains a single-hidden-layer feedforward neural network (SLFN) in far less time than the back-propagation algorithm. An ELM assigns random values to the input weights and biases of the hidden layer and then computes the output weights analytically. However, these random values can significantly degrade the performance of the resulting SLFN. The present work adapts three large-scale continuous optimization algorithms (IHDELS, DECC-G, and MOS) to this problem and compares their performance with each other and with the state-of-the-art method, M-ELM, a memetic algorithm based on differential evolution. The comparison shows that IHDELS combined with holdout validation (a training/testing split) obtains the best results, followed by DECC-G and MOS; all three algorithms outperform M-ELM. The experiments were carried out on 38 classification problems widely used by the scientific community, and the results are supported by Friedman and Wilcoxon nonparametric statistical tests.
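To make the training procedure concrete, the following is a minimal sketch of basic ELM training in Python with NumPy, assuming a sigmoid activation and a one-hot target matrix. The function names and the uniform [-1, 1] initialization range are illustrative choices, not the exact setup used in the paper. The hidden layer is fixed at random, and the output weights are obtained analytically via the Moore-Penrose pseudoinverse, which is what makes ELM training fast.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a basic ELM: random hidden layer, analytic output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_classes) one-hot targets.
    """
    rng = np.random.default_rng(seed)
    # Random input weights and hidden biases (the part the paper optimizes)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer activations with a sigmoid transfer function
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights via the Moore-Penrose pseudoinverse (no iterative training)
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predicted class scores for inputs X."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

In the setting studied in the paper, a large-scale continuous optimizer such as IHDELS, DECC-G, or MOS would search over the entries of W and b, using the holdout (training/testing) accuracy of the analytically derived beta as the fitness function.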
Cite this paper
Sotelo, D., Velásquez, D., Cobos, C., Mendoza, M., Gómez, L. (2019). Optimization of Neural Network Training with ELM Based on the Iterative Hybridization of Differential Evolution with Local Search and Restarts. In: Nicosia, G., Pardalos, P., Giuffrida, G., Umeton, R., Sciacca, V. (eds) Machine Learning, Optimization, and Data Science. LOD 2018. Lecture Notes in Computer Science, vol. 11331. Springer, Cham. https://doi.org/10.1007/978-3-030-13709-0_4
Print ISBN: 978-3-030-13708-3
Online ISBN: 978-3-030-13709-0