Optimizing extreme learning machine for hyperspectral image classification
Abstract
Extreme learning machine (ELM) is of great interest to the machine learning community due to its extremely simple training step. Its performance sensitivity to the number of hidden neurons is studied in the context of hyperspectral remote sensing image classification. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be estimated easily with two small training sets and then extrapolated to large training sets, greatly reducing computational cost. The kernel version of ELM (KELM) is also implemented with the radial basis function kernel, and the linear relationship still holds. The experimental results demonstrate that when the number of hidden neurons is chosen appropriately, the performance of ELM may be slightly lower than that of the linear SVM, but the performance of KELM is comparable to that of the kernel version of SVM (KSVM). The computational cost of ELM and KELM is much lower than that of the linear SVM and KSVM, respectively.
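To illustrate why the training step is so simple, the sketch below shows a generic ELM classifier in Python/NumPy: hidden-layer weights are random and fixed, and the output weights come from a single least-squares solve. It also includes a hypothetical helper that fits the linear rule relating training-set size to hidden-neuron count from two small calibration runs, as the abstract describes; the function names, sigmoid activation, and calibration procedure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_elm(X, y_onehot, n_hidden, seed=None):
    """Minimal ELM training: random hidden layer + least-squares output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-layer weights and biases are drawn at random and never updated.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    # Output weights from the Moore-Penrose pseudoinverse (one linear solve).
    beta = np.linalg.pinv(H) @ y_onehot
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

def estimate_hidden_neurons(n_train, calib):
    """Hypothetical calibration: fit n_hidden ~ a * n_train + c from two
    small training sets (each a (n_train_small, best_n_hidden) pair found
    by search), then extrapolate to a larger training-set size."""
    (n1, h1), (n2, h2) = calib
    a = (h2 - h1) / (n2 - n1)
    c = h1 - a * n1
    return int(round(a * n_train + c))
```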
© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE) 1931-3195/2015/$25.00 © 2015 SPIE
Jiaojiao Li, Qian Du, Wei Li, and Yunsong Li "Optimizing extreme learning machine for hyperspectral image classification," Journal of Applied Remote Sensing 9(1), 097296 (2 March 2015). https://doi.org/10.1117/1.JRS.9.097296
Published: 2 March 2015
CITATIONS
Cited by 27 scholarly publications.
KEYWORDS
Neurons, Image classification, Hyperspectral imaging, Neural networks, Lithium, Machine learning, Remote sensing