A Multi-Tree Genetic Programming-Based Ensemble Approach to Image Classification With Limited Training Data [Research Frontier]

Published: 01 November 2024

Abstract

Large variations across images make image classification a challenging task, and limited training data further increases its difficulty. Genetic programming (GP) has been widely applied to image classification. However, most GP methods directly evolve a single classifier or rely on a predefined classification algorithm, which typically does not generalize well when only a few training instances are available. Ensemble learning often outperforms a single classifier, but GP commonly uses a single-tree representation (each individual contains a single tree), which makes it difficult to train multiple diverse and accurate base learners/classifiers. Therefore, this article proposes a new ensemble construction method based on multi-tree GP (each individual contains multiple trees) for image classification. A single individual forms an ensemble, and its multiple trees serve as the base learners. To find the best individual, in which the trees are diverse and cooperate effectively (i.e., the nth tree corrects the errors of the previous n-1 trees), the new method assigns different weights to the trees using the idea of AdaBoost and performs classification via weighted majority voting. Furthermore, a new tree representation is developed to evolve diverse and accurate base learners that extract useful features and perform classification simultaneously. The new approach achieves significantly better performance than almost all benchmark methods on eight datasets. Additional analyses highlight the effectiveness of the new ensembles and the new tree representation, and demonstrate the potential of the ensemble trees to provide valuable interpretability.
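For intuition only, the snippet below is a minimal sketch of the boosting-style tree weighting and weighted majority voting described above, not the paper's implementation. It assumes each evolved tree of a multi-tree individual already outputs a class label for every image; SAMME-style AdaBoost weights are then computed tree by tree, so each tree is scored against the errors left by the previous ones, and predictions are combined by weighted voting. All function names, array shapes, and the toy data are hypothetical.

```python
import numpy as np

def adaboost_tree_weights(tree_preds, y, n_classes, eps=1e-10):
    """Assign an AdaBoost-style (SAMME) weight to each tree of one individual.

    tree_preds : (T, N) int array, tree_preds[t, i] = class predicted by
                 tree t for training image i (hypothetical input format).
    y          : (N,) int array of true class labels.
    Returns a length-T array of tree weights.
    """
    T, N = tree_preds.shape
    sample_w = np.full(N, 1.0 / N)               # uniform sample weights
    alphas = np.zeros(T)
    for t in range(T):                           # trees are weighted in sequence
        miss = tree_preds[t] != y                # images this tree gets wrong
        err = np.clip(sample_w[miss].sum(), eps, 1.0 - eps)
        alphas[t] = np.log((1.0 - err) / err) + np.log(n_classes - 1)
        sample_w *= np.exp(alphas[t] * miss)     # up-weight the misclassified
        sample_w /= sample_w.sum()               # so the next tree focuses on them
    return alphas

def weighted_majority_vote(tree_preds, alphas, n_classes):
    """Combine the trees' class predictions by weighted majority voting."""
    T, N = tree_preds.shape
    votes = np.zeros((N, n_classes))
    for t in range(T):
        w = max(alphas[t], 0.0)                  # ignore trees worse than chance
        votes[np.arange(N), tree_preds[t]] += w
    return votes.argmax(axis=1)

# Toy check (all values hypothetical): 3 trees, 6 images, 3 classes.
y = np.array([0, 1, 2, 0, 1, 2])
tree_preds = np.array([[0, 1, 2, 0, 1, 1],
                       [0, 1, 2, 1, 1, 2],
                       [0, 0, 2, 0, 1, 2]])
alphas = adaboost_tree_weights(tree_preds, y, n_classes=3)
print(weighted_majority_vote(tree_preds, alphas, n_classes=3))
```

Under this reading of the abstract, an individual's fitness would reward sets of trees that complement one another rather than several copies of the same strong tree.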


Published In

IEEE Computational Intelligence Magazine, Volume 19, Issue 4, Nov. 2024, 70 pages.

Publisher: IEEE Press
