
Classical vs. neural network-based PCA approaches for lossy image compression: Similarities and differences

Published: 08 August 2024

Abstract

The paper describes three lossy data compression techniques based on principal component analysis (PCA), compared on an image compression task. The presented approach uses both the classical PCA method, based on the eigen-decomposition of the image data covariance matrix, and two different neural network structures. The first neural structure is a two-layer feed-forward network with supervised learning acting as a so-called autoencoder; the second is a single-layer network trained with an unsupervised method commonly known as the generalized Hebbian algorithm. For each compression method, the influence of the number of image segments (frames) and the number of retained eigenvalues/eigenvectors on the compression ratio and the image quality is examined using three different gray-scale test images. The paper also addresses a vital implementation issue: selecting appropriate data types to represent the compressed data. Its main conclusion is that the classical PCA method outperforms its neural counterparts, both in the quality of the decompressed image and in the time required to perform the compression procedure. A positive aspect of using neural networks for PCA-based lossy data compression is that they do not require computing the correlation matrix explicitly and can therefore be used in online data acquisition schemes.
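The classical PCA pipeline summarized above (frame the image, eigen-decompose the frame covariance, keep the leading eigenvectors) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the frame size, component count, and function names are assumptions.

```python
import numpy as np

def pca_compress(image, frame=8, k=4):
    """Split a gray-scale image into non-overlapping frame x frame blocks,
    then keep the top-k principal components of the block population."""
    h, w = image.shape
    # Each block becomes one column vector of length frame*frame.
    blocks = (image.reshape(h // frame, frame, w // frame, frame)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, frame * frame).T)        # (frame^2, n_blocks)
    mean = blocks.mean(axis=1, keepdims=True)
    centered = blocks - mean
    # Eigen-decomposition of the sample covariance matrix.
    cov = centered @ centered.T / centered.shape[1]
    _, eigvec = np.linalg.eigh(cov)                      # ascending eigenvalues
    basis = eigvec[:, ::-1][:, :k]                       # top-k principal directions
    coeffs = basis.T @ centered                          # compressed representation
    return basis, mean, coeffs

def pca_decompress(basis, mean, coeffs, shape, frame=8):
    """Reconstruct the image from the retained basis and coefficients."""
    h, w = shape
    blocks = basis @ coeffs + mean
    return (blocks.T.reshape(h // frame, w // frame, frame, frame)
                  .transpose(0, 2, 1, 3).reshape(h, w))
```

Storing `basis`, `mean`, and `coeffs` instead of the raw pixels is what yields the compression; the choice of numeric type for `coeffs` is exactly the data-type trade-off the paper raises.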


Highlights

Classical PCA compression outperforms neural networks in image quality and operation time.
An appropriate learning algorithm is essential for neural PCA compression.
Neural PCA compression can have an advantage in online data acquisition schemes.
Appropriate data types are important for compression ratio and image quality.
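The online advantage claimed for the neural variants comes from update rules like the generalized Hebbian algorithm (Sanger's rule), which learns the leading principal directions sample by sample, without ever forming the covariance matrix. A minimal sketch, assuming zero-mean input vectors; the learning rate and epoch count are illustrative, not the paper's settings:

```python
import numpy as np

def gha(samples, k, lr=0.01, epochs=100, seed=0):
    """Extract the top-k principal directions of zero-mean samples (shape (n, d))
    without explicitly computing the covariance matrix."""
    rng = np.random.default_rng(seed)
    d = samples.shape[1]
    W = rng.normal(scale=0.1, size=(k, d))     # one weight row per output neuron
    for _ in range(epochs):
        for x in samples:                      # samples processed one at a time
            y = W @ x                          # network outputs
            # Sanger's rule: Hebbian term minus a Gram-Schmidt-like correction
            # that decorrelates neuron i from neurons 1..i.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

Because each update touches only the current sample, the rule fits streaming acquisition; the price, as the paper's comparison suggests, is slower and less exact convergence than a direct eigen-decomposition.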


Published In

Applied Soft Computing, Volume 161, Issue C, August 2024, 1077 pages

Publisher

Elsevier Science Publishers B.V., Netherlands


Author Tags

  1. Artificial neural network
  2. Autoencoder
  3. Generalized Hebbian algorithm
  4. Lossy image compression
  5. Principal component analysis

Qualifiers

  • Research-article
