
Deep reconstruction of 1D ISOMAP representations

  • Regular Paper
  • Published in Multimedia Systems

A Correction to this article was published on 18 June 2021

Abstract

This paper proposes a deep-learning-priors-based method for reconstructing data from 1D isometric feature mapping (ISOMAP) representations. ISOMAP is a classical algorithm for nonlinear dimensionality reduction (NLDR), or manifold learning (ML), which seeks the low-dimensional structure of high-dimensional data. The reconstruction of ISOMAP representations, i.e., the inverse problem of ISOMAP, recovers the high-dimensional data from its low-dimensional ISOMAP representations, and holds promise for data representation, generation, compression, and visualization. Because the dimension of the ISOMAP representations is far lower than that of the original high-dimensional data, the reconstruction problem is ill-posed, or underdetermined. Hence, residual learning with a deep convolutional neural network (CNN) is employed to boost reconstruction performance: the network learns the prior mapping between the low-quality result of a general ISOMAP reconstruction method and its residual relative to the original data. For 1D representations, experimental results show that the proposed method outperforms state-of-the-art methods, such as nearest neighbor (NN), discrete cosine transformation (DCT), and sparse representation (SR), in the reconstruction of video data. In summary, the proposed method is suitable for low-bitrate, high-performance data reconstruction applications.
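The forward direction and the nearest-neighbor (NN) reconstruction baseline mentioned in the abstract can be sketched as follows. This is a minimal illustration, assuming scikit-learn's `Isomap` and synthetic data on a 1D manifold; it is not the authors' actual pipeline or dataset.

```python
# Hedged sketch: 1D ISOMAP embedding plus a naive nearest-neighbour (NN)
# reconstruction baseline. scikit-learn's Isomap is an assumption here; the
# paper does not specify an implementation.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# Synthetic high-dimensional data lying near a 1D manifold (a noisy helix).
t = np.sort(rng.uniform(0.0, 4.0 * np.pi, 200))
X = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
X += 0.01 * rng.standard_normal(X.shape)

# Forward map: 1D ISOMAP representation of the 3D data.
iso = Isomap(n_neighbors=10, n_components=1)
y = iso.fit_transform(X)  # shape (200, 1)

# Naive inverse (NN baseline): for a query 1D code, return the training
# sample whose embedding lies closest to it.
def nn_reconstruct(y_query, y_train, X_train):
    idx = np.argmin(np.abs(y_train[:, 0] - y_query))
    return X_train[idx]

x_hat = nn_reconstruct(y[50, 0], y, X)
err = np.linalg.norm(x_hat - X[50])
```

Because the NN inverse can only return stored training samples, it is inherently low-quality for unseen codes, which is the gap the paper's learned refinement targets.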
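The residual-learning formulation described in the abstract, where a network predicts the residual between a coarse reconstruction and the ground truth, can be illustrated with a toy stand-in. To keep the sketch self-contained, a linear least-squares predictor replaces the deep CNN; the data, noise levels, and predictor are all assumptions made for illustration only.

```python
# Hedged sketch of residual learning: instead of predicting the original
# data directly, a model predicts the residual R = X - X0 between a coarse
# reconstruction X0 and the ground truth X, and the refined output is
# X0 + predicted residual. A linear map stands in for the deep CNN.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 16
X = rng.standard_normal((n, d))                    # "original" data
X0 = 0.8 * X + 0.05 * rng.standard_normal((n, d))  # coarse reconstruction
R = X - X0                                         # residual target

# "Train": linear map W minimising ||X0 @ W - R||_F (CNN stand-in).
W, *_ = np.linalg.lstsq(X0, R, rcond=None)

# "Infer": refined reconstruction = coarse result + predicted residual.
X_hat = X0 + X0 @ W

mse_before = np.mean((X - X0) ** 2)
mse_after = np.mean((X - X_hat) ** 2)
```

Learning the residual rather than the full signal is easier because the residual is small and structured, which is the same motivation given for DnCNN-style denoisers cited by the paper.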





Acknowledgements

We would like to express our deep gratitude to Professor Maria Trocan of the Institut Supérieur d'Électronique de Paris (ISEP) for her invaluable guidance and discussions on the theory and applications of deep learning.

Author information


Corresponding author

Correspondence to Honggui Li.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Communicated by Y. Zhang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Li, H., Galayko, D. Deep reconstruction of 1D ISOMAP representations. Multimedia Systems 27, 503–518 (2021). https://doi.org/10.1007/s00530-021-00750-4
