Nonnegative Tensor Patch Dictionary Approaches for Image Compression and Deblurring Applications

Published: 01 January 2020

Abstract

In recent work [S. Soltani, M. Kilmer, and P. C. Hansen, BIT, 56 (2016)], an algorithm for nonnegative tensor patch dictionary learning in the context of X-ray CT imaging and based on a tensor-tensor product called the t-product [M. E. Kilmer and C. D. Martin, Linear Algebra Appl., 435 (2011), pp. 641--658] was presented. Building on that work, in this paper, we use nonnegative tensor patch-based dictionaries trained on other data, such as facial image data, for the purpose of either compression or image deblurring. We begin with an analysis in which we address issues such as suitability of the tensor-based approach relative to a matrix-based approach, dictionary size, and patch size to balance computational efficiency and qualitative representations. Next, we develop an algorithm that is capable of recovering nonnegative tensor coefficients given a nonnegative tensor dictionary. The algorithm is based on a variant of the modified residual norm steepest descent method. We show how to augment the algorithm to enforce sparsity in the tensor coefficients and note that the approach has broader applicability since it can be applied to the matrix case as well. We illustrate the surprising result that dictionaries trained on image data from one class can be successfully used to represent and compress image data from different classes and across different resolutions. Finally, we address the use of nonnegative tensor dictionaries in image deblurring. We show that tensor treatment of the deblurring problem coupled with nonnegative tensor patch dictionaries can give superior restorations as compared to standard treatment of the nonnegativity constrained deblurring problem.
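
The two computational ingredients named in the abstract, the t-product of third-order tensors [19] and a modified residual norm steepest descent (MRNSD) iteration for nonnegatively constrained least squares [31], can be sketched compactly. The NumPy sketch below is not the authors' implementation; the function names, array shapes, iteration count, and the strict-feasibility safeguard factor are illustrative assumptions.

    import numpy as np

    def t_product(A, B):
        # t-product of A (n1 x n2 x n3) with B (n2 x m x n3), following [19]:
        # FFT along the third mode, facewise matrix products, inverse FFT.
        n3 = A.shape[2]
        Ah = np.fft.fft(A, axis=2)
        Bh = np.fft.fft(B, axis=2)
        Ch = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
        for k in range(n3):
            Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
        return np.real(np.fft.ifft(Ch, axis=2))

    def mrnsd(A, b, iters=100):
        # MRNSD [31] for min ||Ax - b||_2 subject to x >= 0: the gradient is
        # scaled by diag(x), and the step length is capped so x stays positive.
        x = np.ones(A.shape[1])                  # strictly positive start
        for _ in range(iters):
            g = A.T @ (A @ x - b)                # gradient of 0.5*||Ax - b||^2
            d = -x * g                           # scaled steepest descent direction
            Ad = A @ d
            if Ad @ Ad == 0:
                break
            alpha_ls = (g @ (x * g)) / (Ad @ Ad) # exact line-search step along d
            neg = d < 0
            alpha_bd = np.min(-x[neg] / d[neg]) if np.any(neg) else np.inf
            x = x + min(alpha_ls, 0.999 * alpha_bd) * d
        return x

In the patch-dictionary setting of the paper, the matrix-vector products above would be replaced by t-products with the learned nonnegative tensor dictionary, and a sparsity-promoting step would be added to the coefficient update; those extensions follow the paper's formulation rather than this sketch.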

References

[1]
J. A. Bengua, H. N. Phien, H. D. Tuan, and M. N. Do, Efficient tensor completion for color image and video recovery: Low-rank tensor train, IEEE Trans. Image Process., 26 (2017), pp. 2466--2479.
[2]
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn., 3 (2010), pp. 1--122.
[3]
R. B. Cattell, "Parallel proportional profiles" and other principles for determining the choice of factors by rotation, Psychometrika, 9 (1944), pp. 267--283.
[4]
R. B. Cattell, The three basic factor-analytic research designs---their interrelations and derivatives, Psychological Bull., 49 (1952), pp. 499--520.
[5]
Y. Chen, W. He, N. Yokoya, and T.-Z. Huang, Hyperspectral image restoration using weighted group sparsity-regularized low-rank tensor decomposition, IEEE Trans. Cybernet., to appear.
[6]
A. Cichocki, R. Zdunek, A. H. Phan, and S.-I. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, Wiley, New York, 2009, https://pdfs.semanticscholar.org/94cc/6daad548a03c6edb0351d686c2d4aa364634.pdf.
[7]
R. Dian, S. Li, and L. Fang, Learning a low tensor-train rank representation for hyperspectral image super-resolution, IEEE Trans. Neural Netw. Learn. Syst., 30 (2019), pp. 2672--2683.
[8]
P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, Fundam. Algorithms 3, SIAM, Philadelphia, 2006.
[9]
N. Hao, M. E. Kilmer, K. Braman, and R. C. Hoover, Facial recognition using tensor-tensor decompositions, SIAM J. Imaging Sci., 6 (2013), pp. 457--463.
[10]
R. A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis, UCLA Working Papers in Phonetics, 16 (1970).
[11]
F. L. Hitchcock, The expression of a tensor or a polyadic as a sum of products, J. Math. Phys., 6 (1927), pp. 164--189.
[12]
F. L. Hitchcock, Multiple invariants and generalized rank of a p-way matrix or tensor, J. Math. Phys., 7 (1928), pp. 40--79.
[13]
B. Hunyadi, P. Dupont, W. Van Paesschen, and S. Van Huffel, Tensor decompositions and data fusion in epileptic electroencephalography and functional magnetic resonance imaging data, Wiley Interdiscip. Rev. Data Mining Knowledge Discovery, 7 (2017).
[14]
A. Kapteyn, H. Neudecker, and T. Wansbeek, An approach to n-mode components analysis, Psychometrika, 51 (1986), pp. 269--275.
[15]
L. Kaufman, Maximum likelihood, least squares, and penalized least squares for PET, IEEE Trans. Medical Imaging, 12 (1993), pp. 200--214.
[16]
E. Kernfeld, M. Kilmer, and S. Aeron, Tensor--tensor products with invertible linear transforms, Linear Algebra Appl., 485 (2015), pp. 545--570.
[17]
M. Kilmer, L. Horesh, H. Avron, and E. Newman, Tensor--Tensor Products for Optimal Representation and Compression, preprint, https://arxiv.org/abs/2001.00046, 2019.
[18]
M. E. Kilmer, K. Braman, N. Hao, and R. C. Hoover, Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 148--172.
[19]
M. E. Kilmer and C. D. Martin, Factorization strategies for third-order tensors, Linear Algebra Appl., 435 (2011), pp. 641--658.
[20]
T. Kolda and B. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), pp. 455--500.
[21]
P. M. Kroonenberg and J. de Leeuw, Principal component analysis of three-mode data by means of alternating least squares algorithms, Psychometrika, 45 (1980), pp. 69--97.
[22]
L. Fei-Fei, R. Fergus, and P. Perona, Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories, presented at the IEEE CVPR Workshop on Generative-Model Based Vision, 2004.
[23]
L. De Lathauwer, B. De Moor, and J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1253--1278.
[24]
N. Lee and A. Cichocki, Fundamental tensor operations for large-scale data analysis using tensor network formats, Multidimens. Syst. Signal Process., 29 (2018), pp. 921--960.
[25]
N. Lee, A.-H. Phan, F. Cong, and A. Cichocki, Nonnegative tensor train decompositions for multi-domain feature extraction and clustering, in Proceedings of Neural Information Processing, 2016, pp. 87--95.
[26]
X. Li, M. K. Ng, G. Cong, Y. Ye, and Q. Wu, MR-NTD: Manifold regularization nonnegative Tucker decomposition for tensor data dimension reduction and representation, IEEE Trans. Neural Netw. Learn. Syst., 28 (2017), pp. 1787--1800.
[27]
J. Liu, P. Musialski, P. Wonka, and J. Ye, Tensor completion for estimating missing values in visual data, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), pp. 208--220.
[28]
C. D. Martin, R. Shafer, and B. LaRue, An order-$p$ tensor factorization with applications in imaging, SIAM J. Sci. Comput., 35 (2013), pp. A474--A490.
[29]
M. Mørup, L. K. Hansen, and S. M. Arnfred, Algorithms for sparse nonnegative Tucker decompositions, Neural Comput., 20 (2008), pp. 2112--2131.
[30]
J. Nagy, S. Berisha, J. Chung, K. Palmer, L. Perrone, and R. Wright, RestoreTools: An Object Oriented MATLAB Package for Image Restoration, https://www.mathcs.emory.edu/~nagy/RestoreTools/index.html, 2012.
[31]
J. Nagy and Z. Strakos, Enforcing nonnegativity in image reconstruction algorithms, in Mathematical Modeling, Estimation, and Imaging, Proc. SPIE 4121, SPIE, Bellingham, WA, 2000, pp. 182--190.
[32]
E. Newman, A Step in the Right Dimension: Tensor Algebra and Applications, PhD thesis, Tufts University, 2019.
[33]
E. Newman, M. E. Kilmer, and L. Horesh, Image classification using local tensor singular value decompositions, in Proceedings of CAMSAP, IEEE, 2017, pp. 1--5.
[34]
E. Newman, M. E. Kilmer, and L. Horesh, Image classification using local tensor singular value decompositions, in Proceedings of CAMSAP, IEEE, 2017; preprint available at https://arxiv.org/abs/1706.09693.
[35]
I. V. Oseledets, Tensor-train decomposition, SIAM J. Sci. Comput., 33 (2011), pp. 2295--2317.
[36]
N. Parikh and S. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2013), pp. 123--231.
[37]
N. Qi, Y. Shi, X. Sun, J. Wang, B. Yin, and J. Gao, Multi-dimensional sparse models, IEEE Trans. Pattern Anal. Mach. Intell., 40 (2018), pp. 163--178.
[38]
O. Semerci, N. Hao, M. E. Kilmer, and E. L. Miller, Tensor-based formulation and nuclear norm regularization for multienergy computed tomography, IEEE Trans. Image Process., 23 (2014), pp. 1678--1693.
[39]
S. Soltani, DLCT-Toolbox, a MATLAB Package for the Dictionary Learning Approach to Tomographic Image Reconstruction, https://www.imm.dtu.dk/~pcha/HDtomo/, 2015.
[40]
S. Soltani, M. Andersen, and P. C. Hansen, Tomographic image reconstruction using training images, J. Comput. Appl. Math., 313 (2016).
[41]
S. Soltani, M. Kilmer, and P. C. Hansen, A tensor-based dictionary learning approach to tomographic image reconstruction, BIT, 56 (2016), https://doi.org/10.1007/s10543-016-0607-z.
[42]
L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika, 31 (1966), pp. 279--311.
[43]
M. A. O. Vasilescu and D. Terzopoulos, Multilinear analysis of image ensembles: Tensorfaces, in Proceedings of the European Conference on Computer Vision, 2002, pp. 447--460.
[44]
M. A. O. Vasilescu and D. Terzopoulos, Multilinear image analysis for facial recognition, in Proceedings of the 16th International Conference on Pattern Recognition, Vol. 2, IEEE, 2002, pp. 511--514.
[45]
Y. Xu, Z. Wu, J. Chanussot, and Z. Wei, Joint reconstruction and anomaly detection from compressive hyperspectral images using Mahalanobis distance-regularized tensor RPCA, IEEE Trans. Geosci. Remote Sensing, 56 (2018), pp. 2919--2930.
[46]
Y. Xu, Z. Wu, J. Chanussot, and Z. Wei, Nonlocal patch tensor sparse representation for hyperspectral image super-resolution, IEEE Trans. Image Process., 28 (2019), pp. 3034--3047.
[47]
J. Yang, Y. Zhu, K. Li, J. Yang, and C. Hou, Tensor completion from structurally missing entries by low-TT-rankness and fiber-wise sparsity, IEEE J. Selected Topics Signal Process., 12 (2018), pp. 1420--1434.
[48]
Q. Zhang, H. Wang, R. J. Plemmons, and V. P. Pauca, Tensor methods for hyperspectral data analysis: A space object material identification study, J. Opt. Soc. Amer. A, 25 (2008), pp. 3001--3012.
[49]
Y. Zhang, X. Mou, G. Wang, and H. Yu, Tensor-based dictionary learning for spectral CT reconstruction, IEEE Trans. Med. Imaging, 36 (2017), pp. 142--154.
[50]
Z. Zhang and S. Aeron, Denoising and completion of 3D data via multidimensional dictionary learning, in Proceedings of the International Joint Conference on Artificial Intelligence, 2015.
[51]
Z. Zhang, G. Ely, S. Aeron, N. Hao, and M. Kilmer, Novel methods for multilinear data completion and de-noising based on tensor-SVD, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, IEEE, 2014, pp. 3842--3849, https://doi.org/10.1109/CVPR.2014.485.


Published In

SIAM Journal on Imaging Sciences, Volume 13, Issue 3
EISSN: 1936-4954
DOI: 10.1137/sjisbi.13.3

Publisher

Society for Industrial and Applied Mathematics

United States

Author Tags

  1. tensor
  2. patch dictionary
  3. image compression
  4. image deblurring
  5. MRNSD
  6. sparsity constraint

Author Tags

  1. 65F22
  2. 65F99
  3. 65N20
  4. 65N21

Qualifiers

  • Research-article
