Abstract
We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that (a) we use a third-order tensor representation for our images and (b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images and the reconstructions due to the ability to represent repeated features compactly in the dictionary.
References
Aharon, M., Elad, M., Bruckstein, A.M.: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54, 4311–4322 (2006)
Andò, E., Hall, S.A., Viggiani, G., Desrues, J., Bésuelle, P.: Grain-scale experimental investigation of localised deformation in sand: a discrete particle tracking approach. Acta Geotech. 7, 1–13 (2012)
Aja-Fernandez, S., de Luis Garcia, R., Tao, D., Li, X. (eds.): Tensors in Image Processing and Computer Vision. Advances in Pattern Recognition. Springer, New York (2009)
Becker, S., Candès, E.J., Grant, M.: Templates for convex cone problems with applications to sparse signal recovery. Math. Prog. Comput. 3, 165–218 (2011)
Bian, J., Siewerdsen, J.H., Han, X., Sidky, E.Y., Prince, J.L., Pelizzari, C.A., Pan, X.: Evaluation of sparse-view reconstruction from flat-panel-detector cone-beam CT. Phys. Med. Biol. 55, 6575–6599 (2010)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2011)
Braman, K.: Third-order tensors as linear operators on a space of matrices. Linear Algebra Appl. 433, 1241–1253 (2010)
Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)
Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
Caiafa, C.F., Cichocki, A.: Multidimensional compressed sensing and their applications. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 3, 355–380 (2013)
Cichocki, A., Zdunek, R., Phan, A.H., Amari, S.: Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation. Wiley, New York (2009)
Combettes, P.L., Pesquet, J.: Proximal splitting methods in signal processing. In: Bauschke, H.H., et al. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011)
Donoho, D., Stodden, V.: When does non-negative matrix factorization give a correct decomposition into parts? In: Thrun, S., Saul, L., Schölkopf, B. (eds.) Adv. Neural Inf. Process. Syst., vol. 16. MIT Press, Cambridge (2004)
Duan, G., Wang, H., Liu, Z., Deng, J., Chen, Y.-W.: K-CPD: learning of overcomplete dictionaries for tensor sparse coding. In: IEEE 21st Int. Conf. on Pattern Recognition (ICPR), pp. 493–496 (2012)
Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745 (2006)
Elad, M.: Sparse and Redundant Representations. From Theory to Applications in Signal and Image Processing. Springer, New York (2010)
Engan, K., Aase, S.O., Husøy, J.H.: Multi-frame compression: theory and design. EURASIP Signal Process. 80, 2121–2140 (2000)
Etter, V., Jovanović, I., Vetterli, M.: Use of learned dictionaries in tomographic reconstruction. In: Proc. SPIE 8138, Wavelets and Sparsity XIV, 81381C (2011)
Frikel, J., Quinto, E.T.: Characterization and reduction of artifacts in limited angle tomography. Inverse Probl. 29, 125007 (2013)
Ghadimi, E., Teixeira, A., Shames, I., Johansson, M.: Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems. IEEE Trans. Autom. Control. 60, 644–658 (2015)
Golbabaee, M., Vandergheynst, P.: Hyperspectral image compressed sensing via low-rank and joint-sparse matrix recovery. In: IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 2741–2744 (2012)
Golub, G., Van Loan, C.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)
Hansen, P.C., Saxild-Hansen, M.: AIR Tools—a MATLAB package of algebraic iterative reconstruction methods. J. Comput. Appl. Math. 236, 2167–2178 (2012)
Hao, N., Kilmer, M.E., Braman, K., Hoover, R.C.: Facial recognition using tensor–tensor decompositions. SIAM J. Imaging Sci. 6(1), 437–463 (2013)
Hao, N., Horesh, L., Kilmer, M.E.: Nonnegative tensor decomposition. In: Carmi, A.Y., et al. (eds.) Compressed Sensing and Sparse Filtering, pp. 123–148. Springer, Berlin (2014)
Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning, Data Mining, Inference, and Prediction, 2nd edn. Springer, New York (2008)
Hoyer, P.O.: Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 5, 1457–1469 (2004)
Jensen, T.L., Jørgensen, J.H., Hansen, P.C., Jensen, S.H.: Implementation of an optimal first-order method for strongly convex total variation regularization. BIT 52, 329–356 (2011)
Kernfeld, E., Kilmer, M., Aeron, S.: Tensor-tensor products with invertible linear transforms. Linear Algebra Appl. 485, 545–570 (2015)
Kiers, H.A.L.: Towards a standardized notation and terminology in multiway analysis. J. Chemom. 14, 105–122 (2000)
Kilmer, M.E., Martin, C.D.: Factorization strategies for third-order tensors. Linear Algebra Appl. 435, 641–658 (2011)
Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34, 148–172 (2013)
Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)
LaRoque, S.J., Sidky, E.Y., Pan, X.: Accurate image reconstruction from few-view and limited-angle data in diffraction tomography. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 25, 1772–1782 (2008)
Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401, 788–791 (1999)
Liu, Q., Liang, D., Song, Y., Luo, J., Zhu, Y., Li, W.: Augmented Lagrangian-based sparse representation method with dictionary updating for image deblurring. SIAM J. Imaging Sci. 6, 1689–1718 (2013)
Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 11, 19–60 (2010)
Mairal, J., Sapiro, G., Elad, M.: Learning multiscale sparse representations for image and video restoration. SIAM Multiscale Model. Simul. 7, 214–241 (2008)
Martin, C.D., Shafer, R., LaRue, B.: An order-\(p\) tensor factorization with applications in imaging. SIAM J. Sci. Comput. 35, A474–A490 (2013)
Mueller, J.L., Siltanen, S.: Linear and Nonlinear Inverse Problems with Practical Applications. SIAM, Philadelphia (2012)
Olshausen, B.A., Field, D.J.: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996)
Roemer, F., Del Galdo, G., Haardt, M.: Tensor-based algorithms for learning multidimensional separable dictionaries. In: IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 3963–3967 (2014)
Soltani, S.: Studies of Sensitivity in the Dictionary Learning Approach to Computed Tomography: Simplifying the Reconstruction Problem, Rotation, and Scale. DTU Compute Technical Report 2015-4 (2015)
Soltani, S., Andersen, M.S., Hansen, P.C.: Tomographic image reconstruction using dictionary priors. J. Comput. Appl. Math. arXiv:1503.01993 (2014, submitted)
Strong, D., Chan, T.: Edge-preserving and scale-dependent properties of total variation regularization. Inverse Probl. 19, S165–S187 (2003)
Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychometrika 31, 279–311 (1966)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
Xu, Q., Yu, H., Mou, X., Zhang, L., Hsieh, J., Wang, G.: Low-dose X-ray CT reconstruction via dictionary learning. IEEE Trans. Med. Imaging 31, 1682–1697 (2012)
Xu, Y., Yin, W., Wen, X., Zhang, Y.: An alternating direction algorithm for matrix completion with nonnegative factors. Front. Math. China 7, 365–384 (2012)
Zhang Z., Ely, G., Aeron, S., Hao N., Kilmer, M.: Novel methods for multilinear data completion and de-noising based on tensor-SVD. In: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3842–3849 (2014)
Zubair, S., Wang, W.: Tensor dictionary learning with sparse TUCKER decomposition. In: IEEE 18th Int. Conf. on Digital Signal Processing (2013)
Acknowledgments
The authors acknowledge collaboration with associate professor Martin S. Andersen from DTU Compute. We would like to thank professor Samuli Siltanen from University of Helsinki for providing the high-resolution image of the peppers and Dr. Hamidreza Abdolvand from University of Oxford for providing the zirconium image. We also thank the referees for many comments and suggestions that helped to improve the presentation.
Additional information
Communicated by Lars Eldén.
This work is a part of the project HD-Tomo funded by Advanced Grant No. 291405 from the European Research Council. Kilmer acknowledges support from NSF 1319653.
Appendix: Reconstruction solution via TFOCS
The reconstruction problem (15) is convex, but \(\Vert {\mathcal {C}} \Vert _{\mathrm {sum}}\) and \(\Vert C\Vert _*\) are not differentiable, which rules out conventional smooth optimization techniques. The TFOCS software [4] provides a general framework for solving convex optimization problems, and the core of the method computes the solution to a standard problem of the form
\[
  \min _x \; l(A x + b) + h(x),
\]
where the functions l and h are convex, A is a linear operator, and b is a vector; moreover l is smooth and h is non-smooth.
To solve problem (15) by TFOCS, it is reformulated as a constrained linear least squares problem:
where \(c = \sqrt{2( M(M/p-1)+N(N/r-1) )}\). Referring to (17), \(l(\cdot )\) is the squared 2-norm residual and \(h(\cdot ) = \mu \, \varphi _{\nu }(\cdot )\).
The methods used in TFOCS require computation of the proximity operators of the non-smooth function h. The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set [12].
Let \(f = \Vert {\mathcal {C}}\Vert _{\mathrm {sum}} = \Vert C\Vert _{\mathrm {sum}}\) and \(g=\Vert C\Vert _*\) be defined on the set of real-valued matrices and note that \({\mathrm {dom}} f \cap {\mathrm {dom}}\, g \ne \emptyset \). For \(Z \in {\mathbb {R}}^{m\times n}\) consider the minimization problem
\[
  \min _{X \in {\mathbb {R}}^{m\times n}} \; \tfrac{1}{2} \Vert X - Z \Vert _F^2 + f(X) + g(X),
\]
whose unique solution is \(X = {\mathrm {prox}}_{f+g}(Z)\). While the prox operators for \(\Vert C\Vert _{\mathrm {sum}}\) and \(\Vert C\Vert _*\) are easily computed individually, the prox operator of the sum of the two functions has no simple closed form. The TFOCS library includes implementations of a variety of prox operators, including norms and indicator functions of many common convex sets, but it does not provide prox operators of the form \({\mathrm {prox}}_{f+g}(\cdot )\). Hence we compute the prox operator for \(\Vert \cdot \Vert _{\mathrm {sum}}+\Vert \cdot \Vert _*\) iteratively using a Dykstra-like proximal algorithm [12], in which the prox operators of \(\Vert \cdot \Vert _{\mathrm {sum}}\) and \(\Vert \cdot \Vert _*\) are applied alternately.
Let \(\tau =\mu /q \ge 0\). For \(f(X)=\tau \Vert X\Vert _{\mathrm {sum}}\) and \(X \ge 0\), \({\mathrm {prox}}_f\) is the one-sided elementwise shrinkage operator
\[
  \bigl( {\mathrm {prox}}_f (Z) \bigr)_{ij} = \max ( Z_{ij} - \tau ,\; 0 ).
\]
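As a minimal numerical sketch (assuming NumPy; the names `prox_sum_nonneg`, `Z`, and `tau` are illustrative, not from the paper's code), the one-sided elementwise shrinkage can be written as:

```python
import numpy as np

def prox_sum_nonneg(Z, tau):
    """One-sided elementwise shrinkage: prox of tau*||X||_sum
    over the nonnegative orthant, i.e. max(Z - tau, 0) entrywise."""
    return np.maximum(Z - tau, 0.0)

# Entries below tau (or negative) are clipped to zero.
Z = np.array([[1.0, -0.5], [0.3, 2.0]])
X = prox_sum_nonneg(Z, 0.5)
```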
The proximity operator of \(g(X)=\tau \Vert X\Vert _*\) has an analytical expression via the singular value shrinkage (soft threshold) operator
\[
  {\mathrm {prox}}_g (X) = U\, {\mathrm {diag}}\bigl( \max ( \sigma _i - \tau ,\; 0 ) \bigr)\, V^T,
\]
where \(X=U \Sigma \, V^T\) is the singular value decomposition of X with singular values \(\sigma _i\) [9]. This prox operator can be computed very efficiently since C is \(sq \times r\) with \(r \ll sq\), so only a thin SVD is needed.
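A sketch of the singular value shrinkage, again assuming NumPy (the function name `svt` is a hypothetical label; the thin SVD corresponds to the tall-skinny shape of C noted above):

```python
import numpy as np

def svt(X, tau):
    """Singular value shrinkage: soft-threshold the singular values
    of X and reassemble; the thin SVD keeps the cost low when X is
    tall and skinny."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# For a nonnegative diagonal matrix the singular values are just
# the diagonal entries, so the effect is easy to verify.
X = svt(np.diag([3.0, 1.0]), 0.5)
```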
The iterative algorithm which computes an approximate solution to \({\mathrm {prox}}_{f+g}\) is given in Algorithm 2. Every sequence \(X_k\) generated by Algorithm 2 converges to the unique solution \({\mathrm {prox}}_{f+g}\) of problem (19) [12].
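The alternation described above can be sketched as follows. This follows the general Dykstra-like scheme of [12] rather than reproducing the paper's Algorithm 2 verbatim; the function names, the fixed iteration count, and the stopping rule are illustrative assumptions.

```python
import numpy as np

def prox_sum_nonneg(Z, tau):
    # One-sided elementwise shrinkage (prox of tau*||.||_sum, X >= 0).
    return np.maximum(Z - tau, 0.0)

def prox_nuclear(Z, tau):
    # Singular value shrinkage (prox of tau*||.||_*).
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def dykstra_prox(Z, tau, iters=50):
    """Approximate prox_{f+g}(Z) for f = tau*||.||_sum (with X >= 0)
    and g = tau*||.||_* by alternating the two prox operators with
    Dykstra-style correction terms P and Q."""
    X = Z.copy()
    P = np.zeros_like(Z)
    Q = np.zeros_like(Z)
    for _ in range(iters):
        Y = prox_sum_nonneg(X + P, tau)
        P = X + P - Y          # correction for the f-step
        X = prox_nuclear(Y + Q, tau)
        Q = Y + Q - X          # correction for the g-step
    return X

# Sanity check: with tau = 0 and a nonnegative Z, both prox operators
# act (numerically) as the identity, so the iteration returns Z.
Z = np.array([[1.0, 2.0], [3.0, 4.0]])
X = dykstra_prox(Z, 0.0, iters=5)
```

The correction terms P and Q are what distinguish this from a plain alternation of the two prox operators; without them the iteration would generally converge to the wrong point.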
Soltani, S., Kilmer, M.E., Hansen, P.C.: A tensor-based dictionary learning approach to tomographic image reconstruction. BIT Numer. Math. 56, 1425–1454 (2016). https://doi.org/10.1007/s10543-016-0607-z
Keywords
- Tensor decomposition
- Tensor dictionary learning
- Inverse problem
- Regularization
- Sparse representation
- Tomographic image reconstruction