
A tensor-based dictionary learning approach to tomographic image reconstruction

  • Published in: BIT Numerical Mathematics

Abstract

We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that (a) we use a third-order tensor representation for our images and (b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images and the reconstructions due to the ability of representing repeated features compactly in the dictionary.



References

  1. Aharon, M., Elad, M., Bruckstein, A.M.: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54, 4311–4322 (2006)

  2. Andò, E., Hall, S.A., Viggiani, G., Desrues, J., Bésuelle, P.: Grain-scale experimental investigation of localised deformation in sand: a discrete particle tracking approach. Acta Geotech. 7, 1–13 (2012)

  3. Aja-Fernandez, S., de Luis Garcia, R., Tao, D., Li, X. (eds.): Tensors in Image Processing and Computer Vision. Advances in Pattern Recognition. Springer, New York (2009)

  4. Becker, S., Candès, E.J., Grant, M.: Templates for convex cone problems with applications to sparse signal recovery. Math. Prog. Comput. 3, 165–218 (2011)

  5. Bian, J., Siewerdsen, J.H., Han, X., Sidky, E.Y., Prince, J.L., Pelizzari, C.A., Pan, X.: Evaluation of sparse-view reconstruction from flat-panel-detector cone-beam CT. Phys. Med. Biol. 55, 6575–6599 (2010)

  6. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2011)

  7. Braman, K.: Third-order tensors as linear operators on a space of matrices. Linear Algebra Appl. 433, 1241–1253 (2010)

  8. Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)

  9. Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)

  10. Caiafa, C.F., Cichocki, A.: Multidimensional compressed sensing and their applications. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 3, 355–380 (2013)

  11. Cichocki, A., Zdunek, R., Phan, A.H., Amari, S.: Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation. Wiley, New York (2009)

  12. Combettes, P.L., Pesquet, J.: Proximal splitting methods in signal processing. In: Bauschke, H.H., et al. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011)

  13. Donoho, D., Stodden, V.: When does non-negative matrix factorization give a correct decomposition into parts? In: Thrun, S., Saul, L., Schölkopf, B. (eds.) Adv. Neural Inf. Process. Syst., vol. 16. MIT Press, Cambridge (2004)

  14. Duan, G., Wang, H., Liu, Z., Deng, J., Chen, Y.-W.: K-CPD: learning of overcomplete dictionaries for tensor sparse coding. In: IEEE 21st Int. Conf. on Pattern Recognition (ICPR), pp. 493–496 (2012)

  15. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745 (2006)

  16. Elad, M.: Sparse and Redundant Representations. From Theory to Applications in Signal and Image Processing. Springer, New York (2010)

  17. Engan, K., Aase, S.O., Husøy, J.H.: Multi-frame compression: theory and design. EURASIP Signal Process. 80, 2121–2140 (2000)

  18. Etter, V., Jovanović, I., Vetterli, M.: Use of learned dictionaries in tomographic reconstruction. In: Proc. SPIE 8138, Wavelets and Sparsity XIV, 81381C (2011)

  19. Frikel, J., Quinto, E.T.: Characterization and reduction of artifacts in limited angle tomography. Inverse Probl. 29, 125007 (2013)

  20. Ghadimi, E., Teixeira, A., Shames, I., Johansson, M.: Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems. IEEE Trans. Autom. Control 60, 644–658 (2015)

  21. Golbabaee, M., Vandergheynst, P.: Hyperspectral image compressed sensing via low-rank and joint-sparse matrix recovery. In: IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 2741–2744 (2012)

  22. Golub, G., Van Loan, C.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)

  23. Hansen, P.C., Saxild-Hansen, M.: AIR Tools—a MATLAB package of algebraic iterative reconstruction methods. J. Comput. Appl. Math. 236, 2167–2178 (2012)

  24. Hao, N., Kilmer, M.E., Braman, K., Hoover, R.C.: Facial recognition using tensor–tensor decompositions. SIAM J. Imaging Sci. 6(1), 437–463 (2013)

  25. Hao, N., Horesh, L., Kilmer, M.E.: Nonnegative tensor decomposition. In: Carmi, A.Y., et al. (eds.) Compressed Sensing and Sparse Filtering, pp. 123–148. Springer, Berlin (2014)

  26. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer, New York (2008)

  27. Hoyer, P.O.: Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 5, 1457–1469 (2004)

  28. Jensen, T.L., Jørgensen, J.H., Hansen, P.C., Jensen, S.H.: Implementation of an optimal first-order method for strongly convex total variation regularization. BIT 52, 329–356 (2011)

  29. Kernfeld, E., Kilmer, M., Aeron, S.: Tensor–tensor products with invertible linear transforms. Linear Algebra Appl. 485, 545–570 (2015)

  30. Kiers, H.A.L.: Towards a standardized notation and terminology in multiway analysis. J. Chemom. 14, 105–122 (2000)

  31. Kilmer, M.E., Martin, C.D.: Factorization strategies for third-order tensors. Linear Algebra Appl. 435, 641–658 (2011)

  32. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34, 148–172 (2013)

  33. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  34. LaRoque, S.J., Sidky, E.Y., Pan, X.: Accurate image reconstruction from few-view and limited-angle data in diffraction tomography. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 25, 1772–1782 (2008)

  35. Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401, 788–791 (1999)

  36. Liu, Q., Liang, D., Song, Y., Luo, J., Zhu, Y., Li, W.: Augmented Lagrangian-based sparse representation method with dictionary updating for image deblurring. SIAM J. Imaging Sci. 6, 1689–1718 (2013)

  37. Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 11, 19–60 (2010)

  38. Mairal, J., Sapiro, G., Elad, M.: Learning multiscale sparse representations for image and video restoration. SIAM Multiscale Model. Simul. 7, 214–241 (2008)

  39. Martin, C.D., Shafer, R., LaRue, B.: An order-\(p\) tensor factorization with applications in imaging. SIAM J. Sci. Comput. 35, A474–A490 (2013)

  40. Mueller, J.L., Siltanen, S.: Linear and Nonlinear Inverse Problems with Practical Applications. SIAM, Philadelphia (2012)

  41. Olshausen, B.A., Field, D.J.: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996)

  42. Roemer, F., Del Galdo, G., Haardt, M.: Tensor-based algorithms for learning multidimensional separable dictionaries. In: IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 3963–3967 (2014)

  43. Soltani, S.: Studies of Sensitivity in the Dictionary Learning Approach to Computed Tomography: Simplifying the Reconstruction Problem, Rotation, and Scale. DTU Compute Technical Report 2015-4 (2015)

  44. Soltani, S., Andersen, M.S., Hansen, P.C.: Tomographic image reconstruction using dictionary priors. J. Comput. Appl. Math. arXiv:1503.01993 (2014, submitted)

  45. Strong, D., Chan, T.: Edge-preserving and scale-dependent properties of total variation regularization. Inverse Probl. 19, S165–S187 (2003)

  46. Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychometrika 31, 279–311 (1966)

  47. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)

  48. Xu, Q., Yu, H., Mou, X., Zhang, L., Hsieh, J., Wang, G.: Low-dose X-ray CT reconstruction via dictionary learning. IEEE Trans. Med. Imaging 31, 1682–1697 (2012)

  49. Xu, Y., Yin, W., Wen, X., Zhang, Y.: An alternating direction algorithm for matrix completion with nonnegative factors. Front. Math. China 7, 365–384 (2012)

  50. Zhang, Z., Ely, G., Aeron, S., Hao, N., Kilmer, M.: Novel methods for multilinear data completion and de-noising based on tensor-SVD. In: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3842–3849 (2014)

  51. Zubair, S., Wang, W.: Tensor dictionary learning with sparse TUCKER decomposition. In: IEEE 18th Int. Conf. on Digital Signal Processing (2013)


Acknowledgments

The authors acknowledge collaboration with associate professor Martin S. Andersen from DTU Compute. We would like to thank professor Samuli Siltanen from University of Helsinki for providing the high-resolution image of the peppers and Dr. Hamidreza Abdolvand from University of Oxford for providing the zirconium image. We also thank the referees for many comments and suggestions that helped to improve the presentation.

Author information

Correspondence to Per Christian Hansen.

Additional information

Communicated by Lars Eldén.

This work is a part of the project HD-Tomo funded by Advanced Grant No. 291405 from the European Research Council. Kilmer acknowledges support from NSF 1319653.

Appendix: Reconstruction solution via TFOCS


The reconstruction problem (15) is convex, but \(\Vert {\mathcal {C}} \Vert _{\mathrm {sum}}\) and \(\Vert C\Vert _*\) are not differentiable, which rules out conventional smooth optimization techniques. The TFOCS software [4] provides a general framework for solving convex optimization problems; the core of the method computes the solution to a standard problem of the form

$$\begin{aligned} \text{ minimize } \quad l(A(x)-b)+h(x) , \end{aligned}$$
(17)

where the functions l and h are convex, A is a linear operator, and b is a vector; moreover l is smooth and h is non-smooth.

To solve problem (15) by TFOCS, it is reformulated as a constrained linear least squares problem:

$$\begin{aligned} \min _{{\mathcal {C}}} \frac{1}{2} \left\| \begin{pmatrix} 1/\sqrt{m} \, A \\ \delta /c \, L \end{pmatrix} \Pi \mathtt {vec}({\mathcal {D}}*{\mathcal {C}}) - \begin{pmatrix} b \\ 0 \end{pmatrix} \right\| _2^2 + \mu \, \varphi _{\nu }({\mathcal {C}}) \qquad \text {s.t.} \qquad {\mathcal {C}} \ge 0, \end{aligned}$$
(18)

where \(c = \sqrt{2( M(M/p-1)+N(N/r-1) )}\). Referring to (17), \(l(\cdot )\) is the squared 2-norm residual and \(h(\cdot ) = \mu \, \varphi _{\nu }(\cdot )\).

The methods used in TFOCS require computation of the proximity operators of the non-smooth function h. The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set [12].

Let \(f = \Vert {\mathcal {C}}\Vert _{\mathrm {sum}} = \Vert C\Vert _{\mathrm {sum}}\) and \(g=\Vert C\Vert _*\) be defined on the set of real-valued matrices and note that \({\mathrm {dom}} f \cap {\mathrm {dom}}\, g \ne \emptyset \). For \(Z \in {\mathbb {R}}^{m\times n}\) consider the minimization problem

$$\begin{aligned} \text{ minimize }_X \ f(X)+g(X) + 1/2 \Vert X-Z\Vert _{\mathrm {F}}^2 \end{aligned}$$
(19)

whose unique solution is \(X = {\mathrm {prox}}_{f+g}(Z)\). While the prox operators for \(\Vert C\Vert _{\mathrm {sum}}\) and \(\Vert C\Vert _*\) are easily computed, the prox operator of the sum of two functions is intractable. Although the TFOCS library includes implementations of a variety of prox operators—including norms and indicator functions of many common convex sets—implementation of prox operators of the form \({\mathrm {prox}}_{f+g}(\cdot )\) is left out. Hence we compute the prox operator for \(\Vert \cdot \Vert _{\mathrm {sum}}+\Vert \cdot \Vert _*\) iteratively using a Dykstra-like proximal algorithm [12], where prox operators of \(\Vert \cdot \Vert _{\mathrm {sum}}\) and \(\Vert \cdot \Vert _*\) are consecutively computed in an iterative scheme.

Let \(\tau =\mu /q \ge 0\). For \(f(X)=\tau \Vert X\Vert _{\mathrm {sum}}\) with the constraint \(X \ge 0\), \({\mathrm {prox}}_f\) is the one-sided elementwise shrinkage operator

$$\begin{aligned} {\mathrm {prox}}_f(X)_{i,j} = {\left\{ \begin{array}{ll} X_{i,j}-\tau , &{}\quad X_{i,j} \ge \tau \\ 0, &{}\quad X_{i,j} < \tau , \\ \end{array}\right. } \end{aligned}$$

i.e., \({\mathrm {prox}}_f(X) = \max (X-\tau ,0)\) elementwise.
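In NumPy terms, this operator is a single thresholded subtraction (a sketch; the function name is ours, not from the paper's code):

```python
import numpy as np

def prox_sum_nonneg(X, tau):
    """One-sided elementwise shrinkage: the prox of tau*||X||_sum
    under the constraint X >= 0, i.e. max(X - tau, 0) elementwise."""
    return np.maximum(X - tau, 0.0)
```

Entries below the threshold tau, including all negative entries, are mapped to zero; this is what produces the sparsity in the coefficient tensor.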

The proximity operator of \(g(X)=\tau \Vert X\Vert _*\) has an analytical expression via the singular value shrinkage (soft threshold) operator

$$\begin{aligned} {\mathrm {prox}}_g (X) = U {\mathrm {diag}}\bigl (\max (\sigma _i-\tau ,0)\bigr ) \, V^T, \end{aligned}$$

where \(X=U \Sigma \, V^T\) is the singular value decomposition of X with singular values \(\sigma _i\) [9]. This prox operator can be computed very efficiently since C is \(sq \times r\) with \(r \ll sq\), so only a thin SVD is required.

The iterative algorithm which computes an approximate solution to \({\mathrm {prox}}_{f+g}\) is given in Algorithm 2. Every sequence \(X_k\) generated by Algorithm 2 converges to the unique solution \({\mathrm {prox}}_{f+g}\) of problem (19) [12].
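As a minimal NumPy sketch of that scheme (names ours; the paper's Algorithm 2 may differ in initialization and stopping rule), the Dykstra-like iteration alternates the two prox operators while carrying correction terms:

```python
import numpy as np

def prox_sum_nonneg(X, tau):
    # prox of tau*||X||_sum with X >= 0: one-sided shrinkage
    return np.maximum(X - tau, 0.0)

def prox_nuclear(X, tau):
    # prox of tau*||X||_*: singular value soft thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_f_plus_g(Z, tau, n_iter=100):
    """Dykstra-like proximal algorithm [12]: the iterates X_k
    converge to the unique solution prox_{f+g}(Z) of problem (19)."""
    X = Z.copy()
    P = np.zeros_like(Z)  # correction term for prox_f
    Q = np.zeros_like(Z)  # correction term for prox_g
    for _ in range(n_iter):
        Y = prox_sum_nonneg(X + P, tau)
        P = X + P - Y
        X = prox_nuclear(Y + Q, tau)
        Q = Y + Q - X
    return X
```

As a sanity check, for a \(1\times 1\) matrix both norms reduce to \(|x|\), so with \(x \ge 0\) the combined prox at \(z=2\) with \(\tau =0.5\) is \(\max (2-2\cdot 0.5,\,0)=1\), which the iteration reproduces.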

[Algorithm 2: Dykstra-like proximal algorithm for computing \({\mathrm {prox}}_{f+g}\)]


Cite this article

Soltani, S., Kilmer, M.E., Hansen, P.C.: A tensor-based dictionary learning approach to tomographic image reconstruction. BIT Numer. Math. 56, 1425–1454 (2016). https://doi.org/10.1007/s10543-016-0607-z

